Born approximation, multiple scattering, and butterfly algorithm
NASA Astrophysics Data System (ADS)
Martinez, Alex; Qiao, Zhijun
2014-06-01
Many imaging algorithms are designed under the assumption that multiple scattering is absent. In a 2013 SPIE proceedings paper, we discussed an algorithm for removing high-order scattering components from collected data. In this paper, our goal is to continue this work. First, we survey the current state of multiple scattering in SAR. Then, we revise our method and test it. Given an estimate of the target reflectivity, we compute the multiple-scattering effects in the target region at various frequencies. We then propagate this energy through free space toward the antenna and remove it from the collected data.
Atmospheric Science Data Center
2013-04-16
... will misregister because of parallax and therefore the radiance vs. angle should not be smooth. But this algorithm fails for ... product by removing ozone absorption, clear atmosphere (Rayleigh) scattering, and scattering from the retrieved aerosol. These data ...
Closed-loop multiple-scattering imaging with sparse seismic measurements
NASA Astrophysics Data System (ADS)
Berkhout, A. J. Guus
2018-03-01
In the theoretical situation of noise-free, complete data volumes (`perfect data'), seismic data matrices are fully filled and multiple-scattering operators have the minimum-phase property. Perfect data allow direct inversion methods to be successful in removing surface and internal multiple scattering. Moreover, under these perfect data conditions direct source wavefields realize complete illumination (no irrecoverable shadow zones) and, therefore, primary reflections (first-order response) can provide us with the complete seismic image. However, in practice seismic measurements always contain noise and we never have complete data volumes at our disposal. We actually deal with sparse data matrices that cannot be directly inverted. The message of this paper is that in practice multiple scattering (including source ghosting) must not be removed but must be utilized. It is explained that in the real world we badly need multiple scattering to fill the illumination gaps in the subsurface. It is also explained that the proposed multiple-scattering imaging algorithm gives us the opportunity to decompose both the image and the wavefields into order-based constituents, making the multiple scattering extension easy to apply. Last but not least, the algorithm allows us to use the minimum-phase property to validate and improve images in an objective way.
Removal of atmospheric effects from satellite imagery of the oceans.
Gordon, H R
1978-05-15
In attempting to observe the color of the ocean from satellites, it is necessary to remove the effects of atmospheric and sea surface scattering from the upward radiance at high altitude in order to observe only those photons which were backscattered out of the ocean and hence contain information about subsurface conditions. The observations that (1) the upward radiance from the unwanted photons can be divided into those resulting from Rayleigh scattering alone and those resulting from aerosol scattering alone, (2) the aerosol scattering phase function should be nearly independent of wavelength, and (3) the Rayleigh component can be computed without a knowledge of the sea surface roughness are combined to yield an algorithm for removing a large portion of this unwanted radiance from satellite imagery of the ocean. It is assumed that the ocean is totally absorbing in a band of wavelengths around 750 nm, and it is shown that application of the proposed algorithm to correct the radiance at a wavelength lambda requires only the ratio of the aerosol optical thickness at lambda to that at about 750 nm. The accuracy to which the correction can be made, as a function of the accuracy to which this ratio can be found, is examined in detail. A possible method of finding this ratio from satellite measurements alone is suggested.
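The correction just described reduces to a few arithmetic steps once the Rayleigh radiances are computed. The sketch below is a hypothetical illustration of that arithmetic, not the paper's implementation; all names and the toy radiance values are assumptions, and the Rayleigh components are taken as precomputed inputs.

```python
def water_leaving_radiance(L_lam, Lr_lam, L_750, Lr_750, epsilon):
    """Sketch of the single-band aerosol correction.

    L_lam, Lr_lam : measured and Rayleigh-only radiance at wavelength lambda
    L_750, Lr_750 : the same quantities near 750 nm, where the ocean is
                    assumed totally absorbing (no water-leaving signal)
    epsilon       : ratio of aerosol optical thickness at lambda to that
                    at about 750 nm
    """
    La_750 = L_750 - Lr_750        # residual at 750 nm is aerosol radiance only
    La_lam = epsilon * La_750      # scale the aerosol radiance to lambda
    return L_lam - Lr_lam - La_lam # remainder was backscattered by the ocean
```

With toy values the bookkeeping is easy to follow: a measured radiance of 10 with Rayleigh 4, a 750 nm residual of 2, and epsilon = 0.5 leave a water-leaving radiance of 5.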
Correction of Rayleigh Scattering Effects in Cloud Optical Thickness Retrievals
NASA Technical Reports Server (NTRS)
Wang, Meng-Hua; King, Michael D.
1997-01-01
We present results that demonstrate the effects of Rayleigh scattering on the retrieval of cloud optical thickness at a visible wavelength (0.66 μm). The sensor-measured radiance at this wavelength is usually used to infer the cloud optical thickness remotely from aircraft or satellite instruments. For example, we find that without removing Rayleigh scattering effects, errors in the retrieved cloud optical thickness for a thin water cloud layer (τ = 2.0) range from 15 to 60%, depending on solar zenith angle and viewing geometry. For an optically thick cloud (τ = 10), on the other hand, errors can range from 10 to 60% for large solar zenith angles (≥ 60 deg) because of enhanced Rayleigh scattering. It is therefore particularly important to correct for Rayleigh scattering contributions to the reflected signal from a cloud layer (1) for thin clouds and (2) for all clouds at large solar zenith angles. On the basis of the single scattering approximation, we propose an iterative method for effectively removing Rayleigh scattering contributions from the measured radiance signal in cloud optical thickness retrievals. The proposed correction algorithm works very well and can easily be incorporated into any cloud retrieval algorithm. The Rayleigh correction method is applicable to a cloud at any pressure, provided that the cloud top pressure is known to within +/- 100 hPa. With the Rayleigh correction, the errors in retrieved cloud optical thickness are usually reduced to within 3%. In cases of both thin cloud layers and thick clouds with large solar zenith angles, the errors are usually reduced by a factor of about 2 to over 10. The Rayleigh correction algorithm has been tested with simulations for realistic cloud optical and microphysical properties with different solar and viewing geometries.
We apply the Rayleigh correction algorithm to cloud optical thickness retrievals from experimental data obtained during the Atlantic Stratocumulus Transition Experiment (ASTEX), conducted near the Azores in June 1992, and compare these results to corresponding retrievals obtained using 0.88 μm. These results provide an example of the Rayleigh scattering effects on thin clouds and further test the Rayleigh correction scheme. Using a nonabsorbing near-infrared wavelength (0.88 μm) in retrieving cloud optical thickness is only applicable over oceans, however, since most land surfaces are highly reflective at 0.88 μm. Hence successful global retrievals of cloud optical thickness should remove Rayleigh scattering effects when using reflectance measurements at 0.66 μm.
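The iterative correction described above can be sketched as a small fixed-point loop: retrieve an optical thickness, estimate the Rayleigh contribution for that cloud, subtract it from the measurement, and retrieve again. The callables below are placeholders (assumptions) standing in for the paper's radiative-transfer lookup tables.

```python
def rayleigh_corrected_retrieval(R_meas, retrieve_tau, rayleigh_term, n_iter=5):
    """Generic sketch of an iterative Rayleigh correction.

    retrieve_tau(R)    : maps a (corrected) reflectance to an optical thickness
    rayleigh_term(tau) : Rayleigh contribution for a cloud of that thickness
    Both callables are hypothetical stand-ins for the real lookup tables.
    """
    tau = retrieve_tau(R_meas)  # first guess: no correction applied
    for _ in range(n_iter):
        tau = retrieve_tau(R_meas - rayleigh_term(tau))
    return tau
```

With a linear toy retrieval and a constant Rayleigh term the loop converges in one step, which makes the mechanics easy to verify.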
An efficient Monte Carlo-based algorithm for scatter correction in keV cone-beam CT
NASA Astrophysics Data System (ADS)
Poludniowski, G.; Evans, P. M.; Hansen, V. N.; Webb, S.
2009-06-01
A new method is proposed for scatter-correction of cone-beam CT images. A coarse reconstruction is used in initial iteration steps. Modelling of the x-ray tube spectrum and detector response is included in the algorithm. Photon diffusion inside the imaging subject is calculated using the Monte Carlo method. Photon scoring at the detector is calculated using forced detection to a fixed set of node points. The scatter profiles are then obtained by linear interpolation. The algorithm is referred to as the coarse reconstruction and fixed detection (CRFD) technique. Scatter predictions are quantitatively validated against a widely used general-purpose Monte Carlo code: BEAMnrc/EGSnrc (NRCC, Canada). Agreement is excellent. The CRFD algorithm was applied to projection data acquired with a Synergy XVI CBCT unit (Elekta Limited, Crawley, UK), using RANDO and Catphan phantoms (The Phantom Laboratory, Salem, NY, USA). The algorithm was shown to be effective in removing scatter-induced artefacts from CBCT images, and took as little as 2 min on a desktop PC. Image uniformity was greatly improved, as was CT-number accuracy in reconstructions. This latter improvement was less marked where the expected CT-number of a material was very different from that of the background material in which it was embedded.
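The final step of the CRFD pipeline, spreading the forced-detection scatter estimates from the sparse node points to every detector column, is ordinary linear interpolation. A minimal sketch (names are illustrative only):

```python
import numpy as np

def scatter_profile(node_cols, node_scatter, detector_cols):
    """Interpolate scatter scored at a fixed set of node points onto
    every detector column, as in the CRFD interpolation step."""
    return np.interp(detector_cols, node_cols, node_scatter)
```

For example, scatter scored as 0.0 at column 0 and 1.0 at column 10 interpolates to 0.5 at column 5.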
The beam stop array method to measure object scatter in digital breast tomosynthesis
NASA Astrophysics Data System (ADS)
Lee, Haeng-hwa; Kim, Ye-seul; Park, Hye-Suk; Kim, Hee-Joung; Choi, Jae-Gu; Choi, Young-Wook
2014-03-01
Scattered radiation is inevitably generated in the object. The distribution of the scattered radiation is influenced by object thickness, field size, object-to-detector distance, and primary energy. One way to measure scatter intensities involves measuring the signal detected under the shadow of the lead discs of a beam-stop array (BSA). The scatter measured by a BSA includes not only the radiation scattered within the object (object scatter) but also contributions from external scatter sources: the X-ray tube, detector, collimator, x-ray filter, and the BSA itself. Once this background scatter is excluded, the method can be applied to different scanner geometries by simple parameter adjustments, without prior knowledge of the scanned object. In this study, a BSA-based method was used to differentiate scatter generated in the phantom (object scatter) from the external background. Furthermore, this method was applied within the BSA algorithm to correct the object scatter. In order to confirm the background scattered radiation, we obtained scatter profiles and scatter fraction (SF) profiles in the directions perpendicular to the chest wall edge (CWE) with and without scattering material. The scatter profiles with and without the scattering material were similar in the region between 127 mm and 228 mm from the chest wall. This result indicated that the scatter measured by the BSA included background scatter. Moreover, the BSA algorithm with the proposed method could correct the object scatter, because the total radiation profiles after object scatter correction corresponded to the original image in the region between 127 mm and 228 mm from the chest wall. As a result, the BSA method to measure object scatter could be used to remove background scatter, and it can be applied to different scanner geometries after background scatter correction. In conclusion, the BSA algorithm with the proposed method is effective in correcting object scatter.
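The core accounting in the BSA approach is a subtraction and a ratio: object scatter is the beam-stop measurement minus the externally generated background, and the scatter fraction is SF = S / (S + P). A minimal sketch under those definitions (function names are assumptions, not from the paper):

```python
import numpy as np

def object_scatter(scatter_with_object, scatter_background):
    # Scatter measured under the beam stops, minus the external
    # (tube/detector/collimator/filter/BSA) background contribution.
    return np.asarray(scatter_with_object) - np.asarray(scatter_background)

def scatter_fraction(scatter, primary):
    # SF = S / (S + P), the fraction of the detected signal due to scatter.
    scatter = np.asarray(scatter, float)
    primary = np.asarray(primary, float)
    return scatter / (scatter + primary)
```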
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Xi; Mou, Xuanqin; Nishikawa, Robert M.
Purpose: Small calcifications are often the earliest and the main indicator of breast cancer. Dual-energy digital mammography (DEDM) has been considered a promising technique for improving the detectability of calcifications, since it can be used to suppress the contrast between adipose and glandular tissues of the breast. X-ray scatter leads to erroneous calculations of the DEDM image. Although the pinhole-array interpolation method can estimate scattered radiation, it requires extra exposures to measure the scatter and apply the correction. The purpose of this work is to design an algorithmic method for scatter correction in DEDM without extra exposures. Methods: In this paper, a scatter correction method for DEDM was developed based on the knowledge that scattered radiation has small spatial variation and that the majority of pixels in a mammogram are noncalcification pixels. The scatter fraction was estimated in the DEDM calculation and the estimated scatter fraction was used to remove scatter from the image. The scatter correction method was implemented on a commercial full-field digital mammography system with a breast-tissue-equivalent phantom and a calcification phantom. The authors also implemented the pinhole-array interpolation scatter correction method on the system. Phantom results for both methods are presented and discussed. The authors compared the background DE calcification signals and the contrast-to-noise ratio (CNR) of calcifications in three DE calcification images: without scatter correction, with scatter correction using the pinhole-array interpolation method, and with scatter correction using the authors' algorithmic method. Results: The results show that the background DE calcification signal can be reduced. The root-mean-square background DE calcification signal of 1962 μm with scatter-uncorrected data was reduced to 194 μm after scatter correction using the authors' algorithmic method.
The range of background DE calcification signals with scatter-uncorrected data was reduced by 58% with scatter-corrected data using the algorithmic method. With the scatter-correction algorithm and denoising, the minimum visible calcification size was reduced from 380 to 280 μm. Conclusions: When the proposed algorithmic scatter correction is applied to images, the background DE calcification signals can be reduced and the CNR of calcifications improved. The method performs similarly to, or even better than, the pinhole-array interpolation method for scatter correction in DEDM; moreover, it is convenient and requires no extra exposure to the patient. Although the proposed scatter correction method is effective, it was validated on a 5-cm-thick phantom with calcifications and a homogeneous background. The method should be tested on structured backgrounds to more accurately gauge its effectiveness.
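Once a scatter fraction SF has been estimated, removing scatter from a measurement reduces to keeping the primary component, P = I·(1 − SF). The one-liner below sketches only that final step; the paper's contribution is estimating SF inside the dual-energy calculation itself, which is not shown here.

```python
import numpy as np

def remove_scatter(measured, scatter_fraction):
    """Keep the primary component of a measured intensity given an
    estimated scatter fraction SF: primary = measured * (1 - SF)."""
    return np.asarray(measured, float) * (1.0 - np.asarray(scatter_fraction, float))
```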
A curvature-based weighted fuzzy c-means algorithm for point clouds de-noising
NASA Astrophysics Data System (ADS)
Cui, Xin; Li, Shipeng; Yan, Xiutian; He, Xinhua
2018-04-01
In order to remove noise from three-dimensional scattered point clouds and smooth the data without damaging sharp geometric features, a novel algorithm is proposed in this paper. A feature-preserving weight is added to the fuzzy c-means algorithm, yielding a curvature-weighted fuzzy c-means clustering algorithm. First, large-scale outliers are removed using the statistics of neighboring points within a radius r. Then, the algorithm estimates the curvature of the point cloud data using a paraboloid fitting method and calculates the curvature feature value. Finally, the proposed clustering algorithm is applied to calculate the weighted cluster centers, which are taken as the new points. The experimental results show that this approach efficiently handles noise of different scales and intensities in point clouds with high precision while preserving features. It is also robust to different noise models.
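Two of the steps above can be sketched compactly: the radius-neighbor outlier test, and a weighted fuzzy-c-means center update in which per-point weights (standing in for the curvature-based weights) multiply the fuzzified memberships. This is a simplified illustration under assumed parameter names, not the paper's full algorithm.

```python
import numpy as np

def remove_outliers(points, radius, min_neighbors):
    """Drop points with fewer than min_neighbors other points within
    `radius` (the large-scale outlier step; parameters are illustrative)."""
    points = np.asarray(points, float)
    keep = []
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        if np.count_nonzero(d < radius) - 1 >= min_neighbors:  # exclude self
            keep.append(i)
    return points[keep]

def weighted_cluster_centers(points, memberships, weights, m=2.0):
    """Weighted fuzzy c-means center update: memberships (c x n) are raised
    to the fuzzifier m and multiplied by per-point feature weights before
    forming the weighted means that become the new points."""
    um = (np.asarray(memberships, float) ** m) * np.asarray(weights, float)[None, :]
    return (um @ np.asarray(points, float)) / um.sum(axis=1, keepdims=True)
```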
NASA Astrophysics Data System (ADS)
Peña, M.
2016-10-01
Achieving an acceptable signal-to-noise ratio (SNR) can be difficult when working in sparsely populated waters and/or when species have low scattering, such as fluid-filled animals. The increasing use of higher frequencies and the study of greater depths in fisheries acoustics, as well as the use of commercial vessels, are raising the need for good denoising algorithms. Using a lower Sv threshold to remove noise or unwanted targets is not suitable in many cases and increases the relative background noise component in the echogram, demanding more effectiveness from denoising algorithms. The Adaptive Wiener Filter (AWF) denoising algorithm is presented in this study. The technique is based on the AWF commonly used in digital photography and video enhancement. The algorithm first improves the quality of the data with a variance-dependent smoothing, before estimating the noise level as the envelope of the Sv minima. The AWF denoising algorithm outperforms existing algorithms in the presence of Gaussian, speckle, and salt-and-pepper noise, although impulse noise needs to be removed beforehand. Cleaned echograms present homogeneous echotraces with outlined edges.
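The variance-dependent smoothing at the heart of an adaptive Wiener filter can be shown in one dimension: each sample is pulled toward its local mean by a gain that shrinks to zero where the local variance barely exceeds the noise floor, and approaches one on strong echotraces. This is a generic textbook-style sketch, not the paper's 2-D echogram implementation.

```python
import numpy as np

def adaptive_wiener_1d(x, noise_var, half_window=1):
    """Pixelwise Wiener gain from local statistics: flat regions collapse
    to their local mean, while samples well above the noise floor are
    left nearly untouched."""
    x = np.asarray(x, float)
    out = np.empty_like(x)
    for i in range(len(x)):
        w = x[max(0, i - half_window):i + half_window + 1]
        mu, var = w.mean(), w.var()
        gain = (var - noise_var) / var if var > noise_var else 0.0
        out[i] = mu + gain * (x[i] - mu)
    return out
```

On a constant signal the local variance is zero, so the filter returns the local mean everywhere.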
Scatter-Reducing Sounding Filtration Using a Genetic Algorithm and Mean Monthly Standard Deviation
NASA Technical Reports Server (NTRS)
Mandrake, Lukas
2013-01-01
Retrieval algorithms like the one used by the Orbiting Carbon Observatory (OCO)-2 mission generate massive quantities of data of varying quality and reliability. A computationally efficient, simple method of labeling problematic data points or predicting soundings that will fail is required for basic operation, given that only 6% of the retrieved data may be operationally processed. This method automatically obtains a filter designed to reduce scatter based on a small number of input features. Most machine-learning filter construction algorithms attempt to predict the error in the CO2 value. By using the surrogate goal of mean monthly standard deviation (MMS), the aim is to reduce the scatter of the retrieved CO2 rather than solving the harder problem of reducing CO2 error. This lends itself to improved interpretability and performance. The software reduces the scatter of retrieved CO2 values globally based on a minimum number of input features. It can be used as a prefilter to reduce the number of soundings requested, or as a post-filter to label data quality. The use of MMS provides a much cleaner, clearer filter than the standard ABS(CO2-truth) metrics previously employed by competing methods. The software's main strength lies in a clearer (i.e., fewer features required) filter that more efficiently reduces scatter in retrieved CO2 rather than focusing on the more complex (and easily removed) bias issues.
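The surrogate objective itself is simple to state: bin the retrieved values by month, take the standard deviation within each bin, and average over bins. The sketch below assumes flat arrays of values and month labels; the mission's actual spatial/temporal binning is more involved.

```python
import numpy as np

def mean_monthly_stdev(values, months):
    """Surrogate filter objective: standard deviation of retrieved CO2
    within each month, averaged over months."""
    values = np.asarray(values, float)
    months = np.asarray(months)
    return float(np.mean([values[months == m].std() for m in np.unique(months)]))
```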
Image recovery by removing stochastic artefacts identified as local asymmetries
NASA Astrophysics Data System (ADS)
Osterloh, K.; Bücherl, T.; Zscherpel, U.; Ewert, U.
2012-04-01
Stochastic artefacts are frequently encountered in digital radiography and tomography with neutrons. Most obviously, they are caused by ubiquitous scattered radiation hitting the CCD sensor. They appear as scattered dots and, at a higher frequency of occurrence, they may obscure the image. Some of these dotted interferences vary with time; however, a large portion of them remains persistent, so the problem cannot be resolved by collecting stacks of images and merging them into a median image. The situation becomes even worse in computed tomography (CT), where each artefact causes a circular pattern in the reconstructed plane. Therefore, these stochastic artefacts have to be removed completely and automatically while leaving the original image content untouched. A simplified image acquisition and artefact removal tool was developed at BAM and is available to interested users. Furthermore, an algorithm complying with all the requirements mentioned above was developed that reliably removes artefacts that may even exceed the size of a single pixel, without affecting other parts of the image. It consists of an iterative two-step algorithm adjusting pixel values within a 3 × 3 matrix inside a 5 × 5 kernel, and the centre pixel only within a 3 × 3 kernel, respectively. It has been applied to thousands of images obtained from the NECTAR facility at the FRM II in Garching, Germany, without any need for visual control. In essence, the procedure consists of identifying and tackling asymmetric intensity distributions locally, while recording each treatment of a pixel. Searching for the local asymmetry and then correcting it, rather than replacing individually identified pixels, constitutes the basic idea of the algorithm. The efficiency of the proposed algorithm is demonstrated on a severely spoiled example of neutron radiography and tomography, compared with median filtering, the most convenient alternative approach, by visual check, histogram, and power spectrum analysis.
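The underlying idea, flag a pixel whose value is strongly asymmetric with respect to its neighbourhood and pull it toward that neighbourhood, can be shown in a toy single-pass form. This is deliberately simpler than the published iterative two-step 5 × 5 / 3 × 3 scheme; the threshold and kernel here are assumptions for illustration.

```python
import numpy as np

def suppress_local_asymmetries(img, thresh):
    """Toy artefact suppression: a pixel deviating from the median of its
    3 x 3 neighbourhood (centre excluded) by more than `thresh` is treated
    as a stochastic artefact and set to that median."""
    img = np.asarray(img, float)
    out = img.copy()
    rows, cols = img.shape
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            neigh = np.delete(img[i - 1:i + 2, j - 1:j + 2].ravel(), 4)
            med = np.median(neigh)
            if abs(img[i, j] - med) > thresh:
                out[i, j] = med
    return out
```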
Surface reconstruction from scattered data through pruning of unstructured grids
NASA Technical Reports Server (NTRS)
Maksymiuk, C. M.; Merriam, M. L.
1991-01-01
This paper describes an algorithm for reconstructing a surface from a randomly digitized object. Scan data (treated as a cloud of points) are first tessellated out to their convex hull using Delaunay triangulation. The line of sight between each surface point and the scanning device is traversed, and any tetrahedra pierced by it are removed. The remaining tetrahedra form an approximate solid model of the scanned object. Due to the inherently limited resolution of any scan, this algorithm requires two additional procedures to produce a smooth, polyhedral surface: one process removes long, narrow tetrahedra which span indentations in the surface between digitized points; the other smooths sharp edges. Results for a moderately resolved sample body and a highly resolved aircraft are displayed.
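The pruning criterion, remove any tetrahedron pierced by the line of sight from the scanner to a surface point, can be sketched with a barycentric containment test sampled along the segment. The paper traverses the line of sight exactly; the sampling below is only a crude illustration with assumed names.

```python
import numpy as np

def point_in_tetrahedron(p, tet, eps=1e-9):
    """Barycentric containment test for one tetrahedron (4 x 3 array)."""
    tet = np.asarray(tet, float)
    T = np.column_stack([tet[1] - tet[0], tet[2] - tet[0], tet[3] - tet[0]])
    b = np.linalg.solve(T, np.asarray(p, float) - tet[0])
    return b.min() >= -eps and b.sum() <= 1.0 + eps

def is_pierced(tet, eye, surface_point, n_samples=64):
    """Sample along the scanner-to-surface-point segment and test each
    sample for containment; pierced tetrahedra would be removed."""
    eye = np.asarray(eye, float)
    surface_point = np.asarray(surface_point, float)
    return any(point_in_tetrahedron(eye + t * (surface_point - eye), tet)
               for t in np.linspace(0.0, 1.0, n_samples))
```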
External calibration of polarimetric radar images using distributed targets
NASA Technical Reports Server (NTRS)
Yueh, Simon H.; Nghiem, S. V.; Kwok, R.
1992-01-01
A new technique is presented for calibrating polarimetric synthetic aperture radar (SAR) images using only the responses from natural distributed targets. The model for polarimetric radars is assumed to be X = cRST, where X is the measured scattering matrix corresponding to the target scattering matrix S distorted by the system matrices T and R (in general, T ≠ R^T). To allow polarimetric calibration using only distributed targets and corner reflectors, van Zyl assumed a reciprocal polarimetric radar model with T = R^T; when applied to JPL SAR data, a heuristic symmetrization procedure is used by POLCAL to compensate for the phase difference between the measured HV and VH responses and then take the average of both. This heuristic approach causes some non-removable cross-polarization responses for corner reflectors, which can be avoided by a rigorous symmetrization method based on reciprocity. After the radar is made reciprocal, a new algorithm based on the responses from distributed targets with reflection symmetry is developed to estimate the cross-talk parameters. The new algorithm never experiences problems in convergence and is also found to converge faster than the existing routines implemented in POLCAL. When the new technique is applied to JPL polarimetric data, symmetrization and cross-talk removal are performed on a line-by-line (azimuth) basis. After the cross-talk is removed from the entire image, phase and amplitude calibrations are carried out by selecting distributed targets either with azimuthal symmetry along the look direction or with well-known volume and surface scattering mechanisms to estimate the relative phases and amplitude responses of the horizontal and vertical channels.
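The symmetrization idea, estimate the systematic HV/VH phase offset from distributed-target statistics, align VH to HV, then average, can be sketched in a few lines. This is a simplified illustration of the general concept with assumed names; the paper's rigorous reciprocity-based procedure also handles amplitude imbalance and the full cross-talk model.

```python
import numpy as np

def symmetrize_crosspol(x_hv, x_vh):
    """Estimate the mean HV-VH phase offset over a set of distributed-target
    pixels, rotate VH into alignment with HV, and average the two channels."""
    x_hv = np.asarray(x_hv, complex)
    x_vh = np.asarray(x_vh, complex)
    phase = np.angle(np.sum(x_hv * np.conj(x_vh)))  # mean HV-VH phase offset
    return 0.5 * (x_hv + x_vh * np.exp(1j * phase))
```

If VH equals HV up to a constant phase rotation, the estimated offset recovers that rotation exactly and the symmetrized channel equals HV.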
Pulse stuttering as a remedy for aliased ground backscatter
NASA Astrophysics Data System (ADS)
Bowhill, S. A.
1983-12-01
An algorithm that aids in the removal of ground scatter from low frequency Mesosphere, Stratosphere, Troposphere (MST) radar signals is examined. The unwanted ground scatter appears in a sequence of velocity plots that are nearly identical at the various altitudes. The interpulse period is changed in a cyclic way, thereby destroying the coherence of the unwanted signal. The interpulse period must be changed by an amount at least equal to the transmitted pulse width, and optimum performance is obtained when the set of different interpulse periods occupies a time span greater than the coherence time of the unwanted signal. Since a 20-microsec pulse width is used, it was found convenient to cycle through 50 pulses, the interpulse period changing from 2 msec to 3 msec during the 1/8-second cycle. This particular pattern of interpulse periods was provided by a software radar controller. With this algorithm applied, the unwanted scatter signal becomes incoherent from one pulse to the next and is therefore perceived as noise by the coherent integrator and correlator.
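The stuttering pattern described, 50 pulses whose interpulse period ramps from 2 ms to 3 ms over the 1/8-second cycle, can be generated in a couple of lines. A linear ramp is assumed here; the actual controller pattern may differ.

```python
def stuttered_ipps(n_pulses=50, ipp_start=2e-3, ipp_stop=3e-3):
    """Linearly ramped interpulse periods for one stuttering cycle."""
    step = (ipp_stop - ipp_start) / (n_pulses - 1)
    return [ipp_start + i * step for i in range(n_pulses)]
```

The 50 periods average 2.5 ms, so one full cycle spans 125 ms, consistent with the 1/8-second figure in the abstract.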
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, A; Paysan, P; Brehm, M
2016-06-15
Purpose: To improve CBCT image quality for image-guided radiotherapy by applying advanced reconstruction algorithms to overcome scatter, noise, and artifact limitations. Methods: CBCT is used extensively for patient setup in radiotherapy. However, image quality generally falls short of diagnostic CT, limiting soft-tissue based positioning and potential applications such as adaptive radiotherapy. The conventional TrueBeam CBCT reconstructor uses a basic scatter correction and FDK reconstruction, resulting in residual scatter artifacts, suboptimal image noise characteristics, and other artifacts such as cone-beam artifacts. We have developed an advanced scatter correction that uses a finite-element solver (AcurosCTS) to model the behavior of photons as they pass (and scatter) through the object. Furthermore, iterative reconstruction is applied to the scatter-corrected projections, enforcing data consistency with statistical weighting and applying an edge-preserving image regularizer to reduce image noise. The combined algorithms have been implemented on a GPU. CBCT projections from clinically operating TrueBeam systems have been used to compare image quality between the conventional and improved reconstruction methods. Planning CT images of the same patients have also been compared. Results: The advanced scatter correction removes shading and inhomogeneity artifacts, reducing the scatter artifact from 99.5 HU to 13.7 HU in a typical pelvis case. Iterative reconstruction provides further benefit by reducing image noise and eliminating streak artifacts, thereby improving soft-tissue visualization. In a clinical head and pelvis CBCT, the noise was reduced by 43% and 48%, respectively, with no change in spatial resolution (assessed visually). Additional benefits include reduction of cone-beam artifacts and reduction of metal artifacts due to intrinsic downweighting of corrupted rays.
Conclusion: The combination of an advanced scatter correction with iterative reconstruction substantially improves CBCT image quality. It is anticipated that clinically acceptable reconstruction times will result from a multi-GPU implementation (the algorithms are under active development and not yet commercially available). All authors are employees of and (may) own stock of Varian Medical Systems.
Maji, Kaushik; Kouri, Donald J
2011-03-28
We have developed a new method for solving quantum dynamical scattering problems, using the time-independent Schrödinger equation (TISE), based on a novel way to generalize a "one-way" quantum mechanical wave equation, impose correct boundary conditions, and eliminate exponentially growing closed-channel solutions. The approach is readily parallelized to achieve approximately N^2 scaling, where N is the number of coupled equations. The full two-way nature of the TISE is included while propagating the wave function in the scattering variable, and the full S-matrix is obtained. The new algorithm is based on a "Modified Cayley" operator-splitting approach, generalizing earlier work where the method was applied to the time-dependent Schrödinger equation. All scattering-variable propagation approaches to solving the TISE involve solving a Helmholtz-type equation, and for more than one degree of freedom these are notoriously ill-behaved, owing to the unavoidable presence of exponentially growing contributions to the numerical solution. Traditionally, the method used to eliminate exponential growth has posed a major obstacle to the full parallelization of such propagation algorithms. We stabilize the solution by using the Feshbach projection operator technique to remove all the nonphysical exponentially growing closed channels, while retaining all of the propagating open-channel components as well as the exponentially decaying closed-channel components.
Measuring the global distribution of intense convection over land with passive microwave radiometry
NASA Technical Reports Server (NTRS)
Spencer, R. W.; Santek, D. A.
1985-01-01
The global distribution of intense convective activity over land is shown to be measurable with satellite passive-microwave methods through a comparison of an empirical rain rate algorithm with a climatology of thunderstorm days for the months of June-August. With the 18 and 37 GHz channels of the Nimbus-7 Scanning Multichannel Microwave Radiometer (SMMR), the strong volume scattering effects of precipitation can be measured. Even though a single frequency (37 GHz) is responsive to the scattering signature, two frequencies are needed to remove most of the effect that variations in thermometric temperature and soil moisture have on the brightness temperatures. Because snow cover is also a volume scatterer of microwave energy at these wavelengths, a discrimination procedure involving four of the SMMR channels is employed to separate the rain and snow classes, based upon their differences in average thermometric temperature.
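The two-frequency reasoning can be sketched as a brightness-temperature difference: precipitation-sized ice depresses the 37 GHz brightness temperature more than 18 GHz, so the difference rises over intense convection while surface-temperature and soil-moisture variations largely cancel. The threshold below is an illustrative assumption, not a value from the paper.

```python
import numpy as np

def convection_signature(tb18, tb37):
    """Two-frequency scattering signature: TB(18 GHz) - TB(37 GHz)."""
    return np.asarray(tb18, float) - np.asarray(tb37, float)

def is_intense_convection(tb18, tb37, threshold=8.0):
    """Flag pixels whose scattering-induced depression at 37 GHz exceeds
    an (assumed) threshold in kelvin."""
    return convection_signature(tb18, tb37) > threshold
```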
Computer image processing: Geologic applications
NASA Technical Reports Server (NTRS)
Abrams, M. J.
1978-01-01
Computer image processing of digital data was performed to support several geological studies. The specific goals were to: (1) relate the mineral content to the spectral reflectance of certain geologic materials, (2) determine the influence of environmental factors, such as atmosphere and vegetation, and (3) improve image processing techniques. For detection of spectral differences related to mineralogy, the technique of band ratioing was found to be the most useful. The influence of atmospheric scattering and methods to correct for it were also studied. Two techniques were used to correct for atmospheric effects: (1) dark object subtraction, and (2) normalization using ground spectral measurements. Of the two, the first proved the more successful at removing the effects of atmospheric scattering. A digital mosaic was produced from two side-lapping LANDSAT frames. The advantages were that the same enhancement algorithm could be applied to both frames, and there is no seam where the two images are joined.
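Dark object subtraction, technique (1) above, is simple enough to state in full: treat the scene minimum as pure atmospheric path radiance (a truly dark object should return essentially zero signal) and subtract it from every pixel. A minimal sketch:

```python
import numpy as np

def dark_object_subtract(band):
    """Subtract the scene minimum, taken as the additive path-radiance
    offset due to atmospheric scattering, from every pixel of one band."""
    band = np.asarray(band, float)
    return band - band.min()
```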
Costa, Filippo; Monorchio, Agostino; Manara, Giuliano
2016-01-01
A methodology for obtaining wideband scattering diffusion based on periodic artificial surfaces is presented. The proposed surfaces provide scattering towards multiple propagation directions across an extremely wide frequency band. They comprise unit cells with an optimized geometry, arranged in a periodic lattice with a repetition period larger than one wavelength, which induces the excitation of multiple Floquet harmonics. The geometry of the elementary unit cell is optimized to minimize the reflection coefficient of the fundamental Floquet harmonic over a wide frequency band. The optimization of the FSS geometry is performed with a genetic algorithm in conjunction with the periodic Method of Moments. The design method is verified through full-wave simulations and measurements. The proposed solution guarantees very good performance in terms of bandwidth-thickness ratio and removes the need for a high-resolution printing process. PMID:27181841
Multicamera polarized vision for the orientation with the skylight polarization patterns
NASA Astrophysics Data System (ADS)
Fan, Chen; Hu, Xiaoping; He, Xiaofeng; Zhang, Lilian; Wang, Yujie
2018-04-01
A robust orientation algorithm based on skylight polarization patterns for urban ground vehicles is presented. We present an orientation model based on Rayleigh scattering and propose a robust orientation algorithm using total least squares. The proposed algorithm can utilize the polarization pattern of the whole sky area to realize a more robust and accurate orientation. To enhance the algorithm's robustness in urban environments, we develop a real-time method that uses the gradient of the degree of polarization to remove obstacles from the polarization image. In addition, our algorithm can resolve the ambiguity of the polarized orientation without any other sensors. We also conduct a static rotation experiment and a dynamic vehicle experiment to evaluate the algorithm. The results demonstrate that the proposed algorithm provides accurate orientation estimation for a ground vehicle in both open and urban environments: the root-mean-square error is 0.28 deg in the static experiment and 0.81 deg in the dynamic experiment. Finally, we discuss insights gained with respect to further work in optics and robotics.
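The total-least-squares step can be sketched as a homogeneous fit solved by SVD. Below is a minimal 2-D illustration with synthetic constraints (the heading, noise level and constraint construction are invented for the sketch; the paper's full celestial model is more involved): each Rayleigh-model constraint row is orthogonal to the unknown solar-meridian direction, and TLS recovers that direction even with noise in every matrix entry.

```python
import numpy as np

def tls_solve(A):
    # Total least squares for the homogeneous system A @ x ≈ 0 with
    # ||x|| = 1: the right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    return vt[-1]

rng = np.random.default_rng(1)
true_heading = np.deg2rad(40.0)
s_true = np.array([np.cos(true_heading), np.sin(true_heading)])

# Each sky pixel contributes one row orthogonal to s_true; TLS is the
# right tool because noise enters all matrix entries, not just a RHS.
normals = np.tile([-s_true[1], s_true[0]], (200, 1))
A = normals + 0.01 * rng.standard_normal(normals.shape)

s_est = tls_solve(A)
heading_est = np.arctan2(s_est[1], s_est[0]) % np.pi  # heading mod 180 deg
err = min(abs(heading_est - true_heading),
          np.pi - abs(heading_est - true_heading))
```

The modulo-π wrap reflects the inherent 180° ambiguity of polarization directions that the paper resolves separately.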
Oscillometric Blood Pressure Estimation: Past, Present, and Future.
Forouzanfar, Mohamad; Dajani, Hilmi R; Groza, Voicu Z; Bolic, Miodrag; Rajan, Sreeraman; Batkin, Izmail
2015-01-01
The use of automated blood pressure (BP) monitoring is growing, as it does not require much expertise and can be performed by patients several times a day at home. Oscillometry is one of the most common measurement methods used in automated BP monitors. A review of the literature shows that a large variety of oscillometric algorithms have been developed for accurate estimation of BP, but these algorithms are scattered across many different publications and patents. Moreover, although oscillometric devices dominate the home BP monitoring market, little effort has been made to survey the underlying algorithms used to estimate BP. In this review, a comprehensive survey of the existing oscillometric BP estimation algorithms is presented. The survey covers a broad spectrum of algorithms, including conventional maximum-amplitude and derivative oscillometry as well as the recently proposed learning algorithms, model-based algorithms, and algorithms based on analysis of pulse morphology and pulse transit time. The aim is to classify the diverse underlying algorithms, describe each briefly, and discuss their advantages and disadvantages. This paper also reviews artifact removal techniques in oscillometry and the current standards for automated BP monitors.
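The conventional maximum-amplitude algorithm mentioned in the survey can be sketched as follows. The characteristic ratios used here (0.55 for systolic, 0.85 for diastolic) are typical textbook values, not a standard, and the deflation curve is synthetic:

```python
import numpy as np

def max_amplitude_algorithm(cuff_pressure, osc_amplitude, ks=0.55, kd=0.85):
    # MAP = cuff pressure at the oscillation-amplitude peak.
    # SBP/DBP = cuff pressures where the amplitude equals fixed fractions
    # of the peak, on the high- and low-pressure sides respectively.
    i_map = int(np.argmax(osc_amplitude))
    map_ = cuff_pressure[i_map]
    pre, post = slice(0, i_map + 1), slice(i_map, None)
    peak = osc_amplitude[i_map]
    sbp = cuff_pressure[pre][
        np.argmin(np.abs(osc_amplitude[pre] - ks * peak))]
    dbp = cuff_pressure[post][
        np.argmin(np.abs(osc_amplitude[post] - kd * peak))]
    return sbp, map_, dbp

# Synthetic deflation from 180 to 40 mmHg with a Gaussian oscillation
# envelope peaking at 95 mmHg (illustrative, not physiological data).
cuff = np.linspace(180.0, 40.0, 300)
amp = np.exp(-((cuff - 95.0) ** 2) / (2.0 * 20.0 ** 2))
sbp, map_, dbp = max_amplitude_algorithm(cuff, amp)
```

For this envelope the estimates land near 117/95/84 mmHg, illustrating how the ratio choice directly sets the SBP/DBP spread.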
NASA Astrophysics Data System (ADS)
Chatzidakis, S.; Choi, C. K.; Tsoukalas, L. H.
2016-08-01
The potential for non-proliferation monitoring of spent nuclear fuel sealed in dry casks, interacting continuously with naturally generated cosmic-ray muons, is investigated. Treatments of the muon RMS scattering angle by Moliere, Rossi-Greisen, Highland, and Lynch-Dahl were analyzed and compared with simplified Monte Carlo simulations. The Lynch-Dahl expression has the lowest error and appears appropriate for conceptual calculations involving high-Z, thick targets such as dry casks. The GEANT4 Monte Carlo code was used to simulate dry casks with various fuel loadings, and scattering variance estimates for each case were obtained. The scattering variance estimator was shown to be unbiased, and using Chebyshev's inequality it was found that 10^6 muons provide estimates of the scattering variances within 1% of the true value at a 99% confidence level. These estimates were used as reference values to calculate scattering distributions and to evaluate the asymptotic behavior for small variations in fuel loading. It is shown that the scattering distributions for a fully loaded dry cask and one with a missing fuel assembly initially overlap significantly, but their separation increases with increasing number of muons; with 100,000 muons, one missing fuel assembly can be distinguished from a fully loaded cask with only a small overlap between the distributions. This indicates that the removal of a standard fuel assembly can be identified, provided that enough muons are collected. A Bayesian algorithm was developed to classify dry casks and provide a decision rule that minimizes the risk of an incorrect decision. The algorithm's performance was evaluated and its lower detection limit determined.
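The RMS scattering-angle treatments compared above are closed-form expressions; two of them can be sketched directly. The constants are the commonly quoted ones (Highland's 14.1 MeV form with a log10 correction; the 13.6 MeV form with a 0.038 ln correction usually attributed to Lynch and Dahl), while the muon momentum and material thickness below are purely illustrative:

```python
import math

def highland_theta0(p_mev, beta, x_over_X0, z=1):
    # Highland's form of the RMS plane scattering angle (radians).
    return (14.1 / (beta * p_mev)) * z * math.sqrt(x_over_X0) * (
        1.0 + (1.0 / 9.0) * math.log10(x_over_X0))

def lynch_dahl_theta0(p_mev, beta, x_over_X0, z=1):
    # Lynch-Dahl / PDG form with the 13.6 MeV constant.
    return (13.6 / (beta * p_mev)) * z * math.sqrt(x_over_X0) * (
        1.0 + 0.038 * math.log(x_over_X0))

# A ~3 GeV/c cosmic-ray muon (beta ≈ 1) traversing ~10 radiation
# lengths of cask material (illustrative numbers only).
th_h = highland_theta0(3000.0, 1.0, 10.0)
th_ld = lynch_dahl_theta0(3000.0, 1.0, 10.0)
```

Both forms give RMS angles on the order of 15 mrad here; the paper's point is which form tracks thick, high-Z targets best.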
Angle Statistics Reconstruction: a robust reconstruction algorithm for Muon Scattering Tomography
NASA Astrophysics Data System (ADS)
Stapleton, M.; Burns, J.; Quillin, S.; Steer, C.
2014-11-01
Muon Scattering Tomography (MST) is a technique that uses the scattering of cosmic-ray muons to probe the contents of enclosed volumes. As a muon passes through material it undergoes multiple Coulomb scattering, where the amount of scattering depends on the density and atomic number of the material as well as the path length. Hence, MST has been proposed as a means of imaging dense materials, for instance to detect special nuclear material in cargo containers. Algorithms are required to generate an accurate reconstruction of the material density inside the volume from the muon scattering information, and some have already been proposed, most notably the Point of Closest Approach (PoCA) and Maximum Likelihood/Expectation Maximisation (MLEM) algorithms. However, whilst PoCA-based algorithms are easy to implement, they perform rather poorly in practice; conversely, MLEM is complicated to implement and computationally intensive, and there is currently no published, fast, easily implementable algorithm that performs well in practice. In this paper, we first provide a detailed analysis of the source of inaccuracy in PoCA-based algorithms. We then motivate an alternative method, based on ideas first laid out by Morris et al., presenting and fully specifying an algorithm that performs well against simulations of realistic scenarios. We argue that this new algorithm should be adopted by developers of Muon Scattering Tomography as an alternative to PoCA.
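The PoCA step itself reduces to a closed-form line-to-line problem: given the incoming and outgoing muon tracks, find the midpoint of the shortest segment joining them (the assumed scattering point) and the angle between the track directions. A minimal sketch, with made-up track numbers:

```python
import numpy as np

def poca(p1, d1, p2, d2):
    # Point of Closest Approach between lines p1 + t*d1 and p2 + s*d2:
    # solve the 2x2 normal equations for the minimising (t, s), return
    # the segment midpoint and the scattering angle between the tracks.
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    w = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b          # zero only for parallel tracks
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    midpoint = 0.5 * ((p1 + t * d1) + (p2 + s * d2))
    angle = np.arccos(np.clip(d1 @ d2, -1.0, 1.0))
    return midpoint, angle

# Incoming track along -z; outgoing track deflected by ~10 mrad so that
# both lines pass through the origin, the true scattering point.
mid, ang = poca(np.array([0.0, 0.0, 10.0]), np.array([0.0, 0.0, -1.0]),
                np.array([0.1, 0.0, -10.0]), np.array([0.01, 0.0, -1.0]))
```

Assigning the entire scattering angle to this single point is exactly the approximation whose inaccuracy the paper analyses.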
The SASS scattering coefficient algorithm. [Seasat-A Satellite Scatterometer
NASA Technical Reports Server (NTRS)
Bracalente, E. M.; Grantham, W. L.; Boggs, D. H.; Sweet, J. L.
1980-01-01
This paper describes the algorithms used to convert engineering unit data obtained from the Seasat-A satellite scatterometer (SASS) to radar scattering coefficients and associated supporting parameters. A description is given of the instrument receiver and related processing used by the scatterometer to measure signal power backscattered from the earth's surface. The applicable radar equation used to determine the scattering coefficient is derived. Sample results of SASS data processed through current algorithm development facility (ADF) scattering coefficient algorithms are presented, including scattering coefficient values for both water and land surfaces. Scattering coefficient signatures for these two surface types are seen to have distinctly different characteristics. Scattering coefficient measurements of the Amazon rain forest indicate the usefulness of this type of data as a stable calibration reference target.
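The inversion from received power to scattering coefficient follows from the distributed-target radar equation. The sketch below uses a generic single-cell form with invented instrument parameters, not the actual SASS processing chain:

```python
import math

def sigma0_from_power(p_r, p_t, gain, wavelength, r, area_eff, loss=1.0):
    # Invert a single-cell distributed-target radar equation,
    #   P_r = P_t G^2 lambda^2 sigma0 A / ((4 pi)^3 R^4 L),
    # for the scattering coefficient sigma0.
    return p_r * (4 * math.pi) ** 3 * r ** 4 * loss / (
        p_t * gain ** 2 * wavelength ** 2 * area_eff)

def to_db(x):
    # Scattering coefficients are conventionally reported in dB.
    return 10.0 * math.log10(x)

# Round-trip check with made-up parameters: transmit 1 kW, gain 100,
# lambda 2.1 cm, slant range 850 km, effective cell area 2e7 m^2.
sigma0_true = 0.05
p_r = (1e3 * 100.0 ** 2 * 0.021 ** 2 * sigma0_true * 2e7
       / ((4 * math.pi) ** 3 * 8.5e5 ** 4))
sigma0_est = sigma0_from_power(p_r, 1e3, 100.0, 0.021, 8.5e5, 2e7)
```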
Chan, Eugene; Rose, L R Francis; Wang, Chun H
2015-05-01
Existing damage imaging algorithms for detecting and quantifying structural defects, particularly those based on diffraction tomography, assume far-field conditions for the scattered field data. This paper presents a major extension of diffraction tomography that overcomes this limitation and utilises a near-field multi-static data matrix as the input data. This new algorithm, which employs numerical solutions of the dynamic Green's functions, makes it possible to quantitatively image laminar damage even in complex structures for which the dynamic Green's functions are not available analytically. To validate this new method, the numerical Green's functions and the multi-static data matrix for laminar damage in flat and stiffened isotropic plates are first determined using finite element models. Next, these results are time-gated to remove boundary reflections, followed by a discrete Fourier transform to obtain the amplitude and phase information for both the baseline (damage-free) and the scattered wave fields. Using these computationally generated results and experimental verification, it is shown that the new imaging algorithm is capable of accurately determining the damage geometry, size and severity for a variety of damage sizes and shapes, including multi-site damage. Some aspects of minimal sensor requirements pertinent to image quality and practical implementation are also briefly discussed.
Image reconstruction through thin scattering media by simulated annealing algorithm
NASA Astrophysics Data System (ADS)
Fang, Longjie; Zuo, Haoyi; Pang, Lin; Yang, Zuogang; Zhang, Xicheng; Zhu, Jianhua
2018-07-01
A method for reconstructing the image of an object behind thin scattering media by phase modulation is proposed. The optimized phase mask is obtained by modulating the scattered light with a simulated annealing algorithm. The correlation coefficient is used as the fitness function to evaluate the quality of the reconstructed image. Reconstructed images optimized by simulated annealing and by a genetic algorithm are compared in detail. The experimental results show that the proposed method achieves better definition and higher speed than the genetic algorithm.
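A minimal sketch of the simulated-annealing loop for a phase mask follows. The fitness here is a toy stand-in (alignment with a fixed random phase screen) rather than the experimental correlation coefficient, and the cooling schedule is arbitrary:

```python
import numpy as np

def simulated_annealing(fitness, mask, t0=1.0, t_min=1e-3, cool=0.95,
                        steps=50, seed=0):
    # Perturb one SLM phase element at a time; accept worse solutions
    # with Boltzmann probability so the search can escape local optima.
    rng = np.random.default_rng(seed)
    best = mask.copy()
    f_cur = fitness(mask)
    t = t0
    while t > t_min:
        for _ in range(steps):
            trial = mask.copy()
            idx = rng.integers(mask.size)
            trial.flat[idx] = rng.uniform(0.0, 2.0 * np.pi)  # new phase
            f_trial = fitness(trial)
            if f_trial > f_cur or rng.random() < np.exp((f_trial - f_cur) / t):
                mask, f_cur = trial, f_trial
                if f_cur > fitness(best):
                    best = mask.copy()
        t *= cool
    return best

# Toy fitness: how well the mask cancels a fixed random "scattering"
# phase screen (equals 1.0 when mask == screen exactly).
rng = np.random.default_rng(42)
screen = rng.uniform(0.0, 2.0 * np.pi, size=(4, 4))
fit = lambda m: float(np.cos(m - screen).mean())
result = simulated_annealing(fit, np.zeros((4, 4)))
```

Because the best-so-far mask is tracked separately, the returned solution can never be worse than the starting mask.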
NASA Astrophysics Data System (ADS)
Fukuda, Satoru; Nakajima, Teruyuki; Takenaka, Hideaki; Higurashi, Akiko; Kikuchi, Nobuyuki; Nakajima, Takashi Y.; Ishida, Haruma
2013-12-01
A satellite aerosol retrieval algorithm was developed to utilize a near-ultraviolet band of the Greenhouse gases Observing SATellite/Thermal And Near infrared Sensor for carbon Observation (GOSAT/TANSO)-Cloud and Aerosol Imager (CAI). At near-ultraviolet wavelengths, the surface reflectance over land is smaller than at visible wavelengths, so it should be possible to reduce retrieval error by using the near-ultraviolet spectral region. In the present study, we first developed a cloud shadow detection algorithm that uses the first and second minimum reflectances at 380 nm and 680 nm, based on the difference in the Rayleigh scattering contribution between these two bands. We then developed a new surface reflectance correction algorithm, the modified Kaufman method, which uses minimum reflectance data at 680 nm and the NDVI to estimate the surface reflectance at 380 nm. This algorithm was found to be particularly effective at reducing the aerosol effect remaining in the 380 nm minimum reflectance, an effect that has previously proven difficult to remove owing to the infrequent sampling associated with the three-day recursion period of GOSAT and the narrow 1000 km CAI swath. Finally, we applied these two algorithms to retrieve aerosol optical thicknesses over a land area. Our results exhibited better agreement with sun-sky radiometer observations than results obtained using a simple surface reflectance correction based on minimum radiances.
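The NDVI-based surface reflectance step can be sketched as follows. The NDVI formula is standard; the linear 680 nm to 380 nm relation and its coefficients are purely illustrative placeholders for the paper's modified Kaufman method:

```python
import numpy as np

def ndvi(nir, red):
    # Normalized Difference Vegetation Index from near-infrared
    # and red reflectances.
    return (nir - red) / (nir + red)

def estimate_r380(r680_min, ndvi_val, a=0.5, b=-0.2):
    # Hypothetical Kaufman-style spectral relation: scale the 680 nm
    # minimum reflectance with an NDVI-dependent slope to estimate the
    # 380 nm surface reflectance. Coefficients a, b are invented.
    return np.clip(r680_min * (a + b * ndvi_val), 0.0, 1.0)

# Three illustrative land pixels.
r680 = np.array([0.05, 0.10, 0.02])
nir = np.array([0.40, 0.30, 0.35])
v = ndvi(nir, r680)
r380 = estimate_r380(r680, v)
```

Denser vegetation (higher NDVI) lowers the estimated near-UV surface reflectance, which is the qualitative behaviour such corrections exploit.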
Wangerin, Kristen A; Baratto, Lucia; Khalighi, Mohammad Mehdi; Hope, Thomas A; Gulaka, Praveen K; Deller, Timothy W; Iagaru, Andrei H
2018-06-06
Gallium-68-labeled radiopharmaceuticals pose a challenge for scatter estimation because their targeted nature can produce high contrast in the regions of the kidneys and bladder. Even small errors in the scatter estimate can result in washout artifacts. Administration of diuretics can reduce these artifacts, but they may result in adverse events. Here, we investigated the ability of algorithmic modifications to mitigate washout artifacts and eliminate the need for diuretics or other interventions. The model-based scatter algorithm was modified to account for PET/MRI scanner geometry and the challenges of non-FDG tracers. Fifty-three clinical 68Ga-RM2 and 68Ga-PSMA-11 whole-body images were reconstructed using the baseline scatter algorithm. For comparison, reconstructions were also performed with modified sampling in the single-scatter estimation and with an offset in the scatter tail-scaling process. None of the patients received furosemide to attempt to decrease the accumulation of radiopharmaceuticals in the bladder. The images were scored independently by three blinded reviewers using a 5-point Likert scale. The scatter algorithm improvements significantly decreased or completely eliminated the washout artifacts. Comparing the baseline and most improved algorithm, image quality increased and image artifacts were reduced for both 68Ga-RM2 and 68Ga-PSMA-11 in the kidney and bladder regions. Image reconstruction with the improved scatter correction algorithm mitigated washout artifacts and recovered diagnostic image quality in 68Ga PET, indicating that the use of diuretics may be avoided.
René de Cotret, Laurent P; Siwick, Bradley J
2017-07-01
The general problem of background subtraction in ultrafast electron powder diffraction (UEPD) is presented, with a focus on diffraction patterns obtained from materials of moderately complex structure, which contain many overlapping peaks and effectively no scattering-vector regions that can be considered exclusively background. We compare the performance of background subtraction algorithms based on the discrete and dual-tree complex wavelet transforms (DTCWT) when applied to simulated UEPD data on the M1-R phase transition in VO2 with a time-varying background. We find that the DTCWT approach is capable of extracting intensities accurate to better than 2% across the whole simulated scattering-vector range, effectively independent of delay time. A Python package is available.
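The shared idea, separating a smooth baseline from sharp diffraction peaks by keeping only low-frequency content, can be illustrated without a wavelet library. The sketch below uses iterative smooth-and-clip with a box kernel as a crude stand-in for the paper's DTCWT approach, on a synthetic powder pattern:

```python
import numpy as np

def smooth_clip_baseline(y, passes=8, width=21):
    # Repeatedly smooth the running estimate and clip it to the signal,
    # so the baseline slides under the peaks while following the slowly
    # varying background (not the DTCWT itself, just the shared idea).
    base = y.copy()
    kernel = np.ones(width) / width
    for _ in range(passes):
        smooth = np.convolve(base, kernel, mode="same")
        base = np.minimum(base, smooth)
    return base

# Synthetic pattern: decaying background plus two Gaussian peaks.
q = np.linspace(0.0, 10.0, 500)
background = 5.0 * np.exp(-q / 4.0)
peaks = (np.exp(-((q - 3.0) ** 2) / 0.01)
         + 0.5 * np.exp(-((q - 6.0) ** 2) / 0.02))
signal = background + peaks
est = smooth_clip_baseline(signal)
corrected = signal - est
```

Because the estimate is clipped to the signal at every pass, the corrected pattern is non-negative and the peaks survive subtraction.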
Soil Moisture Estimate under Forest using a Semi-empirical Model at P-Band
NASA Astrophysics Data System (ADS)
Truong-Loi, M.; Saatchi, S.; Jaruwatanadilok, S.
2013-12-01
In this paper we show the potential of a semi-empirical algorithm to retrieve soil moisture under forests using P-band polarimetric SAR data. In past decades, several remote sensing techniques have been developed to estimate surface soil moisture. In most studies associated with radar sensing of soil moisture, the proposed algorithms focus on bare or sparsely vegetated surfaces where the effect of vegetation can be ignored. At long wavelengths such as L-band, empirical or physical models such as the Small Perturbation Model (SPM) provide reasonable estimates of surface soil moisture at depths of 0-5 cm. For densely vegetated surfaces such as forests, however, the problem becomes more challenging because the vegetation canopy is a complex scattering environment, and for this reason only a few studies in the literature have focused on retrieving soil moisture under a vegetation canopy. Moghaddam et al. developed an algorithm to estimate soil moisture under a boreal forest using L- and P-band SAR data. For their study area, double-bounce scattering between trunks and ground appeared to be the most important mechanism, so they implemented parametric models of radar backscatter for double-bounce using simulations of a numerical forest scattering model. Hajnsek et al. showed the potential of estimating soil moisture under agricultural vegetation using L-band polarimetric SAR data, applying polarimetric-decomposition techniques to remove the vegetation layer. Here we use an approach based on a physical formulation of the dominant scattering mechanisms and three parameters that integrate the vegetation and soil effects at long wavelengths. The algorithm is a simplification of a 3-D coherent model of the forest canopy based on the Distorted Born Approximation (DBA).
The simplified model has three equations and three unknowns, preserving the three dominant scattering mechanisms of volume, double-bounce and surface for the three polarized backscattering coefficients σHH, σVV and σHV. The inversion, which is not an ill-posed problem, uses the non-linear Levenberg-Marquardt optimization method to estimate the three model parameters: vegetation aboveground biomass, average soil moisture and surface roughness. The analytical formulation of the model will first be recalled and sensitivity analyses shown; results obtained with real SAR data will then be presented and compared with ground estimates.
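The three-equations/three-unknowns inversion can be sketched with a hand-rolled Levenberg-Marquardt loop. The forward model below is a toy linear stand-in with invented coefficients, not the simplified DBA model; it only illustrates the estimation machinery:

```python
import numpy as np

def forward_model(params):
    # Hypothetical mapping from (biomass, soil moisture, roughness) to
    # (sigma_HH, sigma_VV, sigma_HV) in dB; coefficients are invented.
    w, mv, s = params
    return np.array([
        -12.0 + 0.08 * w + 10.0 * mv + 2.0 * s,
        -10.0 + 0.05 * w + 12.0 * mv + 1.0 * s,
        -18.0 + 0.10 * w + 4.0 * mv + 3.0 * s,
    ])

def levenberg_marquardt(f, x0, y, lam=1e-2, iters=50):
    # Minimal LM loop with a central-difference Jacobian: damp the
    # Gauss-Newton step, accept it only if the residual decreases.
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = f(x) - y
        J = np.empty((y.size, x.size))
        for j in range(x.size):
            dx = np.zeros_like(x)
            dx[j] = 1e-6
            J[:, j] = (f(x + dx) - f(x - dx)) / 2e-6
        step = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -J.T @ r)
        if np.linalg.norm(f(x + step) - y) < np.linalg.norm(r):
            x, lam = x + step, lam * 0.5
        else:
            lam *= 2.0
    return x

# Truth: 80 (biomass units), 0.25 (volumetric moisture), 1.5 (roughness).
y_obs = forward_model(np.array([80.0, 0.25, 1.5]))
x_hat = levenberg_marquardt(forward_model, [50.0, 0.1, 1.0], y_obs)
```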
Gamma-ray momentum reconstruction from Compton electron trajectories by filtered back-projection
Haefner, A.; Gunter, D.; Plimley, B.; ...
2014-11-03
Gamma-ray imaging utilizing Compton scattering has traditionally relied on measuring coincident gamma-ray interactions to map directional information of the source distribution. This coincidence requirement makes it an inherently inefficient process. We present an approach to gamma-ray reconstruction from Compton scattering that requires only a single electron-tracking detector, thus removing the coincidence requirement. From the Compton-scattered electron momentum distribution, our algorithm analytically computes the incident photon's correlated direction and energy distributions. Because this method maps the source energy and location, it is useful in applications where prior information about the source distribution is unknown. We demonstrate this method with electron tracks measured in a scientific Si charge-coupled device. While the method was demonstrated with electron tracks in a Si-based detector, it is applicable to any detector that can measure electron direction and energy, or equivalently the electron momentum. For example, it can increase the sensitivity to obtain energy and direction in gas-based systems that suffer from limited efficiency.
Brunner, Stephen; Nett, Brian E; Tolakanahalli, Ranjini; Chen, Guang-Hong
2011-02-21
X-ray scatter is a significant problem in cone-beam computed tomography when thicker objects and larger cone angles are used, as scattered radiation can lead to reduced contrast and CT number inaccuracy. Advances have been made in x-ray computed tomography (CT) by incorporating a high quality prior image into the image reconstruction process. In this paper, we extend this idea to correct scatter-induced shading artifacts in cone-beam CT image-guided radiation therapy. Specifically, this paper presents a new scatter correction algorithm which uses a prior image with low scatter artifacts to reduce shading artifacts in cone-beam CT images acquired under conditions of high scatter. The proposed correction algorithm begins with an empirical hypothesis that the target image can be written as a weighted summation of a series of basis images that are generated by raising the raw cone-beam projection data to different powers, and then, reconstructing using the standard filtered backprojection algorithm. The weight for each basis image is calculated by minimizing the difference between the target image and the prior image. The performance of the scatter correction algorithm is qualitatively and quantitatively evaluated through phantom studies using a Varian 2100 EX System with an on-board imager. Results show that the proposed scatter correction algorithm using a prior image with low scatter artifacts can substantially mitigate scatter-induced shading artifacts in both full-fan and half-fan modes.
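The weighted-basis-image step described above reduces to a linear least-squares fit of the basis images to the prior. The sketch below skips the filtered-backprojection step and works on small arrays directly; the powers and data are invented for illustration:

```python
import numpy as np

def fit_basis_weights(basis_images, prior):
    # Solve for weights w minimising || sum_k w_k B_k - prior ||_2,
    # the least-squares step of the prior-image correction.
    A = np.stack([b.ravel() for b in basis_images], axis=1)
    w, *_ = np.linalg.lstsq(A, prior.ravel(), rcond=None)
    return w

def corrected_image(basis_images, w):
    # Weighted summation of the basis images.
    return sum(wk * b for wk, b in zip(w, basis_images))

# Stand-in "reconstructions": basis images are powers of the raw data
# (the FBP of each powered projection set is omitted for brevity).
rng = np.random.default_rng(3)
raw = rng.uniform(0.5, 1.5, size=(8, 8))
basis = [raw ** p for p in (0.5, 1.0, 1.5)]

# A low-scatter prior that happens to be an exact combination, so the
# fit should recover the weights (0.2, 0.7, 0.1) exactly.
prior = 0.2 * basis[0] + 0.7 * basis[1] + 0.1 * basis[2]
w = fit_basis_weights(basis, prior)
img = corrected_image(basis, w)
```

With real data the prior is only approximated, and the fitted combination inherits the prior's low-scatter shading while keeping the current acquisition's anatomy.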
The Effect of Sub-Aperture in DRIA Framework Applied on Multi-Aspect PolSAR Data
NASA Astrophysics Data System (ADS)
Xue, Feiteng; Yin, Qiang; Lin, Yun; Hong, Wen
2016-08-01
Multi-aspect SAR is a new remote sensing technique that acquires consecutive data over a large range of look angles as the platform moves. Multi-aspect observation brings higher resolution and SNR to the SAR image, and multi-aspect PolSAR data can increase the accuracy of target identification and classification because it contains the 3-D polarimetric scattering properties. DRIA (detecting-removing-incoherent-adding) is a multi-aspect PolSAR data processing framework. In this method, anisotropic and isotropic scattering are separated by a maximum-likelihood ratio test; the anisotropic scattering is removed to form a removal series, while the isotropic scattering is incoherently added to produce a high-resolution image. The removal series describes the anisotropic scattering properties and is used for feature extraction and classification. This article focuses on the effect of the number of sub-apertures on anisotropic scattering detection and removal. The more sub-apertures there are, the smaller the look-angle span of each. Artificial targets exhibit anisotropic scattering because of Bragg resonances, and increasing the number of sub-apertures gives more accurate observation in azimuth even though the quality of each single image may degrade. The accuracy of classification in agricultural fields is affected by the anisotropic scattering caused by Bragg resonances, and the size of the sub-aperture has a significant effect on how well they are removed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, H; Kong, V; Jin, J
Purpose: A synchronized moving grid (SMOG) has been proposed to reduce scatter and lag artifacts in cone beam computed tomography (CBCT). However, information is missing in each projection because certain areas are blocked by the grid. A previous solution to this issue was to acquire two complementary projections at each position, which increases scanning time. This study reports our first result using an inter-projection sensor fusion (IPSF) method to estimate the missing projection data in our prototype SMOG-based CBCT system. Methods: An in-house SMOG assembly with a 1:1 grid of 3 mm gap was installed in a CBCT benchtop. The grid moves back and forth with a 3-mm amplitude at up to 20-Hz frequency. A control program in LabView synchronizes the grid motion with the platform rotation and x-ray firing so that the grid patterns for any two neighboring projections are complementary. A Catphan was scanned with 360 projections. After scatter correction, the IPSF algorithm was applied to estimate the missing signal for each projection using information from the two neighboring projections. The Feldkamp-Davis-Kress (FDK) algorithm was applied to reconstruct CBCT images. The CBCTs were compared to those reconstructed using normal projections without the SMOG system. Results: The SMOG-IPSF method may reduce imaging dose by half owing to the radiation blocked by the grid. The method almost completely removed scatter-related artifacts, such as cupping. Evaluation of the line-pair patterns in the Catphan suggested that spatial resolution degradation was minimal. Conclusion: SMOG-IPSF is promising for reducing scatter artifacts and improving image quality while reducing radiation dose.
Time-frequency analysis of acoustic scattering from elastic objects
NASA Astrophysics Data System (ADS)
Yen, Nai-Chyuan; Dragonette, Louis R.; Numrich, Susan K.
1990-06-01
A time-frequency analysis of acoustic scattering from elastic objects was carried out using the time-frequency representation based on a modified version of the Wigner distribution function (WDF) algorithm. A simple and efficient processing algorithm was developed, which provides meaningful interpretation of the scattering physics. The time and frequency representation derived from the WDF algorithm was further reduced to a display which is a skeleton plot, called a vein diagram, that depicts the essential features of the form function. The physical parameters of the scatterer are then extracted from this diagram with the proper interpretation of the scattering phenomena. Several examples, based on data obtained from numerically simulated models and laboratory measurements for elastic spheres and shells, are used to illustrate the capability and proficiency of the algorithm.
Real-time single image dehazing based on dark channel prior theory and guided filtering
NASA Astrophysics Data System (ADS)
Zhang, Zan
2017-10-01
Images and videos captured outdoors in foggy weather are seriously degraded. To restore images taken in fog, and to overcome the residual fog at edges left by traditional dark channel prior algorithms, we propose a new dehazing method. We first locate the fog region in the dark channel map using a quadtree to estimate the transmittance. Then we treat the gray-scale image after guided filtering as an atmospheric light map and remove haze based on it. Box filtering and image down-sampling are also used to improve processing speed. Finally, the atmospheric scattering model is used to restore the image. Extensive experiments show that the algorithm is effective, efficient, and widely applicable.
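A compact sketch of the underlying dark-channel machinery (He et al.'s prior plus the atmospheric scattering model I = J·t + A·(1−t)) is below. It omits the paper's quadtree, guided-filtering and box-processing refinements, and the airlight estimate is the simplest possible one:

```python
import numpy as np

def dark_channel(img, patch=3):
    # Minimum over colour channels and a local patch: in haze-free
    # outdoor patches some pixel is nearly black (the dark channel prior).
    dc = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(dc, pad, mode="edge")
    out = np.empty_like(dc)
    h, w = dc.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def dehaze(img, omega=0.95, t_min=0.1):
    # Naive airlight estimate: the pixel with the brightest dark channel.
    A = img.reshape(-1, 3)[np.argmax(dark_channel(img).ravel())]
    # Transmission from the dark channel of the normalised image,
    # then invert I = J*t + A*(1 - t) for the scene radiance J.
    t = np.maximum(1.0 - omega * dark_channel(img / A), t_min)
    return (img - A) / t[..., None] + A

# Synthetic hazy scene with known transmission 0.5 and airlight 1.0.
rng = np.random.default_rng(7)
scene = rng.uniform(0.0, 0.6, size=(16, 16, 3))
hazy = scene * 0.5 + 1.0 * (1.0 - 0.5)
restored = dehaze(hazy)
```

The `t_min` floor prevents division blow-ups in dense fog regions, which is one source of the edge artifacts the paper addresses.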
SU-D-206-07: CBCT Scatter Correction Based On Rotating Collimator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, G; Feng, Z; Yin, Y
2016-06-15
Purpose: Scatter correction in cone-beam computed tomography (CBCT) has an obvious effect on the removal of image noise and the cup artifact and on the increase of image contrast. Several methods using a beam blocker for the estimation and subtraction of scatter have been proposed; however, mechanical inconvenience and a propensity for residual artifacts have limited further basic and clinical research. Here, we propose a rotating-collimator-based approach, in conjunction with reconstruction based on a discrete Radon transform and Tchebichef moments algorithm, to correct scatter-induced artifacts. Methods: A rotating collimator, comprising round tungsten-alloy strips, was mounted on a linear actuator. The rotating collimator is divided into six equal portions; the round strips are evenly spaced on each portion but staggered between portions. A step motor connected to the rotating collimator drives the blocker around the x-ray source during CBCT acquisition. CBCT reconstruction based on a discrete Radon transform and Tchebichef moments algorithm is then performed. Experimental studies using a water phantom and the Catphan504 were carried out to evaluate the performance of the proposed scheme. Results: The proposed algorithm was tested on both Monte Carlo simulations and actual experiments with the Catphan504 phantom. In simulation, the mean square reconstruction error decreases from 16% to 1.18%, the cupping (τcup) from 14.005% to 0.66%, and the peak signal-to-noise ratio increases from 16.96 to 31.45. In the actual experiments, the induced visual artifacts are significantly reduced. Conclusion: We conducted an experiment on a CBCT imaging system with a rotating collimator to develop and optimize x-ray scatter control and reduction techniques. The proposed method is attractive in applications where high CBCT image quality is critical, for example, dose calculation in adaptive radiation therapy.
We want to thank Dr. Lei Xing and Dr. Yong Yang of the Stanford University School of Medicine for this work. This work was jointly supported by NSFC (61471226), the Natural Science Foundation for Distinguished Young Scholars of Shandong Province (JQ201516), and the China Postdoctoral Science Foundation (2015T80739, 2014M551949).
WE-EF-207-10: Striped Ratio Grids: A New Concept for Scatter Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hsieh, S
2015-06-15
Purpose: To propose a new method for estimating scatter in x-ray imaging: the "striped ratio grid," an anti-scatter grid with alternating stripes of high scatter rejection (attained, for example, by high grid ratio) and low scatter rejection. To minimize artifacts, the stripes are oriented parallel to the direction of the ramp filter. Signal discontinuities at the boundaries between stripes provide information on local scatter content, although these discontinuities are contaminated by variation in primary radiation. Methods: We emulated a striped ratio grid by imaging phantoms with two sequential CT scans, one with and one without a conventional grid, and processed them together to mimic a striped ratio grid. Two phantoms were scanned with the emulated striped ratio grid and compared with a conventional anti-scatter grid and a fan-beam acquisition, which served as ground truth. A nonlinear image processing algorithm was developed to mitigate the problem of primary variation. Results: The emulated striped ratio grid reduced scatter more effectively than the conventional grid alone. Contrast is thereby improved in projection imaging; in CT imaging, cupping is markedly reduced. Artifacts introduced by the striped ratio grid appear to be minimal. Conclusion: Striped ratio grids could be a simple and effective evolution of conventional anti-scatter grids. Unlike several other approaches currently under investigation for scatter management, striped ratio grids require minimal computation, little new hardware (at least for systems that already use removable grids), and impose few assumptions on the nature of the object being scanned.
Measurement and calibration of differential Mueller matrix of distributed targets
NASA Technical Reports Server (NTRS)
Sarabandi, Kamal; Oh, Yisok; Ulaby, Fawwaz T.
1992-01-01
A rigorous method for calibrating polarimetric backscatter measurements of distributed targets is presented. By characterizing the radar distortions over the entire mainlobe of the antenna, the differential Mueller matrix is derived from the measured scattering matrices with a high degree of accuracy. It is shown that the radar distortions can be determined by measuring the polarimetric response of a metallic sphere over the main lobe of the antenna. Comparison of results obtained with the new algorithm against those derived from the old calibration method shows that the discrepancy between the two methods is less than 1 dB for the backscattering coefficients. The discrepancy is more pronounced for the phase-difference statistics, indicating that removal of the radar distortions from the cross products of the scattering matrix elements cannot be accomplished with traditional calibration methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chung, W; Jung, J; Kang, Y
Purpose: To quantitatively analyze the influence that image processing for Moire elimination has in digital radiography, by comparing images acquired with an optimized anti-scatter grid alone against images acquired with a misaligned low-frequency grid paired with software processing. Methods: A special phantom that does not create scattered radiation was used to acquire non-grid reference images, which were acquired without any grid. One set of images was acquired with an optimized grid aligned to the detector pixels; another set was acquired with a misaligned low-frequency grid paired with a Moire-elimination processing algorithm. The x-ray technique was based on the Bucky factor derived from the non-grid reference images. For evaluation, the pixel intensities of the images acquired with grids were compared to those of the reference images. Results: Compared to images acquired with the optimized grid, images acquired with the Moire-elimination processing algorithm showed 10% to 50% lower mean contrast values in the ROI. Severe image distortion was found when the object's thickness measured 7 pixels or fewer; in this case, the contrast value measured from images acquired with the Moire-elimination algorithm was under 30% of that taken from the reference image. Conclusion: This study shows the potential diagnostic risk of Moire-compensated images. Images acquired with a misaligned low-frequency grid exhibit Moire noise, and the Moire-compensation processing used to remove this noise actually causes image distortion. As a result, fractures and/or calcifications that span only a few pixels may not be diagnosed properly. In future work, we plan to evaluate images acquired without a grid but relying entirely on image processing, and the potential risks this entails.
MUSIC algorithms for rebar detection
NASA Astrophysics Data System (ADS)
Solimene, Raffaele; Leone, Giovanni; Dell'Aversano, Angela
2013-12-01
The MUSIC (MUltiple SIgnal Classification) algorithm is employed to detect and localize an unknown number of scattering objects that are small compared to the wavelength. The ensemble of objects to be detected contains both strong and weak scatterers, a challenging environment for detection because the strong scatterers tend to mask the weak ones. Consequently, detection of the weaker scatterers is not always guaranteed and can be completely impaired when the noise corrupting the data is relatively high. To overcome this drawback, a new technique is proposed here, based on the idea of applying a two-stage MUSIC algorithm. In the first stage, the strong scatterers are detected; information on their number and location is then employed in a second stage that focuses only on the weak scatterers. The role of an adequate scattering model is emphasized, as it drastically improves detection performance in realistic scenarios.
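The noise-subspace projection at the heart of MUSIC can be sketched as follows (a generic single-stage sketch in Python/NumPy, not the authors' two-stage implementation; the uniform linear array, scatterer angles, and powers are illustrative assumptions):

```python
import numpy as np

def steering_vec(theta, n=16, d=0.5):
    # plane-wave steering vector for a uniform linear array (d in wavelengths)
    return np.exp(2j * np.pi * d * np.arange(n) * np.sin(theta))

def music_spectrum(K, thetas, n_src):
    # MUSIC pseudospectrum: 1 / (norm of probe's projection on noise subspace)
    _, _, Vh = np.linalg.svd(K)
    En = Vh[n_src:].conj().T                   # noise-subspace basis
    p = []
    for th in thetas:
        a = steering_vec(th)
        a = a / np.linalg.norm(a)
        p.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(p)

# one strong (power 1.0) and one weak (power 0.01) scatterer
angles = np.array([-0.3, 0.4])                 # radians
A = np.stack([steering_vec(t) for t in angles], axis=1)
K = A @ np.diag([1.0, 0.01]) @ A.conj().T + 1e-6 * np.eye(16)

thetas = np.linspace(-np.pi / 3, np.pi / 3, 721)
spec = music_spectrum(K, thetas, n_src=2)      # peaks near both scatterers
```

In the two-stage variant described above, the spectrum would first be formed with n_src set to the number of strong scatterers; their estimated positions are then used to strip out the strong contribution before the weak scatterers are sought.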
NASA Astrophysics Data System (ADS)
Chen, Xueli; Zhang, Qitan; Yang, Defu; Liang, Jimin
2014-01-01
To address a specific problem in gastric cancer detection, in which low-scattering regions coexist with both non-scattering and high-scattering regions, a novel hybrid radiosity-SP3 equation based reconstruction algorithm for bioluminescence tomography was proposed in this paper. In the algorithm, the third-order simplified spherical harmonics approximation (SP3) was combined with the radiosity equation to describe bioluminescent light propagation in tissues, which provided acceptable accuracy for turbid media containing both low- and non-scattering regions. The performance of the algorithm was evaluated with digital-mouse-based simulations and an in situ experiment on a gastric cancer-bearing mouse. Primary results demonstrated the feasibility and superiority of the proposed algorithm for turbid media with low- and non-scattering regions.
Autofocus algorithm for synthetic aperture radar imaging with large curvilinear apertures
NASA Astrophysics Data System (ADS)
Bleszynski, E.; Bleszynski, M.; Jaroszewicz, T.
2013-05-01
An approach to autofocusing for large curved synthetic aperture radar (SAR) apertures is presented. Its essential feature is that phase corrections are extracted not directly from SAR images, but rather from reconstructed SAR phase-history data representing windowed patches of the scene, of sizes sufficiently small to allow linearization of the forward- and back-projection formulae. The algorithm processes the data associated with each patch independently and in two steps. The first step employs a phase-gradient-type method in which phase corrections compensating (possibly rapid) trajectory perturbations are estimated from the reconstructed phase history of the dominant scattering point on the patch. The second step uses the phase-gradient-corrected data to extract the absolute phase value, thereby removing phase ambiguities, reducing possible imperfections of the first stage, and providing the distances between the sensor and the scattering point with accuracy comparable to the wavelength. The features of the proposed autofocusing method are illustrated by applying it to intentionally corrupted small-scene 2006 Gotcha data. The examples include the extraction of absolute phases (ranges) for selected prominent point targets, which are then used to focus the scene and determine relative target-target distances.
Focusing light through random scattering media by four-element division algorithm
NASA Astrophysics Data System (ADS)
Fang, Longjie; Zhang, Xicheng; Zuo, Haoyi; Pang, Lin
2018-01-01
The focusing of light through random scattering materials using wavefront shaping is studied in detail. We propose a new approach, the four-element division algorithm, to improve the average convergence rate and signal-to-noise ratio of focusing. Using 4096 independently controlled segments of light, the intensity at the target is enhanced 72-fold over the original intensity at the same position. The four-element division algorithm and existing phase-control algorithms for focusing through scattering media are compared in both numerical simulation and experiment. The four-element division algorithm is found to be particularly advantageous in improving the average convergence rate of focusing.
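The comparison above can be illustrated with the simplest member of this algorithm family, stepwise sequential phase optimization against a simulated scattering medium (a Python sketch under the usual transmission-matrix assumption; the segment count, phase steps, and random medium are illustrative, and this is not the four-element division algorithm itself):

```python
import numpy as np

rng = np.random.default_rng(0)
n_seg = 64                                    # independently controlled segments
# complex transmission coefficients of the random medium to the target spot
t = (rng.standard_normal(n_seg) + 1j * rng.standard_normal(n_seg)) / np.sqrt(2 * n_seg)

def intensity(phases):
    # focal intensity: coherent sum of all segment contributions
    return abs(np.sum(t * np.exp(1j * phases))) ** 2

phases = np.zeros(n_seg)
i0 = intensity(phases)
steps = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
for k in range(n_seg):                        # optimize one segment at a time
    candidates = []
    for s in steps:
        trial = phases.copy()
        trial[k] = s
        candidates.append((intensity(trial), s))
    phases[k] = max(candidates)[1]            # keep the best trial phase

enhancement = intensity(phases) / i0          # focus gain over the initial speckle
```

Algorithms differ mainly in how they partition and update the segments; the four-element division scheme of the abstract is one such partitioning strategy aimed at faster average convergence.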
Paternò, Gianfranco; Cardarelli, Paolo; Contillo, Adriano; Gambaccini, Mauro; Taibi, Angelo
2018-01-01
Advanced applications of digital mammography such as dual-energy and tomosynthesis require multiple exposures and thus deliver higher dose compared to standard mammograms. A straightforward manner to reduce patient dose without affecting image quality would be removal of the anti-scatter grid, provided that the involved reconstruction algorithms are able to take the scatter figure into account [1]. Monte Carlo simulations are very well suited for the calculation of X-ray scatter distribution and can be used to integrate such information within the reconstruction software. Geant4 is an open source C++ particle tracking code widely used in several physical fields, including medical physics [2,3]. However, the coherent scattering cross section used by the standard Geant4 code does not take into account the influence of molecular interference. According to the independent atomic scattering approximation (the so-called free-atom model), coherent radiation is indistinguishable from primary radiation because its angular distribution is peaked in the forward direction. Since interference effects occur between x-rays scattered by neighbouring atoms in matter, it was shown experimentally that the scatter distribution is affected by the molecular structure of the target, even in amorphous materials. The most important consequence is that the coherent scatter distribution is not peaked in the forward direction, and the position of the maximum is strongly material-dependent [4]. In this contribution, we present the implementation of a method to take into account inter-atomic interference in small-angle coherent scattering in Geant4, including a dedicated data set of suitable molecular form factor values for several materials of clinical interest. Furthermore, we present scatter images of simple geometric phantoms in which the Rayleigh contribution is rigorously evaluated. Copyright © 2017.
NASA Astrophysics Data System (ADS)
Williams, C. R.
2012-12-01
The NASA Global Precipitation Mission (GPM) raindrop size distribution (DSD) Working Group is composed of NASA PMM Science Team Members and is charged to "investigate the correlations between DSD parameters using Ground Validation (GV) data sets that support, or guide, the assumptions used in satellite retrieval algorithms." Correlations between DSD parameters can be used to constrain the unknowns and reduce the degrees-of-freedom in under-constrained satellite algorithms. Over the past two years, the GPM DSD Working Group has analyzed GV data and has found correlations between the mass-weighted mean raindrop diameter (Dm) and the mass distribution standard deviation (Sm) that follows a power-law relationship. This Dm-Sm power-law relationship appears to be robust and has been observed in surface disdrometer and vertically pointing radar observations. One benefit of a Dm-Sm power-law relationship is that a three parameter DSD can be modeled with just two parameters: Dm and Nw that determines the DSD amplitude. In order to incorporate observed DSD correlations into satellite algorithms, the GPM DSD Working Group is developing scattering and integral tables that can be used by satellite algorithms. Scattering tables describe the interaction of electromagnetic waves on individual particles to generate cross sections of backscattering, extinction, and scattering. Scattering tables are independent of the distribution of particles. Integral tables combine scattering table outputs with DSD parameters and DSD correlations to generate integrated normalized reflectivity, attenuation, scattering, emission, and asymmetry coefficients. Integral tables contain both frequency dependent scattering properties and cloud microphysics. The GPM DSD Working Group has developed scattering tables for raindrops at both Dual Precipitation Radar (DPR) frequencies and at all GMI radiometer frequencies less than 100 GHz. 
Scattering tables include Mie and T-matrix scattering with H- and V-polarization at the instrument view angles of nadir to 17 degrees (for DPR) and 48 & 53 degrees off nadir (for GMI). The GPM DSD Working Group is generating integral tables with GV observed DSD correlations and is performing sensitivity and verification tests. One advantage of keeping scattering tables separate from integral tables is that research can progress on the electromagnetic scattering of particles independent of cloud microphysics research. Another advantage of keeping the tables separate is that multiple scattering tables will be needed for frozen precipitation. Scattering tables are being developed for individual frozen particles based on habit, density and operating frequency. And a third advantage of keeping scattering and integral tables separate is that this framework provides an opportunity to communicate GV findings about DSD correlations into integral tables, and thus, into satellite algorithms.
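The benefit of the Dm-Sm constraint can be sketched numerically: with a normalized gamma DSD, fixing sigma_m = a*Dm**b determines the shape parameter mu, leaving only Nw and Dm free, and integral-table moments follow by quadrature (Python sketch; the power-law coefficients a and b below are illustrative placeholders, not the Working Group's fitted values):

```python
import numpy as np
from math import gamma as gamma_fn

a, b = 0.3, 1.1                      # hypothetical Dm-Sm power-law coefficients

def gamma_dsd(D, Nw, Dm):
    # normalized gamma DSD; the Dm-Sm power law fixes the shape mu via
    # sigma_m = Dm / sqrt(4 + mu)  =>  mu = (Dm / sigma_m)**2 - 4
    sigma_m = a * Dm ** b
    mu = (Dm / sigma_m) ** 2 - 4.0
    f = (6.0 / 4.0 ** 4) * (4.0 + mu) ** (mu + 4) / gamma_fn(mu + 4)
    return Nw * f * (D / Dm) ** mu * np.exp(-(4.0 + mu) * D / Dm)

# integral-table style moment: Rayleigh reflectivity from the modeled DSD
D = np.linspace(0.01, 8.0, 4000)     # drop diameter, mm
N = gamma_dsd(D, Nw=8000.0, Dm=1.5)  # Nw in mm^-1 m^-3, Dm in mm
Z = np.sum(N * D ** 6) * (D[1] - D[0])   # reflectivity, mm^6 m^-3
```

A full integral table would replace the D**6 Rayleigh weight with frequency-dependent cross sections from the scattering tables, which is exactly the separation of concerns the abstract describes.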
Teuho, Jarmo; Saunavaara, Virva; Tolvanen, Tuula; Tuokkola, Terhi; Karlsson, Antti; Tuisku, Jouni; Teräs, Mika
2017-10-01
In PET, corrections for photon scatter and attenuation are essential for visual and quantitative consistency. MR attenuation correction (MRAC) is generally conducted by image segmentation and assignment of discrete attenuation coefficients, which offer limited accuracy compared with CT attenuation correction. Potential inaccuracies in MRAC may affect scatter correction, because the attenuation image (μ-map) is used in single scatter simulation (SSS) to calculate the scatter estimate. We assessed the impact of MRAC to scatter correction using 2 scatter-correction techniques and 3 μ-maps for MRAC. Methods: The tail-fitted SSS (TF-SSS) and a Monte Carlo-based single scatter simulation (MC-SSS) algorithm implementations on the Philips Ingenuity TF PET/MR were used with 1 CT-based and 2 MR-based μ-maps. Data from 7 subjects were used in the clinical evaluation, and a phantom study using an anatomic brain phantom was conducted. Scatter-correction sinograms were evaluated for each scatter correction method and μ-map. Absolute image quantification was investigated with the phantom data. Quantitative assessment of PET images was performed by volume-of-interest and ratio image analysis. Results: MRAC did not result in large differences in scatter algorithm performance, especially with TF-SSS. Scatter sinograms and scatter fractions did not reveal large differences regardless of the μ-map used. TF-SSS showed slightly higher absolute quantification. The differences in volume-of-interest analysis between TF-SSS and MC-SSS were 3% at maximum in the phantom and 4% in the patient study. Both algorithms showed excellent correlation with each other with no visual differences between PET images. MC-SSS showed a slight dependency on the μ-map used, with a difference of 2% on average and 4% at maximum when a μ-map without bone was used. 
Conclusion: The effect of different MR-based μ-maps on the performance of scatter correction was minimal in non-time-of-flight ¹⁸F-FDG PET/MR brain imaging. The SSS algorithm was not affected significantly by MRAC. The performance of the MC-SSS algorithm is comparable but not superior to TF-SSS, warranting further investigations of algorithm optimization and performance with different radiotracers and time-of-flight imaging. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
NASA Technical Reports Server (NTRS)
Martin, D. L.; Perry, M. J.
1994-01-01
Water-leaving radiances and phytoplankton pigment concentrations are calculated from coastal zone color scanner (CZCS) radiance measurements by removing atmospheric Rayleigh and aerosol radiances from the total radiance signal measured at the satellite. The single greatest source of error in CZCS atmospheric correction algorithms is the assumption that these Rayleigh and aerosol radiances are separable. Multiple-scattering interactions between Rayleigh and aerosol components cause systematic errors in calculated aerosol radiances, and the magnitude of these errors depends on aerosol type and optical depth and on satellite viewing geometry. A technique was developed which extends the results of previous radiative transfer modeling by Gordon and Castano to predict the magnitude of these systematic errors for simulated CZCS orbital passes in which the ocean is viewed through a modeled, physically realistic atmosphere. The simulated image mathematically duplicates the exact satellite, Sun, and pixel locations of an actual CZCS image. Errors in the aerosol radiance at 443 nm are calculated for a range of aerosol optical depths. When pixels in the simulated image exceed an error threshold, the corresponding pixels in the actual CZCS image are flagged and excluded from further analysis or from use in image compositing or compilation of pigment concentration databases. Studies based on time series analyses or compositing of CZCS imagery which do not address Rayleigh-aerosol multiple scattering should be interpreted cautiously, since the fundamental assumption used in their atmospheric correction algorithm is flawed.
Optimization-based scatter estimation using primary modulation for computed tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yi; Ma, Jingchen; Zhao, Jun, E-mail: junzhao
Purpose: Scatter reduces the image quality in computed tomography (CT), but scatter correction remains a challenge. A previously proposed primary modulation method simultaneously obtains the primary and scatter in a single scan. However, separating the scatter and primary in primary modulation is challenging because it is an underdetermined problem. In this study, an optimization-based scatter estimation (OSE) algorithm is proposed to estimate and correct scatter. Methods: In the concept of primary modulation, the primary is modulated, but the scatter remains smooth by inserting a modulator between the x-ray source and the object. In the proposed algorithm, an objective function is designed for separating the scatter and primary. Prior knowledge is incorporated in the optimization-based framework to improve the accuracy of the estimation: (1) the primary is always positive; (2) the primary is locally smooth and the scatter is smooth; (3) the location of penumbra can be determined; and (4) the scatter-contaminated data provide knowledge about which part is smooth. Results: The simulation study shows that the edge-preserving weighting in OSE improves the estimation accuracy near the object boundary. Simulation study also demonstrates that OSE outperforms the two existing primary modulation algorithms for most regions of interest in terms of the CT number accuracy and noise. The proposed method was tested on a clinical cone beam CT, demonstrating that OSE corrects the scatter even when the modulator is not accurately registered. Conclusions: The proposed OSE algorithm improves the robustness and accuracy in scatter estimation and correction. This method is promising for scatter correction of various kinds of x-ray imaging modalities, such as x-ray radiography, cone beam CT, and the fourth-generation CT.
Coastal Zone Color Scanner atmospheric correction algorithm - Multiple scattering effects
NASA Technical Reports Server (NTRS)
Gordon, Howard R.; Castano, Diego J.
1987-01-01
Errors due to multiple scattering which are expected to be encountered in application of the current Coastal Zone Color Scanner (CZCS) atmospheric correction algorithm are analyzed. The analysis is based on radiative transfer computations in model atmospheres, in which the aerosols and molecules are distributed vertically in an exponential manner, with most of the aerosol scattering located below the molecular scattering. A unique feature of the analysis is that it is carried out in scan coordinates rather than typical earth-sun coordinates, making it possible to determine the errors along typical CZCS scan lines. Information provided by the analysis makes it possible to judge the efficacy of the current algorithm with the current sensor and to estimate the impact of the algorithm-induced errors on a variety of applications.
The Born approximation, multiple scattering, and the butterfly algorithm
NASA Astrophysics Data System (ADS)
Martinez, Alejandro F.
Radar works by transmitting a focused beam of electromagnetic waves and measuring how long the echo takes to return. To see a large region, the beam is pointed in different directions. The focus of the beam depends on the size of the antenna (its aperture). Synthetic aperture radar (SAR) synthesizes a large aperture by moving the antenna through some region of space. A fundamental assumption in SAR is that waves bounce only once, and several imaging algorithms have been designed using that assumption. The scattering process can be described by iterating a highly oscillatory integral. Recently, a method for efficiently evaluating integrals of this type has been developed. We will give a detailed implementation of this algorithm and apply it to study multiple scattering effects in SAR, using target estimates from single-scattering algorithms.
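The iterated-integral view can be sketched in discrete form: the Lippmann-Schwinger equation u = u0 + G V u expands into a Born series whose first term beyond u0 is the single-scattering (Born) field (a toy Python discretization with small random matrices standing in for the Green's operator and reflectivity; it illustrates the series structure only, not the butterfly algorithm):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40
u0 = rng.standard_normal(n)              # incident field samples
G = 0.01 * rng.standard_normal((n, n))   # toy discretized Green's operator
V = np.diag(rng.standard_normal(n))      # scattering potential (reflectivity)

# Born series: u = u0 + GVu0 + (GV)^2 u0 + ...  (each term adds one bounce)
orders = [u0]
for _ in range(4):
    orders.append(G @ V @ orders[-1])

u_born = orders[0] + orders[1]           # single-scattering approximation
u_multi = sum(orders)                    # retains higher-order bounces
u_exact = np.linalg.solve(np.eye(n) - G @ V, u0)
```

When the contrast is weak the series converges and truncating after the first bounce (the Born approximation) is accurate; stronger scattering is precisely the regime where the higher-order terms studied here matter.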
Image defog algorithm based on open close filter and gradient domain recursive bilateral filter
NASA Astrophysics Data System (ADS)
Liu, Daqian; Liu, Wanjun; Zhao, Qingguo; Fei, Bowen
2017-11-01
To address the fuzzy details, color distortion, and low brightness of images produced by the dark channel prior defog algorithm, an image defog algorithm based on an open-close filter and a gradient domain recursive bilateral filter, referred to as OCRBF, was put forward. OCRBF first uses a weighted quadtree to obtain a more accurate global atmospheric value; it then applies multiple-structure-element morphological open and close filters to the minimum channel map to obtain a rough scattering map via the dark channel prior, corrects the transmittance map using a variogram, smooths it with the gradient domain recursive bilateral filter, recovers the image through the image degradation model, and finally applies contrast adjustment to obtain a bright, clear, fog-free image. Extensive experimental results show that the proposed method removes fog well and recovers the color and definition of foggy images containing close-range objects, wide perspectives, and bright areas. Compared with other image defog algorithms, it obtains clearer, more natural fog-free images with more visible detail; moreover, the time complexity of the algorithm is linear in the number of image pixels.
Deployment Optimization for Embedded Flight Avionics Systems
2011-11-01
the iterations, the best solution(s) that evolved out from the group is output as the result. Although metaheuristic algorithms are powerful, they...that other design constraints are met—ScatterD uses metaheuristic algorithms to seed the bin-packing algorithm. In particular, metaheuristic ... metaheuristic algorithms to search the design space—and then using bin-packing to allocate software tasks to processors—ScatterD can generate
Deconvolving instrumental and intrinsic broadening in core-shell x-ray spectroscopies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fister, T. T.; Seidler, G. T.; Rehr, J. J.
2007-05-01
Intrinsic and experimental mechanisms frequently lead to broadening of spectral features in core-shell spectroscopies. For example, intrinsic broadening occurs in x-ray absorption spectroscopy (XAS) measurements of heavy elements where the core-hole lifetime is very short. On the other hand, nonresonant x-ray Raman scattering (XRS) and other energy loss measurements are more limited by instrumental resolution. Here, we demonstrate that the Richardson-Lucy (RL) iterative algorithm provides a robust method for deconvolving instrumental and intrinsic resolutions from typical XAS and XRS data. For the K-edge XAS of Ag, we find nearly complete removal of ≈9.3 eV full width at half maximum broadening from the combined effects of the short core-hole lifetime and instrumental resolution. We are also able to remove nearly all instrumental broadening in an XRS measurement of diamond, with the resulting improved spectrum comparing favorably with prior soft x-ray XAS measurements. We present a practical methodology for implementing the RL algorithm in these problems, emphasizing the importance of testing for stability of the deconvolution process against noise amplification, perturbations in the initial spectra, and uncertainties in the core-hole lifetime.
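The RL iteration itself is compact enough to sketch (a generic 1-D Python implementation for a known, shift-invariant broadening kernel; the Gaussian kernel and synthetic doublet are illustrative, not the Ag or diamond data):

```python
import numpy as np

def richardson_lucy(d, kernel, n_iter=300):
    # RL update: x <- x * K^T(d / Kx); preserves positivity of the estimate
    x = np.full_like(d, d.mean())
    for _ in range(n_iter):
        blur = np.convolve(x, kernel, mode="same")
        x *= np.convolve(d / np.maximum(blur, 1e-12), kernel[::-1], mode="same")
    return x

# synthetic spectrum: two narrow lines broadened by a wider Gaussian kernel
grid = np.arange(200.0)
true = (np.exp(-0.5 * ((grid - 80) / 2.0) ** 2)
        + 0.7 * np.exp(-0.5 * ((grid - 120) / 2.0) ** 2))
kern_x = np.arange(-15, 16, dtype=float)
kernel = np.exp(-0.5 * (kern_x / 6.0) ** 2)
kernel /= kernel.sum()                      # normalized instrumental response
blurred = np.convolve(true, kernel, mode="same")
restored = richardson_lucy(blurred, kernel)  # peaks sharpen back toward `true`
```

In practice the iteration count is the stability knob the abstract emphasizes: with noisy data, too many iterations amplify noise, so the stopping point must be tested rather than fixed.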
NASA Astrophysics Data System (ADS)
Li, Lei; Yu, Long; Yang, Kecheng; Li, Wei; Li, Kai; Xia, Min
2018-04-01
The multiangle dynamic light scattering (MDLS) technique can estimate particle size distributions (PSDs) better than single-angle dynamic light scattering. However, determining the inversion range, angular weighting coefficients, and scattering angle combination is difficult yet fundamental to the reconstruction of both unimodal and multimodal distributions. In this paper, we propose a self-adapting regularization method called the wavelet iterative recursion nonnegative Tikhonov-Phillips-Twomey (WIRNNT-PT) algorithm. This algorithm combines a wavelet multiscale strategy with an appropriate inversion method and can self-adaptively resolve several key issues, including the choice of weighting coefficients, the inversion range, and the better of two regularization algorithms for estimating the PSD from MDLS measurements. In addition, the angular dependence of MDLS for estimating the PSDs of polymeric latexes is thoroughly analyzed: the dependence of the results on the number and range of measurement angles is examined in depth to identify the optimal scattering angle combination. Numerical simulations and experimental results for unimodal and multimodal distributions are presented to demonstrate both the validity of the WIRNNT-PT algorithm and the angular dependence of MDLS, and show that the proposed algorithm with a six-angle analysis in the 30-130° range can be satisfactorily applied to retrieve PSDs from MDLS measurements.
A proposed study of multiple scattering through clouds up to 1 THz
NASA Technical Reports Server (NTRS)
Gerace, G. C.; Smith, E. K.
1992-01-01
A rigorous computation of the electromagnetic field scattered from an atmospheric liquid water cloud is proposed. The recent development of a fast recursive algorithm (Chew algorithm) for computing the fields scattered from numerous scatterers now makes a rigorous computation feasible. A method is presented for adapting this algorithm to a general case where there are an extremely large number of scatterers. It is also proposed to extend a new binary PAM channel coding technique (El-Khamy coding) to multiple levels with non-square pulse shapes. The Chew algorithm can be used to compute the transfer function of a cloud channel. Then the transfer function can be used to design an optimum El-Khamy code. In principle, these concepts can be applied directly to the realistic case of a time-varying cloud (adaptive channel coding and adaptive equalization). A brief review is included of some preliminary work on cloud dispersive effects on digital communication signals and on cloud liquid water spectra and correlations.
Membership-degree preserving discriminant analysis with applications to face recognition.
Yang, Zhangjing; Liu, Chuancai; Huang, Pu; Qian, Jianjun
2013-01-01
In pattern recognition, feature extraction techniques are widely employed to reduce the dimensionality of high-dimensional data. In this paper, we propose a novel feature extraction algorithm called membership-degree preserving discriminant analysis (MPDA), based on the Fisher criterion and fuzzy set theory, for face recognition. In the proposed algorithm, the membership degree of each sample to particular classes is first calculated by the fuzzy k-nearest neighbor (FKNN) algorithm to characterize the similarity between each sample and the class centers; the membership degree is then incorporated into the definitions of the between-class scatter and the within-class scatter. Features are extracted by maximizing the ratio of the between-class scatter to the within-class scatter. Experimental results on the ORL, Yale, and FERET face databases demonstrate the effectiveness of the proposed algorithm.
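The first step, FKNN membership assignment, can be sketched as follows (a generic Python implementation in the style of Keller's fuzzy k-NN, not the authors' exact formulation; the toy two-class data are illustrative):

```python
import numpy as np

def fknn_membership(X, y, k=3, m=2.0):
    # membership of each sample to each class from its k nearest neighbours,
    # weighted by inverse distance (the fuzzifier m controls the softness)
    classes = np.unique(y)
    U = np.zeros((len(X), len(classes)))
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        nb = np.argsort(d)[1:k + 1]               # k nearest, excluding self
        w = 1.0 / np.maximum(d[nb], 1e-12) ** (2.0 / (m - 1.0))
        for j, c in enumerate(classes):
            U[i, j] = w[y[nb] == c].sum() / w.sum()
    return U

# two well-separated toy classes in the plane
X = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, 0.3],
              [5.0, 5.0], [5.2, 4.9], [4.9, 5.1]])
y = np.array([0, 0, 0, 1, 1, 1])
U = fknn_membership(X, y)   # rows sum to 1; own-class membership dominates
```

In MPDA these U values then weight each sample's contribution to the between-class and within-class scatter matrices, so atypical samples near class boundaries influence the projection less than confident ones.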
Scattering properties of electromagnetic waves from metal object in the lower terahertz region
NASA Astrophysics Data System (ADS)
Chen, Gang; Dang, H. X.; Hu, T. Y.; Su, Xiang; Lv, R. C.; Li, Hao; Tan, X. M.; Cui, T. J.
2018-01-01
An efficient hybrid algorithm is proposed to analyze the electromagnetic scattering properties of metal objects at lower terahertz (THz) frequencies. A metal object can be viewed as a perfectly electrically conducting object with a slightly rough surface in the lower THz region; hence the THz field scattered from the object can be divided into coherent and incoherent parts. The physical optics and truncated-wedge incremental-length diffraction coefficient methods are combined to compute the coherent part, while the small perturbation method is used for the incoherent part. Using the Monte Carlo method, the radar cross section of the rough metal surface is computed by the multilevel fast multipole algorithm and by the proposed hybrid algorithm. The numerical results show that the proposed algorithm simulates the scattering properties rapidly and with good accuracy in the lower THz region.
Minimal-scan filtered backpropagation algorithms for diffraction tomography.
Pan, X; Anastasio, M A
1999-12-01
The filtered backpropagation (FBPP) algorithm, originally developed by Devaney [Ultrason. Imaging 4, 336 (1982)], has been widely used for reconstructing images in diffraction tomography. It is generally known that the FBPP algorithm requires scattered data from a full angular range of 2π for exact reconstruction of a generally complex-valued object function. However, we reveal that one needs scattered data only over the angular range 0 ≤ φ ≤ 3π/2 for exact reconstruction of a generally complex-valued object function. Using this insight, we develop and analyze a family of minimal-scan filtered backpropagation (MS-FBPP) algorithms, which, unlike the FBPP algorithm, use scattered data acquired from view angles over the range 0 ≤ φ ≤ 3π/2. We show analytically that these MS-FBPP algorithms are mathematically identical to the FBPP algorithm. We also perform computer simulation studies for validation, demonstration, and comparison of these MS-FBPP algorithms. The numerical results in these simulation studies corroborate our theoretical assertions.
A real-time photo-realistic rendering algorithm of ocean color based on bio-optical model
NASA Astrophysics Data System (ADS)
Ma, Chunyong; Xu, Shu; Wang, Hongsong; Tian, Fenglin; Chen, Ge
2016-12-01
A real-time photo-realistic rendering algorithm of ocean color is introduced in the paper, which considers the impact of ocean bio-optical model. The ocean bio-optical model mainly involves the phytoplankton, colored dissolved organic material (CDOM), inorganic suspended particle, etc., which have different contributions to absorption and scattering of light. We decompose the emergent light of the ocean surface into the reflected light from the sun and the sky, and the subsurface scattering light. We establish an ocean surface transmission model based on ocean bidirectional reflectance distribution function (BRDF) and the Fresnel law, and this model's outputs would be the incident light parameters of subsurface scattering. Using ocean subsurface scattering algorithm combined with bio-optical model, we compute the scattering light emergent radiation in different directions. Then, we blend the reflection of sunlight and sky light to implement the real-time ocean color rendering in graphics processing unit (GPU). Finally, we use two kinds of radiance reflectance calculated by Hydrolight radiative transfer model and our algorithm to validate the physical reality of our method, and the results show that our algorithm can achieve real-time highly realistic ocean color scenes.
Linearized inversion of multiple scattering seismic energy
NASA Astrophysics Data System (ADS)
Aldawood, Ali; Hoteit, Ibrahim; Zuberi, Mohammad
2014-05-01
Internal multiples deteriorate the quality of the migrated image obtained conventionally by imaging single-scattering energy, so imaging seismic data under the single-scattering assumption does not locate multiple-bounce events in their actual subsurface positions. However, imaging internal multiples properly has the potential to enhance the migrated image because they illuminate zones in the subsurface that are poorly illuminated by single-scattering energy, such as nearly vertical faults. Standard migration of these multiples provides subsurface reflectivity distributions with low spatial resolution and migration artifacts due to the limited recording aperture, coarse source and receiver sampling, and the band-limited nature of the source wavelet. The resultant image obtained by the adjoint operator is a smoothed depiction of the true subsurface reflectivity model, heavily masked by migration artifacts and the source-wavelet fingerprint that needs to be properly deconvolved. Hence, we proposed a linearized least-squares inversion scheme to mitigate the migration artifacts, enhance the spatial resolution, and provide more accurate amplitude information when imaging internal multiples. The proposed algorithm uses the least-squares image based on the single-scattering assumption as a constraint to invert for the part of the image that is illuminated by internal scattering energy. We then posed the problem of imaging double-scattering energy as a least-squares minimization problem that requires solving a normal equation of the form: G^T G v = G^T d, (1) where G is a linearized forward modeling operator that predicts double-scattered seismic data and G^T is the linearized adjoint operator that images double-scattered seismic data. Gradient-based optimization algorithms solve this linear system; hence, we used a quasi-Newton optimization technique to find the least-squares minimizer.
In this approach, an estimate of the Hessian matrix containing curvature information is modified at every iteration by a low-rank update based on gradient changes at each step. At each iteration, the data residual is imaged using G^T to determine the model update. Application of the linearized inversion to synthetic data to image a vertical fault plane demonstrates the effectiveness of this methodology in properly delineating the vertical fault plane and giving better amplitude information than the standard migrated image obtained with the adjoint operator that takes internal multiples into account. Thus, least-squares imaging of multiple scattering enhances the spatial resolution of the events illuminated by internal scattering energy. It also deconvolves the source signature and helps remove the fingerprint of the acquisition geometry. The final image is obtained by superposing the least-squares solution based on the single-scattering assumption and the least-squares solution based on the double-scattering assumption.
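The normal equations in Eq. (1) are typically solved matrix-free, touching G only through forward modeling and its adjoint. A conjugate-gradient least-squares (CGLS) sketch in Python is below; the paper uses a quasi-Newton scheme, but CGLS is shown instead because it has the same apply-G / apply-G^T structure and is compact, and the small random operator standing in for the double-scattering modeling operator is purely illustrative:

```python
import numpy as np

def cgls(apply_G, apply_Gt, d, n, n_iter=60):
    # conjugate gradients on G^T G v = G^T d, using only forward modeling
    # (apply_G) and the migration/adjoint operator (apply_Gt)
    v = np.zeros(n)
    r = apply_Gt(d)                    # initial gradient: migrated data
    p, rs = r.copy(), r @ r
    for _ in range(n_iter):
        if rs < 1e-20:                 # residual gradient vanished: converged
            break
        Gp = apply_G(p)
        alpha = rs / (Gp @ Gp)
        v += alpha * p
        r -= alpha * apply_Gt(Gp)      # image the data residual each iteration
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return v

rng = np.random.default_rng(1)
G = rng.standard_normal((80, 40))      # toy stand-in for the modeling operator
v_true = rng.standard_normal(40)
v_est = cgls(lambda x: G @ x, lambda x: G.T @ x, G @ v_true, 40)
```

The step "image the data residual with G^T to get the model update" is exactly the operation the paragraph above describes; the quasi-Newton variant additionally accumulates curvature information from successive gradients.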
Advancing X-ray scattering metrology using inverse genetic algorithms.
Hannon, Adam F; Sunday, Daniel F; Windover, Donald; Kline, R Joseph
2016-01-01
We compare the speed and effectiveness of two genetic optimization algorithms to the results of statistical sampling via a Markov chain Monte Carlo algorithm to find which is the most robust method for determining real space structure in periodic gratings measured using critical dimension small angle X-ray scattering. Both a covariance matrix adaptation evolutionary strategy and differential evolution algorithm are implemented and compared using various objective functions. The algorithms and objective functions are used to minimize differences between diffraction simulations and measured diffraction data. These simulations are parameterized with an electron density model known to roughly correspond to the real space structure of our nanogratings. The study shows that for X-ray scattering data, the covariance matrix adaptation coupled with a mean-absolute error log objective function is the most efficient combination of algorithm and goodness of fit criterion for finding structures with little foreknowledge about the underlying fine scale structure features of the nanograting.
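The pairing of a population-based optimizer with a mean-absolute-error-of-log objective can be illustrated with a minimal differential-evolution loop. The sinc-squared "grating" model with a single width parameter below is a toy stand-in for the paper's electron-density diffraction model, and the population size, mutation factor, and bounds are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
q = np.linspace(0.1, 5.0, 80)            # toy scattering-vector grid

def model(w):
    # Illustrative grating form factor with unknown line width w.
    return np.sinc(q * w / np.pi) ** 2 + 1e-6   # floor avoids log(0)

data = model(0.8)                        # synthetic "measured" diffraction

def mae_log(w):
    # Mean absolute error of log-intensities, the goodness-of-fit criterion.
    return np.mean(np.abs(np.log(model(w)) - np.log(data)))

# Minimal differential evolution: mutate with a scaled difference vector,
# accept the trial only if it improves the member it replaces.
pop = rng.uniform(0.1, 2.0, size=20)
cost = np.array([mae_log(w) for w in pop])
for _ in range(200):
    for i in range(pop.size):
        a, b, c = pop[rng.choice(pop.size, 3, replace=False)]
        trial = float(np.clip(a + 0.7 * (b - c), 0.1, 2.0))
        tc = mae_log(trial)
        if tc < cost[i]:
            pop[i], cost[i] = trial, tc

best = float(pop[np.argmin(cost)])
```

A CMA-ES variant would replace the mutation step with sampling from an adapted covariance, which is the combination the study found most efficient.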
Performances of the New Real Time Tsunami Detection Algorithm applied to tide gauges data
NASA Astrophysics Data System (ADS)
Chierici, F.; Embriaco, D.; Morucci, S.
2017-12-01
Real-time tsunami detection algorithms play a key role in any Tsunami Early Warning System. We have developed a new algorithm for tsunami detection (TDA) based on real-time tide removal and real-time band-pass filtering of seabed pressure time series acquired by Bottom Pressure Recorders. The TDA algorithm greatly increases the tsunami detection probability, shortens the detection delay and enhances detection reliability with respect to the most widely used tsunami detection algorithm, while keeping the computational cost contained. The algorithm is designed to be used also in autonomous early warning systems, with a set of input parameters and procedures that can be reconfigured in real time. We have also developed a methodology based on Monte Carlo simulations to test tsunami detection algorithms. The algorithm performance is estimated by defining and evaluating statistical parameters, namely the detection probability and the detection delay, which are functions of the tsunami amplitude and wavelength, and the rate of false alarms. In this work we present the performance of the TDA algorithm applied to tide gauge data. We have adapted the new tsunami detection algorithm and the Monte Carlo test methodology to tide gauges. Sea level data acquired by coastal tide gauges at different locations and under different environmental conditions have been used in order to consider real working scenarios in the test. We also present an application of the algorithm to the tsunami event generated by the Tohoku earthquake of March 11, 2011, using data recorded by several tide gauges scattered across the Pacific area.
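The tide-removal plus band-pass plus threshold chain can be sketched on a synthetic record. The moving-average filters, window lengths, and 5-sigma threshold below are illustrative choices, not the operational TDA settings, and the tide and tsunami waveforms are synthetic.

```python
import numpy as np

dt = 60.0                                            # 1-minute sampling [s]
t = np.arange(0, 48 * 3600, dt)
rng = np.random.default_rng(2)

tide = 1.5 * np.sin(2 * np.pi * t / (12.42 * 3600))  # M2-like tide [m]
tsunami = 0.3 * np.exp(-((t - 24 * 3600) / 900.0) ** 2) \
    * np.sin(2 * np.pi * t / 1200.0)                 # 20-min wave at hour 24
level = tide + tsunami + 0.01 * rng.standard_normal(t.size)

def moving_mean(x, n):
    return np.convolve(x, np.ones(n) / n, mode="same")

detided = level - moving_mean(level, 181)            # strip >3 h components
bandpassed = detided - moving_mean(detided, 61)      # crude band-pass

quiet = bandpassed[300:600]                          # background, pre-event
threshold = 5 * np.std(quiet)
interior = np.arange(200, t.size - 200)              # skip filter edge effects
alarm = interior[np.abs(bandpassed[interior]) > threshold]
```

The first threshold crossing lands shortly before the synthetic wave's peak, illustrating how detection delay would be scored in the Monte Carlo tests.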
[A spatial adaptive algorithm for endmember extraction on multispectral remote sensing image].
Zhu, Chang-Ming; Luo, Jian-Cheng; Shen, Zhan-Feng; Li, Jun-Li; Hu, Xiao-Dong
2011-10-01
Because the convex cone analysis (CCA) method can extract only a limited number of endmembers from multispectral imagery, this paper proposes a new endmember extraction method based on spatially adaptive spectral feature analysis of multispectral remote sensing images, using spatial clustering and image slicing. First, in order to remove spatial and spectral redundancy, the principal component analysis (PCA) algorithm is used to lower the dimensionality of the multispectral data. Second, the iterative self-organizing data analysis technique algorithm (ISODATA) is used to cluster the image according to the spectral similarity of the pixels. Then, through clustering post-processing and the merging of small clusters, the whole image is divided into several blocks (tiles). Lastly, according to the complexity of each block's landscape and analysis of the scatter diagrams, the number of endmembers is determined and the hourglass algorithm is used to extract them. An endmember extraction experiment on TM multispectral imagery showed that the method can effectively extract endmember spectra from multispectral imagery. Moreover, the method resolves the limitation on the number of endmembers and improves the accuracy of endmember extraction, providing a new way to extract endmembers from multispectral images.
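The first two preprocessing stages can be sketched with a tiny synthetic scene: PCA by SVD to strip spectral redundancy, then a clustering pass on the reduced pixels. Plain 2-means stands in for ISODATA here (ISODATA adds split/merge rules on top), and the two-class six-band "image" is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(3)
# 400 synthetic pixels from two spectral classes, six bands each.
spectra = np.vstack([
    rng.normal([0.1, 0.2, 0.3, 0.4, 0.5, 0.6], 0.02, size=(200, 6)),  # class A
    rng.normal([0.6, 0.5, 0.4, 0.3, 0.2, 0.1], 0.02, size=(200, 6)),  # class B
])

# PCA via SVD of the mean-centred data matrix; keep two components.
X = spectra - spectra.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
reduced = X @ Vt[:2].T

# Plain 2-means on the reduced pixels (a stand-in for ISODATA).
centers = reduced[[0, -1]].copy()
for _ in range(20):
    labels = np.argmin(((reduced[:, None, :] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([reduced[labels == k].mean(axis=0) for k in range(2)])
```

With well-separated classes the clusters recover the two spectral groups exactly, which is the partition the later per-block endmember analysis would operate on.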
SU-D-206-04: Iterative CBCT Scatter Shading Correction Without Prior Information
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bai, Y; Wu, P; Mao, T
2016-06-15
Purpose: To estimate and remove the scatter contamination in the acquired projection of cone-beam CT (CBCT), to suppress the shading artifacts and improve the image quality without prior information. Methods: The uncorrected CBCT images containing shading artifacts are reconstructed by applying the standard FDK algorithm on CBCT raw projections. The uncorrected image is then segmented to generate an initial template image. To estimate the scatter signal, the differences are calculated by subtracting the simulated projections of the template image from the raw projections. Since scatter signals are dominantly continuous and low-frequency in the projection domain, they are estimated by low-pass filtering the difference signals and subtracted from the raw CBCT projections to achieve the scatter correction. Finally, the corrected CBCT image is reconstructed from the corrected projection data. Since an accurate template image is not readily segmented from the uncorrected CBCT image, the proposed scheme is iterated until the produced template is no longer altered. Results: The proposed scheme is evaluated on the Catphan©600 phantom data and CBCT images acquired from a pelvis patient. The results show that shading artifacts have been effectively suppressed by the proposed method. Using multi-detector CT (MDCT) images as reference, quantitative analysis is performed to measure the quality of the corrected images. Compared to images without correction, the proposed method reduces the overall CT number error from over 200 HU to less than 50 HU and increases the spatial uniformity. Conclusion: An iterative strategy that does not rely on prior information is proposed in this work to remove the shading artifacts due to scatter contamination in the projection domain. The method is evaluated in phantom and patient studies and the results show that the image quality is remarkably improved.
The proposed method is efficient and practical for addressing the poor image quality of CBCT images. This work is supported by the Zhejiang Provincial Natural Science Foundation of China (Grant No. LR16F010001) and the National High-tech R&D Program for Young Scientists by the Ministry of Science and Technology of China (Grant No. 2015AA020917).
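The iterate-segment-filter-subtract loop can be illustrated in one dimension: a sharp "primary" signal plus a smooth scatter background plays the role of a raw projection, thresholding plays the role of segmentation, and a moving-average low-pass extracts the scatter estimate. All signal shapes and the two-level threshold are illustrative, not the paper's reconstruction pipeline.

```python
import numpy as np

x = np.linspace(-1, 1, 400)
primary = np.where(np.abs(x) < 0.5, 1.0, 0.2)   # "object" over background
scatter = 0.3 * np.exp(-2.0 * x ** 2)           # smooth scatter contamination
raw = primary + scatter                         # the "raw projection"

def lowpass(sig, n=81):
    return np.convolve(sig, np.ones(n) / n, mode="same")

corrected = raw.copy()
for _ in range(5):
    # "Segment" a two-level template from the current image.
    template = np.where(corrected > corrected.mean(), 1.0, 0.2)
    # Low-frequency part of (raw - simulated template) is the scatter estimate.
    scatter_est = lowpass(raw - template)
    corrected = raw - scatter_est

err0 = np.abs(raw - primary).mean()        # error before correction
err1 = np.abs(corrected - primary).mean()  # error after correction
```

After a few passes the residual error is a small fraction of the original contamination, mirroring the HU-error reduction reported for the phantom and patient data.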
CORRECTING FOR INTERSTELLAR SCATTERING DELAY IN HIGH-PRECISION PULSAR TIMING: SIMULATION RESULTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palliyaguru, Nipuni; McLaughlin, Maura; Stinebring, Daniel
2015-12-20
Light travel time changes due to gravitational waves (GWs) may be detected within the next decade through precision timing of millisecond pulsars. Removal of frequency-dependent interstellar medium (ISM) delays due to dispersion and scattering is a key issue in the detection process. Current timing algorithms routinely correct pulse times of arrival (TOAs) for time-variable delays due to cold plasma dispersion. However, none of the major pulsar timing groups correct for delays due to scattering from multi-path propagation in the ISM. Scattering introduces a frequency-dependent phase change in the signal that results in pulse broadening and arrival time delays. Any method to correct the TOA for interstellar propagation effects must be based on multi-frequency measurements that can effectively separate dispersion and scattering delay terms from frequency-independent perturbations such as those due to a GW. Cyclic spectroscopy, first described in an astronomical context by Demorest (2011), is a potentially powerful tool to assist in this multi-frequency decomposition. As a step toward a more comprehensive ISM propagation delay correction, we demonstrate through a simulation that we can accurately recover impulse response functions (IRFs), such as those that would be introduced by multi-path scattering, with a realistic signal-to-noise ratio (S/N). We demonstrate that timing precision is improved when scatter-corrected TOAs are used, under the assumptions of a high S/N and highly scattered signal. We also show that the effect of pulse-to-pulse “jitter” is not a serious problem for IRF reconstruction, at least for jitter levels comparable to those observed in several bright pulsars.
NASA Astrophysics Data System (ADS)
Lee, Jaehwa; Hsu, N. Christina; Sayer, Andrew M.; Bettenhausen, Corey; Yang, Ping
2017-10-01
Aerosol Robotic Network (AERONET)-based nonspherical dust optical models are developed and applied to the Satellite Ocean Aerosol Retrieval (SOAR) algorithm as part of the Version 1 Visible Infrared Imaging Radiometer Suite (VIIRS) NASA "Deep Blue" aerosol data product suite. The optical models are created using Version 2 AERONET inversion data at six distinct sites influenced frequently by dust aerosols from different source regions. The same spheroid shape distribution as used in the AERONET inversion algorithm is assumed to account for the nonspherical characteristics of mineral dust, which ensures the consistency between the bulk scattering properties of the developed optical models and the AERONET-retrieved microphysical and optical properties. For the Version 1 SOAR aerosol product, the dust optical model representative of the Capo Verde site is used, considering the strong influence of Saharan dust over the global ocean in terms of amount and spatial coverage. Comparisons of the VIIRS-retrieved aerosol optical properties against AERONET direct-Sun observations at five island/coastal sites suggest that the use of nonspherical dust optical models significantly improves the retrievals of aerosol optical depth (AOD) and Ångström exponent by mitigating the well-known artifact of scattering angle dependence of the variables, which is observed when incorrectly assuming spherical dust. Removing these artifacts yields a more natural spatial pattern of AOD along the transport path of Saharan dust to the Atlantic Ocean; that is, AOD decreases with increasing distance transported, whereas the spherical assumption leads to a strong wave pattern due to the spurious scattering angle dependence of AOD.
Riemann–Hilbert problem approach for two-dimensional flow inverse scattering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agaltsov, A. D., E-mail: agalets@gmail.com; Novikov, R. G., E-mail: novikov@cmap.polytechnique.fr; IEPT RAS, 117997 Moscow
2014-10-15
We consider inverse scattering for the time-harmonic wave equation with first-order perturbation in two dimensions. This problem arises in particular in the acoustic tomography of moving fluid. We consider linearized and nonlinearized reconstruction algorithms for this problem of inverse scattering. Our nonlinearized reconstruction algorithm is based on the non-local Riemann–Hilbert problem approach. Comparisons with preceding results are given.
An improved target velocity sampling algorithm for free gas elastic scattering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romano, Paul K.; Walsh, Jonathan A.
2018-02-03
We present an improved algorithm for sampling the target velocity when simulating elastic scattering in a Monte Carlo neutron transport code that correctly accounts for the energy dependence of the scattering cross section. The algorithm samples the relative velocity directly, thereby avoiding a potentially inefficient rejection step based on the ratio of cross sections. Here, we have shown that this algorithm requires only one rejection step, whereas other methods of similar accuracy require two rejection steps. The method was verified against stochastic and deterministic reference results for upscattering percentages in 238U. Simulations of a light water reactor pin cell problem demonstrate that using this algorithm results in a 3% or less penalty in performance when compared with an approximate method that is used in most production Monte Carlo codes.
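For context, the conventional constant-cross-section free-gas kernel that such algorithms refine can be sketched as a rejection step: propose a target velocity from a Maxwellian, then accept with probability v_rel / (v_n + v_t), which is always at most 1 and weights accepted samples by the relative speed. Units and the neutron speed below are arbitrary, and this is the textbook baseline, not the paper's improved direct sampling of the relative velocity.

```python
import numpy as np

rng = np.random.default_rng(4)
v_n = 1.5            # neutron speed in units of the target thermal speed
n = 200_000

# Maxwellian target speeds: magnitude of a 3-D Gaussian velocity vector.
v_t = np.linalg.norm(rng.normal(0.0, np.sqrt(0.5), size=(n, 3)), axis=1)
mu = rng.uniform(-1.0, 1.0, n)                     # cosine of relative angle
v_rel = np.sqrt(v_n ** 2 + v_t ** 2 - 2.0 * v_n * v_t * mu)

# Rejection on the ratio v_rel / (v_n + v_t), which is bounded by 1.
accept = rng.uniform(0.0, 1.0, n) < v_rel / (v_n + v_t)
rate = accept.mean()
```

Accepted samples are biased toward larger relative speeds, as the kernel requires; the improved algorithm achieves an equivalent distribution while keeping the number of rejection stages down even when the cross section varies with energy.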
Improved scatter correction using adaptive scatter kernel superposition
NASA Astrophysics Data System (ADS)
Sun, M.; Star-Lack, J. M.
2010-11-01
Accurate scatter correction is required to produce high-quality reconstructions of x-ray cone-beam computed tomography (CBCT) scans. This paper describes new scatter kernel superposition (SKS) algorithms for deconvolving scatter from projection data. The algorithms are designed to improve upon the conventional approach whose accuracy is limited by the use of symmetric kernels that characterize the scatter properties of uniform slabs. To model scatter transport in more realistic objects, nonstationary kernels, whose shapes adapt to local thickness variations in the projection data, are proposed. Two methods are introduced: (1) adaptive scatter kernel superposition (ASKS) requiring spatial domain convolutions and (2) fast adaptive scatter kernel superposition (fASKS) where, through a linearity approximation, convolution is efficiently performed in Fourier space. The conventional SKS algorithm, ASKS, and fASKS, were tested with Monte Carlo simulations and with phantom data acquired on a table-top CBCT system matching the Varian On-Board Imager (OBI). All three models accounted for scatter point-spread broadening due to object thickening, object edge effects, detector scatter properties and an anti-scatter grid. Hounsfield unit (HU) errors in reconstructions of a large pelvis phantom with a measured maximum scatter-to-primary ratio over 200% were reduced from -90 ± 58 HU (mean ± standard deviation) with no scatter correction to 53 ± 82 HU with SKS, to 19 ± 25 HU with fASKS and to 13 ± 21 HU with ASKS. HU accuracies and measured contrast were similarly improved in reconstructions of a body-sized elliptical Catphan phantom. The results show that the adaptive SKS methods offer significant advantages over the conventional scatter deconvolution technique.
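The nonstationary-kernel idea can be sketched in one dimension: each projection column contributes a scatter kernel whose amplitude and width grow with the local object thickness, and the scatter estimate is the superposition of these adapted kernels. The Gaussian kernel family and the thickness-to-width mapping below are illustrative assumptions, not the calibrated kernels of the ASKS method.

```python
import numpy as np

x = np.arange(256, dtype=float)
# Toy thickness profile of an object occupying part of the detector line.
thickness = np.where((x > 60) & (x < 200), 20.0 + 10.0 * np.sin(x / 20.0), 0.0)

def gaussian(center, sigma):
    # Unit-area kernel so total scatter is conserved in the superposition.
    return np.exp(-0.5 * ((x - center) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

scatter = np.zeros_like(x)
for i, th in enumerate(thickness):
    if th > 0:
        # Thicker columns scatter more and spread it over a wider kernel.
        scatter += 0.01 * th * gaussian(x[i], 5.0 + 0.5 * th)
```

A stationary-kernel (slab) model would instead convolve the whole profile with one fixed kernel, which is exactly the approximation the adaptive variants relax.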
Ocean observations with EOS/MODIS: Algorithm development and post launch studies
NASA Technical Reports Server (NTRS)
Gordon, Howard R.
1996-01-01
An investigation of the influence of stratospheric aerosol on the performance of the atmospheric correction algorithm is nearly complete. The results indicate how the performance of the algorithm is degraded if the stratospheric aerosol is ignored. Use of the MODIS 1380 nm band to effect a correction for stratospheric aerosols was also studied. Simple algorithms such as subtracting the reflectance at 1380 nm from the visible and near infrared bands can significantly reduce the error; however, only if the diffuse transmittance of the aerosol layer is taken into account. The atmospheric correction code has been modified for use with absorbing aerosols. Tests of the code showed that, in contrast to non-absorbing aerosols, the retrievals were strongly influenced by the vertical structure of the aerosol, even when the candidate aerosol set was restricted to a set appropriate to the absorbing aerosol. This will further complicate the problem of atmospheric correction in an atmosphere with strongly absorbing aerosols. Our whitecap radiometer system and solar aureole camera were both tested at sea and performed well. Investigation of a technique to remove the effects of residual instrument polarization sensitivity was initiated and applied to an instrument possessing (approx.) 3-4 times the polarization sensitivity expected for MODIS. Preliminary results suggest that for such an instrument, elimination of the polarization effect is possible at the required level of accuracy by estimating the polarization of the top-of-atmosphere radiance to be that expected for a pure Rayleigh scattering atmosphere. This may be of significance for design of a follow-on MODIS instrument. W.M. Balch participated in two month-long cruises to the Arabian sea, measuring coccolithophore abundance, production, and optical properties. A thorough understanding of the relationship between calcite abundance and light scatter, in situ, will provide the basis for a generic suspended calcite algorithm.
New algorithm and system for measuring size distribution of blood cells
NASA Astrophysics Data System (ADS)
Yao, Cuiping; Li, Zheng; Zhang, Zhenxi
2004-06-01
In optical scattering particle sizing, a numerical transform is sought so that a particle size distribution can be determined from angular measurements of near-forward scattering; this approach has been adopted in the measurement of blood cells. In this paper a new method for counting and classifying blood cells, based on laser light scattering from stationary suspensions, is presented. A genetic algorithm combined with a nonnegative least-squares algorithm is employed to invert the size distribution of the blood cells. Numerical tests show that these techniques can be successfully applied to measuring the size distribution of blood cells with high stability.
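The inversion step can be sketched as a linear problem: columns of a kernel matrix hold the angular pattern of each size bin, and the measured pattern is their weighted sum. The Gaussian near-forward lobes below are toy stand-ins for Mie kernels, and a plain least-squares solve with clipping replaces a true nonnegative least-squares solver (e.g. Lawson-Hanson, as in scipy.optimize.nnls), which noisy data would require; with noise-free toy data the unconstrained solution is already nonnegative.

```python
import numpy as np

theta = np.linspace(0.01, 0.3, 60)            # near-forward angles [rad]
sizes = np.array([2.0, 4.0, 6.0, 8.0, 10.0])  # size bins, arbitrary units

# Toy kernel: larger particles scatter into a narrower forward lobe.
K = np.column_stack([np.exp(-20.0 * (theta * s) ** 2) for s in sizes])

f_true = np.array([0.0, 0.3, 0.5, 0.2, 0.0])  # true size distribution
d = K @ f_true                                # composite angular pattern

# Least-squares inversion; clip enforces nonnegativity on the clean solution.
f_est, *_ = np.linalg.lstsq(K, d, rcond=None)
f_est = np.clip(f_est, 0.0, None)
```

In the paper's scheme the genetic algorithm explores the solution space globally while the nonnegative least-squares step refines candidate distributions against the measured pattern.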
NASA Astrophysics Data System (ADS)
Yang, Minglin; Wu, Yueqian; Sheng, Xinqing; Ren, Kuan Fang
2017-12-01
Computation of the scattering of shaped beams by large nonspherical particles is a challenge in both the optics and electromagnetics domains, since it concerns many research fields. In this paper, we report our new progress in the numerical computation of scattering diagrams. Our algorithm permits calculation of the scattering of a particle as large as 110 wavelengths, or 700 in size parameter. The particle can be transparent or absorbing, of arbitrary shape, smooth or with a sharp surface, such as Chebyshev particles or ice crystals. To illustrate the capacity of the algorithm, a zero-order Bessel beam is taken as the incident beam, and the scattering of ellipsoidal and Chebyshev particles is taken as an example. Some special phenomena have been revealed and examined. The scattering problem is formulated with the combined tangential formulation and solved iteratively with the aid of the multilevel fast multipole algorithm, which is well parallelized with the message passing interface on a distributed-memory computer platform using a hybrid partitioning strategy. The numerical predictions are compared with the results of the rigorous method for a spherical particle to validate the accuracy of the approach. The scattering diagrams of large ellipsoidal particles with various parameters are examined. The effects of the aspect ratio, the half-cone angle of the incident zero-order Bessel beam, and the off-axis distance on the scattered intensity are studied. Scattering by an asymmetric Chebyshev particle with size parameter larger than 700 is also presented to show the capability of the method for computing scattering by arbitrarily shaped particles.
Ten Years of Cloud Optical and Microphysical Retrievals from MODIS
NASA Technical Reports Server (NTRS)
Platnick, Steven; King, Michael D.; Wind, Galina; Hubanks, Paul; Arnold, G. Thomas; Amarasinghe, Nandana
2010-01-01
The MODIS cloud optical properties algorithm (MOD06/MYD06 for Terra and Aqua MODIS, respectively) has undergone extensive improvements and enhancements since the launch of Terra. These changes have included: improvements in the cloud thermodynamic phase algorithm; substantial changes in the ice cloud light scattering look up tables (LUTs); a clear-sky restoral algorithm for flagging heavy aerosol and sunglint; greatly improved spectral surface albedo maps, including the spectral albedo of snow by ecosystem; inclusion of pixel-level uncertainty estimates for cloud optical thickness, effective radius, and water path derived for three error sources that includes the sensitivity of the retrievals to solar and viewing geometries. To improve overall retrieval quality, we have also implemented cloud edge removal and partly cloudy detection (using MOD35 cloud mask 250m tests), added a supplementary cloud optical thickness and effective radius algorithm over snow and sea ice surfaces and over the ocean, which enables comparison with the "standard" 2.1 μm effective radius retrieval, and added a multi-layer cloud detection algorithm. We will discuss the status of the MOD06 algorithm and show examples of pixel-level (Level-2) cloud retrievals for selected data granules, as well as gridded (Level-3) statistics, notably monthly means and histograms (1D and 2D, with the latter giving correlations between cloud optical thickness and effective radius, and other cloud product pairs).
Optimal and adaptive methods of processing hydroacoustic signals (review)
NASA Astrophysics Data System (ADS)
Malyshkin, G. S.; Sidel'nikov, G. B.
2014-09-01
Different methods of optimal and adaptive processing of hydroacoustic signals for multipath propagation and scattering are considered. Advantages and drawbacks of the classical adaptive (Capon, MUSIC, and Johnson) algorithms and "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. The results of simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering are presented. An automatic detector based on classical or fast projection algorithms is analyzed; it estimates the background using median filtering or the method of bilateral spatial contrast.
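The Capon algorithm mentioned above can be sketched for a linear equidistant array: estimate the sample covariance from snapshots, then scan the spatial spectrum P(u) = 1 / (a(u)^H R^-1 a(u)) over steering vectors a(u). The two plane-wave sources, noise level, and array geometry below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
m, snaps, d = 16, 400, 0.5              # sensors, snapshots, spacing [wavelengths]
u_true = np.array([-0.4, 0.3])          # sin(theta) of the two sources

def steering(u):
    return np.exp(2j * np.pi * d * np.arange(m) * u)

# Simulated snapshots: two uncorrelated sources plus sensor noise.
A = np.column_stack([steering(u) for u in u_true])
s = rng.standard_normal((2, snaps)) + 1j * rng.standard_normal((2, snaps))
noise = 0.1 * (rng.standard_normal((m, snaps)) + 1j * rng.standard_normal((m, snaps)))
X = A @ s + noise

R = X @ X.conj().T / snaps              # sample covariance
Rinv = np.linalg.inv(R)

u_grid = np.linspace(-1.0, 1.0, 801)
P = np.array([1.0 / np.real(steering(u).conj() @ Rinv @ steering(u))
              for u in u_grid])

# Directions of arrival: the two strongest local maxima of the spectrum.
peak_idx = sorted((i for i in range(1, u_grid.size - 1)
                   if P[i] > P[i - 1] and P[i] > P[i + 1]),
                  key=lambda i: -P[i])[:2]
u_est = np.sort(u_grid[peak_idx])
```

Projection ("fast") methods replace the full matrix inversion with operations on an estimated signal subspace, which is where the speed advantage discussed in the review comes from.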
NASA Astrophysics Data System (ADS)
Meyer, Michael; Kalender, Willi A.; Kyriakou, Yiannis
2010-01-01
Scattered radiation is a major source of artifacts in flat detector computed tomography (FDCT) due to the increased irradiated volumes. We propose a fast projection-based algorithm for correction of scatter artifacts. The presented algorithm combines a convolution method for determining the spatial distribution of the scatter intensity with an object-size-dependent scaling of the scatter intensity distributions using a priori information generated by Monte Carlo simulations. A projection-based (PBSE) and an image-based (IBSE) strategy for size estimation of the scanned object are presented. Both strategies provide good correction and comparable results; the faster PBSE strategy is recommended. Even with such a fast and simple algorithm that in the PBSE variant does not rely on reconstructed volumes or scatter measurements, it is possible to provide a reasonable scatter correction even for truncated scans. For both simulations and measurements, scatter artifacts were significantly reduced and the algorithm showed stable behavior in the z-direction. For simulated voxelized head, hip and thorax phantoms, a figure of merit Q of 0.82, 0.76 and 0.77 was reached, respectively (Q = 0 for uncorrected, Q = 1 for ideal). For a water phantom with 15 cm diameter, for example, a cupping reduction from 10.8% down to 2.1% was achieved. The performance of the correction method has limitations in the case of measurements using non-ideal detectors, intensity calibration, etc. An iterative approach to overcome most of these limitations was proposed. This approach is based on root finding of a cupping metric and may be useful for other scatter correction methods as well. By this optimization, cupping of the measured water phantom was further reduced down to 0.9%. The algorithm was evaluated on a commercial system including truncated and non-homogeneous clinically relevant objects.
Experimental testing of four correction algorithms for the forward scattering spectrometer probe
NASA Technical Reports Server (NTRS)
Hovenac, Edward A.; Oldenburg, John R.; Lock, James A.
1992-01-01
Three number density correction algorithms and one size distribution correction algorithm for the Forward Scattering Spectrometer Probe (FSSP) were compared with data taken by the Phase Doppler Particle Analyzer (PDPA) and an optical number density measuring instrument (NDMI). Of the three number density correction algorithms, the one that compared best to the PDPA and NDMI data was the algorithm developed by Baumgardner, Strapp, and Dye (1985). The algorithm that corrects sizing errors in the FSSP that was developed by Lock and Hovenac (1989) was shown to be within 25 percent of the Phase Doppler measurements at number densities as high as 3000/cc.
Ho, Derek; Drake, Tyler K.; Bentley, Rex C.; Valea, Fidel A.; Wax, Adam
2015-01-01
We evaluate a new hybrid algorithm for determining nuclear morphology using angle-resolved low coherence interferometry (a/LCI) measurements in ex vivo cervical tissue. The algorithm combines Mie theory based and continuous wavelet transform inverse light scattering analysis. The hybrid algorithm was validated and compared to traditional Mie theory based analysis using an ex vivo tissue data set. The hybrid algorithm achieved 100% agreement with pathology in distinguishing dysplastic and non-dysplastic biopsy sites in the pilot study. Significantly, the new algorithm performed over four times faster than traditional Mie theory based analysis. PMID:26309741
Artifact removal algorithms for stroke detection using a multistatic MIST beamforming algorithm.
Ricci, E; Di Domenico, S; Cianca, E; Rossi, T
2015-01-01
Microwave imaging (MWI) has recently been shown to be a promising imaging modality for low-complexity, low-cost and fast brain imaging tools, which could play a fundamental role in efficiently managing emergencies related to stroke and hemorrhages. This paper focuses on the UWB radar imaging approach, and in particular on the processing algorithms for the backscattered signals. Assuming the use of the multistatic version of the MIST (Microwave Imaging Space-Time) beamforming algorithm, developed by Hagness et al. for the early detection of breast cancer, the paper proposes and compares two artifact removal algorithms. Artifact removal is an essential step in any UWB radar imaging system, and the artifact removal algorithms considered to date have been shown not to be effective in the specific scenario of brain imaging. First, the paper proposes modifications of a known artifact removal algorithm; these modifications are shown to be effective in achieving good localization accuracy and fewer false positives. However, the main contribution is the proposal of an artifact removal algorithm based on statistical methods, which achieves even better performance with much lower computational complexity.
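The baseline that such algorithms improve upon is classic average subtraction: the strong early-time skin/skull reflection arrives almost identically on every channel, while the target response shifts from channel to channel, so subtracting the across-channel mean suppresses the artifact while (partially) preserving the target. The synthetic waveforms and the 9-channel geometry below are illustrative.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 500)   # normalised fast time

def pulse(t0):
    return np.exp(-((t - t0) / 0.01) ** 2)

n_ch = 9
artifact = 5.0 * pulse(0.1)      # identical strong reflection on every channel
# Weak target response whose delay varies with channel position.
signals = np.array([artifact + 0.2 * pulse(0.5 + 0.02 * k) for k in range(n_ch)])

cleaned = signals - signals.mean(axis=0)   # subtract the common component

early = np.abs(cleaned[:, t < 0.3]).max()  # residual artifact energy
late = np.abs(cleaned[:, t > 0.3]).max()   # surviving target energy
```

The weakness of this baseline, which motivates the paper's statistical approach, is that in a real head the "common" reflection is not truly identical across channels, so the subtraction leaves residuals and also removes part of the target response.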
ecode - Electron Transport Algorithm Testing v. 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Franke, Brian C.; Olson, Aaron J.; Bruss, Donald Eugene
2016-10-05
ecode is a Monte Carlo code used for testing algorithms related to electron transport. The code can read basic physics parameters, such as energy-dependent stopping powers and screening parameters. The code permits simple planar geometries of slabs or cubes. Parallelization consists of domain replication, with work distributed at the start of the calculation and statistical results gathered at the end of the calculation. Some basic routines (such as input parsing, random number generation, and statistics processing) are shared with the Integrated Tiger Series codes. A variety of algorithms for uncertainty propagation are incorporated based on the stochastic collocation and stochastic Galerkin methods. These permit uncertainty only in the total and angular scattering cross sections. The code contains algorithms for simulating stochastic mixtures of two materials. The physics is approximate, ranging from mono-energetic and isotropic scattering to screened Rutherford angular scattering and Rutherford energy-loss scattering (simple electron transport models). No production of secondary particles is implemented, and no photon physics is implemented.
Real-time particulate mass measurement based on laser scattering
NASA Astrophysics Data System (ADS)
Rentz, Julia H.; Mansur, David; Vaillancourt, Robert; Schundler, Elizabeth; Evans, Thomas
2005-11-01
OPTRA has developed a new approach to the determination of particulate size distribution from a measured, composite, laser angular scatter pattern. Drawing from the field of infrared spectroscopy, OPTRA has employed a multicomponent analysis technique which uniquely recognizes patterns associated with each particle size "bin" over a broad range of sizes. The technique is particularly appropriate for overlapping patterns where large signals are potentially obscuring weak ones. OPTRA has also investigated a method for accurately training the algorithms without the use of representative particles for any given application. This streamlined calibration applies a one-time measured "instrument function" to theoretical Mie patterns to create the training data for the algorithms. OPTRA has demonstrated this algorithmic technique on a compact, rugged, laser scatter sensor head we developed for gas turbine engine emissions measurements. The sensor contains a miniature violet solid state laser and an array of silicon photodiodes, both of which are commercial off the shelf. The algorithmic technique can also be used with any commercially available laser scatter system.
A rapid detection method of Escherichia coli by surface enhanced Raman scattering
NASA Astrophysics Data System (ADS)
Tao, Feifei; Peng, Yankun; Xu, Tianfeng
2015-05-01
Conventional microbiological detection and enumeration methods are time-consuming, labor-intensive, and provide only retrospective information. The objective of the present work is to study the capability of surface enhanced Raman scattering (SERS) to detect Escherichia coli (E. coli) using the presented silver colloidal substrate. The results showed that the adaptive iteratively reweighted Penalized Least Squares (airPLS) algorithm could effectively remove the fluorescent background from the original Raman spectra, and Raman characteristic peaks at 558, 682, 726, 1128, 1210 and 1328 cm-1 could be observed stably in the baseline-corrected SERS spectra at all studied bacterial concentrations. The detection limit of SERS was determined to be as low as 0.73 log CFU/ml for E. coli with the prepared silver colloidal substrate. The quantitative prediction results based on the intensity values of the characteristic peaks were not good, with correlation coefficients of 0.99 for the calibration set but only 0.64 for the cross-validation set.
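Baseline removal of the kind airPLS performs can be sketched with its simpler cousin, asymmetric least squares (AsLS): a Whittaker smoother with a second-difference penalty, iterated with asymmetric weights so the fitted baseline hugs the broad fluorescent background while ignoring the sharp Raman peaks. The synthetic spectrum, lambda, and p below are illustrative, and this is the AsLS weighting rule, not airPLS's exact adaptive reweighting.

```python
import numpy as np

n = 300
x = np.arange(n, dtype=float)
baseline_true = 2.0 + 0.01 * x + 5.0 * np.exp(-x / 200.0)   # broad fluorescence
peaks = 3.0 * np.exp(-0.5 * ((x - 80) / 3) ** 2) \
    + 2.0 * np.exp(-0.5 * ((x - 210) / 4) ** 2)             # sharp Raman lines
y = baseline_true + peaks

D = np.diff(np.eye(n), 2, axis=0)      # second-difference operator
lam, p = 1e5, 0.01                     # smoothness penalty, asymmetry
w = np.ones(n)
for _ in range(15):
    W = np.diag(w)
    # Penalized weighted fit: minimise sum w*(y-z)^2 + lam*||D z||^2.
    z = np.linalg.solve(W + lam * D.T @ D, w * y)
    # Points above the fit are treated as peaks and down-weighted.
    w = np.where(y > z, p, 1 - p)

corrected = y - z                      # baseline-corrected spectrum
```

A production implementation would use sparse matrices for the solve; airPLS additionally adapts the weights from the residuals at each iteration so that convergence is automatic.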
Ho, Derek; Kim, Sanghoon; Drake, Tyler K.; Eldridge, Will J.; Wax, Adam
2014-01-01
We present a fast approach for size determination of spherical scatterers using the continuous wavelet transform of the angular light scattering profile to address the computational limitations of previously developed sizing techniques. The potential accuracy, speed, and robustness of the algorithm were determined in simulated models of scattering by polystyrene beads and cells. The algorithm was tested experimentally on angular light scattering data from polystyrene bead phantoms and MCF-7 breast cancer cells using a 2D a/LCI system. Theoretical sizing of simulated profiles of beads and cells produced strong fits between calculated and actual size (r2 = 0.9969 and r2 = 0.9979 respectively), and experimental size determinations were accurate to within one micron. PMID:25360350
TH-A-18C-04: Ultrafast Cone-Beam CT Scatter Correction with GPU-Based Monte Carlo Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Y; Southern Medical University, Guangzhou; Bai, T
2014-06-15
Purpose: Scatter artifacts severely degrade the image quality of cone-beam CT (CBCT). We present an ultrafast scatter correction framework using GPU-based Monte Carlo (MC) simulation and a prior patient CT image, aiming to complete the whole process, including both scatter correction and reconstruction, automatically within 30 seconds. Methods: The method consists of six steps: 1) FDK reconstruction using the raw projection data; 2) rigid registration of the planning CT to the FDK result; 3) MC scatter calculation at sparse view angles using the planning CT; 4) interpolation of the calculated scatter signals to the other angles; 5) removal of scatter from the raw projections; 6) FDK reconstruction using the scatter-corrected projections. In addition to using a GPU to accelerate the MC photon simulations, we also use a small number of photons and a down-sampled CT image in the simulation to further reduce computation time. A novel denoising algorithm is used to eliminate the MC scatter noise caused by the low photon numbers. The method is validated on head-and-neck cases with simulated and clinical data. Results: We have studied the impact of photon histories and volume down-sampling factors on the accuracy of scatter estimation. A Fourier analysis was conducted to show that scatter images calculated at 31 angles are sufficient to restore those at all angles with <0.1% error. For the simulated case with a resolution of 512×512×100, we simulated 10M photons per angle. The total computation time is 23.77 seconds on an Nvidia GTX Titan GPU. The scatter-induced shading/cupping artifacts are substantially reduced, and the average HU error of a region-of-interest is reduced from 75.9 to 19.0 HU. Similar results were found for a real patient case. Conclusion: A practical ultrafast MC-based CBCT scatter correction scheme is developed. The whole process of scatter correction and reconstruction is accomplished within 30 seconds.
This study is supported in part by NIH (1R01CA154747-01) and The Core Technology Research in Strategic Emerging Industry, Guangdong, China (2011A081402003).
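Step 4 of the pipeline above (interpolating scatter estimated at a few sparse view angles to all projection angles) reduces to a periodic 1-D interpolation per detector pixel. A toy sketch for a single pixel, with invented numbers in place of the MC estimates:

```python
import numpy as np

# "MC-estimated" scatter at 31 sparse gantry angles: a smooth,
# slowly varying toy signal (arbitrary units).
sparse_angles = np.linspace(0, 360, 31, endpoint=False)
scatter_sparse = 100 + 20 * np.sin(np.deg2rad(sparse_angles))

# Step 4: interpolate to all 360 projection angles, wrapping at 360 deg.
all_angles = np.arange(360.0)
scatter_full = np.interp(all_angles, sparse_angles, scatter_sparse,
                         period=360.0)

# Step 5: subtract the interpolated scatter from the raw projections.
primary = 500.0                                            # toy primary signal
true_scatter = 100 + 20 * np.sin(np.deg2rad(all_angles))
raw = primary + true_scatter
corrected = raw - scatter_full
```

Because scatter varies smoothly with gantry angle, the 31-angle sampling leaves only a sub-unit interpolation residual here, consistent with the abstract's observation that sparse angles suffice.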
Willert, Jeffrey; Park, H.; Taitano, William
2015-11-01
High-order/low-order (or moment-based acceleration) algorithms have been used to significantly accelerate the solution to the neutron transport k-eigenvalue problem over the past several years. Recently, the nonlinear diffusion acceleration algorithm has been extended to solve fixed-source problems with anisotropic scattering sources. In this paper, we demonstrate that we can extend this algorithm to k-eigenvalue problems in which the scattering source is anisotropic and a significant acceleration can be achieved. Lastly, we demonstrate that the low-order, diffusion-like eigenvalue problem can be solved efficiently using a technique known as nonlinear elimination.
Intraocular scattering compensation in retinal imaging
Christaras, Dimitrios; Ginis, Harilaos; Pennos, Alexandros; Artal, Pablo
2016-01-01
Intraocular scattering affects fundus imaging in much the same way as it affects vision: it causes a decrease in contrast that depends both on the intrinsic scattering of the eye and on the dynamic range of the image. Consequently, in cases where the absolute intensity in the fundus image is important, scattering can lead to erroneous estimates. In this paper, a setup capable of acquiring fundus images and objectively estimating intraocular scattering was built, and the acquired images were then used for scattering compensation in fundus imaging. The method consists of two parts: first, the individual's wide-angle Point Spread Function (PSF) is reconstructed at a specific wavelength; then, it is used within an enhancement algorithm to compensate an acquired fundus image for scattering. As a proof of concept, a single-pass measurement with a scatter filter was carried out first, and the complete algorithm of PSF reconstruction and scattering compensation was applied. The advantage of the single-pass test is that the reconstructed image can be compared with the original one, thus testing the efficiency of the method. Following this test, the algorithm was applied to actual fundus images of human eyes, and the contrast of the image before and after compensation was compared. The comparison showed that, depending on the wavelength, contrast can be reduced by 8.6% under certain conditions. PMID:27867710
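A common way to realize this kind of PSF-based compensation is regularized (Wiener-style) deconvolution in the Fourier domain. The sketch below is an illustrative stand-in for the paper's enhancement algorithm, using a toy image and an invented two-component PSF (sharp core plus broad scattering halo):

```python
import numpy as np

def compensate(image, psf, eps=1e-3):
    """Wiener-style compensation: divide out the wide-angle PSF in the
    Fourier domain, with a small regularizer eps to limit noise
    amplification where the PSF transfer function is weak."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(image)
    return np.real(np.fft.ifft2(G * np.conj(H) / (np.abs(H) ** 2 + eps)))

# Toy fundus "image": a bright disc on a dark background.
n = 64
yy, xx = np.mgrid[:n, :n]
truth = (((xx - 32) ** 2 + (yy - 32) ** 2) < 64).astype(float)

# Invented PSF: sharp core plus a broad halo carrying most of the energy.
r2 = (xx - 32.0) ** 2 + (yy - 32.0) ** 2
psf = np.exp(-r2 / 0.5) + 0.002 * np.exp(-r2 / 400.0)
psf /= psf.sum()

blurred = np.real(np.fft.ifft2(np.fft.fft2(truth)
                               * np.fft.fft2(np.fft.ifftshift(psf))))
restored = compensate(blurred, psf)
```

The halo redistributes energy from bright to dark regions (the contrast loss described above); dividing out the measured wide-angle PSF undoes most of that redistribution.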
NASA Astrophysics Data System (ADS)
Potlov, A. Yu.; Frolov, S. V.; Proskurin, S. G.
2018-04-01
An algorithm for localizing optical-structure disturbances in time-resolved diffuse optical tomography of biological objects is described. The key features of the presented algorithm are an initial approximation for the spatial distribution of the optical characteristics based on the Homogeneity Index, and the assumption that all absorbing and scattering inhomogeneities in an investigated object are spherical and have the same absorption and scattering coefficients. The described algorithm can be used in the diagnosis of brain structures, in traumatology and in optical mammography.
Impulsive noise removal from color video with morphological filtering
NASA Astrophysics Data System (ADS)
Ruchay, Alexey; Kober, Vitaly
2017-09-01
This paper deals with impulse noise removal from color video. The proposed noise removal algorithm employs switching filtering for denoising of color video: detection of corrupted pixels by means of a novel morphological filtering, followed by replacement of the detected pixels with estimates derived from uncorrupted pixels in previous frames. With the help of computer simulation we show that the proposed algorithm effectively removes impulse noise from color video. The performance of the proposed algorithm is compared, in terms of image restoration metrics, with that of commonly used successful algorithms.
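A minimal single-frame sketch of the switching idea, substituting a simple median-deviation detector for the paper's morphological detector (all thresholds and pixel values are hypothetical):

```python
import numpy as np
from scipy.ndimage import median_filter

def denoise_frame(curr, prev, thresh=60):
    """Switching filter sketch: flag pixels that deviate strongly from
    a local median (a simple stand-in for the morphological detector)
    and replace them with the co-located pixel of the previous frame."""
    med = median_filter(curr, size=3)
    corrupted = np.abs(curr.astype(int) - med.astype(int)) > thresh
    out = curr.copy()
    out[corrupted] = prev[corrupted]
    return out, corrupted

rng = np.random.default_rng(2)
prev = np.full((32, 32), 120, dtype=np.uint8)    # clean previous frame
curr = prev.copy()
noisy_idx = rng.choice(32 * 32, size=40, replace=False)
curr.flat[noisy_idx] = 255                       # salt-type impulse noise
cleaned, mask = denoise_frame(curr, prev)
```

The switching structure is the point: only pixels flagged as corrupted are touched, so uncorrupted detail passes through the filter unchanged.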
An RT-based Technique for the Analysis and Removal of Titan's Atmosphere from Cassini/VIMS-IR Data
NASA Astrophysics Data System (ADS)
Sindoni, G.; Tosi, F.; Adriani, A.; Moriconi, M. L.; D'Aversa, E.; Grassi, D.; Oliva, F.; Dinelli, B. M.; Castelli, E.
2015-12-01
Since 2004, the Visual and Infrared Mapping Spectrometer (VIMS), together with the CIRS and UVIS spectrometers aboard the Cassini spacecraft, has provided insight into the atmospheres of Saturn and Titan through remote sensing observations. The presence of clouds and aerosols in Titan's dense atmosphere makes the analysis of the surface radiation a difficult task. For this purpose, an atmospheric radiative transfer (RT) model is required. The implementation of an RT code that includes multiple scattering in an inversion algorithm based on the Bayesian approach can provide strong constraints on both the surface albedo and the atmospheric composition. Applying the retrieval procedure we have developed to VIMS-IR spectra acquired in nadir or slant geometries allows us to retrieve the equivalent opacity of Titan's atmosphere in terms of variable aerosol and gaseous content. Thus, the separation of the atmospheric and surface contributions in the observed spectrum is possible. The atmospheric removal procedure was tested on the 1-2.2 μm spectral range of publicly available VIMS data covering the Ontario Lacus and Ligeia Mare regions. The retrieval of the accurate composition of Titan's atmosphere is a much more complex task. So far, information about the vertical structure of the atmosphere from limb spectra has mostly been derived under conditions where scattering could be neglected [1,2]. Indeed, since the very high aerosol load in the middle-low atmosphere produces strong scattering effects on the measured spectra, the analysis requires RT modeling that takes into account multiple scattering in a spherical-shell geometry. Therefore, an innovative method we are developing based on the Monte Carlo approach can provide important information about the vertical distribution of the aerosols and gases composing Titan's atmosphere. [1] Bellucci et al. (2009), Icarus 201(1), 198-216. [2] de Kok et al. (2007), Icarus 191(1), 223-235.
Optical transillumination tomography with tolerance against refraction mismatch.
Haidekker, Mark A
2005-12-01
Optical transillumination tomography (OT) is a laser-based imaging modality where ballistic photons are used for projection generation. Image reconstruction is therefore similar to X-ray computed tomography. This modality promises fast image acquisition, good resolution and contrast, and inexpensive instrumentation for imaging of weakly scattering objects, such as tissue-engineered constructs. In spite of its advantages, OT is not widely used. One reason is its sensitivity towards changes in material refractive index along the light path. Beam refraction artefacts cause areas of overestimated tissue density and blur geometric details. A spatial filter, introduced into the beam path to eliminate scattered photons, will also remove refracted photons from the projections. In the projections, zones affected by refraction can be detected by thresholding. By using algebraic reconstruction techniques (ART) in conjunction with suitable interpolation algorithms, reconstruction artefacts can be partly avoided. Reconstructions from a test image were performed. Standard filtered backprojection (FBP) showed a root mean square (RMS) deviation from the original image of 9.9. RMS deviation with refraction-tolerant ART reconstruction was 0.33 and 0.24, depending on the algorithm, compared to 0.57 (FBP) and 0.06 (ART) in a non-refracting case. In addition, modified ART reconstruction allowed detection of small geometric details that were invisible in standard reconstructions. Refraction-tolerant ART may be the key to eliminating one of the major challenges of OT.
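The core idea of excluding refraction-flagged projection values from an algebraic reconstruction can be sketched with a plain Kaczmarz/ART loop on a toy linear system (the interpolation over skipped zones described above is omitted here):

```python
import numpy as np

rng = np.random.default_rng(3)

def art(A, b, valid, sweeps=50, relax=1.0):
    """Kaczmarz-style ART that skips rays flagged as refraction-affected
    (valid[i] == False), so corrupted projection values never enter
    the update."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for i in np.flatnonzero(valid):
            a = A[i]
            x += relax * (b[i] - a @ x) / (a @ a) * a
    return x

# Toy linear imaging system: 60 rays, 20 unknowns, one corrupted ray.
x_true = rng.random(20)
A = rng.standard_normal((60, 20))
b = A @ x_true
valid = np.ones(60, dtype=bool)
b[5] += 10.0        # refraction artefact on ray 5...
valid[5] = False    # ...detected by thresholding and excluded
x_rec = art(A, b, valid)
```

With the corrupted ray excluded the remaining consistent system still determines the object, mirroring how the refraction-tolerant ART avoids the density overestimation that FBP suffers.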
Radar Polarimetry: Theory, Analysis, and Applications
NASA Astrophysics Data System (ADS)
Hubbert, John Clark
The fields of radar polarimetry and optical polarimetry are compared. The mathematics of optical polarimetry is formulated such that a local right-handed coordinate system is always used to describe the polarization states. This is not done in radar polarimetry. Radar optimum polarization theory is redeveloped within the framework of optical polarimetry. The radar optimum polarizations and optic eigenvalues of common scatterers are compared. In addition, a novel definition of an eigenpolarization state is given and the accompanying mathematics is developed. The polarization response calculated using the optic, radar and novel definitions is presented for a variety of scatterers. Polarimetric transformation provides a means to characterize scatterers in more than one polarization basis. Polarimetric transformation for an ensemble of scatterers is obtained via two methods: (1) the covariance method and (2) the instantaneous scattering matrix (ISM) method. The covariance method is used to relate the mean radar parameters of a ±45° linear polarization basis to those of a horizontal and vertical polarization basis. In contrast, the ISM method transforms the individual time samples. Algorithms are developed for transforming the time series from fully polarimetric radars that switch between orthogonal states. The transformed time series are then used to calculate the mean radar parameters of interest. It is also shown that propagation effects do not need to be removed from the ISMs before transformation. The techniques are demonstrated using data collected by POLDIRAD, the German Aerospace Research Establishment's fully polarimetric C-band radar. The differential phase observed between two copolar states, Ψ_CO, is composed of two phases: (1) the differential propagation phase, φ_DP, and (2) the differential backscatter phase, δ. The slope of φ_DP with range is an estimate of the specific differential phase, K_DP.
The process of estimating K_DP is complicated when δ is present. Algorithms are presented for estimating δ and K_DP from range profiles of Ψ_CO. Also discussed are procedures for the estimation and interpretation of other radar measurables such as reflectivity, Z_HH, differential reflectivity, Z_DR, the magnitude of the copolar correlation coefficient, ρ_HV(0), and Doppler spectrum width, σ_v. The techniques are again illustrated with data collected by POLDIRAD.
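Under the convention stated above, in which the slope of the differential propagation phase with range estimates the specific differential phase, and assuming the differential backscatter phase is zero, the estimate is a straight-line fit to the range profile. A toy sketch with invented numbers:

```python
import numpy as np

rng = np.random.default_rng(4)

# Range profile of the copolar differential phase (deg): a linear
# propagation term plus measurement noise; the backscatter phase
# contribution is assumed zero in this sketch.
r_km = np.linspace(0.0, 50.0, 200)
k_dp_true = 2.0                                      # deg/km
phi_dp = 30.0 + k_dp_true * r_km + rng.normal(0, 1.0, r_km.size)

# Least-squares slope of the profile with range estimates K_DP.
k_dp_est = np.polyfit(r_km, phi_dp, 1)[0]
```

When a backscatter phase component is present, it adds range-localized bumps to the profile, which is why the abstract notes that estimating the slope then requires dedicated algorithms rather than a single global fit.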
Survey of background scattering from materials found in small-angle neutron scattering.
Barker, J G; Mildner, D F R
2015-08-01
Measurements and calculations of beam attenuation and background scattering for common materials placed in a neutron beam are presented over the temperature range of 300-700 K. Time-of-flight (TOF) measurements have also been made, to determine the fraction of the background that is either inelastic or quasi-elastic scattering as measured with a 3He detector. Other background sources considered include double Bragg diffraction from windows or samples, scattering from gases, and phonon scattering from solids. Background from the residual air in detector vacuum vessels and scattering from the 3He detector dome are presented. The thickness dependence of the multiple scattering correction for forward scattering from water is calculated. Inelastic phonon background scattering at small angles for crystalline solids is both modeled and compared with measurements. Methods of maximizing the signal-to-noise ratio by material selection, choice of sample thickness and wavelength, removal of inelastic background by TOF or Be filters, and removal of spin-flip scattering with polarized beam analysis are discussed.
Superscattering of light optimized by a genetic algorithm
NASA Astrophysics Data System (ADS)
Mirzaei, Ali; Miroshnichenko, Andrey E.; Shadrivov, Ilya V.; Kivshar, Yuri S.
2014-07-01
We analyse scattering of light from multi-layer plasmonic nanowires and employ a genetic algorithm for optimizing the scattering cross section. We apply the mode-expansion method using experimental data for material parameters to demonstrate that our genetic algorithm allows designing realistic core-shell nanostructures with the superscattering effect achieved at any desired wavelength. This approach can be employed for optimizing both superscattering and cloaking at different wavelengths in the visible spectral range.
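A bare-bones evolutionary loop (elitist selection plus Gaussian mutation; crossover omitted for brevity) against a toy stand-in for the Mie/mode-expansion cross-section evaluation, to illustrate only the optimization structure:

```python
import numpy as np

rng = np.random.default_rng(5)

def cross_section(radii):
    """Toy stand-in for a mode-expansion evaluation of the scattering
    cross section of a core-shell nanowire, peaked at a known optimum
    so the optimizer's answer can be checked. Purely illustrative."""
    target = np.array([0.3, 0.5, 0.8])           # hypothetical shell radii
    return np.exp(-20.0 * np.sum((radii - target) ** 2))

def evolve(fitness, n_genes=3, pop=40, gens=80, mut=0.05):
    population = rng.random((pop, n_genes))
    for _ in range(gens):
        scores = np.array([fitness(p) for p in population])
        elite = population[np.argsort(scores)[::-1][: pop // 2]]   # selection
        kids = elite[rng.integers(0, pop // 2, pop - pop // 2)]
        kids = np.clip(kids + rng.normal(0.0, mut, kids.shape), 0, 1)  # mutation
        population = np.vstack([elite, kids])
    scores = np.array([fitness(p) for p in population])
    return population[np.argmax(scores)]

best = evolve(cross_section)
```

In the paper the fitness function is the physically computed cross section at the desired wavelength; the genetic machinery around it is what lets the design search handle the non-convex, multi-shell parameter space.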
Statistical reconstruction for cosmic ray muon tomography.
Schultz, Larry J; Blanpied, Gary S; Borozdin, Konstantin N; Fraser, Andrew M; Hengartner, Nicolas W; Klimenko, Alexei V; Morris, Christopher L; Orum, Chris; Sossong, Michael J
2007-08-01
Highly penetrating cosmic ray muons constantly shower the earth at a rate of about 1 muon per cm² per minute. We have developed a technique which exploits the multiple Coulomb scattering of these particles to perform nondestructive inspection without the use of artificial radiation. In prior work [1]-[3], we have described heuristic methods for processing muon data to create reconstructed images. In this paper, we present a maximum likelihood/expectation maximization tomographic reconstruction algorithm designed for the technique. This algorithm borrows much from techniques used in medical imaging, particularly emission tomography, but the statistics of muon scattering dictates differences. We describe the statistical model for multiple scattering, derive the reconstruction algorithm, and present simulated examples. We also propose methods to improve the robustness of the algorithm to experimental errors and events departing from the statistical model.
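The emission-tomography ancestry of the algorithm is the multiplicative ML-EM update. The plain Poisson form is shown below on a toy linear system; this is the generic ancestor, not the muon-scattering-specific statistics derived in the paper:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy forward model: system matrix A (measurements x voxels) and
# noiseless data y from a known non-negative object.
A = rng.random((80, 16)) + 0.1
x_true = rng.random(16) + 0.5
y = A @ x_true

# Plain Poisson ML-EM update:
#   x <- (x / A^T 1) * A^T (y / (A x))
# Multiplicative updates keep the estimate non-negative throughout.
x = np.ones(16)
sens = A.sum(axis=0)                  # sensitivity image A^T 1
for _ in range(200):
    x *= (A.T @ (y / (A @ x))) / sens
```

Each iteration re-weights voxels by the ratio of measured to predicted data backprojected through the system, which is the structure the paper adapts to the scattering-angle likelihood of muons.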
A new statistical PCA-ICA algorithm for location of R-peaks in ECG.
Chawla, M P S; Verma, H K; Kumar, Vinod
2008-09-16
The success of ICA to separate the independent components from the mixture depends on the properties of the electrocardiogram (ECG) recordings. This paper discusses some of the conditions of independent component analysis (ICA) that could affect the reliability of the separation and evaluation of issues related to the properties of the signals and number of sources. Principal component analysis (PCA) scatter plots are plotted to indicate the diagnostic features in the presence and absence of base-line wander in interpreting the ECG signals. In this analysis, a newly developed statistical algorithm by authors, based on the use of combined PCA-ICA for two correlated channels of 12-channel ECG data is proposed. ICA technique has been successfully implemented in identifying and removal of noise and artifacts from ECG signals. Cleaned ECG signals are obtained using statistical measures like kurtosis and variance of variance after ICA processing. This analysis also paper deals with the detection of QRS complexes in electrocardiograms using combined PCA-ICA algorithm. The efficacy of the combined PCA-ICA algorithm lies in the fact that the location of the R-peaks is bounded from above and below by the location of the cross-over points, hence none of the peaks are ignored or missed.
Recent advances in time series InSAR
NASA Astrophysics Data System (ADS)
Hooper, Andrew; Bekaert, David; Spaans, Karsten
2010-05-01
Despite the multiple successes of InSAR at measuring surface displacement, in many instances the signal over much of an image either decorrelates too quickly to be useful or is swamped by atmospheric noise. Time series InSAR methods seek to address these issues by essentially increasing the signal-to-noise ratio (SNR) through the use of more data. These techniques are particularly useful for applications where the strain rates detected at the surface are low, such as postseismic/interseismic motion, magma/fluid movement, landslides and reservoir exploitation. Our previous developments in this field have included a persistent scatterer algorithm based on spatial correlation, a full resolution small baseline approach based on the same strategy, and a procedure for combining the two [Hooper, GRL, 2008]. This combined method works well on small areas (up to one frame) at ERS or Envisat strip-map resolution. However, in applying it to larger areas, such as the Guerrero region of Mexico and western Anatolia in Turkey, or when processing data at higher resolution, e.g. from TerraSAR-X, computer resource problems can arise. We have therefore altered the processing strategy to involve smarter use of computer memory. Further improvement is achieved by the resampling of the selected pixels (whether persistent scatterers or distributed scatterers) to a coarser resolution - usually we do not require a resolution on the scale of individual resolution cells for geophysical applications. Aliasing is avoided by summing the phase of nearby selected pixels, weighted according to their estimated SNR. This is akin to smart multilooking, but note that better results can be achieved than by starting the analysis with low-resolution (multilooked) data. Another development concerns selecting pixels only in images where they appear reliable.
This allows for resolution cells that become correlated/decorrelated either in a temporary fashion, e.g., due to snow cover, or in a permanent way due to the appearance or removal of scatterers. The detection algorithm relies on the degree of spatial correlation for the pixel of interest in each image. We have also modified our 3-D phase-unwrapping algorithms to allow for the resulting differing combinations of coherent pixels in every interferogram. We demonstrate our improved techniques on volcanoes in Iceland and the 2006 slow-slip event in Guerrero, Mexico.
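The SNR-weighted summation used for the "smart multilooking" can be sketched in a few lines; summing unit phasors rather than averaging wrapped phase values directly is what avoids the aliasing problem mentioned above (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(7)

# Wrapped interferometric phase for a 4x4 block of selected pixels,
# with per-pixel SNR estimates; the true block phase is 0.5 rad.
true_phase = 0.5
phase = true_phase + rng.normal(0, 0.4, (4, 4))
snr = rng.random((4, 4)) + 0.5

# Sum SNR-weighted unit phasors, then take the angle: a single
# coarse-resolution phase for the block.
coarse_phase = np.angle(np.sum(snr * np.exp(1j * phase)))
```

Averaging the phase values themselves would fail near the ±π wrap; the complex sum is insensitive to where the wrap falls, and weighting by SNR lets reliable pixels dominate the coarse estimate.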
Removal of impulse noise clusters from color images with local order statistics
NASA Astrophysics Data System (ADS)
Ruchay, Alexey; Kober, Vitaly
2017-09-01
This paper proposes a novel algorithm for restoring images corrupted with clusters of impulse noise. The noise clusters often occur when the probability of impulse noise is very high. The proposed noise removal algorithm consists of detection of bulky impulse noise in three color channels with local order statistics followed by removal of the detected clusters by means of vector median filtering. With the help of computer simulation we show that the proposed algorithm is able to effectively remove clustered impulse noise. The performance of the proposed algorithm is compared in terms of image restoration metrics with that of common successful algorithms.
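The replacement step, vector median filtering, picks the window sample that minimizes the summed Euclidean distance to all other colour vectors in the window. A minimal sketch on an invented neighbourhood:

```python
import numpy as np

def vector_median(window):
    """Vector median of an (n, 3) set of RGB vectors: the sample that
    minimizes the summed Euclidean distance to all other samples."""
    d = np.linalg.norm(window[:, None, :] - window[None, :, :], axis=-1)
    return window[np.argmin(d.sum(axis=1))]

# A 3x3 neighbourhood flattened to 9 colour vectors, containing a
# detected two-pixel impulse cluster (white); the vector median
# restores a plausible colour from the majority.
patch = np.array([[30, 90, 40]] * 7 + [[255, 255, 255]] * 2, dtype=float)
restored = vector_median(patch)
```

Unlike a channel-wise scalar median, the vector median always returns one of the actual colour vectors in the window, so it cannot fabricate colours that never occurred in the neighbourhood.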
A generic EEG artifact removal algorithm based on the multi-channel Wiener filter
NASA Astrophysics Data System (ADS)
Somers, Ben; Francart, Tom; Bertrand, Alexander
2018-06-01
Objective. The electroencephalogram (EEG) is an essential neuro-monitoring tool for both clinical and research purposes, but is susceptible to a wide variety of undesired artifacts. Removal of these artifacts is often done using blind source separation techniques, relying on a purely data-driven transformation, which may sometimes fail to sufficiently isolate artifacts in only one or a few components. Furthermore, some algorithms perform well for specific artifacts, but not for others. In this paper, we aim to develop a generic EEG artifact removal algorithm, which allows the user to annotate a few artifact segments in the EEG recordings to inform the algorithm. Approach. We propose an algorithm based on the multi-channel Wiener filter (MWF), in which the artifact covariance matrix is replaced by a low-rank approximation based on the generalized eigenvalue decomposition. The algorithm is validated using both hybrid and real EEG data, and is compared to other algorithms frequently used for artifact removal. Main results. The MWF-based algorithm successfully removes a wide variety of artifacts with better performance than current state-of-the-art methods. Significance. Current EEG artifact removal techniques often have limited applicability due to their specificity to one kind of artifact, their complexity, or simply because they are too ‘blind’. This paper demonstrates a fast, robust and generic algorithm for removal of EEG artifacts of various types, i.e. those that were annotated as unwanted by the user.
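A compact sketch of the GEVD-based low-rank MWF idea on synthetic multichannel data. The artifact rank is fixed to 1 by hand here, whereas the paper selects the low-rank approximation more carefully; everything below is an illustrative toy, not the authors' implementation:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(8)

# Toy 8-channel recording: unit-variance "neural" background plus a
# rank-1 eye-blink-like artifact present only in annotated segments.
n_ch, n_t = 8, 5000
clean = rng.standard_normal((n_ch, n_t))
topo = rng.standard_normal((n_ch, 1))                     # artifact topography
blink = topo @ (5.0 * np.maximum(rng.standard_normal((1, n_t)), 0.0))
mask = np.zeros(n_t, dtype=bool)
mask[1000:2000] = True                                    # user annotation
y = clean + blink * mask

# Covariances from annotated artifact segments vs. artifact-free ones.
Ryy = np.cov(y[:, mask])
Rvv = np.cov(y[:, ~mask])

# GEVD of (Ryy, Rvv); keep only the dominant generalized eigenvector
# to build a rank-1 approximation of the artifact covariance.
w, V = eigh(Ryy, Rvv)                 # ascending generalized eigenvalues
lam = np.zeros(n_ch)
lam[-1] = max(w[-1] - 1.0, 0.0)
Vinv = np.linalg.inv(V)
Rdd = Vinv.T @ np.diag(lam) @ Vinv    # low-rank artifact covariance

# Multi-channel Wiener estimate of the artifact, subtracted from the data.
W_mwf = Rdd @ np.linalg.inv(Ryy)
cleaned = y - W_mwf @ y
```

The low-rank replacement is the key step: it confines the filter to the artifact subspace identified from the annotated segments, so the rest of the EEG passes through with little distortion.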
Cross-wind profiling based on the scattered wave scintillation in a telescope focus.
Banakh, V A; Marakasov, D A; Vorontsov, M A
2007-11-20
The problem of wind profile reconstruction from scintillation of an optical wave scattered off a rough surface in a telescope focus plane is considered. Both the expression for the spatiotemporal correlation function and the algorithm of cross-wind velocity and direction profiles reconstruction based on the spatiotemporal spectrum of intensity of an optical wave scattered by a diffuse target in a turbulent atmosphere are presented. Computer simulations performed under conditions of weak optical turbulence show wind profiles reconstruction by the developed algorithm.
A combined surface/volume scattering retracking algorithm for ice sheet satellite altimetry
NASA Technical Reports Server (NTRS)
Davis, Curt H.
1992-01-01
An algorithm that is based upon a combined surface-volume scattering model is developed. It can be used to retrack individual altimeter waveforms over ice sheets. An iterative least-squares procedure is used to fit the combined model to the return waveforms. The retracking algorithm comprises two distinct sections. The first generates initial model parameter estimates from a filtered altimeter waveform. The second uses the initial estimates, the theoretical model, and the waveform data to generate corrected parameter estimates. This retracking algorithm can be used to assess the accuracy of elevations produced from current retracking algorithms when subsurface volume scattering is present. This is extremely important so that repeated altimeter elevation measurements can be used to accurately detect changes in the mass balance of the ice sheets. By analyzing the distribution of the model parameters over large portions of the ice sheet, regional and seasonal variations in the near-surface properties of the snowpack can be quantified.
A Hierarchical Algorithm for Fast Debye Summation with Applications to Small Angle Scattering
Gumerov, Nail A.; Berlin, Konstantin; Fushman, David; Duraiswami, Ramani
2012-01-01
Debye summation, which involves the summation of sinc functions of distances between all pair of atoms in three dimensional space, arises in computations performed in crystallography, small/wide angle X-ray scattering (SAXS/WAXS) and small angle neutron scattering (SANS). Direct evaluation of Debye summation has quadratic complexity, which results in computational bottleneck when determining crystal properties, or running structure refinement protocols that involve SAXS or SANS, even for moderately sized molecules. We present a fast approximation algorithm that efficiently computes the summation to any prescribed accuracy ε in linear time. The algorithm is similar to the fast multipole method (FMM), and is based on a hierarchical spatial decomposition of the molecule coupled with local harmonic expansions and translation of these expansions. An even more efficient implementation is possible when the scattering profile is all that is required, as in small angle scattering reconstruction (SAS) of macromolecules. We examine the relationship of the proposed algorithm to existing approximate methods for profile computations, and show that these methods may result in inaccurate profile computations, unless an error bound derived in this paper is used. Our theoretical and computational results show orders of magnitude improvement in computation complexity over existing methods, while maintaining prescribed accuracy. PMID:22707386
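The direct quadratic-cost baseline that the hierarchical algorithm accelerates is just a double sum of sinc terms over all atom pairs; for unit form factors:

```python
import numpy as np

def debye_intensity(coords, q):
    """Direct O(N^2) Debye sum I(q) = sum_ij sin(q r_ij)/(q r_ij) for
    unit form factors. np.sinc(x) = sin(pi x)/(pi x), hence the
    rescaling by pi; the r_ij = 0 diagonal terms correctly give 1."""
    r = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return np.sinc(q * r / np.pi).sum()

# Scattering profile of a small random "molecule" (invented coordinates).
rng = np.random.default_rng(9)
coords = rng.random((50, 3)) * 10.0
q_grid = np.linspace(0.1, 2.0, 40)
profile = np.array([debye_intensity(coords, q) for q in q_grid])
```

Each profile point costs O(N²) pairwise distances, which is exactly the bottleneck the paper's FMM-like hierarchical decomposition reduces to linear time.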
Broadband Tomography System: Direct Time-Space Reconstruction Algorithm
NASA Astrophysics Data System (ADS)
Biagi, E.; Capineri, Lorenzo; Castellini, Guido; Masotti, Leonardo F.; Rocchi, Santina
1989-10-01
In this paper a new ultrasound tomographic imaging algorithm is presented. A complete laboratory system was built to test the algorithm under experimental conditions. The proposed system is based on a physical model consisting of a bidimensional distribution of single scattering elements. Multiple scattering is neglected, so the Born approximation is assumed. This tomographic technique requires only two orthogonal scanning sections. For each rotational position of the object, data are collected by means of the complete data set method in transmission mode. After numeric envelope detection, the received signals are back-projected into the space domain through a scalar function. The reconstruction of each scattering element is accomplished by correlating the ultrasound time of flight and attenuation with the locus of possible positions of the scattering element. This locus is an ellipse with foci located at the transmitter and receiver positions. In the image matrix the contributions of the ellipses are coherently summed at the position of the scattering element. Computer simulations of cylindrical objects demonstrate the performance of the reconstruction algorithm. Preliminary experimental results illustrate the features of the laboratory system. On the basis of these results, an experimental procedure to test the confidence and repeatability of ultrasonic measurements on the human carotid vessel is proposed.
NASA Astrophysics Data System (ADS)
Zhang, Xiaolei; Zhang, Xiangchao; Yuan, He; Zhang, Hao; Xu, Min
2018-02-01
Digital holography is a promising measurement method in the fields of bio-medicine and micro-electronics, but the captured images are severely polluted by speckle noise caused by optical scattering and diffraction. By analyzing the properties of Fresnel diffraction and the topographies of micro-structures, a novel reconstruction method based on the dual-tree complex wavelet transform (DT-CWT) is proposed. This algorithm is shift-invariant and capable of obtaining sparse representations of the diffracted signals of salient features; it is therefore well suited for multiresolution processing of the interferometric holograms of directional morphologies. An explicit representation of orthogonal Fresnel DT-CWT bases and a specific filtering method are developed. This method can effectively remove the speckle noise without destroying salient features. Finally, the proposed reconstruction method is compared with the conventional Fresnel diffraction integration and Fresnel wavelet transform with compressive sensing methods to demonstrate its superiority in topography reconstruction and speckle removal.
Skull removal in MR images using a modified artificial bee colony optimization algorithm.
Taherdangkoo, Mohammad
2014-01-01
Removal of the skull from brain Magnetic Resonance (MR) images is an important preprocessing step required for other image analysis techniques such as brain tissue segmentation. In this paper, we propose a new algorithm based on the Artificial Bee Colony (ABC) optimization algorithm to remove the skull region from brain MR images. We modify the ABC algorithm using a different strategy for initializing the coordinates of scout bees and their direction of search. Moreover, we impose an additional constraint to the ABC algorithm to avoid the creation of discontinuous regions. We found that our algorithm successfully removed all bony skull from a sample of de-identified MR brain images acquired from different model scanners. The obtained results of the proposed algorithm compared with those of previously introduced well known optimization algorithms such as Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) demonstrate the superior results and computational performance of our algorithm, suggesting its potential for clinical applications.
NASA Astrophysics Data System (ADS)
Fiorino, Steven T.; Elmore, Brannon; Schmidt, Jaclyn; Matchefts, Elizabeth; Burley, Jarred L.
2016-05-01
Properly accounting for multiple scattering effects can have important implications for remote sensing and possibly directed energy applications. For example, increasing path radiance can affect signal noise. This study describes the implementation of a fast-calculating two-stream-like multiple scattering algorithm that captures azimuthal and elevation variations into the Laser Environmental Effects Definition and Reference (LEEDR) atmospheric characterization and radiative transfer code. The multiple scattering algorithm fully solves for molecular, aerosol, cloud, and precipitation single-scatter layer effects with a Mie algorithm at every calculation point/layer rather than using an interpolated value from a pre-calculated look-up table. This top-down cumulative diffusivity method first considers the incident solar radiance contribution to a given layer accounting for solid angle and elevation, and it then measures the contribution of diffused energy from previous layers based on the transmission of the current level to produce a cumulative radiance that is reflected from a surface and measured at the observer's aperture. Then a unique set of asymmetry and backscattering phase function parameters is calculated, which accounts for the radiance loss due to the molecular and aerosol constituent reflectivity within a level and allows a more accurate characterization of diffuse layers that contribute to multiply scattered radiances in inhomogeneous atmospheres. The code logic is valid for spectral bands between 200 nm and radio wavelengths, and the accuracy is demonstrated by comparing the results from LEEDR to observed sky radiance data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andersen, A; Casares-Magaz, O; Elstroem, U
Purpose: Cone-beam CT (CBCT) imaging may enable image- and dose-guided proton therapy, but is challenged by image artefacts. The aim of this study was to demonstrate the general applicability of a previously developed a priori scatter correction algorithm to allow CBCT-based proton dose calculations. Methods: The a priori scatter correction algorithm used a plan CT (pCT) and raw cone-beam projections acquired with the Varian On-Board Imager. The projections were initially corrected for bow-tie filtering and beam hardening and subsequently reconstructed using the Feldkamp-Davis-Kress algorithm (rawCBCT). The rawCBCTs were intensity normalised before rigid and deformable registrations were applied to map the pCTs to the rawCBCTs. The resulting images were forward projected onto the same angles as the raw CB projections. The two sets of projections were subtracted from each other, Gaussian and median filtered, then subtracted from the raw projections, and finally reconstructed into the scatter-corrected CBCTs. For evaluation, water equivalent path length (WEPL) maps (from anterior to posterior) were calculated on different reconstructions of three data sets (CB projections and pCT) of three parts of an Alderson phantom. Finally, single-beam spot-scanning proton plans (0–360 deg gantry angle in steps of 5 deg; using PyTRiP) treating a 5 cm central spherical target in the pCT were re-calculated on scatter-corrected CBCTs with identical targets. Results: The scatter-corrected CBCTs resulted in sub-mm mean WEPL differences relative to the rigid registration of the pCT for all three data sets. These differences were considerably smaller than what was achieved with the regular Varian CBCT reconstruction algorithm (1–9 mm mean WEPL differences). Target coverage in the re-calculated plans was generally improved using the scatter-corrected CBCTs compared to the Varian CBCT reconstruction.
Conclusion: We have demonstrated the general applicability of a priori CBCT scatter correction, potentially enabling CBCT-based image- and dose-guided proton therapy, including adaptive strategies. Research agreement with Varian Medical Systems, not connected to the present project.
Remote sensing of the diffuse attenuation coefficient of ocean water. [coastal zone color scanner
NASA Technical Reports Server (NTRS)
Austin, R. W.
1981-01-01
A technique was devised which uses remotely sensed spectral radiances from the sea to assess the optical diffuse attenuation coefficient, K(lambda), of near-surface ocean water. With spectral image data from a sensor such as the coastal zone color scanner (CZCS) carried on NIMBUS-7, it is possible to rapidly compute the K(lambda) fields for large ocean areas and obtain K "images" which show the synoptic spatial distribution of this attenuation coefficient. The technique utilizes a relationship that has been determined between the value of K and the ratio of the upwelling radiances leaving the sea surface at two wavelengths. The relationship was developed to provide an algorithm for inferring K from the radiance images obtained by the CZCS; thus the wavelengths were selected from those used by this sensor, viz., 443, 520, 550 and 670 nm. The majority of the radiance arriving at the spacecraft is the result of scattering in the atmosphere and is unrelated to the radiance signal generated by the water. A necessary step in the processing of the data received by the sensor is, therefore, the effective removal of these atmospheric path radiance signals before the K algorithm is applied. Examples of the efficacy of these removal techniques are given together with examples of the spatial distributions of K in several ocean areas.
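The ratio-based K retrieval described above reduces to a one-line band-ratio model once atmospheric path radiance has been removed. The power-law form and coefficient values below are placeholders for illustration, not the published CZCS regression.

```python
def diffuse_attenuation(Lw443, Lw550, a=0.022, b=0.088, n=-1.491):
    """Illustrative band-ratio estimate of the diffuse attenuation
    coefficient K (per meter) from water-leaving radiances at 443 and
    550 nm. The form K = a + b * (Lw443/Lw550)**n follows the
    ratio-based approach in the abstract; the coefficients a, b, n are
    hypothetical placeholders, not the published regression values."""
    return a + b * (Lw443 / Lw550) ** n
```

Applied pixel-by-pixel to atmospherically corrected radiance images, a relation of this shape yields the synoptic K "images" the abstract describes: a high blue/green ratio (clear water) maps to low K, a low ratio to high K.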
An explicit canopy BRDF model and inversion. [Bidirectional Reflectance Distribution Function
NASA Technical Reports Server (NTRS)
Liang, Shunlin; Strahler, Alan H.
1992-01-01
Based on a rigorous canopy radiative transfer equation, the multiple scattering radiance is approximated by asymptotic theory, and the single scattering radiance calculation, which requires a numerical integration to account for the hotspot effect, is simplified. A new formulation is presented to obtain a more exact angular dependence of the sky radiance distribution. The unscattered solar radiance and single scattering radiance are calculated exactly, and the multiple scattering is approximated by the delta two-stream atmospheric radiative transfer model. Numerical tests show that the parametric canopy model is very accurate, especially when the viewing angles are smaller than 55 deg. The Powell algorithm is used to retrieve biospheric parameters from ground-measured multiangle observations.
Robot computer problem solving system
NASA Technical Reports Server (NTRS)
Merriam, E. W.; Becker, J. D.
1973-01-01
A robot computer problem solving system which represents a robot exploration vehicle in a simulated Mars environment is described. The model exhibits changes and improvements made on a previously designed robot in a city environment. The Martian environment is modeled in Cartesian coordinates; objects are scattered about a plane; arbitrary restrictions on the robot's vision have been removed; and the robot's path contains arbitrary curves. New environmental features, particularly the visual occlusion of objects by other objects, were added to the model. Two different algorithms were developed for computing occlusion. Movement and vision capabilities of the robot were established in the Mars environment, using LISP/FORTRAN interface for computational efficiency. The graphical display program was redesigned to reflect the change to the Mars-like environment.
Accelerated simulation of stochastic particle removal processes in particle-resolved aerosol models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis, J.H.; Michelotti, M.D.; Riemer, N.
2016-10-01
Stochastic particle-resolved methods have proven useful for simulating multi-dimensional systems such as composition-resolved aerosol size distributions. While particle-resolved methods have substantial benefits for highly detailed simulations, these techniques suffer from high computational cost, motivating efforts to improve their algorithmic efficiency. Here we formulate an algorithm for accelerating particle removal processes by aggregating particles of similar size into bins. We present the Binned Algorithm for particle removal processes and analyze its performance with application to the atmospherically relevant process of aerosol dry deposition. We show that the Binned Algorithm can dramatically improve the efficiency of particle removals, particularly for low removal rates, and that computational cost is reduced without introducing additional error. In simulations of aerosol particle removal by dry deposition under atmospherically relevant conditions, we demonstrate an approximately 50-fold increase in algorithm efficiency.
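One plausible reading of the binning idea is rejection sampling within size bins: removal candidates are drawn at each bin's maximum rate and then accepted in proportion to their true size-dependent rate, so the expensive per-particle rate is evaluated only for likely removals. This is an illustrative sketch, not the paper's Binned Algorithm as implemented; it assumes the removal rate is monotone within a bin (so the bin edges bound it) and that at least two distinct sizes are present.

```python
import math
import random

def binned_removal(diameters, rate_of, dt, n_bins=8, seed=0):
    """Sketch of binned stochastic removal over one time step dt.

    Particles are grouped into log-spaced diameter bins; within each bin,
    candidates are drawn at the bin's bounding rate and accepted with
    probability rate_of(d)/rate_max (rejection sampling). Illustrative
    only; names and structure are assumptions, not the paper's code."""
    rng = random.Random(seed)
    dmin, dmax = min(diameters), max(diameters)
    edges = [dmin * (dmax / dmin) ** (i / n_bins) for i in range(n_bins + 1)]
    bins = [[] for _ in range(n_bins)]
    span = math.log(dmax / dmin)
    for idx, d in enumerate(diameters):
        b = min(n_bins - 1, int(n_bins * math.log(d / dmin) / span))
        bins[b].append(idx)
    removed = []
    for b, members in enumerate(bins):
        if not members:
            continue
        # bounding rate for the bin, assuming rate_of is monotone in d
        rmax = max(rate_of(edges[b]), rate_of(edges[b + 1]))
        p_max = 1.0 - math.exp(-rmax * dt)
        for idx in members:
            if rng.random() < p_max:  # candidate drawn at the bin's max rate
                if rng.random() < rate_of(diameters[idx]) / rmax:  # accept
                    removed.append(idx)
    return removed
```

For low removal rates most bins produce no candidates at all, which is where a scheme like this saves work relative to testing every particle individually.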
MUSIC Algorithms for Rebar Detection
NASA Astrophysics Data System (ADS)
Leone, G.; Solimene, R.
2012-04-01
In this contribution we consider the problem of detecting and localizing scatterers of small cross section, with respect to the wavelength, from their scattered field once a known incident field has interrogated the scene in which they reside. A pertinent applicative context is rebar detection within concrete pillars. In this case, the scatterers to be detected are the rebars themselves or voids due to their absence. In both cases, as the scatterers have point-like support, a subspace projection method can be conveniently exploited [1]. However, as the field scattered by rebars is stronger than that due to voids, the latter can be expected to be difficult to detect. To circumvent this problem, in this contribution we adopt a two-step MUltiple SIgnal Classification (MUSIC) detection algorithm. In particular, the first stage aims at detecting rebars. Once the rebars are detected, their positions are exploited to update the Green's function, and a further detection scheme is run to locate voids; in this second case, the background medium also encompasses the rebars. The analysis is conducted numerically for a simplified two-dimensional scalar scattering geometry. In more detail, as is usual with MUSIC algorithms, a multi-view/multi-static single-frequency configuration is considered [2]. [1] Baratonia, G. Leone, R. Pierri, R. Solimene, "Fault Detection in Grid Scattering by a Time-Reversal MUSIC Approach," Proc. of ICEAA 2011, Turin, 2011. [2] E. A. Marengo, F. K. Gruber, "Subspace-Based Localization and Inverse Scattering of Multiply Scattering Point Targets," EURASIP Journal on Advances in Signal Processing, 2007, Article ID 17342, 16 pages (2007).
Memory sparing, fast scattering formalism for rigorous diffraction modeling
NASA Astrophysics Data System (ADS)
Iff, W.; Kämpfe, T.; Jourlin, Y.; Tishchenko, A. V.
2017-07-01
The basics and algorithmic steps of a novel scattering formalism suited for memory sparing and fast electromagnetic calculations are presented. The formalism, called ‘S-vector algorithm’ (by analogy with the known scattering-matrix algorithm), allows the calculation of the collective scattering spectra of individual layered micro-structured scattering objects. A rigorous method of linear complexity is applied to model the scattering at individual layers; here the generalized source method (GSM) resorting to Fourier harmonics as basis functions is used as one possible method of linear complexity. The concatenation of the individual scattering events can be achieved sequentially or in parallel, both having pros and cons. The present development will largely concentrate on a consecutive approach based on the multiple reflection series. The latter will be reformulated into an implicit formalism which will be associated with an iterative solver, resulting in improved convergence. The examples will first refer to 1D grating diffraction for the sake of simplicity and intelligibility, with a final 2D application example.
Hanging drop crystal growth apparatus
NASA Technical Reports Server (NTRS)
Naumann, Robert J. (Inventor); Witherow, William K. (Inventor); Carter, Daniel C. (Inventor); Bugg, Charles E. (Inventor); Suddath, Fred L. (Inventor)
1990-01-01
This invention relates generally to control systems for controlling crystal growth, and more particularly to such a system which uses a beam of light refracted by the fluid in which crystals are growing to detect the concentration of solutes in the liquid. In a hanging drop apparatus, a laser beam is directed onto the drop, which refracts the laser light into primary and secondary bows that in turn fall upon linear diode detector arrays. As the concentration of solutes in the drop increases due to solvent removal, these bows move farther apart on the arrays, with the relative separation being detected by the arrays and used by a computer to adjust solvent vapor transport from the drop. A forward scattering detector is used to detect crystal nucleation in the drop, and a humidity detector is used, in one embodiment, to detect relative humidity in the enclosure wherein the drop is suspended. The novelty of this invention lies in utilizing the angular variance of light refracted from the drop to infer, by a computer algorithm, the concentration of solutes therein. Additional novelty is believed to lie in using a forward scattering detector to detect nucleating crystallites in the drop.
Scattering calculation and image reconstruction using elevation-focused beams
Duncan, David P.; Astheimer, Jeffrey P.; Waag, Robert C.
2009-01-01
Pressure scattered by cylindrical and spherical objects with elevation-focused illumination and reception has been analytically calculated, and corresponding cross sections have been reconstructed with a two-dimensional algorithm. Elevation focusing was used to elucidate constraints on quantitative imaging of three-dimensional objects with two-dimensional algorithms. Focused illumination and reception are represented by angular spectra of plane waves that were efficiently computed using a Fourier interpolation method to maintain the same angles for all temporal frequencies. Reconstructions were formed using an eigenfunction method with multiple frequencies, phase compensation, and iteration. The results show that the scattered pressure reduces to a two-dimensional expression, and two-dimensional algorithms are applicable when the region of a three-dimensional object within an elevation-focused beam is approximately constant in elevation. The results also show that energy scattered out of the reception aperture by objects contained within the focused beam can result in the reconstructed values of attenuation slope being greater than true values at the boundary of the object. Reconstructed sound speed images, however, appear to be relatively unaffected by the loss in scattered energy. The broad conclusion that can be drawn from these results is that two-dimensional reconstructions require compensation to account for uncaptured three-dimensional scattering. PMID:19425653
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fan, Chengguang; Drinkwater, Bruce W.
In this paper the performance of the total focusing method is compared with the widely used time-reversal MUSIC super-resolution technique. The algorithms are tested with simulated and experimental ultrasonic array data, each containing different noise levels. The simulated time-domain signals allow the effects of array geometry, frequency, scatterer location, scatterer size, scatterer separation, and random noise to be carefully controlled. The performance of the imaging algorithms is evaluated in terms of resolution and sensitivity to random noise. It is shown that for the low-noise situation, time-reversal MUSIC provides enhanced lateral resolution when compared to the total focusing method. However, for higher noise levels, the total focusing method shows robustness, whilst the performance of time-reversal MUSIC is significantly degraded.
Development of a 3D muon disappearance algorithm for muon scattering tomography
NASA Astrophysics Data System (ADS)
Blackwell, T. B.; Kudryavtsev, V. A.
2015-05-01
Upon passing through a material, muons lose energy, scatter off nuclei and atomic electrons, and can stop in the material. Muons will more readily lose energy in higher density materials. Therefore multiple muon disappearances within a localized volume may signal the presence of high-density materials. We have developed a new technique that improves the sensitivity of standard muon scattering tomography. This technique exploits these muon disappearances to perform non-destructive assay of an inspected volume. Muons that disappear have their track evaluated using a 3D line extrapolation algorithm, which is in turn used to construct a 3D tomographic image of the inspected volume. Results of Monte Carlo simulations that measure muon disappearance in different types of target materials are presented. The ability to differentiate between different density materials using the 3D line extrapolation algorithm is established. Finally the capability of this new muon disappearance technique to enhance muon scattering tomography techniques in detecting shielded HEU in cargo containers has been demonstrated.
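The 3D line-extrapolation step can be illustrated as a simple voxel vote: each disappeared muon's incoming track is stepped through the inspected volume and every traversed voxel is incremented, so voxels accumulating many disappearing tracks hint at dense, stopping material. The geometry, names, and parameters below are hypothetical, not the paper's implementation.

```python
def extrapolate_tracks(tracks, grid_n=10, extent=1.0, steps=200):
    """Vote-based 3D line extrapolation sketch for muon disappearance.

    tracks: list of (p, d) pairs, where p is the entry point (x, y, z)
    in [0, extent]^3 and d is a unit direction. Each track's straight-line
    extrapolation is sampled in `steps` increments; every voxel it
    traverses gets one vote per track. Illustrative geometry only."""
    votes = {}
    for p, d in tracks:
        hit = set()  # count each voxel at most once per track
        for s in range(steps):
            t = extent * s / steps
            v = tuple(int(grid_n * (p[i] + t * d[i]) / extent) for i in range(3))
            if all(0 <= vi < grid_n for vi in v):
                hit.add(v)
        for v in hit:
            votes[v] = votes.get(v, 0) + 1
    return votes
```

In a real assay the vote map would be combined with the scattering-angle tomogram; here, two crossing tracks suffice to show that the shared voxel stands out.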
NASA Astrophysics Data System (ADS)
Suleiman, R. M.; Chance, K.; Liu, X.; Kurosu, T. P.; Gonzalez Abad, G.
2014-12-01
We present and discuss a detailed description of the retrieval algorithms for the OMI BrO product. The BrO algorithms are based on direct fitting of radiances from 319.0-347.5 nm. Radiances are modeled from the solar irradiance, attenuated and adjusted by contributions from the target gas and interfering gases, rotational Raman scattering, undersampling, additive and multiplicative closure polynomials, and a common mode spectrum. The version of the algorithm used for BrO includes relevant changes with respect to the operational code, including the fit of the O2-O2 collisional complex, updates in the high-resolution solar reference spectrum, updates in spectroscopy, an updated Air Mass Factor (AMF) calculation scheme, and the inclusion of scattering weights and vertical profiles in the level 2 products. We include retrieval parameter and window optimization to reduce the interference from O3, HCHO, O2-O2, and SO2, improve fitting accuracy and uncertainty, reduce striping, and improve the long-term stability. We validate OMI BrO with ground-based measurements from Harestua and with chemical transport model simulations. We analyze the global distribution and seasonal variation of BrO and investigate BrO emissions from volcanoes and salt lakes.
NASA Technical Reports Server (NTRS)
Collins, Jeffery D.; Volakis, John L.; Jin, Jian-Ming
1990-01-01
A new technique is presented for computing the scattering by 2-D structures of arbitrary composition. The proposed solution approach combines the usual finite element method with the boundary-integral equation to formulate a discrete system. This is subsequently solved via the conjugate gradient (CG) algorithm. A particular characteristic of the method is the use of rectangular boundaries to enclose the scatterer. Several of the resulting boundary integrals are therefore convolutions and may be evaluated via the fast Fourier transform (FFT) in the implementation of the CG algorithm. The solution approach offers the principal advantage of having O(N) memory demand and employs a 1-D FFT versus a 2-D FFT as required with a traditional implementation of the CGFFT algorithm. The speed of the proposed solution method is compared with that of the traditional CGFFT algorithm, and results for rectangular bodies are given and shown to be in excellent agreement with the moment method.
NASA Technical Reports Server (NTRS)
Collins, Jeffery D.; Volakis, John L.
1989-01-01
A new technique is presented for computing the scattering by 2-D structures of arbitrary composition. The proposed solution approach combines the usual finite element method with the boundary integral equation to formulate a discrete system. This is subsequently solved via the conjugate gradient (CG) algorithm. A particular characteristic of the method is the use of rectangular boundaries to enclose the scatterer. Several of the resulting boundary integrals are therefore convolutions and may be evaluated via the fast Fourier transform (FFT) in the implementation of the CG algorithm. The solution approach offers the principal advantage of having O(N) memory demand and employs a 1-D FFT versus a 2-D FFT as required with a traditional implementation of the CGFFT algorithm. The speed of the proposed solution method is compared with that of the traditional CGFFT algorithm, and results for rectangular bodies are given and shown to be in excellent agreement with the moment method.
Enhancing scattering images for orientation recovery with diffusion map
Winter, Martin; Saalmann, Ulf; Rost, Jan M.
2016-02-12
We explore the possibility of orientation recovery in single-molecule coherent diffractive imaging with diffusion map. This algorithm approximates the Laplace-Beltrami operator, which we diagonalize with a metric that corresponds to the mapping of Euler angles onto scattering images. While suitable for images of objects with specific properties, we show why this approach fails for realistic molecules. Here, we introduce a modification of the form factor in the scattering images which facilitates the orientation recovery and should be suitable for all recovery algorithms based on the distance of individual images. (C) 2016 Optical Society of America
Multiple scattering and the density distribution of a Cs MOT.
Overstreet, K; Zabawa, P; Tallant, J; Schwettmann, A; Shaffer, J
2005-11-28
Multiple scattering is studied in a Cs magneto-optical trap (MOT). We use two Abel inversion algorithms to recover density distributions of the MOT from fluorescence images. Deviations of the density distribution from a Gaussian are attributed to multiple scattering.
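Abel inversion of a cylindrically symmetric fluorescence image can be sketched with the onion-peeling discretization, one of several common schemes (the abstract does not specify which two algorithms were used, so this is a generic illustration).

```python
import math

def onion_peel_abel(projection, dr=1.0):
    """Onion-peeling Abel inversion sketch.

    Assumes the emitter is built of concentric uniform shells of width dr;
    the line-of-sight integral at lateral offset j*dr is then a triangular
    linear system in the shell densities, solved from the outermost chord
    inward by back-substitution. Illustrative, not the paper's code."""
    n = len(projection)

    def chord(j, i):
        # chord length through shell i (radii i*dr .. (i+1)*dr) at offset j*dr
        outer = math.sqrt(((i + 1) * dr) ** 2 - (j * dr) ** 2)
        inner = math.sqrt(max((i * dr) ** 2 - (j * dr) ** 2, 0.0))
        return 2.0 * (outer - inner)

    f = [0.0] * n
    for j in range(n - 1, -1, -1):  # peel from the outside in
        s = sum(chord(j, i) * f[i] for i in range(j + 1, n))
        f[j] = (projection[j] - s) / chord(j, j)
    return f
```

Applied to a row of a MOT fluorescence image, the recovered shell values give the radial density profile, whose deviation from a Gaussian is what the abstract attributes to multiple scattering.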
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bootsma, G. J., E-mail: Gregory.Bootsma@rmp.uhn.on.ca; Verhaegen, F.; Medical Physics Unit, Department of Oncology, McGill University, Montreal, Quebec H3G 1A4
2015-01-15
Purpose: X-ray scatter is a significant impediment to image quality improvements in cone-beam CT (CBCT). The authors present and demonstrate a novel scatter correction algorithm using a scatter estimation method that simultaneously combines multiple Monte Carlo (MC) CBCT simulations through the use of a concurrently evaluated fitting function, referred to as concurrent MC fitting (CMCF). Methods: The CMCF method uses concurrently run MC CBCT scatter projection simulations that are a subset of the projection angles used in the projection set, P, to be corrected. The scattered photons reaching the detector in each MC simulation are simultaneously aggregated by an algorithm which computes the scatter detector response, S{sub MC}. S{sub MC} is fit to a function, S{sub F}, and if the fit of S{sub F} is within a specified goodness of fit (GOF), the simulations are terminated. The fit, S{sub F}, is then used to interpolate the scatter distribution over all pixel locations for every projection angle in the set P. The CMCF algorithm was tested using a frequency-limited sum of sines and cosines as the fitting function on both simulated and measured data. The simulated data consisted of an anthropomorphic head and a pelvis phantom created from CT data, simulated with and without the use of a compensator. The measured data were a pelvis scan of a phantom and a patient taken on an Elekta Synergy platform. The simulated data were used to evaluate various GOF metrics as well as to determine a suitable fitness value, and also to quantitatively evaluate the image quality improvements provided by the CMCF method. A qualitative analysis was performed on the measured data by comparing the CMCF scatter-corrected reconstruction to the original uncorrected reconstruction, a reconstruction corrected with a constant scatter estimate, and a reconstruction created using a set of projections taken with a small cone angle.
Results: Pearson's correlation, r, proved to be a suitable GOF metric, correlating strongly with the actual error of the scatter fit, S{sub F}. Fitting the scatter distribution to a limited sum of sine and cosine functions using a low-pass filtered fast Fourier transform provided a computationally efficient and accurate fit. The CMCF algorithm reduces the number of photon histories required by over four orders of magnitude. The simulated experiments showed that using a compensator reduced the computational time by a factor between 1.5 and 1.75. The scatter estimates for the simulated and measured data were computed in 35–93 s and 114–122 s, respectively, using 16 Intel Xeon cores (3.0 GHz). The CMCF scatter correction improved the contrast-to-noise ratio by 10%–50% and reduced the reconstruction error to under 3% for the simulated phantoms. Conclusions: The novel CMCF algorithm significantly reduces the computation time required to estimate the scatter distribution by reducing the statistical noise in the MC scatter estimate and limiting the number of projection angles that must be simulated. Using the scatter estimate provided by the CMCF algorithm to correct both simulated and real projection data showed improved reconstruction image quality.
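The Pearson goodness-of-fit gate used to decide when enough photon histories have been simulated can be sketched directly; the 0.99 stopping threshold below is an illustrative choice, not a value from the paper.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences,
    e.g. a Monte Carlo scatter estimate and its fitted smooth surrogate."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def converged(mc_estimate, fit, threshold=0.99):
    """Stopping rule sketch: terminate the concurrent MC simulations once
    the fit tracks the MC estimate closely enough. Threshold is assumed."""
    return pearson_r(mc_estimate, fit) >= threshold
```

Because r is scale- and offset-invariant, it flags when the fitted surface has captured the shape of the noisy MC scatter response rather than its absolute level, which matches its role as a convergence gate here.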
NASA Astrophysics Data System (ADS)
Wu, Yueqian; Yang, Minglin; Sheng, Xinqing; Ren, Kuan Fang
2015-05-01
Light scattering properties of absorbing particles, such as mineral dusts, attract wide attention due to their importance in geophysical and environmental research. Owing to absorption, the scattering properties of such particles differ from those of non-absorbing particles. Simply shaped absorbing particles such as spheres and spheroids have been well studied with different methods, but little work on large, complex-shaped particles has been reported. In this paper, the surface integral equation (SIE) method with the Multilevel Fast Multipole Algorithm (MLFMA) is applied to study the scattering properties of large non-spherical absorbing particles. The SIEs are carefully discretized with piecewise linear basis functions on triangular patches to model the whole surface of the particle, so the computational resources needed increase much more slowly with the particle size parameter than in volume-discretized methods. To further improve its capability, the MLFMA is parallelized with the Message Passing Interface (MPI) on a distributed-memory computing platform. Without loss of generality, we choose the computation of the scattering matrix elements of absorbing dust particles as an example. A comparison of the scattering matrix elements computed by our method and by the discrete dipole approximation (DDA) method for an ellipsoidal dust particle shows that the precision of our method is very good. The scattering matrix elements of large ellipsoidal dusts with different aspect ratios and size parameters are computed. To show the capability of the presented algorithm for complex-shaped particles, scattering by an asymmetric Chebyshev particle with size parameter larger than 600, complex refractive index m = 1.555 + 0.004i, and different orientations is studied.
NASA Astrophysics Data System (ADS)
Zhou, Meiling; Singh, Alok Kumar; Pedrini, Giancarlo; Osten, Wolfgang; Min, Junwei; Yao, Baoli
2018-03-01
We present a tunable output-frequency filter (TOF) algorithm to reconstruct the object from noisy experimental data acquired under low-power partially coherent illumination, such as an LED, when imaging through scattering media. In the iterative algorithm, we employ Gaussian functions with different filter windows at different stages of the iteration process to suppress experimental noise while searching for a global minimum in the reconstruction. In comparison with the conventional iterative phase retrieval algorithm, we demonstrate that the proposed TOF algorithm achieves consistent and reliable reconstruction in the presence of experimental noise. Moreover, spatial resolution and distinctive features are retained in the reconstruction since the filter is applied only to the region outside the object. The feasibility of the proposed method is demonstrated by experimental results.
Monte Carlo calculation of large and small-angle electron scattering in air
NASA Astrophysics Data System (ADS)
Cohen, B. I.; Higginson, D. P.; Eng, C. D.; Farmer, W. A.; Friedman, A.; Grote, D. P.; Larson, D. J.
2017-11-01
A Monte Carlo method for angle scattering of electrons in air that accommodates the small-angle multiple scattering and larger-angle single scattering limits is introduced. The algorithm is designed for use in a particle-in-cell simulation of electron transport and electromagnetic wave effects in air. The method is illustrated in example calculations.
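A hybrid angular sampling step of the kind described, mixing a small-angle Gaussian multiple-scattering kernel with a screened-Rutherford single-scatter tail, might look like the following sketch; the branching probability, screening parameter, and functional forms are illustrative assumptions, not the paper's algorithm.

```python
import math
import random

def sample_scatter_angle(theta0, p_single=0.05, screening=1e-3, rng=random):
    """Sketch of one hybrid angular scattering draw for an electron step.

    With probability 1 - p_single, draw from a small-angle Gaussian
    multiple-scattering kernel of width theta0 (radians); otherwise draw a
    deflection from a screened-Rutherford single-scattering distribution,
    sampled by inverting its CDF: cos(theta) = 1 - 2*eta*u / (1 - u + eta).
    All parameter values are illustrative."""
    if rng.random() < p_single:
        u = rng.random()
        cos_t = 1.0 - 2.0 * screening * u / (1.0 - u + screening)
        cos_t = max(-1.0, min(1.0, cos_t))  # guard floating-point round-off
        return math.acos(cos_t)
    return abs(rng.gauss(0.0, theta0))
```

The Gaussian branch reproduces the accumulated small-angle limit cheaply, while the Rutherford branch restores the rare large-angle events a purely Gaussian kernel would miss; in a particle-in-cell loop one such draw would be applied per transport step.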
DOE Office of Scientific and Technical Information (OSTI.GOV)
Min, Jonghwan; Pua, Rizza; Cho, Seungryong, E-mail: scho@kaist.ac.kr
Purpose: A beam-blocker composed of multiple strips is a useful gadget for scatter correction and/or for dose reduction in cone-beam CT (CBCT). However, the use of such a beam-blocker yields cone-beam data that can be challenging for accurate image reconstruction from a single scan in the filtered-backprojection framework. The focus of this work was to develop an analytic image reconstruction method for CBCT that can be directly applied to partially blocked cone-beam data in conjunction with the scatter correction. Methods: The authors developed a rebinned backprojection-filtration (BPF) algorithm for reconstructing images from the partially blocked cone-beam data in a circular scan. The authors also proposed a beam-blocking geometry that considers data redundancy, such that an efficient scatter estimate can be acquired and sufficient data for BPF image reconstruction can be secured at the same time from a single scan without any blocker motion. Additionally, a scatter correction method and a noise reduction scheme have been developed. The authors performed both simulation and experimental studies to validate the rebinned BPF algorithm for image reconstruction from partially blocked cone-beam data. Quantitative evaluations of the reconstructed image quality were performed in the experimental studies. Results: The simulation study revealed that the developed reconstruction algorithm successfully reconstructs images from the partial cone-beam data. In the experimental study, the proposed method effectively corrected for the scatter in each projection and reconstructed scatter-corrected images from a single scan. Reduction of cupping artifacts and an enhancement of image contrast were demonstrated. The image contrast increased by a factor of about 2, and the image accuracy in terms of root-mean-square error with respect to the fan-beam CT image improved by more than 30%.
Conclusions: The authors have successfully demonstrated that the proposed scanning method and image reconstruction algorithm can effectively estimate the scatter in cone-beam projections and produce tomographic images of nearly scatter-free quality. The authors believe that the proposed method would provide a fast and efficient CBCT scanning option for various applications, particularly head-and-neck scans.
Subresolution Displacements in Finite Difference Simulations of Ultrasound Propagation and Imaging.
Pinton, Gianmarco F
2017-03-01
Time domain finite difference simulations are used extensively to simulate wave propagation. They approximate the wave field on a discrete domain with a grid spacing that is typically on the order of a tenth of a wavelength. The smallest displacements that can be modeled by this type of simulation are thus limited to discrete values that are integer multiples of the grid spacing. This paper presents a method to represent continuous and subresolution displacements by varying the impedance of individual elements in a multielement scatterer. It is demonstrated that this method removes the limitations imposed by the discrete grid spacing by generating a continuum of displacements as measured by the backscattered signal. The method is first validated on an ideal perfect correlation case with a single scatterer. It is subsequently applied to a more complex case with a field of scatterers that model an acoustic radiation force-induced displacement used in ultrasound elasticity imaging. A custom finite difference simulation tool is used to simulate propagation from ultrasound imaging pulses in the scatterer field. These simulated transmit-receive events are then beamformed into images, which are tracked with a correlation-based algorithm to determine the displacement. A linear predictive model is developed to analytically describe the relationship between element impedance and backscattered phase shift. The error between model and simulation is λ/1364, where λ is the acoustical wavelength. An iterative method is also presented that reduces the simulation error to λ/5556 over one iteration. The proposed technique therefore offers a computationally efficient method to model continuous subresolution displacements of a scattering medium in ultrasound imaging. This method has applications that include ultrasound elastography, blood flow, and motion tracking.
This method also extends generally to finite difference simulations of wave propagation, such as electromagnetic or seismic waves.
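The correlation-based subsample tracking mentioned above is commonly implemented by parabolic interpolation of the cross-correlation peak; a generic sketch (illustrative signals, not the paper's beamformed RF data) shows how a shift well below one grid sample is recovered:

```python
import numpy as np

def subsample_shift(a, b):
    """Estimate the (possibly fractional) shift of b relative to a from
    the cross-correlation peak, refined by fitting a parabola through
    the peak sample and its two neighbours."""
    n = len(a)
    cc = np.correlate(b - b.mean(), a - a.mean(), mode='full')
    k = int(np.argmax(cc))
    lag = k - (n - 1)                       # integer lag of the peak
    y0, y1, y2 = cc[k - 1], cc[k], cc[k + 1]
    delta = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)  # parabolic refinement
    return lag + delta

# a smooth echo shifted by 0.3 samples, i.e. well below the grid spacing
t = np.arange(256, dtype=float)
pulse = lambda shift: np.exp(-0.5 * ((t - 128.0 - shift) / 6.0) ** 2)
est = subsample_shift(pulse(0.0), pulse(0.3))
```

For wide, smooth pulses the parabolic fit bias is small; sharper echoes would need a bias-corrected or phase-based estimator.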
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borowik, Piotr, E-mail: pborow@poczta.onet.pl; Thobel, Jean-Luc, E-mail: jean-luc.thobel@iemn.univ-lille1.fr; Adamowicz, Leszek, E-mail: adamo@if.pw.edu.pl
Standard computational methods used to take the Pauli exclusion principle into account in Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in the low-field regime, where the obtained electron distribution function takes values exceeding unity. Modified algorithms have already been proposed that correctly account for electron scattering on phonons or impurities. The present paper extends this approach and proposes an improved simulation scheme that includes the Pauli exclusion principle for electron–electron (e–e) scattering in MC simulations. The simulations recreate correct values of the electron distribution function at significantly reduced computational cost. The proposed algorithm is applied to study the transport properties of degenerate electrons in graphene with e–e interactions. This required adapting the treatment of e–e scattering to the case of a linear band dispersion relation; hence, this part of the simulation algorithm is described in detail.
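The core Pauli-blocking step in such MC codes is typically a rejection test, accepting a proposed final state with probability 1 − f; a minimal sketch with an assumed Fermi-Dirac occupancy (not the authors' degenerate e–e scheme):

```python
import math
import random

def fermi_dirac(E, mu, kT):
    """Equilibrium occupancy used here as the blocking distribution."""
    return 1.0 / (1.0 + math.exp((E - mu) / kT))

def accept_transition(E_final, occupancy, rng=random.random):
    """Pauli-blocking rejection step: accept a proposed scattering event
    with probability 1 - f(E_final), so transitions into nearly full
    states are suppressed and f can never exceed unity."""
    return rng() < 1.0 - occupancy(E_final)

random.seed(0)
occ = lambda E: fermi_dirac(E, mu=0.1, kT=0.0259)  # eV, room temperature

# transitions deep below the Fermi level are almost always blocked,
# transitions well above it are almost always allowed
acc_below = sum(accept_transition(-0.5, occ) for _ in range(10000))
acc_above = sum(accept_transition(0.7, occ) for _ in range(10000))
```

In a full simulator this test runs after every proposed scattering event, with f estimated on the fly from the evolving particle ensemble rather than fixed at equilibrium.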
NASA Technical Reports Server (NTRS)
Shaffer, Scott; Dunbar, R. Scott; Hsiao, S. Vincent; Long, David G.
1989-01-01
The NASA Scatterometer, NSCAT, is an active spaceborne radar designed to measure the normalized radar backscatter coefficient (sigma0) of the ocean surface. These measurements can, in turn, be used to infer the surface vector wind over the ocean using a geophysical model function. Several ambiguous wind vectors result because of the nature of the model function. A median-filter-based ambiguity removal algorithm will be used by the NSCAT ground data processor to select the best wind vector from the set of ambiguous wind vectors. This process is commonly known as dealiasing or ambiguity removal. The baseline NSCAT ambiguity removal algorithm and the method used to select the set of optimum parameter values are described. An extensive simulation of the NSCAT instrument and ground data processor provides a means of testing the resulting tuned algorithm. This simulation generates the ambiguous wind-field vectors expected from the instrument as it orbits over a set of realistic mesoscale wind fields. The ambiguous wind field is then dealiased using the median-based ambiguity removal algorithm. Performance is measured by comparison of the unambiguous wind fields with the true wind fields. Results have shown that the median-filter-based ambiguity removal algorithm satisfies NSCAT mission requirements.
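The median-filter pass described above can be sketched in a few lines (a simplified stand-in for the tuned NSCAT processor; the array shapes and 3×3 window are assumptions): each cell's selection is repeatedly replaced by the ambiguity closest to the component-wise median of the currently selected vectors in its neighborhood.

```python
import numpy as np

def median_filter_dealias(ambiguities, init_idx, n_iter=10):
    """ambiguities: (rows, cols, n_amb, 2) candidate (u, v) winds per cell.
    init_idx: (rows, cols) initial selection (e.g. highest-likelihood rank).
    Iterate until no cell changes its selected ambiguity."""
    rows, cols, _, _ = ambiguities.shape
    sel = init_idx.copy()
    for _ in range(n_iter):
        # currently selected wind field, shape (rows, cols, 2)
        field = np.take_along_axis(
            ambiguities, sel[..., None, None], axis=2).squeeze(2)
        changed = False
        for i in range(rows):
            for j in range(cols):
                win = field[max(i-1, 0):i+2, max(j-1, 0):j+2].reshape(-1, 2)
                med = np.median(win, axis=0)
                d = np.linalg.norm(ambiguities[i, j] - med, axis=1)
                best = int(np.argmin(d))
                if best != sel[i, j]:
                    sel[i, j] = best
                    changed = True
        if not changed:
            break
    return sel

# demo: a uniform eastward wind with one cell initialized to its alias
amb = np.zeros((3, 3, 2, 2))
amb[..., 0, :] = [1.0, 0.0]    # true wind (u, v)
amb[..., 1, :] = [-1.0, 0.0]   # 180-degree ambiguity
init = np.zeros((3, 3), dtype=int)
init[1, 1] = 1                 # centre cell starts on the wrong alias
sel = median_filter_dealias(amb, init)
```

The surrounding consistent cells pull the aliased centre cell back to the correct branch, which is the essential behaviour of the dealiasing filter.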
Incorporation of a two metre long PET scanner in STIR
NASA Astrophysics Data System (ADS)
Tsoumpas, C.; Brain, C.; Dyke, T.; Gold, D.
2015-09-01
The Explorer project aims to investigate the potential benefits of a total-body 2 metre long PET scanner. The following investigation incorporates this scanner into the STIR library and demonstrates the capabilities and weaknesses of existing reconstruction (FBP and OSEM) and single scatter simulation algorithms. It was found that sensible images are reconstructed, but at the expense of high memory and processing time demands. FBP requires 4 hours on a single core; OSEM requires 2 hours per iteration if run in parallel on 15 cores of a high-performance computer. The single scatter simulation algorithm shows that on a short scale, up to a fifth of the scanner length, the assumption that the scatter between direct rings is similar to the scatter between the oblique rings is approximately valid. However, for more extreme cases this assumption is no longer valid, which illustrates that consideration of the oblique rings within the single scatter simulation will be necessary if this scatter correction is the method of choice.
Ocean observations with EOS/MODIS: Algorithm development and post launch studies
NASA Technical Reports Server (NTRS)
Gordon, Howard R.
1995-01-01
An investigation of the influence of stratospheric aerosol on the performance of the atmospheric correction algorithm was carried out. The results indicate how the performance of the algorithm is degraded if the stratospheric aerosol is ignored. Use of the MODIS 1380 nm band to effect a correction for stratospheric aerosols was also studied. The development of a multi-layer Monte Carlo radiative transfer code that includes polarization by molecular and aerosol scattering and wind-induced sea surface roughness has been completed. Comparison tests with an existing two-layer successive order of scattering code suggest that both codes are capable of producing top-of-atmosphere radiances with errors usually less than 0.1 percent. An initial set of simulations to study the effects of ignoring the polarization of the ocean-atmosphere light field, in both the development of the atmospheric correction algorithm and the generation of the lookup tables used for operation of the algorithm, has been completed. An algorithm was developed that can be used to invert the radiance exiting the top and bottom of the atmosphere to yield the columnar optical properties of the atmospheric aerosol under clear sky conditions over the ocean, for aerosol optical thicknesses as large as 2. The algorithm is capable of retrievals with such large optical thicknesses because all significant orders of multiple scattering are included.
Ground settlement monitoring from temporarily persistent scatterers between two SAR acquisitions
Lei, Z.; Xiaoli, D.; Guangcai, F.; Zhong, L.
2009-01-01
We present an improved differential interferometric synthetic aperture radar (DInSAR) analysis method that measures motions of scatterers whose phases are stable between two SAR acquisitions. Such scatterers are referred to as temporarily persistent scatterers (TPS) for simplicity. Unlike the persistent scatterer InSAR (PS-InSAR) method that relies on a time series of interferograms, the new algorithm needs only one interferogram. TPS are identified based on pixel offsets between two SAR images, and are specially coregistered based on their estimated offsets instead of a global polynomial for the whole image. Phase unwrapping is carried out based on an algorithm for sparse data points. The method is successfully applied to measure the settlement in the Hong Kong Airport area. The buildings surrounded by vegetation were successfully selected as TPS and the tiny deformation signal over the area was detected. © 2009 IEEE.
A singular-value method for reconstruction of nonradial and lossy objects.
Jiang, Wei; Astheimer, Jeffrey; Waag, Robert
2012-03-01
Efficient inverse scattering algorithms for nonradial lossy objects are presented using singular-value decomposition to form reduced-rank representations of the scattering operator. These algorithms extend eigenfunction methods that are not applicable to nonradial lossy scattering objects because the scattering operators for these objects do not have orthonormal eigenfunction decompositions. A method of local reconstruction by segregation of scattering contributions from different local regions is also presented. Scattering from each region is isolated by forming a reduced-rank representation of the scattering operator that has domain and range spaces comprised of far-field patterns with retransmitted fields that focus on the local region. Methods for the estimation of the boundary, average sound speed, and average attenuation slope of the scattering object are also given. These methods yielded approximations of scattering objects that were sufficiently accurate to allow residual variations to be reconstructed in a single iteration. Calculated scattering from a lossy elliptical object with a random background, internal features, and white noise is used to evaluate the proposed methods. Local reconstruction yielded images with spatial resolution that is finer than a half wavelength of the center frequency and reproduces sound speed and attenuation slope with relative root-mean-square errors of 1.09% and 11.45%, respectively.
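The reduced-rank construction rests on the singular-value decomposition, which exists for the non-normal scattering operators of lossy objects even when an eigenfunction decomposition does not. A generic numerical sketch (a random stand-in matrix, not acoustic data):

```python
import numpy as np

rng = np.random.default_rng(3)

# stand-in "scattering operator": a non-normal complex matrix mapping
# incident far-field coefficients to scattered far-field patterns
A = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))

U, s, Vh = np.linalg.svd(A)

def reduced_rank(U, s, Vh, r):
    """Rank-r approximation built from the r leading singular triplets."""
    return (U[:, :r] * s[:r]) @ Vh[:r, :]

A8 = reduced_rank(U, s, Vh, 8)

# Eckart-Young: the rank-r truncation is the best rank-r approximation,
# with spectral-norm error equal to the (r+1)-th singular value
err = np.linalg.norm(A - A8, 2)
```

In the paper's setting the retained singular vectors are additionally chosen so the retransmitted fields focus on a local region, which is what isolates each region's scattering contribution.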
NASA Technical Reports Server (NTRS)
Ferraro, Ellen J.; Swift, Calvin T.
1995-01-01
This paper compares four continental ice sheet radar altimeter retracking algorithms using airborne radar and laser altimeter data taken over the Greenland ice sheet in 1991. The refurbished Advanced Application Flight Experiment (AAFE) airborne radar altimeter has a large range window and stores the entire return waveform during flight. Once the return waveforms are retracked, or post-processed to obtain the most accurate altitude measurement possible, they are compared with the high-precision Airborne Oceanographic Lidar (AOL) altimeter measurements. The AAFE waveforms show evidence of varying degrees of both surface and volume scattering from different regions of the Greenland ice sheet. The AOL laser altimeter, however, obtains a return only from the surface of the ice sheet. Retracking altimeter waveforms with a surface scattering model results in a good correlation with the laser measurements in the wet and dry-snow zones, but in the percolation region of the ice sheet, the deviation between the two data sets is large due to the effects of subsurface and volume scattering. The Martin et al. model results in a lower bias than the surface scattering model, but still shows an increase in the noise level in the percolation zone. Using an Offset Center of Gravity algorithm to retrack altimeter waveforms results in measurements that are only slightly affected by subsurface and volume scattering and, despite a higher bias, this algorithm works well in all regions of the ice sheet. A cubic spline provides retracked altitudes that agree with AOL measurements over all regions of Greenland. This method is not sensitive to changes in the scattering mechanisms of the ice sheet and it has the lowest noise level and bias of all the retracking methods presented.
Facing the phase problem in Coherent Diffractive Imaging via Memetic Algorithms.
Colombo, Alessandro; Galli, Davide Emilio; De Caro, Liberato; Scattarella, Francesco; Carlino, Elvio
2017-02-09
Coherent Diffractive Imaging is a lensless technique that allows imaging of matter at a spatial resolution not limited by lens aberrations. The technique exploits the measured diffraction pattern of a coherent beam scattered by periodic and non-periodic objects to retrieve spatial information. The diffracted intensity, for weak-scattering objects, is proportional to the modulus of the Fourier transform of the object scattering function. The phase information needed to retrieve the scattering function has to be recovered by means of suitable algorithms. Here we present a new approach to the phase problem based on a memetic algorithm, i.e. a hybrid genetic algorithm, which exploits the synergy of deterministic and stochastic optimization methods. The new approach has been tested on simulated data and applied to the phasing of transmission electron microscopy coherent electron diffraction data of a SrTiO3 sample. We have been able to quantitatively retrieve the projected atomic potential, and also to image the oxygen columns, which are not directly visible in the relevant high-resolution transmission electron microscopy images. Our approach proves to be a powerful new tool for the study of matter at atomic resolution and opens new perspectives in those applications in which effective phase retrieval is necessary.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jefferson, A.; Hageman, D.; Morrow, H.
Long-term measurements of changes in the aerosol scattering coefficient with hygroscopic growth at the U.S. Department of Energy Southern Great Plains site provide information on the seasonal as well as size and chemical dependence of aerosol hygroscopic growth. Annual average sub-10 μm fRH values (the ratio of aerosol scattering at 85% RH to that at 40% RH) were 1.75 and 1.87 for the gamma and kappa fit algorithms, respectively. The study found higher growth rates in the winter and spring seasons that correlated with a high aerosol nitrate mass fraction. fRH exhibited strong, but differing, correlations with the scattering Ångström exponent and backscatter fraction, two optical size-dependent parameters. The aerosol organic fraction had a strong influence, with fRH decreasing with increases in the organic mass fraction and absorption Ångström exponent and increasing with the aerosol single scatter albedo. Uncertainty analysis of the fit algorithms revealed high uncertainty at low scattering coefficients and slight increases in uncertainty at high RH and fit parameter values.
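For reference, the gamma parameterization commonly used for such fits (the study's exact formulation may differ) writes f(RH) = ((1 − RH/100)/(1 − RH_ref/100))^(−γ), so the quoted annual-average f(RH) of 1.75 at 85%/40% corresponds to γ ≈ 0.40:

```python
import math

def frh_gamma(rh_wet, rh_ref, gamma):
    """Scattering enhancement factor under the gamma parameterization:
    f(RH) = ((1 - RH_wet/100) / (1 - RH_ref/100)) ** (-gamma)."""
    return ((1 - rh_wet / 100.0) / (1 - rh_ref / 100.0)) ** (-gamma)

def gamma_from_frh(frh, rh_wet=85.0, rh_ref=40.0):
    """Invert the parameterization to recover gamma from a measured f(RH)."""
    ratio = (1 - rh_wet / 100.0) / (1 - rh_ref / 100.0)
    return -math.log(frh) / math.log(ratio)

g = gamma_from_frh(1.75)   # ≈ 0.40 for the annual-average value quoted above
```

The kappa fit quoted in the abstract uses a different (water-activity-based) functional form, so its 1.87 value is not directly comparable through this formula.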
Calculation of the angular radiance distribution for a coupled atmosphere and canopy
NASA Technical Reports Server (NTRS)
Liang, Shunlin; Strahler, Alan H.
1993-01-01
The radiative transfer equations for a coupled atmosphere and canopy are solved numerically by an improved Gauss-Seidel iteration algorithm. The radiation field is decomposed into three components: unscattered sunlight, single scattering, and multiple scattering radiance for which the corresponding equations and boundary conditions are set up and their analytical or iterational solutions are explicitly derived. The classic Gauss-Seidel algorithm has been widely applied in atmospheric research. This is its first application for calculating the multiple scattering radiance of a coupled atmosphere and canopy. This algorithm enables us to obtain the internal radiation field as well as radiances at boundaries. Any form of bidirectional reflectance distribution function (BRDF) as a boundary condition can be easily incorporated into the iteration procedure. The hotspot effect of the canopy is accommodated by means of the modification of the extinction coefficients of upward single scattering radiation and unscattered sunlight using the formulation of Nilson and Kuusk. To reduce the computation for the case of large optical thickness, an improved iteration formula is derived to speed convergence. The upwelling radiances have been evaluated for different atmospheric conditions, leaf area index (LAI), leaf angle distribution (LAD), leaf size and so on. The formulation presented in this paper is also well suited to analyze the relative magnitude of multiple scattering radiance and single scattering radiance in both the visible and near infrared regions.
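The Gauss-Seidel iteration at the heart of the scheme is the classic sweep in which each unknown is updated immediately from the freshest values of the others; a generic sketch on a small diagonally dominant system (not the radiative transfer discretization itself):

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    """Gauss-Seidel sweeps for A x = b: each component is updated in
    place using already-updated components, which typically converges
    faster than Jacobi on diagonally dominant systems."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    for it in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x, it + 1

# small diagonally dominant test system (mimicking a discretized operator)
A = np.array([[4.0, -1.0,  0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0,  4.0]])
b = np.array([1.0, 2.0, 3.0])
x, n_sweeps = gauss_seidel(A, b)
```

The paper's "improved iteration formula" accelerates exactly this kind of sweep when the optical thickness (and hence the system's coupling) is large.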
NASA Astrophysics Data System (ADS)
Matthews, Christopher T.; Crepp, Justin R.; Vasisht, Gautam; Cady, Eric
2017-10-01
The electric field conjugation (EFC) algorithm has shown promise for removing scattered starlight from high-contrast imaging measurements, both in numerical simulations and laboratory experiments. To prepare for the deployment of EFC using ground-based telescopes, we investigate the response of EFC to unaccounted for deviations from an ideal optical model. We explore the linear nature of the algorithm by assessing its response to a range of inaccuracies in the optical model generally present in real systems. We find that the algorithm is particularly sensitive to unresponsive deformable mirror (DM) actuators, misalignment of the Lyot stop, and misalignment of the focal plane mask. Vibrations and DM registration appear to be less of a concern compared to values expected at the telescope. We quantify how accurately one must model these core coronagraph components to ensure successful EFC corrections. We conclude that while the condition of the DM can limit contrast, EFC may still be used to improve the sensitivity of high-contrast imaging observations. Our results have informed the development of a full EFC implementation using the Project 1640 coronagraph at Palomar observatory. While focused on a specific instrument, our results are applicable to the many coronagraphs that may be interested in employing EFC.
Almasi, Sepideh; Ben-Zvi, Ayal; Lacoste, Baptiste; Gu, Chenghua; Miller, Eric L; Xu, Xiaoyin
2017-03-01
To simultaneously overcome the challenges imposed by the nature of optical imaging, characterized by a range of artifacts including space-varying signal-to-noise ratio (SNR), scattered light, and non-uniform illumination, we developed a novel method that segments the 3-D vasculature directly from original fluorescence microscopy images, eliminating the need for the pre- and post-processing steps, such as noise removal and segmentation refinement, used by the majority of segmentation techniques. Our method comprises two stages: initialization, and constrained recovery and enhancement. The initialization approach is fully automated using features derived from bi-scale statistical measures and produces seed points robust to non-uniform illumination, low SNR, and local structural variations. The algorithm achieves segmentation via an iterative approach that extracts the structure through voting of feature vectors formed by distance, local intensity gradient, and median measures. Qualitative and quantitative analysis of the experimental results obtained from synthetic and real data proves the efficacy of this method in comparison to state-of-the-art enhancing-segmenting methods. The algorithmic simplicity, freedom from requiring a priori probabilistic information about the noise, and structural definition give this algorithm a wide potential range of applications where, for example, structural complexity significantly complicates the segmentation problem.
New Additions to the ClusPro Server Motivated by CAPRI
Vajda, Sandor; Yueh, Christine; Beglov, Dmitri; Bohnuud, Tanggis; Mottarella, Scott E.; Xia, Bing; Hall, David R.; Kozakov, Dima
2016-01-01
The heavily used protein-protein docking server ClusPro performs three computational steps as follows: (1) rigid body docking, (2) RMSD based clustering of the 1000 lowest energy structures, and (3) the removal of steric clashes by energy minimization. In response to challenges encountered in recent CAPRI targets, we added three new options to ClusPro. These are (1) accounting for Small Angle X-ray Scattering (SAXS) data in docking; (2) considering pairwise interaction data as restraints; and (3) enabling discrimination between biological and crystallographic dimers. In addition, we have developed an extremely fast docking algorithm based on 5D rotational manifold FFT, and an algorithm for docking flexible peptides that include known sequence motifs. We feel that these developments will further improve the utility of ClusPro. However, CAPRI emphasized several shortcomings of the current server, including the problem of selecting the right energy parameters among the five options provided, and the problem of selecting the best models among the 10 generated for each parameter set. In addition, results convinced us that further development is needed for docking homology models. Finally we discuss the difficulties we have encountered when attempting to develop a refinement algorithm that would be computationally efficient enough for inclusion in a heavily used server. PMID:27936493
Monte Carlo calculation of large and small-angle electron scattering in air
Cohen, B. I.; Higginson, D. P.; Eng, C. D.; ...
2017-08-12
A Monte Carlo method for angle scattering of electrons in air that accommodates the small-angle multiple scattering and larger-angle single scattering limits is introduced. In this work, the algorithm is designed for use in a particle-in-cell simulation of electron transport and electromagnetic wave effects in air. The method is illustrated in example calculations.
NASA Astrophysics Data System (ADS)
Salin, M. B.; Dosaev, A. S.; Konkov, A. I.; Salin, B. M.
2014-07-01
Numerical simulation methods are described for the spectral characteristics of an acoustic signal scattered by multiscale surface waves. The methods include the algorithms for calculating the scattered field by the Kirchhoff method and with the use of an integral equation, as well as the algorithms of surface waves generation with allowance for nonlinear hydrodynamic effects. The paper focuses on studying the spectrum of Bragg scattering caused by surface waves whose frequency exceeds the fundamental low-frequency component of the surface waves by several octaves. The spectrum broadening of the backscattered signal is estimated. The possibility of extending the range of applicability of the computing method developed under small perturbation conditions to cases characterized by a Rayleigh parameter of ≥1 is estimated.
Laplace Transform Based Radiative Transfer Studies
NASA Astrophysics Data System (ADS)
Hu, Y.; Lin, B.; Ng, T.; Yang, P.; Wiscombe, W.; Herath, J.; Duffy, D.
2006-12-01
Multiple scattering is the major uncertainty for data analysis of space-based lidar measurements. Until now, accurate quantitative lidar data analysis has been limited to very thin objects that are dominated by single scattering, where photons from the laser beam scatter only once with particles in the atmosphere before reaching the receiver and a simple linear relationship between physical property and lidar signal exists. In reality, multiple scattering is always a factor in space-based lidar measurement, and it dominates space-based lidar returns from clouds, dust aerosols, vegetation canopy, and phytoplankton. While multiple scattering is a clear signal, the lack of a fast-enough lidar multiple scattering computation tool forces us to treat it as unwanted "noise" and use simple multiple scattering correction schemes to remove it. Such treatments waste the multiple scattering signals and may cause orders-of-magnitude errors in retrieved physical properties. Thus the lack of fast and accurate time-dependent radiative transfer tools significantly limits lidar remote sensing capabilities. Analyzing lidar multiple scattering signals requires fast and accurate time-dependent radiative transfer computations. Currently, multiple scattering is computed with Monte Carlo simulations, which take minutes to hours, are too slow for interactive satellite data analysis, and can only be used to help system/algorithm design and error assessment. We present an innovative physics approach to solve the time-dependent radiative transfer problem. The technique utilizes FPGA-based reconfigurable computing hardware. The approach is as follows. 1. Physics solution: perform a Laplace transform on the time and spatial dimensions and a Fourier transform on the viewing azimuth dimension, converting the solution of the radiative transfer differential equation into a fast matrix inversion problem.
The majority of the radiative transfer computation then goes to matrix inversion, FFTs, and inverse Laplace transforms. 2. Hardware solution: perform these well-defined matrix inversions, FFTs, and Laplace transforms on highly parallel, reconfigurable computing hardware. This physics-based computational tool leads to accurate quantitative analysis of space-based lidar signals and improves the data quality of current lidar missions such as CALIPSO. This presentation will introduce the basic idea of the approach, preliminary results based on SRC's FPGA-based Mapstation, and how we may apply it to CALIPSO data analysis.
NASA Astrophysics Data System (ADS)
Zhang, Siqian; Kuang, Gangyao
2014-10-01
In this paper, a novel three-dimensional imaging algorithm for downward-looking linear array SAR is presented. To improve the resolution, the multiple signal classification (MUSIC) algorithm is used. However, since the scattering centers are always correlated in a real SAR system, the estimated covariance matrix becomes singular. To address this problem, a three-dimensional spatial smoothing method is proposed to restore the singular covariance matrix to a full-rank one. The three-dimensional signal matrix can be divided into a set of orthogonal three-dimensional subspaces, and the main idea of the method is to form the array correlation matrix as the average of all correlation matrices from the subspaces. In addition, the spectral height of the MUSIC peaks contains no information about the scattering intensity of the different scattering centers, so it is difficult to reconstruct the backscattering information directly; a least-squares strategy is therefore used to estimate the amplitude of each scattering center. The results of the theoretical analysis are verified by 3-D scene simulations and experiments on real data.
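The rank-restoring idea can be illustrated in one dimension (a hedged toy, not the paper's 3-D algorithm): averaging covariances over overlapping subarrays de-correlates coherent scatterers so that MUSIC resolves them.

```python
import numpy as np

def spatial_smoothing(x, sub_len):
    """Average the sample covariance over overlapping subarrays; this
    restores the rank of the covariance matrix when scatterers are
    fully coherent (the 1-D analogue of the 3-D smoothing above)."""
    n_sub = len(x) - sub_len + 1
    R = np.zeros((sub_len, sub_len), dtype=complex)
    for k in range(n_sub):
        seg = x[k:k + sub_len][:, None]
        R += seg @ seg.conj().T
    return R / n_sub

def music_spectrum(R, n_src, grid):
    """MUSIC pseudospectrum 1 / ||E_n^H a(f)||^2 on a frequency grid."""
    _, V = np.linalg.eigh(R)                 # eigenvalues ascending
    En = V[:, :R.shape[0] - n_src]           # noise subspace
    m = np.arange(R.shape[0])
    spec = []
    for f in grid:
        p = np.exp(2j * np.pi * f * m).conj() @ En
        spec.append(1.0 / (np.real(p @ p.conj()) + 1e-18))
    return np.array(spec)

# two fully coherent scatterers at normalized frequencies 0.1 and 0.3
t = np.arange(64)
x = np.exp(2j * np.pi * 0.1 * t) + np.exp(2j * np.pi * 0.3 * t)
R = spatial_smoothing(x, sub_len=16)
grid = np.linspace(0.0, 0.5, 501)
spec = music_spectrum(R, n_src=2, grid=grid)

# pick the two strongest local maxima of the pseudospectrum
is_peak = (spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:])
pf, pv = grid[1:-1][is_peak], spec[1:-1][is_peak]
found = np.sort(pf[np.argsort(pv)[-2:]])
```

Without the smoothing step the single-snapshot covariance is rank-1 and MUSIC cannot separate the two coherent components; the peak heights, as the abstract notes, still carry no amplitude information.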
Unsupervised classification of scattering behavior using radar polarimetry data
NASA Technical Reports Server (NTRS)
Van Zyl, Jakob J.
1989-01-01
The use of imaging radar polarimeter data for unsupervised classification of scattering behavior is described: the polarization properties of each pixel in an image are compared to those of simple classes of scattering, such as an even number of reflections, an odd number of reflections, and diffuse scattering. For example, when this algorithm is applied to data acquired over the San Francisco Bay area in California, it classifies scattering by the ocean as similar to that predicted by the odd-number-of-reflections class, scattering by the urban area as similar to that predicted by the even-number-of-reflections class, and scattering by Golden Gate Park as similar to that predicted by the diffuse scattering class. It also classifies the scattering by a lighthouse in the ocean and boats on the ocean surface as similar to that predicted by the even-number-of-reflections class, making it easy to identify these objects against the background of the surrounding ocean. The algorithm is also applied to forested areas and shows that scattering from clear-cut areas and agricultural fields is mostly similar to that predicted by the odd-number-of-reflections class, while scattering from tree-covered areas is generally classified as a mixture of pixels exhibiting the characteristics of all three classes, although each pixel is identified with only a single class.
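A minimal proxy for this class comparison (the actual algorithm compares full polarimetric signatures; this sketch simply uses Pauli-basis powers) is:

```python
import numpy as np

def classify_pixel(shh, svv, shv):
    """Assign a pixel to the dominant scattering class using Pauli-basis
    powers as a crude stand-in for the class comparison: |HH+VV|^2 for
    odd-bounce, |HH-VV|^2 for even-bounce, and cross-pol power for
    diffuse scattering."""
    p_odd = abs(shh + svv) ** 2 / 2.0
    p_even = abs(shh - svv) ** 2 / 2.0
    p_diffuse = 2.0 * abs(shv) ** 2
    labels = ('odd', 'even', 'diffuse')
    return labels[int(np.argmax([p_odd, p_even, p_diffuse]))]

# canonical examples: a surface/trihedral return (HH ~ VV), a dihedral
# return (HH ~ -VV), and vegetation-like scattering (strong cross-pol)
surface = classify_pixel(1.0, 1.0, 0.0)
dihedral = classify_pixel(1.0, -1.0, 0.0)
canopy = classify_pixel(0.1 + 0.1j, 0.1, 0.6)
```

Ocean and bare fields behave like the first example, urban corner reflectors like the second, and forest canopy like the third, matching the San Francisco Bay results described above.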
NASA Astrophysics Data System (ADS)
Loughman, Robert; Bhartia, Pawan K.; Chen, Zhong; Xu, Philippe; Nyaku, Ernest; Taha, Ghassan
2018-05-01
The theoretical basis of the Ozone Mapping and Profiler Suite (OMPS) Limb Profiler (LP) Version 1 aerosol extinction retrieval algorithm is presented. The algorithm uses an assumed bimodal lognormal aerosol size distribution to retrieve aerosol extinction profiles at 675 nm from OMPS LP radiance measurements. A first-guess aerosol extinction profile is updated by iteration using the Chahine nonlinear relaxation method, based on comparisons between the measured radiance profile at 675 nm and the radiance profile calculated by the Gauss-Seidel limb-scattering (GSLS) radiative transfer model for a spherical-shell atmosphere. This algorithm is discussed in the context of previous limb-scattering aerosol extinction retrieval algorithms, and the most significant error sources are enumerated. The retrieval algorithm is limited primarily by uncertainty about the aerosol phase function. Horizontal variations in aerosol extinction, which violate the spherical-shell atmosphere assumed in the version 1 algorithm, may also limit the quality of the retrieved aerosol extinction profiles significantly.
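The Chahine update itself is a one-line multiplicative relaxation, x ← x · y_meas/y_calc per level; a toy sketch with an assumed near-triangular forward model standing in for the limb radiative transfer (the OMPS LP algorithm uses the GSLS model and a one-to-one pairing of tangent heights and levels):

```python
import numpy as np

def chahine_retrieve(y_meas, forward, x0, n_iter=50):
    """Chahine nonlinear relaxation: scale each retrieved level by the
    ratio of measured to modeled radiance at the measurement most
    sensitive to that level (a one-to-one pairing is assumed here)."""
    x = x0.copy()
    for _ in range(n_iter):
        x = x * (y_meas / forward(x))
    return x

# toy forward model: the radiance for each tangent height is dominated
# by the extinction at that level, with weak contributions from above
K = np.diag(np.full(5, 1.0)) + np.triu(np.full((5, 5), 0.1), k=1)
forward = lambda x: K @ x

x_true = np.array([0.5, 1.0, 2.0, 1.5, 0.8])
y_meas = forward(x_true)
x_ret = chahine_retrieve(y_meas, forward, x0=np.ones(5))
```

The multiplicative form keeps the extinction profile positive by construction, one reason the relaxation is popular for constituent retrievals.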
A rain pixel recovery algorithm for videos with highly dynamic scenes.
Jie Chen; Lap-Pui Chau
2014-03-01
Rain removal is a very useful and important technique in applications such as security surveillance and movie editing. Several rain removal algorithms have been proposed in recent years, exploiting the photometric, chromatic, and probabilistic properties of rain to detect and remove the rainy effect. Current methods generally work well with light rain and relatively static scenes; when dealing with heavier rainfall in dynamic scenes, however, they give very poor visual results. The proposed algorithm is based on motion segmentation of the dynamic scene. After photometric and chromatic constraints are applied for rain detection, rain removal filters are applied to pixels such that their dynamic properties as well as motion occlusion cues are considered; both spatial and temporal information is then adaptively exploited during rain pixel recovery. Results show that the proposed algorithm performs much better for rainy scenes with large motion than existing algorithms.
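The detect-then-recover idea can be caricatured in a few lines (a hedged toy, not the proposed algorithm; real streak detection and motion segmentation are far more involved): flag brief positive intensity spikes as rain, but leave pixels alone when the temporal neighbours disagree enough to suggest genuine motion.

```python
import numpy as np

def remove_rain(frames, spike_thresh=20.0, motion_thresh=40.0):
    """Toy rain removal on a grayscale clip of shape (T, H, W).

    A pixel is flagged as rain when it spikes above both temporal
    neighbours (the photometric property of streaks: a brief positive
    fluctuation) and is repaired with the neighbours' mean, unless a
    large neighbour difference suggests object motion, in which case
    the pixel is left untouched."""
    out = frames.astype(float).copy()
    for t in range(1, len(frames) - 1):
        prev = frames[t - 1].astype(float)
        cur = frames[t].astype(float)
        nxt = frames[t + 1].astype(float)
        spike = (cur - prev > spike_thresh) & (cur - nxt > spike_thresh)
        moving = np.abs(nxt - prev) > motion_thresh
        repair = spike & ~moving
        out[t][repair] = 0.5 * (prev[repair] + nxt[repair])
    return out

# synthetic clip: static background of 50 with one rain streak in frame 1
clip = np.full((3, 4, 4), 50.0)
clip[1, 2, 2] = 120.0     # transient bright pixel = rain
cleaned = remove_rain(clip)
```

The motion guard is exactly where naive temporal filters fail on dynamic scenes, which is the gap the paper's motion-segmentation approach addresses.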
NASA Astrophysics Data System (ADS)
Borowik, Piotr; Thobel, Jean-Luc; Adamowicz, Leszek
2017-07-01
Standard computational methods used to take the Pauli exclusion principle into account in Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in the low-field regime, where the obtained electron distribution function takes values exceeding unity. Modified algorithms have already been proposed that correctly account for electron scattering on phonons or impurities. The present paper extends this approach and proposes an improved simulation scheme that includes the Pauli exclusion principle for electron-electron (e-e) scattering in MC simulations. The simulations recreate correct values of the electron distribution function at significantly reduced computational cost. The proposed algorithm is applied to study the transport properties of degenerate electrons in graphene with e-e interactions. This required adapting the treatment of e-e scattering to the case of a linear band dispersion relation; hence, this part of the simulation algorithm is described in detail.
Demodulation Algorithms for the Ofdm Signals in the Time- and Frequency-Scattering Channels
NASA Astrophysics Data System (ADS)
Bochkov, G. N.; Gorokhov, K. V.; Kolobkov, A. V.
2016-06-01
We consider a method based on the generalized maximum-likelihood rule for solving the problem of reception of signals with orthogonal frequency division multiplexing of their harmonic components (OFDM signals) in time- and frequency-scattering channels. Coherent and incoherent demodulators that effectively use the time scattering due to fast fading of the signal are developed. Using computer simulation, we performed a comparative analysis of the proposed algorithms and well-known signal-reception algorithms with equalizers. The proposed symbol-by-symbol detector with decision feedback and restriction of the number of searched variants is shown to have the best bit-error-rate performance. It is shown that under conditions of limited accuracy in estimating the communication-channel parameters, the incoherent OFDM-signal detectors with differential phase-shift keying can ensure better bit-error-rate performance than the coherent OFDM-signal detectors with absolute phase-shift keying.
NASA Astrophysics Data System (ADS)
Liu, Qiong; Wang, Wen-xi; Zhu, Ke-ren; Zhang, Chao-yong; Rao, Yun-qing
2014-11-01
Mixed-model assembly line sequencing is significant in reducing the production time and overall cost of production. To improve production efficiency, a mathematical model aiming to simultaneously minimize overtime, idle time, and total set-up costs is developed. To obtain high-quality and stable solutions, an advanced scatter search approach is proposed. In the proposed algorithm, a new diversification generation method based on a genetic algorithm is presented to generate a set of diverse, high-quality initial solutions. Several methods, including reference set update, subset generation, solution combination, and improvement methods, are designed to maintain population diversity and to obtain high-quality solutions. The proposed model and algorithm are applied and validated in a case company. The results indicate that the proposed advanced scatter search approach is significant for mixed-model assembly line sequencing in this company.
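The scatter search components named above (diversification, reference set update, subset generation, combination, improvement) can be sketched as a skeleton for a generic sequencing objective. The random diversification, swap-based improvement, and crossover-style combination below are placeholders, not the paper's GA-based operators:

```python
import itertools
import random

def scatter_search(cost, n, ref_size=5, pool_size=20, iters=30, seed=0):
    """Minimal scatter-search skeleton for sequencing problems.
    `cost` scores a permutation of range(n); lower is better."""
    rng = random.Random(seed)

    def improve(p):                       # improvement: swap local search
        p, better = list(p), True
        while better:
            better = False
            for i, j in itertools.combinations(range(n), 2):
                q = p[:]
                q[i], q[j] = q[j], q[i]
                if cost(q) < cost(p):
                    p, better = q, True
        return tuple(p)

    # diversification: random permutations stand in for the GA seeding
    pool = {improve(tuple(rng.sample(range(n), n))) for _ in range(pool_size)}
    ref = sorted(pool, key=cost)[:ref_size]
    for _ in range(iters):
        new = set()
        # subset generation + combination: order-preserving recombination
        for a, b in itertools.combinations(ref, 2):
            cut = rng.randrange(1, n)
            child = list(a[:cut]) + [x for x in b if x not in a[:cut]]
            new.add(improve(tuple(child)))
        ref = sorted(set(ref) | new, key=cost)[:ref_size]  # reference set update
    return ref[0], cost(ref[0])
```

With a real sequencing objective the combination method would preserve problem-specific structure (e.g. set-up adjacency) rather than use a simple cut.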
Li, J; Guo, L-X; Zeng, H; Han, X-B
2009-06-01
A message-passing-interface (MPI)-based parallel finite-difference time-domain (FDTD) algorithm for the electromagnetic scattering from a 1-D randomly rough sea surface is presented. The uniaxial perfectly matched layer (UPML) medium is adopted for truncation of the FDTD lattices, in which the finite-difference equations can be used for the total computation domain by properly choosing the uniaxial parameters. This makes the parallel FDTD algorithm easier to implement. The parallel performance with different numbers of processors is illustrated for one sea surface realization, and the computation time of the parallel FDTD algorithm is dramatically reduced compared to a single-process implementation. Finally, some numerical results are shown, including the backscattering characteristics of the sea surface for different polarizations and the bistatic scattering from a sea surface at large incident angles and high wind speeds.
Development of a non-contact diagnostic tool for high power lasers
NASA Astrophysics Data System (ADS)
Simmons, Jed A.; Guttman, Jeffrey L.; McCauley, John
2016-03-01
High power lasers in excess of 1 kW generate enough Rayleigh scatter, even in the NIR, to be detected by silicon based sensor arrays. A lens and camera system in an off-axis position can therefore be used as a non-contact diagnostic tool for high power lasers. Despite the simplicity of the concept, technical challenges have been encountered in the development of an instrument referred to as BeamWatch. These technical challenges include reducing background radiation, achieving high signal to noise ratio, reducing saturation events caused by particulates crossing the beam, correcting images to achieve accurate beam width measurements, creating algorithms for the removal of non-uniformities, and creating two simultaneous views of the beam from orthogonal directions. Background radiation in the image was reduced by the proper positioning of the back plane and the placement of absorbing materials on the internal surfaces of BeamWatch. Maximizing signal to noise ratio, important to the real-time monitoring of focus position, was aided by increasing lens throughput. The number of particulates crossing the beam path was reduced by creating a positive pressure inside BeamWatch. Algorithms in the software removed non-uniformities in the data prior to generating waist width, divergence, BPP, and M2 results. A dual axis version of BeamWatch was developed by the use of mirrors. By its nature BeamWatch produced results similar to scanning slit measurements. Scanning slit data was therefore taken and compared favorably with BeamWatch results.
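The off-axis camera ultimately yields beam-width numbers from the corrected scatter images. A second-moment (D4σ) width computed from a background-subtracted image can be sketched as follows; this is an illustration of the kind of measurement reported, not BeamWatch's actual algorithm:

```python
import numpy as np

def d4sigma_width(img, axis=1):
    """Second-moment (D4sigma, ISO 11146-style) beam width, in pixels,
    from a background-subtracted image of Rayleigh-scattered light.
    `axis=1` measures the width across image columns."""
    prof = img.sum(axis=1 - axis).astype(float)   # marginal profile
    x = np.arange(prof.size)
    total = prof.sum()
    mean = (x * prof).sum() / total               # first moment (centroid)
    var = ((x - mean) ** 2 * prof).sum() / total  # second central moment
    return 4.0 * np.sqrt(var)
```

Repeating this along the propagation direction gives the waist, divergence, and hence BPP and M2, which is why non-uniformity removal upstream of this step matters so much.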
NASA Astrophysics Data System (ADS)
Bravo, Jaime; Davis, Scott C.; Roberts, David W.; Paulsen, Keith D.; Kanick, Stephen C.
2015-03-01
Quantification of targeted fluorescence markers during neurosurgery has the potential to improve and standardize surgical distinction between normal and cancerous tissues. However, quantitative analysis of marker fluorescence is complicated by tissue background absorption and scattering properties. Correction algorithms that transform raw fluorescence intensity into quantitative units, independent of absorption and scattering, require a paired measurement of localized white light reflectance to provide estimates of the optical properties. This study focuses on the unique problem of developing a spectral analysis algorithm to extract tissue absorption and scattering properties from white light spectra that contain contributions from both elastically scattered photons and fluorescence emission from a strong fluorophore (i.e. fluorescein). A fiber-optic reflectance device was used to perform measurements in a small set of optical phantoms, constructed with Intralipid (1% lipid), whole blood (1% volume fraction) and fluorescein (0.16-10 μg/mL). Results show that the novel spectral analysis algorithm yields accurate estimates of tissue parameters independent of fluorescein concentration, with relative errors of blood volume fraction, blood oxygenation fraction (BOF), and the reduced scattering coefficient (at 521 nm) of <7%, <1%, and <22%, respectively. These data represent a first step towards quantification of fluorescein in tissue in vivo.
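The core difficulty described above is separating elastic-scattering and fluorescence contributions in one measured spectrum. A linearized stand-in for the paper's nonlinear fit is spectral unmixing against known component shapes (the basis spectra and the linear model are assumptions for illustration):

```python
import numpy as np

def unmix_spectrum(measured, basis):
    """Decompose a white-light spectrum into known spectral components
    (e.g. an elastic-scattering background shape and a fluorescein
    emission shape) by linear least squares. This is a linearized
    stand-in for the paper's fit of blood volume, oxygenation, and
    reduced scattering. `basis` is (num_components, num_wavelengths).
    """
    basis = np.asarray(basis, float)
    measured = np.asarray(measured, float)
    coeffs, *_ = np.linalg.lstsq(basis.T, measured, rcond=None)
    residual = measured - coeffs @ basis
    return coeffs, residual
```

The actual algorithm must go further because absorption enters the reflectance model nonlinearly (Beer-Lambert attenuation), but the unmixing step shows why a strong fluorophore biases naive fits unless its shape is in the model.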
Scalable, Finite Element Analysis of Electromagnetic Scattering and Radiation
NASA Technical Reports Server (NTRS)
Cwik, T.; Lou, J.; Katz, D.
1997-01-01
In this paper a method for simulating electromagnetic fields scattered from complex objects is reviewed; namely, an unstructured finite element code that does not use traditional mesh partitioning algorithms.
Li, Longxiang; Xue, Donglin; Deng, Weijie; Wang, Xu; Bai, Yang; Zhang, Feng; Zhang, Xuejun
2017-11-10
In deterministic computer-controlled optical surfacing, accurate dwell time execution by computer numeric control machines is crucial to guaranteeing a high convergence ratio for the optical surface error. It is necessary to consider machine dynamics limitations in numerical dwell time algorithms. In this paper, these constraints on the dwell time distribution are analyzed, and a model of equal extra material removal is established. A positive dwell time algorithm with minimum equal extra material removal is developed. Results of simulations based on deterministic magnetorheological finishing demonstrate the necessity of considering machine dynamics performance and illustrate the validity of the proposed algorithm. Indeed, the algorithm effectively facilitates the determinacy of sub-aperture optical surfacing processes.
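The underlying problem is a nonnegativity-constrained deconvolution: dwell times must satisfy A t ≈ r with t ≥ 0, where A is the tool removal matrix. A simple multiplicative-update solver sketches this constraint (it is a generic stand-in, not the paper's minimum-extra-removal algorithm, and assumes A and r are nonnegative, as real removal functions are):

```python
import numpy as np

def nonneg_dwell_time(A, r, iters=500):
    """Solve A @ t ~= r for nonnegative dwell times t.

    A[i, j] is the material removal at surface point i per unit dwell
    at tool position j; r is the desired removal map. Uses the
    multiplicative update t <- t * (A^T r) / (A^T A t), which preserves
    nonnegativity at every step.
    """
    A = np.asarray(A, float)
    r = np.asarray(r, float)
    t = np.full(A.shape[1], r.mean() / max(A.mean() * A.shape[1], 1e-12))
    for _ in range(iters):
        t *= (A.T @ r) / np.maximum(A.T @ (A @ t), 1e-12)
    return t
```

Machine-dynamics constraints (velocity and acceleration limits along the path) would appear as additional coupling between neighbouring entries of t, which is what motivates the equal-extra-removal model.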
Fragmenting networks by targeting collective influencers at a mesoscopic level.
Kobayashi, Teruyoshi; Masuda, Naoki
2016-11-25
A practical approach to protecting networks against epidemic processes such as spreading of infectious diseases, malware, and harmful viral information is to remove some influential nodes beforehand to fragment the network into small components. Because determining the optimal order to remove nodes is a computationally hard problem, various approximate algorithms have been proposed to efficiently fragment networks by sequential node removal. Morone and Makse proposed an algorithm employing the non-backtracking matrix of given networks, which outperforms various existing algorithms. However, many empirical networks have community structure, which compromises the assumption of a locally tree-like structure on which the original algorithm is based. We develop an immunization algorithm by synergistically combining the Morone-Makse algorithm and coarse graining of the network in which we regard a community as a supernode. In this way, we aim to identify nodes that connect different communities at a reasonable computational cost. The proposed algorithm works more efficiently than the Morone-Makse and other algorithms on networks with community structure.
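A crude sketch of the mesoscopic idea: after coarse graining into communities, the valuable removal candidates are nodes that bridge different communities. The scoring rule below (count of distinct foreign communities touched, degree as tie-break) is an illustrative proxy, not the collective-influence score of the paper:

```python
def bridging_nodes(adj, community, k):
    """Rank nodes by how many *other* communities they touch and return
    the top-k candidates for removal. `adj` maps a node to its set of
    neighbours; `community` maps a node to its community label."""
    score = {}
    for v, nbrs in adj.items():
        others = {community[u] for u in nbrs} - {community[v]}
        score[v] = (len(others), len(nbrs))   # tie-break on degree
    return sorted(score, key=lambda v: score[v], reverse=True)[:k]
```

In the actual algorithm the supernode network is what the non-backtracking (collective influence) machinery runs on, so community detection quality directly bounds the result.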
Inverse scattering approach to improving pattern recognition
NASA Astrophysics Data System (ADS)
Chapline, George; Fu, Chi-Yung
2005-05-01
The Helmholtz machine provides what may be the best existing model for how the mammalian brain recognizes patterns. Based on the observation that the "wake-sleep" algorithm for training a Helmholtz machine is similar to the problem of finding the potential for a multi-channel Schrodinger equation, we propose that the construction of a Schrodinger potential using inverse scattering methods can serve as a model for how the mammalian brain learns to extract essential information from sensory data. In particular, inverse scattering theory provides a conceptual framework for imagining how one might use EEG and MEG observations of brain-waves together with sensory feedback to improve human learning and pattern recognition. Longer term, implementation of inverse scattering algorithms on a digital or optical computer could be a step towards mimicking the seamless information fusion of the mammalian brain.
MUSIC algorithm for imaging of a sound-hard arc in limited-view inverse scattering problem
NASA Astrophysics Data System (ADS)
Park, Won-Kwang
2017-07-01
A MUltiple SIgnal Classification (MUSIC) algorithm for non-iterative imaging of a sound-hard arc in the limited-view inverse scattering problem is considered. In order to uncover the mathematical structure of MUSIC, we derive a relationship between MUSIC and an infinite series of Bessel functions of integer order. This structure enables us to examine some properties of MUSIC in the limited-view problem. Numerical simulations are performed to support the identified structure.
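The MUSIC imaging functional itself is standard and can be sketched generically (this is not the paper's limited-view Bessel analysis): project each candidate test vector onto the noise subspace of the multistatic response matrix and plot the reciprocal.

```python
import numpy as np

def music_pseudospectrum(K, test_vectors, rank):
    """MUSIC imaging functional (generic sketch).

    `K` is the measured multistatic response matrix, `test_vectors`
    is a (num_points, num_receivers) array of candidate illumination
    vectors, and `rank` is the estimated signal-subspace dimension.
    Points on the scatterer give a large value because their test
    vector is (nearly) orthogonal to the noise subspace.
    """
    U, s, Vh = np.linalg.svd(K)
    noise = U[:, rank:]                   # noise-subspace basis
    out = []
    for g in test_vectors:
        g = g / np.linalg.norm(g)
        proj = np.linalg.norm(noise.conj().T @ g) ** 2
        out.append(1.0 / max(proj, 1e-15))
    return np.array(out)
```

In the limited view, the effective rank and the structure of the test vectors degrade, which is exactly what the Bessel-series decomposition in the paper quantifies.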
NASA Astrophysics Data System (ADS)
Niu, Chun-Yang; Qi, Hong; Huang, Xing; Ruan, Li-Ming; Tan, He-Ping
2016-11-01
A rapid computational method called the generalized sourced multi-flux method (GSMFM) was developed to simulate outgoing radiative intensities in arbitrary directions at the boundary surfaces of absorbing, emitting, and scattering media, which served as input for the inverse analysis. A hybrid least-square QR decomposition-stochastic particle swarm optimization (LSQR-SPSO) algorithm based on the forward GSMFM solution was developed to simultaneously reconstruct the multi-dimensional temperature distribution and the absorption and scattering coefficients of cylindrical participating media. The retrieval results for axisymmetric and non-axisymmetric temperature distributions indicated that the temperature distribution and the scattering and absorption coefficients could be retrieved accurately using the LSQR-SPSO algorithm even with noisy data. Moreover, the influences of the extinction coefficient and scattering albedo on the accuracy of the estimation were investigated, and the results suggested that the reconstruction accuracy decreases as the extinction coefficient and scattering albedo increase. Finally, a non-contact measurement platform for the flame temperature field based on light field imaging was set up to validate the reconstruction model experimentally.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jefferson, A.; Hageman, D.; Morrow, H.
Long-term measurements of changes in the aerosol scattering coefficient with hygroscopic growth at the U.S. Department of Energy Southern Great Plains site provide information on the seasonal as well as size and chemical dependence of aerosol water uptake. Annual average sub-10 μm fRH values (the ratio of aerosol scattering at 85%/40% relative humidity (RH)) were 1.78 and 1.99 for the gamma and kappa fit algorithms, respectively. Our study found higher growth rates in the winter and spring seasons that correlated with a high aerosol nitrate mass fraction. fRH exhibited strong, but differing, correlations with the scattering Ångström exponent and backscatter fraction, two optical size-dependent parameters. The aerosol organic mass fraction had a strong influence on fRH. Increases in the organic mass fraction and absorption Ångström exponent coincided with a decrease in fRH. Similarly, fRH declined with decreases in the aerosol single scatter albedo. The uncertainty analysis of the fit algorithms revealed high uncertainty at low scattering coefficients and increased uncertainty at high RH and fit parameter values.
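The gamma fit mentioned above is the standard single-parameter humidogram form f(RH) = [(1 − RH/100)/(1 − RH_ref/100)]^(−γ). A two-point inversion for γ from a measured enhancement factor can be sketched as follows (the functional form is standard; the site's actual fitting pipeline over full humidograms is not reproduced):

```python
import math

def gamma_from_frh(f_rh, rh_wet=85.0, rh_dry=40.0):
    """Invert the gamma parameterization
        f(RH) = [(1 - RH_wet/100) / (1 - RH_dry/100)]^(-gamma)
    for gamma, given f(RH) = sigma_sp(RH_wet) / sigma_sp(RH_dry)."""
    ratio = (1.0 - rh_wet / 100.0) / (1.0 - rh_dry / 100.0)
    return -math.log(f_rh) / math.log(ratio)

def frh_from_gamma(gamma, rh_wet=85.0, rh_dry=40.0):
    """Forward model: scattering enhancement factor for a given gamma."""
    ratio = (1.0 - rh_wet / 100.0) / (1.0 - rh_dry / 100.0)
    return ratio ** (-gamma)
```

For the annual average fRH = 1.78 quoted above, this inversion gives γ ≈ 0.42 at the 85%/40% reference points.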
Hesford, Andrew J.; Chew, Weng C.
2010-01-01
The distorted Born iterative method (DBIM) computes iterative solutions to nonlinear inverse scattering problems through successive linear approximations. By decomposing the scattered field into a superposition of scattering by an inhomogeneous background and by a material perturbation, large or high-contrast variations in medium properties can be imaged through iterations that are each subject to the distorted Born approximation. However, the need to repeatedly compute forward solutions still imposes a very heavy computational burden. To ameliorate this problem, the multilevel fast multipole algorithm (MLFMA) has been applied as a forward solver within the DBIM. The MLFMA computes forward solutions in linear time for volumetric scatterers. The typically regular distribution and shape of scattering elements in the inverse scattering problem allow the method to take advantage of data redundancy and reduce the computational demands of the normally expensive MLFMA setup. Additional benefits are gained by employing Kaczmarz-like iterations, where partial measurements are used to accelerate convergence. Numerical results demonstrate both the efficiency of the forward solver and the successful application of the inverse method to imaging problems with dimensions in the neighborhood of ten wavelengths. PMID:20707438
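The DBIM iteration itself can be sketched on a toy discretized scene: solve the forward problem with the current contrast, linearize around the resulting internal field, and solve for a contrast update. Dense linear algebra replaces the MLFMA forward solver here, and the matrices are illustrative:

```python
import numpy as np

def dbim(u_inc, G_dom, G_meas, d_meas, iters=10):
    """Toy distorted-Born iterative method on a discretized 1-D scene.

    `G_dom` / `G_meas` are the domain and measurement Green's matrices,
    `u_inc` the incident field, `d_meas` the scattered data. Returns
    the reconstructed contrast chi. Each iteration is a linearized
    (Born) update about the current background field.
    """
    n = u_inc.size
    chi = np.zeros(n)
    for _ in range(iters):
        # forward solve with the current contrast estimate
        u = np.linalg.solve(np.eye(n) - G_dom @ np.diag(chi), u_inc)
        r = d_meas - G_meas @ (chi * u)      # data residual
        J = G_meas @ np.diag(u)              # linearized (Born) kernel
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        chi = chi + step
    return chi
```

The paper's contribution sits inside the forward solve and the residual updates: MLFMA makes the `solve` linear-time for volumetric scatterers, and Kaczmarz-like partial-measurement sweeps replace the full least-squares step.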
Implementation of an Analytical Raman Scattering Correction for Satellite Ocean-Color Processing
NASA Technical Reports Server (NTRS)
McKinna, Lachlan I. W.; Werdell, P. Jeremy; Proctor, Christopher W.
2016-01-01
Raman scattering of photons by seawater molecules is an inelastic scattering process. This effect can contribute significantly to the water-leaving radiance signal observed by space-borne ocean-color spectroradiometers. If not accounted for during ocean-color processing, Raman scattering can cause biases in derived inherent optical properties (IOPs). Here we describe a Raman scattering correction (RSC) algorithm that has been integrated within NASA's standard ocean-color processing software. We tested the RSC with NASA's Generalized Inherent Optical Properties algorithm (GIOP). A comparison between derived IOPs and in situ data revealed that the magnitude of the derived backscattering coefficient and the phytoplankton absorption coefficient were reduced when the RSC was applied, whilst the absorption coefficient of colored dissolved and detrital matter remained unchanged. Importantly, our results show that the RSC did not degrade the retrieval skill of the GIOP. In addition, a time-series study of oligotrophic waters near Bermuda showed that the RSC did not introduce unwanted temporal trends or artifacts into derived IOPs.
Zaritsky, Assaf; Natan, Sari; Horev, Judith; Hecht, Inbal; Wolf, Lior; Ben-Jacob, Eshel; Tsarfaty, Ilan
2011-01-01
Confocal microscopy analysis of fluorescence and morphology is becoming the standard tool in cell biology and molecular imaging. Accurate quantification algorithms are required to enhance the understanding of different biological phenomena. We present a novel approach based on image-segmentation of multi-cellular regions in bright field images demonstrating enhanced quantitative analyses and better understanding of cell motility. We present MultiCellSeg, a segmentation algorithm to separate multi-cellular from background regions in bright field images, which is based on classification of local patches within an image: a cascade of Support Vector Machines (SVMs) is applied using basic image features. Post processing includes additional classification and graph-cut segmentation to reclassify erroneous regions and refine the segmentation. This approach leads to a parameter-free and robust algorithm. Comparison to an alternative algorithm on wound healing assay images demonstrates its superiority. The proposed approach was used to evaluate common cell migration models such as wound healing and scatter assay. It was applied to quantify the acceleration effect of Hepatocyte growth factor/scatter factor (HGF/SF) on healing rate in a time-lapse confocal microscopy wound healing assay and demonstrated that the healing rate is linear in both treated and untreated cells, and that HGF/SF accelerates the healing rate by approximately two-fold. A novel fully automated, accurate, zero-parameter method to classify and score scatter-assay images was developed and demonstrated that multi-cellular texture is an excellent descriptor to measure HGF/SF-induced cell scattering. We show that exploitation of textural information from differential interference contrast (DIC) images on the multi-cellular level can prove beneficial for the analyses of wound healing and scatter assays. 
The proposed approach is generic and can be used alone or alongside traditional fluorescence single-cell processing to perform objective, accurate quantitative analyses for various biological applications. PMID:22096600
Scattering Removal for Finger-Vein Image Restoration
Yang, Jinfeng; Zhang, Ben; Shi, Yihua
2012-01-01
Finger-vein recognition has received increased attention recently. However, the finger-vein images are always captured in poor quality. This certainly makes finger-vein feature representation unreliable, and further impairs the accuracy of finger-vein recognition. In this paper, we first give an analysis of the intrinsic factors causing finger-vein image degradation, and then propose a simple but effective image restoration method based on scattering removal. To give a proper description of finger-vein image degradation, a biological optical model (BOM) specific to finger-vein imaging is proposed according to the principle of light propagation in biological tissues. Based on BOM, the light scattering component is sensibly estimated and properly removed for finger-vein image restoration. Finally, experimental results demonstrate that the proposed method is powerful in enhancing the finger-vein image contrast and in improving the finger-vein image matching accuracy. PMID:22737028
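The restoration idea above (estimate the scattering component, then remove it) can be sketched with a deliberately simplified model: treat the scattering component as the low-frequency part of the image and partially subtract it. This mirrors the structure, though not the biological optical model (BOM), of the paper; the kernel size and scatter weight are assumptions:

```python
import numpy as np

def remove_scattering(img, k=15, w=0.9):
    """Restore a vein image degraded by skin scattering.

    Simplified model: I = J + S, with the scattering component S taken
    as a heavily smoothed (box-blurred) version of I, so
        J ~= (I - w * S) / (1 - w),
    where `w` is the assumed scatter weight and `k` the box-blur size.
    """
    img = np.asarray(img, float)
    ker = np.ones(k) / k
    pad = k // 2

    def blur1d(a, axis):
        # edge-padded 1-D box blur applied along one axis
        return np.apply_along_axis(
            lambda v: np.convolve(np.pad(v, pad, mode="edge"), ker, "valid"),
            axis, a)

    scatter = blur1d(blur1d(img, 0), 1)   # separable 2-D box blur
    out = (img - w * scatter) / (1.0 - w)
    return np.clip(out, 0.0, 255.0)
```

The effect is contrast stretching around the local mean: smooth background is preserved while vein valleys are deepened, which is the qualitative behaviour the matching-accuracy improvement relies on.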
NASA Astrophysics Data System (ADS)
Korkin, S.; Lyapustin, A.
2012-12-01
The Levenberg-Marquardt algorithm [1, 2] provides a numerical iterative solution to the problem of minimization of a function over a space of its parameters. In our work, the Levenberg-Marquardt algorithm retrieves optical parameters of a thin (single scattering) plane-parallel atmosphere irradiated by a collimated, infinitely wide, monochromatic beam of light. A black ground surface is assumed. Computational accuracy, sensitivity to the initial guess and the presence of noise in the signal, and other properties of the algorithm are investigated in scalar (using intensity only) and vector (including polarization) modes. We consider an atmosphere that contains a mixture of coarse and fine fractions. Following [3], the fractions are simulated using the Henyey-Greenstein model. Though not realistic, this assumption is very convenient for tests [4, p.354]. In our case it yields analytical evaluation of the Jacobian matrix. Assuming the MISR geometry of observation [5] as an example, the average scattering cosines and the ratio of coarse and fine fractions, the atmosphere optical depth, and the single scattering albedo are the five parameters to be determined numerically. In our implementation of the algorithm, the system of five linear equations is solved using the fast Cramer's rule [6]. A simple subroutine developed by the authors makes the algorithm independent of external libraries. All Fortran 90/95 codes discussed in the presentation will be available immediately after the meeting from sergey.v.korkin@nasa.gov by request. [1]. Levenberg K, A method for the solution of certain non-linear problems in least squares, Quarterly of Applied Mathematics, 1944, V.2, P.164-168. [2]. Marquardt D, An algorithm for least-squares estimation of nonlinear parameters, Journal on Applied Mathematics, 1963, V.11, N.2, P.431-441. [3]. Hovenier JW, Multiple scattering of polarized light in planetary atmospheres. Astronomy and Astrophysics, 1971, V.13, P.7 - 29. [4]. 
Mishchenko MI, Travis LD, and Lacis AA, Multiple scattering of light by particles, Cambridge: University Press, 2006. [5]. http://www-misr.jpl.nasa.gov/Mission/misrInstrument/ [6]. Habgood K, Arel I, Revisiting Cramer's rule for solving dense linear systems, In: Proceedings of the 2010 Spring Simulation Multiconference, Paper No 82. ISBN: 978-1-4503-0069-8. DOI: 10.1145/1878537.1878623.
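The Levenberg-Marquardt iteration used above has a compact generic form: damp the Gauss-Newton normal equations and adapt the damping by whether a step reduces the residual. The sketch below solves the damped system with `np.linalg.solve` rather than the authors' five-unknown Cramer's rule, and the demo model is a stand-in, not the radiative-transfer forward model:

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, iters=100, lam=1e-2):
    """Generic Levenberg-Marquardt minimizer of ||residual(x)||^2.

    `residual(x)` returns the residual vector, `jacobian(x)` its
    Jacobian. The damping `lam` is halved after an accepted step and
    doubled after a rejected one (a simple trust-region-like policy).
    """
    x = np.asarray(x0, float)
    for _ in range(iters):
        r, J = residual(x), jacobian(x)
        A = J.T @ J + lam * np.eye(x.size)    # damped normal matrix
        step = np.linalg.solve(A, J.T @ r)
        if np.linalg.norm(residual(x - step)) < np.linalg.norm(r):
            x, lam = x - step, lam * 0.5      # accept: relax damping
        else:
            lam *= 2.0                        # reject: increase damping
    return x
```

For the retrieval described above, x would hold the five atmospheric parameters and the Jacobian would be the analytic one afforded by the Henyey-Greenstein assumption.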
Classification of simple vegetation types using POLSAR image data
NASA Technical Reports Server (NTRS)
Freeman, A.
1993-01-01
Mapping basic vegetation or land cover types is a fairly common problem in remote sensing. Knowledge of the land cover type is a key input to algorithms which estimate geophysical parameters, such as soil moisture, surface roughness, leaf area index or biomass, from remotely sensed data. In an earlier paper, an algorithm for fitting a simple three-component scattering model to POLSAR data was presented. The algorithm yielded estimates for surface scatter, double-bounce scatter and volume scatter for each pixel in a POLSAR image data set. In this paper, we show how the relative levels of each of the three components can be used as inputs to a simple classifier for vegetation type. Vegetation classes include no vegetation cover (e.g. bare soil or desert), low vegetation cover (e.g. grassland), moderate vegetation cover (e.g. fully developed crops), forest and urban areas. Implementation of the approach requires estimates for the three components from all three frequencies available using the NASA/JPL AIRSAR, i.e. C-, L- and P-bands. The research described in this paper was carried out by the Jet Propulsion Laboratory, California Institute of Technology under a contract with the National Aeronautics and Space Administration.
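A per-pixel rule-based classifier over the three decomposition powers might look like the following. The thresholds and class rules here are purely illustrative (the paper combines components across C-, L-, and P-band rather than using single-band fractions):

```python
def classify_vegetation(p_surface, p_double, p_volume):
    """Map the three scattering powers for one pixel to a coarse
    land-cover class using fractional dominance rules."""
    total = p_surface + p_double + p_volume
    if total <= 0:
        return "no data"
    fs, fd, fv = (p / total for p in (p_surface, p_double, p_volume))
    if fv < 0.2:
        return "bare surface"                      # surface scatter dominates
    if fd > 0.4:
        return "urban" if fd > fv else "forest"    # strong double bounce
    if fv > 0.6:
        return "forest"                            # volume scatter dominates
    return "low/moderate vegetation"
```

In practice such rules are tuned per frequency band, since volume scattering from canopy dominates at C-band long before it does at P-band.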
NASA Astrophysics Data System (ADS)
Wang, Haipeng; Xu, Feng; Jin, Ya-Qiu; Ouchi, Kazuo
An inversion method for bridge height over water by polarimetric synthetic aperture radar (SAR) is developed. A geometric ray description illustrating the scattering mechanisms of a bridge over a water surface is identified by polarimetric image analysis. Using the mapping and projecting algorithm, a polarimetric SAR image of a bridge model is first simulated and shows that scattering from a bridge over water can be identified by three strip lines corresponding to single-, double-, and triple-order scattering, respectively. A set of polarimetric parameters based on de-orientation theory is applied to the analysis of the three scattering types, and the thinning-clustering algorithm and Hough transform are then employed to locate the image positions of these strip lines. These lines are used to invert the bridge height. Fully polarimetric image data of the airborne Pi-SAR at X-band are applied to inversion of the height and width of the Naruto Bridge in Japan. Based on the same principle, this approach is also applicable to spaceborne ALOS-PALSAR single-polarization data of the Eastern Ocean Bridge in China. The results demonstrate the feasibility of bridge height inversion.
NASA Astrophysics Data System (ADS)
Xiong, Chuan; Shi, Jiancheng
2014-01-01
To date, light scattering models of snow have taken little account of real snow microstructure. The idealized spherical or other single-shape particle assumptions in previous snow light scattering models can cause errors in light scattering modeling of snow, and further cause errors in remote sensing inversion algorithms. This paper builds a snow polarized reflectance model based on a bicontinuous medium, with which the real snow microstructure is taken into account. The specific surface area of the bicontinuous medium can be derived analytically. The polarized Monte Carlo ray tracing technique is applied to the computer-generated bicontinuous medium. With proper algorithms, the snow surface albedo, bidirectional reflectance distribution function (BRDF), and polarized BRDF can be simulated. Validation of the model-predicted spectral albedo and bidirectional reflectance factor (BRF) against experimental data shows good results. The relationship between snow surface albedo and snow specific surface area (SSA) is predicted, and this relationship can be used for future improvement of SSA inversion algorithms. The model-predicted polarized reflectance is also validated and proved accurate, and can be further applied in polarized remote sensing.
Full-wave Nonlinear Inverse Scattering for Acoustic and Electromagnetic Breast Imaging
NASA Astrophysics Data System (ADS)
Haynes, Mark Spencer
Acoustic and electromagnetic full-wave nonlinear inverse scattering techniques are explored in both theory and experiment with the ultimate aim of noninvasively mapping the material properties of the breast. There is evidence that benign and malignant breast tissue have different acoustic and electrical properties and imaging these properties directly could provide higher quality images with better diagnostic certainty. In this dissertation, acoustic and electromagnetic inverse scattering algorithms are first developed and validated in simulation. The forward solvers and optimization cost functions are modified from traditional forms in order to handle the large or lossy imaging scenes present in ultrasonic and microwave breast imaging. An antenna model is then presented, modified, and experimentally validated for microwave S-parameter measurements. Using the antenna model, a new electromagnetic volume integral equation is derived in order to link the material properties of the inverse scattering algorithms to microwave S-parameters measurements allowing direct comparison of model predictions and measurements in the imaging algorithms. This volume integral equation is validated with several experiments and used as the basis of a free-space inverse scattering experiment, where images of the dielectric properties of plastic objects are formed without the use of calibration targets. These efforts are used as the foundation of a solution and formulation for the numerical characterization of a microwave near-field cavity-based breast imaging system. The system is constructed and imaging results of simple targets are given. Finally, the same techniques are used to explore a new self-characterization method for commercial ultrasound probes. The method is used to calibrate an ultrasound inverse scattering experiment and imaging results of simple targets are presented. 
This work has demonstrated the feasibility of quantitative microwave inverse scattering by way of a self-consistent characterization formalism, and has made headway in the same area for ultrasound.
Algorithms for radiative transfer simulations for aerosol retrieval
NASA Astrophysics Data System (ADS)
Mukai, Sonoyo; Sano, Itaru; Nakata, Makiko
2012-11-01
Aerosol retrieval from satellite data, i.e., aerosol remote sensing, divides into three parts: satellite data analysis, aerosol modeling, and multiple-light-scattering calculation in an atmosphere model, known as radiative transfer simulation. The aerosol model is compiled from more than ten years of accumulated measurements provided by the worldwide aerosol monitoring network AERONET. The radiative transfer simulations take into account Rayleigh scattering by molecules, Mie scattering by aerosols, and reflection by the Earth's surface. The aerosol properties are then estimated by comparing satellite measurements with the simulated radiation values in the Earth-atmosphere-surface model. Precise simulation of multiple light-scattering processes is necessary but requires long computation times, especially for optically thick atmosphere models. Efficient algorithms for radiative transfer problems are therefore indispensable for retrieving aerosols from space.
EEG Artifact Removal Using a Wavelet Neural Network
NASA Technical Reports Server (NTRS)
Nguyen, Hoang-Anh T.; Musson, John; Li, Jiang; McKenzie, Frederick; Zhang, Guangfan; Xu, Roger; Richey, Carl; Schnell, Tom
2011-01-01
In this paper we developed a wavelet neural network (WNN) algorithm for electroencephalogram (EEG) artifact removal without electrooculographic (EOG) recordings. The algorithm combines the universal approximation characteristics of neural networks with the time-frequency localization property of wavelets. We compared the WNN algorithm with the independent component analysis (ICA) technique and a wavelet thresholding method, which was realized by using Stein's unbiased risk estimate (SURE) with an adaptive gradient-based optimal threshold. Experimental results on a driving-test data set show that the WNN can remove EEG artifacts effectively without diminishing useful EEG information, even for very noisy data.
Wavelet tree structure based speckle noise removal for optical coherence tomography
NASA Astrophysics Data System (ADS)
Yuan, Xin; Liu, Xuan; Liu, Yang
2018-02-01
We report a new speckle noise removal algorithm in optical coherence tomography (OCT). Though wavelet domain thresholding algorithms have demonstrated superior advantages in suppressing noise magnitude and preserving image sharpness in OCT, the wavelet tree structure has not been investigated in previous applications. In this work, we propose an adaptive wavelet thresholding algorithm via exploiting the tree structure in wavelet coefficients to remove the speckle noise in OCT images. The threshold for each wavelet band is adaptively selected following a special rule to retain the structure of the image across different wavelet layers. Our results demonstrate that the proposed algorithm outperforms conventional wavelet thresholding, with significant advantages in preserving image features.
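The per-band adaptive thresholding idea can be illustrated with a minimal sketch (this is generic band-adaptive soft thresholding on a single-level Haar transform, not the authors' tree-structured rule, which links coefficients across wavelet layers; all function names here are ours):

```python
import numpy as np

def haar2d(img):
    """Single-level 2D Haar transform: approximation a and detail bands h, v, d."""
    p, q = img[0::2, 0::2], img[0::2, 1::2]
    r, s = img[1::2, 0::2], img[1::2, 1::2]
    return (p + q + r + s) / 4, (p + q - r - s) / 4, (p - q + r - s) / 4, (p - q - r + s) / 4

def ihaar2d(a, h, v, d):
    """Exact inverse of haar2d."""
    out = np.empty((a.shape[0] * 2, a.shape[1] * 2))
    out[0::2, 0::2] = a + h + v + d
    out[0::2, 1::2] = a + h - v - d
    out[1::2, 0::2] = a - h + v - d
    out[1::2, 1::2] = a - h - v + d
    return out

def soft(x, t):
    """Soft thresholding: shrink coefficients toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def denoise(img, k=3.0):
    """Threshold each detail band with its own robust noise estimate."""
    a, h, v, d = haar2d(img)
    bands = [soft(b, k * np.median(np.abs(b)) / 0.6745) for b in (h, v, d)]
    return ihaar2d(a, *bands)
```

A tree-structured variant would additionally keep a child coefficient only when its parent in the coarser layer survives thresholding, which is the structure the paper exploits.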
NASA Astrophysics Data System (ADS)
Rani Sharma, Anu; Kharol, Shailesh Kumar; Kvs, Badarinath; Roy, P. S.
In Earth observation, the atmosphere has a non-negligible influence on the visible and infrared radiation, strong enough to modify the reflected electromagnetic signal and the at-target reflectance. Scattering of solar irradiance by atmospheric molecules and aerosol generates path radiance, which increases the apparent surface reflectance over dark surfaces, while absorption by aerosols and other molecules in the atmosphere causes a loss of brightness in the scene as recorded by the satellite sensor. In order to derive precise surface reflectance from satellite image data, it is indispensable to apply an atmospheric correction that removes the effects of molecular and aerosol scattering. In the present study, we implemented a fast atmospheric correction algorithm for IRS-P6 AWiFS satellite data that can effectively retrieve surface reflectance under different atmospheric and surface conditions. The algorithm is based on MODIS climatology products and simplified use of the Second Simulation of Satellite Signal in the Solar Spectrum (6S) radiative transfer code, which is used to generate look-up tables (LUTs). The algorithm requires information on aerosol optical depth to correct the satellite dataset. The proposed method is simple and easy to implement for estimating surface reflectance from the at-sensor recorded signal on a per-pixel basis. The atmospheric correction algorithm has been tested on different IRS-P6 AWiFS false color composites (FCC) covering the ICRISAT Farm, Patancheru, Hyderabad, India under varying atmospheric conditions. Ground measurements of surface reflectance representing different land use/land cover, i.e., red soil, chickpea crop, groundnut crop and pigeon pea crop, were conducted to validate the algorithm, and a very good match was found between measured surface reflectance and atmospherically corrected reflectance for all spectral bands.
Further, we aggregated all datasets together and compared the retrieved AWiFS reflectance with the aggregated ground measurements, which showed a very good correlation of 0.96 in all four spectral bands (green, red, NIR and SWIR). In order to quantify the accuracy of the proposed method in estimating surface reflectance, the root mean square error (RMSE) associated with the proposed method was evaluated. The analysis of ground-measured versus retrieved AWiFS reflectance yielded small RMSE values in all four spectral bands. EOS TERRA/AQUA MODIS-derived AOD exhibited a very good correlation of 0.92, and these data sets provide an effective means for carrying out atmospheric corrections in an operational way. Keywords: Atmospheric correction, 6S code, MODIS, Spectroradiometer, Sun-Photometer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Le; Yu, Yu; Zhang, Pengjie, E-mail: lezhang@sjtu.edu.cn
Photo-z error is one of the major sources of systematics degrading the accuracy of weak-lensing cosmological inferences. Zhang et al. proposed a self-calibration method combining galaxy-galaxy correlations and galaxy-shear correlations between different photo-z bins. Fisher matrix analysis shows that it can determine the rate of photo-z outliers at a level of 0.01%-1% using photometric data alone, without relying on any prior knowledge. In this paper, we develop a new algorithm to implement this method by solving the constrained nonlinear optimization problem arising in the self-calibration process. Based on the techniques of fixed-point iteration and non-negative matrix factorization, the proposed algorithm can efficiently and robustly reconstruct the scattering probabilities between the true-z and photo-z bins. The algorithm has been tested extensively by applying it to mock data from simulated stage IV weak-lensing projects. We find that the algorithm provides a successful recovery of the scatter rates at the level of 0.01%-1%, and of the true mean redshifts of photo-z bins at the level of 0.001, which may satisfy the requirements of future lensing surveys.
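The non-negative matrix factorization ingredient can be sketched with the standard Lee-Seung multiplicative updates, which keep the factors non-negative by construction (this is a generic NMF, not the authors' constrained self-calibration solver; the function and its parameters are our own illustration):

```python
import numpy as np

def nmf(V, r, iters=2000, seed=0):
    """Fixed-point (multiplicative) updates for V ~ W @ H with W, H >= 0."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 0.1
    H = rng.random((r, m)) + 0.1
    eps = 1e-12  # guard against division by zero
    for _ in range(iters):
        # Each update multiplies by a non-negative ratio, so signs never flip.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

In the self-calibration context the non-negative factors would play the role of scattering probabilities between true-z and photo-z bins, with additional normalization constraints that this sketch omits.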
NASA Technical Reports Server (NTRS)
Korkin, S.; Lyapustin, A.
2012-01-01
The Levenberg-Marquardt algorithm [1, 2] provides a numerical iterative solution to the problem of minimizing a function over a space of its parameters. In our work, the Levenberg-Marquardt algorithm retrieves optical parameters of a thin (single-scattering) plane-parallel atmosphere irradiated by a collimated, infinitely wide, monochromatic beam of light. A black ground surface is assumed. Computational accuracy, sensitivity to the initial guess and to the presence of noise in the signal, and other properties of the algorithm are investigated in scalar (using intensity only) and vector (including polarization) modes. We consider an atmosphere that contains a mixture of coarse and fine fractions. Following [3], the fractions are simulated using the Henyey-Greenstein model. Though not realistic, this assumption is very convenient for tests [4, p.354]; in our case it yields an analytical evaluation of the Jacobian matrix. Assuming the MISR geometry of observation [5] as an example, the five parameters to be determined numerically are the average scattering cosines and the ratio of the coarse and fine fractions, the atmospheric optical depth, and the single-scattering albedo. In our implementation of the algorithm, the system of five linear equations is solved using the fast Cramer's rule [6]. A simple subroutine developed by the authors makes the algorithm independent of external libraries. All Fortran 90/95 codes discussed in the presentation will be available immediately after the meeting from sergey.v.korkin@nasa.gov by request.
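The Levenberg-Marquardt iteration itself can be sketched as follows (the damping schedule and the exponential toy problem are our choices for illustration, not details from the paper, which fits five atmospheric parameters with an analytical Jacobian):

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, p0, iters=100, lam=1e-3):
    """Minimize ||residual(p)||^2. lam is the damping parameter:
    decreased on a successful step (toward Gauss-Newton),
    increased on a rejected step (toward gradient descent)."""
    p = np.asarray(p0, float)
    cost = np.sum(residual(p) ** 2)
    for _ in range(iters):
        r, J = residual(p), jacobian(p)
        A = J.T @ J + lam * np.eye(len(p))     # damped normal equations
        step = np.linalg.solve(A, -J.T @ r)
        new_cost = np.sum(residual(p + step) ** 2)
        if new_cost < cost:
            p, cost, lam = p + step, new_cost, lam * 0.3
        else:
            lam *= 10.0
    return p

# Toy problem: recover (a, b) in y = a * exp(b * x) from noise-free samples.
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * x)
res = lambda p: p[0] * np.exp(p[1] * x) - y
jac = lambda p: np.stack([np.exp(p[1] * x), p[0] * x * np.exp(p[1] * x)], axis=1)
p_hat = levenberg_marquardt(res, jac, [1.0, 0.0])
```

For a five-parameter problem as in the paper, the damped normal equations form a 5x5 linear system, which is where the fast Cramer's rule mentioned above would apply.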
Novel Descattering Approach for Stereo Vision in Dense Suspended Scatterer Environments
Nguyen, Chanh D. Tr.; Park, Jihyuk; Cho, Kyeong-Yong; Kim, Kyung-Soo; Kim, Soohyun
2017-01-01
In this paper, we propose a model-based scattering removal method for stereo vision for robot manipulation in indoor scattering media where commonly used ranging sensors are unable to work. Stereo vision is an inherently ill-posed and challenging problem. It is even more difficult for images of dense fog or dense steam scenes illuminated by active light sources. Images taken in such environments suffer attenuation of object radiance and scattering of the active light sources. To solve this problem, we first derive the imaging model for images taken in a dense scattering medium with a single active illumination source close to the cameras. Based on this physical model, the non-uniform backscattering signal is efficiently removed. The descattered images are then used as the input images for stereo vision. The performance of the method is evaluated based on the quality of the depth map from stereo vision. We also demonstrate the effectiveness of the proposed method by carrying out a real robot manipulation task. PMID:28629139
[Discussion of scattering in THz time domain spectrum tests].
Yan, Fang; Zhang, Zhao-hui; Zhao, Xiao-yan; Su, Hai-xia; Li, Zhi; Zhang, Han
2014-06-01
Using THz time-domain spectroscopy (THz-TDS) to extract the absorption spectrum of a sample is an important branch of THz applications. THz radiation scattered by sample particles produces a pronounced baseline that increases with frequency in the absorption spectrum. This baseline obscures the height and shape of spectral features and thus degrades measurement accuracy, so it should be removed to eliminate the effects of scattering. In the present paper, we investigated the causes of such baselines, reviewed several scatter-mitigation methods, and summarized directions for future research. To validate these methods, we designed a series of experiments comparing the computational accuracy of molar concentration. The results indicated that the computational accuracy of molar concentration can be improved, which can serve as the basis of quantitative analysis in further research. Finally, drawing on the comprehensive experimental results, we present further research directions for removing scattering effects from THz absorption spectra.
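One common family of mitigation strategies fits a smooth, frequency-increasing baseline and subtracts it; a minimal sketch follows (the polynomial model, its degree, and the masking of absorption features are our assumptions, not the specific methods the paper reviews):

```python
import numpy as np

def remove_baseline(freq, absorbance, mask, deg=2):
    """Fit a low-order polynomial baseline on absorption-free regions (mask=True)
    and subtract it from the whole spectrum."""
    coeffs = np.polyfit(freq[mask], absorbance[mask], deg)
    return absorbance - np.polyval(coeffs, freq)
```

In practice the mask would be chosen away from known absorption lines, so the fit tracks only the scatter-induced rise and leaves the spectral features intact.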
New additions to the ClusPro server motivated by CAPRI.
Vajda, Sandor; Yueh, Christine; Beglov, Dmitri; Bohnuud, Tanggis; Mottarella, Scott E; Xia, Bing; Hall, David R; Kozakov, Dima
2017-03-01
The heavily used protein-protein docking server ClusPro performs three computational steps as follows: (1) rigid body docking, (2) RMSD based clustering of the 1000 lowest energy structures, and (3) the removal of steric clashes by energy minimization. In response to challenges encountered in recent CAPRI targets, we added three new options to ClusPro. These are (1) accounting for small angle X-ray scattering data in docking; (2) considering pairwise interaction data as restraints; and (3) enabling discrimination between biological and crystallographic dimers. In addition, we have developed an extremely fast docking algorithm based on 5D rotational manifold FFT, and an algorithm for docking flexible peptides that include known sequence motifs. We feel that these developments will further improve the utility of ClusPro. However, CAPRI emphasized several shortcomings of the current server, including the problem of selecting the right energy parameters among the five options provided, and the problem of selecting the best models among the 10 generated for each parameter set. In addition, results convinced us that further development is needed for docking homology models. Finally, we discuss the difficulties we have encountered when attempting to develop a refinement algorithm that would be computationally efficient enough for inclusion in a heavily used server. Proteins 2017; 85:435-444. © 2016 Wiley Periodicals, Inc.
Implementation of GPU accelerated SPECT reconstruction with Monte Carlo-based scatter correction.
Bexelius, Tobias; Sohlberg, Antti
2018-06-01
Statistical SPECT reconstruction can be very time-consuming, especially when compensations for collimator and detector response, attenuation, and scatter are included. This work proposes an accelerated SPECT reconstruction algorithm based on graphics processing unit (GPU) processing. An ordered subset expectation maximization (OSEM) algorithm with CT-based attenuation modelling, depth-dependent Gaussian convolution-based collimator-detector response modelling, and Monte Carlo-based scatter compensation was implemented using OpenCL. The OpenCL implementation was compared against the existing multi-threaded OSEM implementation running on a central processing unit (CPU) in terms of scatter-to-primary ratios, standardized uptake values (SUVs), and processing speed, using mathematical phantoms and clinical multi-bed bone SPECT/CT studies. The differences in scatter-to-primary ratios, visual appearance, and SUVs between the GPU and CPU implementations were minor. On the other hand, the GPU implementation was up to 24 times faster than the multi-threaded CPU version on a normal 128 × 128 matrix size, 3-bed bone SPECT/CT data set when compensations for collimator and detector response, attenuation, and scatter were included. GPU SPECT reconstruction shows great promise as an everyday clinical reconstruction tool.
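The core OSEM update (without the collimator-response, attenuation, and Monte Carlo scatter terms the paper adds) can be sketched as follows; the system matrix, sizes, and subset split are illustrative only:

```python
import numpy as np

def osem(y, A, subsets, iters=500):
    """Ordered-subset EM for y ~ A @ x with x >= 0.
    Each subset update: x <- x * backproject(measured/predicted) / sensitivity."""
    x = np.ones(A.shape[1])
    for _ in range(iters):
        for idx in subsets:
            As, ys = A[idx], y[idx]
            ratio = ys / np.maximum(As @ x, 1e-12)       # measured / predicted
            sens = np.maximum(As.T @ np.ones(len(idx)), 1e-12)
            x *= (As.T @ ratio) / sens                   # multiplicative EM step
    return x
```

Cycling through ordered subsets of projections, rather than the full data, is what gives OSEM its speedup over plain MLEM; GPU acceleration as in the paper parallelizes the forward and back projections inside each subset update.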
NASA Technical Reports Server (NTRS)
Wang, Menghua
2003-01-01
The primary focus of this proposed research is atmospheric correction algorithm evaluation and development, and satellite sensor calibration and characterization. It is well known that atmospheric correction, which removes the more than 90% of the sensor-measured signal contributed by the atmosphere in the visible, is the key procedure in ocean color remote sensing (Gordon and Wang, 1994). The accuracy and effectiveness of the atmospheric correction directly affect the remotely retrieved ocean bio-optical products. On the other hand, for ocean color remote sensing, in order to obtain the required accuracy in the water-leaving signals derived from satellite measurements, an on-orbit vicarious calibration of the whole system, i.e., sensor and algorithms, is necessary. In addition, it is important to address (i) cross-calibration of two or more sensors and (ii) in-orbit vicarious calibration of the sensor-atmosphere system. The goal of these efforts is to develop methods for meaningful comparison and possible merging of data products from multiple ocean color missions. In the past year, much effort has gone into (a) understanding and correcting the artifacts appearing in the SeaWiFS-derived ocean and atmospheric products; (b) developing an efficient method for generating the SeaWiFS aerosol lookup tables; (c) evaluating the effects of calibration error in the near-infrared (NIR) band on the atmospheric correction of ocean color sensors; (d) comparing the aerosol correction algorithm using the single-scattering epsilon (the current SeaWiFS algorithm) versus the multiple-scattering epsilon method; and (e) continuing activities for the International Ocean-Color Coordinating Group (IOCCG) atmospheric correction working group. In this report, I briefly present and discuss these and some other research activities.
NASA Technical Reports Server (NTRS)
Robinson, Michael; Steiner, Matthias; Wolff, David B.; Ferrier, Brad S.; Kessinger, Cathy; Einaudi, Franco (Technical Monitor)
2000-01-01
The primary function of the TRMM Ground Validation (GV) Program is to create GV rainfall products that provide basic validation of satellite-derived precipitation measurements for select primary sites. A fundamental and extremely important step in creating high-quality GV products is radar data quality control. Quality control (QC) processing of TRMM GV radar data is based on some automated procedures, but the current QC algorithm is not fully operational and requires significant human interaction to assure satisfactory results. Moreover, the TRMM GV QC algorithm, even with continuous manual tuning, still cannot completely remove all types of spurious echoes. In an attempt to improve the current operational radar data QC procedures of the TRMM GV effort, an intercomparison of several QC algorithms has been conducted. This presentation will demonstrate how various radar data QC algorithms affect accumulated radar rainfall products. In all, six different QC algorithms will be applied to two months of WSR-88D radar data from Melbourne, Florida. Daily, five-day, and monthly accumulated radar rainfall maps will be produced for each quality-controlled data set. The QC algorithms will be evaluated and compared based on their ability to remove spurious echoes without removing significant precipitation. Strengths and weaknesses of each algorithm will be assessed based on their ability to mitigate both erroneous additions to rainfall accumulation (from spurious echo contamination) and erroneous reductions (from removal of true precipitation). Contamination from individual spurious echo categories will be quantified to further diagnose the abilities of each radar QC algorithm. Finally, a cost-benefit analysis will be conducted to determine whether a more automated QC algorithm is a viable alternative to the current, labor-intensive QC algorithm employed by TRMM GV.
NASA Astrophysics Data System (ADS)
Liu, Chunlei; Ding, Wenrui; Li, Hongguang; Li, Jiankun
2017-09-01
Haze removal is a nontrivial task for medium-altitude unmanned aerial vehicle (UAV) image processing because of the effects of light absorption and scattering. The challenges are attributed mainly to image distortion and detail blur during the long-distance, large-scale imaging process. In our work, a metadata-assisted nonuniform atmospheric scattering model is proposed to deal with these problems for medium-altitude UAVs. First, to better describe the real atmosphere, we propose a nonuniform atmospheric scattering model based on the aerosol distribution, which directly benefits image distortion correction. Second, considering the characteristics of long-distance imaging, we calculate the depth map, an essential clue for modeling, on the basis of UAV metadata information. An accurate depth map reduces color distortion compared with the depth of field obtained by existing methods based on priors or assumptions. Furthermore, we use an adaptive median filter to address the problem of blurred details caused by the global airlight value. Experimental results on both real flight and synthetic images demonstrate that our proposed method outperforms four other existing haze removal methods.
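For context, the standard uniform atmospheric scattering model that such methods build on, I = J*t + A*(1 - t) with transmission t and airlight A, inverts directly as below (the paper's contribution replaces the uniform t and global A with metadata-driven, nonuniform estimates, which this sketch does not reproduce):

```python
import numpy as np

def dehaze(I, t, A):
    """Invert the uniform scattering model I = J*t + A*(1 - t) for scene radiance J.
    t is the per-pixel transmission map, A the (scalar) airlight."""
    t = np.maximum(t, 0.1)  # floor t to avoid amplifying noise where haze is dense
    return (I - A * (1.0 - t)) / t
```

With an accurate depth map d from UAV metadata, the transmission would typically be modeled as t = exp(-beta * d) for an extinction coefficient beta, which is where a nonuniform aerosol distribution enters.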
NASA Astrophysics Data System (ADS)
Huang, Lei; Zhou, Chenlu; Gong, Mali; Ma, Xingkun; Bian, Qi
2016-07-01
The deformable mirror (DM) is a widely used wavefront corrector in adaptive optics systems, especially in astronomical, imaging, and laser optics. A new DM structure, the 3D DM, is proposed; it has removable actuators and can correct different aberrations with different actuator arrangements. A 3D DM consists of several reflection mirrors. Each mirror has a single actuator and is independent of the others. Two actuator arrangement algorithms are compared: the random disturbance algorithm (RDA) and the global arrangement algorithm (GAA). The correction performance of the two algorithms is analyzed and compared through numerical simulation. The simulation results show that a 3D DM with removable actuators can markedly improve the correction performance.
[Gaussian process regression and its application in near-infrared spectroscopy analysis].
Feng, Ai-Ming; Fang, Li-Min; Lin, Min
2011-06-01
Gaussian process (GP) regression is applied in the present paper as a chemometric method to explore the complicated relationship between near-infrared (NIR) spectra and ingredients. After outliers were detected by the Monte Carlo cross validation (MCCV) method and removed from the dataset, different preprocessing methods, such as multiplicative scatter correction (MSC), smoothing, and derivatives, were tried for the best model performance. Furthermore, uninformative variable elimination (UVE) was introduced as a variable selection technique, and the characteristic wavelengths obtained were employed as input for modeling. A public dataset with 80 NIR spectra of corn is used as an example for evaluating the new algorithm. The optimal models for oil, starch and protein were obtained by the GP regression method. The performance of the final models was evaluated according to the root mean square error of calibration (RMSEC), root mean square error of cross-validation (RMSECV), root mean square error of prediction (RMSEP), and correlation coefficient (r). The models show good calibration ability, with r values above 0.99, and satisfactory prediction ability, with r values higher than 0.96. The overall results demonstrate that the GP algorithm is an effective chemometric method and is promising for NIR analysis.
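The GP regression step itself can be sketched in a few lines (the RBF kernel choice, its hyperparameters, and the toy 1D data are our assumptions; the paper does not specify its covariance function here):

```python
import numpy as np

def rbf(X1, X2, length=1.0, var=1.0):
    """Squared-exponential (RBF) covariance between row-vector inputs."""
    d2 = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
    return var * np.exp(-0.5 * d2 / length**2)

def gp_predict(Xtr, ytr, Xte, noise=1e-6, length=1.0):
    """Posterior mean and variance of a zero-mean GP at test inputs Xte."""
    K = rbf(Xtr, Xtr, length) + noise * np.eye(len(Xtr))
    Ks = rbf(Xte, Xtr, length)
    alpha = np.linalg.solve(K, ytr)                 # K^{-1} y
    mean = Ks @ alpha
    cov = rbf(Xte, Xte, length) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)
```

In the chemometric setting, rows of Xtr would be (UVE-selected) spectral channels and ytr the measured ingredient concentration, with kernel hyperparameters tuned by cross-validation.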
Mannan, Malik M Naeem; Kim, Shinjung; Jeong, Myung Yung; Kamran, M Ahmad
2016-02-19
Contamination by eye movement and blink artifacts in electroencephalogram (EEG) recordings makes the analysis of EEG data more difficult and can lead to misleading findings. Efficient removal of these artifacts from EEG data is an essential step in improving classification accuracy for brain-computer interface (BCI) development. In this paper, we propose an automatic framework based on independent component analysis (ICA) and system identification to identify and remove ocular artifacts from EEG data using a hybrid EEG and eye-tracker system. The performance of the proposed algorithm is illustrated using experimental and standard EEG datasets. The proposed algorithm not only removes the ocular artifacts from the artifactual zones but also preserves the neuronal-activity-related EEG signals in non-artifactual zones. Comparison with two state-of-the-art techniques, namely ADJUST-based ICA and REGICA, reveals the significantly improved performance of the proposed algorithm in removing eye movement and blink artifacts from EEG data. Additionally, results demonstrate that the proposed algorithm achieves lower relative error and higher mutual information values between corrected EEG and artifact-free EEG data.
NASA Astrophysics Data System (ADS)
Antoine, David; Morel, Andre
1997-02-01
An algorithm is proposed for the atmospheric correction of ocean color observations by the MERIS instrument. The principle of the algorithm, which accounts for all multiple scattering effects, is presented. The algorithm is then tested, and its accuracy is assessed in terms of errors in the retrieved marine reflectances.
Development of PET projection data correction algorithm
NASA Astrophysics Data System (ADS)
Bazhanov, P. V.; Kotina, E. D.
2017-12-01
Positron emission tomography (PET) is a modern nuclear medicine method used to examine metabolism and the function of internal organs, and it allows diseases to be diagnosed at an early stage. Mathematical algorithms are widely used not only for image reconstruction but also for PET data correction. In this paper, implementations of random-coincidence and scatter correction algorithms are considered, as well as an algorithm for modeling PET projection data acquisition to verify the corrections.
Tavakoli, Behnoosh; Zhu, Quing
2013-01-01
Ultrasound-guided diffuse optical tomography (DOT) is a promising method for characterizing malignant and benign lesions in the female breast. We introduce a new two-step algorithm for DOT inversion in which the optical parameters are first estimated with a global optimization method, the genetic algorithm. The estimate is then used as the initial guess for conjugate gradient (CG) optimization to obtain the absorption and scattering distributions simultaneously. Simulations and phantom experiments have shown that the maximum absorption and reduced scattering coefficients are reconstructed with less than 10% and 25% error, respectively. This is in contrast with the CG method alone, which produces about 20% error in the absorption coefficient and does not accurately recover the scattering distribution. A new measure of scattering contrast has been introduced to characterize benign and malignant breast lesions. The results of 16 clinical cases reconstructed with the two-step method demonstrate that, on average, the absorption coefficient and scattering contrast of malignant lesions are about 1.8 and 3.32 times higher, respectively, than those of the benign cases.
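The global-then-local strategy can be sketched on a toy multimodal cost function (here scipy's differential evolution stands in for the genetic algorithm, and the Rastrigin function stands in for the DOT misfit; neither is from the paper):

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

def cost(p):
    """Rastrigin function: many local minima, global minimum 0 at the origin."""
    p = np.asarray(p)
    return 10 * len(p) + np.sum(p**2 - 10 * np.cos(2 * np.pi * p))

bounds = [(-5.12, 5.12)] * 2
# Step 1: global search to land in the correct basin of attraction.
coarse = differential_evolution(cost, bounds, seed=1, popsize=40, maxiter=2000)
# Step 2: local refinement with conjugate gradients from the global estimate.
fine = minimize(cost, coarse.x, method="CG")
```

The point of the two-step scheme is exactly this division of labor: the population-based search avoids the local minima that trap CG, while CG delivers the final accuracy the global search lacks.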
Cell light scattering characteristic numerical simulation research based on FDTD algorithm
NASA Astrophysics Data System (ADS)
Lin, Xiaogang; Wan, Nan; Zhu, Hao; Weng, Lingdong
2017-01-01
In this study, the finite-difference time-domain (FDTD) algorithm is used to solve the cell light-scattering problem. Before the simulations can be compared, it is necessary to identify the differences between normal cells and abnormal cells, which may be cancerous or maldeveloped. Preparation for the simulation involves building a simple cell model consisting of organelles, a nucleus, and cytoplasm; setting a suitable mesh precision; and setting up a total-field/scattered-field source as the excitation together with a far-field projection analysis group. Each step is governed by mathematical principles such as numerical dispersion, the perfectly matched layer boundary condition, and near-to-far-field extrapolation. The simulation results indicate that a change in the position of the nucleus increases the backscattering intensity, and that significant differences in the peak scattering intensity may result from changes in the size of the cytoplasm. These regularities, drawn from the simulation results, may be meaningful for the early diagnosis of cancers.
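The leapfrog E/H update at the heart of FDTD can be shown in one spatial dimension (free space, normalized units with c*dt = dx; the cell model, TF/SF source, and PML of the study are omitted, with hard boundaries and a soft Gaussian source instead):

```python
import numpy as np

def fdtd_1d(steps=60, n=200, src=100):
    """1D free-space FDTD (Yee scheme) at the magic time step c*dt = dx."""
    Ez = np.zeros(n)        # electric field on integer grid points
    Hy = np.zeros(n - 1)    # magnetic field on half-integer points
    for t in range(steps):
        Hy += np.diff(Ez)                           # update H from curl of E
        Ez[1:-1] += np.diff(Hy)                     # update E from curl of H
        Ez[src] += np.exp(-((t - 30) / 10) ** 2)    # soft Gaussian source
    return Ez
```

The 3D version used for cell scattering replaces these two curls with the six coupled Yee updates and surrounds the cell model with a perfectly matched layer; the near-to-far-field transform then yields the scattering pattern.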
Kiguchi, Masashi; Funane, Tsukasa
2014-11-01
A real-time algorithm for removing scalp-blood signals from functional near-infrared spectroscopy signals is proposed. Scalp and deep signals have different dependencies on the source-detector distance. These signals were separated using this characteristic. The algorithm was validated through an experiment using a dynamic phantom in which shallow and deep absorptions were independently changed. The algorithm for measurement of oxygenated and deoxygenated hemoglobins using two wavelengths was explicitly obtained. This algorithm is potentially useful for real-time systems, e.g., brain-computer interfaces and neuro-feedback systems.
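The distance-dependence idea can be sketched as a per-sample linear unmixing: given the sensitivities of the short- and long-distance channels to shallow (scalp) and deep absorption — here hypothetical numbers, whereas the paper derives and validates them with a dynamic phantom — the two components follow from a 2x2 solve at each time point:

```python
import numpy as np

def separate(m_short, m_long, W):
    """Per-sample unmixing of two-distance measurements.
    Rows of W are the (shallow, deep) sensitivities of the short- and
    long-distance channels; returns (shallow, deep) time series."""
    return np.linalg.solve(W, np.vstack([m_short, m_long]))
```

Because the solve is a fixed small linear operation per sample, it is cheap enough for the real-time use cases (BCI, neuro-feedback) mentioned above; repeating it per wavelength gives the oxy-/deoxyhemoglobin version.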
STARBLADE: STar and Artefact Removal with a Bayesian Lightweight Algorithm from Diffuse Emission
NASA Astrophysics Data System (ADS)
Knollmüller, Jakob; Frank, Philipp; Ensslin, Torsten A.
2018-05-01
STARBLADE (STar and Artefact Removal with a Bayesian Lightweight Algorithm from Diffuse Emission) separates superimposed point-like sources from a diffuse background by imposing physically motivated models as prior knowledge. The algorithm can also be used on noisy and convolved data, though performing a proper reconstruction including a deconvolution prior to the application of the algorithm is advised; the algorithm could also be used within a denoising imaging method. STARBLADE learns the correlation structure of the diffuse emission and takes it into account to determine the occurrence and strength of a superimposed point source.
NASA Astrophysics Data System (ADS)
Tret'yakov, Evgeniy V.; Shuvalov, Vladimir V.; Shutov, I. V.
2002-11-01
An approximate algorithm for solving the problem of diffuse optical tomography is tested in experiments on visualising details of the inner structure of strongly scattering model objects containing scattering and semitransparent inclusions, as well as absorbing inclusions located inside other optical inhomogeneities. The stability of the algorithm to errors is demonstrated, which allows it to be used for rapid (2-3 min) reconstruction of images of the details of objects with a complicated inner structure.
A necessary condition for applying MUSIC algorithm in limited-view inverse scattering problem
NASA Astrophysics Data System (ADS)
Park, Taehoon; Park, Won-Kwang
2015-09-01
Numerical simulations have repeatedly shown that the MUltiple SIgnal Classification (MUSIC) algorithm can be applied to limited-view inverse scattering problems; however, the application has so far been heuristic. In this contribution, we identify a necessary condition for applying MUSIC to imaging a collection of small, perfectly conducting cracks. This is based on the fact that the MUSIC imaging functional can be represented as an infinite series of Bessel functions of integer order of the first kind. Numerical experiments with noisy synthetic data support our investigation.
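The MUSIC principle for point-like scatterers can be illustrated with a toy full-view, Born-type setup (the sensor layout, the scalar Green's-vector model, and the target locations below are our assumptions, not the paper's limited-view crack configuration):

```python
import numpy as np

k = 2 * np.pi                         # wavenumber (unit wavelength)
sensors = np.linspace(-2.0, 2.0, 16)  # hypothetical 1D receiver array

def steering(x):
    """Normalized Green's-like vector from a point x to the sensor array."""
    g = np.exp(1j * k * np.abs(sensors - x))
    return g / np.linalg.norm(g)

targets = [-0.7, 0.9]                 # two point scatterers
# Multistatic response matrix under the Born approximation.
K = sum(np.outer(steering(x), steering(x)) for x in targets)

U, s, _ = np.linalg.svd(K)
noise = U[:, len(targets):]           # noise subspace beyond the signal vectors

def music(x):
    """MUSIC pseudospectrum: diverges where steering(x) lies in the signal span."""
    return 1.0 / np.linalg.norm(noise.conj().T @ steering(x))
```

Scanning music(x) over a grid produces sharp peaks at the target locations; the necessary condition discussed above concerns when the steering vectors of the (limited-view) configuration actually span the signal subspace, which this full-view toy takes for granted.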
Estimation of Soil Moisture Under Vegetation Cover at Multiple Frequencies
NASA Astrophysics Data System (ADS)
Jadghuber, Thomas; Hajnsek, Irena; Weiß, Thomas; Papathanassiou, Konstantinos P.
2015-04-01
Soil moisture under vegetation cover was estimated by a polarimetric, iterative, generalized, hybrid decomposition and inversion approach at multiple frequencies (X-, C- and L-band). Therefore, the algorithm, originally designed for a longer wavelength (L-band), was adapted to deal with the short-wavelength scattering scenarios of X- and C-band. The Integral Equation Method (IEM) was incorporated together with the pedo-transfer function of Dobson et al. to account for the peculiarities of short-wavelength scattering at X- and C-band. DLR's F-SAR system acquired fully polarimetric SAR data in X-, C- and L-band over the Wallerfing test site in Lower Bavaria, Germany in 2014. Simultaneously, soil and vegetation measurements were conducted on different agricultural test fields. The results indicate a spatially continuous inversion of soil moisture at all three frequencies (inversion rates >92%), mainly due to the careful adaptation of the vegetation volume removal, including a physical constraining of the decomposition algorithm. However, for X- and C-band the inversion results reveal moisture pattern inconsistencies and, in some cases, an incorrectly high inversion of soil moisture at X-band. Validation with in situ measurements shows a stable performance of 2.1-7.6 vol.% in RMSE at L-band for the entire growing period. At C- and X-band a reliable performance of 3.7-13.4 vol.% in RMSE can only be achieved after distinct filtering (X-band), leading to a loss of almost 60% in spatial inversion rate. Hence, a robust inversion for soil moisture estimation under vegetation cover can only be conducted at L-band, due to the constant availability of the soil signal in contrast to the higher frequencies (X- and C-band).
Effects on Diagnostic Parameters After Removing Additional Synchronous Gear Meshes
NASA Technical Reports Server (NTRS)
Decker, Harry J.
2003-01-01
Gear cracks are typically difficult to diagnose with sufficient time before catastrophic damage occurs. Significant damage must be present before algorithms appear to be able to detect the damage. Frequently there are multiple gear meshes on a single shaft. Since they are all synchronous with the shaft frequency, the commonly used synchronous averaging technique is ineffective in removing other gear mesh effects. Carefully applying a filter to these extraneous gear mesh frequencies can reduce the overall vibration signal and increase the accuracy of commonly used vibration metrics. The vibration signals from three seeded fault tests were analyzed using this filtering procedure. Both the filtered and unfiltered vibration signals were then analyzed using commonly used fault detection metrics and compared. The tests were conducted on aerospace quality spur gears in a test rig. The tests were conducted at speeds ranging from 2500 to 5000 revolutions per minute and torques from 184 to 228 percent of design load. The inability to detect these cracks with high confidence results from the high loading which is causing fast fracture as opposed to stable crack growth. The results indicate that these techniques do not currently produce an indication of damage that significantly exceeds experimental scatter.
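The filtering idea described above can be sketched numerically: zero out the FFT bins at the extraneous mesh orders of a shaft-synchronous average and transform back. This is an illustrative stand-in under stated assumptions (the function name, the example mesh order, and the notch width are hypothetical), not the authors' exact procedure.

```python
import numpy as np

def remove_mesh_tones(signal, mesh_orders, width=1):
    """Zero the FFT bins at the given shaft orders (plus `width` neighbors)
    of a synchronous-averaged vibration signal. Illustrative sketch only."""
    spec = np.fft.rfft(signal)
    n = len(signal)
    for order in mesh_orders:
        for k in range(order - width, order + width + 1):
            if 0 <= k < len(spec):
                spec[k] = 0.0
    return np.fft.irfft(spec, n)

# Synthetic one-revolution average: a low-order shaft tone plus an
# extraneous mesh tone at 40 shaft orders
t = np.linspace(0, 1, 1024, endpoint=False)
sig = np.sin(2 * np.pi * 5 * t) + 0.8 * np.sin(2 * np.pi * 40 * t)
clean = remove_mesh_tones(sig, mesh_orders=[40])
```

Because the synchronous average spans whole revolutions, each mesh order falls exactly in one FFT bin, so the notch removes the tone without smearing neighboring content.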
Laboratory test of a polarimetry imaging subtraction system for the high-contrast imaging
NASA Astrophysics Data System (ADS)
Dou, Jiangpei; Ren, Deqing; Zhu, Yongtian; Zhang, Xi; Li, Rong
2012-09-01
We propose a polarimetry imaging subtraction test system that can be used for direct imaging of the reflected light from exoplanets. Such a system is able to remove the speckle noise scattered by the wave-front error and thus can enhance high-contrast imaging. In this system, we use a Wollaston Prism (WP) to divide the incoming light into two simultaneous images with perpendicular linear polarizations. One of the images is used as the reference image; both phase and geometric distortion corrections are then performed on the other image. The corrected image is subtracted from the reference image to remove the speckles. The whole procedure is based on an optimization algorithm whose target function is the minimization of the residual speckles after subtraction. For demonstration purposes, we use only a circular pupil in the test, without integrating our apodized-pupil coronagraph. It is shown that the best result is obtained by introducing both phase and distortion corrections. Finally, the system reached an extra contrast gain of 50 times on average, which is promising for the direct imaging of exoplanets.
Robust autofocus algorithm for ISAR imaging of moving targets
NASA Astrophysics Data System (ADS)
Li, Jian; Wu, Renbiao; Chen, Victor C.
2000-08-01
A robust autofocus approach, referred to as AUTOCLEAN (AUTOfocus via CLEAN), is proposed for motion compensation in ISAR (inverse synthetic aperture radar) imaging of moving targets. It is a parametric algorithm based on a very flexible data model that takes into account arbitrary range migration and arbitrary phase errors across the synthetic aperture that may be induced by unwanted radial motion of the target as well as propagation or system instability. AUTOCLEAN can be classified as a multiple scatterer algorithm (MSA), but it differs considerably from other existing MSAs in several aspects: (1) dominant scatterers are selected automatically in the two-dimensional (2-D) image domain; (2) scatterers need not be well-isolated or very dominant; (3) phase and RCS (radar cross section) information from each selected scatterer are combined in an optimal way; (4) the troublesome phase unwrapping step is avoided. AUTOCLEAN is computationally efficient and involves only a sequence of FFTs (fast Fourier transforms). Another good feature of AUTOCLEAN is that its performance can be progressively improved by assuming a larger number of dominant scatterers for the target. Hence it can be easily configured for real-time applications, including, for example, ATR (automatic target recognition) of non-cooperative moving targets, as well as for applications where image quality, rather than computation time, is the major concern, such as the development and maintenance of low-observable aircraft. Numerical and experimental results have shown that AUTOCLEAN is a very robust autofocus tool for ISAR imaging.
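A minimal single-scatterer version of this idea can be sketched as follows (a hedged illustration, not the authors' AUTOCLEAN, which combines many scatterers optimally and handles range migration): pick the strongest range bin, read its pulse-to-pulse phase as the aperture phase-error estimate, and conjugate that phase out of every bin.

```python
import numpy as np

def autofocus_single_scatterer(range_profiles):
    """One-scatterer phase compensation sketch: select the range bin with the
    highest mean magnitude across pulses, take its pulse-to-pulse phase as
    the phase-error estimate, and remove it from all bins."""
    power = np.mean(np.abs(range_profiles), axis=0)    # mean over pulses
    dominant = int(np.argmax(power))
    phase_err = np.angle(range_profiles[:, dominant])  # one phase per pulse
    return range_profiles * np.exp(-1j * phase_err)[:, None]

# Synthetic check: a point scatterer corrupted by a random per-pulse phase
rng = np.random.default_rng(0)
pulses, bins = 64, 32
data = np.zeros((pulses, bins), complex)
data[:, 10] = 1.0
err = np.exp(1j * rng.uniform(-np.pi, np.pi, pulses))
corrected = autofocus_single_scatterer(data * err[:, None])
```

After compensation the dominant bin is phase-coherent again, so an azimuth FFT would focus it into a single cross-range cell.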
NASA Astrophysics Data System (ADS)
Medgyesi-Mitschang, L. N.; Putnam, J. M.
1980-04-01
A hierarchy of computer programs implementing the method of moments for bodies of translation (MM/BOT) is described. The algorithm treats the far-field radiation and scattering from finite-length open cylinders of arbitrary cross section as well as the near fields and aperture-coupled fields for rectangular apertures on such bodies. The theoretical development underlying the algorithm is described in Volume 1. The structure of the computer algorithm is such that no a priori knowledge of the method of moments technique or detailed FORTRAN experience is presupposed for the user. A set of carefully drawn example problems illustrates all the options of the algorithm. For a more detailed understanding of the workings of the codes, special cross referencing to the equations in Volume 1 is provided. For additional clarity, comment statements are liberally interspersed in the code listings, summarized in the present volume.
Orthogonal vector algorithm to obtain the solar vector using the single-scattering Rayleigh model.
Wang, Yinlong; Chu, Jinkui; Zhang, Ran; Shi, Chao
2018-02-01
Information obtained from a polarization pattern in the sky provides many animals like insects and birds with vital long-distance navigation cues. The solar vector can be derived from the polarization pattern using the single-scattering Rayleigh model. In this paper, an orthogonal vector algorithm, which utilizes the redundancy of the single-scattering Rayleigh model, is proposed. We use the intersection angles between the polarization vectors as the main criteria in our algorithm. The assumption that all polarization vectors can be considered coplanar is used to simplify the three-dimensional (3D) problem with respect to the polarization vectors in our simulation. The surface-normal vector of the plane, which is determined by the polarization vectors after translation, represents the solar vector. Unfortunately, the two-directionality of the polarization vectors makes the resulting solar vector ambiguous. One important result of this study is, however, that this apparent disadvantage has no effect on the complexity of the algorithm. Furthermore, two other universal least-squares algorithms were investigated and compared. A device was then constructed, which consists of five polarized-light sensors as well as a 3D attitude sensor. Both the simulation and experimental data indicate that the orthogonal vector algorithms, if used with a suitable threshold, perform equally well or better than the other two algorithms. Our experimental data reveal that if the intersection angles between the polarization vectors are close to 90°, the solar-vector angle deviations are small. The data also support the assumption of coplanarity. During the 51 min experiment, the mean of the measured solar-vector angle deviations was about 0.242°, as predicted by our theoretical model.
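The plane-normal step described above can be sketched with a small least-squares computation: stack the (translated) polarization vectors as matrix rows and take the right singular vector belonging to the smallest singular value as the solar vector. The function name and synthetic check are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def solar_vector_from_polarization(pol_vectors):
    """Least-squares surface normal of the plane spanned by the translated
    polarization vectors: the right singular vector with the smallest
    singular value. The sign stays ambiguous, mirroring the
    two-directionality of polarization vectors noted above."""
    _, _, vt = np.linalg.svd(np.asarray(pol_vectors))
    return vt[-1]  # unit-length normal

# Synthetic check: vectors all perpendicular to a known sun direction
sun = np.array([1.0, 2.0, 2.0]) / 3.0          # unit vector
rng = np.random.default_rng(1)
raw = rng.normal(size=(5, 3))
pol = raw - np.outer(raw @ sun, sun)           # project out the sun component
est = solar_vector_from_polarization(pol)
```

Up to the unavoidable sign flip, the estimate coincides with the true sun direction whenever the polarization vectors span the plane.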
NASA Astrophysics Data System (ADS)
Zhou, Peng; Zhang, Xi; Sun, Weifeng; Dai, Yongshou; Wan, Yong
2018-01-01
An algorithm based on time-frequency analysis is proposed to select an imaging time window for the inverse synthetic aperture radar imaging of ships. An appropriate range bin is selected to perform the time-frequency analysis after radial motion compensation. The selected range bin is that with the maximum mean amplitude among the range bins whose echoes are confirmed to be contributed by a dominant scatterer. The criterion for judging whether the echoes of a range bin are contributed by a dominant scatterer is key to the proposed algorithm and is therefore described in detail. When the first range bin that satisfies the judgment criterion is found, a sequence composed of the frequencies that have the largest amplitudes in every moment's time-frequency spectrum corresponding to this range bin is employed to calculate the length and the center moment of the optimal imaging time window. Experiments performed with simulation data and real data show the effectiveness of the proposed algorithm, and comparisons between the proposed algorithm and the image contrast-based algorithm (ICBA) are provided. Similar image contrast and lower entropy are acquired using the proposed algorithm as compared with those values when using the ICBA.
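The bin-selection logic can be illustrated with a toy criterion, assuming a crude stand-in for the paper's dominant-scatterer judgment: accept a range bin only if its spectrum stays narrow, then take the accepted bin with the largest mean amplitude. All names and the threshold are assumptions for illustration.

```python
import numpy as np

def select_range_bin(echoes, spread_thresh=0.1):
    """Pick the range bin with the largest mean amplitude among bins whose
    spectrum is narrow (a simple proxy for 'dominated by one scatterer')."""
    best, best_amp = None, -np.inf
    for b in range(echoes.shape[1]):
        sig = echoes[:, b]
        spec = np.abs(np.fft.fft(sig))
        freqs = np.fft.fftfreq(len(sig))
        centroid = np.sum(freqs * spec) / np.sum(spec)
        spread = np.sqrt(np.sum((freqs - centroid) ** 2 * spec) / np.sum(spec))
        amp = np.mean(np.abs(sig))
        if spread < spread_thresh and amp > best_amp:
            best, best_amp = b, amp
    return best

# Bin 0: a clean single tone (narrowband). Bin 1: stronger but broadband
# noise, which the spread test rejects despite its larger amplitude.
n = 256
t = np.arange(n)
echoes = np.zeros((n, 2), complex)
echoes[:, 0] = np.exp(1j * 2 * np.pi * 0.125 * t)
rng = np.random.default_rng(2)
echoes[:, 1] = 3 * (rng.normal(size=n) + 1j * rng.normal(size=n))
```

The point of the example: amplitude alone would pick the noisy bin, so the narrowness test is what makes the selection meaningful.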
NASA Astrophysics Data System (ADS)
Burov, V. A.; Morozov, S. A.
2001-11-01
Wave scattering by a point-like inhomogeneity, i.e., a strong inhomogeneity with infinitesimal dimensions, is described. This type of inhomogeneity model is used in investigating the point-spread functions of different algorithms and systems. Two approaches are used to derive the rigorous relationship between the amplitude and phase of a signal scattered by a point-like acoustic inhomogeneity. The first approach is based on a Marchenko-type equation. The second approach uses the scattering by a scatterer whose size decreases simultaneously with an increase in its contrast. It is shown that the retarded and advanced waves are scattered differently despite the relationship between the phases of the corresponding scattered waves.
Merrill, Frank E.; Morris, Christopher
2005-05-17
A system capable of performing radiography using a beam of electrons. Diffuser means receive a beam of electrons and diffuse the electrons before they enter first matching quadrupoles where the diffused electrons are focused prior to the diffused electrons entering an object. First imaging quadrupoles receive the focused diffused electrons after the focused diffused electrons have been scattered by the object for focusing the scattered electrons. Collimator means receive the scattered electrons and remove scattered electrons that have scattered to large angles. Second imaging quadrupoles receive the collimated scattered electrons and refocus the collimated scattered electrons and map the focused collimated scattered electrons to transverse locations on an image plane representative of the electrons' positions in the object.
NASA Astrophysics Data System (ADS)
Bootsma, Gregory J.
X-ray scatter in cone-beam computed tomography (CBCT) is known to reduce image quality by introducing image artifacts, reducing contrast, and limiting computed tomography (CT) number accuracy. The extent of the effect of x-ray scatter on CBCT image quality is determined by the shape and magnitude of the scatter distribution in the projections. A method to allay the effects of scatter is imperative to enable application of CBCT to a wider domain of clinical problems. The work contained herein proposes such a method. A characterization of the scatter distribution through the use of a validated Monte Carlo (MC) model is carried out. The effects of imaging parameters and compensators on the scatter distribution are investigated. The spectral frequency components of the scatter distribution in CBCT projection sets are analyzed using Fourier analysis and found to reside predominantly in the low-frequency domain. The exact frequency extents of the scatter distribution are explored for different imaging configurations and patient geometries. Based on the Fourier analysis, it is hypothesized that the scatter distribution can be represented by a finite sum of sine and cosine functions. Fitting the MC scatter distribution estimates reduces the MC computation time by diminishing the number of photon tracks required by over three orders of magnitude. The fitting method is incorporated into a novel scatter correction method using an algorithm that simultaneously combines multiple MC scatter simulations. Running concurrent MC simulations while simultaneously fitting the results allows the physical accuracy and flexibility of MC methods to be maintained while enhancing the overall efficiency. CBCT projection set scatter estimates, using the algorithm, are computed on the order of 1-2 minutes instead of hours or days. Resulting scatter-corrected reconstructions show a reduction in artifacts and improvement in tissue contrast and voxel value accuracy.
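The finite sine/cosine representation can be illustrated in one dimension: build a truncated Fourier design matrix and least-squares fit it to a noisy profile, recovering the smooth low-frequency component. This is a simplified stand-in for the projection-domain fit described above, with all names and values assumed for illustration.

```python
import numpy as np

def fit_fourier(y, n_harmonics=3):
    """Least-squares fit of a truncated sine/cosine series to a sampled 1-D
    profile; returns the smoothed (low-frequency) reconstruction."""
    x = np.linspace(0, 2 * np.pi, len(y), endpoint=False)
    cols = [np.ones_like(x)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(k * x), np.sin(k * x)]
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ coeffs

# Noisy low-frequency profile: the fit suppresses the high-frequency noise,
# as a smooth scatter distribution estimate would suppress MC photon noise
x = np.linspace(0, 2 * np.pi, 200, endpoint=False)
truth = 5 + 2 * np.cos(x) + 0.5 * np.sin(2 * x)
rng = np.random.default_rng(3)
noisy = truth + 0.1 * rng.normal(size=x.size)
smooth = fit_fourier(noisy)
```

Because the fit has far fewer parameters than samples, the noise averages out, which is exactly why far fewer photon tracks suffice once the low-frequency form is imposed.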
Optimal Link Removal for Epidemic Mitigation: A Two-Way Partitioning Approach
Enns, Eva A.; Mounzer, Jeffrey J.; Brandeau, Margaret L.
2011-01-01
The structure of the contact network through which a disease spreads may influence the optimal use of resources for epidemic control. In this work, we explore how to minimize the spread of infection via quarantining with limited resources. In particular, we examine which links should be removed from the contact network, given a constraint on the number of removable links, such that the number of nodes which are no longer at risk for infection is maximized. We show how this problem can be posed as a non-convex quadratically constrained quadratic program (QCQP), and we use this formulation to derive a link removal algorithm. The performance of our QCQP-based algorithm is validated on small Erdős-Rényi and small-world random graphs, and then tested on larger, more realistic networks, including a real-world network of injection drug use. We show that our approach achieves near-optimal performance and outperforms other intuitive link removal algorithms, such as removing links in order of edge centrality. PMID:22115862
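The quantity being maximized, the number of nodes no longer at risk after cutting links, can be evaluated directly with a breadth-first search on the residual graph. The toy barbell network below (an illustrative assumption, not from the paper) shows why which links are cut matters far more than how many.

```python
from collections import deque

def at_risk(n, edges, removed, seed=0):
    """Count nodes reachable from the infected seed after the links in
    `removed` are cut (BFS on the remaining undirected graph)."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        if (u, v) not in removed and (v, u) not in removed:
            adj[u].add(v)
            adj[v].add(u)
    seen, queue = {seed}, deque([seed])
    while queue:
        u = queue.popleft()
        for w in adj[u] - seen:
            seen.add(w)
            queue.append(w)
    return len(seen)

# Two triangles joined by one bridge. Cutting the single bridge link
# protects the entire far triangle; cutting a triangle edge protects nothing.
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5)]
assert at_risk(6, edges, removed=set()) == 6
assert at_risk(6, edges, removed={(2, 3)}) == 3
assert at_risk(6, edges, removed={(0, 1)}) == 6
```

With the same budget of one removable link, the outcome ranges from zero to three protected nodes, which is the combinatorial structure the QCQP formulation exploits.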
Scattering Models and Basic Experiments in the Microwave Regime
NASA Technical Reports Server (NTRS)
Fung, A. K.; Blanchard, A. J. (Principal Investigator)
1985-01-01
The objectives of research over the next three years are: (1) to develop a randomly rough surface scattering model which is applicable over the entire frequency band; (2) to develop a computer simulation method and algorithm to simulate scattering from known randomly rough surfaces, Z(x,y); (3) to design and perform laboratory experiments to study geometric and physical target parameters of an inhomogeneous layer; (4) to develop scattering models for an inhomogeneous layer which accounts for near field interaction and multiple scattering in both the coherent and the incoherent scattering components; and (5) a comparison between theoretical models and measurements or numerical simulation.
NASA Astrophysics Data System (ADS)
Liu, Yan; Shen, Yuecheng; Ruan, Haowen; Brodie, Frank L.; Wong, Terence T. W.; Yang, Changhuei; Wang, Lihong V.
2018-01-01
Normal development of the visual system in infants relies on clear images being projected onto the retina, which can be disrupted by lens opacity caused by congenital cataract. This disruption, if uncorrected in early life, results in amblyopia (permanently decreased vision even after removal of the cataract). Doctors are able to prevent amblyopia by removing the cataract during the first several weeks of life, but this surgery risks a host of complications, which can be equally visually disabling. Here, we investigated the feasibility of focusing light noninvasively through highly scattering cataractous lenses to stimulate the retina, thereby preventing amblyopia. This approach would allow the cataractous lens removal surgery to be delayed and hence greatly reduce the risk of complications from early surgery. Employing a wavefront shaping technique named time-reversed ultrasonically encoded optical focusing in reflection mode, we focused 532-nm light through a highly scattering ex vivo adult human cataractous lens. This work demonstrates a potential clinical application of wavefront shaping techniques.
Analytic algorithms for determining radiative transfer optical properties of ocean waters.
Kaskas, Ayse; Güleçyüz, Mustafa C; Tezcan, Cevdet; McCormick, Norman J
2006-10-10
A synthetic model for the scattering phase function is used to develop simple algebraic equations, valid for any water type, for evaluating the ratio of the backscattering to absorption coefficients of spatially uniform, very deep waters with data from upward and downward planar irradiances and the remotely sensed reflectance. The phase function is a variable combination of a forward-directed Dirac delta function plus isotropic scattering, which is an elementary model for strongly forward scattering such as that encountered in oceanic optics applications. The incident illumination at the surface is taken to be diffuse plus a collimated beam. The algorithms are compared with other analytic correlations that were previously derived from extensive numerical simulations, and they are also numerically tested with forward problem results computed with a modified FN method.
Validation of TOMS Aerosol Products using AERONET Observations
NASA Technical Reports Server (NTRS)
Bhartia, P. K.; Torres, O.; Sinyuk, A.; Holben, B.
2002-01-01
The Total Ozone Mapping Spectrometer (TOMS) aerosol algorithm uses measurements of radiances at two near-UV channels in the range 331-380 nm to derive aerosol optical depth and single scattering albedo. Because of the low near-UV surface albedo of all terrestrial surfaces (between 0.02 and 0.08), the TOMS algorithm has the capability of retrieving aerosol properties over the oceans and the continents. The Aerosol Robotic Network (AERONET) routinely derives spectral aerosol optical depth and single scattering albedo at a large number of sites around the globe. We have performed comparisons of both aerosol optical depth and single scattering albedo derived from TOMS and AERONET. In general, the TOMS aerosol products agree well with the ground-based observations. Results of this validation will be discussed.
Polarimetric ISAR: Simulation and image reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chambers, David H.
In polarimetric ISAR the illumination platform, typically airborne, carries a pair of antennas that are directed toward a fixed point on the surface as the platform moves. During platform motion, the antennas maintain their gaze on the point, creating an effective aperture for imaging any targets near that point. The interaction between the transmitted fields and targets (e.g. ships) is complicated since the targets are typically many wavelengths in size. Calculation of the field scattered from the target typically requires solving Maxwell's equations on a large three-dimensional numerical grid. This is prohibitive to use in any real-world imaging algorithm, so the scattering process is typically simplified by assuming the target consists of a cloud of independent, non-interacting scattering points (centers). Imaging algorithms based on this scattering model perform well in many applications. Since polarimetric radar is not very common, the scattering model is often derived for a scalar field (single polarization) where the individual scatterers are assumed to be small spheres. However, when polarization is important, we must generalize the model to explicitly account for the vector nature of the electromagnetic fields and their interaction with objects. In this note, we present a scattering model that explicitly includes the vector nature of the fields but retains the assumption that the individual scatterers are small. The response of the scatterers is described by electric and magnetic dipole moments induced by the incident fields. We show that the received voltages in the antennas are linearly related to the transmitting currents through a scattering impedance matrix that depends on the overall geometry of the problem and the nature of the scatterers.
Automatic Detection of Steganographic Content
2005-06-30
Practically, it is mostly embedded into the media files, especially the image files. Consequently, a lot of the anti-steganography algorithms work with raw... 1: not enough memory * -2: error running the removal algorithm EXPORT IMAGE *StegRemove(IMAGE *image, int *error); 2.8 Steganography Extraction API... researcher just invented a reliable algorithm that can detect the existence of a steganography if it is embedded anywhere in any uncompressed image.
Seed removal by scatter-hoarding rodents: the effects of tannin and nutrient concentration.
Wang, Bo; Yang, Xiaolan
2015-04-01
The mutualistic interaction between scatter-hoarding rodents and seed plants has a long co-evolutionary history. Plants are believed to have evolved traits that influence the foraging behavior of rodents, thus increasing the probability of seed removal and caching, which benefits the establishment of seedlings. Tannin and nutrient content in seeds are considered among the most essential factors in this plant-animal interaction. However, most previous studies used different species of plant seeds, rendering it difficult to tease apart the relative effect of each single nutrient on rodent foraging behavior due to confounding combinations of nutrient contents across seed species. Hence, to further explore how tannin and different nutritional traits of seeds affect scatter-hoarding rodent foraging preferences, we manipulated tannin, fat, protein and starch content levels, and also seed size levels, by using an artificial seed system. Our results showed that both tannin and the various nutrients significantly affected rodent foraging preferences, but their effects were also strongly modulated by seed size. In general, rodents preferred to remove seeds with less tannin. Fat addition could counteract the negative effect of tannin on seed removal by rodents, while the effect of protein addition was weaker. Starch by itself had no effect, but it interacted with tannin in a complex way. Our findings shed light on the effects of tannin and nutrient content on seed removal by scatter-hoarding rodents. We therefore believe that these and perhaps other seed traits should interactively influence this important plant-rodent interaction. However, how selection operates on seed traits to counterbalance these competing interests/factors merits further study.
Improved OSIRIS NO2 retrieval algorithm: description and validation
NASA Astrophysics Data System (ADS)
Sioris, Christopher E.; Rieger, Landon A.; Lloyd, Nicholas D.; Bourassa, Adam E.; Roth, Chris Z.; Degenstein, Douglas A.; Camy-Peyret, Claude; Pfeilsticker, Klaus; Berthet, Gwenaël; Catoire, Valéry; Goutail, Florence; Pommereau, Jean-Pierre; McLinden, Chris A.
2017-03-01
A new retrieval algorithm for OSIRIS (Optical Spectrograph and Infrared Imager System) nitrogen dioxide (NO2) profiles is described and validated. The algorithm relies on spectral fitting to obtain slant column densities of NO2, followed by inversion using an algebraic reconstruction technique and the SaskTran spherical radiative transfer model (RTM) to obtain vertical profiles of local number density. The validation covers different latitudes (tropical to polar), years (2002-2012), all seasons (winter, spring, summer, and autumn), different concentrations of nitrogen dioxide (from denoxified polar vortex to polar summer), a range of solar zenith angles (68.6-90.5°), and altitudes between 10.5 and 39 km, thereby covering the full retrieval range of a typical OSIRIS NO2 profile. The use of a larger spectral fitting window than used in previous retrievals reduces retrieval uncertainties and the scatter in the retrieved profiles due to noisy radiances. Improvements are also demonstrated through the validation in terms of bias reduction at 15-17 km relative to the OSIRIS operational v3.0 algorithm. The diurnal variation of NO2 along the line of sight is included in a fully spherical multiple scattering RTM for the first time. Using this forward model with built-in photochemistry, the scatter of the differences relative to the correlative balloon NO2 profile data is reduced.
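The algebraic reconstruction step can be sketched as a Kaczmarz iteration: cycle through the rows of the linear system, projecting the current iterate onto each row's hyperplane. The toy system below is purely illustrative; the operational retrieval couples this inversion with the SaskTran radiative transfer model.

```python
import numpy as np

def art(A, b, n_sweeps=50, relax=1.0):
    """Kaczmarz-style algebraic reconstruction technique: one sweep updates
    the iterate once per row of A, projecting onto that row's hyperplane."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            ai = A[i]
            x += relax * (b[i] - ai @ x) / (ai @ ai) * ai
    return x

# Consistent overdetermined toy system: ART converges to the true profile
A = np.array([[2.0, 1.0], [1.0, 3.0], [1.0, -1.0]])
x_true = np.array([1.5, -0.5])
x_hat = art(A, A @ x_true)
```

For noisy, inconsistent data a relaxation factor below one and early stopping act as regularization, which is one reason ART variants suit limb-scatter retrievals.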
NASA Astrophysics Data System (ADS)
Li, Xuesong; Northrop, William F.
2016-04-01
This paper describes a quantitative approach to approximating multiple scattering through an isotropic turbid slab based on Markov chain theory. There is an increasing need to utilize multiple scattering for optical diagnostic purposes; however, existing methods are either inaccurate or computationally expensive. Here, we develop a novel Markov chain approximation approach to solve for the multiple-scattering angular distribution (AD), which can accurately calculate the AD while significantly reducing computational cost compared to Monte Carlo simulation. We expect this work to stimulate ongoing multiple scattering research and deterministic reconstruction algorithm development with AD measurements.
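The idea can be sketched on a periodic angle grid: treat the discretized direction as the Markov state, the normalized single-scattering phase function as the transition kernel, and obtain the AD after n scattering events as the n-step distribution, here computed as one circular convolution per step. This is a hedged 1-D illustration under stated assumptions, not the authors' full formulation.

```python
import numpy as np

def multiple_scatter_ad(phase, orders):
    """Markov-chain sketch of multiple scattering: each step convolves the
    current angular distribution with the (normalized) phase function on a
    periodic angle grid. Returns the AD at the requested scattering orders."""
    n = len(phase)
    p = np.asarray(phase, float)
    p = p / p.sum()                  # normalize to a probability row
    dist = np.zeros(n)
    dist[0] = 1.0                    # start in the forward direction
    out = {}
    for k in range(1, max(orders) + 1):
        # one Markov step = circular convolution with the phase function
        dist = np.real(np.fft.ifft(np.fft.fft(dist) * np.fft.fft(p)))
        if k in orders:
            out[k] = dist.copy()
    return out

# Forward-peaked phase function on a 64-bin periodic angle grid
theta = np.arange(64)
phase = np.exp(-0.5 * (np.minimum(theta, 64 - theta) / 3.0) ** 2)
ads = multiple_scatter_ad(phase, orders={1, 5})
```

Each AD remains a probability distribution, and higher scattering orders broaden it, the qualitative behavior the Markov-chain approximation is meant to capture cheaply.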
Atmospheric correction for hyperspectral ocean color sensors
NASA Astrophysics Data System (ADS)
Ibrahim, A.; Ahmad, Z.; Franz, B. A.; Knobelspiesse, K. D.
2017-12-01
NASA's heritage Atmospheric Correction (AC) algorithm for multi-spectral ocean color sensors is inadequate for the new generation of spaceborne hyperspectral sensors, such as NASA's first hyperspectral Ocean Color Instrument (OCI) onboard the anticipated Plankton, Aerosol, Cloud, ocean Ecosystem (PACE) satellite mission. The AC process must estimate and remove the atmospheric path radiance contribution due to the Rayleigh scattering by air molecules and by aerosols from the measured top-of-atmosphere (TOA) radiance. Further, it must also compensate for the absorption by atmospheric gases and correct for reflection and refraction of the air-sea interface. We present and evaluate an improved AC for hyperspectral sensors beyond the heritage approach by utilizing the additional spectral information of the hyperspectral sensor. The study encompasses a theoretical radiative transfer sensitivity analysis as well as a practical application of the Hyperspectral Imager for the Coastal Ocean (HICO) and the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensors.
Ross, Vincent; Dion, Denis; St-Germain, Daniel
2012-05-01
Radiometric images taken in mid-wave and long-wave infrared bands are used as a basis for validating a sea surface bidirectional reflectance distribution function (BRDF) being implemented into MODTRAN 5 (Berk et al. [Proc. SPIE 5806, 662 (2005)]). The images were obtained during the MIRAMER campaign that took place in May 2008 in the Mediterranean Sea near Toulon, France. When atmosphere radiances are matched at the horizon to remove possible calibration offsets, the implementation of the BRDF in MODTRAN produces good sea surface radiance agreement, usually within 2% and at worst 4% from off-glint azimuthally averaged measurements. Simulations also compare quite favorably to glint measurements. The observed sea radiance deviations between model and measurements are not systematic, and are well within expected experimental uncertainties. This is largely attributed to proper radiative coupling between the surface and the atmosphere implemented using the DISORT multiple scattering algorithm.
NASA Astrophysics Data System (ADS)
Alvarez, César I.; Teodoro, Ana; Tierra, Alfonso
2017-10-01
Thin clouds are frequent in optical remote sensing data and in most cases prevent retrieval of pure surface data for computing indices such as the Normalized Difference Vegetation Index (NDVI). This paper evaluates the Automatic Cloud Removal Method (ACRM) algorithm over a high-elevation city, Quito (Ecuador), which lies at 2800 meters above sea level and experiences cloud cover throughout the year. The ACRM fits a linear regression between each Landsat 8 OLI band and the cirrus band and uses the resulting slope to remove the clouds, without any reference image or mask. Applied over Quito, the original ACRM did not perform well. We therefore improved the algorithm by using a different slope value (improved ACRM). The NDVI computed from the corrected imagery was then compared with a reference NDVI MODIS product (MOD13Q1). The improved ACRM was successful where the original ACRM was not. In the future, the improved ACRM needs to be tested in different regions of the world, under different conditions, to evaluate whether it works successfully in all of them.
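The regression-and-subtract core of ACRM can be sketched as follows: estimate the slope of a reflective band against the cirrus band and remove the cirrus-explained component. The synthetic scene, the function name, and the 0.8 cloud-leakage factor are assumptions for illustration only.

```python
import numpy as np

def acrm_band_correction(band, cirrus):
    """ACRM-style sketch: regress a reflective band on the cirrus band and
    subtract the cirrus-explained component, leaving the surface signal."""
    slope = np.polyfit(cirrus.ravel(), band.ravel(), 1)[0]
    return band - slope * cirrus

# Synthetic scene: surface reflectance plus thin cloud leaking into the band
rng = np.random.default_rng(4)
surface = rng.uniform(0.1, 0.3, size=(50, 50))
cloud = rng.uniform(0.0, 0.2, size=(50, 50))
cirrus = cloud                        # cirrus band sees only the cloud
red = surface + 0.8 * cloud           # red band sees surface plus cloud
corrected = acrm_band_correction(red, cirrus)
```

The sketch also shows the method's weak point: if surface features correlate with the cirrus band, the fitted slope absorbs surface signal too, which is one motivation for tuning the slope as the improved ACRM does.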
A novel algorithm for Bluetooth ECG.
Pandya, Utpal T; Desai, Uday B
2012-11-01
In wireless transmission of ECG, data latency becomes significant when battery power level and data transmission distance are not maintained. In applications like home monitoring or personalized care, a novel filtering strategy is required to overcome the joint effect of these wireless transmission issues and other ECG measurement noises. Here, a novel algorithm, identified as the peak rejection adaptive sampling modified moving average (PRASMMA) algorithm for wireless ECG, is introduced. This algorithm first removes errors in the bit pattern of received data, if any occurred in wireless transmission, and then removes baseline drift. Afterward, a modified moving average is applied everywhere except in the region of each QRS complex. The algorithm also sets its filtering parameters according to the sampling rate selected for signal acquisition. To demonstrate the work, a prototyped Bluetooth-based ECG module is used to capture ECG at different sampling rates and in different patient positions. This module transmits ECG wirelessly to Bluetooth-enabled devices, where the PRASMMA algorithm is applied to the captured ECG. The performance of the PRASMMA algorithm is compared with moving average and Savitzky-Golay algorithms both visually and numerically. The results show that the PRASMMA algorithm can significantly improve ECG reconstruction by efficiently removing noise, and its use can be extended to any parameters where peaks are important for diagnostic purposes.
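A simplified stand-in for the peak-rejection step (a sketch in the spirit of, but not identical to, the authors' PRASMMA) smooths the signal with a moving average everywhere except inside a guard region around each detected QRS peak:

```python
import numpy as np

def guarded_moving_average(ecg, peaks, window=5, guard=3):
    """Moving-average smoothing that leaves a guard region around each QRS
    peak untouched, so smoothing cannot flatten the diagnostic peaks."""
    ecg = np.asarray(ecg, float)
    out = ecg.copy()
    protect = np.zeros(len(ecg), bool)
    for p in peaks:
        protect[max(0, p - guard):p + guard + 1] = True
    half = window // 2
    for i in range(len(ecg)):
        if not protect[i]:
            lo, hi = max(0, i - half), min(len(ecg), i + half + 1)
            out[i] = ecg[lo:hi].mean()
    return out

# Noisy baseline with one spike standing in for a QRS complex: the baseline
# is smoothed while the peak sample passes through unchanged
rng = np.random.default_rng(5)
sig = 0.05 * rng.normal(size=200)
sig[50] = 1.0
smooth = guarded_moving_average(sig, peaks=[50])
```

A plain moving average would both attenuate the peak and smear it into neighboring samples; the guard region is what preserves amplitude for diagnostic use.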
NASA Astrophysics Data System (ADS)
Courdurier, M.; Monard, F.; Osses, A.; Romero, F.
2015-09-01
In medical single-photon emission computed tomography (SPECT) imaging, we seek to simultaneously recover the internal radioactive sources and the attenuation map using not only ballistic measurements but also first-order scattering measurements, assuming a very specific scattering regime. The problem is modeled using the radiative transfer equation by means of an explicit non-linear operator that gives the ballistic and scattering measurements as a function of the radioactive source and attenuation distributions. First, by differentiating this non-linear operator we obtain a linearized inverse problem. Then, under regularity hypotheses for the source distribution and attenuation map, and for small attenuations, we rigorously prove that the linear operator is invertible and compute its inverse explicitly. This allows us to prove local uniqueness for the non-linear inverse problem. Finally, using this inversion result for the linear operator, we propose a new type of iterative algorithm for simultaneous source and attenuation recovery in SPECT, based on a Neumann series and a Newton-Raphson algorithm.
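The Neumann-series mechanism underlying such iterative schemes can be illustrated on a toy linear problem. This is a generic sketch, not the paper's SPECT operator: the 2×2 matrix below is an illustrative stand-in for a linearized operator with small norm.

```python
import numpy as np

# Toy illustration of the Neumann-series idea: when K is a contraction,
# (I - K)^{-1} b can be evaluated by the fixed-point iteration
# x <- b + K x, which sums the series b + Kb + K^2 b + ...
def neumann_solve(K, b, iters=200):
    x = np.zeros_like(b)
    for _ in range(iters):
        x = b + K @ x
    return x

K = np.array([[0.2, 0.1],
              [0.0, 0.3]])            # spectral radius < 1, so series converges
b = np.array([1.0, 1.0])
x = neumann_solve(K, b)
print(np.allclose((np.eye(2) - K) @ x, b))   # True
```

Convergence is geometric in the spectral radius of K, which is why the small-attenuation assumption matters for this style of inversion.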
Guo, L-X; Li, J; Zeng, H
2009-11-01
We present an investigation of the electromagnetic scattering from a three-dimensional (3-D) object above a two-dimensional (2-D) randomly rough surface. A Message Passing Interface-based parallel finite-difference time-domain (FDTD) approach is used, and the uniaxial perfectly matched layer (UPML) medium is adopted for truncation of the FDTD lattices, in which the finite-difference equations can be used for the total computation domain by properly choosing the uniaxial parameters. This makes the parallel FDTD algorithm easier to implement. The parallel performance with different numbers of processors is illustrated for one rough-surface realization and shows that the computation time of our parallel FDTD algorithm is dramatically reduced relative to a single-processor implementation. Finally, the composite scattering coefficients versus scattering and azimuthal angles are presented and analyzed for different conditions, including the surface roughness, the dielectric constants, the polarization, and the size of the 3-D object.
EEG artifact removal-state-of-the-art and guidelines.
Urigüen, Jose Antonio; Garcia-Zapirain, Begoña
2015-06-01
This paper presents an extensive review on the artifact removal algorithms used to remove the main sources of interference encountered in the electroencephalogram (EEG), specifically ocular, muscular and cardiac artifacts. We first introduce background knowledge on the characteristics of EEG activity, of the artifacts and of the EEG measurement model. Then, we present algorithms commonly employed in the literature and describe their key features. Lastly, principally on the basis of the results provided by various researchers, but also supported by our own experience, we compare the state-of-the-art methods in terms of reported performance, and provide guidelines on how to choose a suitable artifact removal algorithm for a given scenario. With this review we have concluded that, without prior knowledge of the recorded EEG signal or the contaminants, the safest approach is to correct the measured EEG using independent component analysis; to be precise, an algorithm based on second-order statistics such as second-order blind identification (SOBI). Other effective alternatives include extended information maximization (InfoMax) and an adaptive mixture of independent component analyzers (AMICA), based on higher-order statistics. All of these algorithms have proved particularly effective with simulations and, more importantly, with data collected in controlled recording conditions. Moreover, whenever prior knowledge is available, a constrained form of the chosen method should be used in order to incorporate such additional information. Finally, since the best-performing algorithm depends strongly on the type of EEG signal, the artifacts, and the signal-to-contaminant ratio, we believe that the optimal method for removing artifacts from the EEG consists of combining more than one algorithm to correct the signal in multiple processing stages, even though this is an option largely unexplored by researchers in the area.
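The review favors ICA-based correction (SOBI, InfoMax, AMICA); as a simpler, dependency-free point of comparison, the sketch below shows the classical regression-based ocular-artifact removal that such methods are usually benchmarked against. All signals here are synthetic assumptions.

```python
import numpy as np

# Regression-based EOG removal: subtract from each EEG channel its
# least-squares projection onto a reference EOG channel. This is the
# simple baseline the ICA family improves upon, not any of the
# reviewed algorithms themselves.
def regress_out_eog(eeg, eog):
    """eeg: (channels, samples); eog: (samples,). Returns corrected EEG."""
    coeffs = eeg @ eog / (eog @ eog)      # per-channel propagation factors
    return eeg - np.outer(coeffs, eog)

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 500)
eog = np.exp(-((t - 0.5) ** 2) / 0.001)   # a synthetic blink
brain = 1e-2 * rng.standard_normal((4, 500))
eeg = brain + np.outer([0.9, 0.7, 0.4, 0.1], eog)  # blink leaks into EEG
clean = regress_out_eog(eeg, eog)
print(bool(np.abs(clean).max() < 0.1))    # True — blink largely removed
```

The known limitation, which motivates ICA in the review, is that regression also removes any genuine brain activity correlated with the reference channel.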
Kim, K B; Shanyfelt, L M; Hahn, D W
2006-01-01
Dense-medium scattering is explored in the context of providing a quantitative measurement of turbidity, with specific application to corneal haze. A multiple-wavelength scattering technique is proposed to make use of two-color scattering response ratios, thereby providing a means for data normalization. A combination of measurements and simulations is reported to assess this technique, including light-scattering experiments for a range of polystyrene suspensions. Monte Carlo (MC) simulations were performed using a multiple-scattering algorithm based on full Mie scattering theory. The simulations were in excellent agreement with the polystyrene suspension experiments, thereby validating the MC model. The MC model was then used to simulate multiwavelength scattering in a corneal tissue model. Overall, the proposed multiwavelength scattering technique appears to be a feasible approach to quantifying dense-medium scattering such as the manifestation of corneal haze, although more complex modeling of keratocyte scattering, as well as animal studies, will be necessary.
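The two-color ratio idea can be shown with a toy response model. The exponential saturation model and its coefficients below are illustrative assumptions, not the paper's Mie/Monte Carlo model; the point is only that a ratio of two wavelengths cancels source power and collection geometry.

```python
import numpy as np

# Sketch of two-color normalization: the ratio of scattering responses
# at two wavelengths is independent of any common multiplicative factor
# (laser power, detector gain, geometry), yet still varies with
# turbidity, so it can serve as a normalized turbidity measure.
def scatter_ratio(turbidity, mu1, mu2, path=1.0):
    """Ratio of detected scattering signals at two wavelengths."""
    s1 = 1.0 - np.exp(-mu1 * turbidity * path)   # assumed response model
    s2 = 1.0 - np.exp(-mu2 * turbidity * path)
    return s1 / s2

# The ratio changes monotonically with turbidity in this model.
ratios = [scatter_ratio(t, mu1=2.0, mu2=1.0) for t in (0.1, 0.5, 1.0)]
print([round(float(r), 3) for r in ratios])
```

With this particular model the ratio reduces analytically to 1 + exp(-turbidity), which makes the monotonic dependence easy to verify.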
Theory and Application of Auger and Photoelectron Diffraction and Holography
NASA Astrophysics Data System (ADS)
Chen, Xiang
This dissertation addresses the theories and applications of three important surface analysis techniques: Auger electron diffraction (AED), x-ray photoelectron diffraction (XPD), and Auger and photoelectron holography. A full multiple-scattering scheme for the calculation of XPD, AED, and Kikuchi electron diffraction patterns from a surface cluster is described. It is used to simulate 64 eV M2,3VV and 913 eV L3VV AED patterns from Cu(001) surfaces, in order to test assertions in the literature that they are explicable by a classical "blocking" and channeling model. We find that this contention is not valid, and that only a quantum mechanical multiple-scattering calculation is able to simulate these patterns well. The same multiple-scattering simulation scheme is also used to investigate the anomalous peak shifts off the forward-scattering directions in photoelectron diffraction patterns of Mg KLL (1180 eV) and O 1s (955 eV) from MgO(001) surfaces. These shifts are explained by calculations assuming a short electron mean free path. Similar simulations of XPD from a CoSi2(111) surface for Co-3p and Si-2p normal emission agree well with experimental diffraction patterns. A filtering process aimed at eliminating the self-interference effect in photoelectron holography is developed. A better reconstructed image from Si-2p XPD from a Si(001)(2×1) surface is seen at atomic resolution. A reconstruction algorithm which corrects for the anisotropic emitter waves as well as the anisotropic atomic scattering factors is used for holographic reconstruction from a Co-3p XPD pattern from a CoSi2 surface. This new algorithm considerably improves the reconstructed image. Finally, a new reconstruction algorithm called "atomic position recovery by iterative optimization of reconstructed intensities" (APRIORI), which takes account of the self-interference terms omitted by the other holographic algorithms, is developed.
Tests on a Ni-C-O chain and a Si(111)(√3×√3)B surface suggest that this new method may overcome the twin-image problem of traditional holographic methods, reduce artifacts in real space, and even separately identify the chemical species of the scatterers.
Characterization of Surface Reflectance Variation Effects on Remote Sensing
NASA Technical Reports Server (NTRS)
Pearce, W. A.
1984-01-01
The use of Monte Carlo radiative transfer codes to simulate, at visible and infrared wavelengths, the effects on remote sensing of variables that affect classification is examined. These variables include detector viewing angle, atmospheric aerosol size distribution, aerosol vertical and horizontal distribution (e.g., finite clouds), the form of the bidirectional ground reflectance function, and horizontal variability of reflectance type and reflectivity (albedo). These simulations are used to characterize the sensitivity of observables (intensity and polarization) to variations in the underlying physical parameters, both to improve algorithms for the removal of atmospheric effects and to identify techniques that can improve classification accuracy. It was necessary to revise and validate the simulation codes (CTRANS, ARTRAN, and the Mie scattering code) to improve efficiency and accommodate a new operational environment, and to build the basic software tools for acquisition and off-line manipulation of simulation results. Initial calculations compare cases in which increasing amounts of aerosol are shifted into the stratosphere while maintaining a constant optical depth. In the case of moderate aerosol optical depth, the effect on the spread function is to scale it linearly, as would be expected from a single-scattering model. Varying the viewing angle appears to produce the same qualitative effect as modifying the vertical optical depth (for Lambertian ground reflectance).
Multiple scattering corrections to the Beer-Lambert law. 1: Open detector.
Tam, W G; Zardecki, A
1982-07-01
Multiple scattering corrections to the Beer-Lambert law are analyzed by means of a rigorous small-angle solution to the radiative transfer equation. Transmission functions for predicting the received radiant power (a directly measured quantity, in contrast to the spectral radiance appearing in the Beer-Lambert law) are derived. Numerical algorithms and results relating to multiple scattering effects for laser propagation in fog, cloud, and rain are presented.
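Why an open detector needs such corrections can be shown numerically. The single parameter `f` below (the fraction of scattered light the detector still collects) is an illustrative stand-in for the paper's small-angle radiative transfer solution, not its actual transmission function.

```python
import math

# Minimal illustration of why multiple scattering inflates apparent
# transmission: an open detector also collects forward-scattered
# photons, so the received power exceeds the Beer-Lambert prediction.
def beer_lambert(tau):
    """Ideal Beer-Lambert transmission at optical depth tau."""
    return math.exp(-tau)

def open_detector_transmission(tau, f):
    # Only the un-collected fraction (1 - f) of the scattering
    # effectively attenuates the received power (assumed toy model).
    return math.exp(-(1.0 - f) * tau)

tau = 3.0                       # optical depth typical of fog over a path
t_bl = beer_lambert(tau)
t_open = open_detector_transmission(tau, f=0.4)
print(t_open > t_bl)            # True: the open detector sees extra power
```

Inverting the uncorrected Beer-Lambert law on such data would underestimate the true optical depth, which is exactly the error the paper's transmission functions are built to avoid.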
A Fully Customized Baseline Removal Framework for Spectroscopic Applications.
Giguere, Stephen; Boucher, Thomas; Carey, C J; Mahadevan, Sridhar; Dyar, M Darby
2017-07-01
The task of proper baseline or continuum removal is common to nearly all types of spectroscopy. Its goal is to remove any portion of a signal that is irrelevant to features of interest while preserving any predictive information. Despite the importance of baseline removal, median or guessed default parameters are commonly employed, often using commercially available software supplied with instruments. Several published baseline removal algorithms have been shown to be useful for particular spectroscopic applications but their generalizability is ambiguous. The new Custom Baseline Removal (Custom BLR) method presented here generalizes the problem of baseline removal by combining operations from previously proposed methods to synthesize new correction algorithms. It creates novel methods for each technique, application, and training set, discovering new algorithms that maximize the predictive accuracy of the resulting spectroscopic models. In most cases, these learned methods either match or improve on the performance of the best alternative. Examples of these advantages are shown for three different scenarios: quantification of components in near-infrared spectra of corn and laser-induced breakdown spectroscopy data of rocks, and classification/matching of minerals using Raman spectroscopy. Software to implement this optimization is available from the authors. By removing subjectivity from this commonly encountered task, Custom BLR is a significant step toward completely automatic and general baseline removal in spectroscopic and other applications.
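One of the primitive operations a framework like Custom BLR can combine is iterative polynomial baseline fitting. The sketch below is a generic version of that primitive with invented defaults (degree, pass count), not the published method.

```python
import numpy as np

# Iterative polynomial baseline estimation: after each fit, points above
# the current baseline are clipped down to it, so peaks stop dragging
# the baseline upward. Degree and pass count are illustrative defaults.
def polynomial_baseline(y, x=None, degree=3, passes=20):
    x = np.arange(len(y)) if x is None else x
    work = y.copy()
    for _ in range(passes):
        coeffs = np.polyfit(x, work, degree)
        fit = np.polyval(coeffs, x)
        work = np.minimum(work, fit)     # clip peaks, keep the baseline
    return fit

# Synthetic spectrum: a sloped baseline plus one narrow peak.
x = np.linspace(0, 10, 400)
spectrum = (0.5 + 0.05 * x) + 2.0 * np.exp(-((x - 5) ** 2) / 0.05)
corrected = spectrum - polynomial_baseline(spectrum, x)
print(bool(abs(corrected[:100].mean()) < 0.1))   # True: flat off-peak region
```

After correction the off-peak region sits near zero while the peak height is preserved, which is the "remove the irrelevant portion, keep the predictive information" goal stated above.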
Mannan, Malik M. Naeem; Kim, Shinjung; Jeong, Myung Yung; Kamran, M. Ahmad
2016-01-01
Contamination by eye movement and blink artifacts in electroencephalogram (EEG) recordings makes the analysis of EEG data more difficult and can lead to misleading findings. Efficient removal of these artifacts from EEG data is an essential step in improving classification accuracy for brain-computer interface (BCI) development. In this paper, we propose an automatic framework based on independent component analysis (ICA) and system identification to identify and remove ocular artifacts from EEG data using a hybrid EEG and eye-tracker system. The performance of the proposed algorithm is illustrated using experimental and standard EEG datasets. The proposed algorithm not only removes ocular artifacts from the artifactual zone but also preserves the neuronal-activity-related EEG signals in the non-artifactual zone. Comparison with two state-of-the-art techniques, namely ADJUST-based ICA and REGICA, reveals the significantly improved performance of the proposed algorithm in removing eye movement and blink artifacts from EEG data. Additionally, results demonstrate that the proposed algorithm achieves lower relative error and higher mutual information between the corrected EEG and artifact-free EEG data. PMID:26907276
A model-based scatter artifacts correction for cone beam CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Wei; Zhu, Jun; Wang, Luyao
2016-04-15
Purpose: Due to the increased axial coverage of multislice computed tomography (CT) and the introduction of flat detectors, the size of x-ray illumination fields has grown dramatically, causing an increase in scatter radiation. For CT imaging, scatter is a significant issue that introduces shading artifacts, streaks, and reduced contrast and Hounsfield unit (HU) accuracy. The purpose of this work is to provide a fast and accurate scatter artifact correction algorithm for cone beam CT (CBCT) imaging. Methods: The method starts with an estimation of coarse scatter profiles for a set of CBCT data in either the image domain or the projection domain. A denoising algorithm designed specifically for Poisson signals is then applied to derive the final scatter distribution. Qualitative and quantitative evaluations were performed using thorax and abdomen phantoms with Monte Carlo (MC) simulations, experimental Catphan phantom data, and in vivo human data acquired for clinical image-guided radiation therapy. Scatter correction in both the projection domain and the image domain was conducted, and the influences of the segmentation method, mismatched attenuation coefficients, and spectrum model, as well as parameter selection, were also investigated. Results: Results show that the proposed algorithm can significantly reduce scatter artifacts and recover the correct HU in either the projection domain or the image domain. For the MC thorax phantom study, four-component segmentation yields the best results, while the results of three-component segmentation are still acceptable. The parameters (iteration number K and weight β) affect the accuracy of the scatter correction, and the results improve as K and β increase. It was found that variations in attenuation coefficient accuracy only slightly impact the performance of the proposed processing.
For the Catphan phantom data, the mean value over all pixels in the residual image is reduced from −21.8 to −0.2 HU and 0.7 HU for the projection domain and image domain, respectively. The contrast of the in vivo human images is greatly improved after correction. Conclusions: The software-based technique has a number of advantages, such as high computational efficiency and accuracy, and the capability of performing scatter correction without modifying the clinical workflow (i.e., no extra scan/measurement data are needed) or the imaging hardware. When implemented practically, this should improve the accuracy of CBCT image quantitation and significantly impact CBCT-based interventional procedures and adaptive radiation therapy.
Fluorescence lifetime measurements in heterogeneous scattering medium
NASA Astrophysics Data System (ADS)
Nishimura, Goro; Awasthi, Kamlesh; Furukawa, Daisuke
2016-07-01
Fluorescence lifetimes in heterogeneous multiple-light-scattering systems are analyzed by an algorithm that does not require solving the diffusion or radiative transfer equations. The algorithm assumes that the optical properties of the medium are constant across the excitation and emission wavelength regions. If this assumption holds and the fluorophore is a single species, the fluorescence lifetime can be determined from a set of measurements of the temporal point-spread function of the excitation light and fluorescence at two different fluorophore concentrations. The method depends neither on the heterogeneity of the optical properties of the medium nor on the excitation-detection geometry, for a sample of arbitrary shape. The algorithm was validated with indocyanine green fluorescence in phantom measurements and demonstrated in an in vivo measurement.
Photon-efficient super-resolution laser radar
NASA Astrophysics Data System (ADS)
Shin, Dongeek; Shapiro, Jeffrey H.; Goyal, Vivek K.
2017-08-01
The resolution achieved in photon-efficient active optical range imaging systems can be low due to non-idealities such as propagation through a diffuse scattering medium. We propose a constrained optimization-based framework to address extremes in scarcity of photons and blurring by a forward imaging kernel. We provide two algorithms for the resulting inverse problem: a greedy algorithm, inspired by sparse pursuit algorithms; and a convex optimization heuristic that incorporates image total variation regularization. We demonstrate that our framework outperforms existing deconvolution imaging techniques in terms of peak signal-to-noise ratio. Since our proposed method is able to super-resolve depth features using small numbers of photon counts, it can be useful for observing fine-scale phenomena in remote sensing through a scattering medium and in through-the-skin biomedical imaging applications.
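The greedy branch of such an approach can be sketched with plain orthogonal matching pursuit. This is a generic illustration of a sparse-pursuit algorithm under a linear forward model, not the paper's algorithm; the random kernel matrix `A` and sparse scene are invented for the example.

```python
import numpy as np

# Orthogonal matching pursuit: greedily pick the kernel column most
# correlated with the residual, then re-fit all picked columns by least
# squares. A stands in for the forward imaging kernel.
def omp(A, y, sparsity):
    residual, support = y.copy(), []
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 100)) / np.sqrt(40)   # underdetermined kernel
x_true = np.zeros(100)
x_true[[7, 42, 91]] = [3.0, -2.0, 1.5]             # 3-sparse "scene"
x_hat = omp(A, A @ x_true, sparsity=3)
print(np.flatnonzero(np.abs(x_hat) > 1e-6))
```

With 40 measurements of a 3-sparse scene, the greedy pursuit typically recovers the support exactly; the convex total-variation branch mentioned above trades this greediness for a global optimization.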
NASA Technical Reports Server (NTRS)
Toon, Owen B.; Mckay, C. P.; Ackerman, T. P.; Santhanam, K.
1989-01-01
The solution of the generalized two-stream approximation for radiative transfer in homogeneous multiple scattering atmospheres is extended to vertically inhomogeneous atmospheres in a manner which is numerically stable and computationally efficient. It is shown that solar energy deposition rates, photolysis rates, and infrared cooling rates all may be calculated with the simple modifications of a single algorithm. The accuracy of the algorithm is generally better than 10 percent, so that other uncertainties, such as in absorption coefficients, may often dominate the error in calculation of the quantities of interest to atmospheric studies.
A microfluidic laser scattering sensor for label-free detection of waterborne pathogens
NASA Astrophysics Data System (ADS)
Wei, Huang; Yang, Limei; Li, Feng
2016-10-01
A microfluidic-based multi-angle laser scattering (MALS) sensor capable of acquiring the scattering pattern of a single particle is demonstrated. The size and relative refractive index (RI) of polystyrene (PS) microspheres were deduced with accuracies of 60 nm and 0.001, respectively, by analyzing the scattering patterns. We measured scattering patterns of waterborne parasites, i.e., Cryptosporidium parvum (C. parvum) and Giardia lamblia (G. lamblia), and some other representative species in 1 L of water within 1 hour, and the waterborne parasites were identified with better than 96% accuracy by classifying their distinctive scattering patterns with a support vector machine (SVM) algorithm. The system provides a promising tool for label-free and rapid detection of waterborne parasites.
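The classification step can be sketched without the SVM machinery. As a dependency-free stand-in for the paper's SVM, the example below uses a nearest-centroid rule on synthetic angular patterns; the two species "templates" are invented for illustration only.

```python
import numpy as np

# Nearest-centroid classification of multi-angle scattering patterns,
# a simple stand-in for the SVM used in the paper. The exponential
# angular templates are invented, not measured patterns.
rng = np.random.default_rng(3)
angles = np.linspace(0, np.pi, 64)
templates = {"c_parvum": np.exp(-2 * angles),
             "g_lamblia": np.exp(-4 * angles)}

def classify(pattern, templates):
    names = list(templates)
    dists = [np.linalg.norm(pattern - templates[n]) for n in names]
    return names[int(np.argmin(dists))]

# A noisy measurement of one species is assigned back correctly.
noisy = templates["g_lamblia"] + 0.01 * rng.standard_normal(64)
print(classify(noisy, templates))   # g_lamblia
```

An SVM improves on this by learning a maximum-margin boundary from many labeled patterns rather than relying on hand-picked class templates.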
Lining seam elimination algorithm and surface crack detection in concrete tunnel lining
NASA Astrophysics Data System (ADS)
Qu, Zhong; Bai, Ling; An, Shi-Quan; Ju, Fang-Rong; Liu, Ling
2016-11-01
Due to the particular nature of concrete tunnel lining surfaces and the diversity of detection environments, such as uneven illumination, smudges, localized rock falls, water leakage, and the inherent seams of the lining structure, existing crack detection algorithms cannot detect real cracks accurately. This paper proposes an algorithm that combines lining seam elimination with an improved percolation detection algorithm based on grid-cell analysis for surface crack detection in concrete tunnel lining. First, the characteristics of pixels within overlapping grid cells are checked to remove background noise and generate the percolation seed map (PSM). Second, cracks are detected from the PSM by an accelerated percolation algorithm, so that fracture unit areas can be scanned and connected. Finally, the real surface cracks in the concrete tunnel lining are obtained by removing the lining seams and performing percolation denoising. Experimental results show that the proposed algorithm can accurately, quickly, and effectively detect real surface cracks. Furthermore, by removing the lining seams it fills a gap in existing surface crack detection for concrete tunnel lining.
NASA Astrophysics Data System (ADS)
Sekihara, Kensuke; Kawabata, Yuya; Ushio, Shuta; Sumiya, Satoshi; Kawabata, Shigenori; Adachi, Yoshiaki; Nagarajan, Srikantan S.
2016-06-01
Objective. In functional electrophysiological imaging, signals are often contaminated by interference that can be of considerable magnitude compared to the signals of interest. This paper proposes a novel algorithm for removing such interference that does not require separate noise measurements. Approach. The algorithm is based on a dual definition of the signal subspace in the spatial and time domains. Since the algorithm makes use of this duality, it is named the dual signal subspace projection (DSSP). The DSSP algorithm first projects the columns of the measured data matrix onto the inside and outside of the spatial-domain signal subspace, creating a set of two preprocessed data matrices. The intersection of the row spans of these two matrices is estimated as the time-domain interference subspace. The original data matrix is then projected onto the subspace orthogonal to this interference subspace. Main results. The DSSP algorithm is validated using computer simulations and two sets of real biomagnetic data: spinal cord evoked field data measured from a healthy volunteer, and magnetoencephalography data from a patient with a vagus nerve stimulator. Significance. The proposed DSSP algorithm is effective for removing overlapping interference in a wide variety of biomagnetic measurements.
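The final projection step of such an algorithm is easy to demonstrate. In the sketch below the interference subspace is simply handed to the projector as a known sinusoid, an illustrative assumption; DSSP itself estimates that subspace from the spatial/temporal duality described above.

```python
import numpy as np

# Projecting a data matrix onto the orthogonal complement of a
# (time-domain) interference subspace, the step DSSP ends with.
def project_out(data, interference_basis):
    """data: (channels, samples); basis: (samples, k)."""
    Q, _ = np.linalg.qr(interference_basis)    # orthonormalize the basis
    return data - (data @ Q) @ Q.T             # remove the row-space component

t = np.arange(200) / 200
signal = np.outer([1.0, 0.5], np.sin(2 * np.pi * 7 * t))    # 7 Hz "brain" signal
interf = np.outer([2.0, 2.0], np.sin(2 * np.pi * 50 * t))   # 50 Hz interference
basis = np.sin(2 * np.pi * 50 * t)[:, None]                 # assumed known here
cleaned = project_out(signal + interf, basis)
print(np.allclose(cleaned, signal))   # True
```

Because the two sinusoids are orthogonal over this sampling grid, the projection removes the interference exactly while leaving the signal untouched; with real data the quality of the result hinges on how well the interference subspace is estimated.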
Auto-recognition of surfaces and auto-generation of material removal volume for finishing process
NASA Astrophysics Data System (ADS)
Kataraki, Pramod S.; Salman Abu Mansor, Mohd
2018-03-01
Auto-recognition of surfaces and auto-generation of material removal volumes for the recognised surfaces have become necessary to support downstream manufacturing activities such as automated process planning and scheduling. A few researchers have contributed methods for generating the material removal volume of a product, but these suffer from discontinuity between two adjacent material removal volumes generated from two adjacent faces that form a convex geometry. To overcome this limitation, an algorithm was developed that automatically recognises the surfaces of a computer aided design (CAD) model and auto-generates the material removal volume for the finishing process of the recognised surfaces. The surfaces of the CAD model are successfully recognised by the developed algorithm and the required material removal volume is obtained. The material removal volume discontinuity that limited earlier studies is eliminated.
NASA Astrophysics Data System (ADS)
Kazantsev, I. G.; Olsen, U. L.; Poulsen, H. F.; Hansen, P. C.
2018-02-01
We investigate the idealized mathematical model of single scatter in PET for a detector system possessing excellent energy resolution. The model has the form of integral transforms estimating the distribution of photons undergoing a single Compton scattering with a certain angle. The total single scatter is interpreted as the volume integral over scatter points that constitute a rotation body with a football shape, while single scattering with a certain angle is evaluated as the surface integral over the boundary of the rotation body. The equations for total and sample single scatter calculations are derived using a single scatter simulation approximation. We show that the three-dimensional slice-by-slice filtered backprojection algorithm is applicable for scatter data inversion provided that the attenuation map is assumed to be constant. The results of the numerical experiments are presented.
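For a detector with excellent energy resolution, the link between detected energy and scattering angle is the Compton formula, which underlies any such single-scatter model. The helper below is a generic physics sketch of that inversion, not the paper's integral-transform machinery.

```python
import math

# For a photon of energy E scattered once, the detected energy E'
# satisfies E' = E / (1 + (E / m_e c^2)(1 - cos θ)). Solving for θ
# turns an energy measurement into a scattering angle, which is what
# selects the "football"-shaped surface of possible scatter points.
M_E = 511.0   # electron rest energy, keV

def compton_angle(e_scattered, e_in=511.0):
    """Scattering angle (radians) from the detected photon energy (keV)."""
    cos_theta = 1.0 - M_E * (e_in - e_scattered) / (e_in * e_scattered)
    return math.acos(cos_theta)

# A 511 keV annihilation photon detected at 255.5 keV scattered by 90°.
angle = compton_angle(255.5)
print(round(math.degrees(angle), 1))   # 90.0
```

Each detected energy thus corresponds to one scattering angle, i.e. one surface of the rotation body, which is why the single-angle contribution above is a surface integral.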
Quantum hydrodynamics: capturing a reactive scattering resonance.
Derrickson, Sean W; Bittner, Eric R; Kendrick, Brian K
2005-08-01
The hydrodynamic equations of motion associated with the de Broglie-Bohm formulation of quantum mechanics are solved using a meshless method based upon a moving least-squares approach. An arbitrary Lagrangian-Eulerian frame of reference and a regridding algorithm which adds and deletes computational points are used to maintain a uniform and nearly constant interparticle spacing. The methodology also uses averaged fields to maintain unitary time evolution. The numerical instabilities associated with the formation of nodes in the reflected portion of the wave packet are avoided by adding artificial viscosity to the equations of motion. A new and more robust artificial viscosity algorithm is presented which gives accurate scattering results and is capable of capturing quantum resonances. The methodology is applied to a one-dimensional model chemical reaction that is known to exhibit a quantum resonance. The correlation function approach is used to compute the reactive scattering matrix, reaction probability, and time delay as a function of energy. Excellent agreement is obtained between the scattering results based upon the quantum hydrodynamic approach and those based upon standard quantum mechanics. This is the first clear demonstration of the ability of moving grid approaches to accurately and robustly reproduce resonance structures in a scattering system.
Light scattering from normal and cervical cancer cells.
Lin, Xiaogang; Wan, Nan; Weng, Lingdong; Zhou, Yong
2017-04-20
Light scattering characteristics play an important role in optical imaging and diagnostic applications, and are especially vital for optical detection of cells. In this paper, we use the finite-difference time-domain (FDTD) algorithm to simulate the propagation and scattering of light in biological cells. Two-dimensional scattering models of normal and cancerous cells were set up based on the FDTD algorithm, with elliptical organelles such as mitochondria. Based on these models, three aspects of the scattering characteristics were studied. First, the radar cross section (RCS) distribution curves of the corresponding cell models were calculated, and the relationships between the size and refractive index of the nucleus and the light scattering information were analyzed for three stages of cell canceration. The RCS values increase with the nucleo-cytoplasmic ratio during the cancerous process when the scattering angle ranges from 0° to 20°. Second, the effect of organelles on the scattering was analyzed. The peak RCS value of cells with mitochondria is higher than that of cells without mitochondria when the scattering angle ranges from 20° to 180°. Third, we demonstrated the importance of cell shape, revealed by two typical idealized cells: round cells and oval cells. When the scattering angle ranges from 0° to 80°, the peak values and the frequencies of appearance of the peaks from the two models are roughly similar.
It can be concluded that: (1) the size of the nuclei and the change of the refractive index of cells have a certain impact on light scattering information of the whole cell; (2) mitochondria and other small organelles contribute to the cell light scattering characteristics in the larger scattering angle area; and (3) the change of the cell shape significantly influences the value of scattering peak and the deviation of scattering peak position. The results of the numerical simulation will guide subsequent experiments and early diagnosis of cervical cancer.
Magnetic resonance image restoration via dictionary learning under spatially adaptive constraints.
Wang, Shanshan; Xia, Yong; Dong, Pei; Feng, David Dagan; Luo, Jianhua; Huang, Qiu
2013-01-01
This paper proposes a spatially adaptive constrained dictionary learning (SAC-DL) algorithm for Rician noise removal in magnitude magnetic resonance (MR) images. The algorithm exploits both the strength of dictionary learning in preserving image structures and the robustness of local variance estimation in removing signal-dependent Rician noise. The magnitude image is first separated into a number of partly overlapping image patches. The statistics of each patch are collected and analyzed to obtain a local noise variance. To better adapt to Rician noise, a correction factor is formulated using the local signal-to-noise ratio (SNR). Finally, the trained dictionary is used to denoise each image patch under spatially adaptive constraints. The proposed algorithm has been compared with the popular nonlocal means (NLM) filtering and the unbiased NLM (UNLM) algorithm on simulated T1-weighted, T2-weighted, and PD-weighted MR images. Our results suggest that the SAC-DL algorithm preserves more image structure than NLM while effectively removing noise, and that it is also superior to UNLM at low noise levels.
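As background for the signal-dependent Rician setting, the classical moment correction is easy to state: for magnitude data, E[M²] = A² + 2σ², so subtracting the noise floor gives an unbiased estimate of the squared true intensity. The sketch below shows that correction as context; it is not the SAC-DL algorithm itself, and the image values are synthetic.

```python
import numpy as np

# Classical Rician bias correction: since E[M^2] = A^2 + 2*sigma^2,
# subtracting 2*sigma^2 from the squared magnitude (clipped at zero)
# estimates the true intensity A.
def rician_correct(magnitude, sigma):
    a2 = np.maximum(magnitude ** 2 - 2.0 * sigma ** 2, 0.0)
    return np.sqrt(a2)

# Synthetic Rician data: magnitude of a complex signal with Gaussian
# noise in both quadrature channels (assumed test values).
rng = np.random.default_rng(4)
a, sigma = 100.0, 10.0
noisy = np.hypot(a + sigma * rng.standard_normal(100_000),
                 sigma * rng.standard_normal(100_000))
corrected = rician_correct(noisy, sigma)
print(bool(abs(np.mean(corrected ** 2) - a ** 2) / a ** 2 < 0.01))   # True
```

This moment trick removes the bias but not the fluctuations; removing the latter while preserving structure is what the dictionary-learning stage above is for.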
Shadow Detection Based on Regions of Light Sources for Object Extraction in Nighttime Video
Lee, Gil-beom; Lee, Myeong-jin; Lee, Woo-Kyung; Park, Joo-heon; Kim, Tae-Hwan
2017-01-01
Intelligent video surveillance systems detect pre-configured surveillance events through background modeling, foreground and object extraction, object tracking, and event detection. Shadow regions inside video frames sometimes appear as foreground objects, interfere with ensuing processes, and finally degrade the event detection performance of the systems. Conventional studies have mostly used intensity, color, texture, and geometric information to perform shadow detection in daytime video, but these methods lack the capability of removing shadows in nighttime video. In this paper, a novel shadow detection algorithm for nighttime video is proposed; this algorithm partitions each foreground object based on the object’s vertical histogram and screens out shadow objects by validating their orientations heading toward regions of light sources. From the experimental results, it can be seen that the proposed algorithm shows more than 93.8% shadow removal and 89.9% object extraction rates for nighttime video sequences, and the algorithm outperforms conventional shadow removal algorithms designed for daytime videos. PMID:28327515
[Steam and air co-injection in removing TCE in 2D-sand box].
Wang, Ning; Peng, Sheng; Chen, Jia-Jun
2014-07-01
Steam and air co-injection is a newly developed and promising soil remediation technique for non-aqueous phase liquids (NAPLs) in the vadose zone. In this study, in order to investigate the mechanism of the remediation process, trichloroethylene (TCE) removal using steam and air co-injection was carried out in a 2-dimensional sandbox with different layered sand structures. The results showed that co-injection markedly reduced the "tailing" effect compared to soil vapor extraction (SVE), and that the remediation process of steam and air co-injection can be divided into an SVE stage, a steam strengthening stage, and a heat penetration stage. The experiment with a scattered contaminant area showed a higher removal ratio and faster removal. The removal ratios from the two experiments were 93.5% and 88.2%, and the removal periods were 83.9 min and 90.6 min, respectively. Steam strengthened the heat penetration stage. The temperature transition region was wider in the scattered NAPL distribution experiment, which reduced the accumulation of TCE. A slight downward movement of TCE was observed in the experiment with TCE initially distributed in a fine sand zone, and this downward movement reduced the TCE removal ratio.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hernandez Reyes, B; Rodriguez Perez, E; Sosa Aquino, M
Purpose: To implement a back-projection algorithm for 2D dose reconstructions for in vivo dosimetry in radiation therapy using an Electronic Portal Imaging Device (EPID) based on amorphous silicon. Methods: An EPID system was used to calculate the dose-response function, pixel sensitivity map, exponential scatter kernels and beam hardening correction for the back-projection algorithm. All measurements were done with a 6 MV beam. A 2D dose reconstruction for an irradiated water phantom (30×30×30 cm{sup 3}) was done to verify the algorithm implementation. A gamma index evaluation between the 2D reconstructed dose and the dose calculated with a treatment planning system (TPS) was done. Results: A linear fit was found for the dose-response function. The pixel sensitivity map has a radial symmetry and was calculated with a profile of the pixel sensitivity variation. The parameters for the scatter kernels were determined only for a 6 MV beam. The primary dose was estimated applying the scatter kernel within the EPID and the scatter kernel within the patient. The beam hardening coefficient is σBH = 3.788×10{sup −4} cm{sup 2} and the effective linear attenuation coefficient is µAC = 0.06084 cm{sup −1}. 95% of the evaluated points had γ values no greater than unity, with gamma criteria of ΔD = 3% and Δd = 3 mm, and within the 50% isodose surface. Conclusion: The use of EPID systems proved to be a fast tool for in vivo dosimetry, but the implementation is more complex than that developed for pre-treatment dose verification; therefore, a simpler method must be investigated. The accuracy of this method could be improved by modifying the algorithm in order to compare lower isodose curves.
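The 3%/3 mm gamma criterion used above can be sketched as a brute-force 2D evaluation. This is an illustrative sketch, not the authors' implementation; the grid spacing and criteria are arbitrary arguments:

```python
import numpy as np

def gamma_index(dose_ref, dose_eval, spacing, dd=0.03, dta=3.0):
    """Global 2D gamma evaluation (3%/3 mm by default).

    For every reference point, search all evaluated points for the minimum
    combined dose-difference / distance-to-agreement metric. Brute force
    and O(N^2), so only suitable for small grids.
    """
    ny, nx = dose_ref.shape
    ys, xs = np.mgrid[0:ny, 0:nx]
    coords = np.stack([ys.ravel() * spacing, xs.ravel() * spacing], axis=1)
    d_eval = dose_eval.ravel()
    dd_abs = dd * dose_ref.max()          # global dose criterion
    gamma = np.empty(dose_ref.size)
    for i, (p, d) in enumerate(zip(coords, dose_ref.ravel())):
        dist2 = ((coords - p) ** 2).sum(axis=1) / dta**2
        dose2 = (d_eval - d) ** 2 / dd_abs**2
        gamma[i] = np.sqrt((dist2 + dose2).min())
    return gamma.reshape(dose_ref.shape)
```

A point passes the criterion when its gamma value is no greater than unity, as in the 95% pass rate reported above.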
USDA-ARS?s Scientific Manuscript database
Hyperspectral scattering technique provides a means for assessing the structural and/or physical properties of apples. It could thus be useful for detection of apple mealiness, which is a symptom of physiological disorder, resulting in an undesirable texture and taste for apples and degrading their ...
Passive microwave remote sensing of rainfall with SSM/I: Algorithm development and implementation
NASA Technical Reports Server (NTRS)
Ferriday, James G.; Avery, Susan K.
1994-01-01
A physically based algorithm sensitive to emission and scattering is used to estimate rainfall using the Special Sensor Microwave/Imager (SSM/I). The algorithm is derived from radiative transfer calculations through an atmospheric cloud model specifying vertical distributions of ice and liquid hydrometeors as a function of rain rate. The algorithm is structured in two parts: SSM/I brightness temperatures are screened to detect rainfall and are then used in rain-rate calculation. The screening process distinguishes between nonraining background conditions and emission and scattering associated with hydrometeors. Thermometric temperature and polarization thresholds determined from the radiative transfer calculations are used to detect rain, whereas the rain-rate calculation is based on a linear function fit to a linear combination of channels. Separate calculations for ocean and land account for different background conditions. The rain-rate calculation is constructed to respond to both emission and scattering, reduce extraneous atmospheric and surface effects, and to correct for beam filling. The resulting SSM/I rain-rate estimates are compared to three precipitation radars as well as to a dynamically simulated rainfall event. Global estimates from the SSM/I algorithm are also compared to continental and shipboard measurements over a 4-month period. The algorithm is found to accurately describe both localized instantaneous rainfall events and global monthly patterns over both land and ocean. Over land the 4-month mean difference between SSM/I and the Global Precipitation Climatology Center continental rain gauge database is less than 10%. Over the ocean, the mean difference between SSM/I and the Legates and Willmott global shipboard rain gauge climatology is less than 20%.
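The two-part structure — threshold screening followed by a linear function of a linear combination of channels — can be sketched as follows. All thresholds, channel choices, and regression coefficients here are hypothetical placeholders; the operational values come from the radiative transfer calculations described above:

```python
# Hypothetical thresholds and regression weights for illustration only;
# the operational values are derived from radiative transfer calculations.
T_SCATTER_MAX = 250.0    # K: depressed 85 GHz Tb indicating scattering by ice
POLARIZATION_MIN = 10.0  # K: ocean background is strongly polarized at 19 GHz

def screen_rain(tb19v, tb19h, tb85v):
    """Flag an ocean pixel as raining.

    A low 19 GHz polarization difference (emission by rain) or a depressed
    85 GHz brightness temperature (scattering by ice) both indicate
    hydrometeors rather than the nonraining background.
    """
    emission = (tb19v - tb19h) < POLARIZATION_MIN
    scattering = tb85v < T_SCATTER_MAX
    return emission or scattering

def rain_rate(tb19v, tb22v, tb85v, coeffs=(60.0, -0.15, -0.05, -0.02)):
    """Rain rate as a linear function of a linear combination of channels
    (illustrative coefficients, mm/h)."""
    a0, a1, a2, a3 = coeffs
    rr = a0 + a1 * tb19v + a2 * tb22v + a3 * tb85v
    return max(rr, 0.0)  # a rain rate cannot be negative
```

Separate coefficient sets for ocean and land would account for the different background conditions, as in the algorithm above.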
Disk-integrated reflection light curves of planets
NASA Astrophysics Data System (ADS)
Garcia Munoz, A.
2014-03-01
The light scattered by a planet's atmosphere contains valuable information on the planet's composition and aerosol content. Typically, the interpretation of that information requires elaborate radiative transport models accounting for the absorption and scattering processes undergone by the star photons on their passage through the atmosphere. I have been working on a particular family of algorithms based on Backward Monte Carlo (BMC) integration for solving the multiple-scattering problem in atmospheric media. BMC algorithms simulate statistically the photon trajectories in the reverse order that they actually occur, i.e. they trace the photons from the detector through the atmospheric medium and onwards to the illumination source following probability laws dictated by the medium's optical properties. BMC algorithms are versatile, as they can handle diverse viewing and illumination geometries, and can readily accommodate various physical phenomena. As will be shown, BMC algorithms are very well suited for the prediction of magnitudes integrated over a planet's disk (whether uniform or not). Disk-integrated magnitudes are relevant in the current context of exploration of extrasolar planets because spatial resolution of these objects will not be technologically feasible in the near future. I have been working on various predictions for the disk-integrated properties of planets that demonstrate the capacities of the BMC algorithm. These cases include the variability of the Earth's integrated signal caused by diurnal and seasonal changes in the surface reflectance and cloudiness, or by sporadic injection of large amounts of volcanic particles into the atmosphere. Since the implemented BMC algorithm includes a polarization mode, these examples also serve to illustrate the potential of polarimetry in the characterization of both Solar System and extrasolar planets.
The work is complemented with the analysis of disk-integrated photometric observations of Earth and Venus drawn from various sources.
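The backward tracing idea can be illustrated with a deliberately minimal sketch: a plane-parallel, isotropically scattering slab over a black surface, with "local estimation" tallying the attenuated direct solar beam at every reverse-traced scattering event. This is a toy model (up to a normalization constant), not the author's polarized, spherical-geometry code:

```python
import math
import random

def bmc_reflected_intensity(tau_total, omega0, mu_obs, mu_sun,
                            n_photons=20000, seed=1):
    """Backward Monte Carlo estimate (up to normalization) of sunlight
    reflected by a plane-parallel, isotropically scattering slab.

    Photons are launched from the detector and traced in reverse through
    the medium; at every scattering event the direct solar beam attenuated
    down to that depth is tallied (local estimation).
    """
    rng = random.Random(seed)
    tally = 0.0
    for _ in range(n_photons):
        tau, mu, weight = 0.0, mu_obs, 1.0  # optical depth grows downward
        while weight > 1e-6:
            tau += mu * (-math.log(1.0 - rng.random()))  # sample a free path
            if tau < 0.0 or tau > tau_total:             # photon left the slab
                break
            weight *= omega0                 # survival at the scattering event
            # local estimate: direct sunlight attenuated down to depth tau
            tally += weight * math.exp(-tau / mu_sun)
            mu = 2.0 * rng.random() - 1.0    # isotropic rescattering direction
    return tally / n_photons
```

Brighter atmospheres (higher single-scattering albedo `omega0`) return larger tallies, and the reverse-traced geometry makes arbitrary viewing and illumination angles equally cheap, which is what makes disk integration convenient.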
A comparative study of AGN feedback algorithms
NASA Astrophysics Data System (ADS)
Wurster, J.; Thacker, R. J.
2013-05-01
Modelling active galactic nuclei (AGN) feedback in numerical simulations is both technically and theoretically challenging, with numerous approaches having been published in the literature. We present a study of five distinct approaches to modelling AGN feedback within gravitohydrodynamic simulations of major mergers of Milky Way-sized galaxies. To constrain differences to only be between AGN feedback models, all simulations start from the same initial conditions and use the same star formation algorithm. Most AGN feedback algorithms have five key aspects: the black hole accretion rate, energy feedback rate and method, particle accretion algorithm, black hole advection algorithm and black hole merger algorithm. All models follow different accretion histories, and in some cases, accretion rates differ by up to three orders of magnitude at any given time. We consider models with either thermal or kinetic feedback, with the associated energy deposited locally around the black hole. Each feedback algorithm modifies the region around the black hole to different extents, yielding gas densities and temperatures within r ˜ 200 pc that differ by up to six orders of magnitude at any given time. The particle accretion algorithms usually maintain good agreement between the total mass accreted by ∫Ṁ dt and the total mass of gas particles removed from the simulation, although not all algorithms guarantee this to be true. The black hole advection algorithms dampen inappropriate dragging of the black holes by two-body interactions. Advecting the black hole a limited distance based upon local mass distributions has many desirable properties, such as avoiding large artificial jumps and allowing the possibility of the black hole remaining in a gas void. Lastly, two black holes instantly merge when given criteria are met, and we find a range of merger times for different criteria.
This is important since the AGN feedback rate changes across the merger in a way that is dependent on the specific accretion algorithm used. Using the MBH-σ relation as a diagnostic of the remnants yields three models that lie within the one-sigma scatter of the observed relation and two that fall below the expected relation. The wide variation in accretion behaviours of the models reinforces the fact that there remains much to be learnt about the evolution of galactic nuclei.
Comparison of atmospheric correction algorithms for the Coastal Zone Color Scanner
NASA Technical Reports Server (NTRS)
Tanis, F. J.; Jain, S. C.
1984-01-01
Before Nimbus-7 Coastal Zone Color Scanner (CZCS) data can be used to distinguish between coastal water types, methods must be developed for the removal of spatial variations in aerosol path radiance, which can dominate radiance measurements made by the satellite. An assessment is presently made of the ability of four different algorithms to quantitatively remove haze effects; each was adapted for the extraction of the required scene-dependent parameters during an initial pass through the data set. The CZCS correction algorithms considered are (1) the Gordon (1981, 1983) algorithm; (2) the Smith and Wilson (1981) iterative algorithm; (3) the pseudo-optical depth method; and (4) the residual component algorithm.
An advanced algorithm for deformation estimation in non-urban areas
NASA Astrophysics Data System (ADS)
Goel, Kanika; Adam, Nico
2012-09-01
This paper presents an advanced differential SAR interferometry stacking algorithm for high resolution deformation monitoring in non-urban areas, with a focus on distributed scatterers (DSs). Techniques such as the Small Baseline Subset Algorithm (SBAS) have been proposed for processing DSs. SBAS makes use of small baseline differential interferogram subsets. Singular value decomposition (SVD), i.e. L2 norm minimization, is applied to link independent subsets separated by large baselines. However, the interferograms used in SBAS are multilooked using a rectangular window to reduce phase noise caused, for instance, by temporal decorrelation, resulting in a loss of resolution and the superposition of topography and deformation signals from different objects. Moreover, these have to be individually phase unwrapped, which can be especially difficult in natural terrains. An improved deformation estimation technique is presented here which exploits high resolution SAR data and is suitable for rural areas. The implemented method makes use of small baseline differential interferograms and incorporates object-adaptive spatial phase filtering and residual topography removal for accurate phase and coherence estimation, while preserving the high resolution provided by modern satellites. This is followed by retrieval of deformation via the SBAS approach, wherein the phase inversion is performed using an L1 norm minimization, which is more robust to the typical phase unwrapping errors encountered in non-urban areas. Meter resolution TerraSAR-X data of an underground gas storage reservoir in Germany are used to demonstrate the effectiveness of this newly developed technique in rural areas.
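The L1-norm phase inversion step can be sketched with iteratively reweighted least squares (IRLS) on a toy SBAS-style system; the design matrix, data, and simulated unwrapping error below are illustrative, not the paper's processing chain:

```python
import numpy as np

def l1_inversion(A, b, n_iter=50, eps=1e-6):
    """Solve min ||A x - b||_1 by iteratively reweighted least squares.

    In an SBAS-type stack, each row of A selects the phase increments
    spanned by one small-baseline interferogram and b holds the observed
    unwrapped phases. The L1 norm is far less sensitive to sparse,
    gross phase-unwrapping errors than the usual L2/SVD solution.
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # L2 starting point
    for _ in range(n_iter):
        r = A @ x - b
        w = 1.0 / np.sqrt(r**2 + eps)          # downweight large residuals
        W = np.diag(w)
        x = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
    return x

# Toy example: 4 acquisitions (3 phase increments), 6 interferograms,
# one of which carries a gross simulated unwrapping error.
A = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 1],
              [1, 1, 0],
              [0, 1, 1],
              [1, 1, 1]], dtype=float)
b = A @ np.array([1.0, 2.0, 3.0])
b[0] += 6.28                                   # simulated unwrapping error
x_l1 = l1_inversion(A, b)
```

Here the redundant interferogram network lets the L1 solution ignore the corrupted observation, whereas the plain L2 solution would smear it across all increments.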
Han, Zhifeng; Liu, Jianye; Li, Rongbing; Zeng, Qinghua; Wang, Yi
2017-07-04
BeiDou system navigation messages are modulated with a secondary NH (Neumann-Hoffman) code of 1 kbps, whose frequent bit transitions limit the coherent integration time to 1 millisecond. Therefore, a bit synchronization algorithm is necessary to obtain bit edges and NH code phases. In order to realize bit synchronization for BeiDou weak signals with large frequency deviation, a bit synchronization algorithm based on differential coherence and maximum likelihood detection is proposed. Firstly, a differential coherent approach is used to remove the effect of frequency deviation, with the differential delay time set to a multiple of the bit cycle to remove the influence of the NH code. Secondly, maximum likelihood function detection is used to improve the detection probability of weak signals. Finally, Monte Carlo simulations are conducted to analyze the detection performance of the proposed algorithm compared with a traditional algorithm at C/N0s of 20-40 dB-Hz and different frequency deviations. The results show that the proposed algorithm outperforms the traditional method at a frequency deviation of 50 Hz. This algorithm can effectively remove the effect of the BeiDou NH code and weaken the influence of frequency deviation. To confirm the feasibility of the proposed algorithm, real data tests are conducted. The proposed algorithm is suitable for BeiDou weak signal bit synchronization with large frequency deviation.
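The differential-coherent, maximum-likelihood edge search can be sketched as follows. This is a toy model operating on 1 ms complex prompt correlator outputs; the function and variable names are hypothetical and this is not the authors' implementation:

```python
import numpy as np

def bit_sync_diff_ml(prompt, bit_len=20, delay_bits=1):
    """Estimate the bit-edge offset (0..bit_len-1) of an NH-modulated
    signal from 1 ms complex prompt correlator outputs.

    Multiplying each sample by the conjugate of the sample one full bit
    earlier cancels the periodic NH code and turns a residual frequency
    offset into a constant phase rotation; the candidate edge whose
    per-bit differential sums have the largest total magnitude is chosen
    (a maximum-likelihood-style detection over the candidate edges).
    """
    delay = bit_len * delay_bits                      # multiple of a bit cycle
    diff = prompt[delay:] * np.conj(prompt[:-delay])  # differential coherence
    metrics = np.zeros(bit_len)
    for offset in range(bit_len):
        seg = diff[offset:]
        n_bits = len(seg) // bit_len
        sums = seg[:n_bits * bit_len].reshape(n_bits, bit_len).sum(axis=1)
        metrics[offset] = np.abs(sums).sum()
    return int(np.argmax(metrics))
```

Because the differential delay is a whole bit cycle, the 20 ms NH pattern cancels identically, and a constant frequency offset only rotates the phase of the per-bit sums without reducing their magnitude.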
NASA Astrophysics Data System (ADS)
Shi, Cheng; Liu, Fang; Li, Ling-Ling; Hao, Hong-Xia
2014-01-01
The goal of pan-sharpening is to obtain an image with higher spatial resolution and better spectral information. However, the resolution of the pan-sharpened image is seriously affected by thin clouds. For a single image, filtering algorithms are widely used to remove clouds. These methods can remove clouds effectively, but the loss of detail in the cloud-removed image is also serious. To solve this problem, a pan-sharpening algorithm that removes thin clouds via mask dodging and the nonsubsampled shift-invariant shearlet transform (NSST) is proposed. For low-resolution multispectral (LR MS) and high-resolution panchromatic images with thin clouds, a mask dodging method is used to remove the clouds. For the cloud-removed LR MS image, an adaptive principal component analysis transform is proposed to balance the spectral information and spatial resolution in the pan-sharpened image. Since the cloud removal process causes a loss of detail, a weight matrix is designed to enhance the details of the cloud regions in the pan-sharpening process, while noncloud regions remain unchanged. The details of the image are obtained by the NSST. Experimental results, assessed both visually and with evaluation metrics, demonstrate that the proposed method preserves better spectral information and spatial resolution, especially for images with thin clouds.
Feasibility of Raman spectroscopy in vitro after 5-ALA-based fluorescence diagnosis in the bladder
NASA Astrophysics Data System (ADS)
Grimbergen, M. C. M.; van Swol, C. F. P.; van Moorselaar, R. J. A.; Mahadevan-Jansen, A.,; Stone, N.
2006-02-01
Photodynamic diagnosis (PDD) has become popular in bladder cancer detection. Several studies have, however, shown an increased false positive biopsy rate under PDD guidance compared to conventional cystoscopy. Raman spectroscopy is an optical technique that utilizes molecule-specific, inelastic scattering of light photons to interrogate biological tissues, and it can successfully differentiate epithelial neoplasia from normal tissue and inflammation in vitro. This investigation was performed to show the feasibility of NIR Raman spectroscopy in vitro on biopsies obtained under guidance of 5-ALA induced PPIX fluorescence imaging. Raman spectra of a PPIX solution were measured to obtain a characteristic signature for the photosensitizer without contributions from tissue constituents. Biopsies were obtained from patients with known bladder cancer instilled with 50 ml, 5 mg 5-ALA two hours prior to trans-urethral resection of tumor (TURT). Additional biopsies were obtained at a fluorescent and a non-fluorescent area, snap-frozen in liquid nitrogen and stored at -80 °C. Each biopsy was thawed before measurements (10 sec integration time) with a confocal Raman system (Renishaw, Gloucestershire, UK). The 830 nm excitation (300 mW) source is focused on the tissue by a 20X ultra-long-working-distance objective. Differences in fluorescence background between the two groups were removed by means of a specially developed fluorescence subtraction algorithm. Raman spectra from ALA biopsies showed differing fluorescence backgrounds, which could be effectively removed by the fluorescence subtraction algorithm. This investigation thus characterizes the interaction of ALA-induced PPIX with Raman spectroscopy in bladder samples. Combination of these techniques in vivo may lead to a viable method of optical biopsies in bladder cancer detection.
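A common form of such a background subtraction step is iterative ("modified") polynomial fitting; the sketch below is a generic stand-in under that assumption, not the authors' specially developed algorithm:

```python
import numpy as np

def subtract_fluorescence(wavenumber, spectrum, order=5, n_iter=100):
    """Remove a broad fluorescence background from a Raman spectrum by
    iterative ("modified") polynomial fitting.

    At each pass the working spectrum is clipped to the current polynomial
    fit, so sharp Raman peaks are progressively excluded from the fit while
    the smooth fluorescence baseline is retained.
    """
    work = spectrum.astype(float).copy()
    for _ in range(n_iter):
        coeffs = np.polyfit(wavenumber, work, order)
        baseline = np.polyval(coeffs, wavenumber)
        work = np.minimum(work, baseline)   # clip peaks above the fit
    return spectrum - baseline
```

The polynomial order is chosen just high enough to follow the fluorescence shape without absorbing the Raman bands themselves.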
Design of the algorithm of photons migration in the multilayer skin structure
NASA Astrophysics Data System (ADS)
Bulykina, Anastasiia B.; Ryzhova, Victoria A.; Korotaev, Valery V.; Samokhin, Nikita Y.
2017-06-01
The design of approaches and methods for diagnosing oncological diseases is of special significance, since it allows tumors of any kind to be detected at early stages. The development of optical and laser technologies has increased the number of methods available for diagnostic studies of oncological diseases. A promising area of biomedical diagnostics is the development of automated nondestructive testing systems for studying the polarizing properties of skin based on backscattered radiation detection. Characterizing the polarizing properties of the examined tissue makes it possible to study changes in its structural properties under the influence of various pathologies. Consequently, the measurement and analysis of the polarizing properties of scattered optical radiation, for the development of methods for diagnosis and imaging of skin in vivo, appear relevant. The purpose of this research is to design an algorithm of photon migration in the multilayer skin structure. The designed algorithm is based on the Monte Carlo method, implemented as a tracking of the paths of photons that experience random discrete direction changes until they are released from the analyzed area or their intensity decreases to negligible levels. The modeling algorithm consists of generating the medium and source characteristics; generating a photon with given spatial coordinates and polar and azimuthal angles; calculating the photon weight reduction due to specular and diffuse reflection; determining the photon mean free path; determining the photon's new direction after random scattering with a Henyey-Greenstein phase function; and calculating the medium's absorption. The biological tissue is modeled as a homogeneous scattering sheet characterized by absorption, scattering, and anisotropy coefficients.
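Two of the steps listed above — sampling the scattering angle from a Henyey-Greenstein phase function and sampling the photon mean free path — can be sketched as follows (illustrative helper functions using the standard formulas, not the authors' full model):

```python
import math
import random

def sample_hg_cos_theta(g, rng=random):
    """Sample the cosine of the scattering angle from the Henyey-Greenstein
    phase function with anisotropy factor g, via the standard inverse-CDF
    formula; g = 0 reduces to isotropic scattering."""
    xi = rng.random()
    if abs(g) < 1e-6:                      # isotropic limit
        return 2.0 * xi - 1.0
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - s * s) / (2.0 * g)

def sample_free_path(mu_t, rng=random):
    """Sample a photon free-path length for total attenuation coefficient
    mu_t (absorption + scattering), in units of 1/mu_t."""
    return -math.log(1.0 - rng.random()) / mu_t
```

For skin-like media the anisotropy factor is strongly forward-peaked (g roughly 0.8-0.9), and the mean of the sampled cosines equals g, which gives a quick sanity check on the sampler.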
A Methodology for the Hybridization Based in Active Components: The Case of cGA and Scatter Search.
Villagra, Andrea; Alba, Enrique; Leguizamón, Guillermo
2016-01-01
This work presents the results of a new methodology for hybridizing metaheuristics. By first locating the active components (parts) of one algorithm and then inserting them into a second one, we can build efficient and accurate optimization, search, and learning algorithms. This gives a concrete way of constructing new techniques that contrasts with the widespread ad hoc way of hybridizing. In this paper, the enhanced algorithm is a Cellular Genetic Algorithm (cGA), which has been successfully used in the past to find solutions to hard optimization problems. In order to extend and corroborate the use of active components as an emerging hybridization methodology, we propose here the use of active components taken from Scatter Search (SS) to improve a cGA. The results obtained over a varied set of benchmarks are highly satisfactory in efficacy and efficiency when compared with a standard cGA. Moreover, the proposed hybrid approach (i.e., cGA+SS) has shown encouraging results with regard to earlier applications of our methodology.
NASA Astrophysics Data System (ADS)
Rodebaugh, Raymond Francis, Jr.
2000-11-01
In this project we applied modifications of the Fermi-Eyges multiple scattering theory to attempt to achieve the goals of a fast, accurate electron dose calculation algorithm. The dose was first calculated for an "average configuration" based on the patient's anatomy using a modification of the Hogstrom algorithm. It was split into a measured central axis depth dose component based on the material between the source and the dose calculation point, and an off-axis component based on the physics of multiple coulomb scattering for the average configuration. The former provided the general depth dose characteristics along the beam fan lines, while the latter provided the effects of collimation. The Gaussian localized heterogeneities theory of Jette provided the lateral redistribution of the electron fluence by heterogeneities. Here we terminated Jette's infinite series of fluence redistribution terms after the second term. Experimental comparison data were collected for 1 cm thick x 1 cm diameter air and aluminum pillboxes using the Varian 2100C linear accelerator at Rush-Presbyterian-St. Luke's Medical Center. For an air pillbox, the algorithm results were in reasonable agreement with measured data at both 9 and 20 MeV. For the aluminum pillbox, there were significant discrepancies between the results of this algorithm and experiment. This was particularly apparent for the 9 MeV beam. Of course a 1 cm thick aluminum heterogeneity is unlikely to be encountered in a clinical situation; the thickness, linear stopping power, and linear scattering power of aluminum are all well above what would normally be encountered. We found that the algorithm is highly sensitive to the choice of the average configuration. This is an indication that the series of fluence redistribution terms does not converge fast enough to terminate after the second term.
It also makes it difficult to apply the algorithm to cases where there are no a priori means of choosing the best average configuration or where there is a complex geometry containing both lowly and highly scattering heterogeneities. There is some hope of decreasing the sensitivity to the average configuration by including portions of the next term of the localized heterogeneities series.
Jung, Youngkyoo; Samsonov, Alexey A; Bydder, Mark; Block, Walter F
2011-04-01
To remove phase inconsistencies between multiple echoes, an algorithm using a radial acquisition to provide inherent phase and magnitude information for self correction was developed. The information also allows simultaneous support for parallel imaging for multiple coil acquisitions. Without a separate field map acquisition, a phase estimate from each echo in multiple echo train was generated. When using a multiple channel coil, magnitude and phase estimates from each echo provide in vivo coil sensitivities. An algorithm based on the conjugate gradient method uses these estimates to simultaneously remove phase inconsistencies between echoes, and in the case of multiple coil acquisition, simultaneously provides parallel imaging benefits. The algorithm is demonstrated on single channel, multiple channel, and undersampled data. Substantial image quality improvements were demonstrated. Signal dropouts were completely removed and undersampling artifacts were well suppressed. The suggested algorithm is able to remove phase cancellation and undersampling artifacts simultaneously and to improve image quality of multiecho radial imaging, the important technique for fast three-dimensional MRI data acquisition. Copyright © 2011 Wiley-Liss, Inc.
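The conjugate gradient core of such a reconstruction can be sketched generically; here A is just a dense symmetric positive definite matrix standing in for the normal-equations operator that would be built from the coil sensitivity and phase estimates:

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-8, max_iter=200):
    """Minimal conjugate-gradient solver for A x = b, with A symmetric
    positive definite.

    In a reconstruction like the one described above, A would be applied
    implicitly (encoding with phase maps and coil sensitivities followed
    by its adjoint) rather than stored as a dense matrix.
    """
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # conjugate direction update
        rs = rs_new
    return x
```

Only matrix-vector products with A are required, which is why CG suits large iterative MRI reconstructions where A is never formed explicitly.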
Evaluation of simulation-based scatter correction for 3-D PET cardiac imaging
NASA Astrophysics Data System (ADS)
Watson, C. C.; Newport, D.; Casey, M. E.; deKemp, R. A.; Beanlands, R. S.; Schmand, M.
1997-02-01
Quantitative imaging of the human thorax poses one of the most difficult challenges for three-dimensional (3-D) (septaless) positron emission tomography (PET), due to the strong attenuation of the annihilation radiation and the large contribution of scattered photons to the data. In [/sup 18/F] fluorodeoxyglucose (FDG) studies of the heart with the patient's arms in the field of view, the contribution of scattered events can exceed 50% of the total detected coincidences. Accurate correction for this scatter component is necessary for meaningful quantitative image analysis and tracer kinetic modeling. For this reason, the authors have implemented a single-scatter simulation technique for scatter correction in positron volume imaging. Here, they describe this algorithm and present scatter correction results from human and chest phantom studies.
Retrieving the Height of Smoke and Dust Aerosols by Synergistic Use of Multiple Satellite Sensors
NASA Technical Reports Server (NTRS)
Lee, Jaehwa; Hsu, N. Christina; Bettenhausen, Corey; Sayer, Andrew M.; Seftor, Colin J.; Jeong, Myeong-Jae
2016-01-01
The Aerosol Single scattering albedo and Height Estimation (ASHE) algorithm was first introduced in Jeong and Hsu (2008) to provide aerosol layer height and single scattering albedo (SSA) for biomass burning smoke aerosols. By using multiple satellite sensors synergistically, ASHE can provide the height information over much broader areas than lidar observations alone. The complete ASHE algorithm uses aerosol data from MODIS or VIIRS, OMI or OMPS, and CALIOP. A simplified algorithm also exists that does not require CALIOP data as long as the SSA of the aerosol layer is provided by another source. Several updates have recently been made: inclusion of dust layers in the retrieval process, better determination of the input aerosol layer height from CALIOP, improvement in aerosol optical depth (AOD) for nonspherical dust, development of quality assurance (QA) procedure, etc.
Scanning wind-vector scatterometers with two pencil beams
NASA Technical Reports Server (NTRS)
Kirimoto, T.; Moore, R. K.
1984-01-01
A scanning pencil-beam scatterometer for ocean wind-vector determination has potential advantages over the fan-beam systems used and proposed heretofore. The pencil beam permits use of lower transmitter power, and at the same time allows concurrent use of the reflector by a radiometer to correct for atmospheric attenuation and by other radiometers for other purposes. The use of dual beams based on the same scanning reflector permits four looks at each cell on the surface, thereby improving accuracy and allowing alias removal. Simulation results for a spaceborne dual-beam scanning scatterometer with a 1-watt radiated power at an orbital altitude of 900 km are described. Two novel algorithms for removing the aliases in the wind vector are described, in addition to an adaptation of the conventional maximum likelihood algorithm. The new algorithms are more effective at alias removal than the conventional one. Measurement errors for the wind speed, assuming perfect alias removal, were found to be less than 10%.
Robust statistical reconstruction for charged particle tomography
Schultz, Larry Joe; Klimenko, Alexei Vasilievich; Fraser, Andrew Mcleod; Morris, Christopher; Orum, John Christopher; Borozdin, Konstantin N; Sossong, Michael James; Hengartner, Nicolas W
2013-10-08
Systems and methods for charged particle detection, including statistical reconstruction of object volume scattering density profiles from charged particle tomographic data: the probability distribution of charged particle scattering is determined using a statistical multiple scattering model, and a substantially maximum likelihood estimate of the object volume scattering density is obtained using an expectation maximization (ML/EM) algorithm to reconstruct the object volume scattering density. The presence and/or type of object occupying the volume of interest can be identified from the reconstructed volume scattering density profile. The charged particle tomographic data can be cosmic ray muon tomographic data from a muon tracker for scanning packages, containers, vehicles or cargo. The method can be implemented using a computer program which is executable on a computer.
Filtering algorithm for dotted interferences
NASA Astrophysics Data System (ADS)
Osterloh, K.; Bücherl, T.; Lierse von Gostomski, Ch.; Zscherpel, U.; Ewert, U.; Bock, S.
2011-09-01
An algorithm has been developed to reliably remove dotted interferences that impair the perceptibility of objects within a radiographic image. This is a particular challenge for neutron radiographs collected at the NECTAR facility, Forschungs-Neutronenquelle Heinz Maier-Leibnitz (FRM II): the resulting images are dominated by features resembling a snow flurry. These artefacts are caused by scattered neutrons, gamma radiation, cosmic radiation, etc., all hitting the detector CCD directly in spite of sophisticated shielding. This makes such images rather useless for further direct evaluations. One approach to this problem of random effects would be to collect a vast number of single images, to combine them appropriately and to process them with common image filtering procedures. However, it has been shown that, e.g., median filtering, depending on the kernel size in the plane and/or the number of single shots to be combined, is either insufficient or tends to blur sharply lined structures. This inevitably makes visually controlled processing, image by image, unavoidable. Particularly in tomographic studies, it would be by far too tedious to treat each single projection in this way. Alternatively, it would be not only more comfortable but in many cases the only reasonable approach to filter a stack of images in a batch procedure to get rid of the disturbing interferences. The algorithm presented here meets all these requirements. It reliably frees the images from the snowy pattern described above without the loss of fine structures and without a general blurring of the image. It is an iterative, parameter-free filtering algorithm, suitable for batch procedures, that aims to eliminate the often complex interfering artefacts while leaving the original information as untouched as possible.
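A selective, iterative despeckling step of the kind described — modifying only pixels that deviate strongly from their local median, so fine structures survive — can be sketched as follows. This is a generic sketch with an assumed 3×3 neighbourhood and MAD-based threshold, not the NECTAR batch algorithm itself:

```python
import numpy as np

def remove_dotted_interference(img, k=5.0, max_iter=10):
    """Iteratively replace isolated outlier pixels ("snow") by the median
    of their 3x3 neighbourhood, leaving all other pixels untouched.

    Unlike plain median filtering, only pixels deviating from their local
    median by more than k robust standard deviations are modified, so
    genuine fine structures are preserved.
    """
    out = img.astype(float).copy()
    for _ in range(max_iter):
        padded = np.pad(out, 1, mode='edge')
        # stack the 9 shifted copies and take a per-pixel 3x3 median
        stack = np.stack([padded[dy:dy + out.shape[0], dx:dx + out.shape[1]]
                          for dy in range(3) for dx in range(3)])
        local_med = np.median(stack, axis=0)
        resid = out - local_med
        sigma = 1.4826 * np.median(np.abs(resid)) + 1e-12  # robust MAD scale
        outliers = np.abs(resid) > k * sigma
        if not outliers.any():       # converged: no outliers left
            break
        out[outliers] = local_med[outliers]
    return out
```

Because the filter is parameter-free in spirit (the threshold adapts to the image's own residual statistics), it can be run unsupervised over a whole stack of tomographic projections.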
NASA Astrophysics Data System (ADS)
Dimitri, Lindsay A.; Longland, William S.; Vander Wall, Stephen B.
2017-11-01
Seed dispersal in Juniperus is generally attributed to frugivores that consume the berry-like female cones. Some juniper cones are fleshy and resinous such as those of western juniper (Juniperus occidentalis), while others are dry and leathery such as those of Utah juniper (J. osteosperma). Rodents have been recorded harvesting Juniperus seeds and cones but are mostly considered seed predators. Our study sought to determine if rodents play a role in dispersal of western and Utah juniper seeds. We documented rodent harvest of cones and seeds of the locally-occurring juniper species and the alternate (non-local) juniper species in removal experiments at a western juniper site in northeastern California and a Utah juniper site in western Nevada. Characteristics of western and Utah juniper cones appeared to influence removal, as cones from the local juniper species were preferred at both sites. Conversely, removal of local and non-local seeds was similar. Piñon mice (Peromyscus truei) were responsible for most removal of cones and seeds at both sites. We used radioactively labeled seeds to follow seed fate and found many of these seeds in scattered caches (western juniper: 415 seeds in 82 caches, 63.0% of seeds found; Utah juniper: 458 seeds in 127 caches, 39.5% of seeds found) most of which were attributed to piñon mice. We found little evidence of frugivores dispersing Utah juniper seeds, thus scatter-hoarding rodents appear to be the main dispersal agents. Western juniper cones were eaten by frugivores, and scatter-hoarding is a complementary or secondary form of seed dispersal. Our results support the notion that Utah juniper has adapted to xeric environments by conserving water through the loss of fleshy fruits that attract frugivores and instead relies on scatter-hoarding rodents as effective dispersal agents.
NASA Astrophysics Data System (ADS)
Wiskin, James; Klock, John; Iuanow, Elaine; Borup, Dave T.; Terry, Robin; Malik, Bilal H.; Lenox, Mark
2017-03-01
There has been a great deal of research into ultrasound tomography for breast imaging over the past 35 years. Few successful attempts have been made to reconstruct high-resolution images using transmission ultrasound. To this end, advances have been made in 2D and 3D algorithms that utilize either time of arrival or full wave data to reconstruct images with high spatial and contrast resolution suitable for clinical interpretation. The highest resolution and quantitative accuracy result from inverse scattering applied to full wave data in 3D. However, this has been prohibitively computationally expensive, meaning that full inverse scattering ultrasound tomography has not been considered clinically viable. Here we show the results of applying a nonlinear inverse scattering algorithm to 3D data in a clinically useful time frame. This method yields Quantitative Transmission (QT) ultrasound images with high spatial and contrast resolution. We reconstruct sound speeds for various 2D and 3D phantoms and verify these values with independent measurements. The data are fully 3D as is the reconstruction algorithm, with no 2D approximations. We show that 2D reconstruction algorithms can introduce artifacts into the QT breast image which are avoided by using a full 3D algorithm and data. We show high resolution gross and microscopic anatomic correlations comparing cadaveric breast QT images with MRI to establish imaging capability and accuracy. Finally, we show reconstructions of data from volunteers, as well as an objective visual grading analysis to confirm clinical imaging capability and accuracy.
A general rough-surface inversion algorithm: Theory and application to SAR data
NASA Technical Reports Server (NTRS)
Moghaddam, M.
1993-01-01
Rough-surface inversion has significant applications in the interpretation of SAR data obtained over bare soil surfaces and agricultural lands. Due to the sparsity of data and the large pixel size in SAR applications, it is not feasible to carry out inversions based on numerical scattering models. The alternative is to use parameter estimation techniques based on approximate analytical or empirical models. Hence, there are two issues to be addressed: what model to choose and what estimation algorithm to apply. Here, a small perturbation model (SPM) is used to express the backscattering coefficients of the rough surface in terms of three surface parameters. The algorithm used to estimate these parameters is based on a nonlinear least-squares criterion. Least-squares optimization methods are widely used in estimation theory, but the distinguishing factor for SAR applications is incorporating the stochastic nature of both the unknown parameters and the data into the formulation, which is discussed in detail. The algorithm is tested with synthetic data, and several Newton-type least-squares minimization methods are compared in terms of their convergence characteristics. Finally, the algorithm is applied to multifrequency polarimetric SAR data obtained over bare soil and agricultural fields; results are shown and compared to ground-truth measurements from these areas. The strength of this general approach to inversion of SAR data is that it can be easily modified for use with any scattering model without changing any of the inversion steps. For the same reason, it is not limited to inversion of rough surfaces and can be applied to any parameterized scattering process.
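A minimal damped Gauss-Newton loop of the Newton-type family compared in the paper can be sketched as follows; the exponential toy model merely stands in for the SPM backscatter model, which is not reproduced here:

```python
import numpy as np

def gauss_newton(residual, p0, n_iter=50, damping=1e-3):
    """Damped Gauss-Newton least squares with a finite-difference
    Jacobian: iterate p <- p - (J'J + lambda I)^-1 J' r(p)."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = residual(p)
        J = np.empty((r.size, p.size))
        for k in range(p.size):          # numerical Jacobian, column k
            dp = np.zeros_like(p)
            dp[k] = 1e-6 * max(1.0, abs(p[k]))
            J[:, k] = (residual(p + dp) - r) / dp[k]
        A = J.T @ J + damping * np.eye(p.size)   # Levenberg-style damping
        p = p - np.linalg.solve(A, J.T @ r)
    return p

# toy stand-in for a backscatter model: sigma0(x) = a * exp(-b * x)
x = np.linspace(0.0, 2.0, 20)
true_p = np.array([1.5, 0.8])
data = true_p[0] * np.exp(-true_p[1] * x)

def res(p):
    return p[0] * np.exp(-p[1] * x) - data

p_hat = gauss_newton(res, [1.0, 0.1])
```

With noisy data and stochastic priors, as in the paper, the normal equations would additionally carry covariance weights, but the iteration skeleton is the same.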
All-Dielectric Multilayer Cylindrical Structures for Invisibility Cloaking
Mirzaei, Ali; Miroshnichenko, Andrey E.; Shadrivov, Ilya V.; Kivshar, Yuri S.
2015-01-01
We study the optical response of all-dielectric multilayer structures and demonstrate that their total scattering can be suppressed, leading to optimal invisibility cloaking. We use experimental material data and a genetic algorithm to reduce the total scattering by adjusting the material and thickness of the various layers for several types of dielectric cores at telecommunication wavelengths. Our approach demonstrates an 80-fold suppression of the total scattering cross-section using just a few dielectric layers. PMID:25858295
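The layer-optimization loop can be sketched as a simple real-coded genetic algorithm; the quadratic `total_scattering` objective below is a toy stand-in for the multilayer Mie scattering solver, and all names, bounds, and GA settings are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def total_scattering(thicknesses):
    """Toy objective with a known minimum, standing in for the actual
    total scattering cross-section computed from a multilayer solver."""
    target = np.array([60.0, 85.0, 40.0])   # hypothetical optimum (nm)
    return float(np.sum((thicknesses - target) ** 2))

def genetic_minimize(obj, n_layers=3, lo=10.0, hi=120.0,
                     pop=40, gens=120, mut=0.2):
    P = rng.uniform(lo, hi, (pop, n_layers))        # random initial designs
    for _ in range(gens):
        fit = np.array([obj(p) for p in P])
        elite = P[np.argsort(fit)[: pop // 2]]      # keep the better half
        kids = []
        for _ in range(pop - elite.shape[0]):
            a, b = elite[rng.integers(0, elite.shape[0], 2)]
            cut = rng.integers(1, n_layers)         # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child += rng.normal(0, mut * (hi - lo) * 0.05, n_layers)
            kids.append(np.clip(child, lo, hi))     # Gaussian mutation
        P = np.vstack([elite] + kids)
    fit = np.array([obj(p) for p in P])
    return P[np.argmin(fit)]

best = genetic_minimize(total_scattering)
```

Swapping the toy objective for a real scattering calculation changes nothing in the GA machinery, which is the point of using a derivative-free optimizer here.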
Full-wave Characterization of Rough Terrain Surface Effects for Forward-looking Radar Applications: A Scattering and Imaging Study
2011-09-01
First, the parallelized 3-D FDTD algorithm is applied to simulate composite scattering from targets in a rough ground ... solver as pertinent to forward-looking radar sensing, the effects of surface clutter on multistatic target imaging are illustrated with large-scale ...
A measurement of multi-jet rates in deep-inelastic scattering at HERA
NASA Astrophysics Data System (ADS)
Abt, I.; Ahmed, T.; Andreev, V.; Andrieu, B.; Appuhn, R.-D.; Arpagaus, M.; Babaev, A.; Bärwolff, H.; Bán, J.; Baranov, P.; Barrelet, E.; Bartel, W.; Bassler, U.; Beck, H. P.; Behrend, H.-J.; Belousov, A.; Berger, Ch.; Bergstein, H.; Bernardi, G.; Bernet, R.; Bertrand-Coremans, G.; Besançon, M.; Biddulph, P.; Binder, E.; Bischoff, A.; Bizot, J. C.; Blobel, V.; Borras, K.; Bosetti, P. C.; Boudry, V.; Bourdarios, C.; Brasse, F.; Braun, U.; Braunschweig, W.; Brisson, V.; Bruncko, D.; Büngener, L.; Bürger, J.; Büsser, F. W.; Buniatian, A.; Burke, S.; Buschhorn, G.; Campbell, A. J.; Carli, T.; Charles, F.; Clarke, D.; Clegg, A. B.; Colombo, M.; Coughlan, J. A.; Courau, A.; Coutures, Ch.; Cozzika, G.; Criegee, L.; Cvach, J.; Dagoret, S.; Dainton, J. B.; Danilov, M.; Dann, A. W. E.; Dau, W. D.; David, M.; Deffur, E.; Delcourt, B.; Del Buono, L.; Devel, M.; de Roeck, A.; Dingus, P.; Dollfus, C.; Dowell, J. D.; Dreis, H. B.; Drescher, A.; Duboc, J.; Düllmann, D.; Dünger, O.; Duhm, H.; Ebbinghaus, R.; Eberle, M.; Ebert, J.; Ebert, T. R.; Eckerlin, G.; Efremenko, V.; Egli, S.; Eichenberger, S.; Eichler, R.; Eisele, F.; Eisenhandler, E.; Ellis, N. N.; Ellison, R. J.; Elsen, E.; Erdmann, M.; Evrard, E.; Favart, L.; Fedotov, A.; Feeken, D.; Felst, R.; Feltesse, J.; Fensome, I. F.; Ferencei, J.; Ferrarotto, F.; Flamm, K.; Flauger, W.; Fleischer, M.; Flieser, M.; Flügge, G.; Fomenko, A.; Fominykh, B.; Forbush, M.; Formánek, J.; Foster, J. M.; Franke, G.; Fretwurst, E.; Fuhrmann, P.; Gabathuler, E.; Gamerdinger, K.; Garvey, J.; Gayler, J.; Gellrich, A.; Gennis, M.; Genzel, H.; Gerhards, R.; Godfrey, L.; Goerlach, U.; Goerlich, L.; Gogitidze, N.; Goldberg, M.; Goodall, A. M.; Gorelov, I.; Goritchev, P.; Grab, C.; Grässler, H.; Grässler, R.; Greenshaw, T.; Greif, H.; Grindhammer, G.; Gruber, C.; Haack, J.; Haidt, D.; Hajduk, L.; Hamon, O.; Handschuh, D.; Hanlon, E. M.; Hapke, M.; Harjes, J.; Haydar, R.; Haynes, W. J.; Heatherington, J.; Hedberg, V.; Heinzelmann, G.; Henderson, R. C. 
W.; Henschel, H.; Herma, R.; Herynek, I.; Hildesheim, W.; Hill, P.; Hilton, C. D.; Hladký, J.; Hoeger, K. C.; Huet, Ph.; Hufnagel, H.; Huot, N.; Ibbotson, M.; Itterbeck, H.; Jabiol, M.-A.; Jacholkowska, A.; Jacobsson, C.; Jaffre, M.; Jansen, T.; Jönsson, L.; Johannsen, K.; Johnson, D. P.; Johnson, L.; Jung, H.; Kalmus, P. I. P.; Kasarian, S.; Kaschowitz, R.; Kasselmann, P.; Kathage, U.; Kaufmann, H. H.; Kenyon, I. R.; Kermiche, S.; Keuker, C.; Kiesling, C.; Klein, M.; Kleinwort, C.; Knies, G.; Ko, W.; Köhler, T.; Kolanoski, H.; Kole, F.; Kolya, S. D.; Korbel, V.; Korn, M.; Kostka, P.; Kotelnikov, S. K.; Krasny, M. W.; Krehbiel, H.; Krücker, D.; Krüger, U.; Kubenka, J. P.; Küster, H.; Kuhlen, M.; Kurča, T.; Kurzhöfer, J.; Kuznik, B.; Lacour, D.; Lamarche, F.; Lander, R.; Landon, M. P. J.; Lange, W.; Langkau, R.; Lanius, P.; Laporte, J. F.; Lebedev, A.; Leuschner, A.; Leverenz, C.; Levonian, S.; Lewin, D.; Ley, Ch.; Lindner, A.; Lindström, G.; Linsel, F.; Lipinski, J.; Loch, P.; Lohmander, H.; Lopez, G. C.; Lüers, D.; Magnussen, N.; Malinovski, E.; Mani, S.; Marage, P.; Marks, J.; Marshall, R.; Martens, J.; Martin, R.; Martyn, H.-U.; Martyniak, J.; Masson, S.; Mavroidis, A.; Maxfield, S. J.; McMahon, S. J.; Mehta, A.; Meier, K.; Mercer, D.; Merz, T.; Meyer, C. A.; Meyer, H.; Meyer, J.; Mikocki, S.; Milone, V.; Monnier, E.; Moreau, F.; Moreels, J.; Morris, J. V.; Müller, K.; Murín, P.; Murray, S. A.; Nagovizin, V.; Naroska, B.; Naumann, Th.; Newman, P. R.; Newton, D.; Neyret, D.; Nguyen, H. K.; Niebergall, F.; Niebuhr, C.; Nisius, R.; Nowak, G.; Noyes, G. W.; Nyberg, M.; Oberlack, H.; Obrock, U.; Olsson, J. E.; Orenstein, S.; Ould-Saada, F.; Pascaud, C.; Patel, G. D.; Peppel, E.; Peters, S.; Phillips, H. T.; Phillips, J. P.; Pichler, Ch.; Pilgram, W.; Pitzl, D.; Prell, S.; Prosi, R.; Rädel, G.; Raupach, F.; Rauschnabel, K.; Reimer, P.; Reinshagen, S.; Ribarics, P.; Riech, V.; Riedlberger, J.; Riess, S.; Rietz, M.; Robertson, S. 
M.; Robmann, P.; Roosen, R.; Rostovtsev, A.; Royon, C.; Rudowicz, M.; Ruffer, M.; Rusakov, S.; Rybicki, K.; Sahlmann, N.; Sanchez, E.; Sankey, D. P. C.; Savitsky, M.; Schacht, P.; Schleper, P.; von Schlippe, W.; Schmidt, C.; Schmidt, D.; Schmitz, W.; Schöning, A.; Schröder, V.; Schulz, M.; Schwab, B.; Schwind, A.; Scobel, W.; Seehausen, U.; Sell, R.; Semenov, A.; Shekelyan, V.; Sheviakov, I.; Shooshtari, H.; Shtarkov, L. N.; Siegmon, G.; Siewert, U.; Sirois, Y.; Skillicorn, I. O.; Smirnov, P.; Smith, J. R.; Smolik, L.; Soloviev, Y.; Spitzer, H.; Staroba, P.; Steenbock, M.; Steffen, P.; Steinberg, R.; Stella, B.; Stephens, K.; Stier, J.; Stösslein, U.; Strachota, J.; Straumann, U.; Struczinski, W.; Sutton, J. P.; Taylor, R. E.; Tchernyshov, V.; Thiebaux, C.; Thompson, G.; Tichomirov, I.; Truöl, P.; Turnau, J.; Tutas, J.; Urban, L.; Usik, A.; Valkar, S.; Valkarova, A.; Vallée, C.; van Esch, P.; Vartapetian, A.; Vazdik, Y.; Vecko, M.; Verrecchia, P.; Vick, R.; Villet, G.; Vogel, E.; Wacker, K.; Walker, I. W.; Walther, A.; Weber, G.; Wegener, D.; Wegner, A.; Wellisch, H. P.; West, L. R.; Willard, S.; Winde, M.; Winter, G.-G.; Wolff, Th.; Womersley, L. A.; Wright, A. E.; Wulff, N.; Yiou, T. P.; Žáček, J.; Závada, P.; Zeitnitz, C.; Ziaeepour, H.; Zimmer, M.; Zimmermann, W.; Zomer, F.
1994-03-01
Multi-jet production is observed in deep-inelastic electron-proton scattering with the H1 detector at HERA. Jet rates for momentum transfers squared up to 500 GeV² are determined using the JADE jet clustering algorithm. They are found to be in agreement with predictions from QCD-based models.
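The JADE measure and merge step can be sketched as follows (E-scheme recombination is assumed here; the details of the H1 analysis are not reproduced):

```python
import numpy as np

def jade_cluster(p4s, ycut=0.04):
    """Minimal JADE jet clustering: repeatedly merge the pair with the
    smallest y_ij = 2 E_i E_j (1 - cos theta_ij) / E_vis^2 until every
    remaining pair exceeds ycut.  p4s: iterable of (E, px, py, pz)."""
    jets = [np.asarray(p, dtype=float) for p in p4s]
    evis2 = sum(p[0] for p in jets) ** 2
    while len(jets) > 1:
        best, pair = None, None
        for i in range(len(jets)):
            for j in range(i + 1, len(jets)):
                Ei, Ej = jets[i][0], jets[j][0]
                pi, pj = jets[i][1:], jets[j][1:]
                cos = pi @ pj / (np.linalg.norm(pi) * np.linalg.norm(pj))
                y = 2 * Ei * Ej * (1 - cos) / evis2
                if best is None or y < best:
                    best, pair = y, (i, j)
        if best > ycut:
            break                      # all pairs resolved: done
        i, j = pair
        jets[i] = jets[i] + jets[j]    # E-scheme: add four-momenta
        del jets[j]
    return jets
```

Lowering `ycut` resolves more jets, which is how the jet rates as a function of the resolution parameter are built up.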
Modeling light scattering in the shadow region behind thin cylinders for diameter analysis
NASA Astrophysics Data System (ADS)
Blohm, Werner
2018-03-01
In this paper, the scattered light intensities in the shadow region at an observation plane behind monochromatically illuminated circular cylinders are modeled by sinusoidal sequences having a squared dependence on spatial position in the observation plane. Whereas two sinusoidal components appear sufficient for modeling the light distribution behind opaque cylinders, at least three sinusoidal components are necessary for transparent cylinders. Based on this model, a novel evaluation algorithm for very fast retrieval of the diameter of thin cylindrical products such as metallic wires and transparent fibers is presented. The algorithm was tested in a cylinder diameter range typical of these products (d ≈ 70 … 150 μm; n ≈ 1.5). Numerical examples using both synthetic and experimental scattering data illustrate its application. Diameter accuracies below 0.05 μm were achieved for opaque cylinders in the tested diameter range. However, scattering effects due to morphology-dependent resonances (MDRs) remain problematic in the diameter analysis of transparent products; further investigations are needed to incorporate these effects into the model.
Electromagnetic scattering of large structures in layered earths using integral equations
NASA Astrophysics Data System (ADS)
Xiong, Zonghou; Tripp, Alan C.
1995-07-01
An electromagnetic scattering algorithm for large conductivity structures in stratified media has been developed, based on the method of system iteration and spatial symmetry reduction using volume electric integral equations. The method of system iteration divides a structure into many substructures and solves the resulting matrix equation with a block iterative method. The block submatrices usually need to be stored on disk to save computer core memory, but this requires a large disk for large structures. If the body is discretized into equal-size cells, it is possible to use the spatial symmetry relations of the Green's functions to regenerate the scattering impedance matrix in each iteration, thus avoiding expensive disk storage. Numerical tests show that the system iteration converges much faster than the conventional point-wise Gauss-Seidel iterative method, and the number of cells does not significantly affect the rate of convergence. The algorithm thus effectively reduces the solution of the scattering problem to order O(N²) instead of the O(N³) of direct solvers.
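The system-iteration idea, block-wise Gauss-Seidel sweeps with exact solves of the diagonal sub-blocks, can be sketched in dense toy form (the paper's Green's-function symmetry trick for regenerating off-diagonal blocks is omitted):

```python
import numpy as np

def block_gauss_seidel(A_blocks, b_blocks, n_iter=200):
    """Block-wise Gauss-Seidel sweep: each substructure's diagonal
    sub-block is solved exactly while coupling to the other
    substructures uses the latest available estimates."""
    n = len(b_blocks)
    x = [np.zeros_like(np.asarray(b, dtype=float)) for b in b_blocks]
    for _ in range(n_iter):
        for i in range(n):
            r = np.asarray(b_blocks[i], dtype=float).copy()
            for j in range(n):
                if j != i:
                    r -= A_blocks[i][j] @ x[j]   # subtract coupling terms
            x[i] = np.linalg.solve(A_blocks[i][i], r)
    return x
```

For diagonally dominant systems, the kind volume integral equations tend to produce for well-separated substructures, these sweeps converge quickly, matching the paper's observation that the block scheme beats point-wise Gauss-Seidel.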
NASA Astrophysics Data System (ADS)
Yang, Jiamiao; Shen, Yuecheng; Liu, Yan; Hemphill, Ashton S.; Wang, Lihong V.
2017-11-01
Optical scattering prevents light from being focused through thick biological tissue at depths greater than ˜1 mm. To break this optical diffusion limit, digital optical phase conjugation (DOPC) based wavefront shaping techniques are being actively developed. Previous DOPC systems employed spatial light modulators that modulated either the phase or the amplitude of the conjugate light field. Here, we achieve optical focusing through scattering media using polarization-modulation-based generalized DOPC. First, we describe an algorithm to extract the polarization map from the measured scattered field. Then, we validate the algorithm through numerical simulations and find that the focusing contrast achieved by polarization modulation is similar to that achieved by phase modulation. Finally, we build a system using an inexpensive twisted nematic liquid crystal based spatial light modulator (SLM) and experimentally demonstrate light focusing through 3-mm-thick chicken breast tissue. Since polarization-modulation-based SLMs are widely used in displays and offer ever higher pixel counts with the prevalence of 4K displays, they are inexpensive and valuable devices for wavefront shaping.
Optimizing coherent anti-Stokes Raman scattering by genetic algorithm controlled pulse shaping
NASA Astrophysics Data System (ADS)
Yang, Wenlong; Sokolov, Alexei
2010-10-01
Hybrid coherent anti-Stokes Raman scattering (CARS) has been successfully applied to fast, chemically sensitive detection. With the development of femtosecond pulse-shaping techniques, it is of great interest to find the optimum pulse shapes for CARS: those that minimize the non-resonant four-wave mixing (NRFWM) background and maximize the CARS signal. A genetic algorithm (GA) is developed to perform a heuristic search for optimized pulse shapes giving the best signal-to-background ratio. The GA is shown to rediscover the hybrid CARS scheme on its own and to find optimized pulse shapes for customized applications.
Inversion of particle-size distribution from angular light-scattering data with genetic algorithms.
Ye, M; Wang, S; Lu, Y; Hu, T; Zhu, Z; Xu, Y
1999-04-20
A stochastic inverse technique based on a genetic algorithm (GA) to invert particle-size distribution from angular light-scattering data is developed. This inverse technique is independent of any given a priori information of particle-size distribution. Numerical tests show that this technique can be successfully applied to inverse problems with high stability in the presence of random noise and low susceptibility to the shape of distributions. It has also been shown that the GA-based inverse technique is more efficient in use of computing time than the inverse Monte Carlo method recently developed by Ligon et al. [Appl. Opt. 35, 4297 (1996)].
NASA Astrophysics Data System (ADS)
McCracken, Katherine E.; Angus, Scott V.; Reynolds, Kelly A.; Yoon, Jeong-Yeol
2016-06-01
Smartphone image-based sensing of microfluidic paper analytical devices (μPADs) offers low-cost and mobile evaluation of water quality. However, consistent quantification is a challenge due to variable environmental, paper, and lighting conditions, especially across large multi-target μPADs. Compensations must be made for variations between images to achieve reproducible results without a separate lighting enclosure. We thus developed a simple method using triple-reference point normalization and a fast-Fourier transform (FFT)-based pre-processing scheme to quantify consistent reflected light intensity signals under variable lighting and channel conditions. This technique was evaluated using various light sources, lighting angles, imaging backgrounds, and imaging heights. Further testing evaluated its handling of absorbance, quenching, and relative scattering intensity measurements from assays detecting four water contaminants - Cr(VI), total chlorine, caffeine, and E. coli K12 - at similar wavelengths using the green channel of RGB images. Between assays, this algorithm reduced error from μPAD surface inconsistencies and cross-image lighting gradients. Although the algorithm could not completely remove the anomalies arising from point shadows within channels or some non-uniform background reflections, it still afforded order-of-magnitude quantification and stable assay specificity under these conditions, offering one route toward improving smartphone quantification of μPAD assays for in-field water quality monitoring.
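The two pre-processing ingredients can be sketched as below; `normalize_intensity` and `fft_remove_gradient` are illustrative names, and the paper's exact triple-reference scheme is not reproduced:

```python
import numpy as np

def normalize_intensity(signal, white_ref, dark_ref):
    """Reference-point normalization (sketch): map a raw channel
    intensity onto a 0-1 scale fixed by white and dark reference
    readings taken from the same image, so frame-to-frame lighting
    changes largely cancel."""
    return (signal - dark_ref) / (white_ref - dark_ref)

def fft_remove_gradient(profile, keep_from=3):
    """Crude FFT-based pre-processing: zero the lowest nonzero
    spatial-frequency bins of a 1-D intensity profile to suppress a
    smooth cross-image lighting gradient, keeping bin 0 (the mean)
    so absolute signal levels survive."""
    F = np.fft.rfft(profile)
    F[1:keep_from] = 0.0          # drop the smooth gradient components
    return np.fft.irfft(F, n=len(profile))
```

A smooth lighting gradient concentrates its energy in the lowest spatial-frequency bins, so zeroing just a couple of bins flattens it while leaving narrow assay-spot features largely intact.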
NASA Astrophysics Data System (ADS)
Al-Asadi, H. A.
2013-02-01
We present a theoretical analysis of the additional nonlinear phase shift of the backward Stokes wave arising from stimulated Brillouin scattering in a system with a bi-directional pumping scheme. We optimize three parameters of the system, the numerical aperture, the optical loss, and the pumping wavelength, to minimize this additional nonlinear phase shift. The optimization is performed for various Brillouin pump powers and optical reflectivity values using a modern global evolutionary computation algorithm, particle swarm optimization. The additional nonlinear phase shift of the backward Stokes wave is shown to vary with optical fiber length and can be minimized to less than 0.07 rad for a 5 km fiber according to the particle swarm optimization results. The bi-directional pumping configuration proves efficient, with the transmitted output advanced when the frequency detuning is negative and delayed when it is positive, and the optimum values of the three parameters achieve the desired reduction of the additional nonlinear phase shift.
Focusing light through strongly scattering media using genetic algorithm with SBR discriminant
NASA Astrophysics Data System (ADS)
Zhang, Bin; Zhang, Zhenfeng; Feng, Qi; Liu, Zhipeng; Lin, Chengyou; Ding, Yingchun
2018-02-01
In this paper, we have experimentally demonstrated light focusing through strongly scattering media by performing binary amplitude optimization with a genetic algorithm. In the experiments, we control 160 000 mirrors of a digital micromirror device to modulate and optimize the light transmission paths in the strongly scattering media. We replace the universal target-position-intensity (TPI) discriminant with a signal-to-background ratio (SBR) discriminant in the genetic algorithm. With 400 incident segments, a relative enhancement of 17.5% with a ground glass diffuser is achieved, which is higher than the theoretical value of 1/(2π) ≈ 15.9% for binary amplitude optimization. According to our repeated experiments, we conclude that, for the same segment number, the enhancement with the SBR discriminant is always higher than with the TPI discriminant, which results from the background-weakening effect of the SBR discriminant. In addition, with the SBR discriminant, the diameter of the focus can be varied from 7 to 70 μm at arbitrary positions, and multiple foci with high enhancement are obtained. Our work provides a meaningful reference for the study of binary amplitude optimization in the wavefront shaping field.
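Binary amplitude optimization with an SBR fitness can be sketched against a simulated random transmission matrix standing in for the diffuser and DMD; all sizes and GA settings below are illustrative, not the experimental values:

```python
import numpy as np

rng = np.random.default_rng(1)

# toy complex transmission matrix standing in for the scattering medium
N_IN, N_OUT = 64, 32
T = (rng.normal(size=(N_OUT, N_IN)) + 1j * rng.normal(size=(N_OUT, N_IN))) / np.sqrt(2 * N_IN)
FOCUS = 0  # output mode we want to brighten

def sbr(mask):
    """SBR discriminant: target-spot intensity over the mean
    intensity of all the other (background) output modes."""
    out = np.abs(T @ mask) ** 2
    return out[FOCUS] / out[np.arange(N_OUT) != FOCUS].mean()

def ga_binary(fitness, n=N_IN, pop=30, gens=60, pmut=0.02):
    P = rng.integers(0, 2, (pop, n)).astype(float)   # random binary masks
    for _ in range(gens):
        f = np.array([fitness(m) for m in P])
        elite = P[np.argsort(f)[::-1][: pop // 2]]   # keep fittest half
        kids = []
        for _ in range(pop - elite.shape[0]):
            a, b = elite[rng.integers(0, elite.shape[0], 2)]
            child = np.where(rng.random(n) < 0.5, a, b)  # uniform crossover
            flip = rng.random(n) < pmut                  # bit-flip mutation
            kids.append(np.abs(child - flip))
        P = np.vstack([elite] + kids)
    f = np.array([fitness(m) for m in P])
    return P[np.argmax(f)]

best = ga_binary(sbr)
```

Replacing `sbr` with a plain target-position-intensity fitness reproduces the TPI variant the paper compares against; only the discriminant changes, not the GA.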
NASA Astrophysics Data System (ADS)
Voznyuk, I.; Litman, A.; Tortel, H.
2015-08-01
A Quasi-Newton method for reconstructing the constitutive parameters of three-dimensional (3D) penetrable scatterers from scattered field measurements is presented. The method is adapted to large-scale electromagnetic problems while keeping the memory requirements and computation time as low as possible. The forward scattering problem is solved with the finite-element tearing and interconnecting full-dual-primal (FETI-FDP2) method, which is in the same spirit as domain decomposition methods for finite elements: the computational domain is split into smaller non-overlapping subdomains so that local sub-problems can be solved simultaneously. Various strategies are proposed to efficiently couple the inversion algorithm with the FETI-FDP2 method: a separation into permanent and non-permanent subdomains is performed, iterative solvers are favored for resolving the interface problem, and a marching-on-in-anything initial guess selection further accelerates the process. The computational burden is also reduced by applying the adjoint state vector methodology. Finally, the inversion algorithm is tested against measurements extracted from the 3D Fresnel database.
NASA Astrophysics Data System (ADS)
Xie, Shi-Peng; Luo, Li-Min
2012-06-01
The authors propose a combined scatter reduction and correction method to improve image quality in cone beam computed tomography (CBCT). The scatter kernel superposition (SKS) method has been used occasionally in previous studies; this work differs in that a scatter detecting blocker (SDB) is placed between the X-ray source and the tested object to model a self-adaptive scatter kernel. The study first evaluates the scatter kernel parameters using the SDB and then isolates the scatter distribution based on the SKS; image quality is improved by removing this scatter distribution. The results show that the method effectively reduces scatter artifacts, increases image contrast, and reduces the magnitude of cupping. The accuracy of the SKS technique is significantly improved by using a self-adaptive scatter kernel. The method is computationally efficient, easy to implement, and provides scatter correction from a single scan acquisition.
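The SKS step itself, in its generic textbook form rather than the paper's blocker-calibrated, self-adaptive variant, can be sketched as:

```python
import numpy as np

def convolve2d_same(img, k):
    """FFT-based 2-D convolution with zero padding, 'same' output size."""
    H, W = img.shape
    kh, kw = k.shape
    shape = (H + kh - 1, W + kw - 1)
    full = np.fft.irfft2(np.fft.rfft2(img, s=shape) * np.fft.rfft2(k, s=shape), s=shape)
    return full[kh // 2: kh // 2 + H, kw // 2: kw // 2 + W]

def sks_correct(measured, kernel, n_iter=5):
    """Generic scatter kernel superposition: iteratively estimate the
    scatter as the current primary estimate convolved with a point
    scatter kernel, then subtract it from the measurement."""
    primary = measured.copy()
    for _ in range(n_iter):
        scatter = convolve2d_same(primary, kernel)
        primary = np.clip(measured - scatter, 0.0, None)
    return primary
```

As long as the kernel's total weight is below one, the fixed-point iteration contracts, so a handful of passes suffices to recover the primary image.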
NASA Astrophysics Data System (ADS)
Stegmann, Patrick G.; Tang, Guanglin; Yang, Ping; Johnson, Benjamin T.
2018-05-01
A structural model is developed for the single-scattering properties of snow and graupel particles with a strongly heterogeneous morphology and an arbitrarily variable mass density. This effort aims to provide a mechanism to consider particle mass density variation in the microwave scattering coefficients implemented in the Community Radiative Transfer Model (CRTM). The stochastic model applies a bicontinuous random medium algorithm to a simple base shape and uses the Finite-Difference-Time-Domain (FDTD) method to compute the single-scattering properties of the resulting complex morphology.
False colors removal on the YCr-Cb color space
NASA Astrophysics Data System (ADS)
Tomaselli, Valeria; Guarnera, Mirko; Messina, Giuseppe
2009-01-01
Post-processing algorithms are usually placed in the pipeline of imaging devices to remove residual color artifacts introduced by the demosaicing step. Although demosaicing solutions aim to eliminate, limit, or correct false colors and other impairments caused by non-ideal sampling, post-processing techniques are usually more powerful in achieving this purpose. This is mainly because the input of post-processing algorithms is a fully restored RGB color image. Moreover, post-processing can be applied more than once in order to meet some quality criteria. In this paper we propose an effective technique for reducing the color artifacts generated by conventional color interpolation algorithms in the YCrCb color space. This solution efficiently removes false colors and can be executed while performing the edge emphasis process.
NASA Astrophysics Data System (ADS)
Honeyager, Ryan
High frequency microwave instruments are increasingly used to observe ice clouds and snow. These instruments are significantly more sensitive than conventional precipitation radar. This is ideal for analyzing ice-bearing clouds, for ice particles are tenuously distributed and have effective densities that are far less than liquid water. However, at shorter wavelengths, the electromagnetic response of ice particles is no longer solely dependent on particle mass. The shape of the ice particles also plays a significant role. Thus, in order to understand the observations of high frequency microwave radars and radiometers, it is essential to model the scattering properties of snowflakes correctly. Several research groups have proposed detailed models of snow aggregation. These particle models are coupled with computer codes that determine the particles' electromagnetic properties. However, there is a discrepancy between the particle model outputs and the requirements of the electromagnetic models. Snowflakes have countless variations in structure, but we also know that physically similar snowflakes scatter light in much the same manner. Structurally exact electromagnetic models, such as the discrete dipole approximation (DDA), require a high degree of structural resolution. Such methods are slow, spending considerable time processing redundant (i.e. useless) information. Conversely, when using techniques that incorporate too little structural information, the resultant radiative properties are not physically realistic. Then, we ask the question, what features are most important in determining scattering? This dissertation develops a general technique that can quickly parameterize the important structural aspects that determine the scattering of many diverse snowflake morphologies. A Voronoi bounding neighbor algorithm is first employed to decompose aggregates into well-defined interior and surface regions. 
The sensitivity of scattering to interior randomization is then examined. The loss of interior structure is found to have a negligible impact on scattering cross sections, and backscatter is lowered by approximately five percent. This establishes that detailed knowledge of interior structure is not necessary when modeling scattering behavior, and it also provides support for using an effective medium approximation to describe the interiors of snow aggregates. The Voronoi diagram-based technique enables an almost trivial determination of the effective density of this medium. A bounding neighbor algorithm is then used to establish a greatly improved approximation of scattering by equivalent spheroids, and to posit a Voronoi diagram-based definition of effective density, which is used in concert with the T-matrix method to determine single-scattering cross sections. The resulting backscatters are found to reasonably match those of the DDA over frequencies from 10.65 to 183.31 GHz and particle sizes from a few hundred micrometers to nine millimeters in length. Integrated error in backscatter versus DDA is found to be within 25% at 94 GHz. Errors in scattering cross-sections and asymmetry parameters are likewise small. The observed cross-sectional errors are much smaller than the differences observed among different particle models. This represents a significant improvement over established techniques, and it demonstrates that the radiative properties of dense aggregate snowflakes may be adequately represented by equal-mass homogeneous spheroids. The present results can be used to supplement retrieval algorithms used by CloudSat, EarthCARE, Galileo, GPM and SWACR radars. The ability to predict the full range of scattering properties is potentially also useful for other particle regimes where a compact particle approximation is applicable.
USDA-ARS?s Scientific Manuscript database
Hyperspectral scattering is a promising technique for rapid and noninvasive measurement of multiple quality attributes of apple fruit. A hierarchical evolutionary algorithm (HEA) approach, in combination with subspace decomposition and partial least squares (PLS) regression, was proposed to select o...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liang, X; Zhang, Z; Xie, Y
Purpose: X-ray scatter photons cause significant image quality degradation in cone-beam CT (CBCT). Measurement-based algorithms using a beam blocker directly acquire scatter samples and achieve significant improvement in CBCT image quality. Among existing algorithms, the single-scan, stationary beam blocker proposed previously is promising due to its simplicity and practicability. Although demonstrated effective on a tabletop system, the blocker fails to estimate the scatter distribution on a clinical CBCT system, mainly due to gantry wobble. In addition, the uniformly distributed blocker strips in our previous design cause primary data loss in the CBCT system and lead to image artifacts due to data insufficiency. Methods: We investigate the motion behavior of the beam blocker in each projection and design an optimized non-uniform blocker strip distribution that accounts for the data insufficiency issue. An accurate scatter estimation is then achieved from the wobble modeling. The blocker wobble curve is estimated in each projection using threshold-based segmentation algorithms. In the blocker design optimization, the quality of the final image is quantified by the number of primary-data-loss voxels, and the mesh adaptive direct search algorithm is applied to minimize the objective function. Scatter-corrected CT images are obtained using the optimized blocker. Results: The proposed method is evaluated using the Catphan 504 phantom and a head patient. On the Catphan 504, our approach reduces the average CT number error from 115 Hounsfield units (HU) to 11 HU in the selected regions of interest and improves the image contrast by a factor of 1.45 in the high-contrast regions. On the head patient, the CT number error is reduced from 97 HU to 6 HU in the soft tissue region, and image spatial non-uniformity is decreased from 27% to 5% after correction.
Conclusion: The proposed optimized blocker design is practical and attractive for CBCT-guided radiation therapy. This work is supported by grants from the Guangdong Innovative Research Team Program of China (Grant No. 2011S013), the National 863 Programs of China (Grant Nos. 2012AA02A604 and 2015AA043203), and the National High-tech R&D Program for Young Scientists by the Ministry of Science and Technology of China (Grant No. 2015AA020917).
Quasi-soliton scattering in quantum spin chains
NASA Astrophysics Data System (ADS)
Vlijm, R.; Ganahl, M.; Fioretto, D.; Brockmann, M.; Haque, M.; Evertz, H. G.; Caux, J.-S.
2015-12-01
The quantum scattering of magnon bound states in the anisotropic Heisenberg spin chain is shown to display features similar to the scattering of solitons in classical exactly solvable models. Localized colliding Gaussian wave packets of bound magnons are constructed from string solutions of the Bethe equations and subsequently evolved in time, relying on an algebraic Bethe ansatz based framework for the computation of local expectation values in real space-time. The local magnetization profile shows the trajectories of colliding wave packets of bound magnons, which obtain a spatial displacement upon scattering. Analytic predictions on the displacements for various values of anisotropy and string lengths are derived from scattering theory and Bethe ansatz phase shifts, matching time-evolution fits on the displacements. The time-evolved block decimation algorithm allows for the study of scattering displacements from spin-block states, showing similar scattering displacement features.
Direction Finding in the Presence of Complex Electro-Magnetic Environment.
1995-06-29
coupling adversely affects the resolution capabilities of the MUSIC algorithm. A technique utilizing the terminal impedance matrix is devised to ... performance of the MUSIC algorithm is also investigated. Interference power as little as 15 dB below the signal power from the near-field scatterer greatly ... reduces the resolution capabilities of the MUSIC algorithm. A new array configuration is devised to suppress the interference. Modification of the MUSIC
A new algorithm for ECG interference removal from single channel EMG recording.
Yazdani, Shayan; Azghani, Mahmood Reza; Sedaaghi, Mohammad Hossein
2017-09-01
This paper presents a new method to remove electrocardiogram (ECG) interference from electromyogram (EMG) recordings. This interference occurs during EMG acquisition from trunk muscles. The proposed algorithm employs the progressive image denoising (PID) algorithm and ensemble empirical mode decomposition (EEMD) to remove this type of interference. PID is a very recent method used for denoising digital images corrupted by white Gaussian noise; it detects white Gaussian noise by deterministic annealing. To the best of our knowledge, PID has never been used before for EMG and ECG separation or in other 1D signal denoising applications. We employ it based on the fact that the amplitude of the EMG signal can be modeled as white Gaussian noise passed through a filter with time-variant properties. The proposed algorithm has been compared to other well-known methods such as HPF, EEMD-ICA, Wavelet-ICA and PID. The results show that the proposed algorithm outperforms the others on the basis of the three evaluation criteria used in this paper: normalized mean square error, signal-to-noise ratio, and Pearson correlation.
An empirical model for calculation of the collimator contamination dose in therapeutic proton beams
NASA Astrophysics Data System (ADS)
Vidal, M.; De Marzi, L.; Szymanowski, H.; Guinement, L.; Nauraye, C.; Hierso, E.; Freud, N.; Ferrand, R.; François, P.; Sarrut, D.
2016-02-01
Collimators are used as lateral beam shaping devices in proton therapy with passive scattering beam lines. The dose contamination due to collimator scattering can be as high as 10% of the maximum dose and influences calculation of the output factor or monitor units (MU). To date, commercial treatment planning systems generally use a zero-thickness collimator approximation ignoring edge scattering in the aperture collimator, and few analytical models have been proposed to take scattering effects into account, mainly limited to the inner collimator face component. The aim of this study was to characterize and model aperture contamination by means of a fast and accurate analytical model. The entrance face collimator scatter distribution was modeled as a 3D secondary dose source. Predicted dose contaminations were compared to measurements and Monte Carlo simulations. Measurements were performed on two different proton beam lines (a fixed horizontal beam line and a gantry beam line) with divergent apertures and for several field sizes and energies. Discrepancies between analytical algorithm dose predictions and measurements were decreased from 10% to 2% using the proposed model. The gamma-index criterion (2%/1 mm) was satisfied for more than 90% of pixels. The proposed analytical algorithm increases the accuracy of analytical dose calculations with reasonable computation times.
NASA Astrophysics Data System (ADS)
Zoratti, Paul K.; Gilbert, R. Kent; Majewski, Ronald; Ference, Jack
1995-12-01
Development of automotive collision warning systems has progressed rapidly over the past several years. A key enabling technology for these systems is millimeter-wave radar. This paper addresses a very critical millimeter-wave radar sensing issue for automotive radar, namely the scattering characteristics of common roadway objects such as vehicles, road signs, and bridge overpass structures. The data presented in this paper were collected on ERIM's Fine Resolution Radar Imaging Rotary Platform Facility and processed with ERIM's image processing tools. The value of this approach is that it provides system developers with a 2D radar image from which information about individual point scatterers within a single target can be extracted. This information on scattering characteristics will be utilized to refine threat assessment processing algorithms and automotive radar hardware configurations. (1) By evaluating the scattering characteristics identified in the radar image, radar signatures as a function of aspect angle for common roadway objects can be established. These signatures will aid in the refinement of threat assessment processing algorithms. (2) Utilizing ERIM's image manipulation tools, total RCS and RCS as a function of range and azimuth can be extracted from the radar image data. This RCS information will be essential in defining the operational envelope (e.g., dynamic range) within which any radar sensor hardware must be designed.
Testing near-infrared spectrophotometry using a liquid neonatal head phantom
NASA Astrophysics Data System (ADS)
Wolf, Martin; Baenziger, Oskar; Keel, Matthias; Dietz, Vera; von Siebenthal, Kurt; Bucher, Hans U.
1998-12-01
We constructed a liquid phantom which mimics the neonatal head for testing near-infrared spectrophotometry instruments. It consists of a spherical, 3.5 mm thick layer of silicone rubber simulating skin and bone, and acts as a container for a liquid solution of Intralipid™, 60 μmol/l haemoglobin and yeast. The Intralipid™ concentration was varied to test the influence of scattering on the haemoglobin concentrations and tissue oxygenation determined by the Critikon 2020. The solution was oxygenated using pure oxygen and then deoxygenated by the yeast. For the instrument's algorithm, we found with increasing scattering (0.5%, 1%, 1.5% and 2% Intralipid™ concentration) an increasing offset added to the oxy- (56.7, 90.8, 112.5, 145.2 μmol/l, respectively) and deoxyhaemoglobin (25.4, 44.3, 58.5, 65.9 μmol/l) concentrations, causing a decreasing range (41.3, 31.3, 25.0, 22.2%) of the tissue oxygen saturation reading. However, concentration changes were quantified correctly independently of the scattering level. For another algorithm, based on the analytical solution, the offsets were smaller: oxyhaemoglobin 12.2, 34.0, 53.2, 88.8 μmol/l and deoxyhaemoglobin 1.6, 11.2, 22.2, 28.1 μmol/l. The range of the tissue oxygen saturation reading was higher: 71.3, 55.5, 45.7, 39.4%. However, concentration changes were not quantified correctly and depended on scattering. This study demonstrates the need to develop algorithms which take the anatomical structures into consideration.
DNABIT Compress - Genome compression algorithm.
Rajarajeswari, Pothuraju; Apparao, Allam
2011-01-22
Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that the "DNABIT Compress" algorithm is superior to the existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base.
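To make the bits/base figures concrete: a minimal sketch of the fixed 2-bit-per-base packing that such schemes start from (DNABIT Compress itself uses variable-length codes for repeat fragments; this only illustrates the 2 bits/base baseline it improves upon).

```python
# Minimal sketch: fixed 2-bit packing of DNA bases (A, C, G, T).
# This is the naive baseline, not the paper's variable-length code.

CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BASE = {v: k for k, v in CODE.items()}

def pack(seq: str) -> bytes:
    """Pack a DNA string into 2 bits per base (length stored separately)."""
    out = bytearray()
    acc = n = 0
    for ch in seq:
        acc = (acc << 2) | CODE[ch]
        n += 1
        if n == 4:              # 4 bases fill one byte
            out.append(acc)
            acc = n = 0
    if n:                       # pad the final partial byte
        out.append(acc << (2 * (4 - n)))
    return bytes(out)

def unpack(data: bytes, length: int) -> str:
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(BASE[(byte >> shift) & 0b11])
    return "".join(bases[:length])
```

Packing 40 bases into 10 bytes gives exactly 2.0 bits/base; the repeat-coding described in the abstract is what pushes the ratio down toward 1.58 bits/base.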
Sifting Through SDO's AIA Cosmic Ray Hits to Find Treasure
NASA Astrophysics Data System (ADS)
Kirk, M. S.; Thompson, B. J.; Viall, N. M.; Young, P. R.
2017-12-01
The Solar Dynamics Observatory's Atmospheric Imaging Assembly (SDO AIA) has revolutionized solar imaging with its high temporal and spatial resolution, unprecedented spatial and temporal coverage, and seven EUV channels. Automated algorithms routinely clean these images to remove cosmic ray intensity spikes as part of the preprocessing pipeline. We take a novel approach to survey the entire set of AIA "spike" data to identify and group compact brightenings across the entire SDO mission. The AIA team applies a de-spiking algorithm to remove magnetospheric particle impacts on the CCD cameras, but it has been found that compact, intense solar brightenings are often removed as well. We use the spike database to mine the data and form statistics on compact solar brightenings without having to process large volumes of full-disk AIA data. There are approximately 3 trillion "spiked pixels" removed from images over the mission to date. We estimate that 0.001% of those are of solar origin and removed by mistake, giving us a pre-segmented dataset of 30 million events. We explore the implications of these statistics and the physical qualities of the "spikes" of solar origin.
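The quoted event count follows directly from the two stated numbers; a quick integer check:

```python
# Back-of-envelope check of the abstract's numbers: 0.001% of
# ~3 trillion de-spiked pixels yields the quoted ~30 million events.
total_spikes = 3 * 10**12        # spiked pixels removed over the mission
solar_events = total_spikes // 100_000   # 0.001% = 1 in 100,000
print(solar_events)  # prints 30000000
```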
Application of shift-and-add algorithms for imaging objects within biological media
NASA Astrophysics Data System (ADS)
Aizert, Avishai; Moshe, Tomer; Abookasis, David
2017-01-01
The Shift-and-Add (SAA) technique is a simple mathematical operation developed to reconstruct, at high spatial resolution, atmospherically degraded solar images obtained from stellar speckle interferometry systems. This method shifts and assembles individual degraded short-exposure images into a single average image with significantly improved contrast and detail. Since the inhomogeneous refractive indices of biological tissue cause light scattering similar to that induced by optical turbulence in the atmospheric layers, we assume that SAA methods can be successfully implemented to reconstruct the image of an object within a scattering biological medium. To test this hypothesis, five SAA algorithms were evaluated for reconstructing images acquired from multiple viewpoints. After successfully retrieving the hidden object's shape, quantitative image quality metrics were derived, enabling comparison of imaging error across a spectrum of layer thicknesses, demonstrating the relative efficacy of each SAA algorithm for biological imaging.
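The basic shift-and-add operation can be sketched in a few lines. This assumes the simplest variant (shift each short-exposure frame so its brightest pixel lands at the frame centre, then average); the five algorithms evaluated in the paper differ in how the shift is chosen.

```python
# Minimal shift-and-add (SAA) sketch on plain 2D lists: align each
# frame on its brightest pixel, then average the shifted frames.

def brightest(frame):
    """Return (row, col) of the maximum-intensity pixel."""
    return max(((r, c) for r in range(len(frame))
                       for c in range(len(frame[0]))),
               key=lambda rc: frame[rc[0]][rc[1]])

def shift_and_add(frames):
    rows, cols = len(frames[0]), len(frames[0][0])
    cr, cc = rows // 2, cols // 2
    acc = [[0.0] * cols for _ in range(rows)]
    for frame in frames:
        br, bc = brightest(frame)
        dr, dc = cr - br, cc - bc          # shift that centres the peak
        for r in range(rows):
            for c in range(cols):
                sr, sc = r - dr, c - dc    # source pixel before the shift
                if 0 <= sr < rows and 0 <= sc < cols:
                    acc[r][c] += frame[sr][sc]
    n = len(frames)
    return [[v / n for v in row] for row in acc]
```

Because the speckle (or scattering) displacement is removed before averaging, the peak reinforces at the centre instead of being smeared out.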
Zhang, T; Gordon, H R
1997-04-20
We report a sensitivity analysis for the algorithm presented by Gordon and Zhang [Appl. Opt. 34, 5552 (1995)] for inverting the radiance exiting the top and bottom of the atmosphere to yield the aerosol-scattering phase function P(Θ) and single-scattering albedo ω0. The study of the algorithm's sensitivity to radiometric calibration errors, mean-zero instrument noise, sea-surface roughness, the curvature of the Earth's atmosphere, the polarization of the light field, and incorrect assumptions regarding the vertical structure of the atmosphere indicates that the retrieved ω0 has excellent stability even for very large values (~2) of the aerosol optical thickness; however, the error in the retrieved P(Θ) strongly depends on the measurement error and on the assumptions made in the retrieval algorithm. The retrieved phase functions in the blue are usually poor compared with those in the near infrared.
Cao, Le; Wei, Bing
2014-08-25
A finite-difference time-domain (FDTD) algorithm with a new method of plane wave excitation is used to investigate the radar cross section (RCS) characteristics of targets over a layered half space. Compared with the traditional plane wave excitation method, the memory and computation time requirements are greatly decreased. The FDTD calculation is performed with a plane wave incidence, and the far-field RCS is obtained by extrapolating the currently calculated data on the output boundary. However, available extrapolation methods have to evaluate the half-space Green's function. In this paper, a new method which avoids using the complex and time-consuming half-space Green's function is proposed. Numerical results show that this method is in good agreement with the classic algorithm and that it can be used for fast calculation of scattering and radiation of targets over a layered half space.
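For readers unfamiliar with FDTD, the core leapfrog update is compact; a minimal 1D sketch in normalized units with a soft Gaussian source (the paper's layered half space, plane-wave excitation, and far-field extrapolation are not reproduced here).

```python
# Bare-bones 1D FDTD (Yee leapfrog, normalized fields, Courant
# number 0.5). Illustrates only the E/H update equations.
import math

def fdtd_1d(nsteps, nz=200, src=100, sc=0.5):
    ez = [0.0] * nz          # electric field
    hy = [0.0] * nz          # magnetic field
    for t in range(nsteps):
        for k in range(nz - 1):
            hy[k] += sc * (ez[k + 1] - ez[k])     # H update
        for k in range(1, nz):
            ez[k] += sc * (hy[k] - hy[k - 1])     # E update
        ez[src] += math.exp(-((t - 30) / 10.0) ** 2)  # Gaussian source
    return ez
```

A real scattering code adds material coefficients, absorbing boundaries (e.g. PML), and the near-to-far-field transform that this abstract's method accelerates.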
Simulation study into the identification of nuclear materials in cargo containers using cosmic rays
NASA Astrophysics Data System (ADS)
Blackwell, T. B.; Kudryavtsev, V. A.
2015-04-01
Muon tomography represents a new type of imaging technique that can be used in detecting high-Z materials. Monte Carlo simulations for muon scattering in different types of target materials are presented. The dependence of the detector capability to identify high-Z targets on spatial resolution has been studied. Muon tracks are reconstructed using a basic point of closest approach (PoCA) algorithm. In this article we report the development of a secondary analysis algorithm that is applied to the reconstructed PoCA points. This algorithm efficiently ascertains clusters of voxels with high average scattering angles to identify `areas of interest' within the inspected volume. Using this approach the effect of other parameters, such as the distance between detectors and the number of detectors per set, on material identification is also presented. Finally, false positive and false negative rates for detecting shielded HEU in realistic scenarios with low-Z clutter are presented.
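The PoCA step the secondary analysis builds on is simple geometry: given the incoming and outgoing muon tracks (a point and a direction each), find the midpoint of the shortest segment joining the two lines. A self-contained sketch using the standard closest-points formula:

```python
# Point of closest approach (PoCA) of two 3D lines p1 + t*d1 and
# p2 + s*d2, returned as the midpoint of the connecting segment.
# Parallel tracks (denom == 0) are not handled in this sketch.

def poca(p1, d1, p2, d2):
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    sub = lambda a, b: [x - y for x, y in zip(a, b)]
    add = lambda a, b: [x + y for x, y in zip(a, b)]
    scale = lambda a, k: [x * k for x in a]
    w = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b
    t = (b * e - c * d) / denom           # parameter on line 1
    s = (a * e - b * d) / denom           # parameter on line 2
    q1 = add(p1, scale(d1, t))            # closest point on line 1
    q2 = add(p2, scale(d2, s))            # closest point on line 2
    return [(x + y) / 2 for x, y in zip(q1, q2)]
```

Clustering these PoCA points by average scattering angle is what the abstract's secondary algorithm adds on top.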
Improved Gaussian Beam-Scattering Algorithm
NASA Technical Reports Server (NTRS)
Lock, James A.
1995-01-01
The localized model of the beam-shape coefficients for Gaussian beam-scattering theory by a spherical particle provides a great simplification in the numerical implementation of the theory. We derive an alternative form for the localized coefficients that is more convenient for computer computations and that provides physical insight into the details of the scattering process. We construct a FORTRAN program for Gaussian beam scattering with the localized model and compare its computer run time on a personal computer with that of a traditional Mie scattering program and with three other published methods for computing Gaussian beam scattering. We show that the analytical form of the beam-shape coefficients makes evident the fact that the excitation rate of morphology-dependent resonances is greatly enhanced for far off-axis incidence of the Gaussian beam.
SU-E-T-25: Real Time Simulator for Designing Electron Dual Scattering Foil Systems.
Carver, R; Hogstrom, K; Price, M; Leblanc, J; Harris, G
2012-06-01
To create a user-friendly, accurate, real-time computer simulator to facilitate the design of dual-foil scattering systems for electron beams on radiotherapy accelerators. The simulator should allow for a relatively quick initial design that can be refined and verified with subsequent Monte Carlo (MC) calculations and measurements. The simulator consists of an analytical algorithm for calculating electron fluence and a graphical user interface (GUI) C++ program. The algorithm predicts electron fluence using Fermi-Eyges multiple Coulomb scattering theory with a refined Moliere formalism for scattering powers. The simulator also estimates central-axis x-ray dose contamination from the dual-foil system. Once the geometry of the beamline is specified, the simulator allows the user to continuously vary the primary scattering foil material and thickness, the secondary scattering foil material and Gaussian shape (thickness and sigma), and the beam energy. The beam profile and x-ray contamination are displayed in real time. The simulator was tuned by comparison of off-axis electron fluence profiles with those calculated using EGSnrc MC. Over the energy range 7-20 MeV and using present foils on the Elekta radiotherapy accelerator, the simulator profiles agreed to within 2% of MC profiles within 20 cm of the central axis. The x-ray contamination predictions matched measured data to within 0.6%. The calculation time was approximately 100 ms using a single processor, which allows for real-time variation of foil parameters using sliding bars. A real-time dual scattering foil system simulator has been developed. The tool has been useful in a project to redesign an electron dual scattering foil system for one of our radiotherapy accelerators. The simulator has also been useful as an instructional tool for our medical physics graduate students. © 2012 American Association of Physicists in Medicine.
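The Fermi-Eyges ingredient such an analytical algorithm rests on can be sketched directly: the lateral variance at a plane z is the moment integral of the linear scattering power T, and the off-axis fluence is then Gaussian. The scattering-power profile below is an arbitrary illustrative function, not a real foil model.

```python
# Fermi-Eyges sketch: sigma^2(z) = integral over 0..z of
# T(z') * (z - z')^2 dz', with a 1D Gaussian off-axis fluence.
import math

def sigma2(scattering_power, z, n=1000):
    """Trapezoid integration of T(z') * (z - z')^2 over [0, z]."""
    h = z / n
    total = 0.0
    for i in range(n + 1):
        zp = i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * scattering_power(zp) * (z - zp) ** 2
    return total * h

def fluence(x, s2):
    """Normalized 1D Gaussian off-axis profile with variance s2."""
    return math.exp(-x * x / (2.0 * s2)) / math.sqrt(2.0 * math.pi * s2)
```

For constant T the integral reduces to T·z³/3, which makes a convenient sanity check; the simulator's refined Moliere scattering powers replace the toy profile here.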
NASA Technical Reports Server (NTRS)
Chew, W. C.; Song, J. M.; Lu, C. C.; Weedon, W. H.
1995-01-01
In the first phase of our work, we have concentrated on laying the foundation to develop fast algorithms, including the use of recursive structure like the recursive aggregate interaction matrix algorithm (RAIMA), the nested equivalence principle algorithm (NEPAL), the ray-propagation fast multipole algorithm (RPFMA), and the multi-level fast multipole algorithm (MLFMA). We have also investigated the use of curvilinear patches to build a basic method of moments code where these acceleration techniques can be used later. In the second phase, which is mainly reported on here, we have concentrated on implementing three-dimensional NEPAL on a massively parallel machine, the Connection Machine CM-5, and have been able to obtain some 3D scattering results. In order to understand the parallelization of codes on the Connection Machine, we have also studied the parallelization of 3D finite-difference time-domain (FDTD) code with PML material absorbing boundary condition (ABC). We found that simple algorithms like the FDTD with material ABC can be parallelized very well allowing us to solve within a minute a problem of over a million nodes. In addition, we have studied the use of the fast multipole method and the ray-propagation fast multipole algorithm to expedite matrix-vector multiplication in a conjugate-gradient solution to integral equations of scattering. We find that these methods are faster than LU decomposition for one incident angle, but are slower than LU decomposition when many incident angles are needed as in the monostatic RCS calculations.
Restoration for Noise Removal in Quantum Images
NASA Astrophysics Data System (ADS)
Liu, Kai; Zhang, Yi; Lu, Kai; Wang, Xiaoping
2017-09-01
Quantum computation has become increasingly attractive in the past few decades due to its extraordinary performance. As a result, some studies focusing on image representation and processing via quantum mechanics have been done. However, few of them have considered quantum operations for image restoration. To address this problem, three noise removal algorithms are proposed in this paper based on the novel enhanced quantum representation model, targeting two kinds of noise pollution (Salt-and-Pepper noise and Gaussian noise). The first algorithm, Q-Mean, is designed to remove Salt-and-Pepper noise: the noise points are extracted through comparisons with the adjacent pixel values, after which the restoration operation is finished by mean filtering. In the second method, Q-Gauss, a special mask is applied to weaken the Gaussian noise pollution. The third algorithm, Q-Adapt, is effective for a source image containing unknown noise: the type of noise is judged through quantum statistical operations on the color values of the whole image, and then the appropriate noise removal algorithm is used to conduct the image restoration. Performance analysis reveals that our methods can offer high restoration quality and achieve significant speedup through the inherent parallelism of quantum computation.
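The Q-Mean idea has a familiar classical analogue, sketched below (the paper itself operates on quantum image states): flag a pixel as Salt-and-Pepper noise when it sits at an intensity extreme and differs sharply from all its neighbours, then replace it by the neighbour mean. The thresholds are illustrative assumptions.

```python
# Classical analogue of the detect-then-mean-filter step described
# for Q-Mean. 8-bit grayscale image as nested lists; a pixel is
# treated as Salt-and-Pepper noise only if it is 0 or 255 AND is an
# isolated extreme relative to its 4-neighbours.

def q_mean_like(img):
    rows, cols = len(img), len(img[0])
    out = [row[:] for row in img]
    for r in range(rows):
        for c in range(cols):
            v = img[r][c]
            if v not in (0, 255):          # only extremes can be S&P noise
                continue
            nbrs = [img[rr][cc]
                    for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                    if 0 <= rr < rows and 0 <= cc < cols]
            if all(abs(v - n) > 50 for n in nbrs):   # isolated extreme
                out[r][c] = sum(nbrs) / len(nbrs)    # mean-filter repair
    return out
```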
A novel washing algorithm for underarm stain removal
NASA Astrophysics Data System (ADS)
Acikgoz Tufan, H.; Gocek, I.; Sahin, U. K.; Erdem, I.
2017-10-01
After contact with human sweat, which comprises around 27% sebum, anti-perspirants containing aluminium chloride or its compounds form a gel-like structure whose solubility in water is very poor. In daily use, this gel-like structure closes sweat pores and hinders wetting of the skin by sweat. However, when in contact with garments, it forms yellowish stains at the underarm of the garments. These stains are very hard to remove with regular machine washing. In this study, first of all, we focused on understanding and simulating such stain formation on the garments. Two alternative procedures are offered to form gel-like structures. In both procedures, commercially available spray or deo-stick type anti-perspirants, standard acidic and basic sweat solutions and artificial sebum are used to form gel-like structures, which are applied on fabric in order to get hard stains. Secondly, after simulation of the stain on the fabric, we put our efforts into developing a washing algorithm specifically designed for removal of underarm stains. Eight alternative washing algorithms are offered with varying washing temperatures, amounts of detergent, and pre-stain removal procedures. The best algorithm is selected by comparison of tristimulus Y values after washing.
Detection of Heterogeneous Small Inclusions by a Multi-Step MUSIC Method
NASA Astrophysics Data System (ADS)
Solimene, Raffaele; Dell'Aversano, Angela; Leone, Giovanni
2014-05-01
In this contribution the problem of detecting and localizing scatterers with small (in terms of wavelength) cross sections by collecting their scattered field is addressed. The problem is dealt with for a two-dimensional and scalar configuration where the background is a two-layered cylindrical medium. More in detail, while scattered field data are taken in the outermost layer, inclusions are embedded within the inner layer. Moreover, the case of heterogeneous inclusions (i.e., having different scattering coefficients) is addressed. As a pertinent applicative context we identify the problem of diagnosing concrete pillars in order to detect and locate rebars, ducts and other small inhomogeneities that can populate the interior of the pillar. The nature of the inclusions influences the scattering coefficients. For example, the field scattered by rebars is stronger than the one due to ducts. Accordingly, it is expected that the more weakly scattering inclusions can be difficult to detect, as their scattered fields tend to be overwhelmed by those of strong scatterers. In order to circumvent this problem, in this contribution a multi-step MUltiple SIgnal Classification (MUSIC) detection algorithm is adopted [1]. In particular, the first stage aims at detecting rebars. Once rebars have been detected, their positions are exploited to update the Green's function and to subtract the scattered field due to their presence. The procedure is repeated until all the inclusions are detected. The analysis is conducted by numerical experiments for a multi-view/multi-static single-frequency configuration, and the synthetic data are generated by an FDTD forward solver. Acknowledgement: This work benefited from networking activities carried out within the EU-funded COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar." [1] R. Solimene, A. Dell'Aversano and G. Leone, "MUSIC algorithms for rebar detection," J. of Geophysics and Engineering, vol. 10, pp. 1-8, 2013.
NASA Astrophysics Data System (ADS)
Movia, A.; Beinat, A.; Crosilla, F.
2015-04-01
The recognition of vegetation by the analysis of very high resolution (VHR) aerial images provides meaningful information about environmental features; nevertheless, VHR images frequently contain shadows that generate significant problems for the classification of the image components and for the extraction of the needed information. The aim of this research is to classify, from VHR aerial images, vegetation involved in the balance process of the environmental biochemical cycle, and to discriminate it with respect to urban and agricultural features. Three classification algorithms have been tested in order to better recognize vegetation, and compared to the NDVI index; unfortunately, all these methods are affected by the presence of shadows on the images. The literature presents several algorithms to detect and remove shadows in the scene, most of them based on RGB to HSI transformations. In this work some of them have been implemented and compared with one based on the RGB bands. Subsequently, in order to remove shadows and restore brightness in the images, some innovative algorithms based on Procrustes theory have been implemented and applied. Among these, we evaluate the capability of the so-called "not-centered oblique Procrustes" and "anisotropic Procrustes" methods to efficiently restore brightness with respect to a linear correlation correction based on the Cholesky decomposition. Some experimental results obtained by different classification methods after shadow removal carried out with the innovative algorithms are presented and discussed.
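As a generic illustration of the RGB-to-HSI route the text mentions, the simplest shadow masks threshold the HSI intensity channel against the image mean. This toy stand-in is not the Procrustes-based restoration the authors propose; the 0.5 factor is an illustrative assumption.

```python
# Toy shadow mask: a pixel is flagged as shadow when its HSI
# intensity I = (R + G + B) / 3 falls below a fraction of the
# image-wide mean intensity. Pixels are (R, G, B) tuples in 0..255.

def intensity(px):
    return sum(px) / 3.0                  # the I channel of HSI

def shadow_mask(pixels, factor=0.5):
    """Flag pixels darker than `factor` times the mean intensity."""
    mean_i = sum(intensity(p) for p in pixels) / len(pixels)
    return [intensity(p) < factor * mean_i for p in pixels]
```

Real detectors refine this with hue and saturation tests, which is why the HSI transform (rather than raw RGB) is the usual starting point.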
Effect of nanodiamond fluorination on the efficiency of quasispecular reflection of cold neutrons
NASA Astrophysics Data System (ADS)
Nesvizhevsky, V. V.; Dubois, M.; Gutfreund, Ph.; Lychagin, E. V.; Nezvanov, A. Yu.; Zhernenkov, K. N.
2018-02-01
Nanomaterials, which show large reflectivity for external radiation, are of general interest in science and technology. We report a result from our ongoing research on the reflection of low-energy neutrons from powders of detonation diamond nanoparticles. Our previous work showed a large probability for quasispecular reflection of neutrons from this medium. The model of neutron scattering from nanoparticles, which we have developed, suggests two ways to increase the quasispecular reflection probability: (1) the reduction of incoherent scattering by substitution of hydrogen with fluorine inside the nanoparticles, and (2) the sharpening of the neutron optical potential step by removal of amorphous sp2 carbon from the nanoparticle shells. We present experimental results on scattering of slow neutrons from both raw and fluorinated diamond nanoparticles with amorphous sp2 carbon removed by gas-solid fluorination. These results show a clear increase in quasispecular reflection probability.
NASA Technical Reports Server (NTRS)
Loughman, R.; Flittner, D.; Herman, B.; Bhartia, P.; Hilsenrath, E.; McPeters, R.; Rault, D.
2002-01-01
The SOLSE (Shuttle Ozone Limb Sounding Experiment) and LORE (Limb Ozone Retrieval Experiment) instruments are scheduled for reflight on Space Shuttle flight STS-107 in July 2002. In addition, the SAGE III (Stratospheric Aerosol and Gas Experiment) instrument will begin to make limb scattering measurements during Spring 2002. The optimal estimation technique is used to analyze visible and ultraviolet limb scattered radiances and produce a retrieved ozone profile. The algorithm used to analyze data from the initial flight of the SOLSE/LORE instruments (on Space Shuttle flight STS-87 in November 1997) forms the basis of the current algorithms, with expansion to take advantage of the increased multispectral information provided by SOLSE/LORE-2 and SAGE III. We also present detailed sensitivity analysis for these ozone retrieval algorithms. The primary source of ozone retrieval error is tangent height misregistration (i.e., instrument pointing error), which is relevant throughout the altitude range of interest, and can produce retrieval errors on the order of 10-20 percent due to a tangent height registration error of 0.5 km at the tangent point. Other significant sources of error are sensitivity to stratospheric aerosol and sensitivity to error in the a priori ozone estimate (given assumed instrument signal-to-noise = 200). These can produce errors up to 10 percent for the ozone retrieval at altitudes less than 20 km, but produce little error above that level.
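The optimal estimation technique named above combines the measurement with the a priori profile weighted by their uncertainties; a scalar sketch of the update (illustrative numbers, not SOLSE/LORE values):

```python
# Scalar optimal-estimation sketch: combine an a priori value x_a
# (variance s_a) with a measurement y = k * x + noise (variance s_e).

def optimal_estimate(y, k, x_a, s_a, s_e):
    gain = s_a * k / (k * k * s_a + s_e)      # Kalman-type gain
    x_hat = x_a + gain * (y - k * x_a)        # retrieved state
    s_hat = (1.0 - gain * k) * s_a            # posterior variance
    return x_hat, s_hat
```

With a precise measurement (small s_e) the retrieval follows the data; with a noisy one it relaxes to the a priori, which is exactly why the abstract's a priori error term matters below 20 km.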
Kim, Hyun Keol; Montejo, Ludguier D; Jia, Jingfei; Hielscher, Andreas H
2017-06-01
We introduce here the finite volume formulation of the frequency-domain simplified spherical harmonics model with n-th order absorption coefficients (FD-SPN) that approximates the frequency-domain equation of radiative transfer (FD-ERT). We then present the FD-SPN based reconstruction algorithm that recovers absorption and scattering coefficients in biological tissue. The FD-SPN model with 3rd order absorption coefficients (i.e., FD-SP3) is used as a forward model to solve the inverse problem. The FD-SP3 is discretized with a node-centered finite volume scheme and solved with a restarted generalized minimum residual (GMRES) algorithm. The absorption and scattering coefficients are retrieved using a limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm. Finally, the forward and inverse algorithms are evaluated using numerical phantoms with optical properties and sizes that mimic small-volume tissue such as finger joints and small animals. The forward results show that the FD-SP3 model approximates the FD-ERT (S12) solution with relatively high accuracy; the average errors in the phase (<3.7%) and the amplitude (<7.1%) of the partial current at the boundary are reported. From the inverse results we find that the absorption and scattering coefficient maps are more accurately reconstructed with the SP3 model than with the SP1 model. Therefore, this work shows that the FD-SP3 is an efficient model for optical tomographic imaging of small-volume media with non-diffuse properties, both in terms of computational time and accuracy, as it requires significantly lower CPU time than the FD-ERT (S12) and is more accurate than the FD-SP1.
Higher-order time integration of Coulomb collisions in a plasma using Langevin equations
Dimits, A. M.; Cohen, B. I.; Caflisch, R. E.; ...
2013-02-08
The extension of Langevin-equation Monte-Carlo algorithms for Coulomb collisions from the conventional Euler-Maruyama time integration to the next higher order of accuracy, the Milstein scheme, has been developed, implemented, and tested. This extension proceeds via a formulation of the angular scattering directly as stochastic differential equations in the two fixed-frame spherical-coordinate velocity variables. Results from the numerical implementation show the expected improvement [O(Δt) vs. O(Δt^1/2)] in the strong convergence rate for both the speed |v| and the angular components of the scattering. An important result is that this improved convergence is achieved for the angular component of the scattering if and only if the "area-integral" terms in the Milstein scheme are included. The resulting Milstein scheme is of value as a step towards algorithms with both improved accuracy and efficiency. These include both algorithms with improved convergence in the averages (weak convergence) and multi-time-level schemes. The latter have been shown to give a greatly reduced cost for a given overall error level when compared with conventional Monte-Carlo schemes, and their performance is improved considerably when the Milstein algorithm is used for the underlying time advance instead of the Euler-Maruyama algorithm. A new method for sampling the area integrals is given which is a simplification of an earlier direct method and which retains high accuracy. This method, while being useful in its own right because of its relative simplicity, is also expected to considerably reduce the computational requirements for the direct conditional sampling of the area integrals that is needed for adaptive strong integration.
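The Euler-Maruyama vs. Milstein strong-convergence gap is easy to demonstrate on a scalar SDE with a known solution. The sketch below uses geometric Brownian motion as a stand-in test equation (the paper's angular-scattering SDEs in spherical velocity coordinates are not reproduced); for a scalar SDE the Milstein correction needs no area integrals.

```python
# Strong (pathwise) error of Euler-Maruyama vs. Milstein on
# dX = mu*X dt + sig*X dW, whose exact solution is known.
import math, random

def strong_error(scheme, dt, paths=2000, t_end=1.0, mu=0.05, sig=0.5):
    rng = random.Random(1)       # same seed => same Brownian paths
    n = round(t_end / dt)
    err = 0.0
    for _ in range(paths):
        x, w = 1.0, 0.0
        for _ in range(n):
            dw = rng.gauss(0.0, math.sqrt(dt))
            dx = mu * x * dt + sig * x * dw
            if scheme == "milstein":   # extra Ito correction term
                dx += 0.5 * sig * sig * x * (dw * dw - dt)
            x += dx
            w += dw
        exact = math.exp((mu - 0.5 * sig * sig) * t_end + sig * w)
        err += abs(x - exact)
    return err / paths
```

Halving dt roughly halves the Milstein error but shrinks the Euler-Maruyama error only by about 1/√2, matching the O(Δt) vs. O(Δt^1/2) rates quoted in the abstract.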
Correction of WindScat Scatterometric Measurements by Combining with AMSR Radiometric Data
NASA Technical Reports Server (NTRS)
Song, S.; Moore, R. K.
1996-01-01
The SeaWinds scatterometer on the Advanced Earth Observing Satellite-2 (ADEOS-2) will determine surface wind vectors by measuring the radar cross section. Multiple measurements will be made at different points in a wind-vector cell. When dense clouds and rain are present, the signal will be attenuated, thereby giving erroneous results for the wind. This report describes algorithms that use the Advanced Microwave Scanning Radiometer (AMSR) on ADEOS-2 to correct for the attenuation. One can determine attenuation from a radiometer measurement based on the excess brightness temperature measured. This is the difference between the total measured brightness temperature and the contribution from surface emission. A major problem that the algorithm must address is determining the surface contribution. Two basic approaches were developed for this, one using the scattering coefficient measured along with the brightness temperature, and the other using the brightness temperature alone. For both methods, best results will occur if the wind from the preceding wind-vector cell can be used as an input to the algorithm. In the method based on the scattering coefficient, we need the wind direction from the preceding cell. In the method using brightness temperature alone, we need the wind speed from the preceding cell. If neither is available, the algorithm can work, but the corrections will be less accurate. Both correction methods require iterative solutions. Simulations show that the algorithms make significant improvements in the measured scattering coefficient and thus in the retrieved wind vector. For stratiform rains, the errors without correction can be quite large, so the correction makes a major improvement. For systems of separated convective cells, the initial error is smaller and the correction, although about the same percentage, has a smaller effect.
Angular Superresolution for a Scanning Antenna with Simulated Complex Scatterer-Type Targets
2002-05-01
Approved for public release; distribution unlimited. The Scan-MUSIC (MUltiple SIgnal Classification), or SMUSIC, algorithm was developed by the Millimeter...with the use of a single rotatable sensor scanning in an angular region of interest. This algorithm has been adapted and extended from the MUSIC...simulation.
NASA Astrophysics Data System (ADS)
Petersen, T. C.; Ringer, S. P.
2010-03-01
Upon discerning the mere shape of an imaged object, as portrayed by projected perimeters, the full three-dimensional scattering density may not be of particular interest. In this situation considerable simplifications to the reconstruction problem are possible, allowing calculations based upon geometric principles. Here we describe and provide an algorithm which reconstructs the three-dimensional morphology of specimens from tilt series of images for application to electron tomography. Our algorithm uses a differential approach to infer the intersection of projected tangent lines with surfaces which define boundaries between regions of different scattering densities within and around the perimeters of specimens. Details of the algorithm implementation are given and explained using reconstruction calculations from simulations, which are built into the code. An experimental application of the algorithm to a nano-sized aluminium tip is also presented to demonstrate practical analysis for a real specimen. Program summary: Program title: STOMO version 1.0 Catalogue identifier: AEFS_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFS_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 2988 No. of bytes in distributed program, including test data, etc.: 191 605 Distribution format: tar.gz Programming language: C/C++ Computer: PC Operating system: Windows XP RAM: Depends upon the size of experimental data as input, ranging from 200 MB to 1.5 GB Supplementary material: Sample output files, for the test run provided, are available. 
Classification: 7.4, 14 External routines: Dev-C++ ( http://www.bloodshed.net/devcpp.html) Nature of problem: Electron tomography of specimens for which conventional back projection may fail and/or data for which there is a limited angular range. The algorithm does not solve the tomographic back-projection problem but rather reconstructs the local 3D morphology of surfaces defined by varied scattering densities. Solution method: Reconstruction using differential geometry applied to image analysis computations. Restrictions: The code has only been tested with square images and has been developed for only single-axis tilting. Running time: For high quality reconstruction, 5-15 min
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Ho; Xing Lei; Lee, Rena
2012-05-15
Purpose: X-ray scatter reaching the detector degrades the quality of cone-beam computed tomography (CBCT) and represents a problem in volumetric image guided and adaptive radiation therapy. Several methods using a beam blocker for the estimation and subtraction of scatter have been proposed. However, due to missing information resulting from the obstruction of the blocker, such methods require dual scanning or a dynamically moving blocker to obtain a complete volumetric image. Here, we propose a half beam blocker-based approach, in conjunction with a total variation (TV) regularized Feldkamp-Davis-Kress (FDK) algorithm, to correct scatter-induced artifacts by simultaneously acquiring image and scatter information from a single-rotation CBCT scan. Methods: A half beam blocker, comprising lead strips, is used to simultaneously acquire image data on one side of the projection data and scatter data on the other half side. One-dimensional cubic B-Spline interpolation/extrapolation is applied to derive patient specific scatter information by using the scatter distributions on strips. The estimated scatter is subtracted from the projection image acquired at the opposite view. Once this subtraction is completed for all projections, the FDK algorithm based on a cosine weighting function is performed to reconstruct the CBCT volume. To suppress the noise in the reconstructed CBCT images produced by geometric errors between two opposed projections and interpolated scatter information, total variation regularization is applied by a minimization using a steepest gradient descent optimization method. The experimental studies using Catphan504 and anthropomorphic phantoms were carried out to evaluate the performance of the proposed scheme. Results: The scatter-induced shading artifacts were markedly suppressed in CBCT using the proposed scheme. Compared with CBCT without a blocker, the nonuniformity value was reduced from 39.3% to 3.1%. 
The root mean square error relative to values inside the regions of interest selected from a benchmark scatter-free image was reduced from 50 to 11.3. The TV regularization also led to a better contrast-to-noise ratio. Conclusions: An asymmetric half beam blocker-based FDK acquisition and reconstruction technique has been established. The proposed scheme enables simultaneous detection of patient specific scatter and complete volumetric CBCT reconstruction without additional requirements such as prior images, dual scans, or moving strips.
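The core correction step (sampling scatter behind the strips, fitting a 1D cubic spline, and subtracting the interpolated estimate) can be sketched in one dimension. The detector row, strip spacing, and smooth scatter profile below are invented for illustration; the actual method applies this per 2D projection with a physical half-beam blocker.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Simulated 1D detector row: primary signal plus a smooth scatter field
x = np.arange(256)
primary = 100.0 * np.exp(-((x - 80.0) / 25.0) ** 2)
scatter = 20.0 + 10.0 * np.sin(np.pi * x / 256.0)  # slowly varying
measured = primary + scatter

# Behind each lead strip the detector sees scatter only; sample it there
# and interpolate/extrapolate over the full row with a cubic spline.
strip_pos = np.arange(16, 256, 32)
spline = CubicSpline(strip_pos, scatter[strip_pos])
scatter_est = spline(x)

corrected = measured - scatter_est
print(np.max(np.abs(corrected - primary)))  # residual error after correction
```

Because scatter varies slowly across the detector, a coarse set of strip samples suffices for an accurate spline estimate, which is the premise of the half-blocker design.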
Noise removing in encrypted color images by statistical analysis
NASA Astrophysics Data System (ADS)
Islam, N.; Puech, W.
2012-03-01
Cryptographic techniques are used to secure confidential data from unauthorized access, but these techniques are very sensitive to noise. A single bit change in encrypted data can have a catastrophic impact on the decrypted data. This paper addresses the problem of removing bit errors in visual data which are encrypted using the AES algorithm in CBC mode. In order to remove the noise, a method is proposed which is based on the statistical analysis of each block during decryption. The proposed method exploits local statistics of the visual data and the confusion/diffusion properties of the encryption algorithm to remove the errors. Experimental results show that the proposed method can be used at the receiving end as a possible solution for noise removal in visual data in the encrypted domain.
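Why a single flipped bit matters, and why the damage is localized to two blocks in CBC mode, can be demonstrated with a toy cipher. The sketch below is not the paper's method and not real AES; a small SHA-256-based Feistel network stands in for the block cipher purely to show the error pattern (one fully garbled block, then exactly one flipped bit in the next) that block-wise statistical analysis can exploit.

```python
import hashlib

BS = 16  # block size in bytes

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def rf(key, r, data):
    """Keyed round function: SHA-256 truncated to half a block."""
    return hashlib.sha256(key + bytes([r]) + data).digest()[:BS // 2]

def enc_block(key, blk):
    L, R = blk[:BS // 2], blk[BS // 2:]
    for r in range(4):          # 4-round Feistel (toy cipher, NOT AES)
        L, R = R, xor(L, rf(key, r, R))
    return L + R

def dec_block(key, blk):
    L, R = blk[:BS // 2], blk[BS // 2:]
    for r in reversed(range(4)):
        L, R = xor(R, rf(key, r, L)), L
    return L + R

def cbc(key, iv, data, encrypt=True):
    out, prev = [], iv
    for i in range(0, len(data), BS):
        blk = data[i:i + BS]
        if encrypt:
            prev = enc_block(key, xor(blk, prev))
            out.append(prev)
        else:
            out.append(xor(dec_block(key, blk), prev))
            prev = blk
    return b"".join(out)

key, iv = b"k" * BS, b"\x00" * BS
plain = bytes(range(4 * BS))
cipher = bytearray(cbc(key, iv, plain, encrypt=True))
cipher[20] ^= 0x01              # flip one bit inside ciphertext block 1
decrypted = cbc(key, iv, bytes(cipher), encrypt=False)

for i in range(4):
    p, d = plain[i * BS:(i + 1) * BS], decrypted[i * BS:(i + 1) * BS]
    nbits = sum(bin(x ^ y).count("1") for x, y in zip(p, d))
    print(f"block {i}: {nbits} bit errors")
```

Block 1 decrypts to pseudorandom bytes (high entropy, easy to flag statistically), block 2 differs in exactly the flipped bit position, and all other blocks survive intact.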
Polarization reconstruction algorithm for a Compton polarimeter
NASA Astrophysics Data System (ADS)
Vockert, M.; Weber, G.; Spillmann, U.; Krings, T.; Stöhlker, Th
2018-05-01
We present the technique of Compton polarimetry using X-ray detectors based on double-sided segmented semiconductor crystals that were developed within the SPARC collaboration. In addition, we discuss the polarization reconstruction algorithm with particular emphasis on systematic deviations between the observed detector response and our model function for the Compton scattering distribution inside the detector.
An Alternative Retrieval Algorithm for the Ozone Mapping and Profiler Suite Limb Profiler
2012-05-01
behavior of aerosol extinction from the upper troposphere through the stratosphere is critical for retrieving ozone in this region. Aerosol scattering is...
Rapid automated superposition of shapes and macromolecular models using spherical harmonics.
Konarev, Petr V; Petoukhov, Maxim V; Svergun, Dmitri I
2016-06-01
A rapid algorithm to superimpose macromolecular models in Fourier space is proposed and implemented (SUPALM). The method uses a normalized integrated cross-term of the scattering amplitudes as a proximity measure between two three-dimensional objects. The reciprocal-space algorithm allows for direct matching of heterogeneous objects including high- and low-resolution models represented by atomic coordinates, beads or dummy residue chains as well as electron microscopy density maps and inhomogeneous multi-phase models (e.g. of protein-nucleic acid complexes). Using spherical harmonics for the computation of the amplitudes, the method is up to an order of magnitude faster than the real-space algorithm implemented in SUPCOMB by Kozin & Svergun [J. Appl. Cryst. (2001), 34, 33-41]. The utility of the new method is demonstrated in a number of test cases and compared with the results of SUPCOMB. The spherical harmonics algorithm is best suited for low-resolution shape models, e.g. those provided by solution scattering experiments, but also facilitates a rapid cross-validation against structural models obtained by other methods.
A Methodology for the Hybridization Based in Active Components: The Case of cGA and Scatter Search
Alba, Enrique; Leguizamón, Guillermo
2016-01-01
This work presents the results of a new methodology for hybridizing metaheuristics. By first locating the active components (parts) of one algorithm and then inserting them into a second one, we can build efficient and accurate optimization, search, and learning algorithms. This gives a concrete way of constructing new techniques that contrasts with the widespread ad hoc way of hybridizing. In this paper, the enhanced algorithm is a Cellular Genetic Algorithm (cGA), which has been successfully used in the past to find solutions to hard optimization problems. In order to extend and corroborate the use of active components as an emerging hybridization methodology, we propose here the use of active components taken from Scatter Search (SS) to improve cGA. The results obtained over a varied set of benchmarks are highly satisfactory in efficacy and efficiency when compared with a standard cGA. Moreover, the proposed hybrid approach (i.e., cGA+SS) has shown encouraging results with regard to earlier applications of our methodology. PMID:27403153
NASA Technical Reports Server (NTRS)
Gordon, Howard R.; Wang, Menghua
1992-01-01
The first step in the Coastal Zone Color Scanner (CZCS) atmospheric-correction algorithm is the computation of the Rayleigh-scattering (RS) contribution, L_r, to the radiance leaving the top of the atmosphere over the ocean. In the present algorithm, L_r is computed by assuming that the ocean surface is flat. Calculations of the radiance leaving an RS atmosphere overlying a rough Fresnel-reflecting ocean are presented to evaluate the radiance error caused by the flat-ocean assumption. Simulations are carried out to evaluate the error incurred when the CZCS-type algorithm is applied to a realistic ocean in which the surface is roughened by the wind. In situations where there is no direct sun glitter, it is concluded that the error induced by ignoring the Rayleigh-aerosol interaction is usually larger than that caused by ignoring the surface roughness. This suggests that, in refining algorithms for future sensors, more effort should be focused on dealing with the Rayleigh-aerosol interaction than on the roughness of the sea surface.
NASA Technical Reports Server (NTRS)
Lee, Jaehwa; Hsu, N. Christina; Bettenhausen, Corey; Sayer, Andrew M.; Seftor, Colin J.; Jeong, Myeong-Jae
2015-01-01
The Aerosol Single scattering albedo and Height Estimation (ASHE) algorithm was first introduced in Jeong and Hsu (2008) to provide aerosol layer height as well as single scattering albedo (SSA) for biomass burning smoke aerosols. One of the advantages of this algorithm is that the aerosol layer height can be retrieved over broad areas, which had not been possible from lidar observations alone. The algorithm utilizes aerosol properties from three different satellite sensors: aerosol optical depth (AOD) and Ångström exponent (AE) from the Moderate Resolution Imaging Spectroradiometer (MODIS), UV aerosol index (UVAI) from the Ozone Monitoring Instrument (OMI), and aerosol layer height from the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP). Here, we extend the application of the algorithm to Visible Infrared Imaging Radiometer Suite (VIIRS) and Ozone Mapping and Profiler Suite (OMPS) data. We also now include dust layers as well as smoke. Other updates include improvements in retrieving the AOD of nonspherical dust from VIIRS, better determination of the aerosol layer height from CALIOP, and more realistic input aerosol profiles in the forward model for better accuracy.
Multi-linear sparse reconstruction for SAR imaging based on higher-order SVD
NASA Astrophysics Data System (ADS)
Gao, Yu-Fei; Gui, Guan; Cong, Xun-Chao; Yang, Yue; Zou, Yan-Bin; Wan, Qun
2017-12-01
This paper focuses on spotlight synthetic aperture radar (SAR) imaging of point-scattering targets based on tensor modeling. In a real-world scenario, scatterers are usually distributed in a block-sparse pattern. This distribution feature has scarcely been utilized in previous studies of SAR imaging. Our work takes advantage of this structural property of the target scene, constructing a multi-linear sparse reconstruction algorithm for SAR imaging. Multi-linear block sparsity is introduced into the higher-order singular value decomposition (SVD) together with a dictionary-construction procedure. Simulation experiments with ideal point targets show the robustness of the proposed algorithm to the noise and sidelobe disturbances that often degrade the imaging quality of conventional methods. The computational resource requirement is also investigated: the complexity analysis shows that the present method is superior in resource consumption to the classic matching pursuit method. Imaging results for practical measured data also demonstrate the effectiveness of the algorithm developed in this paper.
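The higher-order SVD that the reconstruction is built on has a compact standard form: one factor matrix from the SVD of each mode unfolding, plus a core tensor. The sketch below is only that generic building block, not the authors' dictionary-based block-sparse solver; all names are illustrative.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move the given mode first, then flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T):
    """Higher-order SVD: factor matrices from each unfolding + core tensor."""
    U = [np.linalg.svd(unfold(T, k), full_matrices=False)[0] for k in range(T.ndim)]
    core = T
    for k, Uk in enumerate(U):  # project each mode onto its factor basis
        core = np.moveaxis(np.tensordot(Uk.T, np.moveaxis(core, k, 0), axes=1), 0, k)
    return core, U

rng = np.random.default_rng(1)
T = rng.standard_normal((4, 5, 6))
core, U = hosvd(T)

# Reconstruct: T = core x1 U[0] x2 U[1] x3 U[2] (exact for full-rank factors)
R = core
for k, Uk in enumerate(U):
    R = np.moveaxis(np.tensordot(Uk, np.moveaxis(R, k, 0), axes=1), 0, k)
print(np.allclose(R, T))
```

Truncating the columns of each factor matrix gives the low-multilinear-rank compression that sparse tensor-based imaging schemes exploit.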
Angular description for 3D scattering centers
NASA Astrophysics Data System (ADS)
Bhalla, Rajan; Raynal, Ann Marie; Ling, Hao; Moore, John; Velten, Vincent J.
2006-05-01
The electromagnetic scattered field from an electrically large target can often be well modeled as if it were emanating from a discrete set of scattering centers (see Fig. 1). In the scattering center extraction tool we developed previously based on the shooting and bouncing ray technique, no correspondence is maintained amongst the 3D scattering centers extracted at adjacent angles. In this paper we present a multi-dimensional clustering algorithm to track the angular and spatial behaviors of 3D scattering centers and group them into features. The extracted features for the Slicy and backhoe targets are presented. We also describe two metrics for measuring the angular persistence and spatial mobility of the 3D scattering centers that make up these features in order to gather insights into target physics and feature stability. We find that the features that are most persistent are also the most mobile and discuss implications for optimal SAR imaging.
Martinez, G T; van den Bos, K H W; Alania, M; Nellist, P D; Van Aert, S
2018-04-01
In quantitative scanning transmission electron microscopy (STEM), scattering cross-sections have been shown to be very sensitive to the number of atoms in a column and its composition. They correspond to the integrated intensity over the atomic column, and they outperform other measures. As compared to atomic column peak intensities, which saturate at a given thickness, scattering cross-sections increase monotonically. A study of the electron wave propagation is presented to explain the sensitivity of the scattering cross-sections. Based on the multislice algorithm, we analyse the wave propagation inside the crystal and its link to the scattered signal for the different probe positions contained in the scattering cross-section, for detector collection in the low-, middle- and high-angle regimes. The influence on the signal from scattering by neighbouring columns is also discussed.
NASA Astrophysics Data System (ADS)
Gomez, Humberto
2016-06-01
The CHY representation of scattering amplitudes is based on integrals over the moduli space of a punctured sphere. We replace the punctured sphere by a double-cover version. The resulting scattering equations depend on a parameter Λ controlling the opening of a branch cut. The new representation of scattering amplitudes possesses an enhanced redundancy which can be used to fix, modulo branches, the location of four punctures while promoting Λ to a variable. Via residue theorems we show how CHY formulas break up into sums of products of smaller (off-shell) ones times a propagator. This leads to a powerful way of evaluating CHY integrals of generic rational functions, which we call the Λ algorithm.
Seven-parameter statistical model for BRDF in the UV band.
Bai, Lu; Wu, Zhensen; Zou, Xiren; Cao, Yunhua
2012-05-21
A new semi-empirical seven-parameter BRDF model is developed in the UV band using experimentally measured data. The model is based on the five-parameter model of Wu and the fourteen-parameter model of Renhorn and Boreman. Surface scatter, bulk scatter and retro-reflection scatter are considered. An optimizing modeling method, the artificial immune network genetic algorithm, is used to fit the BRDF measurement data over a wide range of incident angles. The calculation time and accuracy of the five- and seven-parameter models are compared. After fixing the seven parameters, the model describes scattering data in the UV band well.
NASA Astrophysics Data System (ADS)
Li, Zhong-xiao; Li, Zhen-chun
2016-09-01
The multichannel predictive deconvolution can be conducted in overlapping temporal and spatial data windows to solve for the 2D predictive filter for multiple removal. Generally, the 2D predictive filter can better remove multiples, at the cost of more computation time, compared with the 1D predictive filter. In this paper we first use a cross-correlation strategy to determine the limited supporting region of the filter, where the coefficients play a major role for multiple removal in the filter coefficient space. To solve for the 2D predictive filter, the traditional multichannel predictive deconvolution uses the least squares (LS) algorithm, which requires that primaries and multiples be orthogonal. To relax the orthogonality assumption, the iterative reweighted least squares (IRLS) algorithm and the fast iterative shrinkage thresholding (FIST) algorithm have been used to solve for the 2D predictive filter in the multichannel predictive deconvolution with a non-Gaussian maximization (L1 norm minimization) constraint on primaries. The FIST algorithm has been demonstrated to be a faster alternative to the IRLS algorithm. In this paper we introduce the FIST algorithm to solve for the filter coefficients in the limited supporting region of the filter. Compared with FIST-based multichannel predictive deconvolution without the limited supporting region, the proposed method can reduce the computational burden effectively while achieving similar accuracy. Additionally, the proposed method better balances multiple removal and primary preservation than the traditional LS-based multichannel predictive deconvolution and FIST-based single-channel predictive deconvolution. Synthetic and field data sets demonstrate the effectiveness of the proposed method.
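The fast iterative shrinkage thresholding step has a standard generic form: a gradient step, a soft threshold, and a momentum update (as in Beck & Teboulle's FISTA). The sketch below solves a generic L1-regularized least-squares problem on synthetic data; it is not the paper's filter estimation, and the problem size and regularization weight are invented.

```python
import numpy as np

def fista_l1(A, b, lam, n_iter=200):
    """FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        g = y - A.T @ (A @ y - b) / L      # gradient step
        x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + (t - 1.0) / t_new * (x_new - x)                # momentum
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((60, 100))
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [2.0, -1.5, 1.0]     # sparse "filter" to recover
b = A @ x_true
x_hat = fista_l1(A, b, lam=0.1)
print(np.flatnonzero(np.abs(x_hat) > 0.5))
```

Restricting the unknowns to a limited supporting region, as the paper proposes, simply shrinks the column count of A, which is where the computational savings come from.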
Pérez-Arancibia, Carlos; Bruno, Oscar P
2014-08-01
This paper presents high-order integral equation methods for the evaluation of electromagnetic wave scattering by dielectric bumps and dielectric cavities on perfectly conducting or dielectric half-planes. In detail, the algorithms introduced in this paper apply to eight classical scattering problems, namely, scattering by a dielectric bump on a perfectly conducting or a dielectric half-plane, and scattering by a filled, overfilled, or void dielectric cavity on a perfectly conducting or a dielectric half-plane. In all cases, field representations based on single-layer potentials for appropriately chosen Green functions are used. The numerical far fields and near fields exhibit excellent convergence as discretizations are refined, even at and around points where singular fields and infinite currents exist.
Quantum algorithms for quantum field theories.
Jordan, Stephen P; Lee, Keith S M; Preskill, John
2012-06-01
Quantum field theory reconciles quantum mechanics and special relativity, and plays a central role in many areas of physics. We developed a quantum algorithm to compute relativistic scattering probabilities in a massive quantum field theory with quartic self-interactions (φ⁴ theory) in spacetime of four and fewer dimensions. Its run time is polynomial in the number of particles, their energy, and the desired precision, and the algorithm applies at both weak and strong coupling. In the strong-coupling and high-precision regimes, our quantum algorithm achieves exponential speedup over the fastest known classical algorithm.
Septal penetration correction in I-131 imaging following thyroid cancer treatment
NASA Astrophysics Data System (ADS)
Barrack, Fiona; Scuffham, James; McQuaid, Sarah
2018-04-01
Whole body gamma camera images acquired after I-131 treatment for thyroid cancer can suffer from collimator septal penetration artefacts because of the high energy of the gamma photons. This results in the appearance of ‘spoke’ artefacts, emanating from regions of high activity concentration, caused by the non-isotropic attenuation of the collimator. Deconvolution has the potential to reduce such artefacts, by taking into account the non-Gaussian point-spread-function (PSF) of the system. A Richardson–Lucy deconvolution algorithm, with and without prior scatter-correction was tested as a method of reducing septal penetration in planar gamma camera images. Phantom images (hot spheres within a warm background) were acquired and deconvolution using a measured PSF was applied. The results were evaluated through region-of-interest and line profile analysis to determine the success of artefact reduction and the optimal number of deconvolution iterations and damping parameter (λ). Without scatter-correction, the optimal results were obtained with 15 iterations and λ = 0.01, with the counts in the spokes reduced to 20% of the original value, indicating a substantial decrease in their prominence. When a triple-energy-window scatter-correction was applied prior to deconvolution, the optimal results were obtained with six iterations and λ = 0.02, which reduced the spoke counts to 3% of the original value. The prior application of scatter-correction therefore produced the best results, with a marked change in the appearance of the images. The optimal settings were then applied to six patient datasets, to demonstrate its utility in the clinical setting. In all datasets, spoke artefacts were substantially reduced after the application of scatter-correction and deconvolution, with the mean spoke count being reduced to 10% of the original value. 
This indicates that deconvolution is a promising technique for septal penetration artefact reduction that could potentially improve the diagnostic accuracy of I-131 imaging. Novelty and significance: This work has demonstrated that scatter correction combined with deconvolution can be used to substantially reduce the appearance of septal penetration artefacts in I-131 phantom and patient gamma camera planar images, enabling improved visualisation of the I-131 distribution. Deconvolution with a symmetric PSF has previously been used to reduce artefacts in gamma camera images; however, this work details the novel use of an asymmetric PSF to remove the angularly dependent septal penetration artefacts.
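The Richardson-Lucy update itself is compact. The sketch below is the plain multiplicative RL iteration on a synthetic point source with a Gaussian PSF, using circular FFT convolution; it does not reproduce the paper's damping parameter λ, measured asymmetric PSF, or triple-energy-window scatter correction.

```python
import numpy as np

def richardson_lucy(image, psf, n_iter=30):
    """Plain Richardson-Lucy deconvolution with circular FFT convolution.
    (No damping term; the paper's lambda parameter is omitted here.)"""
    psf_f = np.fft.rfft2(np.fft.ifftshift(psf))
    conv = lambda x, H: np.fft.irfft2(np.fft.rfft2(x) * H, s=x.shape)
    est = np.full_like(image, image.mean())      # flat initial estimate
    psf_flip_f = np.conj(psf_f)                  # correlation = flipped-PSF convolution
    for _ in range(n_iter):
        blurred = np.clip(conv(est, psf_f), 1e-12, None)
        est = est * conv(image / blurred, psf_flip_f)
    return est

# Toy phantom: a point source blurred by a wide Gaussian PSF
n = 64
y, x = np.mgrid[:n, :n]
psf = np.exp(-((x - n // 2) ** 2 + (y - n // 2) ** 2) / (2 * 3.0 ** 2))
psf /= psf.sum()
truth = np.zeros((n, n))
truth[20, 40] = 100.0
observed = np.fft.irfft2(np.fft.rfft2(truth) * np.fft.rfft2(np.fft.ifftshift(psf)),
                         s=truth.shape)
observed = np.clip(observed, 0.0, None)
restored = richardson_lucy(observed, psf)
```

With the measured, angularly varying PSF of a real collimator in place of the Gaussian, the same iteration concentrates the counts spread into the spokes back toward the source.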
Anastasiadou, Maria N; Christodoulakis, Manolis; Papathanasiou, Eleftherios S; Papacostas, Savvas S; Mitsis, Georgios D
2017-09-01
This paper proposes supervised and unsupervised algorithms for automatic muscle artifact detection and removal from long-term EEG recordings, which combine canonical correlation analysis (CCA) and wavelets with random forests (RF). The proposed algorithms first perform CCA and continuous wavelet transform of the canonical components to generate a number of features which include component autocorrelation values and wavelet coefficient magnitude values. A subset of the most important features is subsequently selected using RF and labelled observations (supervised case) or synthetic data constructed from the original observations (unsupervised case). The proposed algorithms are evaluated using realistic simulation data as well as 30-min epochs of non-invasive EEG recordings obtained from ten patients with epilepsy. We assessed the performance of the proposed algorithms using classification performance and goodness-of-fit values for noisy and noise-free signal windows. In the simulation study, where the ground truth was known, the proposed algorithms yielded almost perfect performance. In the case of experimental data, where expert marking was performed, the results suggest that both the supervised and unsupervised algorithm versions were able to remove artifacts without affecting noise-free channels considerably, outperforming standard CCA, independent component analysis (ICA) and Lagged Auto-Mutual Information Clustering (LAMIC). The proposed algorithms achieved excellent performance for both simulation and experimental data. Importantly, for the first time to our knowledge, we were able to perform entirely unsupervised artifact removal, i.e. without using already marked noisy data segments, achieving performance that is comparable to the supervised case. 
Overall, the results suggest that the proposed algorithms yield significant future potential for improving EEG signal quality in research or clinical settings without the need for marking by expert neurophysiologists, EMG signal recording and user visual inspection. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
Optically-sectioned two-shot structured illumination microscopy with Hilbert-Huang processing.
Patorski, Krzysztof; Trusiak, Maciej; Tkaczyk, Tomasz
2014-04-21
We introduce a fast, simple, adaptive and experimentally robust method for reconstructing background-rejected optically-sectioned images using two-shot structured illumination microscopy. Our data demodulation method needs two grid-illumination images mutually phase shifted by π (half a grid period), but a precise phase displacement between the two frames is not required. Upon frame subtraction, an input pattern with increased grid modulation is obtained. The first demodulation stage comprises two-dimensional data processing based on the empirical mode decomposition for object spatial frequency selection (noise reduction and bias term removal). The second stage consists in calculating a high contrast image using the two-dimensional spiral Hilbert transform. The effectiveness of our algorithm is compared with results calculated for the same input data using structured-illumination (SIM) and HiLo microscopy methods. The input data were collected while studying highly scattering tissue samples in reflectance mode. Results of our approach compare very favorably with the SIM and HiLo techniques.
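The second-stage demodulation can be sketched with the standard spiral-phase (vortex) quadrature transform: multiply the fringe spectrum by exp(iφ) in the Fourier domain and combine the result with the original pattern to obtain the local fringe amplitude. The two π-shifted frames and the test object below are synthetic, and the empirical-mode-decomposition pre-filtering stage is omitted.

```python
import numpy as np

def spiral_hilbert_envelope(f):
    """Local fringe amplitude via the 2D spiral-phase Hilbert transform."""
    F = np.fft.fft2(f)
    u = np.fft.fftfreq(f.shape[1])
    v = np.fft.fftfreq(f.shape[0])
    U, V = np.meshgrid(u, v)
    spiral = np.exp(1j * np.arctan2(V, U))   # vortex filter exp(i*phi)
    spiral[0, 0] = 0.0                       # suppress the (undefined) DC term
    q = np.fft.ifft2(F * spiral)             # quadrature component
    return np.sqrt(f ** 2 + np.abs(q) ** 2)

# Synthetic object envelope modulated by an illumination grid; the two
# frames carry the grid shifted by half a period (a pi phase shift).
n = 128
y, x = np.mgrid[:n, :n]
obj = np.exp(-((x - 64) ** 2 + (y - 64) ** 2) / (2 * 20.0 ** 2))
grid = np.cos(2 * np.pi * x / 8)
i1 = obj * (1 + grid)
i2 = obj * (1 - grid)
fringes = (i1 - i2) / 2                      # = obj * grid, bias removed
envelope = spiral_hilbert_envelope(fringes)  # recovers obj
```

The subtraction removes the out-of-focus bias while doubling the grid modulation, and the vortex transform then supplies the quadrature needed to strip the carrier without knowing the exact grid phase.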
NASA Astrophysics Data System (ADS)
Weng, Sheng
Thyroid and parathyroid glands play a vital role in regulating the body's metabolism and calcium levels. Surgical removal of the glands is the main treatment for both thyroid cancer and parathyroid adenoma. In thyroidectomy and parathyroidectomy, it is very important to differentiate thyroid, parathyroid, and the other tissues around the neck. Traditionally, physicians use ultrasound-guided fine needle aspiration (FNA) to evaluate thyroid nodules, but up to 30% of FNA results are "inconclusive". The sestamibi scan can localize parathyroid adenoma, but currently it only has 50% accuracy. Here we applied the emerging coherent anti-Stokes Raman scattering (CARS) technique to image both thyroid and parathyroid tissues, which has the potential to be used for real-time in vivo examination of different structures. We also developed algorithms to differentiate different cellular structures based on CARS images. When incorporated with a fiber-optic endoscope in the future, the CARS imaging technique can help surgeons identify cancerous thyroid tissue intraoperatively, preserve healthy parathyroid glands during thyroidectomy, and find parathyroid adenoma during parathyroidectomy.
A Maximum NEC Criterion for Compton Collimation to Accurately Identify True Coincidences in PET
Chinn, Garry; Levin, Craig S.
2013-01-01
In this work, we propose a new method to increase the accuracy of identifying true coincidence events for positron emission tomography (PET). This approach requires 3-D detectors with the ability to position each photon interaction in multi-interaction photon events. When multiple interactions occur in the detector, the incident direction of the photon can be estimated using the Compton scatter kinematics (Compton Collimation). If the difference between the estimated incident direction of the photon relative to a second, coincident photon lies within a certain angular range around colinearity, the line of response between the two photons is identified as a true coincidence and used for image reconstruction. We present an algorithm for choosing the incident photon direction window threshold that maximizes the noise equivalent counts of the PET system. For simulated data, the direction window removed 56%–67% of random coincidences while retaining > 94% of true coincidences for image reconstruction, and accurately extracted 70% of true coincidences from multiple coincidences. PMID:21317079
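The direction estimate rests on the Compton scatter kinematics: for an incident photon of energy E0 that deposits E_dep in its first interaction, cos θ = 1 − m_e c² (1/(E0 − E_dep) − 1/E0). A minimal numerical sketch (the deposit value is invented for illustration):

```python
import numpy as np

ME_C2 = 511.0  # electron rest energy in keV

def compton_angle(e_incident, e_deposit):
    """Photon scattering angle (radians) from the first-interaction energy
    deposit, via cos(theta) = 1 - me*c^2 * (1/E' - 1/E0), E' = E0 - E_deposit."""
    e_prime = e_incident - e_deposit
    cos_t = 1.0 - ME_C2 * (1.0 / e_prime - 1.0 / e_incident)
    return np.arccos(np.clip(cos_t, -1.0, 1.0))

# A 511 keV annihilation photon depositing ~170.3 keV scatters near 60 degrees;
# the resulting direction cone is compared against the line of response to
# accept or reject the coincidence.
theta_deg = np.degrees(compton_angle(511.0, 170.3))
print(theta_deg)
```

Energy resolution blurs the estimated cone, which is why the acceptance window around colinearity must be tuned, here by maximizing noise equivalent counts.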
A reduction package for cross-dispersed echelle spectrograph data in IDL
NASA Astrophysics Data System (ADS)
Hall, Jeffrey C.; Neff, James E.
1992-12-01
We have written in IDL a data reduction package that performs reduction and extraction of cross-dispersed echelle spectrograph data. The present package includes a complete set of tools for extracting data from any number of spectral orders with arbitrary tilt and curvature. Essential elements include debiasing and flatfielding of the raw CCD image, removal of scattered light background, either nonoptimal or optimal extraction of data, and wavelength calibration and continuum normalization of the extracted orders. A growing set of support routines permits examination of the frame being processed to provide continuing checks on the statistical properties of the data and on the accuracy of the extraction. We will display some sample reductions and discuss the algorithms used. The inherent simplicity and user-friendliness of the IDL interface make this package a useful tool for spectroscopists. We will provide an email distribution list for those interested in receiving the package, and further documentation will be distributed at the meeting.
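The reduction chain described above (debias, flat-field, scattered-light subtraction, extraction, continuum normalization) can be sketched in a few lines. This is not the authors' IDL package; it is a minimal numpy mock-up on a synthetic frame with a single tilted order, and every detector parameter below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
ny, nx = 60, 120
bias, scatter_bg = 100.0, 8.0
yy, x = np.arange(ny), np.arange(nx)

# Synthetic truth: smooth continuum with one absorption line
true_spec = 1000.0 * (1 + 0.001 * x) * (1 - 0.3 * np.exp(-0.5 * ((x - 60) / 3.0) ** 2))

# One tilted order: the trace centre drifts linearly across the detector
trace = 20.0 + 0.1 * x
profile = np.exp(-0.5 * ((yy[:, None] - trace) / 1.5) ** 2)
profile /= profile.sum(axis=0)

gain = 1.0 + 0.05 * rng.standard_normal((ny, nx))      # pixel-to-pixel sensitivity
science = bias + gain * (true_spec * profile + scatter_bg)
flat_exposure = bias + gain * 500.0                    # uniformly illuminated flat

# 1) debias  2) flat-field  3) scattered-light subtraction  4) extraction
sci = (science - bias) / ((flat_exposure - bias) / 500.0)
sci -= np.median(sci[45:, :], axis=0, keepdims=True)   # inter-order background
extracted = np.array([sci[np.abs(yy - trace[i]) < 5, i].sum() for i in range(nx)])

# 5) continuum normalization with a low-order polynomial
normalized = extracted / np.polyval(np.polyfit(x, extracted, 2), x)
```

The extraction here is the simple (nonoptimal) summed variant; optimal extraction would weight pixels by the spatial profile.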
2012-03-22
shapes tested, when the objective parameter set was confined to a dictionary's defined parameter space. These physical characteristics included... Hypothesis Testing and Detection Theory... 3-D SAR Scattering Models... A basis pursuit de-noising (BPDN) algorithm is chosen to perform extraction due to inherent efficiency and error tolerance. Multiple shape dictionaries
NASA Astrophysics Data System (ADS)
Liu, Zhipeng; Zhang, Bin; Feng, Qi; Chen, Zhaoyang; Lin, Chengyou; Ding, Yingchun
2017-06-01
Focusing light through strongly scattering media plays an important role in biomedical imaging and therapy. Here, we experimentally demonstrate light focusing through a ZnO sample by binary amplitude optimization using a genetic algorithm. In the experiment, we use a Micro-Electro-Mechanical-System (MEMS)-based digital micromirror device (DMD) operated in amplitude-only modulation mode. The DMD consists of 1920×1080 square mirrors that can be independently controlled to reflect light to a desired position. We control only 160,000 mirrors, divided into 400 segments, to modulate light focusing through the scattering media with the genetic algorithm. Light intensity at the target position is enhanced up to 50 ± 5 times the average speckle intensity. The diameter of the focal spot can be varied from 7 μm to 70 μm at arbitrary positions, and multiple foci are obtained simultaneously. The spatial arrangement of the multiple foci can be flexibly controlled. The advantage of DMDs lies in their switching speed of up to 30 kHz, which has the potential to generate a focus in an ultra-short period of time. Our work provides a reference for the study of the high-speed wavefront shaping required in in vivo tissue imaging.
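A genetic algorithm over binary amplitude masks can be sketched with a random complex transmission vector standing in for the scattering medium. All sizes and GA settings below are illustrative assumptions, not the paper's experimental parameters:

```python
import numpy as np

rng = np.random.default_rng(7)
n_seg = 64                          # controllable DMD segments (illustrative)
# Random complex transmission vector standing in for the scattering medium
t = (rng.standard_normal(n_seg) + 1j * rng.standard_normal(n_seg)) / np.sqrt(2 * n_seg)

def intensity(mask):
    """Intensity at the target: coherent sum over the 'on' segments."""
    return np.abs(t @ mask) ** 2

pop_size, n_gen, mut_rate = 30, 60, 0.02
pop = rng.integers(0, 2, size=(pop_size, n_seg))
init_fit = np.array([intensity(m) for m in pop])
init_best, init_mean = init_fit.max(), init_fit.mean()

for gen in range(n_gen):
    fit = np.array([intensity(m) for m in pop])
    pop = pop[np.argsort(fit)[::-1]]              # elitist sort: best masks first
    children = []
    for _ in range(pop_size // 2):                # breed from the better half
        a, b = pop[rng.integers(0, pop_size // 2, size=2)]
        cut = int(rng.integers(1, n_seg))
        child = np.concatenate([a[:cut], b[cut:]])
        child[rng.random(n_seg) < mut_rate] ^= 1  # bit-flip mutation
        children.append(child)
    pop[pop_size // 2:] = children                # replace the worse half

best = max(intensity(m) for m in pop)
```

Elitism guarantees the best mask is never lost between generations, which is one reason GAs tolerate the measurement noise of a real experiment well.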
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raymund, T.D.
Recently, several tomographic techniques for ionospheric electron density imaging have been proposed. These techniques reconstruct a vertical slice image of electron density using total electron content data. The data are measured between a low orbit beacon satellite and fixed receivers located along the projected orbital path of the satellite. By using such tomographic techniques, it may be possible to inexpensively (relative to incoherent scatter techniques) image the ionospheric electron density in a vertical plane several times per day. The satellite and receiver geometry used to measure the total electron content data causes the data to be incomplete; that is, the measured data do not contain enough information to completely specify the ionospheric electron density distribution in the region between the satellite and the receivers. A new algorithm is proposed which allows the incorporation of other complementary measurements, such as those from ionosondes, and also includes ways to include a priori information about the unknown electron density distribution in the reconstruction process. The algorithm makes use of two-dimensional basis functions. Illustrative application of this algorithm is made to simulated cases with good results. The technique is also applied to real total electron content (TEC) records collected in Scandinavia in conjunction with the EISCAT incoherent scatter radar. The tomographic reconstructions are compared with the incoherent scatter electron density images of the same region of the ionosphere.
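The core limited-angle inversion can be illustrated with a Kaczmarz/ART loop on pixel basis functions; the algorithm described above additionally folds in ionosonde data and a priori information, which this sketch omits. The geometry and grid size are invented for illustration:

```python
import numpy as np

n = 16                                       # n x n grid of density pixels
true_img = np.zeros((n, n))
true_img[4:9, 5:12] = 1.0                    # a simple ionospheric "blob"

# Near-vertical rays (receiver-to-satellite geometry): limited angular coverage
rows, step = [], 0.1
for th in np.linspace(-0.4, 0.4, 9):         # ray tilt (radians)
    d = np.array([np.sin(th), np.cos(th)])
    for x0 in np.linspace(1.0, n - 2.0, 12):
        row = np.zeros(n * n)
        p = np.array([x0, 0.0])
        while 0 <= p[0] < n and p[1] < n:
            row[int(p[1]) * n + int(p[0])] += step   # path length in this pixel
            p = p + step * d
        rows.append(row)
A = np.array(rows)
tec = A @ true_img.ravel()                   # simulated total electron content

# Kaczmarz / ART: cyclically project the estimate onto each ray's equation
x = np.zeros(n * n)
for sweep in range(50):
    for k in range(len(A)):
        a = A[k]
        x += a * (tec[k] - a @ x) / (a @ a)
img = x.reshape(n, n)
```

The reconstruction fits the TEC data well but smears vertically, which is exactly the incompleteness the abstract describes and the motivation for complementary ionosonde constraints.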
WE-EF-207-09: Single-Scan Dual-Energy CT Using Primary Modulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petrongolo, M; Zhu, L
Purpose: Compared with conventional CT, dual energy CT (DECT) provides better material differentiation but requires projection data with two different effective x-ray spectra. Current DECT scanners use either a two-scan setting or costly imaging components, which are not feasible or available on open-gantry cone-beam CT systems. We propose a hardware-based method which utilizes primary modulation to enable single-scan DECT on a conventional CT scanner. The CT imaging geometry of primary modulation is identical to that used in our previous method for scatter removal, making it possible for future combination with effective scatter correction on the same CT scanner. Methods: We insert an attenuation sheet with a spatially-varying pattern - primary modulator - between the x-ray source and the imaged object. During the CT scan, the modulator selectively hardens the x-ray beam at specific detector locations. Thus, the proposed method simultaneously acquires high and low energy data. High and low energy CT images are then reconstructed from projections with missing data via an iterative CT reconstruction algorithm with gradient weighting. Proof-of-concept studies are performed using a copper modulator on a cone-beam CT system. Results: Our preliminary results on the Catphan(c) 600 phantom indicate that the proposed method for single-scan DECT is able to successfully generate high-quality high and low energy CT images and distinguish different materials through basis material decomposition. By applying correction algorithms and using all of the acquired projection data, we can reconstruct a single CT image of comparable image quality to conventional CT images, i.e., without primary modulation. Conclusion: This work shows great promise in using a primary modulator to perform high-quality single-scan DECT imaging. Future studies will test method performance on anthropomorphic phantoms and perform quantitative analyses on image qualities and DECT decomposition accuracy.
We will use simulations to optimize the modulator material and geometry parameters.
An efficient algorithm for the generalized Foldy-Lax formulation
NASA Astrophysics Data System (ADS)
Huang, Kai; Li, Peijun; Zhao, Hongkai
2013-02-01
Consider the scattering of a time-harmonic plane wave incident on a two-scale heterogeneous medium, which consists of scatterers that are much smaller than the wavelength and extended scatterers that are comparable to the wavelength. In this work we treat those small scatterers as isotropic point scatterers and use a generalized Foldy-Lax formulation to model wave propagation and capture multiple scattering among point scatterers and extended scatterers. Our formulation is given as a coupled system, which combines the original Foldy-Lax formulation for the point scatterers and the regular boundary integral equation for the extended obstacle scatterers. The existence and uniqueness of the solution for the formulation are established in terms of physical parameters such as the scattering coefficient and the separation distances. Computationally, an efficient physically motivated Gauss-Seidel iterative method is proposed to solve the coupled system, where only a linear system of algebraic equations for the point scatterers or a boundary integral equation for a single extended obstacle scatterer needs to be solved at each iteration step. The convergence of the iterative method is also characterized in terms of physical parameters. Numerical tests for the far-field patterns of scattered fields arising from uniformly or randomly distributed point scatterers and single or multiple extended obstacle scatterers are presented.
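For the point-scatterer part alone, the Foldy-Lax system and its Gauss-Seidel solution can be sketched as follows. The coupling to extended obstacles via boundary integral equations is omitted, and the wavenumber, coupling strength, and geometry are illustrative assumptions chosen to sit in the convergent, well-separated regime:

```python
import numpy as np

rng = np.random.default_rng(5)
k = 2 * np.pi                        # wavenumber (unit wavelength)
sigma = 0.5                          # point-scatterer coupling (weak regime)

# Well-separated point scatterers: a jittered 2 x 2 x 3 grid, spacing 2
base = np.array([[i, j, l] for i in range(2) for j in range(2) for l in range(3)],
                float) * 2.0
pts = base + 0.2 * rng.standard_normal(base.shape)
n = len(pts)

def G(r1, r2):
    """Free-space 3-D Helmholtz Green's function."""
    d = np.linalg.norm(r1 - r2)
    return np.exp(1j * k * d) / (4 * np.pi * d)

phi_inc = np.exp(1j * k * pts[:, 2])          # plane wave travelling along z

# Foldy-Lax coupling matrix: M[j, m] = sigma * G(r_j, r_m), zero on the diagonal
M = np.array([[sigma * G(pts[j], pts[m]) if m != j else 0.0
               for m in range(n)] for j in range(n)])

# Gauss-Seidel sweeps: each update uses the freshest neighbour fields,
# physically a successively refined multiple-scattering series
psi = phi_inc.copy()
for sweep in range(50):
    for j in range(n):
        psi[j] = phi_inc[j] + M[j] @ psi

psi_direct = np.linalg.solve(np.eye(n) - M, phi_inc)   # direct solve for comparison
```

In this weak-coupling, well-separated regime the sweeps converge geometrically to the direct solution of (I − M)ψ = φ_inc.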
Secin, Fernando P; Bianco, Fernando J; Cronin, Angel; Eastham, James A; Scardino, Peter T; Guillonneau, Bertrand; Vickers, Andrew J
2009-02-01
A publication on behalf of the European Society of Urological Oncology questioned the need for removing the seminal vesicles during radical prostatectomy in patients with prostate specific antigen less than 10 ng/ml except when biopsy Gleason score is greater than 6 or there are greater than 50% positive biopsy cores. We applied the European Society of Urological Oncology algorithm to an independent data set to determine its predictive value. Data on 1,406 men who underwent radical prostatectomy and seminal vesicle removal between 1998 and 2004 were analyzed. Patients with and without seminal vesicle invasion were classified as positive or negative according to the European Society of Urological Oncology algorithm. Of the 90 cases with seminal vesicle invasion (6.4% of the cohort), 81 were classified positive, for 90% sensitivity, while 656 of the 1,316 cases without seminal vesicle invasion were classified negative, for 50% specificity. The negative predictive value was 98.6%. In decision analytic terms, if the loss in health when seminal vesicles are invaded and not completely removed is considered at least 75 times greater than when removing them unnecessarily, the algorithm proposed by the European Society of Urological Oncology should not be used. Whether to use the European Society of Urological Oncology algorithm depends not only on its accuracy, but also on the relative clinical consequences of false-positive and false-negative results. Our threshold of 75 is an intermediate value that is difficult to interpret, given uncertainties about the benefit of seminal vesicle sparing and harm associated with untreated seminal vesicle invasion. We recommend more formal decision analysis to determine the clinical value of the European Society of Urological Oncology algorithm.
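The operating characteristics quoted above follow directly from the reported counts; a quick check, including one possible reading of how a threshold near 75 arises (the ratio interpretation is an assumption for illustration, not the paper's formal decision analysis):

```python
# Counts reported for the ESUO algorithm applied to the 1,406-patient cohort
svi_total, svi_flagged_pos = 90, 81            # invasion cases correctly flagged
no_svi_total, no_svi_flagged_neg = 1316, 656   # invasion-free cases correctly spared

sensitivity = svi_flagged_pos / svi_total                    # 81/90 = 0.90
specificity = no_svi_flagged_neg / no_svi_total              # 656/1316 ~ 0.50
false_neg = svi_total - svi_flagged_pos                      # 9 missed invasions
npv = no_svi_flagged_neg / (no_svi_flagged_neg + false_neg)  # ~ 0.986

# One reading of the decision threshold: unnecessary removals spared
# per missed invasion among algorithm-negative patients
tradeoff = no_svi_flagged_neg / false_neg                    # ~ 73, near the stated 75
```

The 656-to-9 ratio makes concrete why the harm weighting, not the raw accuracy, decides whether the algorithm is usable.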
Matt Busse
2010-01-01
The ecological effects of post-thinning slash retention on vegetation, wildlife browse, and soil were evaluated in sixty-year-old stands of second-growth pine in central Oregon. Three slash-retention treatments were compared: whole-tree removal, bole-only removal, and no removal (boles and slash scattered on site). The study intent was to create a wide gradient of...
DNABIT Compress – Genome compression algorithm
Rajarajeswari, Pothuraju; Apparao, Allam
2011-01-01
Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences, particularly for larger genomes, outperforming existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to fragments of DNA sequence (exact repeats, reverse repeats) is also a concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the best existing methods could not achieve a ratio below 1.72 bits/base. PMID:21383923
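As context for the quoted ratios, the naive fixed-code baseline that any DNA compressor must beat (2 bits per base for the 4-letter alphabet) can be sketched as follows; this is not the DNABIT variable-code scheme itself, just the reference point it improves on:

```python
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BASE = {v: k for k, v in CODE.items()}

def pack(seq):
    """Pack a DNA string at 2 bits per base into bytes."""
    bits = 0
    for b in seq:
        bits = (bits << 2) | CODE[b]
    bits |= 1 << (2 * len(seq))          # sentinel bit so leading 'A's survive
    return bits.to_bytes((2 * len(seq)) // 8 + 1, "big")

def unpack(data):
    """Invert pack(): recover the original DNA string."""
    bits = int.from_bytes(data, "big")
    n = (bits.bit_length() - 1) // 2
    return "".join(BASE[(bits >> (2 * (n - i) - 2)) & 0b11] for i in range(n))

seq = "ACGTTGCAAC" * 20
packed = pack(seq)
ratio = 8 * len(packed) / len(seq)       # approaches 2 bits per base
```

Schemes like DNABIT go below this 2 bits/base floor by giving repeat fragments shorter codes than their literal encoding.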
NASA Astrophysics Data System (ADS)
Mishra, Puneet; Singla, Sunil Kumar
2013-01-01
In the modern world of automation, biological signals, especially the electroencephalogram (EEG) and electrocardiogram (ECG), are gaining wide attention as a source of biometric information. Earlier studies have shown that EEG and ECG vary across individuals, and that every individual has a distinct EEG and ECG spectrum. EEG (which can be recorded from the scalp due to the activity of millions of neurons) may contain noise signals such as eye blinks, eye movement, muscular movement, line noise, etc. Similarly, ECG may contain artifacts such as line noise, tremor artifacts, baseline wander, etc. These noise signals must be separated from the EEG and ECG signals to obtain accurate results. This paper proposes a technique for the removal of the eye blink artifact from EEG and ECG signals using the fixed-point (FastICA) algorithm of independent component analysis (ICA). For validation, the FastICA algorithm has been applied to a synthetic signal prepared by adding random noise to an ECG signal. The FastICA algorithm separates the signal into two independent components, i.e. the pure ECG and the artifact signal. Similarly, the same algorithm has been applied to remove the artifacts (electrooculogram, or eye blink) from the EEG signal.
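The fixed-point ICA idea can be sketched end to end on synthetic data: mix a spiky ECG-like source with a slow blink-like artifact, whiten, then run tanh-nonlinearity fixed-point iterations with deflation. This is a small numpy re-implementation for illustration, not the authors' code; the signals and mixing matrix are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(2000) / 200.0                            # 10 s at 200 Hz

# Two independent sources: a spiky ECG-like train and slow blink-like bumps
ecg = (t * 1.2 % 1.0 < 0.05).astype(float)
blink = np.exp(-0.5 * ((t % 3.1) - 1.5) ** 2 / 0.04)
S = np.vstack([ecg - ecg.mean(), blink - blink.mean()])

A = np.array([[1.0, 0.6], [0.4, 1.0]])                 # mixing matrix
X = A @ S                                              # two "electrode" channels

# Whitening
Xc = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(Xc))
Z = E @ np.diag(d ** -0.5) @ E.T @ Xc

# FastICA fixed-point iterations (tanh nonlinearity, deflation scheme)
W = np.zeros((2, 2))
for i in range(2):
    w = rng.standard_normal(2)
    w /= np.linalg.norm(w)
    for _ in range(200):
        g = np.tanh(Z.T @ w)
        w_new = (Z @ g) / Z.shape[1] - (1 - g ** 2).mean() * w
        w_new -= W[:i].T @ (W[:i] @ w_new)             # decorrelate from earlier ones
        w = w_new / np.linalg.norm(w_new)
    W[i] = w
recovered = W @ Z                                      # signal and artifact, separated
```

In the artifact-removal workflow, the blink-like component would then be zeroed and the remaining components back-projected to the electrode space.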
Bayesian approach to the analysis of neutron Brillouin scattering data on liquid metals
NASA Astrophysics Data System (ADS)
De Francesco, A.; Guarini, E.; Bafile, U.; Formisano, F.; Scaccia, L.
2016-08-01
When the dynamics of liquids and disordered systems at the mesoscopic level are investigated by means of inelastic scattering (e.g., neutron or x ray), spectra are often characterized by a poor definition of the excitation lines and spectroscopic features in general, and one important issue is to establish how many of these lines need to be included in the modeling function and to estimate their parameters. Furthermore, when strongly damped excitations are present, commonly used and widespread fitting algorithms are particularly affected by the choice of initial values of the parameters. An inadequate choice may lead to an inefficient exploration of the parameter space, resulting in the algorithm getting stuck in a local minimum. In this paper, we present a Bayesian approach to the analysis of neutron Brillouin scattering data in which the number of excitation lines is treated as unknown and estimated along with the other model parameters. We propose a joint estimation procedure based on a reversible-jump Markov chain Monte Carlo algorithm, which efficiently explores the parameter space, producing a probabilistic measure to quantify the uncertainty on the number of excitation lines as well as reliable parameter estimates. The method proposed could prove of great importance in extracting physical information from experimental data, especially when the detection of spectral features is complicated not only because of the properties of the sample, but also because of the limited instrumental resolution and count statistics. The approach is tested on a generated data set and then applied to real experimental spectra of neutron Brillouin scattering from a liquid metal, previously analyzed in a more traditional way.
Electromagnetic scattering calculations on the Intel Touchstone Delta
NASA Technical Reports Server (NTRS)
Cwik, Tom; Patterson, Jean; Scott, David
1992-01-01
During the first year's operation of the Intel Touchstone Delta system, software which solves the electric field integral equations for fields scattered from arbitrarily shaped objects has been transferred to the Delta. To fully realize the Delta's resources, an out-of-core dense matrix solution algorithm that utilizes some or all of the 90 Gbyte of concurrent file system (CFS) has been used. The largest calculation completed to date computes the fields scattered from a perfectly conducting sphere modeled by 48,672 unknown functions, resulting in a complex-valued dense matrix needing 37.9 Gbyte of storage. The out-of-core LU matrix factorization algorithm was executed in 8.25 h at a rate of 10.35 Gflops. Total time to complete the calculation was 19.7 h; the additional time was used to compute the 48,672 x 48,672 matrix entries, solve the system for a given excitation, and compute observable quantities. The calculation was performed in 64-bit precision.
Multilevel acceleration of scattering-source iterations with application to electron transport
Drumm, Clif; Fan, Wesley
2017-08-18
Acceleration/preconditioning strategies available in the SCEPTRE radiation transport code are described. A flexible transport synthetic acceleration (TSA) algorithm that uses a low-order discrete-ordinates (SN) or spherical-harmonics (PN) solve to accelerate convergence of a high-order SN source-iteration (SI) solve is described. Convergence of the low-order solves can be further accelerated by applying off-the-shelf incomplete-factorization or algebraic-multigrid methods. Also available is an algorithm that uses a generalized minimum residual (GMRES) iterative method rather than SI for convergence, using a parallel sweep-based solver to build up a Krylov subspace. TSA has been applied as a preconditioner to accelerate the convergence of the GMRES iterations. The methods are applied to several problems involving electron transport and problems with artificial cross sections with large scattering ratios. These methods were compared and evaluated by considering material discontinuities and scattering anisotropy. The accelerations obtained are highly problem dependent, but speedup factors around 10 have been observed in typical applications.
Design of a Synthetic Aperture Array to Support Experiments in Active Control of Scattering
1990-06-01
becomes necessary to validate the theory and test the control system algorithms. While experiments in open water would be most like the anticipated...mathematical development of the beamforming algorithms used, as well as an estimate of their applicability to the specifics of beamforming in a reverberant...Chebyshev array have been proposed. The method used in ARRAY, a nested product algorithm proposed by Bresler [21], is recommended by Pozar [19] and
Algorithm development for Maxwell's equations for computational electromagnetism
NASA Technical Reports Server (NTRS)
Goorjian, Peter M.
1990-01-01
A new algorithm has been developed for solving Maxwell's equations for the electromagnetic field. It solves the equations in the time domain with central, finite differences. The time advancement is performed implicitly, using an alternating direction implicit procedure. The space discretization is performed with finite volumes, using curvilinear coordinates with electromagnetic components along those directions. Sample calculations are presented of scattering from a metal pin, a square and a circle to demonstrate the capabilities of the new algorithm.
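Goorjian's scheme is implicit (alternating direction implicit) on curvilinear finite volumes; as a much smaller illustration of the staggered-update structure that time-domain Maxwell solvers share, here is a minimal explicit 1-D Yee sketch (grid sizes and source are arbitrary choices, not from the paper):

```python
import numpy as np

nx, nt = 400, 250
S = 1.0                          # Courant number; S = 1 is exact in 1-D
ez = np.zeros(nx)                # E nodes
hy = np.zeros(nx - 1)            # H nodes, staggered half a cell
src = 50

for n in range(nt):
    hy += S * (ez[1:] - ez[:-1])                   # update H from the curl of E
    ez[1:-1] += S * (hy[1:] - hy[:-1])             # update E from the curl of H
    ez[src] += np.exp(-(((n - 30) / 8.0) ** 2))    # soft Gaussian source

# With S = 1 the right-going pulse centre sits near cell src + (nt - 30) = 270
peak = int(np.argmax(np.abs(ez[200:]))) + 200
```

The fixed boundary values act as perfect electric conductors; the implicit ADI time advancement in the paper relaxes the stability limit that constrains this explicit update.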
Microwave imaging by three-dimensional Born linearization of electromagnetic scattering
NASA Astrophysics Data System (ADS)
Caorsi, S.; Gragnani, G. L.; Pastorino, M.
1990-11-01
An approach to microwave imaging is proposed that uses a three-dimensional vectorial form of the Born approximation to linearize the equation of electromagnetic scattering. The inverse scattering problem is numerically solved for three-dimensional geometries by means of the moment method. A pseudoinversion algorithm is adopted to overcome ill conditioning. Results show that the method is well suited for qualitative imaging purposes, while its capability for exactly reconstructing the complex dielectric permittivity is affected by the limitations inherent in the Born approximation and in ill conditioning.
Small-angle scattering from 3D Sierpinski tetrahedron generated using chaos game
NASA Astrophysics Data System (ADS)
Slyamov, Azat
2017-12-01
We approximate a three-dimensional version of the deterministic Sierpinski gasket (SG), also known as the Sierpinski tetrahedron (ST), using the chaos game representation (CGR). Structural properties of the fractal generated by both the deterministic and CGR algorithms are determined using the small-angle scattering (SAS) technique. We calculate the corresponding monodisperse structure factor of the ST using an optimized Debye formula. We show that scattering from the CGR of the ST recovers the basic fractal properties, such as the fractal dimension, iteration number, scaling factor, overall size of the system, and the number of units composing the fractal.
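Both ingredients, the chaos game and the Debye formula, are compact enough to sketch. The box-counting check below should land near the ST's exact dimension log 4 / log 2 = 2; all sizes are illustrative, and this is a brute-force Debye sum rather than the authors' optimized implementation:

```python
import numpy as np

rng = np.random.default_rng(4)

# Vertices of a regular tetrahedron inside the unit cube
verts = np.array([[0, 0, 0], [1, 0, 0], [0.5, np.sqrt(3) / 2, 0],
                  [0.5, np.sqrt(3) / 6, np.sqrt(2.0 / 3.0)]])

# Chaos game: jump halfway toward a randomly chosen vertex
N = 60000
pts = np.empty((N, 3))
p = np.array([0.25, 0.25, 0.25])
for i in range(N):
    p = 0.5 * (p + verts[rng.integers(4)])
    pts[i] = p

def n_boxes(points, m):
    """Occupied boxes on an m x m x m grid over the unit cube."""
    idx = np.clip(np.floor(points * m).astype(int), 0, m - 1)
    return len({tuple(row) for row in idx})

# Box-counting estimate of the fractal dimension (exact value: 2)
D = np.log(n_boxes(pts, 16) / n_boxes(pts, 8)) / np.log(2)

# Debye formula on a subsample: S(q) = < sin(q r_ij) / (q r_ij) >
sub = pts[rng.choice(N, 800, replace=False)]
r = np.linalg.norm(sub[:, None] - sub[None, :], axis=-1)
S_q = np.mean(np.sinc(20.0 * r / np.pi))   # np.sinc(x) = sin(pi x)/(pi x)
```

Sweeping q and plotting S(q) on log-log axes would expose the power-law regime whose slope encodes the same fractal dimension.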
A New and Fast Method for Smoothing Spectral Imaging Data
NASA Technical Reports Server (NTRS)
Gao, Bo-Cai; Liu, Ming; Davis, Curtiss O.
1998-01-01
The Airborne Visible Infrared Imaging Spectrometer (AVIRIS) acquires spectral imaging data covering the 0.4 - 2.5 micron wavelength range in 224 10-nm-wide channels from a NASA ER-2 aircraft at 20 km. More than half of the spectral region is affected by atmospheric gaseous absorption. Over the past decade, several techniques have been used to remove atmospheric effects from AVIRIS data for the derivation of surface reflectance spectra. An operational atmosphere removal algorithm (ATREM), which is based on theoretical modeling of atmospheric absorption and scattering effects, has been developed and updated for deriving surface reflectance spectra from AVIRIS data. Due to small errors in assumed wavelengths and errors in line parameters compiled on the HITRAN database, small spikes (particularly near the centers of the 0.94- and 1.14-micron water vapor bands) are present in this spectrum. Similar small spikes are systematically present in entire ATREM output cubes. These spikes have distracted geologists who are interested in studying surface mineral features. A method based on the "global" fitting of spectra with low order polynomials or other functions for removing these weak spikes has recently been developed by Boardman (this volume). In this paper, we describe another technique, which fits spectra "locally" based on cubic spline smoothing, for quick post processing of ATREM apparent reflectance spectra derived from AVIRIS data. Results from our analysis of AVIRIS data acquired over Cuprite mining district in Nevada in June of 1995 are given. Comparisons between our smoothed spectra and those derived with the empirical line method are presented.
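The paper's method is cubic spline smoothing; in the same local spirit, a sliding-window cubic fit (Savitzky-Golay style) that knocks down narrow residual spikes while preserving the smooth spectrum can be sketched as follows, on an invented reflectance spectrum:

```python
import numpy as np

rng = np.random.default_rng(6)
x = np.linspace(0.4, 2.5, 210)                   # wavelength grid (micron)
truth = 0.3 + 0.1 * np.sin(3 * x)                # smooth "surface reflectance"
spectrum = truth.copy()
spikes = rng.choice(len(x), 12, replace=False)   # residual atmospheric spikes
spectrum[spikes] += 0.04 * rng.standard_normal(12)

def local_cubic_smooth(y, half=7):
    """Fit a cubic in a sliding window centred on each point."""
    out = np.empty_like(y)
    for i in range(len(y)):
        lo, hi = max(0, i - half), min(len(y), i + half + 1)
        tt = np.arange(lo, hi) - i
        out[i] = np.polyval(np.polyfit(tt, y[lo:hi], 3), 0.0)
    return out

smoothed = local_cubic_smooth(spectrum)
```

A local fit like this suppresses the narrow HITRAN-related spikes without flattening genuine broad mineral absorption features, which is the point of smoothing "locally" rather than globally.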
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ulmer, W.
2015-06-15
Purpose: The knowledge of the total nuclear cross-section Qtot(E) of therapeutic protons provides important information in advanced radiotherapy with protons, such as the decrease of fluence of primary protons, the release of secondary particles (neutrons, protons, deuterons, etc.), and the production of nuclear fragments (heavy recoils), which usually undergo β+/− decay by emission of γ-quanta. Determination of Qtot(E) is therefore an important tool for sophisticated calculation algorithms of dose distributions. This cross-section can be determined by a linear combination of shifted Gaussian kernels and an error function. The resonances resulting from deconvolutions in the energy space can be associated with typical nuclear reactions. Methods: The described method for the determination of Qtot(E) results from an extension of the Breit-Wigner formula and a rather extended version of the nuclear shell theory to include nuclear correlation effects, clusters and highly excited/virtually excited nuclear states. The elastic energy transfer of protons to nucleons (the quantum numbers of the target nucleus remain constant) can be removed by the mentioned deconvolution. Results: The deconvolution of the error-function term of the type c_erf·erf((E − E_Th)/σ_erf) is the main contribution to obtaining the various nuclear reactions as resonances, since the elastic part of the energy transfer is removed. The nuclear products of various elements of therapeutic interest, such as oxygen and calcium, are classified and calculated. Conclusions: The release of neutrons is completely underrated, in particular for low-energy protons. The transport of secondary particles, e.g. cluster formation by deuterium, tritium and α-particles, makes an essential contribution to the secondary particles, and the heavy recoils, which create γ-quanta by decay reactions, lead to broadening of the scatter profiles.
These contributions cannot be accounted for by one single Gaussian kernel for the description of lateral scatter.
On the Compton scattering redistribution function in plasma
NASA Astrophysics Data System (ADS)
Madej, J.; Różańska, A.; Majczyna, A.; Należyty, M.
2017-08-01
Compton scattering is the dominant opacity source in hot neutron stars, accretion discs around black holes and hot coronae. We collected here a set of numerical expressions of the Compton scattering redistribution functions (RFs) for unpolarized radiation, which are more exact than the widely used Kompaneets equation. The principal aim of this paper is the presentation of the RF by Guilbert, which is corrected for the computational errors in the original paper. This corrected RF was used in the series of papers on model atmosphere computations of hot neutron stars. We have also organized four existing algorithms for the RF computations into a unified form ready to use in radiative transfer and model atmosphere codes. The exact method by Nagirner & Poutanen was numerically compared to all other algorithms in a very wide spectral range from hard X-rays to radio waves. Sample computations of the Compton scattering RFs in thermal plasma were done for temperatures corresponding to the atmospheres of bursting neutron stars and hot intergalactic medium. Our formulae are also useful to study the Compton scattering of unpolarized microwave background radiation in hot intracluster gas and the Sunyaev-Zeldovich effect. We conclude that the formulae by Guilbert and the exact quantum mechanical formulae yield practically the same RFs for gas temperatures relevant to the atmospheres of X-ray bursting neutron stars, T ≤ 10⁸ K.
High-resolution seismic data regularization and wavefield separation
NASA Astrophysics Data System (ADS)
Cao, Aimin; Stump, Brian; DeShon, Heather
2018-04-01
We present a new algorithm, non-equispaced fast antileakage Fourier transform (NFALFT), for irregularly sampled seismic data regularization. Synthetic tests from 1-D to 5-D show that the algorithm may efficiently remove leaked energy in the frequency wavenumber domain, and its corresponding regularization process is accurate and fast. Taking advantage of the NFALFT algorithm, we suggest a new method (wavefield separation) for the detection of the Earth's inner core shear wave with irregularly distributed seismic arrays or networks. All interfering seismic phases that propagate along the minor arc are removed from the time window around the PKJKP arrival. The NFALFT algorithm is developed for seismic data, but may also be used for other irregularly sampled temporal or spatial data processing.
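NFALFT itself rests on non-equispaced fast Fourier transforms; the underlying antileakage idea, namely estimate the strongest Fourier component on the irregular grid, subtract it together with its leakage, and repeat, can be sketched with a dense least-squares search (slow but simple; all signal parameters are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(8)
t = np.sort(rng.uniform(0.0, 1.0, 150))          # irregular sample times (1 s window)
y = 1.0 * np.cos(2 * np.pi * 5 * t) + 0.6 * np.sin(2 * np.pi * 12 * t)

def ls_component(f, times, data):
    """Least-squares cos/sin fit of one frequency on the irregular grid."""
    B = np.column_stack([np.cos(2 * np.pi * f * times),
                         np.sin(2 * np.pi * f * times)])
    c, *_ = np.linalg.lstsq(B, data, rcond=None)
    return B @ c

freqs = np.arange(1, 30)
resid = y.copy()
picked = []
for _ in range(2):                               # extract the two strongest lines
    powers = [np.sum(ls_component(f, t, resid) ** 2) for f in freqs]
    f_best = int(freqs[int(np.argmax(powers))])
    picked.append(f_best)
    resid = resid - ls_component(f_best, t, resid)   # removes its leakage too
```

Because each component is re-estimated on the residual, the leakage that irregular sampling smears across the wavenumber axis is subtracted along with the component itself, which is the property the regularization relies on.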
Intraocular light scatter, reflections, fluorescence and absorption: what we see in the slit lamp.
van den Berg, Thomas J T P
2018-01-01
Much knowledge has been collected over the past 20 years about light scattering in the eye, in particular in the eye lens, and its visual effect, called straylight. It is the purpose of this review to discuss how these insights can be applied to understanding the slit lamp image. The slit lamp image mainly results from back scattering, whereas the effects on vision result mainly from forward scatter. Forward scatter originates from particles of about wavelength size distributed throughout the lens. Most of the slit lamp image originates from small particle scatter (Rayleigh scatter). For a population of middle-aged lenses it will be shown that both these scatter components remove around 10% of the light from the direct beam. For slit lamp observation close to the reflection angles, zones of discontinuity (Wasserspalten) at anterior and posterior parts of the lens show up as rough surface reflections. All these light scatter effects increase with age, but the correlations with age, and also between the different components, are weak. For retro-illumination imaging it will be argued that the density or opacity seen in areas of cortical or posterior subcapsular cataract shows up because of light scattering, not because of light loss. NOTES: (1) Light scatter must not be confused with aberrations. Light penetrating the eye is divided into two parts: a relatively small part is scattered, and removed from the direct beam. Most of the light is not scattered, but continues as the direct beam. This non-scattered part is the basis for functional imaging, but its quality is under the control of aberrations. Aberrations deflect light mainly over small angles (<1°), whereas light scatter is important because of the straylight effects over large angles (>1°), causing problems like glare and hazy vision. (2) The slit lamp image in older lenses and nuclear cataract is strongly influenced by absorption. However, this effect is greatly exaggerated by the light path lengths concerned.
This precludes proper judgement of the functional importance of absorption, and hinders the appreciation of the Rayleigh nature of what is seen in the slit lamp image. © 2017 The Authors. Ophthalmic & Physiological Optics © 2017 The College of Optometrists.
An assessment of 'shuffle algorithm' collision mechanics for particle simulations
NASA Technical Reports Server (NTRS)
Feiereisen, William J.; Boyd, Iain D.
1991-01-01
Among the algorithms for collision mechanics used at present, the 'shuffle algorithm' of Baganoff (McDonald and Baganoff, 1988; Baganoff and McDonald, 1990) not only allows efficient vectorization, but also discretizes the possible outcomes of a collision. To assess the applicability of the shuffle algorithm, a simulation was performed of flows in monoatomic gases, and the calculated characteristics of shock waves were compared with those obtained using a commonly employed isotropic scattering law. It is shown that, in general, the shuffle algorithm adequately represents the collision mechanics in cases where the goal of the calculations is mean profiles of density and temperature.
Worldwide Ocean Optics Database (WOOD)
2002-09-30
attenuation estimated from diffuse attenuation and backscatter data). Error estimates will also be provided for the computed results. Extensive algorithm...properties, including diffuse attenuation, beam attenuation, and scattering. Data from ONR-funded bio-optical cruises will be given priority for loading
Worldwide Ocean Optics Database (WOOD)
2001-09-30
user can obtain values computed from empirical algorithms (e.g., beam attenuation estimated from diffuse attenuation and backscatter data). Error estimates will also be provided for the...properties, including diffuse attenuation, beam attenuation, and scattering. The database shall be easy to use, Internet accessible, and frequently updated
Variability of ICA decomposition may impact EEG signals when used to remove eyeblink artifacts
PONTIFEX, MATTHEW B.; GWIZDALA, KATHRYN L.; PARKS, ANDREW C.; BILLINGER, MARTIN; BRUNNER, CLEMENS
2017-01-01
Despite the growing use of independent component analysis (ICA) algorithms for isolating and removing eyeblink-related activity from EEG data, we have limited understanding of how variability associated with ICA uncertainty may be influencing the reconstructed EEG signal after removing the eyeblink artifact components. To characterize the magnitude of this ICA uncertainty and to understand the extent to which it may influence findings within ERP and EEG investigations, ICA decompositions of EEG data from 32 college-aged young adults were repeated 30 times for three popular ICA algorithms. Following each decomposition, eyeblink components were identified and removed. The remaining components were back-projected, and the resulting clean EEG data were further used to analyze ERPs. Findings revealed that ICA uncertainty results in variation in P3 amplitude as well as variation across all EEG sampling points, but differs across ICA algorithms as a function of the spatial location of the EEG channel. This investigation highlights the potential of ICA uncertainty to introduce additional sources of variance when the data are back-projected without artifact components. Careful selection of ICA algorithms and parameters can reduce the extent to which ICA uncertainty may introduce an additional source of variance within ERP/EEG studies. PMID:28026876
Evaluation of ultrasonic array imaging algorithms for inspection of a coarse grained material
NASA Astrophysics Data System (ADS)
Van Pamel, A.; Lowe, M. J. S.; Brett, C. R.
2014-02-01
Improving the ultrasound inspection capability for coarse grain metals remains of longstanding interest to industry and the NDE research community and is expected to become increasingly important for next generation power plants. A test sample of coarse grained Inconel 625, which is representative of future power plant components, has been manufactured to test the detectability of different inspection techniques. Conventional ultrasonic A, B, and C-scans showed the sample to be extraordinarily difficult to inspect due to its scattering behaviour. However, in recent years, array probes and Full Matrix Capture (FMC) imaging algorithms, which extract the maximum amount of information possible, have unlocked exciting possibilities for improvements. This article proposes a robust methodology to evaluate the detection performance of imaging algorithms, applying it to three FMC imaging algorithms: the Total Focusing Method (TFM), Phase Coherent Imaging (PCI), and Decomposition of the Time Reversal Operator with Multiple Scattering (DORT MSF). The methodology considers the statistics of detection, presenting the detection performance as Probability of Detection (POD) and Probability of False Alarm (PFA). The data are captured in pulse-echo mode using 64-element array probes at centre frequencies of 1 MHz and 5 MHz. All three algorithms are shown to perform very similarly when comparing their flaw detection capabilities on this particular case.
Yaguchi, Shigeo; Nishihara, Hitoshi; Kambhiranond, Waraporn; Stanley, Daniel; Apple, David J
2008-01-01
To investigate the cause of light scatter measured on the surface of AcrySof intraocular lenses (Alcon Laboratories, Inc., Fort Worth, TX) retrieved from pseudophakic postmortem human eyes. Ten intraocular lenses (Alcon AcrySof Model MA60BM) were retrieved postmortem and analyzed for light scatter before and after removal of surface-bound biofilms. Six of the 10 lenses exhibited light scatter that was clearly above baseline levels. In these 6 lenses, both peak and average pixel density were reduced by approximately 80% after surface cleaning. The current study demonstrates that a coating deposited in vivo on the lens surface is responsible for the light scatter observed when incident light is applied.
An outlet breaching algorithm for the treatment of closed depressions in a raster DEM
NASA Astrophysics Data System (ADS)
Martz, Lawrence W.; Garbrecht, Jurgen
1999-08-01
Automated drainage analysis of raster DEMs typically begins with the simulated filling of all closed depressions and the imposition of a drainage pattern on the resulting flat areas. The elimination of closed depressions by filling implicitly assumes that all depressions are caused by elevation underestimation. This assumption is difficult to support, as depressions can be produced by overestimation as well as by underestimation of DEM values. This paper presents a new algorithm that is applied in conjunction with conventional depression filling to provide a more realistic treatment of those depressions that are likely due to overestimation errors. The algorithm lowers the elevation of selected cells on the edge of closed depressions to simulate breaching of the depression outlets. Application of this breaching algorithm prior to depression filling can substantially reduce the number and size of depressions that need to be filled, especially in low-relief terrain. Removing or reducing the size of a depression by breaching implicitly assumes that the depression is due to a spurious flow blockage caused by elevation overestimation. Removing a depression by filling, on the other hand, implicitly assumes that the depression is a direct artifact of elevation underestimation. Although the breaching algorithm cannot distinguish between overestimation and underestimation errors in a DEM, a constraining parameter for breaching length can be used to restrict breaching to closed depressions caused by narrow blockages along well-defined drainage courses. These are considered the depressions most likely to have arisen from overestimation errors. Applying the constrained breaching algorithm prior to a conventional depression-filling algorithm allows both positive and negative elevation adjustments to be used to remove depressions. The breaching algorithm was incorporated into the DEM pre-processing operations of the TOPAZ software system.
The effect of the algorithm is illustrated by the application of TOPAZ to a DEM of a low-relief landscape. The use of the breaching algorithm during DEM pre-processing substantially reduced the number of cells that needed to be subsequently raised in elevation to remove depressions. The number and kind of depression cells that were eliminated by the breaching algorithm suggested that the algorithm effectively targeted those topographic situations for which it was intended. A detailed inspection of a portion of the DEM that was processed using the breaching algorithm in conjunction with depression-filling also suggested that the effects of the algorithm were as intended. The breaching algorithm provides an empirically satisfactory and robust approach to treating closed depressions in a raster DEM. It recognises that depressions in certain topographic settings are as likely to be due to elevation overestimation as to elevation underestimation errors. The algorithm allows a more realistic treatment of depressions in these situations than conventional methods that rely solely on depression-filling.
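The outlet-breaching idea can be sketched on a 1-D elevation transect, a simplified stand-in for the raster case. The function name and the carving rule are illustrative, not the TOPAZ implementation; the length constraint mirrors the paper's breaching-length parameter.

```python
import numpy as np

def breach_blockage(profile, max_breach_len):
    """Lower the cells of a narrow blockage so a closed depression can drain,
    instead of raising the depression by filling. Operates on a 1-D elevation
    transect: locate the first closed depression (local minimum), scan
    downstream for ground lower than the sink and, if the intervening high
    ground is short enough, carve a descending channel through it."""
    z = np.asarray(profile, dtype=float).copy()
    # first local minimum on the transect = bottom of the closed depression
    sink = next(i for i in range(1, len(z) - 1) if z[i - 1] > z[i] < z[i + 1])
    for j in range(sink + 1, len(z)):
        if z[j] < z[sink]:                      # lower ground beyond the blockage
            if j - sink - 1 <= max_breach_len:  # blockage narrow enough to breach
                z[sink + 1:j] = np.linspace(z[sink], z[j], j - sink + 1)[1:-1]
            break
    return z

profile = [5.0, 2.0, 1.0, 4.0, 4.5, 0.5, 3.0]   # sink at index 2, blockage at 3-4
breached = breach_blockage(profile, max_breach_len=2)
# Cells 3 and 4 are lowered into a descending channel toward cell 5;
# with max_breach_len=1 the blockage is too wide and the profile is unchanged,
# leaving the depression to be handled by conventional filling.
```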
Wave propagation, scattering and emission in complex media
NASA Astrophysics Data System (ADS)
Jin, Ya-Qiu
I. Polarimetric scattering and SAR imagery. EM wave propagation and scattering in polarimetric SAR interferometry / S. R. Cloude. Terrain topographic inversion from single-pass polarimetric SAR image data by using polarimetric Stokes parameters and morphological algorithm / Y. Q. Jin, L. Luo. Road detection in forested area using polarimetric SAR / G. W. Dong ... [et al.]. Research on some problems about SAR radiometric resolution / G. Dong ... [et al.]. A fast image matching algorithm for remote sensing applications / Z. Q. Hou ... [et al.]. A new algorithm of noised remote sensing image fusion based on steerable filters / X. Kang ... [et al.]. Adaptive noise reduction of InSAR data based on anisotropic diffusion models and their applications to phase unwrapping / C. Wang, X. Gao, H. Zhang -- II. Scattering from randomly rough surfaces. Modeling tools for backscattering from rough surfaces / A. K. Fung, K. S. Chen. Pseudo-nondiffracting beams from rough surface scattering / E. R. Méndez, T. A. Leskova, A. A. Maradudin. Surface roughness clutter effects in GPR modeling and detection / C. Rappaport. Scattering from rough surfaces with small slopes / M. Saillard, G. Soriano. Polarization and spectral characteristics of radar signals reflected by sea-surface / V. A. Butko, V. A. Khlusov, L. I. Sharygina. Simulation of microwave scattering from wind-driven ocean surfaces / M. Y. Xia ... [et al.]. HF surface wave radar tests at the Eastern China Sea / X. B. Wu ... [et al.] -- III. Electromagnetics of complex materials. Wave propagation in plane-parallel metamaterial and constitutive relations / A. Ishimaru ... [et al.]. Two dimensional periodic approach for the study of left-handed metamaterials / T. M. Grzegorczyk ... [et al.]. Numerical analysis of the effective constitutive parameters of a random medium containing small chiral spheres / Y. Nanbu, T. Matsuoka, M. Tateiba. Wave propagation in inhomogeneous media: from the Helmholtz to the Ginzburg-Landau equation / M. 
Gitterman. Transformation of the spectrum of scattered radiation in randomly inhomogeneous absorptive plasma layer / G. V. Jandieri, G. D. Aburjunia, V. G. Jandieri. Numerical analysis of microwave heating on saponification reaction / K. Huang, K. Jia -- IV. Scattering from complex targets. Analysis of electromagnetic scattering from layered crossed-gratings of circular cylinders using lattice sums technique / K. Yasumoto, H. T. Jia. Scattering by a body in a random medium / M. Tateiba, Z. Q. Meng, H. El-Ocla. A rigorous analysis of electromagnetic scattering from multilayered crossed-arrays of metallic cylinders / H. T. Jia, K. Yasumoto. Vector models of non-stable and spatially-distributed radar objects / A. Surkov ... [et al.]. Simulation of algorithm of orthogonal signals forming and processing used to estimate back scattering matrix of non-stable radar objects / D. Nosov ... [et al.]. New features of scattering from a dielectric film on a reflecting metal substrate / Z. H. Gu, I. M. Fuks, M. Ciftan. A higher order FDTD method for EM wave propagation in collision plasmas / S. B. Liu, J. J. Mo, N. C. Yuan -- V. Radiative transfer and remote sensing. Simulating microwave emission from Antarctica ice sheet with a coherent model / M. Tedesco, P. Pampaloni. Scattering and emission from inhomogeneous vegetation canopy and alien target by using three-dimensional Vector Radiative Transfer (3D-VRT) equation / Y. Q. Jin, Z. C. Liang. Analysis of land types using high-resolution satellite images and fractal approach / H. G. Zhang ... [et al.]. Data fusion of RADARSAT SAR and DMSP SSM/I for monitoring sea ice of China's Bohai Sea / Y. Q. Jin. Retrieving atmospheric temperature profiles from simulated microwave radiometer data with artificial neural networks / Z. G. Yao, H. B. Chen -- VI. Wave propagation and wireless communication. Wireless propagation in urban environments: modeling and experimental verification / D. Erricolo ... [et al.]. 
An overview of physics-based wave propagation in forested environment / K. Sarabandi, I. Koh. Angle-of-arrival fluctuations due to meteorological conditions in the diffraction zone of C-band radio waves, propagated over the ground surface / T. A. Tyufilina, A. A. Meschelyakov, M. V. Krutikov. Simulating radio channel statistics using ray based prediction codes / H. L. Bertoni. Measurement and simulation of ultra wideband antenna elements / W. Sörgel, W. Wiesbeck. The experimental investigation of a ground-placed radio complex synchronization system / V. P. Denisov ... [et al.] -- VII. Computational electromagnetics. Analysis of 3-D electromagnetic wave scattering with the Krylov subspace FFT iterative methods / R. S. Chen ... [et al.]. Sparse approximate inverse preconditioned iterative algorithm with block toeplitz matrix for fast analysis of microstrip circuits / L. Mo, R. S. Chen, E. K. N. Yung. An Efficient modified interpolation technique for the translation operators in MLFMA / J. Hu, Z. P. Nie, G. X. Zou. Efficient solution of 3-D vector electromagnetic scattering by CG-MLFMA with partly approximate iteration / J. Hu, Z. P. Nie. The effective constitution at interface of different media / L. G. Zheng, W. X. Zhang. Novel basis functions for quadratic hexahedral edge element / P. Liu ... [et al.]. A higher order FDTD method for EM wave propagation in collision plasmas / S. B. Liu, J. J. Mo, N. C. Yuan. Attenuation of electric field eradiated by underground source / J. P. Dong, Y. G. Gao.
Three-Dimensional Model of the Scatterer Distribution in Cirrhotic Liver
NASA Astrophysics Data System (ADS)
Yamaguchi, Tadashi; Nakamura, Keigo; Hachiya, Hiroyuki
2003-05-01
Ultrasonic B-mode images are affected by changes in scatterer distribution. It is hard to estimate the relationship between the ultrasonic image and the tissue structure quantitatively because we cannot observe the continuous stages of liver cirrhosis tissue clinically, particularly the beginning stage. In this paper, we propose a three-dimensional modeling method of scatterer distribution for normal and cirrhotic livers to confirm the influence of the change in the form of scatterer distribution on echo information. The algorithm of the method includes parameters which determine the expansion of nodules and fibers. Using the B-mode images which are obtained from these scatterer distributions, we analyze the relationship between the changes in the form of biological tissue and the changes in the B-mode images during progressive liver cirrhosis.
Simulations of Aperture Synthesis Imaging Radar for the EISCAT_3D Project
NASA Astrophysics Data System (ADS)
La Hoz, C.; Belyey, V.
2012-12-01
EISCAT_3D is a project to build the next generation of incoherent scatter radars endowed with multiple 3-dimensional capabilities that will replace the current EISCAT radars in Northern Scandinavia. Aperture Synthesis Imaging Radar (ASIR) is one of the technologies adopted by the EISCAT_3D project to endow it with imaging capabilities in 3-dimensions that includes sub-beam resolution. Complemented by pulse compression, it will provide 3-dimensional images of certain types of incoherent scatter radar targets resolved to about 100 metres at 100 km range, depending on the signal-to-noise ratio. This ability will open new research opportunities to map small structures associated with non-homogeneous, unstable processes such as aurora, summer and winter polar radar echoes (PMSE and PMWE), Natural Enhanced Ion Acoustic Lines (NEIALs), structures excited by HF ionospheric heating, meteors, space debris, and others. To demonstrate the feasibility of the antenna configurations and the imaging inversion algorithms a simulation of synthetic incoherent scattering data has been performed. The simulation algorithm incorporates the ability to control the background plasma parameters with non-homogeneous, non-stationary components over an extended 3-dimensional space. Control over the positions of a number of separated receiving antennas, their signal-to-noise-ratios and arriving phases allows realistic simulation of a multi-baseline interferometric imaging radar system. The resulting simulated data is fed into various inversion algorithms. This simulation package is a powerful tool to evaluate various antenna configurations and inversion algorithms. Results applied to realistic design alternatives of EISCAT_3D will be described.
Exact Rayleigh scattering calculations for use with the Nimbus-7 Coastal Zone Color Scanner.
Gordon, H R; Brown, J W; Evans, R H
1988-03-01
For improved analysis of Coastal Zone Color Scanner (CZCS) imagery, the radiance reflected from a plane-parallel atmosphere and flat sea surface in the absence of aerosols (Rayleigh radiance) has been computed with an exact multiple scattering code, i.e., including polarization. The results indicate that the single scattering approximation normally used to compute this radiance can cause errors of up to 5% for small and moderate solar zenith angles. At large solar zenith angles, such as encountered in the analysis of high-latitude imagery, the errors can become much larger, e.g., >10% in the blue band. The single scattering error also varies along individual scan lines. Comparison with multiple scattering computations using scalar transfer theory, i.e., ignoring polarization, shows that scalar theory can yield errors of approximately the same magnitude as single scattering when compared with exact computations at small to moderate values of the solar zenith angle. The exact computations can be easily incorporated into CZCS processing algorithms, and, for application to future instruments with higher radiometric sensitivity, a scheme is developed with which the effect of variations in the surface pressure could be easily and accurately included in the exact computation of the Rayleigh radiance. Direct application of these computations to CZCS imagery indicates that accurate atmospheric corrections can be made with solar zenith angles at least as large as 65 degrees and probably up to at least 70 degrees with a more sensitive instrument. This suggests that the new Rayleigh radiance algorithm should produce more consistent pigment retrievals, particularly at high latitudes.
NASA Astrophysics Data System (ADS)
Tian, Yunfeng; Shen, Zheng-Kang
2016-02-01
We develop a spatial filtering method to remove random noise and extract the spatially correlated transients (i.e., the common-mode component (CMC)) that deviate from zero mean over the span of detrended position time series of a continuous Global Positioning System (CGPS) network. The technique utilizes a weighting scheme that incorporates two factors: distances between neighboring sites and the correlations of their long-term residual position time series. We use a grid search algorithm to find the optimal thresholds for deriving the CMC that minimizes the root-mean-square (RMS) of the filtered residual position time series. Compared to the principal component analysis technique, our method achieves better (>13% on average) reduction of residual position scatter for the CGPS stations in western North America, eliminating regional transients of all spatial scales. It also has advantages in data manipulation: it requires less intervention and is applicable to a dense network of any spatial extent. Our method can also be used to detect the CMC irrespective of its origins (i.e., tectonic or nontectonic), if such signals are of particular interest for further study. By varying the filtering distance range, the long-range CMC related to atmospheric disturbance can be filtered out, uncovering CMC associated with transient tectonic deformation. A correlation-based clustering algorithm is adopted to identify station clusters that share common regional transient characteristics.
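The weighted-stack step of such a filter can be sketched as follows. The uniform weights below are a placeholder: the paper's scheme builds them from inter-station distance and long-term residual correlation, with thresholds tuned by grid search.

```python
import numpy as np

def common_mode(residuals, weights):
    """Weighted stack of neighbouring-station residual time series.

    residuals: (n_sta, n_epochs) detrended position residuals
    weights:   (n_sta, n_sta) combined distance/correlation weights,
               where weights[i, j] is the weight of station j when
               estimating the CMC at station i (zero diagonal, so a
               station does not filter itself).
    Returns the common-mode component (CMC) at every station and epoch.
    """
    wsum = weights.sum(axis=1, keepdims=True)
    return (weights @ residuals) / wsum

rng = np.random.default_rng(1)
n_sta, n_epochs = 5, 200
signal = np.sin(np.linspace(0, 6, n_epochs))           # shared regional transient
noise = 0.3 * rng.normal(size=(n_sta, n_epochs))       # station-specific noise
resid = signal + noise

# Placeholder weights: uniform over all neighbours.
w = np.ones((n_sta, n_sta)) - np.eye(n_sta)
cmc = common_mode(resid, w)
filtered = resid - cmc
# The RMS of the filtered series drops well below that of the raw residuals,
# since the shared transient has been absorbed into the CMC.
```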
Ultraviolet complex refractive index of Martian dust Laboratory measurements of terrestrial analogs
NASA Technical Reports Server (NTRS)
Egan, W. G.; Hilgeman, T.; Pang, K.
1975-01-01
The optical complex index of refraction of four candidate Martian surface materials has been determined between 0.185 and 0.4 microns using a modified Kubelka-Munk scattering theory. The candidate materials were limonite, andesite, montmorillonite, and basalt. The effect of scattering has been removed from the results. Also presented are diffuse reflection and transmission data on these samples.
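For context, the classical (unmodified) Kubelka-Munk remission function relates the diffuse reflectance of an optically thick sample to its absorption-to-scattering ratio, which is the starting point for separating absorption from scattering; a minimal sketch:

```python
import numpy as np

def kubelka_munk(r_inf):
    """Classical Kubelka-Munk remission function F(R) = (1 - R)^2 / (2R):
    converts the diffuse reflectance R of an optically thick sample into the
    absorption-to-scattering ratio K/S. (The paper uses a modified K-M
    theory; this is the textbook relation.)"""
    r = np.asarray(r_inf, dtype=float)
    return (1.0 - r) ** 2 / (2.0 * r)

# Example: a diffuse reflectance of 0.5 gives K/S = 0.25; darker samples
# (lower reflectance) give larger K/S, i.e. relatively more absorption.
ks = kubelka_munk(0.5)
```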
Time reversal and phase coherent music techniques for super-resolution ultrasound imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Lianjie; Labyed, Yassin
Systems and methods for super-resolution ultrasound imaging using a windowed and generalized TR-MUSIC algorithm that divides the imaging region into overlapping sub-regions and applies the TR-MUSIC algorithm to the windowed backscattered ultrasound signals corresponding to each sub-region. The algorithm is also structured to account for the ultrasound attenuation in the medium and the finite-size effects of ultrasound transducer elements. A modified TR-MUSIC imaging algorithm is used to account for ultrasound scattering from both density and compressibility contrasts. The phase response of ultrasound transducer elements is accounted for in a PC-MUSIC system.
Evolutionary Fuzzy Block-Matching-Based Camera Raw Image Denoising.
Yang, Chin-Chang; Guo, Shu-Mei; Tsai, Jason Sheng-Hong
2017-09-01
An evolutionary fuzzy block-matching-based image denoising algorithm is proposed to remove noise from a camera raw image. Recently, variance stabilization transforms have been widely used to stabilize the noise variance, so that a Gaussian denoising algorithm can be used to remove the signal-dependent noise in camera sensors. However, in the stabilized domain, existing denoising algorithms may blur too much detail. To provide a better estimate of the noise-free signal, a new block-matching approach is proposed to find similar blocks by the use of a type-2 fuzzy logic system (FLS). Then, these similar blocks are averaged with weightings determined by the FLS. Finally, an efficient differential evolution is used to further improve the performance of the proposed denoising algorithm. The experimental results show that the proposed denoising algorithm effectively improves the performance of image denoising. Furthermore, the average performance of the proposed method is better than that of two state-of-the-art image denoising algorithms in subjective and objective measures.
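As a sketch of the variance-stabilization step, the Anscombe transform is the classic choice for Poisson-like sensor noise; the abstract does not specify which transform is used, so this is illustrative.

```python
import numpy as np

def anscombe(x):
    """Anscombe variance-stabilizing transform: maps Poisson-distributed
    counts to approximately unit-variance Gaussian data, so that a Gaussian
    denoiser can be applied in the stabilized domain."""
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse (the unbiased exact inverse adds small
    correction terms omitted here)."""
    return (np.asarray(y, dtype=float) / 2.0) ** 2 - 3.0 / 8.0

rng = np.random.default_rng(0)
counts = rng.poisson(lam=20.0, size=100_000)  # signal-dependent noise, var = 20
stabilized = anscombe(counts)                 # variance is now close to 1
```

After denoising in the stabilized domain, `inverse_anscombe` maps the estimate back to the count domain.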
NASA Astrophysics Data System (ADS)
Chen, Xudong
2010-07-01
This paper proposes a version of the subspace-based optimization method to solve the inverse scattering problem with an inhomogeneous background medium where the known inhomogeneities are bounded in a finite domain. Although the background Green's function at each discrete point in the computational domain is not directly available in an inhomogeneous background scenario, the paper uses the finite element method to simultaneously obtain the Green's function at all discrete points. The essence of the subspace-based optimization method is that part of the contrast source is determined from the spectrum analysis without using any optimization, whereas the orthogonally complementary part is determined by solving a lower dimension optimization problem. This feature significantly speeds up the convergence of the algorithm and at the same time makes it robust against noise. Numerical simulations illustrate the efficacy of the proposed algorithm. The algorithm presented in this paper finds wide applications in nondestructive evaluation, such as through-wall imaging.
SEC proton prediction model: verification and analysis.
Balch, C C
1999-06-01
This paper describes a model that has been used at the NOAA Space Environment Center since the early 1970s as a guide for the prediction of solar energetic particle events. The algorithms for proton event probability, peak flux, and rise time are described. The predictions are compared with observations. The current model shows some ability to distinguish between proton event associated flares and flares that are not associated with proton events. The comparisons of predicted and observed peak flux show considerable scatter, with an rms error of almost an order of magnitude. Rise time comparisons also show scatter, with an rms error of approximately 28 h. The model algorithms are analyzed using historical data and improvements are suggested. Implementation of the algorithm modifications reduces the rms error in the log10 of the flux prediction by 21%, and the rise time rms error by 31%. Improvements are also realized in the probability prediction by deriving the conditional climatology for proton event occurrence given flare characteristics.
Filetype Identification Using Long, Summarized N-Grams
2011-03-01
compressed or encrypted data. If the algorithm used to compress or encrypt the data can be determined, then it is frequently possible to uncompress...fragments. His implementation utilized the bzip2 library to compress the file fragments. The bzip2 library is based on the Lempel-Ziv-Markov chain... algorithm that uses a dictionary compression scheme to remove repeating data patterns within a set of data. The removed patterns are listed within the
A masked least-squares smoothing procedure for artifact reduction in scanning-EMG recordings.
Corera, Íñigo; Eciolaza, Adrián; Rubio, Oliver; Malanda, Armando; Rodríguez-Falces, Javier; Navallas, Javier
2018-01-11
Scanning-EMG is an electrophysiological technique in which the electrical activity of the motor unit is recorded at multiple points along a corridor crossing the motor unit territory. Correct analysis of the scanning-EMG signal requires prior elimination of interference from nearby motor units. Although the traditional processing based on median filtering is effective in removing such interference, it distorts the physiological waveform of the scanning-EMG signal. In this study, we describe a new scanning-EMG signal processing algorithm that preserves the physiological signal waveform while effectively removing interference from other motor units. To obtain a cleaned-up version of the scanning signal, the masked least-squares smoothing (MLSS) algorithm recalculates and replaces each sample value of the signal using a least-squares smoothing in the spatial dimension, taking into account the information of only those samples that are not contaminated with activity of other motor units. The performance of the new algorithm is studied with simulated scanning-EMG signals, compared with that of the median algorithm, and tested with real scanning signals. Results show that the MLSS algorithm distorts the waveform of the scanning-EMG signal much less than the median algorithm (approximately 3.5 dB gain), being at the same time very effective at removing interference components. Graphical Abstract: The raw scanning-EMG signal (left figure) is processed by the MLSS algorithm in order to remove the artifact interference. Firstly, artifacts are detected from the raw signal, obtaining a validity mask (central figure) that determines the samples that have been contaminated by artifacts. Secondly, a least-squares smoothing procedure in the spatial dimension is applied to the raw signal using the not contaminated samples according to the validity mask. The resulting MLSS-processed scanning-EMG signal (right figure) is clean of artifact interference.
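A minimal 1-D sketch of the masked least-squares idea (illustrative, not the authors' implementation): each sample is re-estimated by a local polynomial fit that uses only samples the validity mask marks as uncontaminated.

```python
import numpy as np

def mlss(signal, valid, window=5, order=2):
    """Masked least-squares smoothing sketch (1-D stand-in for the spatial
    dimension of a scanning-EMG corridor): re-fit every sample with a local
    polynomial least-squares fit over a window, excluding samples flagged
    as contaminated by the validity mask."""
    x = np.asarray(signal, dtype=float)
    out = np.empty_like(x)
    half = window // 2
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        idx = np.arange(lo, hi)
        idx = idx[valid[lo:hi]]            # drop contaminated samples
        if idx.size <= order:              # not enough clean data: keep value
            out[i] = x[i]
            continue
        coef = np.polyfit(idx - i, x[idx], order)
        out[i] = coef[-1]                  # polynomial evaluated at the sample
    return out

# A clean quadratic with one artifact-contaminated sample:
t = np.arange(20, dtype=float)
clean = 0.5 * t ** 2
sig = clean.copy()
sig[10] += 50.0                            # interference from another motor unit
mask = np.ones(20, dtype=bool)
mask[10] = False                           # artifact detected and masked out
smoothed = mlss(sig, mask, window=7, order=2)
# The contaminated sample is replaced by the value implied by its clean
# neighbours, so the underlying waveform is recovered without distortion.
```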
Adiabatic Quantum Search in Open Systems.
Wild, Dominik S; Gopalakrishnan, Sarang; Knap, Michael; Yao, Norman Y; Lukin, Mikhail D
2016-10-07
Adiabatic quantum algorithms represent a promising approach to universal quantum computation. In isolated systems, a key limitation to such algorithms is the presence of avoided level crossings, where gaps become extremely small. In open quantum systems, the fundamental robustness of adiabatic algorithms remains unresolved. Here, we study the dynamics near an avoided level crossing associated with the adiabatic quantum search algorithm, when the system is coupled to a generic environment. At zero temperature, we find that the algorithm remains scalable provided the noise spectral density of the environment decays sufficiently fast at low frequencies. By contrast, higher order scattering processes render the algorithm inefficient at any finite temperature regardless of the spectral density, implying that no quantum speedup can be achieved. Extensions and implications for other adiabatic quantum algorithms will be discussed.
Tissue artifact removal from respiratory signals based on empirical mode decomposition.
Liu, Shaopeng; Gao, Robert X; John, Dinesh; Staudenmayer, John; Freedson, Patty
2013-05-01
On-line measurement of respiration plays an important role in monitoring human physical activities. Such measurement commonly employs sensing belts secured around the rib cage and abdomen of the subject. Affected by the movement of body tissues, respiratory signals typically have a low signal-to-noise ratio. Removing tissue artifacts therefore is critical to ensuring effective respiration analysis. This paper presents a signal decomposition technique for tissue artifact removal from respiratory signals, based on the empirical mode decomposition (EMD). An algorithm based on mutual information and power criteria was devised to automatically select appropriate intrinsic mode functions for tissue artifact removal and respiratory signal reconstruction. Performance of the EMD algorithm was evaluated through simulations and real-life experiments (N = 105). Comparison with low-pass filtering that has been conventionally applied confirmed the effectiveness of the technique in tissue artifact removal.
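The IMF-selection step can be sketched as follows. The dominant-frequency band rule below is a simple stand-in for the paper's mutual-information and power criteria, and the "IMFs" are fabricated directly for illustration rather than obtained by EMD sifting.

```python
import numpy as np

fs = 50.0                        # sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)

# Synthetic "IMFs": in practice these come from EMD sifting of the belt signal.
imfs = np.vstack([
    0.2 * np.sin(2 * np.pi * 5.0 * t),    # high-frequency tissue artifact
    1.0 * np.sin(2 * np.pi * 0.3 * t),    # respiration (~18 breaths/min)
    0.1 * np.sin(2 * np.pi * 0.02 * t),   # slow baseline drift
])

def dominant_freq(x, fs):
    """Frequency of the largest peak in the magnitude spectrum."""
    spec = np.abs(np.fft.rfft(x))
    return np.fft.rfftfreq(x.size, 1 / fs)[np.argmax(spec)]

# Selection rule (stand-in for the mutual-information and power criteria):
# keep only IMFs whose dominant frequency lies in the respiratory band.
band = (0.1, 1.0)
keep = [i for i in range(imfs.shape[0])
        if band[0] <= dominant_freq(imfs[i], fs) <= band[1]]
respiration = imfs[keep].sum(axis=0)   # reconstructed respiratory signal
```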
Luo, Junhai; Fu, Liang
2017-06-09
With the development of communication technology, the demand for location-based services is growing rapidly. This paper presents an algorithm for indoor localization based on Received Signal Strength (RSS), which is collected from Access Points (APs). The proposed localization algorithm contains an offline information acquisition phase and an online positioning phase. Firstly, the AP selection algorithm is reviewed and improved based on the stability of signals to remove useless APs; secondly, Kernel Principal Component Analysis (KPCA) is analyzed and used to remove data redundancy and maintain useful characteristics for nonlinear feature extraction; thirdly, the Affinity Propagation Clustering (APC) algorithm utilizes RSS values to classify data samples and narrow the positioning range. In the online positioning phase, the classified data are matched with the testing data to determine the position area, and the Maximum Likelihood (ML) estimate is employed for precise positioning. Finally, the proposed algorithm is implemented in a real-world environment for performance evaluation. Experimental results demonstrate that the proposed algorithm improves accuracy and reduces computational complexity.
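The online ML positioning step can be sketched with a toy fingerprint grid; the positions, fingerprint values, and noise level below are all hypothetical, and in the paper's pipeline the candidate set would first be narrowed by the APC clustering stage.

```python
import numpy as np

# Hypothetical offline fingerprint database: mean RSS (dBm) from 3 APs
# at 4 calibrated reference positions (metres).
positions = np.array([[0.0, 0.0], [0.0, 5.0], [5.0, 0.0], [5.0, 5.0]])
fingerprints = np.array([
    [-40.0, -60.0, -70.0],
    [-60.0, -40.0, -70.0],
    [-60.0, -70.0, -40.0],
    [-70.0, -60.0, -55.0],
])

def ml_position(rss, fingerprints, positions, sigma=4.0):
    """Maximum-likelihood estimate over the fingerprint grid, assuming
    i.i.d. Gaussian RSS noise with standard deviation sigma: the likelihood
    is maximal where the squared distance to the fingerprint is minimal."""
    d2 = ((fingerprints - rss) ** 2).sum(axis=1)
    loglik = -d2 / (2.0 * sigma ** 2)
    return positions[np.argmax(loglik)]

observed = np.array([-42.0, -58.0, -69.0])   # online measurement near (0, 0)
estimate = ml_position(observed, fingerprints, positions)
```

Note that sigma does not change the arg-max here; it matters once likelihoods from several clusters or epochs are combined.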
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hawke, J.; Scannell, R.; Maslov, M.
2013-10-15
This work isolated the cause of the observed discrepancy between the electron temperature (Te) measurements before and after the JET Core LIDAR Thomson Scattering (TS) diagnostic was upgraded. In the upgrade process, stray light filters positioned just before the detectors were removed from the system. Modelling showed that the shift imposed on the stray light filters' transmission functions by variations in the incidence angles of the collected photons impacted plasma measurements. To correct for this identified source of error, correction factors were developed using ray tracing models for the calibration and operational states of the diagnostic. The application of these correction factors increased the observed Te, resulting in the partial if not complete removal of the observed discrepancy in the measured Te between the JET core LIDAR TS diagnostic, High Resolution Thomson Scattering, and the Electron Cyclotron Emission diagnostics.
Konevskikh, Tatiana; Ponossov, Arkadi; Blümel, Reinhold; Lukacs, Rozalia; Kohler, Achim
2015-06-21
The appearance of fringes in the infrared spectroscopy of thin films seriously hinders the interpretation of chemical bands because fringes change the relative peak heights of chemical spectral bands. Thus, for the correct interpretation of chemical absorption bands, physical properties need to be separated from chemical characteristics. In the paper at hand we revisit the theory of the scattering of infrared radiation at thin absorbing films. Although, in general, scattering and absorption are connected by a complex refractive index, we show that for the scattering of infrared radiation at thin biological films, fringes and chemical absorbance can in good approximation be treated as additive. We further introduce a model-based pre-processing technique for separating fringes from chemical absorbance by extended multiplicative signal correction (EMSC). The technique is validated by simulated and experimental FTIR spectra. It is further shown that EMSC, as opposed to other suggested filtering methods for the removal of fringes, does not remove information related to chemical absorption.
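An EMSC-style separation can be sketched as a least-squares fit of baseline, chemical, and fringe terms, exploiting the additivity of fringes and chemical absorbance established above. The sinusoidal fringe basis with a known period is a simplifying assumption; a full EMSC model would estimate the interference pattern from the film's optical parameters.

```python
import numpy as np

def emsc_defringe(spectrum, reference, wavenumbers, period):
    """EMSC-style separation sketch: model the measured spectrum as a
    constant baseline + scaled reference (chemical) spectrum + an additive
    sinusoidal fringe of known period, fit all terms by least squares, and
    return the spectrum with the non-chemical terms subtracted."""
    w = 2 * np.pi * wavenumbers / period
    basis = np.column_stack([
        np.ones_like(wavenumbers),   # constant baseline
        reference,                   # chemical absorbance
        np.sin(w), np.cos(w),        # additive fringe component
    ])
    coef, *_ = np.linalg.lstsq(basis, spectrum, rcond=None)
    nonchemical = basis[:, [0, 2, 3]] @ coef[[0, 2, 3]]
    return spectrum - nonchemical, coef

nu = np.linspace(1000.0, 1800.0, 400)                    # wavenumbers (cm^-1)
reference = np.exp(-0.5 * ((nu - 1650.0) / 20.0) ** 2)   # amide-I-like band
fringes = 0.1 * np.sin(2 * np.pi * nu / 120.0)           # thin-film fringe
measured = 0.8 * reference + fringes + 0.05              # fringes are additive
corrected, coef = emsc_defringe(measured, reference, nu, period=120.0)
# The corrected spectrum recovers the pure chemical band with its relative
# peak height intact, and coef[1] recovers the chemical scaling (0.8).
```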
Scattering Cross Section of Sound Waves by the Modal Element Method
NASA Technical Reports Server (NTRS)
Baumeister, Kenneth J.; Kreider, Kevin L.
1994-01-01
The modal element method has been employed to determine the scattered field from a plane acoustic wave impinging on a two-dimensional body. In the modal element method, the scattering body is represented by finite elements, which are coupled to an eigenfunction expansion representing the acoustic pressure in the infinite computational domain surrounding the body. The present paper extends the previous work by developing the algorithm necessary to calculate the acoustic scattering cross section by the modal element method. The scattering cross section is the acoustical equivalent of the Radar Cross Section (RCS) in electromagnetic theory. Since the scattering cross section is evaluated at infinite distance from the body, an asymptotic approximation is used in conjunction with the standard modal element method. For validation, the scattering cross section of the rigid circular cylinder is computed for the frequency range 0.1 ≤ ka ≤ 100. Results show excellent agreement with the analytic solution.
Hesford, Andrew J.; Waag, Robert C.
2010-01-01
The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased. PMID:20835366
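The FFT-based evaluation of Green's function convolutions on a regular grid, mentioned above for neighboring interactions, can be illustrated in a few lines. The sketch below (all names hypothetical) zero-pads the source grid to obtain a linear convolution with the 3-D free-space Helmholtz Green's function, and zeroes the self-interaction term, which in practice is handled separately:

```python
import numpy as np

def greens_convolution_fft(sources, k, h):
    """Convolve a 3-D grid of complex source amplitudes (spacing h) with
    the free-space Helmholtz Green's function exp(ikr)/(4*pi*r) via FFTs.
    Zero-padding to twice the size yields a linear, not circular, sum."""
    nx, ny, nz = sources.shape
    # Signed grid offsets in FFT ordering: 0, h, ..., -2h, -h.
    coords = [np.fft.fftfreq(2 * n, d=1.0 / (2 * n)) * h for n in (nx, ny, nz)]
    X, Y, Z = np.meshgrid(*coords, indexing="ij")
    r = np.sqrt(X**2 + Y**2 + Z**2)
    with np.errstate(divide="ignore", invalid="ignore"):
        kernel = np.exp(1j * k * r) / (4.0 * np.pi * r)
    kernel[0, 0, 0] = 0.0  # self term excluded; treated separately in practice
    padded = np.zeros((2 * nx, 2 * ny, 2 * nz), dtype=complex)
    padded[:nx, :ny, :nz] = sources
    result = np.fft.ifftn(np.fft.fftn(padded) * np.fft.fftn(kernel))
    return result[:nx, :ny, :nz]
```

Because both forward transforms and the product are O(N log N), the per-box cost no longer scales quadratically with the number of scattering elements in each finest-level box.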
Removal of Stationary Sinusoidal Noise from Random Vibration Signals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Brian; Cap, Jerome S.
In random vibration environments, sinusoidal line noise may appear in the vibration signal and can affect analysis of the resulting data. We studied two methods which remove stationary sine tones from random noise: a matrix inversion algorithm and a chirp-z transform algorithm. In addition, we developed new methods to determine the frequency of the tonal noise. The results show that both of the removal methods can eliminate sine tones in prefabricated random vibration data when the sine-to-random ratio is at least 0.25. For smaller ratios down to 0.02, only the matrix inversion technique can remove the tones, but the metrics to evaluate its effectiveness also degrade. We also found that using fast Fourier transforms best identified the tonal noise, and determined that band-pass-filtering the signals prior to the process improved sine removal. When applied to actual vibration test data, the methods were not as effective at removing harmonic tones, which we believe to be a result of mixed-phase sinusoidal noise.
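A matrix-inversion style of tone removal can be sketched as a least-squares fit of a sine/cosine pair at a known tone frequency, followed by subtraction of the fitted tone. This is a simplified stand-in for the method studied in the paper, with an illustrative 60 Hz tone, not the authors' implementation:

```python
import numpy as np

def remove_sine_tone(signal, freq, fs):
    """Fit a sine/cosine pair at `freq` by least squares (the normal
    equations are the 'matrix inversion' step) and subtract the fit."""
    t = np.arange(len(signal)) / fs
    # Design matrix: one sine and one cosine column at the tone frequency.
    A = np.column_stack([np.sin(2 * np.pi * freq * t),
                         np.cos(2 * np.pi * freq * t)])
    coeffs, *_ = np.linalg.lstsq(A, signal, rcond=None)
    return signal - A @ coeffs

# Demonstration on synthetic data: broadband noise plus a 60 Hz tone.
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
rng = np.random.default_rng(0)
noise = rng.standard_normal(t.size)
tone = 0.5 * np.sin(2 * np.pi * 60.0 * t + 0.3)
cleaned = remove_sine_tone(noise + tone, 60.0, fs)
```

The residual after subtraction is close to the original random signal; the fit absorbs only the small projection of the noise onto the tone frequency.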
NASA Astrophysics Data System (ADS)
Suja Priyadharsini, S.; Edward Rajan, S.; Femilin Sheniha, S.
2016-03-01
Electroencephalogram (EEG) is the recording of electrical activities of the brain. It is contaminated by other biological signals, such as the cardiac signal (electrocardiogram), signals generated by eye movement/eye blinks (electrooculogram) and muscular artefact signals (electromyogram), called artefacts. Optimisation is an important tool for solving many real-world problems. In the proposed work, artefact removal based on the adaptive neuro-fuzzy inference system (ANFIS) is employed, with the parameters of ANFIS optimised by the Artificial Immune System (AIS) algorithm (ANFIS-AIS). Implementation results show that ANFIS-AIS is more effective in removing artefacts from the EEG signal than ANFIS alone. Furthermore, an improved AIS (IAIS) is developed by including suitable selection processes in the AIS algorithm. The performance of the proposed IAIS method is compared with AIS and with a genetic algorithm (GA). Measures such as signal-to-noise ratio, mean square error (MSE), correlation coefficient, power spectral density plot and convergence time are used for analysing the performance of the proposed method. From the results, it is found that the IAIS algorithm converges faster than AIS and performs better than both AIS and GA. Hence, IAIS-tuned ANFIS (ANFIS-IAIS) is effective in removing artefacts from EEG signals.
Solid harmonic wavelet scattering for predictions of molecule properties
NASA Astrophysics Data System (ADS)
Eickenberg, Michael; Exarchakis, Georgios; Hirn, Matthew; Mallat, Stéphane; Thiry, Louis
2018-06-01
We present a machine learning algorithm for the prediction of molecule properties inspired by ideas from density functional theory (DFT). Using Gaussian-type orbital functions, we create surrogate electronic densities of the molecule from which we compute invariant "solid harmonic scattering coefficients" that account for different types of interactions at different scales. Multilinear regressions of various physical properties of molecules are computed from these invariant coefficients. Numerical experiments show that these regressions have near state-of-the-art performance, even with relatively few training examples. Predictions over small sets of scattering coefficients can reach a DFT precision while being interpretable.
NASA Astrophysics Data System (ADS)
Braun, Frank; Schalk, Robert; Heintz, Annabell; Feike, Patrick; Firmowski, Sebastian; Beuermann, Thomas; Methner, Frank-Jürgen; Kränzlin, Bettina; Gretz, Norbert; Rädle, Matthias
2017-07-01
In this report, a quantitative nicotinamide adenine dinucleotide hydrate (NADH) fluorescence measurement algorithm in a liquid tissue phantom using a fiber-optic needle probe is presented. To determine the absolute concentrations of NADH in this phantom, the fluorescence emission spectra at 465 nm were corrected using diffuse reflectance spectroscopy between 600 nm and 940 nm. The patented autoclavable Nitinol needle probe enables the acquisition of multispectral backscattering measurements of ultraviolet, visible, near-infrared and fluorescence spectra. As a phantom, a suspension of calcium carbonate (Calcilit) and water with physiological NADH concentrations between 0 mmol l-1 and 2.0 mmol l-1 was used to mimic human tissue. The light scattering characteristics were adjusted to match the backscattering attributes of human skin by modifying the concentration of Calcilit. To correct the scattering effects caused by the matrices of the samples, an algorithm based on the backscattered remission spectrum was employed to compensate for the influence of multiscattering on the optical pathway through the dispersed phase. The monitored backscattered visible light was used to correct the fluorescence spectra and thereby to determine the true NADH concentrations at unknown Calcilit concentrations. Despite the simplicity of the presented algorithm, the root-mean-square error of prediction (RMSEP) was 0.093 mmol l-1.
Regional Distribution of Forest Height and Biomass from Multisensor Data Fusion
NASA Technical Reports Server (NTRS)
Yu, Yifan; Saatchi, Sassan; Heath, Linda S.; LaPoint, Elizabeth; Myneni, Ranga; Knyazikhin, Yuri
2010-01-01
Elevation data acquired from radar interferometry at C-band from SRTM are used in data fusion techniques to estimate regional-scale forest height and aboveground live biomass (AGLB) over the state of Maine. Two fusion techniques have been developed to perform post-processing and parameter estimation from four data sets: 1 arc sec National Elevation Data (NED), SRTM-derived elevation (30 m), Landsat Enhanced Thematic Mapper (ETM) bands (30 m) with derived vegetation index (VI), and the NLCD2001 land cover map. The first fusion algorithm corrects for missing or erroneous NED data using an iterative interpolation approach and produces distributions of scattering phase centers from SRTM-NED in three dominant forest types: evergreen conifer, deciduous, and mixed stands. The second fusion technique integrates the USDA Forest Service Forest Inventory and Analysis (FIA) ground-based plot data to develop an algorithm to transform the scattering phase centers into mean forest height and aboveground biomass. Height estimates over evergreen (R2 = 0.86, P < 0.001, RMSE = 1.1 m) and mixed forests (R2 = 0.93, P < 0.001, RMSE = 0.8 m) produced the best results. Estimates over deciduous forests were less accurate because of the winter acquisition of SRTM data and loss of the scattering phase center from tree-surface interaction. We used two methods to estimate AGLB; algorithms based on direct estimation from the scattering phase center produced higher precision (R2 = 0.79, RMSE = 25 Mg/ha) than those estimated from forest height (R2 = 0.25, RMSE = 66 Mg/ha). We discuss sources of uncertainty and implications of the results in the context of mapping regional and continental scale forest biomass distribution.
Defect detection around rebars in concrete using focused ultrasound and reverse time migration.
Beniwal, Surendra; Ganguli, Abhijit
2015-09-01
Experimental and numerical investigations have been performed to assess the feasibility of damage detection around rebars in concrete using focused ultrasound and a Reverse Time Migration (RTM) based subsurface imaging algorithm. Since concrete is heterogeneous, an unfocused ultrasonic field will be randomly scattered by the aggregates, thereby masking information about damage(s). A focused ultrasonic field, on the other hand, increases the possibility of detection of an anomaly due to enhanced amplitude of the incident field in the focal region. Further, the RTM based reconstruction using scattered focused field data is capable of creating clear images of the inspected region of interest. Since scattering of a focused field by a damaged rebar differs qualitatively from that of an undamaged rebar, distinct images of damaged and undamaged situations are obtained in the RTM generated images. This is demonstrated with both numerical and experimental investigations. The total scattered field, acquired on the surface of the concrete medium, is used as input for the RTM algorithm to generate the subsurface image that helps to identify the damage. The proposed technique, therefore, has some advantage since knowledge about the undamaged scenario for the concrete medium is not necessary to assess its integrity. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Boness, D. A.; Terrell-Martinez, B.
2010-12-01
As part of an ongoing undergraduate research project on light scattering calculations involving fractal carbonaceous soot aggregates relevant to current anthropogenic and natural sources in Earth's atmosphere, we have read with interest a recent paper [E. T. Wolf and O. B. Toon, Science 328, 1266 (2010)] claiming that the Faint Young Sun paradox discussed four decades ago by Carl Sagan and others can be resolved without invoking heavy CO2 concentrations as a greenhouse gas warming the early Earth enough to sustain liquid water and hence allow the origin of life. Wolf and Toon report that a Titan-like Archean Earth haze, with a fractal haze aggregate nature due to nitrogen-methane photochemistry at high altitudes, should block enough UV light to protect the warming greenhouse gas NH3 while allowing enough visible light to reach the surface of the Earth. To test this hypothesis, we have employed a rigorous T-matrix arbitrary-particle light scattering technique, to avoid the simplifications inherent in Mie-sphere scattering, on haze fractal aggregates at UV and visible wavelengths of incident light. We generate these model aggregates using diffusion-limited cluster aggregation (DLCA) algorithms, which much more closely fit actual haze fractal aggregates than do diffusion-limited aggregation (DLA) algorithms.
Hyperspectral imaging simulation of object under sea-sky background
NASA Astrophysics Data System (ADS)
Wang, Biao; Lin, Jia-xuan; Gao, Wei; Yue, Hui
2016-10-01
Remote sensing image simulation plays an important role in spaceborne/airborne payload demonstration and algorithm development. Hyperspectral imaging is valuable in marine monitoring and search and rescue. To meet the demand for spectral imaging of objects in complex sea scenes, a physics-based method for simulating the spectral image of an object under a sea scene is proposed. By developing an imaging simulation model that accounts for the object, background, atmospheric conditions and sensor, it is possible to examine the influence of wind speed, atmospheric conditions and other environmental factors on spectral image quality in complex sea scenes. First, the sea scattering model is established based on the Phillips sea spectrum model, rough-surface scattering theory and the volume scattering characteristics of water. Measured bidirectional reflectance distribution function (BRDF) data of objects are fit to a statistical model. MODTRAN software is used to obtain the solar illumination on the sea, the sky brightness, the atmospheric transmittance from sea to sensor and the atmospheric backscattered radiance, and a Monte Carlo ray-tracing method is used to calculate the composite scattering of the sea surface and object and the spectral image. Finally, the object spectrum is obtained by spatial transformation, radiometric degradation and the addition of noise. The model connects the spectral image with the environmental parameters, the object parameters and the sensor parameters, providing a tool for payload demonstration and algorithm development.
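The Phillips sea spectrum referred to above is commonly written, in the form popular for ocean-surface simulation, as P(k) proportional to exp(-1/(kL)^2)/k^4 times a wind-alignment factor, with L = V^2/g set by the wind speed V. A sketch under that assumption (constants and names are illustrative):

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def phillips_spectrum(kx, ky, wind_speed, wind_dir, amplitude=1.0):
    """Phillips ocean-wave spectrum evaluated at wavevector (kx, ky).
    wind_dir is the wind direction in radians; amplitude is a free
    scaling constant."""
    k2 = kx**2 + ky**2
    k2 = np.where(k2 == 0.0, 1e-12, k2)   # avoid division by zero at k = 0
    L = wind_speed**2 / G                 # largest wave from continuous wind
    wx, wy = np.cos(wind_dir), np.sin(wind_dir)
    cos_factor = (kx * wx + ky * wy) ** 2 / k2  # alignment with the wind
    return amplitude * np.exp(-1.0 / (k2 * L**2)) / k2**2 * cos_factor
```

Waves traveling along the wind direction carry the most energy; the alignment factor suppresses the spectrum to zero for wavevectors perpendicular to the wind.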
NASA Astrophysics Data System (ADS)
Dixon, David A.; Hughes, H. Grady
2017-09-01
This paper presents a validation test comparing angular distributions from an electron multiple-scattering experiment with those generated using the MCNP6 Monte Carlo code system. In this experiment, 13- and 20-MeV electron pencil beams are deflected by thin foils with atomic numbers from 4 to 79. To determine the angular distribution, the fluence is measured downrange of the scattering foil at various radii orthogonal to the beam line. The characteristic angle (the angle at which the maximum of the distribution is reduced by 1/e) is then determined from the angular distribution and compared with experiment. Multiple-scattering foils tested herein include beryllium, carbon, aluminum, copper, and gold. For the default electron-photon transport settings, the calculated characteristic angle was statistically distinguishable from measurement, and the calculated distributions were generally broader than the measured ones. The average relative difference ranged from 5.8% to 12.2% over all of the foils, source energies, and physics settings tested. This validation illuminated a well-understood deficiency in the computation of the underlying angular distributions. As a result, code enhancements were made to stabilize the angular distributions in the presence of very small substeps. However, the enhancement only marginally improved results, indicating that additional algorithmic details should be studied.
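The characteristic angle defined above (the angle at which the distribution falls to 1/e of its maximum) can be extracted from sampled fluence data by simple interpolation. A sketch with hypothetical names, assuming a distribution peaked at the first sample:

```python
import numpy as np

def characteristic_angle(angles, fluence):
    """Return the angle where `fluence` first drops to 1/e of its
    maximum, with linear interpolation between bracketing samples."""
    target = fluence.max() / np.e
    for i in range(1, len(fluence)):
        if fluence[i] <= target:
            f0, f1 = fluence[i - 1], fluence[i]
            frac = (f0 - target) / (f0 - f1)
            return angles[i - 1] + frac * (angles[i] - angles[i - 1])
    return angles[-1]  # distribution never reached 1/e of its maximum
```

For a Gaussian angular distribution exp(-(theta/theta0)^2), the recovered characteristic angle is theta0 by construction, which makes a convenient unit test for the extraction step.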
Van Pamel, Anton; Brett, Colin R; Lowe, Michael J S
2014-12-01
Improving the ultrasound inspection capability for coarse-grained metals remains of longstanding interest and is expected to become increasingly important for next-generation electricity power plants. Conventional ultrasonic A-, B-, and C-scans have been found to suffer from strong background noise caused by grain scattering, which can severely limit the detection of defects. However, in recent years, array probes and full matrix capture (FMC) imaging algorithms have unlocked exciting possibilities for improvements. To improve and compare these algorithms, we must rely on robust methodologies to quantify their performance. This article proposes such a methodology to evaluate the detection performance of imaging algorithms. For illustration, the methodology is applied to some example data using three FMC imaging algorithms: the total focusing method (TFM), phase-coherent imaging (PCI), and decomposition of the time-reversal operator with a multiple scattering filter (DORT MSF). However, it is important to note that this is solely to illustrate the methodology; this article does not attempt the broader investigation of different cases that would be needed to compare the performance of these algorithms in general. The methodology considers the statistics of detection, presenting the detection performance as probability of detection (POD) and probability of false alarm (PFA). A test sample of coarse-grained nickel super alloy, manufactured to represent materials used for future power plant components and containing some simple artificial defects, is used to illustrate the method on the candidate algorithms. The data are captured in pulse-echo mode using 64-element array probes at center frequencies of 1 and 5 MHz. In this particular case, it turns out that all three algorithms are shown to perform very similarly when comparing their flaw detection capabilities.
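Of the three algorithms, the total focusing method is the simplest to sketch: every image pixel is formed by summing the FMC data at the round-trip times from transmitter to pixel to receiver. A minimal delay-and-sum version follows (no apodization, no envelope detection, nearest-sample interpolation; names illustrative, elements assumed on the surface z = 0):

```python
import numpy as np

def tfm_image(fmc, elem_x, grid_x, grid_z, c, fs):
    """Total focusing method on full-matrix-capture data fmc[tx, rx, t],
    with array element x-positions elem_x, sound speed c, sampling
    rate fs, and an image grid (grid_x, grid_z)."""
    n_tx, n_rx, n_t = fmc.shape
    image = np.zeros((len(grid_z), len(grid_x)))
    rx_idx = np.arange(n_rx)
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            tof = np.sqrt((elem_x - x) ** 2 + z ** 2) / c  # one-way times
            for tx in range(n_tx):
                samples = np.round((tof[tx] + tof) * fs).astype(int)
                valid = samples < n_t
                image[iz, ix] += fmc[tx, rx_idx[valid], samples[valid]].sum()
    return image
```

With a synthetic point scatterer, the image peaks at the pixel containing the scatterer, since only there do all transmit-receive pairs add coherently.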
NASA Astrophysics Data System (ADS)
Gurrala, Praveen; Downs, Andrew; Chen, Kun; Song, Jiming; Roberts, Ron
2018-04-01
Full wave scattering models for ultrasonic waves are necessary for the accurate prediction of voltage signals received from complex defects/flaws in practical nondestructive evaluation (NDE) measurements. We propose the high-order Nyström method accelerated by the multilevel fast multipole algorithm (MLFMA) as an improvement to the state-of-the-art full-wave scattering models that are based on boundary integral equations. We present numerical results demonstrating improvements in simulation time and memory requirement. In particular, we demonstrate the need for higher-order geometry and field approximation in modeling NDE measurements. Also, we illustrate the importance of full-wave scattering models using experimental pulse-echo data from a spherical inclusion in a solid, which cannot be modeled accurately by approximation-based scattering models such as the Kirchhoff approximation.
Stacy Pease; Peter F. Ffolliott; Leonard F. DeBano; Gerald J. Gottfried
2000-01-01
Effects of complete removal of mesquite overstory, complete removal of mesquite overstory with control of post-treatment sprouts, and retention of the mesquite overstory as a control on herbage production are described. Mulching treatments included applications of a chip mulch, a commercial compost, lopped-and-scattered mesquite branchwood, and an untreated control....
Correction of scatter in megavoltage cone-beam CT
NASA Astrophysics Data System (ADS)
Spies, L.; Ebert, M.; Groh, B. A.; Hesse, B. M.; Bortfeld, T.
2001-03-01
The role of scatter in a cone-beam computed tomography system using the therapeutic beam of a medical linear accelerator and a commercial electronic portal imaging device (EPID) is investigated. A scatter correction method is presented which is based on a superposition of Monte Carlo generated scatter kernels. The kernels are adapted to both the spectral response of the EPID and the dimensions of the phantom being scanned. The method is part of a calibration procedure which converts the measured transmission data acquired for each projection angle into water-equivalent thicknesses. Tomographic reconstruction of the projections then yields an estimate of the electron density distribution of the phantom. It is found that scatter produces cupping artefacts in the reconstructed tomograms. Furthermore, reconstructed electron densities deviate greatly (by about 30%) from their expected values. The scatter correction method removes the cupping artefacts and decreases the deviations from 30% down to about 8%.
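The kernel-superposition idea can be sketched as a convolution of each projection with a scatter kernel followed by subtraction of the estimate. The kernel below is an arbitrary placeholder, whereas the paper adapts Monte Carlo generated kernels to the EPID spectral response and the phantom dimensions; circular FFT convolution is used for brevity:

```python
import numpy as np

def scatter_correct(measured, scatter_kernel):
    """Estimate scatter as (measured * kernel) via 2-D FFT convolution
    and subtract it from the measured projection."""
    K = np.fft.fft2(scatter_kernel, s=measured.shape)
    scatter = np.real(np.fft.ifft2(np.fft.fft2(measured) * K))
    return measured - scatter
```

A degenerate delta-function kernel with weight alpha simply removes the fraction alpha of the projection, which is a quick check that the convolution and subtraction are wired correctly.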
Mourant, Judith R.; Bocklage, Thérese J.; Powers, Tamara M.; Greene, Heather M.; Dorin, Maxine H.; Waxman, Alan G.; Zsemlye, Meggan M.; Smith, Harriet O.
2009-01-01
Objective To examine the utility of in vivo elastic light scattering measurements to identify cervical intraepithelial neoplasias (CIN) 2/3 and cancers in women undergoing colposcopy and to determine the effects of patient characteristics such as menstrual status on the elastic light scattering spectroscopic measurements. Materials and Methods A fiber optic probe was used to measure light transport in the cervical epithelium of patients undergoing colposcopy. Spectroscopic results from 151 patients were compared with histopathology of the measured and biopsied sites. A method of classifying the measured sites into two clinically relevant categories was developed and tested using five-fold cross-validation. Results Statistically significant effects by age at diagnosis, menopausal status, timing of the menstrual cycle, and oral contraceptive use were identified, and adjustments based upon these measurements were incorporated in the classification algorithm. A sensitivity of 77±5% and a specificity of 62±2% were obtained for separating CIN 2/3 and cancer from other pathologies and normal tissue. Conclusions The effects of both menstrual status and age should be taken into account in the algorithm for classifying tissue sites based on elastic light scattering spectroscopy. When this is done, elastic light scattering spectroscopy shows good potential for real-time diagnosis of cervical tissue at colposcopy. Guiding biopsy location is one potential near-term clinical application area, while facilitating "see and treat" protocols is a longer term goal. Improvements in accuracy are essential. PMID:20694193
NASA Astrophysics Data System (ADS)
Gilles, Antonin; Gioia, Patrick; Cozot, Rémi; Morin, Luce
2015-09-01
The hybrid point-source/wave-field method is a newly proposed approach for Computer-Generated Hologram (CGH) calculation, based on the slicing of the scene into several depth layers parallel to the hologram plane. The complex wave scattered by each depth layer is then computed using either a wave-field or a point-source approach according to a threshold criterion on the number of points within the layer. Finally, the complex waves scattered by all the depth layers are summed up in order to obtain the final CGH. Although outperforming both point-source and wave-field methods without producing any visible artifact, this approach has not yet been used for animated holograms, and the possible exploitation of temporal redundancies has not been studied. In this paper, we propose a fast computation of video holograms by taking into account those redundancies. Our algorithm consists of three steps. First, intensity and depth data of the current 3D video frame are extracted and compared with those of the previous frame in order to remove temporally redundant data. Then the CGH pattern for this compressed frame is generated using the hybrid point-source/wave-field approach. The resulting CGH pattern is finally transmitted to the video output and stored in the previous frame buffer. Experimental results reveal that our proposed method is able to produce video holograms at interactive rates without producing any visible artifact.
A feature-preserving hair removal algorithm for dermoscopy images.
Abbas, Qaisar; Garcia, Irene Fondón; Emre Celebi, M; Ahmad, Waqar
2013-02-01
Accurate segmentation and repair of hair-occluded information from dermoscopy images are challenging tasks for computer-aided detection (CAD) of melanoma. Currently, many hair-restoration algorithms have been developed, but most of these fail to identify hairs accurately, and their removal technique is slow and disturbs the lesion's pattern. In this article, a novel hair-restoration algorithm is presented, which is capable of preserving skin lesion features such as color and texture and of segmenting both dark and light hairs. Our algorithm is based on three major steps: rough hairs are segmented using matched filtering with the first derivative of Gaussian (MF-FDOG) with thresholding, which generates strong responses for both dark and light hairs; the hairs are refined by morphological edge-based techniques; and they are then repaired through a fast marching inpainting method. Diagnostic accuracy (DA) and texture-quality measure (TQM) metrics are utilized based on dermatologist-drawn manual hair masks that were used as a ground truth to evaluate the performance of the system. The hair-restoration algorithm is tested on 100 dermoscopy images. Comparisons have been made among (i) linear interpolation, inpainting by (ii) non-linear partial differential equation (PDE), and (iii) exemplar-based repairing techniques. Among different hair detection and removal techniques, our proposed algorithm obtained the highest value of DA: 93.3% and TQM: 90%. The experimental results indicate that the proposed algorithm is highly accurate, robust and able to restore hair pixels without damaging the lesion texture. This method is fully automatic and can be easily integrated into a CAD system. © 2011 John Wiley & Sons A/S.
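The MF-FDOG pair named above combines a zero-mean (inverted) Gaussian matched filter, which responds to dark line-like structures such as hairs, with its first-derivative-of-Gaussian companion. A 1-D sketch of the two kernels (parameters illustrative, not the paper's values):

```python
import numpy as np

def mf_fdog_kernels(sigma, length):
    """Return the 1-D cross-sections of the matched filter (zero-mean
    inverted Gaussian) and the first derivative of Gaussian (FDOG)."""
    x = np.arange(length) - (length - 1) / 2.0
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    mf = -(g - g.mean())          # zero mean: no response to flat regions
    fdog = -x / sigma ** 2 * g    # first derivative of the Gaussian
    return mf, fdog
```

Convolving the matched filter with a profile containing a dark dip produces a strong positive response at the dip, while the near-zero local mean of the FDOG response helps distinguish line structures from step edges.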
Motion artifact removal algorithm by ICA for e-bra: a women ECG measurement system
NASA Astrophysics Data System (ADS)
Kwon, Hyeokjun; Oh, Sechang; Varadan, Vijay K.
2013-04-01
Wearable ECG (electrocardiogram) measurement systems have increasingly been developed for people who suffer from CVD (cardiovascular disease) and have very active lifestyles. In the case of female CVD patients especially, several abnormal symptoms accompany CVDs; monitoring women's ECG signals is therefore a significant diagnostic method for preventing sudden heart attack. The E-bra ECG measurement system from our previous work provides a more convenient option for women than a Holter monitor system. The E-bra system was developed with a motion artifact removal algorithm using an adaptive filter with LMS (least mean square) and a wandering-noise-baseline detection algorithm. In this paper, ICA (independent component analysis) algorithms are suggested to remove the motion artifact factor for the E-bra system. First, the ICA algorithms are developed with two statistical measures, kurtosis and entropy, and evaluated by performing simulations with an ECG signal created by the sgolayfilt function of MATLAB, a noise signal including 0.4 Hz, 1.1 Hz and 1.9 Hz components, and a weighted vector W estimated by kurtosis or entropy. A correlation value is shown as the degree of similarity between the created ECG signal and the estimated new ECG signal. In the real-time E-bra system, two pseudo signals are extracted by multiplying a random weighted vector W with the measured ECG signal from the E-bra system and the noise component signal from the noise extraction algorithm of our previous work. The suggested ICA algorithm based on kurtosis or entropy is used to estimate the new ECG signal Y without the noise component.
Research of facial feature extraction based on MMC
NASA Astrophysics Data System (ADS)
Xue, Donglin; Zhao, Jiufen; Tang, Qinhong; Shi, Shaokun
2017-07-01
Based on the maximum margin criterion (MMC), a new algorithm for statistically uncorrelated optimal discriminant vectors and a new algorithm for orthogonal optimal discriminant vectors for feature extraction are proposed. The purpose of the maximum margin criterion is to maximize the inter-class scatter while simultaneously minimizing the intra-class scatter after the projection. Compared with the original MMC method and the principal component analysis (PCA) method, the proposed methods are better at reducing or eliminating the statistical correlation between features and improving the recognition rate. The experimental results on the Olivetti Research Laboratory (ORL) face database show that the new feature extraction method of statistically uncorrelated maximum margin criterion (SUMMC) is better in terms of recognition rate and stability. In addition, the relation between the maximum margin criterion and the Fisher criterion for feature extraction is revealed.
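The maximum margin criterion itself reduces to an eigenproblem on the difference of the between-class and within-class scatter matrices. A sketch of that core step, without the uncorrelatedness or orthogonality constraints that the proposed algorithms add (names illustrative):

```python
import numpy as np

def mmc_projection(X, y, n_components):
    """Project X (samples x features) onto the leading eigenvectors of
    (S_b - S_w), i.e. maximize inter-class minus intra-class scatter."""
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)  # between-class
        Sw += (Xc - mc).T @ (Xc - mc)                   # within-class
    # (S_b - S_w) is symmetric, so eigh applies; keep largest eigenvalues.
    vals, vecs = np.linalg.eigh(Sb - Sw)
    W = vecs[:, np.argsort(vals)[::-1][:n_components]]
    return X @ W, W
```

Unlike Fisher's criterion, no inversion of S_w is required, which is the usual argument for MMC when the within-class scatter matrix is singular.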
A three-image algorithm for hard x-ray grating interferometry.
Pelliccia, Daniele; Rigon, Luigi; Arfelli, Fulvia; Menk, Ralf-Hendrik; Bukreeva, Inna; Cedola, Alessia
2013-08-12
A three-image method to extract absorption, refraction and scattering information in hard x-ray grating interferometry is presented. The method is a post-processing alternative to the conventional phase-stepping procedure and is inspired by a similar three-image technique developed for analyzer-based x-ray imaging. Results obtained with this algorithm are quantitatively comparable with phase stepping. The method can be further extended to samples with negligible scattering, where only two images are needed to separate the absorption and refraction signals. Thanks to the limited number of images required, this technique is a viable route to bio-compatible imaging with an x-ray grating interferometer. In addition, our method elucidates and strengthens the formal and practical analogies between grating interferometry and the (non-interferometric) diffraction enhanced imaging technique.
Xu, Yuan; Bai, Ti; Yan, Hao; Ouyang, Luo; Pompos, Arnold; Wang, Jing; Zhou, Linghong; Jiang, Steve B.; Jia, Xun
2015-01-01
Cone-beam CT (CBCT) has become the standard image guidance tool for patient setup in image-guided radiation therapy. However, due to its large illumination field, scattered photons severely degrade its image quality. While kernel-based scatter correction methods have been used routinely in the clinic, it is still desirable to develop Monte Carlo (MC) simulation-based methods due to their accuracy. However, the high computational burden of the MC method has prevented routine clinical application. This paper reports our recent development of a practical method of MC-based scatter estimation and removal for CBCT. In contrast with conventional MC approaches that estimate scatter signals using a scatter-contaminated CBCT image, our method used a planning CT image for MC simulation, which has the advantages of accurate image intensity and absence of image truncation. In our method, the planning CT was first rigidly registered with the CBCT. Scatter signals were then estimated via MC simulation. After scatter signals were removed from the raw CBCT projections, a corrected CBCT image was reconstructed. The entire workflow was implemented on a GPU platform for high computational efficiency. Strategies such as projection denoising, CT image downsampling, and interpolation along the angular direction were employed to further enhance the calculation speed. We studied the impact of key parameters in the workflow on the resulting accuracy and efficiency, based on which the optimal parameter values were determined. Our method was evaluated in numerical simulation, phantom, and real patient cases. In the simulation cases, our method reduced mean HU errors from 44 HU to 3 HU and from 78 HU to 9 HU in the full-fan and the half-fan cases, respectively. In both the phantom and the patient cases, image artifacts caused by scatter, such as ring artifacts around the bowtie area, were reduced. 
With all the techniques employed, we achieved a computation time of less than 30 sec, including both the scatter estimation and CBCT reconstruction steps. The efficacy of our method and its high computational efficiency make it attractive for clinical use. PMID:25860299
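The angular-interpolation speedup mentioned above can be sketched as follows: Monte Carlo scatter is simulated only at a sparse set of projection angles, linearly interpolated to all acquisition angles, and then subtracted from the raw projections. The function names and the 1-D per-angle "projection" model are illustrative assumptions, not the paper's implementation.

```python
# Sketch: interpolate sparsely simulated scatter along the angular direction,
# then subtract it from the raw projections (floored at zero).
# All names and the scalar-per-angle model are illustrative.

def interpolate_scatter(sparse_angles, sparse_scatter, all_angles):
    """Linearly interpolate per-angle scatter estimates to every angle."""
    out = []
    for a in all_angles:
        # locate the bracketing simulated angles
        lo = max(x for x in sparse_angles if x <= a)
        hi = min(x for x in sparse_angles if x >= a)
        s_lo = sparse_scatter[sparse_angles.index(lo)]
        s_hi = sparse_scatter[sparse_angles.index(hi)]
        t = 0.0 if hi == lo else (a - lo) / (hi - lo)
        out.append(s_lo + t * (s_hi - s_lo))
    return out

def correct_projections(raw, scatter):
    """Scatter removal: corrected = raw - estimated scatter, floored at 0."""
    return [max(r - s, 0.0) for r, s in zip(raw, scatter)]
```

In this toy form, simulating scatter at every tenth angle reduces the Monte Carlo workload by roughly an order of magnitude while the smooth angular variation of scatter keeps the interpolation error small.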
An algorithm for finding a similar subgraph of all Hamiltonian cycles
NASA Astrophysics Data System (ADS)
Wafdan, R.; Ihsan, M.; Suhaimi, D.
2018-01-01
This paper discusses an algorithm, called the findSimSubG algorithm, for finding a similar subgraph. A similar subgraph is a subgraph with a maximum number of edges that contains no isolated vertex and is contained in every Hamiltonian cycle of a Hamiltonian graph. The algorithm runs only on Hamiltonian graphs with at least two Hamiltonian cycles. It works by examining whether the initial subgraph of the first Hamiltonian cycle is a subgraph of the comparison graphs. If it is not, the algorithm removes the edges and vertices of the initial subgraph that are not in the comparison graphs. There are two main processes in the algorithm: changing a Hamiltonian cycle into a cycle graph, and removing edges and vertices of the initial subgraph that are not in the comparison graphs. The findSimSubG algorithm finds the similar subgraph without using backtracking. No similar subgraph exists for certain graphs, such as the n-antiprism graph, complete bipartite graph, complete graph, 2n-crossed prism graph, n-crown graph, n-Möbius ladder, prism graph, and wheel graph. The complexity of the algorithm is O(m|V|), where m is the number of Hamiltonian cycles and |V| is the number of vertices of the Hamiltonian graph.
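The defining property of the similar subgraph can be illustrated by brute force, assuming all Hamiltonian cycles are available as vertex lists. This set-intersection sketch is only an illustration of what findSimSubG computes; the paper's algorithm avoids materialising every cycle.

```python
# Brute-force illustration: the similar subgraph's edges are exactly the
# edges common to every Hamiltonian cycle of the graph.

def cycle_edges(cycle):
    """Undirected edge set of a Hamiltonian cycle given as a vertex list."""
    n = len(cycle)
    return {frozenset((cycle[i], cycle[(i + 1) % n])) for i in range(n)}

def similar_subgraph(hamiltonian_cycles):
    """Edges present in all cycles; an empty set means no similar subgraph."""
    edge_sets = [cycle_edges(c) for c in hamiltonian_cycles]
    return set.intersection(*edge_sets)
```

For example, the cycles 0-1-2-3-0 and 0-1-3-2-0 share only the edges {0,1} and {2,3}, so those two edges form the similar subgraph.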
Improving resolution of dynamic communities in human brain networks through targeted node removal
Turner, Benjamin O.; Miller, Michael B.; Carlson, Jean M.
2017-01-01
Current approaches to dynamic community detection in complex networks can fail to identify multi-scale community structure, or to resolve key features of community dynamics. We propose a targeted node removal technique to improve the resolution of community detection. Using synthetic oscillator networks with well-defined “ground truth” communities, we quantify the community detection performance of a common modularity maximization algorithm. We show that the performance of the algorithm on communities of a given size deteriorates when these communities are embedded in multi-scale networks with communities of different sizes, compared to the performance in a single-scale network. We demonstrate that targeted node removal during community detection improves performance on multi-scale networks, particularly when removing the most functionally cohesive nodes. Applying this approach to network neuroscience, we compare dynamic functional brain networks derived from fMRI data taken during both repetitive single-task and varied multi-task experiments. After the removal of regions in visual cortex, the most coherent functional brain area during the tasks, community detection is better able to resolve known functional brain systems into communities. In addition, node removal enables the algorithm to distinguish clear differences in brain network dynamics between these experiments, revealing task-switching behavior that was not identified with the visual regions present in the network. These results indicate that targeted node removal can improve spatial and temporal resolution in community detection, and they demonstrate a promising approach for comparison of network dynamics between neuroscientific data sets with different resolution parameters. PMID:29261662
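The removal step itself is simple to sketch. In the snippet below, "functional cohesion" is approximated by weighted degree purely for illustration; the paper scores cohesion from the dynamic community structure itself, and all names here are assumptions.

```python
# Sketch of targeted node removal before community detection: rank nodes by a
# cohesion score (weighted degree, as a stand-in for "most functionally
# cohesive") and delete the top k together with their incident edges.

def weighted_degree(adj):
    """adj: dict node -> {neighbor: edge weight}."""
    return {u: sum(nbrs.values()) for u, nbrs in adj.items()}

def remove_top_nodes(adj, k):
    """Return a copy of the weighted adjacency dict with the k
    highest-cohesion nodes (and their incident edges) removed."""
    score = weighted_degree(adj)
    drop = set(sorted(score, key=score.get, reverse=True)[:k])
    return {u: {v: w for v, w in nbrs.items() if v not in drop}
            for u, nbrs in adj.items() if u not in drop}
```

Any community detection method (e.g. modularity maximization) would then be run on the reduced network, analogous to removing the visual-cortex regions before detecting functional brain systems.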
NASA Astrophysics Data System (ADS)
Hobiger, Manuel; Cornou, Cécile; Bard, Pierre-Yves; Le Bihan, Nicolas; Imperatori, Walter
2016-10-01
We introduce the MUSIQUE algorithm and apply it to seismic wavefield recordings in California. The algorithm is designed to analyse seismic signals recorded by arrays of three-component seismic sensors. It is based on the MUSIC and quaternion-MUSIC algorithms. In a first step, the MUSIC algorithm is applied to estimate the backazimuth and velocity of incident seismic waves and to discriminate between Love and possible Rayleigh waves. In a second step, the polarization parameters of possible Rayleigh waves are analysed using quaternion-MUSIC, distinguishing retrograde and prograde Rayleigh waves and determining their ellipticity. In this study, we apply the MUSIQUE algorithm to seismic wavefield recordings of the San Jose Dense Seismic Array. This array was installed in 1999 in the Evergreen Basin, a sedimentary basin in the Eastern Santa Clara Valley. The analysis includes 22 regional earthquakes with epicentres between 40 and 600 km from the array, covering different backazimuths with respect to the array. The azimuthal distribution and the energy partition of the different surface wave types are analysed. Love waves dominate the wavefield for the vast majority of the events. For close events in the north, the wavefield is dominated by the first harmonic mode of Love waves; for farther events, the fundamental mode dominates. The energy distribution is different for earthquakes occurring northwest and southeast of the array. In both cases, the waves crossing the array arrive mostly from the respective hemicycle. However, scattered Love waves arriving from the south can be seen for all earthquakes. Combining the information of all events, it is possible to retrieve the Love wave dispersion curves of the fundamental and first harmonic modes. The particle motion of the fundamental mode of Rayleigh waves is retrograde; for the first harmonic mode, it is prograde.
For both modes, we can also retrieve dispersion and ellipticity curves. Wave motion simulations for two earthquakes are in good agreement with the real data results and confirm the identification of the wave scattering formations to the south of the array, which generate the scattered Love waves visible for all earthquakes.
NASA Technical Reports Server (NTRS)
Vasilkov, Alexander; Joiner, Joanna; Spurr, Robert; Bhartia, Pawan K.; Levelt, Pieternel; Stephens, Graeme
2009-01-01
In this paper we examine differences between cloud pressures retrieved from the Ozone Monitoring Instrument (OMI) using the ultraviolet rotational Raman scattering (RRS) algorithm and those from the thermal infrared (IR) Aqua/MODIS. Several cloud data sets are currently being used in OMI trace gas retrieval algorithms, including climatologies based on IR measurements and simultaneous cloud parameters derived from OMI. From a validation perspective, it is important to understand the OMI retrieved cloud parameters and how they differ from those derived from the IR. To this end, we perform radiative transfer calculations to simulate the effects of different geophysical conditions on the OMI RRS cloud pressure retrievals. We also quantify errors related to the use of the Mixed Lambert-Equivalent Reflectivity (MLER) concept as currently implemented in the OMI algorithms. Using properties from the CloudSat radar and MODIS, we show that radiative transfer calculations support the following: (1) The MLER model is adequate for single-layer optically thick, geometrically thin clouds, but can produce significant errors in estimated cloud pressure for optically thin clouds. (2) In a two-layer cloud, the RRS algorithm may retrieve a cloud pressure that is either between the two cloud decks or even beneath the top of the lower cloud deck because of scattering between the cloud layers; the retrieved pressure depends upon the viewing geometry and the optical depth of the upper cloud deck. (3) Absorbing aerosol in and above a cloud can produce significant errors in the retrieved cloud pressure. (4) The retrieved RRS effective pressure for a deep convective cloud will be significantly higher than the physical cloud top pressure derived with thermal IR.
NASA Astrophysics Data System (ADS)
Alexandrov, M. D.; Mishchenko, M. I.
2017-12-01
Accurate aerosol retrievals from space remain quite challenging and typically involve solving a severely ill-posed inverse scattering problem. We suggest addressing this ill-posedness by flying a bistatic lidar system. Such a system would consist of a formation-flying constellation of a primary satellite equipped with a conventional monostatic (backscattering) lidar and an additional platform hosting a receiver of the scattered laser light. If successfully implemented, this concept would combine the measurement capabilities of a passive multi-angle multi-spectral polarimeter with the vertical profiling capability of a lidar. Thus, bistatic lidar observations will be free of the deficiencies affecting both monostatic lidar measurements (caused by their highly limited information content) and passive photopolarimetric measurements (caused by vertical integration and surface reflection). We present a preliminary aerosol retrieval algorithm for a bistatic lidar system consisting of a high spectral resolution lidar (HSRL) and an additional receiver flown in formation with it at a scattering angle of 165 degrees. This algorithm was applied to synthetic data generated using Mie-theory computations. The model/retrieval parameters in our tests were the effective radius and variance of the aerosol size distribution, the complex refractive index of the particles, and their number concentration. Both mono- and bimodal aerosol mixtures were considered. Our algorithm allowed for a definitive evaluation of error propagation from measurements to retrievals using a Monte Carlo technique, which involves random distortion of the observations and statistical characterization of the resulting retrieval errors. Our tests demonstrated that supplementing a conventional monostatic HSRL with an additional receiver dramatically increases the information content of the measurements and allows for a sufficiently accurate characterization of tropospheric aerosols.
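The Monte Carlo error-propagation loop described above can be sketched generically: the synthetic measurement vector is randomly distorted many times, a retrieval is run on each distorted copy, and the scatter of the results characterises the retrieval error. The linear "retrieval" callable in the example is a placeholder, not the paper's inversion.

```python
# Sketch of Monte Carlo error propagation for a retrieval algorithm:
# perturb measurements with multiplicative Gaussian noise, retrieve a
# parameter from each perturbed copy, and report mean and spread.
import random
import statistics

def perturb(measurement, rel_noise, rng):
    """Apply multiplicative Gaussian noise to each measurement channel."""
    return [m * (1.0 + rng.gauss(0.0, rel_noise)) for m in measurement]

def monte_carlo_errors(measurement, retrieval, rel_noise, n_trials, seed=0):
    """Run `retrieval` on n_trials randomly distorted measurement vectors."""
    rng = random.Random(seed)
    retrieved = [retrieval(perturb(measurement, rel_noise, rng))
                 for _ in range(n_trials)]
    return statistics.mean(retrieved), statistics.stdev(retrieved)
```

With a multi-channel measurement, the reported standard deviation directly quantifies how measurement noise maps into uncertainty of the retrieved parameter (effective radius, refractive index, etc. in the paper's tests).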
Ahn, Jae-Hyun; Park, Young-Je; Kim, Wonkook; Lee, Boram
2016-12-26
An estimation of the aerosol multiple-scattering reflectance is an important part of the atmospheric correction procedure in satellite ocean color data processing. Most commonly, two near-infrared (NIR) bands are used to estimate the aerosol optical properties and thereby the effects of aerosols. Previously, the operational Geostationary Ocean Color Imager (GOCI) atmospheric correction scheme relied on a single-scattering reflectance ratio (SSE), developed for the processing of Sea-viewing Wide Field-of-view Sensor (SeaWiFS) data, to determine the appropriate aerosol models and their aerosol optical thicknesses. The scheme computes the reflectance contributions (weighting factors) of candidate aerosol models in the single-scattering domain and then spectrally extrapolates the single-scattering aerosol reflectance from the NIR to the visible (VIS) bands using the SSE. However, it applies the weight value directly to all wavelengths in the multiple-scattering domain, although the multiple-scattering aerosol reflectance has a non-linear relationship with the single-scattering reflectance and the inter-band relationship of multiple-scattering aerosol reflectances is also non-linear. To avoid these issues, we propose an alternative scheme for estimating the aerosol reflectance that uses the spectral relationships in the aerosol multiple-scattering reflectance between different wavelengths (called SRAMS). The process directly calculates the multiple-scattering reflectance contributions in the NIR with no residual errors for the selected aerosol models. It then spectrally extrapolates the reflectance contribution from the NIR to the visible bands for each selected model using the SRAMS.
To assess the performance of the algorithm regarding errors in the retrieval of water reflectance at the surface, or remote-sensing reflectance, we compared the SRAMS atmospheric correction results with the SSE atmospheric correction using both simulations and in situ match-ups with GOCI data. From simulations, the mean errors for bands from 412 to 555 nm were 5.2% for the SRAMS scheme and 11.5% for the SSE scheme in case-I waters. From in situ match-ups, the mean errors were 16.5% for the SRAMS scheme and 17.6% for the SSE scheme in both case-I and case-II waters. Although we applied the SRAMS algorithm to GOCI, it can be applied to other ocean color sensors that have two NIR wavelengths.
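The band-to-band extrapolation idea can be sketched as follows, with heavy caveats: the per-model polynomial spectral relations and all names below are made-up placeholders standing in for the precomputed SRAMS lookup, and the mixing of exactly two bracketing aerosol models is a simplification of the operational candidate selection.

```python
# Toy sketch of NIR-to-VIS extrapolation in the multiple-scattering domain:
# each candidate aerosol model carries a spectral relation mapping the
# long-NIR multiple-scattering reflectance to every other band; the second
# NIR band fixes the mixing weight, which is then applied band by band.

def srams_extrapolate(rho_nir_long, rho_nir_short, model_lo, model_hi, bands):
    """model_*: dict band -> (a, b) with rho(band) = a*rho_long + b*rho_long**2."""
    def predict(model, band):
        a, b = model[band]
        return a * rho_nir_long + b * rho_nir_long ** 2

    # mixing weight determined in the multiple-scattering domain (short NIR)
    lo = predict(model_lo, 'nir_short')
    hi = predict(model_hi, 'nir_short')
    w = (rho_nir_short - lo) / (hi - lo)
    return {band: (1 - w) * predict(model_lo, band) + w * predict(model_hi, band)
            for band in bands}
```

The key contrast with the SSE approach is that the weight is computed and applied entirely in the multiple-scattering domain, so no single-to-multiple scattering conversion error is introduced.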
NASA Astrophysics Data System (ADS)
Kuo, Chih-Hao
Efficient and accurate modeling of electromagnetic scattering from layered rough surfaces with buried objects finds applications ranging from detection of landmines to remote sensing of subsurface soil moisture. The formulation of a hybrid numerical/analytical solution to electromagnetic scattering from layered rough surfaces is first presented in this dissertation. The solution to scattering from each rough interface is sought independently based on the extended boundary condition method (EBCM), where the scattered fields of each rough interface are expressed as a summation of plane waves and then cast into reflection/transmission matrices. To account for interactions between multiple rough boundaries, the scattering matrix method (SMM) is applied to recursively cascade reflection and transmission matrices of each rough interface and obtain the composite reflection matrix from the overall scattering medium. The validation of this method against the Method of Moments (MoM) and Small Perturbation Method (SPM) is addressed and the numerical results which investigate the potential of low frequency radar systems in estimating deep soil moisture are presented. Computational efficiency of the proposed method is also discussed. In order to demonstrate the capability of this method in modeling coherent multiple scattering phenomena, the proposed method has been employed to analyze backscattering enhancement and satellite peaks due to surface plasmon waves from layered rough surfaces. Numerical results which show the appearance of enhanced backscattered peaks and satellite peaks are presented. Following the development of the EBCM/SMM technique, a technique which incorporates a buried object in layered rough surfaces by employing the T-matrix method and the cylindrical-to-spatial harmonics transformation is proposed. Validation and numerical results are provided. 
Finally, a multi-frequency polarimetric inversion algorithm for the retrieval of subsurface soil properties using VHF/UHF-band radar measurements is devised. The top soil dielectric constant is first determined using an L-band inversion algorithm. For the retrieval of subsurface properties, a time-domain inversion technique is employed together with a parameter optimization for the pulse shape of time-delay echoes from VHF/UHF-band radar observations. Numerical studies to investigate the accuracy of the proposed inversion technique in the presence of errors are addressed.
Modified kernel-based nonlinear feature extraction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, J.; Perkins, S. J.; Theiler, J. P.
2002-01-01
Feature Extraction (FE) techniques are widely used in many applications to pre-process data in order to reduce the complexity of subsequent processes. A group of kernel-based nonlinear FE (KFE) algorithms has attracted much attention due to their high performance. However, a serious limitation inherent in these algorithms dramatically degrades their flexibility: the maximal number of features they can extract is limited by the number of classes involved. Here we propose a modified version of these KFE algorithms (MKFE). This algorithm is developed from a special form of scatter matrix whose rank is not determined by the number of classes involved, and thus breaks the inherent limitation of those KFE algorithms. Experimental results suggest that the MKFE algorithm is especially useful when the training set is small.
A Coulomb collision algorithm for weighted particle simulations
NASA Technical Reports Server (NTRS)
Miller, Ronald H.; Combi, Michael R.
1994-01-01
A binary Coulomb collision algorithm is developed for weighted particle simulations employing Monte Carlo techniques. Charged particles within a given spatial grid cell are pair-wise scattered, explicitly conserving momentum and implicitly conserving energy. A similar algorithm developed by Takizuka and Abe (1977) conserves momentum and energy provided the particles are unweighted (each particle representing equal fractions of the total particle density). If applied as is to simulations incorporating weighted particles, the plasma temperatures equilibrate to an incorrect temperature, as compared to theory. Using the appropriate pairing statistics, a Coulomb collision algorithm is developed for weighted particles. The algorithm conserves energy and momentum and produces the appropriate relaxation time scales as compared to theoretical predictions. Such an algorithm is necessary for future work studying self-consistent multi-species kinetic transport.
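One common fix for unequal weights, sketched below in one dimension, is acceptance-rejection on the velocity update: the pair's post-collision velocities are computed as if the particles were unweighted, but each particle actually takes its new velocity only with probability w_other / max(w1, w2), so momentum is conserved in expectation. This illustrates the general idea, not the paper's exact pairing statistics.

```python
# Hedged sketch of a weighted binary-collision update (1-D, equal masses):
# the unweighted update would be v1 -> v1 + dv, v2 -> v2 - dv; the
# acceptance-rejection step restores momentum balance on average when
# the particle weights w1, w2 differ.
import random

def collide_weighted(v1, w1, v2, w2, dv, rng):
    """Apply the kick dv to each partner with probability w_other / w_max."""
    wmax = max(w1, w2)
    if rng.random() < w2 / wmax:
        v1 = v1 + dv
    if rng.random() < w1 / wmax:
        v2 = v2 - dv
    return v1, v2
```

Averaged over many pairings, the expected momentum change is w1*dv*(w2/wmax) - w2*dv*(w1/wmax) = 0, which is why the scheme reproduces the correct relaxation toward equal temperatures even when particles carry different statistical weights.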
Removing Ambiguities In Remotely Sensed Winds
NASA Technical Reports Server (NTRS)
Shaffer, Scott J.; Dunbar, Roy S.; Hsiao, Shuchi V.; Long, David G.
1991-01-01
Algorithm removes ambiguities in choices of candidate ocean-surface wind vectors estimated from measurements of radar backscatter from ocean waves. Increases accuracies of estimates of winds without requiring new instrumentation. Incorporates vector-median filtering function.
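The vector-median idea can be sketched as follows: among a cell's ambiguous wind candidates, keep the one closest, in summed Euclidean distance, to the vectors already selected in neighbouring cells. The field layout and iteration schedule of the operational filter are omitted; this shows only the per-cell selection rule.

```python
# Sketch of a vector-median selection step for wind ambiguity removal:
# pick the candidate (u, v) minimising total distance to neighbour picks.
import math

def vector_median_pick(candidates, neighbor_vectors):
    """candidates: list of (u, v) ambiguities for one cell;
    neighbor_vectors: list of (u, v) selections in adjacent cells."""
    def cost(c):
        return sum(math.dist(c, n) for n in neighbor_vectors)
    return min(candidates, key=cost)
```

In practice the filter sweeps the whole swath repeatedly, updating each cell from its neighbours until the selections stop changing, which propagates locally consistent wind directions across the field.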
A Geometrical-Statistical Approach to Outlier Removal for TDOA Measurements
NASA Astrophysics Data System (ADS)
Compagnoni, Marco; Pini, Alessia; Canclini, Antonio; Bestagini, Paolo; Antonacci, Fabio; Tubaro, Stefano; Sarti, Augusto
2017-08-01
The curse of outlier measurements in estimation problems is a well-known issue in a variety of fields. Therefore, outlier removal procedures, which enable the identification of spurious measurements within a set, have been developed for many different scenarios and applications. In this paper, we propose a statistically motivated outlier removal algorithm for time differences of arrival (TDOAs), or equivalently range differences (RDs), acquired at sensor arrays. The method exploits the TDOA-space formalism and requires knowledge only of the relative sensor positions. As the proposed method is completely independent of the application for which the measurements are used, it can be reliably used to identify outliers within a set of TDOA/RD measurements in different fields (e.g. acoustic source localization, sensor synchronization, radar, remote sensing, etc.). The proposed outlier removal algorithm is validated by means of synthetic simulations and real experiments.
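The paper's test is geometric, working in TDOA space; as a simpler stand-in, the sketch below flags outliers through the closure identity t(i,k) = t(i,j) + t(j,k) that any consistent full set of TDOAs must satisfy, voting against every pair appearing in a violated triple. Function names and the voting threshold are illustrative assumptions.

```python
# Simplified TDOA outlier flagging via triple-closure consistency:
# a measurement that breaks closure in most of its triples is an outlier.

def closure_outliers(tdoa, n_sensors, tol):
    """tdoa: dict mapping ordered pair (i, j), i < j, to a measured TDOA.
    Returns the pairs voted inconsistent in a majority of their triples."""
    votes = {pair: 0 for pair in tdoa}
    for i in range(n_sensors):
        for j in range(i + 1, n_sensors):
            for k in range(j + 1, n_sensors):
                resid = abs(tdoa[(i, j)] + tdoa[(j, k)] - tdoa[(i, k)])
                if resid > tol:
                    for pair in ((i, j), (j, k), (i, k)):
                        votes[pair] += 1
    # each pair appears in (n_sensors - 2) triples
    return sorted(p for p, v in votes.items() if v > (n_sensors - 2) / 2)
```

A single corrupted TDOA violates closure in every triple it joins, while clean measurements collect at most a few incidental votes, which is what makes the majority threshold work.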
Evaluation of different tissue de-paraffinization procedures for infrared spectral imaging.
Nallala, Jayakrupakar; Lloyd, Gavin Rhys; Stone, Nicholas
2015-04-07
In infrared spectral histopathology, paraffin-embedded tissues are often de-paraffinized using chemical agents such as xylene and hexane. These chemicals are known to be toxic, and the routine de-waxing procedure is time consuming. A comparative study was carried out to identify alternative de-paraffinization methods using paraffin oil and electronic de-paraffinization (using a mathematical computer algorithm), and their effectiveness was compared to xylene and hexane. Sixteen adjacent tissue sections obtained from a single block of normal colon tissue were de-paraffinized using xylene, hexane and paraffin oil (+ hexane wash) at five different time points each for comparison. One section was reserved unprocessed for electronic de-paraffinization based on a modified extended multiplicative signal correction (EMSC). IR imaging was carried out on these tissue sections. Coefficients based on the fit of a pure paraffin model to the IR images were then calculated to estimate the amount of paraffin remaining after processing. Results indicate that on average xylene removes more paraffin than hexane and paraffin oil, although the differences were small. This makes paraffin oil, followed by a hexane wash, an interesting and less toxic alternative method of de-paraffinization. However, none of the chemical methods removed paraffin completely from the tissues at any given time point. Moreover, paraffin was removed more easily from the glandular regions than from the connective tissue regions, indicating a form of differential paraffin retention based on the histology. In such cases, the use of electronic de-paraffinization to neutralize such variances across different tissue regions might be considered. Moreover, it is faster, reduces scatter artefacts by index matching, and enables samples to be easily stored for further analysis if required.
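The core of "electronic de-paraffinization" can be sketched as a linear least-squares fit: each measured spectrum is modelled as baseline + tissue reference + pure paraffin spectrum, and the fitted paraffin component is subtracted. This is a toy EMSC variant under simplifying assumptions (three fixed basis spectra, no interference terms); the paper uses a modified extended model.

```python
# Toy EMSC-style paraffin subtraction: fit spectrum = c0*1 + c1*tissue_ref
# + c2*paraffin_ref by least squares (normal equations), subtract c2*paraffin.

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def remove_paraffin(spectrum, tissue_ref, paraffin_ref):
    """Return (paraffin-subtracted spectrum, fitted paraffin coefficient)."""
    basis = [[1.0] * len(spectrum), tissue_ref, paraffin_ref]
    A = [[sum(u * v for u, v in zip(bi, bj)) for bj in basis] for bi in basis]
    b = [sum(u * s for u, s in zip(bi, spectrum)) for bi in basis]
    c0, c1, c2 = solve3(A, b)
    corrected = [s - c2 * p for s, p in zip(spectrum, paraffin_ref)]
    return corrected, c2
```

The fitted paraffin coefficient c2 is also exactly the kind of quantity the study uses to compare how much paraffin each chemical method left behind.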
Fast-response and scattering-free polymer network liquid crystals for infrared light modulators
NASA Astrophysics Data System (ADS)
Fan, Yun-Hsing; Lin, Yi-Hsin; Ren, Hongwen; Gauza, Sebastian; Wu, Shin-Tson
2004-02-01
A fast-response and scattering-free homogeneously aligned polymer network liquid crystal (PNLC) light modulator is demonstrated at λ=1.55 μm wavelength. Light scattering in the near-infrared region is suppressed by optimizing the polymer concentration such that the network domain sizes are smaller than the wavelength. The strong polymer network anchoring helps the LC relax back quickly when the electric field is removed. As a result, the PNLC response time is ˜250× faster than that of the E44 LC mixture, although the threshold voltage is increased by ˜25×.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Albanese, K; Morris, R; Lakshmanan, M
Purpose: To accurately model different breast geometries using a tissue-equivalent phantom, and to classify these tissues in a coherent x-ray scatter imaging system. Methods: A breast phantom has been designed to assess the capability of a coded-aperture coherent x-ray scatter imaging system to classify different types of breast tissue (adipose, fibroglandular, tumor). The tissue-equivalent phantom was modeled as a hollow plastic cylinder containing multiple cylindrical and spherical inserts that can be positioned, rearranged, or removed to model different breast geometries. Each enclosure can be filled with a tissue-equivalent material or excised human tumors. In this study, beef and lard, placed inside 2-mm diameter plastic Nalgene containers, were used as surrogates for fibroglandular and adipose tissue, respectively. The phantom was imaged at 125 kVp, 40 mA for 10 seconds each with a 1-mm pencil beam. The raw data were reconstructed using a model-based reconstruction algorithm and yielded the location and form factor, or momentum transfer (q) spectrum, of the materials that were imaged. The measured material form factors were then compared to the ground truth measurements acquired by x-ray diffraction (XRD) imaging. Results: The tissue-equivalent phantom was found to accurately model different types of breast tissue by qualitatively comparing our measured form factors to those of adipose and fibroglandular tissue from the literature. Our imaging system has been able to define the location and composition of the various materials in the phantom. Conclusion: This work introduces a new tissue-equivalent phantom for testing and optimization of our coherent scatter imaging system for material classification. In future studies, the phantom will enable the use of a variety of materials, including excised human tissue specimens, in evaluating and optimizing our imaging system using pencil- and fan-beam geometries.
NASA Technical Reports Server (NTRS)
Petty, G. W.
1994-01-01
Microwave rain rate retrieval algorithms have most often been formulated in terms of the raw brightness temperatures observed by one or more channels of a satellite radiometer. Taken individually, single-channel brightness temperatures generally represent a near-arbitrary combination of positive contributions due to liquid water emission and negative contributions due to scattering by ice and/or visibility of the radiometrically cold ocean surface. Unfortunately, for a given rain rate, emission by liquid water below the freezing level and scattering by ice particles above the freezing level are rather loosely coupled in both a physical and statistical sense. Furthermore, microwave brightness temperatures may vary significantly (approx. 30-70 K) in response to geophysical parameters other than liquid water and precipitation. Because of these complications, physical algorithms which attempt to directly invert observed brightness temperatures have typically relied on the iterative adjustment of detailed microphysical profiles or cloud models, guided by explicit forward microwave radiative transfer calculations. In support of an effort to develop a significantly simpler and more efficient inversion-type rain rate algorithm, the physical information content of two linear transformations of single-frequency, dual-polarization brightness temperatures is studied: the normalized polarization difference P of Petty and Katsaros (1990, 1992), which is intended as a measure of footprint-averaged rain cloud transmittance for a given frequency; and a scattering index S (similar to the polarization-corrected temperature of Spencer et al., 1989) which is sensitive almost exclusively to ice.
A reverse Monte Carlo radiative transfer model is used to elucidate the qualitative response of these physically distinct single-frequency indices to idealized 3-dimensional rain clouds and to demonstrate their advantages over raw brightness temperatures, both as stand-alone indices of precipitation activity and as primary variables in physical, multichannel rain rate retrieval schemes. As a byproduct of the present analysis, it is shown that conventional plane-parallel analyses of the well-known footprint-filling problem for emission-based algorithms may in some cases give seriously misleading results.
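The two indices can be sketched as follows. P normalises the observed polarization difference by its clear-sky value; S measures the depression of the vertically polarised brightness temperature below a non-scattering expectation. The clear-sky baselines and the linear non-scattering relation used here are illustrative assumptions, not Petty's fitted coefficients.

```python
# Sketch of a normalized polarization difference P and a scattering index S
# for single-frequency, dual-polarization brightness temperatures.
# Baselines (tb_v_clear, tb_h_clear, tb_opaque) are assumed inputs.

def polarization_index(tb_v, tb_h, tb_v_clear, tb_h_clear):
    """P = 1 over clear ocean, tending to 0 as the scene becomes opaque."""
    return (tb_v - tb_h) / (tb_v_clear - tb_h_clear)

def scattering_index(tb_v, p, tb_v_clear, tb_opaque=273.0):
    """S = (expected non-scattering TB for this P) - observed TB_V, where
    the expectation interpolates linearly between the opaque-cloud and
    clear-sky values."""
    expected = tb_opaque + p * (tb_v_clear - tb_opaque)
    return expected - tb_v
```

A large positive S with moderate P would then indicate ice scattering depressing the brightness temperature well below what emission alone could explain, which is the separation of signals the paper exploits.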
Experimental and computational studies of electromagnetic cloaking at microwaves
NASA Astrophysics Data System (ADS)
Wang, Xiaohui
An invisibility cloak is a device that can hide a target by enclosing it from the incident radiation. This intriguing device has attracted a lot of attention since it was first implemented at a microwave frequency in 2006. However, the problems of existing cloak designs prevent them from being widely applied in practice. In this dissertation, we try to remove or alleviate three constraints on practical applications: lossy cloaking media, high implementation complexity, and the small size of hidden objects compared to the incident wavelength. To facilitate cloaking design and experimental characterization, several devices and relevant techniques for measuring the complex permittivity of dielectric materials at microwave frequencies are developed. In particular, a unique parallel-plate waveguide chamber has been set up to automatically map the electromagnetic (EM) field distribution for wave propagation through the resonator arrays and cloaking structures. The total scattering cross section of the cloaking structures was derived from the measured scattered field by using this apparatus. To overcome the adverse effects of lossy cloaking media, microwave cloaks composed of identical dielectric resonators made of low-loss ceramic materials are designed and implemented. The effective permeability dispersion was provided by tailoring the dielectric resonator filling fractions. The cloak performances were verified by full-wave simulation of true multi-resonator structures and experimental measurements of the fabricated prototypes. With the aim of reducing the implementation complexity caused by the use of metamaterials for cloaking, we proposed to design 2-D cylindrical cloaks and 3-D spherical cloaks using multi-layer coatings of ordinary dielectric materials (epsilon_r > 1). A genetic algorithm was employed to optimize the dielectric profiles of the cloaking shells to provide the minimum scattering cross sections of the cloaked targets.
The designed cloaks can be easily scaled to various operating frequencies. The simulation results show that the multi-layer cylindrical cloak essentially outperforms the similarly sized metamaterial-based cloak designed by using the transformation-optics-based reduced parameters. For the designed spherical cloak, the simulated scattering pattern shows that the total scattering cross section is greatly reduced. In addition, the scattering in specific directions could be significantly reduced. It is shown that the cloaking efficiency for larger targets could be improved by employing lossy materials in the shell. Finally, we propose to hide a target inside a waveguide structure filled with only epsilon-near-zero materials, which are easy to implement in practice. The cloaking efficiency of this method, which was found to increase for large targets, has been confirmed both theoretically and by simulations.
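The genetic-algorithm layer optimization described above can be sketched generically: each individual is a vector of layer permittivities (epsilon_r > 1), and the fitness is a scattering cross-section cost. In the real design the cost comes from a full-wave scattering solver; the quadratic cost in the usage below is a placeholder, and all names are illustrative.

```python
# Minimal genetic algorithm for optimizing multi-layer dielectric profiles:
# elitist selection, uniform crossover, single-coordinate Gaussian mutation.
import random

def genetic_minimize(cost, n_layers, pop=20, gens=40, lo=1.0, hi=10.0, seed=1):
    rng = random.Random(seed)
    population = [[rng.uniform(lo, hi) for _ in range(n_layers)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=cost)
        elite = population[: pop // 2]          # keep the best half
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            child = [rng.choice(pair) for pair in zip(a, b)]  # crossover
            i = rng.randrange(n_layers)                       # mutate one layer
            child[i] = min(hi, max(lo, child[i] + rng.gauss(0.0, 0.5)))
            children.append(child)
        population = elite + children
    return min(population, key=cost)
```

Swapping the placeholder cost for a routine that returns the simulated total scattering cross section of a layered cylinder or sphere turns this skeleton into the kind of design loop the dissertation describes.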
NASA Astrophysics Data System (ADS)
Kubo, S.; Nishiura, M.; Tanaka, K.; Moseev, D.; Ogasawara, S.; Shimozuma, T.; Yoshimura, Y.; Igami, H.; Takahashi, H.; Tsujimura, T. I.; Makino, R.
2016-06-01
High-power gyrotrons prepared for electron cyclotron heating at 77 GHz have been used for a collective Thomson scattering (CTS) study in LHD. Due to the difficulty of removing the fundamental and/or second harmonic resonance from the viewing line of sight, the background ECE was subtracted from the measured signal by modulating the probe beam power from a gyrotron. The separation of the scattering component from the background was performed successfully, taking into account the difference in response time between the high-energy and bulk components. A further separation was attempted by rapidly scanning the viewing beam across the probing beam. It was found that the intensity of the scattered spectrum corresponding to the bulk and high-energy components was almost proportional to the calculated scattering volume in the relatively low density region, while an appreciable background scattered component remained even in the off-volume configuration in some high density cases. The ray-trace code TRAVIS is used to estimate the change in the scattering volume due to the probing and receiving beam deflection effect.
On the Forward Scattering of Microwave Breast Imaging
Lui, Hoi-Shun; Fhager, Andreas; Persson, Mikael
2012-01-01
Microwave imaging for breast cancer detection has been of significant interest for the last two decades. Recent studies focus on solving the imaging problem using an inverse scattering approach. Efforts have mainly been focused on the development of the inverse scattering algorithms, experimental setup, antenna design and clinical trials. However, the success of microwave breast imaging also heavily relies on the quality of the forward data such that the tumor inside the breast volume is well illuminated. In this work, a numerical study of the forward scattering data is conducted. The scattering behavior of simple breast models under different polarization states and aspect angles of illumination are considered. Numerical results have demonstrated that better data contrast could be obtained when the breast volume is illuminated using cross-polarized components in linear polarization basis or the copolarized components in the circular polarization basis. PMID:22611371
On the Accuracy of Double Scattering Approximation for Atmospheric Polarization Computations
NASA Technical Reports Server (NTRS)
Korkin, Sergey V.; Lyapustin, Alexei I.; Marshak, Alexander L.
2011-01-01
Interpretation of multi-angle spectro-polarimetric data in remote sensing of atmospheric aerosols requires fast and accurate methods of solving the vector radiative transfer equation (VRTE). The single and double scattering approximations could provide an analytical framework for the inversion algorithms and are relatively fast; however, accuracy assessments of these approximations for aerosol atmospheres in the atmospheric window channels have been missing. This paper provides such an analysis for a vertically homogeneous aerosol atmosphere with weak and strong asymmetry of scattering. In both cases, the double scattering approximation gives a high-accuracy result (relative error approximately 0.2%) only for low optical depths of about 10^-2. As the error rapidly grows with optical thickness, a full VRTE solution is required for practical remote sensing analysis. It is shown that the scattering anisotropy is not important at low optical thicknesses for either the reflected or the transmitted polarization components of radiation.
Analysis on Vertical Scattering Signatures in Forestry with PolInSAR
NASA Astrophysics Data System (ADS)
Guo, Shenglong; Li, Yang; Zhang, Jingjing; Hong, Wen
2014-11-01
We apply an accurate topographic phase to the Freeman-Durden decomposition of polarimetric SAR interferometry (PolInSAR) data. The cross-correlation matrix obtained from PolInSAR observations can be decomposed into three scattering-mechanism matrices accounting for odd-bounce, double-bounce and volume scattering. We estimate the topographic phase based on the Random Volume over Ground (RVoG) model and use it as the initial input to the numerical method that solves for the decomposition parameters. In addition, the modified volume scattering model introduced by Y. Yamaguchi, rather than the pure random volume scattering proposed by Freeman-Durden, is applied to PolInSAR target decomposition in forest areas to better fit the measured data. This method can accurately retrieve the magnitude associated with each mechanism and its location along the vertical dimension. We test the algorithms with L- and P-band simulated data.
NASA Astrophysics Data System (ADS)
Hartling, K.; Ciungu, B.; Li, G.; Bentoumi, G.; Sur, B.
2018-05-01
Monte Carlo codes such as MCNP and Geant4 rely on a combination of physics models and evaluated nuclear data files (ENDF) to simulate the transport of neutrons through various materials and geometries. The grid representation used to represent the final-state scattering energies and angles associated with neutron scattering interactions can significantly affect the predictions of these codes. In particular, the default thermal scattering libraries used by MCNP6.1 and Geant4.10.3 do not accurately reproduce the ENDF/B-VII.1 model in simulations of the double-differential cross section for thermal neutrons interacting with hydrogen nuclei in a thin layer of water. However, agreement between model and simulation can be achieved within the statistical error by re-processing the ENDF/B-VII.1 thermal scattering libraries with the NJOY code. The structure of the thermal scattering libraries and the sampling algorithms in MCNP and Geant4 are also reviewed.
NASA Astrophysics Data System (ADS)
Huang, Wei; Yang, Limei; Lei, Lei; Li, Feng
2017-10-01
A microfluidic-based multi-angle laser scattering (MALS) system capable of acquiring scattering patterns of single particles is designed and demonstrated. The system includes a sheathless-nozzle microfluidic glass chip and an on-chip MALS unit aligned with the nozzle exit in the chip. The sizes and relative refractive indices (RI) of polystyrene (PS) microspheres were deduced with accuracies of 60 nm and 0.002, respectively, by comparing the experimental scattering patterns with theoretical ones. We measured the scattering patterns of waterborne parasites, i.e., Cryptosporidium parvum (C. parvum) and Giardia lamblia (G. lamblia), and some other representative species suspended in deionized water at a maximum flow rate of 12 μL/min. A maximum of 3000 waterborne parasites can be identified within one minute, with a mean accuracy higher than 96%, by classifying the distinctive scattering patterns with a support-vector-machine (SVM) algorithm. The system provides a promising tool for label-free detection of waterborne parasites and other biological contaminants.
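As a sketch of the classification step, the fragment below trains a small linear SVM on synthetic forward-peaked scattering patterns whose angular width stands in for particle size. The Pegasos-style sub-gradient solver is a generic stand-in, not the authors' SVM implementation, and every pattern shape and parameter here is invented for illustration.

```python
import math
import random

def make_pattern(width, rng, n_angles=36, noise=0.02):
    """Synthetic angular scattering pattern: a forward-peaked lobe whose width
    loosely mimics the size-dependent patterns measured by a MALS unit."""
    return [math.exp(-((i / n_angles) * math.pi / width) ** 2) + noise * rng.random()
            for i in range(n_angles)]

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style stochastic sub-gradient training of a linear SVM.
    Labels must be +1 / -1."""
    rng = random.Random(seed)
    w, b, t = [0.0] * len(X[0]), 0.0, 0
    for _ in range(epochs):
        for i in rng.sample(range(len(X)), len(X)):
            t += 1
            eta = 1.0 / (lam * t)
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            w = [(1.0 - eta * lam) * wj for wj in w]     # regularization shrink
            if margin < 1.0:                             # hinge-loss update
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
                b += eta * y[i]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0.0 else -1

# Two "species" distinguished by pattern width, as in SVM-based pattern sorting.
rng = random.Random(42)
X = [make_pattern(0.5, rng) for _ in range(30)] + [make_pattern(1.5, rng) for _ in range(30)]
y = [1] * 30 + [-1] * 30
w, b = train_linear_svm(X, y)
accuracy = sum(predict(w, b, x) == yi for x, yi in zip(X, y)) / len(y)
```

In practice one would use a mature SVM library and cross-validate on held-out patterns; the point here is only the pattern-to-label pipeline.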
Acoustic scattering reduction using layers of elastic materials
NASA Astrophysics Data System (ADS)
Dutrion, Cécile; Simon, Frank
2017-02-01
Making an object invisible to acoustic waves could prove useful for military applications or measurements in confined spaces. Different passive methods have been proposed in recent years to avoid acoustic scattering from rigid obstacles. These techniques are based exclusively on acoustic phenomena and use, for instance, multiple resonators or scatterers. This paper examines the possibility of designing an acoustic cloak using a bi-layer elastic cylindrical shell to eliminate the acoustic field scattered from a rigid cylinder hit by plane waves. This field depends on the dimensional and mechanical characteristics of the elastic layers. It is computed by a semi-analytical code modelling the vibrations of the coating under plane wave excitation. Optimization by genetic algorithm is performed to determine the characteristics of a bi-layer material minimizing the scattering. With air as the external fluid, realistic configurations of elastic coatings emerge, composed of a thick internal orthotropic layer and a thin external isotropic layer. These coatings are shown to enable scattering reduction at a precise frequency or over a larger frequency band.
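The optimization step above can be sketched as follows. The toy objective stands in for the semi-analytical vibro-acoustic code (far too involved to reproduce here), and the GA is a generic real-coded variant with elitist selection, blend crossover and Gaussian mutation, not the authors' exact operator set; bounds and the optimum location are invented.

```python
import random

def genetic_minimize(f, bounds, pop_size=40, gens=60, mut=0.1, seed=0):
    """Minimal real-coded GA: keep the top fifth as elites, breed children by
    blend crossover between elites, then apply bounded Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        elite = sorted(pop, key=f)[: pop_size // 5]
        children = list(elite)                       # elites survive unchanged
        while len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            a = rng.random()
            child = [a * u + (1.0 - a) * v for u, v in zip(p1, p2)]
            child = [min(max(c + rng.gauss(0.0, mut * (hi - lo)), lo), hi)
                     for c, (lo, hi) in zip(child, bounds)]
            children.append(child)
        pop = children
    return min(pop, key=f)

# Hypothetical stand-in objective: scattered level vs. two layer thicknesses,
# minimized at (2.0, 0.5).  The real objective is the computed scattered field.
scattered_level = lambda p: (p[0] - 2.0) ** 2 + (p[1] - 0.5) ** 2
best = genetic_minimize(scattered_level, [(0.0, 5.0), (0.0, 2.0)])
```

Each fitness evaluation in the real study is a full vibro-acoustic simulation, which is why the population and generation counts matter far more there than in this toy.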
NASA Technical Reports Server (NTRS)
Stark, Christopher C.; Schneider, Glenn; Weinberger, Alycia J.; Debes, John H.; Grady, Carol A.; Jang-Condell, Hannah; Kuchner, Marc J.
2014-01-01
New multi-roll coronagraphic images of the HD 181327 debris disk obtained using the Space Telescope Imaging Spectrograph on board the Hubble Space Telescope reveal the debris ring in its entirety at high signal-to-noise ratio and unprecedented spatial resolution. We present and apply a new multi-roll image processing routine to identify and further remove quasi-static point-spread function-subtraction residuals and quantify systematic uncertainties. We also use a new iterative image deprojection technique to constrain the true disk geometry and aggressively remove any surface brightness asymmetries that can be explained without invoking dust density enhancements/deficits. The measured empirical scattering phase function for the disk is more forward scattering than previously thought and is not well-fit by a Henyey-Greenstein function. The empirical scattering phase function varies with stellocentric distance, consistent with the expected radiation pressure-induced size segregation exterior to the belt. Within the belt, the empirical scattering phase function contradicts unperturbed debris ring models, suggesting the presence of an unseen planet. The radial profile of the flux density is degenerate with a radially varying scattering phase function; therefore estimates of the ring's true width and edge slope may be highly uncertain. We detect large scale asymmetries in the disk, consistent with either the recent catastrophic disruption of a body with mass greater than 1% the mass of Pluto, or disk warping due to strong interactions with the interstellar medium.
Corrections on energy spectrum and scatterings for fast neutron radiography at NECTAR facility
NASA Astrophysics Data System (ADS)
Liu, Shu-Quan; Bücherl, Thomas; Li, Hang; Zou, Yu-Bin; Lu, Yuan-Rong; Guo, Zhi-Yu
2013-11-01
Distortions caused by the neutron spectrum and by scattered neutrons are major problems in fast neutron radiography and should be addressed to improve image quality. This paper focuses on removing these image distortions and deviations for fast neutron radiography performed at the NECTAR facility of the research reactor FRM-II at Technische Universität München (TUM), Germany. The NECTAR energy spectrum is analyzed and established to correct for the influence of the neutron spectrum, and the Point Scattered Function (PScF), simulated with the Monte Carlo program MCNPX, is used to evaluate and remove scattering effects from the object, improving image quality. The analysis confirms the effectiveness of both corrections.
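The scattering correction can be illustrated with a one-dimensional fixed-point deconvolution: if the measured profile is the direct image plus the direct image convolved with a scatter kernel (a crude stand-in for the MCNPX-simulated PScF), the direct image can be recovered iteratively. The kernel and profile below are invented; the real correction operates on 2D radiographs.

```python
def convolve_same(signal, kernel):
    """Centered ('same') 1D convolution with zero padding at the edges."""
    c = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, kj in enumerate(kernel):
            idx = i + j - c
            if 0 <= idx < len(signal):
                acc += kj * signal[idx]
        out.append(acc)
    return out

def remove_scatter(measured, kernel, iters=30):
    """Fixed-point iteration for (I + K) d = m, where K is convolution with the
    scatter kernel.  Converges when the kernel's total weight is below 1."""
    direct = list(measured)
    for _ in range(iters):
        scatter = convolve_same(direct, kernel)
        direct = [m - s for m, s in zip(measured, scatter)]
    return direct

# Hypothetical direct profile and a low-amplitude scatter kernel (total weight 0.2).
true_direct = [0.0, 0.0, 0.0, 1.0, 3.0, 5.0, 3.0, 1.0, 0.0, 0.0, 0.0]
kernel = [0.05, 0.1, 0.05]
measured = [d + s for d, s in zip(true_direct, convolve_same(true_direct, kernel))]
recovered = remove_scatter(measured, kernel)
```

Since the iteration error contracts by roughly the kernel weight (0.2) per pass, thirty passes recover the direct profile to machine precision in this toy setting.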
Artificial bee colony algorithm for single-trial electroencephalogram analysis.
Hsu, Wei-Yen; Hu, Ya-Ping
2015-04-01
In this study, we propose an analysis system combined with feature selection to further improve the classification accuracy of single-trial electroencephalogram (EEG) data. Acquiring event-related brain potential data from the sensorimotor cortices, the system comprises artifact and background noise removal, feature extraction, feature selection, and feature classification. First, artifacts and background noise are removed automatically by means of independent component analysis and a surface Laplacian filter, respectively. Several potential features, such as band power, autoregressive model coefficients, and coherence and phase-locking value, are then extracted for subsequent classification. Next, the artificial bee colony (ABC) algorithm is used to select features from the aforementioned feature combination. Finally, the selected subfeatures are classified by a support vector machine. Compared against processing without artifact removal or feature selection, and against feature selection by a genetic algorithm, results on single-trial EEG data from 6 subjects indicate that the proposed system is promising and suitable for brain-computer interface applications. © EEG and Clinical Neuroscience Society (ECNS) 2014.
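The ABC search can be sketched on a continuous test objective, here a smooth surrogate for classification error as a function of two hypothetical feature weights (the paper actually selects binary feature subsets). The compact employed/onlooker/scout loop below follows the textbook ABC structure under illustrative parameter choices.

```python
import random

def abc_minimize(f, bounds, n_bees=20, iters=100, limit=20, seed=0):
    """Compact artificial bee colony: employed and onlooker bees perturb food
    sources toward random peers; a source that fails `limit` times in a row is
    abandoned and replaced by a scout.  The best solution ever seen is kept."""
    rng = random.Random(seed)
    rand_src = lambda: [rng.uniform(lo, hi) for lo, hi in bounds]
    src = [rand_src() for _ in range(n_bees)]
    fit = [f(s) for s in src]
    trials = [0] * n_bees
    best_f = min(fit)
    best_s = list(src[fit.index(best_f)])

    def try_neighbor(i):
        nonlocal best_f, best_s
        k, j = rng.randrange(n_bees), rng.randrange(len(bounds))
        cand = list(src[i])
        cand[j] += rng.uniform(-1.0, 1.0) * (src[i][j] - src[k][j])
        cand[j] = min(max(cand[j], bounds[j][0]), bounds[j][1])
        fc = f(cand)
        if fc < fit[i]:
            src[i], fit[i], trials[i] = cand, fc, 0
            if fc < best_f:
                best_f, best_s = fc, list(cand)
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_bees):                      # employed bee phase
            try_neighbor(i)
        leader = min(range(n_bees), key=lambda i: fit[i])
        for _ in range(n_bees):                      # onlookers favor the leader
            try_neighbor(leader if rng.random() < 0.5 else rng.randrange(n_bees))
        for i in range(n_bees):                      # scout phase
            if trials[i] > limit:
                src[i] = rand_src()
                fit[i] = f(src[i])
                trials[i] = 0
    return best_s, best_f

# Stand-in objective: error surface over two feature weights, minimum at (0.3, -0.7).
error_surface = lambda v: (v[0] - 0.3) ** 2 + (v[1] + 0.7) ** 2
weights, err = abc_minimize(error_surface, [(-2.0, 2.0), (-2.0, 2.0)])
```

For genuine feature selection the candidate encoding would be binary (feature in/out) and the objective a cross-validated classifier accuracy, which is substantially more expensive per evaluation.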
NASA Astrophysics Data System (ADS)
Wibisana, H.; Zainab, S.; Dara K., A.
2018-01-01
Chlorophyll-a is one of the parameters used to detect the presence of fish populations, as well as one of the parameters describing water quality. Chlorophyll-a concentrations have been extensively investigated, including mapping with remote sensing satellites. Mapping of chlorophyll concentration provides an overall picture of the condition of waters that are often used as fishing grounds, and remote sensing is a technological breakthrough for monitoring water conditions over broad areas. To obtain a complete picture of the aquatic conditions, an algorithm is needed that can estimate the chlorophyll concentration at points scattered across the capture-fisheries research area. Remote sensing algorithms have been widely used to detect chlorophyll content; for Landsat 8 imagery, the channels relevant to mapping chlorophyll concentrations are bands 4, 3 and 2. With multiple channels from Landsat 8 satellite imagery available for chlorophyll detection, an optimal algorithm can be formulated for estimating chlorophyll-a concentration in the study area. From these calculations, a suitable algorithm for the coast of Pasuruan was identified: the green channel gives a good correlation of R² = 0.853, with the algorithm Chlorophyll-a (mg/m³) = 0.093 R(0−)red − 3.7049. It can be concluded that the green channel correlates well with the chlorophyll concentration distributed along the coast of Pasuruan.
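A single-band algorithm of the form reported above (Chl = a·R + b) is just an ordinary least squares fit of concentration against band reflectance. The sketch below, with made-up reflectance/concentration match-up pairs, shows how the slope, intercept and R² would be derived; the coefficients it produces are not the study's.

```python
def fit_band_algorithm(reflectance, chl):
    """Ordinary least squares fit chl = a * R + b for one band, with R^2."""
    n = len(reflectance)
    mx = sum(reflectance) / n
    my = sum(chl) / n
    sxx = sum((x - mx) ** 2 for x in reflectance)
    sxy = sum((x - mx) * (y - my) for x, y in zip(reflectance, chl))
    a = sxy / sxx                         # slope of the band algorithm
    b = my - a * mx                       # intercept
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(reflectance, chl))
    ss_tot = sum((y - my) ** 2 for y in chl)
    return a, b, 1.0 - ss_res / ss_tot    # coefficient of determination R^2

# Hypothetical match-up data: band reflectance vs. measured chlorophyll-a (mg/m^3).
R = [0.02, 0.04, 0.05, 0.07, 0.09, 0.11]
C = [0.8, 1.5, 1.9, 2.6, 3.4, 4.1]
a, b, r2 = fit_band_algorithm(R, C)
```

The band (or band combination) with the highest R² against in-situ measurements is then adopted as the regional algorithm, as done for the Pasuruan coast above.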
Chlorophyll-a specific volume scattering function of phytoplankton.
Tan, Hiroyuki; Oishi, Tomohiko; Tanaka, Akihiko; Doerffer, Roland; Tan, Yasuhiro
2017-06-12
Chlorophyll-a specific light volume scattering functions (VSFs) of cultured phytoplankton across the visible spectrum are presented. The chlorophyll-a specific VSFs were determined by the linear least squares method from VSFs measured at different chlorophyll-a concentrations. We found clear variability between cultures in the spectral and angular shapes of the VSF. We also show that chlorophyll-a specific scattering significantly affects the spectral variation of the remote sensing reflectance, depending on the spectral shape of the scattering coefficient b. This result is useful for developing advanced ocean color remote sensing algorithms and for a deeper understanding of light in the sea.
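The least squares determination described above can be sketched per scattering angle: if each measured VSF is a background (water) term plus concentration times a chlorophyll-a specific term, a slope/intercept fit at every angle recovers both components. The data below are synthetic; real VSFs would come from the instrument, and this simple model ignores inter-culture variability.

```python
def chl_specific_vsf(chl, vsf):
    """Per-angle linear least squares for beta(theta; C) = beta_w(theta) + C * beta_chl(theta).
    `vsf[k][i]` is the measured VSF of culture k at angle index i.
    Returns (beta_w, beta_chl) as lists over the angle index."""
    n = len(chl)
    mc = sum(chl) / n
    scc = sum((c - mc) ** 2 for c in chl)
    beta_w, beta_chl = [], []
    for i in range(len(vsf[0])):
        col = [vsf[k][i] for k in range(n)]           # VSF at one angle vs. concentration
        mb = sum(col) / n
        slope = sum((c - mc) * (v - mb) for c, v in zip(chl, col)) / scc
        beta_chl.append(slope)                        # chlorophyll-a specific part
        beta_w.append(mb - slope * mc)                # concentration-independent part
    return beta_w, beta_chl

# Synthetic check: known water and chl-specific components at three angles.
true_w, true_chl = [0.3, 0.1, 0.05], [0.8, 0.4, 0.2]
conc = [0.5, 1.0, 2.0, 4.0]
vsf = [[w + c * s for w, s in zip(true_w, true_chl)] for c in conc]
est_w, est_chl = chl_specific_vsf(conc, vsf)
```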
Hyperspectral retrieval of surface reflectances: A new scheme
NASA Astrophysics Data System (ADS)
Thelen, Jean-Claude; Havemann, Stephan
2013-05-01
Here, we present a new prototype algorithm for the simultaneous retrieval of atmospheric profiles (temperature, humidity, ozone and aerosol) and the surface reflectance from hyperspectral radiance measurements obtained from airborne or spaceborne hyperspectral imagers. The new scheme consists of a fast radiative transfer code based on empirical orthogonal functions (EOFs), in conjunction with a 1D-Var retrieval scheme. The inclusion of an 'exact' scattering code based on spherical harmonics allows for an accurate treatment of Rayleigh scattering and scattering by aerosols, water droplets and ice crystals, thus making it possible to also retrieve cloud and aerosol optical properties, although here we concentrate on non-cloudy scenes.
Application of the Finite Element Method in Atomic and Molecular Physics
NASA Technical Reports Server (NTRS)
Shertzer, Janine
2007-01-01
The finite element method (FEM) is a numerical algorithm for solving second order differential equations. It has been successfully used to solve many problems in atomic and molecular physics, including bound state and scattering calculations. To illustrate the diversity of the method, we present here details of two applications. First, we calculate the non-adiabatic dipole polarizability of H2+ by directly solving the first and second order equations of perturbation theory with FEM. In the second application, we calculate the scattering amplitude for e-H scattering (without partial wave analysis) by reducing the Schrödinger equation to a set of integro-differential equations, which are then solved with FEM.
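A minimal worked instance of the method: linear-element FEM for the model problem -u'' = f on [0, 1] with homogeneous Dirichlet conditions. This is far simpler than the atomic-physics applications above, but it exhibits the same assemble-and-solve pattern; the mass-lumped load vector is an approximation chosen for brevity.

```python
import math

def fem_poisson_1d(f, n):
    """Linear finite elements on a uniform mesh for -u'' = f, u(0)=u(1)=0.
    The stiffness matrix is tridiag(-1, 2, -1)/h; the load vector uses the
    lumped approximation b_i = h * f(x_i).  Solved with the Thomas algorithm."""
    h = 1.0 / n
    x = [i * h for i in range(n + 1)]
    diag = [2.0 / h] * (n - 1)
    off = [-1.0 / h] * (n - 2)              # identical sub- and super-diagonal
    rhs = [h * f(x[i + 1]) for i in range(n - 1)]
    for i in range(1, n - 1):               # forward elimination
        m = off[i - 1] / diag[i - 1]
        diag[i] -= m * off[i - 1]
        rhs[i] -= m * rhs[i - 1]
    u = [0.0] * (n - 1)
    u[-1] = rhs[-1] / diag[-1]
    for i in range(n - 3, -1, -1):          # back substitution
        u[i] = (rhs[i] - off[i] * u[i + 1]) / diag[i]
    return x, [0.0] + u + [0.0]             # append boundary values

# Model check: f = pi^2 sin(pi x) has the exact solution u = sin(pi x).
x, u = fem_poisson_1d(lambda t: math.pi ** 2 * math.sin(math.pi * t), 50)
err = max(abs(ui - math.sin(math.pi * xi)) for xi, ui in zip(x, u))
```

The error decreases as O(h²) on refinement, the expected rate for linear elements on this problem.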
NASA Astrophysics Data System (ADS)
Chen, Wei; Guo, Li-xin; Li, Jiang-ting
2017-04-01
This study analyzes the scattering characteristics of obliquely incident electromagnetic (EM) waves in a time-varying plasma sheath, using the finite-difference time-domain algorithm. Following the empirical formula for the collision frequency in a plasma sheath, the plasma frequency, temperature, and pressure are assumed to rise exponentially with time. Scattering of EM waves is examined by calculating the radar cross section (RCS) of the time-varying plasma, and the time dependence of the RCS is summarized for the L and S bands.
NASA Astrophysics Data System (ADS)
Chen, X. W.; Zhao, C. Y.; Wang, B. X.
2018-05-01
Thermal barrier coatings are porous materials commonly coated on the surfaces of devices operating at high temperature and designed for heat insulation. This study presents a comprehensive investigation of the microstructural effect on the radiative scattering coefficient and asymmetry factor of anisotropic thermal barrier coatings. Based on the quartet structure generation set algorithm, the finite-difference time-domain method, which accounts for the wave nature of light, is applied to calculate the angular scattering intensity distribution of complicated random microstructures. Combining the Monte Carlo method with particle swarm optimization, the asymmetry factor, scattering coefficient and absorption coefficient are retrieved simultaneously. The retrieved radiative properties are identified from the angular scattering intensity distributions under different pore shapes, implicitly accounting for dependent scattering and anisotropic pore shape. It is found that microstructure significantly affects the radiative properties of thermal barrier coatings. Compared with spherical pores, irregular anisotropic pore shapes reduce the forward scattering peak. The method used in this paper can also be applied to other porous media, providing a framework for further quantitative study of porous media.
Analysis on the electromagnetic scattering properties of crops at multi-band
NASA Astrophysics Data System (ADS)
Wu, Tao; Wu, Zhensen; Liu, Xiaoyi
2014-12-01
The vector radiative transfer (VRT) theory for active microwave remote sensing and the generalized Rayleigh-Gans (GRG) approximation are applied in this study, and an iterative algorithm is used to solve the RT equations, yielding numerical results for the zero-order and first-order solutions. The Michigan Microwave Canopy Scattering (MIMICS) model is simplified to adapt it to a crop model. By analyzing the bistatic and backscattering properties of a layer of soybean or wheat, consisting of stems and leaves, over different underlying soil surfaces at multiple bands (P, L, S, X and Ku), we obtain the microwave scattering mechanisms of the crop components and the effect of the underlying ground on total crop scattering. Stems and leaves are modeled as needles and circular disks, respectively. The results are compared with literature data to verify the calculation method. Numerical results show that multi-band crop microwave scattering properties vary with scattering angle, azimuth angle and the moisture of vegetation and soil, offering information needed for the design of future bistatic radar systems for crop sensing applications.
A fast, robust algorithm for power line interference cancellation in neural recording.
Keshtkaran, Mohammad Reza; Yang, Zhi
2014-04-01
Power line interference may severely corrupt neural recordings at 50/60 Hz and harmonic frequencies. The interference is usually non-stationary and can vary in frequency, amplitude and phase. To retrieve the gamma-band oscillations at the contaminated frequencies, it is desired to remove the interference without compromising the actual neural signals at the interference frequency bands. In this paper, we present a robust and computationally efficient algorithm for removing power line interference from neural recordings. The algorithm includes four steps. First, an adaptive notch filter is used to estimate the fundamental frequency of the interference. Subsequently, based on the estimated frequency, harmonics are generated by using discrete-time oscillators, and then the amplitude and phase of each harmonic are estimated by using a modified recursive least squares algorithm. Finally, the estimated interference is subtracted from the recorded data. The algorithm does not require any reference signal, and can track the frequency, phase and amplitude of each harmonic. When benchmarked with other popular approaches, our algorithm performs better in terms of noise immunity, convergence speed and output signal-to-noise ratio (SNR). While minimally affecting the signal bands of interest, the algorithm consistently yields fast convergence (<100 ms) and substantial interference rejection (output SNR >30 dB) in different conditions of interference strengths (input SNR from -30 to 30 dB), power line frequencies (45-65 Hz) and phase and amplitude drifts. In addition, the algorithm features a straightforward parameter adjustment since the parameters are independent of the input SNR, input signal power and the sampling rate. A hardware prototype was fabricated in a 65 nm CMOS process and tested. Software implementation of the algorithm has been made available for open access at https://github.com/mrezak/removePLI. 
The proposed algorithm features a highly robust operation, fast adaptation to interference variations, significant SNR improvement, low computational complexity and memory requirement and straightforward parameter adjustment. These features render the algorithm suitable for wearable and implantable sensor applications, where reliable and real-time cancellation of the interference is desired.
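A much-reduced sketch of the cancellation idea: with the interference frequency assumed known (the published algorithm estimates it with an adaptive notch filter), two adaptive weights on quadrature references track the amplitude and phase of a 50 Hz interferer and subtract it. This is a single-harmonic LMS stand-in for the paper's recursive-least-squares harmonic tracker; the signal, frequencies and step size are all illustrative.

```python
import math

def cancel_line_interference(x, f0, fs, mu=0.01):
    """LMS canceller: weights on sin/cos references at f0 adapt to the
    interferer's amplitude and phase; the output is the residual signal."""
    w_s, w_c = 0.0, 0.0
    out = []
    for n, xn in enumerate(x):
        rs = math.sin(2.0 * math.pi * f0 * n / fs)
        rc = math.cos(2.0 * math.pi * f0 * n / fs)
        e = xn - (w_s * rs + w_c * rc)   # residual after removing the estimate
        w_s += 2.0 * mu * e * rs         # LMS weight updates
        w_c += 2.0 * mu * e * rc
        out.append(e)
    return out

# Hypothetical recording: 0.1-amplitude 5 Hz "neural" tone + 1.0-amplitude 50 Hz hum.
fs = 1000.0
sig = [0.1 * math.sin(2.0 * math.pi * 5.0 * n / fs) for n in range(4000)]
hum = [1.0 * math.sin(2.0 * math.pi * 50.0 * n / fs + 0.8) for n in range(4000)]
clean = cancel_line_interference([s + h for s, h in zip(sig, hum)], 50.0, fs)
```

After the initial adaptation transient the residual is essentially the 5 Hz tone; tracking harmonics and a drifting fundamental, as the full algorithm does, requires one oscillator and weight pair per harmonic plus the frequency estimator.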
Retrieval of Soil Moisture and Roughness from the Polarimetric Radar Response
NASA Technical Reports Server (NTRS)
Sarabandi, Kamal; Ulaby, Fawwaz T.
1997-01-01
The main objective of this investigation was the characterization of soil moisture using imaging radars. In order to accomplish this task, a number of intermediate steps had to be undertaken. In this investigation, the theoretical, numerical, and experimental aspects of electromagnetic scattering from natural surfaces were considered, with emphasis on remote sensing of soil moisture. In the general case, the microwave backscatter from natural surfaces is mainly influenced by three major factors: (1) the roughness statistics of the soil surface, (2) soil moisture content, and (3) soil surface cover. First, the scattering problem for bare-soil surfaces was considered and a hybrid model that relates the radar backscattering coefficient to soil moisture and surface roughness was developed. This model is based on extensive experimental measurements of the radar polarimetric backscatter response of bare soil surfaces at microwave frequencies, over a wide range of moisture conditions and roughness scales, in conjunction with existing theoretical surface scattering models in limiting cases (small perturbation, physical optics, and geometrical optics models). A simple inversion algorithm capable of providing accurate estimates of soil moisture content and surface rms height from single-frequency multi-polarization radar observations was also developed. The accuracy of the model and its inversion algorithm is demonstrated using independent data sets. Next, the hybrid model for bare-soil surfaces is made fully polarimetric by incorporating the parameters of the co- and cross-polarized phase difference into the model. Experimental data in conjunction with numerical simulations are used to relate the soil moisture content and surface roughness to the phase difference statistics. For this purpose, a novel numerical scattering simulation for inhomogeneous dielectric random surfaces was developed.
Finally the scattering problem of short vegetation cover above a rough soil surface was considered. A general scattering model for grass-blades of arbitrary cross section was developed and incorporated in a first order random media model. The vegetation model and the bare-soil model are combined and the accuracy of the combined model is evaluated against experimental observations from a wheat field over the entire growing season. A complete set of ground-truth data and polarimetric backscatter data were collected. Also an inversion algorithm for estimating soil moisture and surface roughness from multi-polarized multi-frequency observations of vegetation-covered ground is developed.
Cassani, Raymundo; Falk, Tiago H.; Fraga, Francisco J.; Kanda, Paulo A. M.; Anghinah, Renato
2014-01-01
Over the last decade, electroencephalography (EEG) has emerged as a reliable tool for the diagnosis of cortical disorders such as Alzheimer's disease (AD). EEG signals, however, are susceptible to several artifacts, such as ocular, muscular, movement, and environmental. To overcome this limitation, existing diagnostic systems commonly depend on experienced clinicians to manually select artifact-free epochs from the collected multi-channel EEG data. Manual selection, however, is a tedious and time-consuming process, rendering the diagnostic system “semi-automated.” Notwithstanding, a number of EEG artifact removal algorithms have been proposed in the literature. The (dis)advantages of using such algorithms in automated AD diagnostic systems, however, have not been documented; this paper aims to fill this gap. Here, we investigate the effects of three state-of-the-art automated artifact removal (AAR) algorithms (both alone and in combination with each other) on AD diagnostic systems based on four different classes of EEG features, namely, spectral, amplitude modulation rate of change, coherence, and phase. The three AAR algorithms tested are statistical artifact rejection (SAR), blind source separation based on second order blind identification and canonical correlation analysis (BSS-SOBI-CCA), and wavelet enhanced independent component analysis (wICA). Experimental results based on 20-channel resting-awake EEG data collected from 59 participants (20 patients with mild AD, 15 with moderate-to-severe AD, and 24 age-matched healthy controls) showed the wICA algorithm alone outperforming other enhancement algorithm combinations across three tasks: diagnosis (control vs. mild vs. moderate), early detection (control vs. mild), and disease progression (mild vs. moderate), thus opening the doors for fully-automated systems that can assist clinicians with early detection of AD, as well as disease severity progression assessment. PMID:24723886
Reference-Free Removal of EEG-fMRI Ballistocardiogram Artifacts with Harmonic Regression
Krishnaswamy, Pavitra; Bonmassar, Giorgio; Poulsen, Catherine; Pierce, Eric T; Purdon, Patrick L.; Brown, Emery N.
2016-01-01
Combining electroencephalogram (EEG) recording and functional magnetic resonance imaging (fMRI) offers the potential for imaging brain activity with high spatial and temporal resolution. This potential remains limited by the significant ballistocardiogram (BCG) artifacts induced in the EEG by cardiac pulsation-related head movement within the magnetic field. We model the BCG artifact using a harmonic basis, pose the artifact removal problem as a local harmonic regression analysis, and develop an efficient maximum likelihood algorithm to estimate and remove BCG artifacts. Our analysis paradigm accounts for time-frequency overlap between the BCG artifacts and neurophysiologic EEG signals, and tracks the spatiotemporal variations in both the artifact and the signal. We evaluate performance on: simulated oscillatory and evoked responses constructed with realistic artifacts; actual anesthesia-induced oscillatory recordings; and actual visual evoked potential recordings. In each case, the local harmonic regression analysis effectively removes the BCG artifacts and recovers the neurophysiologic EEG signals. We further show that our algorithm outperforms commonly used reference-based and component analysis techniques, particularly under low SNR conditions, in the presence of significant time-frequency overlap between the artifact and the signal, and/or under large spatiotemporal variations in the BCG. Because our algorithm does not require reference signals and has low computational complexity, it offers a practical tool for removing BCG artifacts from EEG data recorded in combination with fMRI. PMID:26151100
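A single-window sketch of the harmonic regression step: build sine/cosine regressors at the cardiac fundamental (assumed known here; the paper estimates it) and its harmonics, solve the normal equations, and subtract the fitted artifact. The full method applies this locally over sliding windows with a maximum likelihood treatment of the noise; the EEG and artifact below are synthetic.

```python
import math

def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            fac = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= fac * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def remove_harmonic_artifact(x, f0, fs, n_harm=3):
    """Least-squares fit of sin/cos at f0 and its harmonics; returns x minus fit."""
    N = len(x)
    basis = []
    for k in range(1, n_harm + 1):
        basis.append([math.sin(2.0 * math.pi * k * f0 * n / fs) for n in range(N)])
        basis.append([math.cos(2.0 * math.pi * k * f0 * n / fs) for n in range(N)])
    G = [[sum(bi[n] * bj[n] for n in range(N)) for bj in basis] for bi in basis]
    rhs = [sum(bi[n] * x[n] for n in range(N)) for bi in basis]
    coef = solve_linear(G, rhs)
    fit = [sum(c * b[n] for c, b in zip(coef, basis)) for n in range(N)]
    return [xn - fn for xn, fn in zip(x, fit)]

# Synthetic data: a 10 Hz alpha tone plus a BCG-like artifact at 1.2 Hz and a harmonic.
fs, N = 250.0, 2500
eeg = [0.1 * math.sin(2.0 * math.pi * 10.0 * n / fs) for n in range(N)]
bcg = [1.0 * math.sin(2.0 * math.pi * 1.2 * n / fs + 0.4)
       + 0.5 * math.sin(2.0 * math.pi * 2.4 * n / fs + 1.1) for n in range(N)]
cleaned = remove_harmonic_artifact([e + a for e, a in zip(eeg, bcg)], 1.2, fs)
```

Because the artifact lies exactly in the span of the harmonic basis and the 10 Hz tone is nearly orthogonal to it over this window, the subtraction leaves the simulated EEG essentially intact.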
3D Compton scattering imaging and contour reconstruction for a class of Radon transforms
NASA Astrophysics Data System (ADS)
Rigaud, Gaël; Hahn, Bernadette N.
2018-07-01
Compton scattering imaging is a nascent concept arising from the current development of highly sensitive energy-resolving detectors, and is devoted to exploiting the scattered radiation to image the electron density of the studied medium. Such detectors are able to collect incoming photons resolved in energy. This paper introduces potential 3D modalities in Compton scattering imaging (CSI). The associated measured data are modeled using a class of generalized Radon transforms. The study of this class of operators leads to a filtered back-projection type algorithm that preserves the contours of the sought-for function and offers a fast approach to partially solving the associated inverse problems. Simulation results including Poisson noise demonstrate the potential of this new imaging concept as well as the proposed image reconstruction approach.
A Scatter-Based Prototype Framework and Multi-Class Extension of Support Vector Machines
Jenssen, Robert; Kloft, Marius; Zien, Alexander; Sonnenburg, Sören; Müller, Klaus-Robert
2012-01-01
We provide a novel interpretation of the dual of support vector machines (SVMs) in terms of scatter with respect to class prototypes and their mean. As a key contribution, we extend this framework to multiple classes, providing a new joint Scatter SVM algorithm, at the level of its binary counterpart in the number of optimization variables. This enables us to implement computationally efficient solvers based on sequential minimal and chunking optimization. As a further contribution, the primal problem formulation is developed in terms of regularized risk minimization and the hinge loss, revealing the score function to be used in the actual classification of test patterns. We investigate Scatter SVM properties related to generalization ability, computational efficiency, sparsity and sensitivity maps, and report promising results. PMID:23118845
Diffraction data of core-shell nanoparticles from an X-ray free electron laser
Li, Xuanxuan; Chiu, Chun-Ya; Wang, Hsiang-Ju; ...
2017-04-11
X-ray free-electron lasers provide novel opportunities to conduct single particle analysis on nanoscale particles. Coherent diffractive imaging experiments were performed at the Linac Coherent Light Source (LCLS), SLAC National Laboratory, exposing single inorganic core-shell nanoparticles to femtosecond hard-X-ray pulses. Each facetted nanoparticle consisted of a crystalline gold core and a differently shaped palladium shell. Scattered intensities were observed up to about 7 nm resolution. Analysis of the scattering patterns revealed the size distribution of the samples, which is consistent with that obtained from direct real-space imaging by electron microscopy. Furthermore, scattering patterns resulting from single particles were selected and compiled into a dataset which can be valuable for algorithm developments in single particle scattering research.
Bor, E; Turduev, M; Kurt, H
2016-08-01
Photonic structure designs based on optimization algorithms provide superior properties compared to those using intuition-based approaches. In the present study, we numerically and experimentally demonstrate subwavelength focusing of light using wavelength scale absorption-free dielectric scattering objects embedded in an air background. An optimization algorithm based on differential evolution integrated into the finite-difference time-domain method was applied to determine the locations of each circular dielectric object with a constant radius and refractive index. The multiobjective cost function defined inside the algorithm ensures strong focusing of light with low intensity side lobes. The temporal and spectral responses of the designed compact photonic structure provided a beam spot size in air with a full width at half maximum value of 0.19λ, where λ is the wavelength of light. The experiments were carried out in the microwave region to verify numerical findings, and very good agreement between the two approaches was found. The subwavelength light focusing is associated with a strong interference effect due to nonuniformly arranged scatterers and an irregular index gradient. Improving the focusing capability of optical elements by surpassing the diffraction limit of light is of paramount importance in optical imaging, lithography, data storage, and strong light-matter interaction.
NASA Astrophysics Data System (ADS)
Somers, Ben; Bertrand, Alexander
2016-12-01
Objective. Chronic, 24/7 EEG monitoring requires the use of highly miniaturized EEG modules, which only measure a few EEG channels over a small area. For improved spatial coverage, a wireless EEG sensor network (WESN) can be deployed, consisting of multiple EEG modules, which interact through short-distance wireless communication. In this paper, we aim to remove eye blink artifacts in each EEG channel of a WESN by optimally exploiting the correlation between EEG signals from different modules, under stringent communication bandwidth constraints. Approach. We apply a distributed canonical correlation analysis (CCA-)based algorithm, in which each module only transmits an optimal linear combination of its local EEG channels to the other modules. The method is validated on both synthetic and real EEG data sets, with emulated wireless transmissions. Main results. While strongly reducing the amount of data that is shared between nodes, we demonstrate that the algorithm achieves the same eye blink artifact removal performance as the equivalent centralized CCA algorithm, which is at least as good as other state-of-the-art multi-channel algorithms that require a transmission of all channels. Significance. Due to their potential for extreme miniaturization, WESNs are viewed as an enabling technology for chronic EEG monitoring. However, multi-channel analysis is hampered in WESNs due to the high energy cost for wireless communication. This paper shows that multi-channel eye blink artifact removal is possible with a significantly reduced wireless communication between EEG modules.
The atmospheric correction algorithm for HY-1B/COCTS
NASA Astrophysics Data System (ADS)
He, Xianqiang; Bai, Yan; Pan, Delu; Zhu, Qiankun
2008-10-01
China launched its second ocean color satellite, HY-1B, on 11 April 2007, carrying two remote sensors. The Chinese Ocean Color and Temperature Scanner (COCTS) is the main sensor on HY-1B; it has not only eight visible and near-infrared wavelength bands similar to those of SeaWiFS, but also two additional thermal infrared bands to measure sea surface temperature. COCTS therefore has broad application potential in areas such as fishery resource protection and development, coastal monitoring and management, and marine pollution monitoring. Atmospheric correction is the key to quantitative ocean color remote sensing. In this paper, the operational atmospheric correction algorithm for HY-1B/COCTS has been developed. First, based on PCOART, a vector radiative transfer numerical model of the coupled ocean-atmosphere system, exact look-up tables (LUTs) for Rayleigh scattering, aerosol scattering, and atmospheric diffuse transmittance were generated for HY-1B/COCTS. Second, using the generated LUTs, the operational atmospheric correction algorithm for HY-1B/COCTS was developed. The algorithm has been validated using simulated spectral data generated by PCOART, and the results show that the error of the water-leaving reflectance retrieved by this algorithm is less than 0.0005, which meets the accuracy requirement of atmospheric correction for ocean color remote sensing. Finally, the algorithm was applied to HY-1B/COCTS remote sensing data; the retrieved water-leaving radiances are consistent with Aqua/MODIS results, and the corresponding ocean color remote sensing products have been generated, including chlorophyll concentration and total suspended particulate matter concentration.
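The bookkeeping at the heart of such a LUT-based correction can be sketched as follows (schematic only; the function name and the simple subtractive form are illustrative assumptions — the operational algorithm interpolates the Rayleigh, aerosol, and transmittance terms from the precomputed LUTs and handles gas absorption and band-specific details):

```python
def water_leaving(rho_toa, rho_rayleigh, rho_aerosol, t_diffuse):
    """Schematic atmospheric-correction step: subtract the Rayleigh and
    aerosol reflectances (looked up from precomputed LUTs) from the
    top-of-atmosphere reflectance, then divide by the atmospheric
    diffuse transmittance to obtain the water-leaving reflectance.
    Ignores glint, whitecaps, and gas absorption."""
    return (rho_toa - rho_rayleigh - rho_aerosol) / t_diffuse
```

For example, with a TOA reflectance of 0.05, Rayleigh 0.02, aerosol 0.01, and diffuse transmittance 0.8, the retrieved water-leaving reflectance is 0.025.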
Spin-analyzed SANS for soft matter applications
NASA Astrophysics Data System (ADS)
Chen, W. C.; Barker, J. G.; Jones, R.; Krycka, K. L.; Watson, S. M.; Gagnon, C.; Perevozchivoka, T.; Butler, P.; Gentile, T. R.
2017-06-01
The small angle neutron scattering (SANS) of nearly Q-independent nuclear spin-incoherent scattering from hydrogen present in most soft matter and biology samples may raise an issue in structure determination in certain soft matter applications. This is true at high wave vector transfer Q where coherent scattering is much weaker than the nearly Q-independent spin-incoherent scattering background. Polarization analysis is capable of separating coherent scattering from spin-incoherent scattering, hence potentially removing the nearly Q-independent background. Here we demonstrate SANS polarization analysis in conjunction with the time-of-flight technique for separation of coherent and nuclear spin-incoherent scattering for a sample of silver behenate back-filled with light water. We describe a complete procedure for SANS polarization analysis for separating coherent from incoherent scattering for soft matter samples that show inelastic scattering. Polarization efficiency correction and subsequent separation of the coherent and incoherent scattering have been done with and without a time-of-flight technique for direct comparisons. In addition, we have accounted for the effect of multiple scattering from light water to determine the contribution of nuclear spin-incoherent scattering in both the spin flip channel and non-spin flip channel when performing SANS polarization analysis. We discuss the possible gain in the signal-to-noise ratio for the measured coherent scattering signal using polarization analysis with the time-of-flight technique compared with routine unpolarized SANS measurements.
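The separation step that polarization analysis enables can be illustrated with the standard idealized relations (assuming perfect polarization efficiency and no multiple scattering — the paper's procedure corrects for both): nuclear spin-incoherent scattering appears 2/3 in the spin-flip channel and 1/3 in the non-spin-flip channel, while coherent scattering is entirely non-spin-flip.

```python
def separate_sans(nsf, sf):
    """Split polarized-SANS intensities into coherent and nuclear
    spin-incoherent parts, using the idealized textbook relations
    (perfect polarization, no multiple scattering)."""
    incoherent = 1.5 * sf        # I_inc = (3/2) * I_SF
    coherent = nsf - 0.5 * sf    # I_coh = I_NSF - (1/2) * I_SF
    return coherent, incoherent
```

E.g., a sample with coherent intensity 4 and incoherent intensity 3 yields NSF = 4 + 3/3 = 5 and SF = (2/3)*3 = 2, and the two relations recover (4, 3).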
Open-cycle OTEC system performance analysis. [Claude cycle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewandowski, A.A.; Olson, D.A.; Johnson, D.H.
1980-10-01
An algorithm developed to calculate the performance of Claude-cycle ocean thermal energy conversion (OTEC) systems is described. The algorithm treats each component of the system separately and then interfaces them to form a complete system, allowing a component to be changed without changing the rest of the algorithm. Two components that are subject to change are the evaporator and condenser. For this study we developed mathematical models of a channel-flow evaporator and of both a horizontal-jet and a spray direct-contact condenser. The algorithm was then programmed to run on SERI's CDC 7600 computer and used to calculate the effect on performance of deaerating the warm and cold water streams before entering the evaporator and condenser, respectively. This study indicates that there is no advantage to removing air from these streams compared with removing the air from the condenser.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rana, R; Bednarek, D; Rudin, S
Purpose: To demonstrate the effectiveness of an anti-scatter grid artifact minimization method by removing the grid-line artifacts for three different grids when used with a high resolution CMOS detector. Method: Three different stationary x-ray grids were used with a high resolution CMOS x-ray detector (Dexela 1207, 75 µm pixels, sensitive area 11.5 cm × 6.5 cm) to image a simulated artery block phantom (Nuclear Associates, Stenosis/Aneurysm Artery Block 76-705) combined with a frontal head phantom used as the scattering source. The x-ray parameters were 98 kVp, 200 mA, and 16 ms for all grids. With each of the three grids, two images were acquired: the first a scatter-free flat field including the grid, and the second of the object with the grid, which may still include some transmitted scatter. Because scatter has a low spatial frequency distribution, it was represented by an estimated constant value as an initial approximation and subtracted from the image of the object with the grid before dividing by an average frame of the scatter-free grid flat field. The constant value was iteratively changed to minimize the residual grid-line artifact. This artifact minimization process was used for all three grids. Results: Grid-line artifacts were successfully eliminated in the final images for all three grids. The image contrast and CNR were compared before and after the correction, and also with those from the image of the object when no grid was used. The corrected images showed an increase in CNR of approximately 28%, 33%, and 25% for the three grids, compared with the images acquired with no grid at all. Conclusion: Anti-scatter grid artifact minimization works effectively irrespective of the specifications of the grid when it is used with a high spatial resolution detector. Partial support from NIH Grant R01-EB002873 and Toshiba Medical Systems Corp.
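The flat-field division with an iteratively tuned scatter constant can be sketched in one dimension (a toy version: the candidate search, the variance metric for residual grid contrast, and the uniform object are simplifying assumptions of this sketch, not the actual 2-D method):

```python
def degrid(img, flat, scatter_candidates):
    """Grid-line artifact minimization, 1-D sketch: for each candidate
    constant scatter estimate s, form (img - s) / flat and keep the s
    that minimizes the residual variation (a stand-in for residual
    grid-line contrast)."""
    best = None
    for s in scatter_candidates:
        corr = [(p - s) / f for p, f in zip(img, flat)]
        mean = sum(corr) / len(corr)
        resid = sum((c - mean) ** 2 for c in corr)
        if best is None or resid < best[0]:
            best = (resid, s, corr)
    return best[1], best[2]
```

With a uniform object, the correct scatter constant makes the grid shading divide out exactly, driving the residual metric to zero.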
Comparison of Classical and Lazy Approach in SCG Compiler
NASA Astrophysics Data System (ADS)
Jirák, Ota; Kolář, Dušan
2011-09-01
Existing parsing methods for scattered context grammars usually expand nonterminals deep in the pushdown. This expansion is implemented using either a linked list or some kind of auxiliary pushdown. This paper describes a parsing algorithm for LL(1) scattered context grammars. The algorithm merges two principles. The first is a table-driven parsing method commonly used for parsing context-free grammars. The second is delayed (lazy) execution as used in functional programming. The main part of this paper is a proof of equivalence between the common principle (the whole rule is applied at once) and our approach (execution of the rules is delayed). As a result, this approach works with the pushdown top only. In most cases, the second approach is faster than the first. Finally, future work is discussed.
Yu, Haitong; Liu, Dong; Duan, Yuanyuan; Wang, Xiaodong
2014-04-07
Opacified aerogels are particulate thermal insulating materials in which micrometric opacifier mineral grains are surrounded by silica aerogel nanoparticles. A geometric model was developed to characterize the spectral properties of such microsize grains surrounded by much smaller particles. The model represents the material's microstructure with the spherical opacifier's spectral properties calculated using the multi-sphere T-matrix (MSTM) algorithm. The results are validated by comparing the measured reflectance of an opacified aerogel slab against the value predicted using the discrete ordinate method (DOM) based on calculated optical properties. The results suggest that the large particles embedded in the nanoparticle matrices show different scattering and absorption properties from the single scattering condition and that the MSTM and DOM algorithms are both useful for calculating the spectral and radiative properties of this particulate system.
Development of a Compton camera for prompt-gamma medical imaging
NASA Astrophysics Data System (ADS)
Aldawood, S.; Thirolf, P. G.; Miani, A.; Böhmer, M.; Dedes, G.; Gernhäuser, R.; Lang, C.; Liprandi, S.; Maier, L.; Marinšek, T.; Mayerhofer, M.; Schaart, D. R.; Lozano, I. Valencia; Parodi, K.
2017-11-01
A Compton camera-based detector system for photon detection from nuclear reactions induced by proton (or heavier ion) beams is under development at LMU Munich, targeting the online range verification of the particle beam in hadron therapy via prompt-gamma imaging. The detector is designed to reconstruct the photon source origin not only from the Compton scattering kinematics of the primary photon, but also to allow for tracking of the secondary Compton-scattered electrons, thus enabling γ-source reconstruction also from incompletely absorbed photon events. The Compton camera consists of a monolithic LaBr3:Ce scintillation crystal, read out by a multi-anode PMT acting as absorber, preceded by a stacked array of 6 double-sided silicon strip detectors as scatterers. The detector components have been characterized under both offline and online conditions. The LaBr3:Ce crystal exhibits excellent time and energy resolution. Using intense collimated 137Cs and 60Co sources, the monolithic scintillator was scanned on a fine 2D grid to generate a reference library of light amplitude distributions that allows for reconstructing the photon interaction position using a k-Nearest Neighbour (k-NN) algorithm. Systematic studies were performed to investigate the performance of the reconstruction algorithm, revealing an improvement of the spatial resolution with increasing photon energy to an optimum value of 3.7(1) mm at 1.33 MeV, achieved with the Categorical Average Pattern (CAP) modification of the k-NN algorithm.
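The library-based k-NN position reconstruction can be sketched in a few lines (a bare-bones illustration: the library layout, distance metric, and plain averaging are assumptions of this sketch; the CAP variant mentioned above refines the averaging over categorized patterns):

```python
def knn_position(pattern, library, k=3):
    """Estimate the 2-D photon interaction position as the average grid
    position of the k reference light-amplitude patterns nearest (in
    squared Euclidean distance) to the measured one. Each library entry
    is (amplitude_vector, (x, y))."""
    ranked = sorted(library,
                    key=lambda entry: sum((a - b) ** 2
                                          for a, b in zip(pattern, entry[0])))
    nearest = ranked[:k]
    return (sum(pos[0] for _, pos in nearest) / k,
            sum(pos[1] for _, pos in nearest) / k)
```

With k = 1 this degenerates to nearest-neighbour lookup of the scanned reference grid.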
NASA Astrophysics Data System (ADS)
Vajedian, S.; Motagh, M.; Nilfouroushan, F.
2013-09-01
The capacity of InSAR to detect slow deformation over terrain areas is limited by temporal and geometric decorrelation. Multitemporal InSAR techniques, including Persistent Scatterer (PS-InSAR) and Small Baseline (SBAS), have recently been developed to compensate for these decorrelation problems. Geometric decorrelation in mountainous areas, especially for Envisat images, makes the phase unwrapping process difficult. To address this unwrapping problem, we first modified the phase filtering to make the wrapped phase image as smooth as possible. In addition, to improve the unwrapping results, a modified unwrapping method has been developed that includes removing possible orbital and tropospheric effects: topographic correction is done within three-dimensional unwrapping, while orbital and tropospheric corrections are applied after the unwrapping process. To evaluate the effectiveness of our improved method, we tested the proposed algorithm on Envisat and ALOS datasets and compared our results with the recently developed PS software StaMPS. In addition, we used GPS observations to evaluate the modified method. The results indicate that our method improves the estimated deformation significantly.
NASA Astrophysics Data System (ADS)
Holmes, Timothy W.
2001-01-01
A detailed tomotherapy inverse treatment planning method is described which incorporates leakage and head scatter corrections during each iteration of the optimization process, allowing these effects to be directly accounted for in the optimized dose distribution. It is shown that the conventional inverse planning method for optimizing incident intensity can be extended to include a `concurrent' leaf sequencing operation from which the leakage and head scatter corrections are determined. The method is demonstrated using the steepest-descent optimization technique with constant step size and a least-squared error objective. The method was implemented using the MATLAB scientific programming environment and its feasibility demonstrated for 2D test cases simulating treatment delivery using a single coplanar rotation. The results indicate that this modification does not significantly affect convergence of the intensity optimization method when exposure times of individual leaves are stratified to a large number of levels (>100) during leaf sequencing. In general, the addition of aperture dependent corrections, especially `head scatter', reduces incident fluence in local regions of the modulated fan beam, resulting in increased exposure times for individual collimator leaves. These local variations can result in 5% or greater local variation in the optimized dose distribution compared to the uncorrected case. The overall efficiency of the modified intensity optimization algorithm is comparable to that of the original unmodified case.
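The unmodified core of such an intensity optimization (least-squares objective, steepest descent with constant step, non-negative beamlet intensities) can be sketched as follows; the dose matrix, step size, and projection onto non-negativity are illustrative assumptions, and the paper's method additionally folds leakage and head-scatter corrections, via concurrent leaf sequencing, into each iteration:

```python
def optimize_intensity(D, target, iters=200, step=0.1):
    """Steepest-descent least-squares intensity optimization:
    minimize ||D w - target||^2 over beamlet weights w >= 0, where
    D[i][j] is the dose delivered to voxel i per unit intensity of
    beamlet j. Minimal sketch of the conventional inverse-planning
    loop that the paper extends."""
    m, n = len(D), len(D[0])
    w = [0.0] * n
    for _ in range(iters):
        # residual r = D w - target, gradient g = 2 D^T r
        r = [sum(D[i][j] * w[j] for j in range(n)) - target[i] for i in range(m)]
        g = [2.0 * sum(D[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step, projected onto w >= 0
        w = [max(0.0, wj - step * gj) for wj, gj in zip(w, g)]
    return w
```

For a trivially decoupled dose matrix the iteration converges geometrically to the prescribed voxel doses.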
Supervised orthogonal discriminant subspace projects learning for face recognition.
Chen, Yu; Xu, Xiao-Hong
2014-02-01
In this paper, a new linear dimension reduction method called supervised orthogonal discriminant subspace projection (SODSP) is proposed, which addresses the high dimensionality of data and the small sample size problem. More specifically, given a set of data points in the ambient space, a novel weight matrix that describes the relationship between the data points is first built. In order to model the manifold structure, the class information is incorporated into the weight matrix. Based on this weight matrix, the local scatter matrix and the non-local scatter matrix are defined such that the neighborhood structure can be preserved. In order to enhance the recognition ability, we impose an orthogonal constraint on a graph-based maximum margin analysis, seeking a projection that maximizes the difference, rather than the ratio, between the non-local scatter and the local scatter. In this way, SODSP naturally avoids the singularity problem. Further, we develop an efficient and stable algorithm for implementing SODSP, especially on high-dimensional data sets. Moreover, the theoretical analysis shows that LPP is a special instance of SODSP obtained by imposing certain constraints. Experiments on the ORL, Yale, Extended Yale B, and FERET face databases are performed to test and evaluate the proposed algorithm. The results demonstrate the effectiveness of SODSP. Copyright © 2013 Elsevier Ltd. All rights reserved.
Microwave tomography for GPR data processing in archaeology and cultural heritages diagnostics
NASA Astrophysics Data System (ADS)
Soldovieri, F.
2009-04-01
Ground Penetrating Radar (GPR) is one of the most practical and user-friendly instruments for detecting buried remains and performing diagnostics of archaeological structures, with the aim of revealing hidden features (defects, voids, constructive typology, etc.). The GPR technique makes it possible to perform measurements over large areas very quickly thanks to portable instrumentation. Despite the widespread adoption of GPR as a data acquisition system, many difficulties arise in processing GPR data so as to obtain images that are reliable and easily interpretable by end-users. This difficulty is exacerbated when no a priori information is available, as arises, for example, in the case of historical heritage structures for which knowledge of the construction methods and materials may be completely missing. A possible answer to the difficulties cited above resides in the development and exploitation of microwave tomography algorithms [1, 2], based on more refined electromagnetic scattering models than those usually adopted in the classical radar approach. By exploiting the microwave tomographic approach, it is possible to obtain accurate and reliable "images" of the investigated structure in order to detect, localize, and possibly determine the extent and geometrical features of the embedded objects. In this framework, the adoption of simplified models of electromagnetic scattering is very convenient for both practical and theoretical reasons. First, linear inversion algorithms are numerically efficient, making it possible to investigate domains that are large in terms of the probing wavelength in quasi real time, even in the 3D case, by adopting schemes based on the combination of 2D reconstructions [3]. In addition, the solution approaches are very robust against uncertainties in the parameters of the measurement configuration and in the investigated scenario.
From a theoretical point of view, linear models offer further advantages, such as: the absence of false solutions (an issue that arises in nonlinear inverse problems); the availability of well-known regularization tools for achieving a stable solution of the problem; and the possibility of analyzing the reconstruction performance of the algorithm once the measurement configuration and the properties of the host medium are known. Here, we present the main features and reconstruction results of a linear inversion algorithm based on the Born approximation in realistic applications in archaeology and cultural heritage diagnostics. The Born model is useful when penetrable objects are under investigation. As is well known, the Born approximation is used to solve the forward problem, that is, the determination of the scattered field from a known object, under the hypothesis of a weak scatterer, i.e., an object whose dielectric permittivity differs only slightly from that of the host medium and whose extent is small in terms of the probing wavelength. For the inverse scattering problem, however, these hypotheses can be relaxed at the cost of renouncing a "quantitative reconstruction" of the object. In fact, as already shown by results in realistic conditions [4, 5], the adoption of a Born-model inversion scheme makes it possible to detect, localize, and determine the geometry of the object even in the case of scatterers that are not weak. [1] R. Persico, R. Bernini, F. Soldovieri, "The role of the measurement configuration in inverse scattering from buried objects under the Born approximation", IEEE Trans. Antennas and Propagation, vol. 53, no. 6, pp. 1875-1887, June 2005. [2] F. Soldovieri, J. Hugenschmidt, R. Persico and G. Leone, "A linear inverse scattering algorithm for realistic GPR applications", Near Surface Geophysics, vol. 5, no. 1, pp. 29-42, February 2007. [3] R. Solimene, F. Soldovieri, G.
Prisco, R. Pierri, "Three-Dimensional Microwave Tomography by a 2-D Slice-Based Reconstruction Algorithm", IEEE Geoscience and Remote Sensing Letters, vol. 4, no. 4, pp. 556-560, Oct. 2007. [4] L. Orlando, F. Soldovieri, "Two different approaches for georadar data processing: a case study in archaeological prospecting", Journal of Applied Geophysics, vol. 64, pp. 1-13, March 2008. [5] F. Soldovieri, M. Bavusi, L. Crocco, S. Piscitelli, A. Giocoli, F. Vallianatos, S. Pantellis, A. Sarris, "A comparison between two GPR data processing techniques for fracture detection and characterization", Proc. of 70th EAGE Conference & Exhibition, Rome, Italy, 9-12 June 2008
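The linearity that the Born approximation buys can be illustrated with a toy adjoint (migration-type) imaging sketch: under the Born model the scattered field is a linear superposition of point-scatterer responses, so back-propagating multi-frequency data and stacking localizes a scatterer. Everything here is a hypothetical 1-D sketch (homogeneous background, Green's function G(a, b) = exp(ik|a - b|), colocated source and receiver), not the referenced papers' inversion scheme:

```python
import cmath

def born_image(data, ks, grid, xs=0.0, xr=0.0):
    """Adjoint imaging under the Born approximation in a toy 1-D
    homogeneous background: correlate the measured multi-frequency
    scattered data with the conjugated two-way propagation phase at
    each trial point and stack; the magnitude peaks where phases
    align, i.e. at the scatterer."""
    img = []
    for x in grid:
        total = 0j
        for d, k in zip(data, ks):
            two_way = cmath.exp(1j * k * (abs(x - xs) + abs(x - xr)))
            total += d * two_way.conjugate()
        img.append(abs(total))
    return img
```

For a single point scatterer, the image attains its maximum at the true scatterer position because only there do all frequency contributions add in phase.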
Peteye detection and correction
NASA Astrophysics Data System (ADS)
Yen, Jonathan; Luo, Huitao; Tretter, Daniel
2007-01-01
Redeyes are caused by the camera flash light reflecting off the retina. Peteyes refer to similar artifacts in the eyes of other mammals caused by camera flash. In this paper we present a peteye removal algorithm for detecting and correcting peteye artifacts in digital images. Peteye removal for animals is significantly more difficult than redeye removal for humans, because peteyes can be any of a variety of colors, and human face detection cannot be used to localize the animal eyes. In many animals, including dogs and cats, the retina has a special reflective layer that can cause a variety of peteye colors, depending on the animal's breed, age, or fur color, etc. This makes the peteye correction more challenging. We have developed a semi-automatic algorithm for peteye removal that can detect peteyes based on the cursor position provided by the user and correct them by neutralizing the colors with glare reduction and glint retention.
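The "neutralize with glare reduction and glint retention" step can be caricatured on a list of RGB pixels (everything here — the luminance threshold, the darkening factor, and the per-pixel logic — is an invented illustration, not the paper's algorithm, which works on a user-cued detected region):

```python
def correct_peteye(pixels, glint_thresh=200):
    """Illustrative peteye neutralization: pixels brighter than
    glint_thresh keep their luminance but lose their color cast
    (glint retention); the rest are neutralized and darkened toward
    a dark pupil (glare reduction)."""
    out = []
    for r, g, b in pixels:
        lum = (r + g + b) // 3
        if lum >= glint_thresh:      # specular glint: keep, but neutral
            out.append((lum, lum, lum))
        else:                        # colored glare: neutralize and darken
            d = lum // 3
            out.append((d, d, d))
    return out
```

The point of the split is that removing the bright specular glint along with the colored glare would make the eye look unnaturally flat.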
Nonlocal variational model and filter algorithm to remove multiplicative noise
NASA Astrophysics Data System (ADS)
Chen, Dai-Qiang; Zhang, Hui; Cheng, Li-Zhi
2010-07-01
The nonlocal (NL) means filter proposed by Buades, Coll, and Morel (SIAM Multiscale Model. Simul. 4(2), 490-530, 2005), which makes full use of the redundant information in images, has been shown to be very efficient for denoising images corrupted by additive Gaussian noise. Building on the NL method and seeking to minimize the conditional mean-square error, we design an NL means filter to remove multiplicative noise, and by combining the NL filter with a regularization method, we propose an NL total variation (TV) model and present a fast iterative algorithm for it. Experiments demonstrate that our algorithm outperforms the TV method: it is superior in preserving small structures and textures and achieves an improvement in peak signal-to-noise ratio.
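The underlying NL means principle is compact enough to sketch in one dimension (this is the classic additive-noise filter of Buades et al. that the paper adapts to multiplicative noise; the periodic boundary handling and the parameter values are arbitrary choices of this sketch):

```python
import math

def nl_means_1d(x, patch=1, h=0.5):
    """1-D non-local means: each sample becomes a weighted average of
    all samples, with weights decaying exponentially in the squared
    distance between the patches surrounding them, so repeated
    structures anywhere in the signal reinforce each other."""
    n = len(x)
    out = []
    for i in range(n):
        num = den = 0.0
        for j in range(n):
            d = sum((x[(i + t) % n] - x[(j + t) % n]) ** 2
                    for t in range(-patch, patch + 1))
            w = math.exp(-d / (h * h))
            num += w * x[j]
            den += w
        out.append(num / den)
    return out
```

Because the weights depend on patch similarity rather than spatial proximity, a constant signal is a fixed point of the filter, and repeated textures are averaged with each other rather than blurred with their surroundings.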
NASA Astrophysics Data System (ADS)
Richard, Jonathan T.; Everitt, Henry O.
2017-11-01
A rail-mounted synthetic aperture radar has been constructed to operate at W-band (75-110 GHz) and a THz band (325-500 GHz) in order to ascertain its ability to locate isolated small, visually obscured metallic scatterers embedded in highly scattering dielectric hosts that are either semi-transparent or opaque. A top-view 2D algorithm was used to reconstruct scenes from the acquired data, locating metallic scatterers at W-band with high range and cross-range resolution of 4.3 and 2 mm, respectively, and with improved range resolution of 0.86 mm at the THz band. Millimeter-sized metallic scatterers were easily located when embedded in semi-transparent, highly scattering target hosts of Styrofoam and waxy packing foam but were more difficult to locate when embedded in relatively opaque, highly scattering Celotex panels. Although the THz band provided the expected greater spatial resolution, it required the target to be moved closer to the rail and had a more limited field of view that prevented some targets from being identified. Techniques for improving the signal-to-noise ratio are discussed. This work establishes a path for developing techniques to render a complete 3D reconstruction of a scene.
NASA Astrophysics Data System (ADS)
Zhang, Jun-You; Qi, Hong; Ren, Ya-Tao; Ruan, Li-Ming
2018-04-01
An accurate and stable identification technique is developed to retrieve the optical constants and particle size distributions (PSDs) of a particle system simultaneously from multi-wavelength scattering-transmittance signals, using an improved quantum particle swarm optimization algorithm. Mie theory is used to calculate the directional laser intensity scattered by the particles and the spectral collimated transmittance. Sensitivity and objective-function distribution analyses were conducted to evaluate the mathematical properties (i.e. ill-posedness and multimodality) of the inverse problems under three different combinations of optical signals: the single-wavelength multi-angle light scattering signal; the single-wavelength multi-angle light scattering and spectral transmittance signals; and the multi-wavelength multi-angle light scattering and spectral transmittance signals. It was found that the best global convergence performance is obtained using the multi-wavelength scattering-transmittance signals. Meanwhile, the present technique has been tested under different levels of Gaussian measurement noise to prove its feasibility in a large solution space. All the results show that the inversion technique using multi-wavelength scattering-transmittance signals is effective and suitable for retrieving the optical complex refractive indices and PSD of a particle system simultaneously.
Baseline-Subtraction-Free (BSF) Damage-Scattered Wave Extraction for Stiffened Isotropic Plates
NASA Technical Reports Server (NTRS)
He, Jiaze; Leser, Patrick E.; Leser, William P.
2017-01-01
Lamb waves enable long distance inspection of structures for health monitoring purposes. However, this capability is diminished when applied to complex structures where damage-scattered waves are often buried by scattering from various structural components or boundaries in the time-space domain. Here, a baseline-subtraction-free (BSF) inspection concept based on the Radon transform (RT) is proposed to identify and separate these scattered waves from those scattered by damage. The received time-space domain signals can be converted into the Radon domain, in which the scattered signals from structural components are suppressed into relatively small regions such that damage-scattered signals can be identified and extracted. In this study, a piezoelectric wafer and a linear scan via laser Doppler vibrometer (LDV) were used to excite and acquire the Lamb-wave signals in an aluminum plate with multiple stiffeners. Linear and inverse linear Radon transform algorithms were applied to the direct measurements. The results demonstrate the effectiveness of the Radon transform as a reliable extraction tool for damage-scattered waves in a stiffened aluminum plate and also suggest the possibility of generalizing this technique for application to a wide variety of complex, large-area structures.
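The mapping into the Radon domain can be illustrated with a simple slant stack (discrete linear Radon transform) over a time-space gather: a linear arrival, such as a stiffener reflection, collapses into a small region of the (intercept time, slowness) plane, where it can be suppressed or extracted. This is a toy sketch under assumed grid sizes, not the authors' implementation:

```python
import numpy as np

def linear_radon(gather, dt, dx, slownesses):
    """Slant stack of a time-space gather[t, x]: for each slowness p,
    sum amplitudes along the lines t = tau + p*x. Linear arrivals
    focus into compact regions of the (tau, p) plane."""
    nt, nx = gather.shape
    out = np.zeros((nt, len(slownesses)))
    for j, p in enumerate(slownesses):
        for ix in range(nx):
            shift = int(round(p * ix * dx / dt))
            if 0 <= shift < nt:
                out[:nt - shift, j] += gather[shift:, ix]
            elif -nt < shift < 0:
                out[-shift:, j] += gather[:nt + shift, ix]
    return out
```

A hyperbolic damage-scattered arrival does not satisfy any single line t = tau + p*x, so it spreads across the (tau, p) plane instead of focusing, which is what allows it to be separated from the focused structural arrivals.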
Global Monitoring of Clouds and Aerosols Using a Network of Micro-Pulse Lidar Systems
NASA Technical Reports Server (NTRS)
Welton, Ellsworth J.; Campbell, James R.; Spinhirne, James D.; Scott, V. Stanley
2000-01-01
Long-term global radiation programs, such as AERONET and BSRN, have shown success in monitoring column-averaged cloud and aerosol optical properties. Little attention has been focused on global measurements of vertically resolved optical properties. Lidar systems are the preferred instrument for such measurements. However, global usage of lidar systems has not been achieved because of limits imposed by older systems that were large, expensive, and logistically difficult to use in the field. Small, eye-safe, and autonomous lidar systems are now available and overcome the problems associated with older systems. The first such lidar to be developed is the Micro-Pulse Lidar (MPL). The MPL has proven to be useful in the field because it can be automated, runs continuously (day and night), is eye-safe, can easily be transported and set up, and has a small field of view which removes multiple-scattering concerns. We have developed successful protocols to operate and calibrate MPL systems. We have also developed a data analysis algorithm that produces data products such as cloud and aerosol layer heights, optical depths, extinction profiles, and the extinction-to-backscatter ratio. The algorithm minimizes the use of a priori assumptions and also produces error bars for all data products. Here we present an overview of our MPL protocols and data analysis techniques. We also discuss the ongoing construction of a global MPL network in conjunction with the AERONET program. Finally, we present some early results from the MPL network.
Measurement and modeling of polarized specular neutron reflectivity in large magnetic fields.
Maranville, Brian B; Kirby, Brian J; Grutter, Alexander J; Kienzle, Paul A; Majkrzak, Charles F; Liu, Yaohua; Dennis, Cindi L
2016-08-01
The presence of a large applied magnetic field removes the degeneracy of the vacuum energy states for spin-up and spin-down neutrons. For polarized neutron reflectometry, this must be included in the reference potential energy of the Schrödinger equation that is used to calculate the expected scattering from a magnetic layered structure. For samples with magnetization that is purely parallel or antiparallel to the applied field which defines the quantization axis, there is no mixing of the spin states (no spin-flip scattering) and so this additional potential is constant throughout the scattering region. When there is non-collinear magnetization in the sample, however, there will be significant scattering from one spin state into the other, and the reference potentials will differ between the incoming and outgoing wavefunctions, changing the angle and intensities of the scattering. The theory of the scattering and recommended experimental practices for this type of measurement are presented, as well as an example measurement.
Research on laser marking speed optimization by using genetic algorithm.
Wang, Dongyun; Yu, Qiwei; Zhang, Yu
2015-01-01
The laser marking machine is the most common coding equipment on product packaging lines. However, the speed of laser marking has become a production bottleneck. In order to remove this bottleneck, a new method based on a genetic algorithm is designed. On the basis of this algorithm, a controller was designed, and simulations and experiments were performed. The results show that using this algorithm can effectively improve laser marking efficiency by 25%.
Fast sparse Raman spectral unmixing for chemical fingerprinting and quantification
NASA Astrophysics Data System (ADS)
Yaghoobi, Mehrdad; Wu, Di; Clewes, Rhea J.; Davies, Mike E.
2016-10-01
Raman spectroscopy is a well-established spectroscopic method for the detection of condensed-phase chemicals. It is based on light scattered when a target material is exposed to a narrowband laser beam, and the information generated enables presumptive identification by measuring correlation with library spectra. Whilst this approach is successful in identifying samples with one component, it is more difficult to apply to spectral mixtures. The capability of handling spectral mixtures is crucial for defence and security applications, as hazardous materials may be present as mixtures due to degradation, interferents, or precursors. A novel method for spectral unmixing is proposed here. Most modern decomposition techniques are based on sparse decomposition of the mixture and the application of extra constraints to preserve the sum of concentrations. These methods have often been proposed for passive spectroscopy, where spectral baseline correction is not required, and the most successful of them are computationally expensive, e.g. convex optimisation and Bayesian approaches. We present a novel low-complexity sparsity-based method to decompose the spectra using a reference library of spectra; it can be implemented on a hand-held spectrometer in near real time. The algorithm is based on iteratively subtracting the contribution of selected spectra and updating the contribution of each spectrum. The core algorithm is fast non-negative orthogonal matching pursuit, which has been proposed by the authors in the context of non-negative sparse representations. The iteration terminates when the maximum number of expected chemicals has been found or the residual spectrum has negligible energy, i.e. on the order of the noise level. A backtracking step removes the least contributing spectrum from the list of detected chemicals and reports it as an alternative component. This feature is particularly useful in the detection of chemicals with small contributions, which are normally not detected. The proposed algorithm is easily reconfigurable to include new library entries and optional preferential threat searches in the presence of predetermined threat indicators. Under Ministry of Defence funding, we have demonstrated the algorithm for fingerprinting and rough quantification of the concentration of chemical mixtures using a set of reference spectral mixtures. In our experiments, the algorithm successfully detected chemicals with concentrations below 10 percent. The running time of the algorithm is on the order of one second using a single core of a desktop computer.
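A minimal sketch of the greedy core is shown below, assuming SciPy's non-negative least squares for the coefficient refit; the authors' fast variant uses recursive updates and the backtracking step, neither of which is shown here:

```python
import numpy as np
from scipy.optimize import nnls

def nn_omp(library, spectrum, max_atoms, tol=1e-10):
    """Greedy non-negative decomposition of `spectrum` against a
    `library` whose columns are reference spectra: repeatedly pick
    the library spectrum most correlated with the residual, refit
    all picked contributions by non-negative least squares, and
    subtract them from the residual."""
    residual = spectrum.astype(float).copy()
    selected = []
    coeffs = np.zeros(library.shape[1])
    for _ in range(max_atoms):
        corr = library.T @ residual
        k = int(np.argmax(corr))
        if corr[k] <= tol or k in selected:
            break  # no positive contribution left to extract
        selected.append(k)
        x, _ = nnls(library[:, selected], spectrum)
        coeffs[:] = 0.0
        coeffs[selected] = x
        residual = spectrum - library @ coeffs
        if residual @ residual <= tol:
            break  # residual energy at the noise level
    return coeffs
```

The returned non-negative coefficients play the role of rough concentration estimates once the library columns are normalised consistently with the measurement.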
A community detection algorithm based on structural similarity
NASA Astrophysics Data System (ADS)
Guo, Xuchao; Hao, Xia; Liu, Yaqiong; Zhang, Li; Wang, Lu
2017-09-01
In order to further improve the efficiency and accuracy of community detection, a new algorithm named SSTCA (community detection algorithm based on structural similarity with threshold) is proposed. In this algorithm, structural similarities are taken as the weights of edges, and a threshold k is used to remove edges whose weights are less than the threshold, improving computational efficiency. Tests were done on Zachary's karate club network, the dolphin social network, and the football dataset with the proposed algorithm, and compared with the GN and SSNCA algorithms. The results show that the new algorithm is superior to the other algorithms in accuracy on dense networks, and its operating efficiency is clearly improved.
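As a sketch of the edge-weighting and pruning steps, a common SCAN-style structural similarity over closed neighborhoods can be computed and thresholded as follows; this is our reading of the pruning step, and the exact similarity measure used by SSTCA may differ:

```python
from math import sqrt

def structural_similarity(adj, u, v):
    """Cosine-style similarity over closed neighborhoods,
    as used in SCAN-family algorithms."""
    nu = adj[u] | {u}
    nv = adj[v] | {v}
    return len(nu & nv) / sqrt(len(nu) * len(nv))

def prune_edges(edges, threshold):
    """Weight each edge by structural similarity and drop
    edges whose weight falls below the threshold."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return [(u, v, structural_similarity(adj, u, v))
            for u, v in edges
            if structural_similarity(adj, u, v) >= threshold]
```

Edges inside dense groups score close to 1, while bridges to weakly attached nodes score low and are the first to be pruned, which is what reduces the work left for the community search.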
Transmittance and scattering during wound healing after refractive surgery
NASA Astrophysics Data System (ADS)
Mar, Santiago; Martinez-Garcia, C.; Blanco, J. T.; Torres, R. M.; Gonzalez, V. R.; Najera, S.; Rodriguez, G.; Merayo, J. M.
2004-10-01
Photorefractive keratectomy (PRK) and laser in situ keratomileusis (LASIK) are techniques frequently performed to correct ametropia. The two methods have been compared in their manner of healing, but there has been no comparison of transmittance and light scattering during this process. Scattering in corneal wound healing depends on three parameters: cell size, cell density, and scar size. An increase in the angular width of scattering implies a decrease in contrast sensitivity. During wound healing, keratocyte activation is induced and these cells become fibroblasts and myofibroblasts. Hens were operated on using the PRK and LASIK techniques. The animals used in this experiment were euthanized, and immediately afterwards their corneas were removed and placed carefully into a cornea camera support. All optical measurements were done with a scatterometer constructed in our laboratory. Scattering measurements are correlated with transmittance: the smaller the transmittance, the greater the scattering. The aim of this work is to provide experimental data on corneal transparency and scattering, in order to supply data that allow a more complete model of corneal transparency to be generated.
Exploitation of Microdoppler and Multiple Scattering Phenomena for Radar Target Recognition
2006-08-24
is tested with measurement data. The resulting GPR images demonstrate the effectiveness of the proposed algorithm. INTRODUCTION: Subsurface imaging to ... utilizes the fast Fourier transform (FFT) to expedite the imaging ... GPR. Recently, we reported a fast and effective SAR-based subsurface imaging technique that can provide good resolutions in both the range and cross-range domains [11]. Our algorithm differs from Witten's [9] and Hansen's
BayesMotif: de novo protein sorting motif discovery from impure datasets.
Hu, Jianjun; Zhang, Fan
2010-01-18
Protein sorting is the process by which newly synthesized proteins are transported to their target locations within or outside of the cell. This process is precisely regulated by protein sorting signals in different forms. A major category of sorting signals consists of amino acid sub-sequences usually located at the N-termini or C-termini of protein sequences. Genome-wide experimental identification of protein sorting signals is extremely time-consuming and costly. Effective computational algorithms for de novo discovery of protein sorting signals are needed to improve the understanding of protein sorting mechanisms. We formulated the protein sorting motif discovery problem as a classification problem and proposed a Bayesian-classifier-based algorithm (BayesMotif) for de novo identification of a common type of protein sorting motif in which a highly conserved anchor is present along with a less conserved motif region. A false-positive removal procedure is developed to iteratively remove sequences that are unlikely to contain true motifs, so that the algorithm can identify motifs from impure input sequences. Experiments on both implanted-motif datasets and real-world datasets showed that the enhanced BayesMotif algorithm can identify anchored sorting motifs from pure or impure protein sequence datasets. They also show that the false-positive removal procedure can help to identify true motifs even when only 20% of the input sequences contain true motif instances. We proposed BayesMotif, a novel Bayesian-classification-based algorithm for de novo discovery of a special category of anchored protein sorting motifs from impure datasets. Compared to conventional motif discovery algorithms such as MEME, our algorithm can find less conserved motifs with short, highly conserved anchors. Our algorithm also has the advantage of easy incorporation of additional meta-sequence features, such as hydrophobicity or charge of the motifs, which may help to overcome the limitations of the PWM (position weight matrix) motif model.
NASA Astrophysics Data System (ADS)
Pearson, David
A linear accelerator manufactured by Elekta, equipped with a multi-leaf collimation (MLC) system, has been modelled using Monte Carlo simulations with the photon flattening filter removed. The purpose of this investigation was to show that more efficient and more accurate Intensity Modulated Radiation Therapy (IMRT) treatments can be delivered from a standard linear accelerator with the flattening filter removed from the beam. A range of simulations of 6 MV and 10 MV photon beams was studied and compared to a model of a standard accelerator which included the flattening filter for those beams. Measurements using a scanning water phantom were also performed after the flattening filter had been removed. We show here that with the flattening filter removed, the dose on the central axis increases by a factor of 2.35 and 4.18 for 6 MV and 10 MV photon beams, respectively, using a standard 10 x 10 cm2 field size. A comparison of the dose at points at the field edges showed that removal of the flattening filter reduced the dose at these points by approximately 10% for the 6 MV beam over the clinical range of field sizes. A further consequence of removing the flattening filter was the softening of the photon energy spectrum, leading to a steeper reduction in dose at depths greater than dmax. Also studied was the electron contamination brought about by the removal of the filter. To reduce this electron contamination, and thus the skin dose to the patient, we consider the use of an electron scattering foil in the beam path. The electron scattering foil had very little effect on dmax. From simulations of a standard 6 MV beam, a filter-free beam, and a filter-free beam with an electron scattering foil, we deduce that the proportion of electrons in the photon beam is 0.35%, 0.28%, and 0.27%, respectively. In short, higher dose rates will result in decreased treatment times, and the reduced dose outside of the field indicates a reduced dose to the surrounding tissue. Electron contamination was found to be comparable with conventional IMRT treatments carried out with a flattening filter.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-08
... make the STP modifiers available to algorithms used by Floor brokers to route interest to the Exchange..., pegging e-Quotes, and g-Quotes entered into the matching engine by an algorithm on behalf of a Floor... algorithms removes impediments to and perfects the mechanism of a free and open market because there is a...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-08
... modifiers available to algorithms used by Floor brokers to route interest to the Exchange's matching engine...-Quotes entered into the matching engine by an algorithm on behalf of a Floor broker. STP modifiers would... algorithms removes impediments to and perfects the mechanism of a free and open market because there is a...
Doppler-based motion compensation algorithm for focusing the signature of a rotorcraft.
Goldman, Geoffrey H
2013-02-01
A computationally efficient algorithm was developed and tested to compensate for the effects of motion on the acoustic signature of a rotorcraft. For target signatures with large spectral peaks that vary slowly in amplitude and have near constant frequency, the time-varying Doppler shift can be tracked and then removed from the data. The algorithm can be used to preprocess data for classification, tracking, and nulling algorithms. The algorithm was tested on rotorcraft data. The average instantaneous frequency of the first harmonic of a rotorcraft was tracked with a fixed-lag smoother. Then, state space estimates of the frequency were used to calculate a time warping that removed the effect of a time-varying Doppler shift from the data. The algorithm was evaluated by analyzing the increase in the amplitude of the harmonics in the spectrum of a rotorcraft. The results depended upon the frequency of the harmonics and the processing interval duration. Under good conditions, the results for the fundamental frequency of the target (~11 Hz) almost achieved an estimated upper bound. The results for higher frequency harmonics had larger increases in the amplitude of the peaks, but significantly lower than the estimated upper bounds.
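The time-warping step can be sketched as resampling the signal onto a warped time axis built from the tracked instantaneous frequency, so that the first harmonic is mapped to a constant frequency. This is a simplified stand-in for the fixed-lag-smoother pipeline; `inst_freq` is assumed to be already estimated per sample:

```python
import numpy as np

def doppler_dewarp(signal, inst_freq, fs, f0):
    """Resample `signal` so that a harmonic tracked at `inst_freq`
    (Hz, one smoothed estimate per sample) lands at the constant
    frequency f0. Time is stretched where the Doppler-shifted
    frequency is high and compressed where it is low."""
    warp = np.cumsum(inst_freq) / (f0 * fs)       # warped time axis (s)
    uniform = np.arange(warp[0], warp[-1], 1.0 / fs)
    return np.interp(uniform, warp, signal)
```

After dewarping, the harmonic energy is no longer smeared across neighboring frequency bins, which is what raises the spectral peak amplitudes the paper uses as its evaluation metric.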
An Efficient and Robust Moving Shadow Removal Algorithm and Its Applications in ITS
NASA Astrophysics Data System (ADS)
Lin, Chin-Teng; Yang, Chien-Ting; Shou, Yu-Wen; Shen, Tzu-Kuei
2010-12-01
We propose an efficient algorithm for removing shadows of moving vehicles caused by non-uniform distributions of light reflections in the daytime. This paper presents a new and complete structure of feature combination and analysis for orienting and labeling moving shadows, so that the defined foreground objects can be extracted more easily from each frame of video acquired in real traffic situations. Moreover, we make use of a Gaussian Mixture Model (GMM) for background removal and detection of moving shadows in our test images, and define two indices for characterizing non-shadowed regions: one captures the characteristics of lines, while the other draws on the gray-scale information of the images, which helps us build a newly defined set of darkening ratios (modified darkening factors) based on Gaussian models. To prove the effectiveness of our moving-shadow algorithm, we apply it to a practical application of traffic flow detection in ITS (Intelligent Transportation Systems): vehicle counting. Our algorithm shows a fast processing speed of 13.84 ms/frame and improves the accuracy rate by 4%-10% for the three test videos in our vehicle-counting experiments.
Ye, Yalan; He, Wenwen; Cheng, Yunfei; Huang, Wenxia; Zhang, Zhilin
2017-02-16
The estimation of heart rate (HR) with wearable devices is of interest in fitness. Photoplethysmography (PPG) is a promising approach to estimating HR due to its low cost; however, it is easily corrupted by motion artifacts (MA). In this work, a robust two-stage approach based on random forests is proposed for accurately estimating HR from PPG signals contaminated by intense motion artifacts. Stage 1 proposes a hybrid method to effectively remove MA with low computational complexity, where two MA removal algorithms are combined by an accurate binary decision algorithm whose aim is to decide whether or not to adopt the second MA removal algorithm. Stage 2 proposes a random-forest-based spectral peak-tracking algorithm, whose aim is to locate the spectral peak corresponding to HR, formulating the problem of spectral peak tracking as a pattern classification problem. Experiments on the PPG datasets of 22 subjects used in the 2015 IEEE Signal Processing Cup showed that the proposed approach achieved an average absolute error of 1.65 beats per minute (BPM) on the 22 PPG datasets. Compared to state-of-the-art approaches, the proposed approach has better accuracy and robustness to intense motion artifacts, indicating its potential use in wearable sensors for health monitoring and fitness tracking.
Use of a genetic algorithm for the analysis of eye movements from the linear vestibulo-ocular reflex
NASA Technical Reports Server (NTRS)
Shelhamer, M.
2001-01-01
It is common in vestibular and oculomotor testing to use a single-frequency (sine) or combination of frequencies [sum-of-sines (SOS)] stimulus for head or target motion. The resulting eye movements typically contain a smooth tracking component, which follows the stimulus, in which are interspersed rapid eye movements (saccades or fast phases). The parameters of the smooth tracking--the amplitude and phase of each component frequency--are of interest; many methods have been devised that attempt to identify and remove the fast eye movements from the smooth. We describe a new approach to this problem, tailored to both single-frequency and sum-of-sines stimulation of the human linear vestibulo-ocular reflex. An approximate derivative is used to identify fast movements, which are then omitted from further analysis. The remaining points form a series of smooth tracking segments. A genetic algorithm is used to fit these segments together to form a smooth (but disconnected) wave form, by iteratively removing biases due to the missing fast phases. A genetic algorithm is an iterative optimization procedure; it provides a basis for extending this approach to more complex stimulus-response situations. In the SOS case, the genetic algorithm estimates the amplitude and phase values of the component frequencies as well as removing biases.
Phase 2 development of Great Lakes algorithms for Nimbus-7 coastal zone color scanner
NASA Technical Reports Server (NTRS)
Tanis, Fred J.
1984-01-01
A series of experiments have been conducted in the Great Lakes designed to evaluate the application of the NIMBUS-7 Coastal Zone Color Scanner (CZCS). Atmospheric and water optical models were used to relate surface and subsurface measurements to satellite measured radiances. Absorption and scattering measurements were reduced to obtain a preliminary optical model for the Great Lakes. Algorithms were developed for geometric correction, correction for Rayleigh and aerosol path radiance, and prediction of chlorophyll-a pigment and suspended mineral concentrations. The atmospheric algorithm developed compared favorably with existing algorithms and was the only algorithm found to adequately predict the radiance variations in the 670 nm band. The atmospheric correction algorithm developed was designed to extract needed algorithm parameters from the CZCS radiance values. The Gordon/NOAA ocean algorithms could not be demonstrated to work for Great Lakes waters. Predicted values of chlorophyll-a concentration compared favorably with expected and measured data for several areas of the Great Lakes.
Material parameter measurements at high temperatures
NASA Technical Reports Server (NTRS)
Dominek, A.; Park, A.; Peters, L., Jr.
1988-01-01
Alternate fixtures or techniques for the measurement of constitutive material parameters at elevated temperatures are presented. The technique utilizes scattered field data from material-coated cylinders between parallel plates or material-coated hemispheres over a finite-size groundplane. The data acquisition is centered around the HP 8510B Network Analyzer. The parameters are then found by a numerical search algorithm using the Newton-Raphson technique with the measured and calculated fields from these canonical scatterers. Numerical and experimental results are shown.
Integrand reduction for two-loop scattering amplitudes through multivariate polynomial division
NASA Astrophysics Data System (ADS)
Mastrolia, Pierpaolo; Mirabella, Edoardo; Ossola, Giovanni; Peraro, Tiziano
2013-04-01
We describe the application of a novel approach for the reduction of scattering amplitudes, based on multivariate polynomial division, which we have recently presented. This technique yields the complete integrand decomposition for arbitrary amplitudes, regardless of the number of loops. It allows for the determination of the residue at any multiparticle cut, whose knowledge is a mandatory prerequisite for applying the integrand-reduction procedure. By using the division modulo Gröbner basis, we can derive a simple integrand recurrence relation that generates the multiparticle pole decomposition for integrands of arbitrary multiloop amplitudes. We apply the new reduction algorithm to the two-loop planar and nonplanar diagrams contributing to the five-point scattering amplitudes in N=4 super Yang-Mills and N=8 supergravity in four dimensions, whose numerator functions contain up to rank-two terms in the integration momenta. We determine all polynomial residues parametrizing the cuts of the corresponding topologies and subtopologies. We obtain the integral basis for the decomposition of each diagram from the polynomial form of the residues. Our approach is well suited for a seminumerical implementation, and its general mathematical properties provide an effective algorithm for the generalization of the integrand-reduction method to all orders in perturbation theory.
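The division step can be illustrated in miniature with SymPy: reduce a toy "numerator" polynomial modulo the Gröbner basis of an ideal generated by hypothetical cut conditions; the remainder plays the role of the polynomial residue at the cut. The polynomials here are purely illustrative, not actual amplitude numerators or on-shell conditions:

```python
from sympy import symbols, groebner, reduced, expand

def residue_at_cut(numerator, cut, gens, order='lex'):
    """Divide `numerator` modulo the Groebner basis of the ideal
    generated by the cut conditions; the remainder is the part
    that cannot be reduced further by the generators."""
    G = list(groebner(cut, *gens, order=order).exprs)
    quotients, rem = reduced(numerator, G, *gens, order=order)
    # Division identity: numerator = sum(q_i * g_i) + rem
    return G, quotients, rem
```

The multivariate division guarantees the identity numerator = sum(q_i * g_i) + remainder, which is the property the integrand-reduction recurrence relies on.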
Han, Dahai; Gu, Yanjie; Zhang, Min
2017-08-10
An optimized scheme of pulse symmetrical position-orthogonal space-time block codes (PSP-OSTBC) is proposed and applied with m-pulse position modulation (m-PPM), without the use of a complex decoding algorithm, in an optical multi-input multi-output (MIMO) ultraviolet (UV) communication system. The proposed scheme breaks through the limitation of the traditional Alamouti code and is suitable for high-order m-PPM in a UV scattering channel, as verified by both simulation experiments and field tests with specific parameters. The performances of 1×1, 2×1, and 2×2 PSP-OSTBC systems with 4-PPM are compared experimentally as the optimal tradeoff between modulation and coding in practical application. Meanwhile, the feasibility of the proposed scheme for 8-PPM is examined by a simulation experiment as well. The results suggest that the proposed scheme makes the system insensitive to the influence of path loss, with a larger channel capacity, and that a higher diversity gain and coding gain with a simple decoding algorithm can be achieved by employing the orthogonality of m-PPM in an optical-MIMO-based ultraviolet scattering channel.
NASA Astrophysics Data System (ADS)
Reichman, Daniël.; Collins, Leslie M.; Malof, Jordan M.
2018-04-01
This work focuses on the development of automatic buried threat detection (BTD) algorithms using ground penetrating radar (GPR) data. Buried threats tend to exhibit unique characteristics in GPR imagery, such as high-energy hyperbolic shapes, which can be leveraged for detection. Many recent BTD algorithms are supervised, and therefore they require training with exemplars of GPR data collected over non-threat locations and threat locations, respectively. Frequently, data from non-threat GPR examples will exhibit high-energy hyperbolic patterns similar to those observed from a buried threat. Is it still useful, therefore, to include such examples during algorithm training and to encourage an algorithm to label such data as a non-threat? Similarly, some true buried threat examples exhibit very little distinctive threat-like patterning. We investigate whether it is beneficial to treat such GPR data examples as mislabeled and either (i) relabel them or (ii) remove them from training. We study this problem using two algorithms to automatically identify mislabeled examples, if they are present, and examine the impact of removing or relabeling them for training. We conduct these experiments on a large collection of GPR data with several state-of-the-art GPR-based BTD algorithms.
Application of Dynamic Logic Algorithm to Inverse Scattering Problems Related to Plasma Diagnostics
NASA Astrophysics Data System (ADS)
Perlovsky, L.; Deming, R. W.; Sotnikov, V.
2010-11-01
In plasma diagnostics, scattering of electromagnetic waves is widely used for identification of density and wave field perturbations. In the present work we use a powerful mathematical approach, dynamic logic (DL), to identify the spectra of scattered electromagnetic (EM) waves produced by the interaction of the incident EM wave with a Langmuir soliton in the presence of noise. The problem is especially difficult since the spectral amplitudes of the noise pattern are comparable with the amplitudes of the scattered waves. In the past, DL has been applied to a number of complex problems in artificial intelligence, pattern recognition, and signal processing, resulting in revolutionary improvements. Here we demonstrate its application to plasma diagnostic problems. Reference: Perlovsky, L.I., 2001. Neural Networks and Intellect: Using Model-Based Concepts. Oxford University Press, New York, NY.
Comparative study of bowtie and patient scatter in diagnostic CT
NASA Astrophysics Data System (ADS)
Prakash, Prakhar; Boudry, John M.
2017-03-01
A fast, GPU-accelerated Monte Carlo engine for simulating relevant photon interaction processes over the diagnostic energy range in third-generation CT systems was developed to study the relative contributions of bowtie and object scatter to the total scatter reaching an imaging detector. Primary and scattered projections for an elliptical water phantom (major axis set to 300 mm) with muscle and fat inserts were simulated for a typical diagnostic CT system as a function of anti-scatter grid (ASG) configuration. The ASG design space explored grid orientation, i.e. septa either a) parallel or b) parallel and perpendicular to the axis of rotation, as well as septa height. The septa material was tungsten. The resulting projections were reconstructed, and the scatter-induced image degradation was quantified using common CT image metrics (such as Hounsfield unit (HU) inaccuracy and loss in contrast), along with a qualitative review of image artifacts. Results indicate that object scatter dominates total scatter in the detector channels under the shadow of the imaged object, with the bowtie scatter fraction progressively increasing towards the edges of the object projection. Object scatter was shown to be the driving factor behind HU inaccuracy and contrast reduction in the simulated images, while shading artifacts and elevated loss in HU accuracy at the object boundary were largely attributed to bowtie scatter. Because the impact of bowtie scatter could not be sufficiently mitigated with a large-grid-ratio ASG, algorithmic correction may be necessary to further mitigate these artifacts.
An automated skin segmentation of Breasts in Dynamic Contrast-Enhanced Magnetic Resonance Imaging.
Lee, Chia-Yen; Chang, Tzu-Fang; Chang, Nai-Yun; Chang, Yeun-Chung
2018-04-18
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is used to diagnose breast disease. Obtaining anatomical information from DCE-MRI requires the skin to be manually removed so that blood vessels and tumors can be clearly observed by physicians and radiologists; this requires considerable manpower and time. We develop an automated skin segmentation algorithm in which the surface skin is removed rapidly and correctly. The rough skin area is segmented by the active contour model and then analyzed in segments according to the continuity of the skin thickness for accuracy. Blood vessels and mammary glands are retained, which remedies the active contour model's tendency to remove some blood vessels. After three-dimensional reconstruction, the DCE-MRIs without the skin can be used to view internal anatomical information for clinical applications. The Dice coefficients of the 3D reconstructed images after skin removal were 93.2% for the proposed algorithm and 61.4% for the active contour model alone. Automatic skin segmentation was about 165 times faster than manual removal. Texture features at the tumor position with and without the skin were compared by paired t-test; all comparisons yielded p < 0.05, suggesting the proposed algorithm may enhance the observability of tumors at the 0.05 significance level.
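The Dice coefficient quoted above is a standard overlap measure between a candidate segmentation and a reference mask; a minimal sketch (mask shapes and names are illustrative, not from the paper):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    a = a.astype(bool)
    b = b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Two partially overlapping 4x4 squares on an 8x8 grid.
auto = np.zeros((8, 8), dtype=bool)
auto[2:6, 2:6] = True            # automatic segmentation (illustrative)
manual = np.zeros((8, 8), dtype=bool)
manual[3:7, 3:7] = True          # manual reference (illustrative)

print(f"Dice = {dice(auto, manual):.4f}")  # 2*9 / (16+16) = 0.5625
```

A Dice value of 1.0 means perfect overlap; the 93.2% vs. 61.4% figures in the abstract are computed in this way over the 3D skin masks.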
Geometrical-optics approximation of forward scattering by coated particles.
Xu, Feng; Cai, Xiaoshu; Ren, Kuanfang
2004-03-20
By means of geometrical optics we present an approximation algorithm with which to accelerate the computation of the scattering intensity distribution within a forward angular range (0-60 degrees) for coated particles illuminated by a collimated incident beam. Phases of emerging rays are calculated exactly to improve the approximation precision. This method proves effective for transparent and weakly absorbing particles with size parameters larger than 75, but fails to give good approximation results at scattering angles at which refracted rays are absent. When the absorption coefficient of a particle is greater than 0.01, the geometrical-optics approximation is effective only at small forward angles, typically less than about 10 degrees.
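To make the ray bookkeeping concrete, here is a minimal geometrical-optics sketch for the simpler case of a homogeneous (uncoated) sphere: the deviation angle and optical path of the directly transmitted (p = 1) ray, from which a phase follows as k times the optical path. The function and values are illustrative assumptions, not the paper's coated-particle algorithm:

```python
import numpy as np

def transmitted_ray(impact, radius, n_rel):
    """Deviation angle (degrees) and optical path of the p = 1 ray through a
    homogeneous sphere. impact: impact parameter b (0 <= b < radius);
    n_rel: refractive index of the particle relative to the medium."""
    theta_i = np.arcsin(impact / radius)           # incidence angle
    theta_t = np.arcsin(np.sin(theta_i) / n_rel)   # Snell's law at entry
    deviation = 2.0 * (theta_i - theta_t)          # bending at entry + exit
    chord = 2.0 * radius * np.cos(theta_t)         # geometric path inside
    optical_path = n_rel * chord                   # phase is k * optical_path
    return np.degrees(deviation), optical_path

dev, opl = transmitted_ray(impact=0.5, radius=1.0, n_rel=1.33)
print(f"deviation = {dev:.2f} deg, optical path = {opl:.3f} radii")
```

A coated particle adds further refraction and reflection events at the core-coating interface, but each emerging ray is traced with the same Snell-plus-path-length bookkeeping.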
A New Code SORD for Simulation of Polarized Light Scattering in the Earth Atmosphere
NASA Technical Reports Server (NTRS)
Korkin, Sergey; Lyapustin, Alexei; Sinyuk, Aliaksandr; Holben, Brent
2016-01-01
We report a new publicly available radiative transfer (RT) code for numerical simulation of polarized light scattering in the plane-parallel atmosphere of the Earth. Using 44 benchmark tests, we demonstrate the high accuracy of the new RT code, SORD (Successive ORDers of scattering). We describe the capabilities of SORD and show the run time for each test on two different machines. At present, SORD is intended to work as part of the Aerosol Robotic NETwork (AERONET) inversion algorithm. For natural integration with the AERONET software, SORD is coded in Fortran 90/95. The code is available by email request from the corresponding (first) author or from ftp://climate1.gsfc.nasa.gov/skorkin/SORD/.
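The successive-orders-of-scattering idea can be illustrated with a toy Neumann-series iteration (this sketch is ours, not the SORD code, and the redistribution matrix is an arbitrary illustrative choice): the radiation field I solving I = S + ω K I is built up by summing scattering orders, which converges when the single-scattering albedo ω is below 1:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
K = rng.random((n, n))
K /= K.sum(axis=1, keepdims=True)  # row-stochastic "redistribution" matrix
omega = 0.6                        # single-scattering albedo < 1 => convergence
S = rng.random(n)                  # first-order (single-scattered) source

I = np.zeros(n)
order = S.copy()                   # order-1 contribution
for _ in range(60):                # accumulate orders 1, 2, 3, ...
    I += order
    order = omega * K @ order      # next scattering order

I_exact = np.linalg.solve(np.eye(n) - omega * K, S)
print("max |successive orders - direct solve| =", np.abs(I - I_exact).max())
```

Each iterate adds one more scattering event, so truncating the sum early yields exactly the order-by-order decomposition that makes the method attractive for diagnostics.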
Research on Laser Marking Speed Optimization by Using Genetic Algorithm
Wang, Dongyun; Yu, Qiwei; Zhang, Yu
2015-01-01
The laser marking machine is the most common coding equipment on product packaging lines. However, the speed of laser marking has become a bottleneck of production. In order to remove this bottleneck, a new method based on a genetic algorithm was designed. On the basis of this algorithm, a controller was designed, and simulations and experiments were performed. The results show that using this algorithm improved laser marking efficiency by 25%. PMID:25955831
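A minimal genetic-algorithm sketch of the kind of sequencing problem behind marking speed (the operators, parameters, and point set below are illustrative assumptions, not the paper's controller): order a set of marking points to shorten the total travel of the beam.

```python
import random

random.seed(1)
PTS = [(random.random(), random.random()) for _ in range(12)]  # marking points

def length(order):
    """Total travel distance for visiting points in the given order."""
    return sum(((PTS[a][0] - PTS[b][0]) ** 2 + (PTS[a][1] - PTS[b][1]) ** 2) ** 0.5
               for a, b in zip(order, order[1:]))

def crossover(p1, p2):
    """Order crossover: keep a slice of p1, fill the rest in p2's order."""
    i, j = sorted(random.sample(range(len(p1)), 2))
    hole = set(p1[i:j])
    tail = [g for g in p2 if g not in hole]
    return tail[:i] + p1[i:j] + tail[i:]

def mutate(order):
    i, j = random.sample(range(len(order)), 2)  # swap two positions
    order[i], order[j] = order[j], order[i]

pop = [random.sample(range(len(PTS)), len(PTS)) for _ in range(40)]
start = min(length(o) for o in pop)
for _ in range(200):
    pop.sort(key=length)
    elite = pop[:10]                       # keep the 10 shortest tours
    children = []
    while len(children) < 30:
        child = crossover(*random.sample(elite, 2))
        if random.random() < 0.3:
            mutate(child)
        children.append(child)
    pop = elite + children

best = min(length(o) for o in pop)
print(f"path length: {start:.3f} -> {best:.3f}")
```

The order crossover preserves permutation validity, so every candidate remains a legal visiting sequence; elitism guarantees the best tour never gets worse between generations.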