Sample records for regularized Richardson-Lucy algorithm

  1. Using deconvolution to improve the metrological performance of the grid method

    NASA Astrophysics Data System (ADS)

    Grédiac, Michel; Sur, Frédéric; Badulescu, Claudiu; Mathias, Jean-Denis

    2013-06-01

    The use of various deconvolution techniques to enhance strain maps obtained with the grid method is addressed in this study. Since phase derivative maps obtained with the grid method can be approximated by their actual counterparts convolved by the envelope of the kernel used to extract phases and phase derivatives, non-blind restoration techniques can be used to perform deconvolution. Six deconvolution techniques are presented and employed to restore a synthetic phase derivative map, namely direct deconvolution, regularized deconvolution, the Richardson-Lucy algorithm and Wiener filtering, the last two with two variants concerning their practical implementations. The results obtained show that the noise that corrupts the grid images must be thoroughly taken into account to limit its effect on the deconvolved strain maps. The difficulty here is that the noise on the grid image yields a spatially correlated noise on the strain maps. In particular, numerical experiments on synthetic data show that direct and regularized deconvolutions are unstable when noisy data are processed. The same remark holds when Wiener filtering is employed without taking into account noise autocorrelation. On the other hand, the Richardson-Lucy algorithm and Wiener filtering with noise autocorrelation provide deconvolved maps where the impact of noise remains controlled within a certain limit. It is also observed that the latter technique outperforms the Richardson-Lucy algorithm. Two short examples of actual strain field restoration are finally shown. They deal with asphalt and shape memory alloy specimens. The benefits and limitations of deconvolution are presented and discussed in these two cases. The main conclusion is that strain maps are correctly deconvolved when the signal-to-noise ratio is high and that the actual noise in the strain maps must be more specifically characterized than in the current study to address higher noise levels with Wiener filtering.
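
    As a rough illustration of the kind of comparison described above (not the authors' grid-method pipeline; the synthetic map, kernel width, noise level and the Wiener balance value are illustrative assumptions), a short scikit-image sketch:

    ```python
    import numpy as np
    from scipy.signal import convolve2d
    from skimage import restoration

    rng = np.random.default_rng(0)

    # Synthetic "phase derivative" map: an offset ramp plus a localized bump.
    y, x = np.mgrid[0:128, 0:128]
    truth = 0.01 + 0.002 * x + 0.01 * np.exp(-((x - 64) ** 2 + (y - 64) ** 2) / 200.0)

    # Gaussian window standing in for the envelope of the phase-extraction kernel.
    k = np.arange(-7, 8)
    g = np.exp(-k ** 2 / (2 * 2.0 ** 2))
    psf = np.outer(g, g)
    psf /= psf.sum()

    blurred = convolve2d(truth, psf, mode="same", boundary="symm")
    noisy = blurred + rng.normal(0, 1e-4, blurred.shape)  # additive noise stand-in

    rl = restoration.richardson_lucy(noisy, psf, 30, clip=False)
    wi = restoration.wiener(noisy, psf, balance=0.01)

    for name, est in (("Richardson-Lucy", rl), ("Wiener", wi)):
        print(name, "RMSE:", np.sqrt(np.mean((est - truth) ** 2)))
    ```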

  2. Restoration of motion blurred image with Lucy-Richardson algorithm

    NASA Astrophysics Data System (ADS)

    Li, Jing; Liu, Zhao Hui; Zhou, Liang

    2015-10-01

    Images are blurred by relative motion between the camera and the object of interest. In this paper, we analyze the process of motion blurring and demonstrate a restoration method based on the Lucy-Richardson algorithm. The blur extent and angle can be estimated by a Radon transform algorithm and an auto-correlation function, respectively, from which the point spread function (PSF) of the motion-blurred image can be obtained. With the help of the obtained PSF, the Lucy-Richardson restoration algorithm is then applied to motion-blurred images with different blur extents, spatial resolutions and signal-to-noise ratios (SNRs), and its effectiveness is evaluated by structural similarity (SSIM). The studies show that, first, for an image with a spatial frequency of 0.2 per pixel, the modulation transfer function (MTF) of the restored images remains above 0.7 when the blur extent is no larger than 13 pixels; that is, the method compensates low-frequency information of the image while attenuating high-frequency information. Second, we find that the method is more effective when the product of the blur extent and spatial frequency is smaller than 3.75. Finally, by calculating the MTF of the restored image, the Lucy-Richardson algorithm is found to be insensitive to Gaussian noise with variance no larger than 0.1.
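
    The restoration step described above can be sketched as follows; the blur length and angle are taken as given here (the Radon-transform and autocorrelation estimation steps are assumed to have produced them), and the test image and parameter values are illustrative:

    ```python
    import numpy as np
    from scipy.ndimage import rotate
    from scipy.signal import convolve2d
    from skimage import data, img_as_float, restoration

    def motion_psf(length, angle_deg):
        """Linear motion-blur kernel: a horizontal line of `length` pixels, rotated."""
        psf = np.zeros((length, length))
        psf[length // 2, :] = 1.0
        psf = rotate(psf, angle_deg, reshape=False, order=1)
        return psf / psf.sum()

    image = img_as_float(data.camera())
    psf = motion_psf(length=13, angle_deg=30.0)    # assumed to come from the estimation step
    blurred = convolve2d(image, psf, mode="same", boundary="symm")

    restored = restoration.richardson_lucy(blurred, psf, 30)
    print("blurred RMSE :", np.sqrt(np.mean((blurred - image) ** 2)))
    print("restored RMSE:", np.sqrt(np.mean((restored - image) ** 2)))
    ```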

  3. Point spread functions and deconvolution of ultrasonic images.

    PubMed

    Dalitz, Christoph; Pohle-Fröhlich, Regina; Michalk, Thorsten

    2015-03-01

    This article investigates the restoration of ultrasonic pulse-echo C-scan images by means of deconvolution with a point spread function (PSF). The deconvolution concept from linear system theory (LST) is linked to the wave equation formulation of the imaging process, and an analytic formula for the PSF of planar transducers is derived. For this analytic expression, different numerical and analytic approximation schemes for evaluating the PSF are presented. By comparing simulated images with measured C-scan images, we demonstrate that the assumptions of LST in combination with our formula for the PSF are a good model for the pulse-echo imaging process. To reconstruct the object from a C-scan image, we compare different deconvolution schemes: the Wiener filter, the ForWaRD algorithm, and the Richardson-Lucy algorithm. The best results are obtained with the Richardson-Lucy algorithm with total variation regularization. For distances greater than or equal to twice the near field distance, our experiments show that the numerically computed PSF can be replaced with a simple closed analytic term based on a far field approximation.
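
    The total-variation-regularized Richardson-Lucy variant, reported above as giving the best results, can be sketched with a common RL-TV update; this is a generic formulation rather than the authors' implementation, and the regularization weight and iteration count are illustrative:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def rl_tv(blurred, psf, n_iter=50, lam=0.002, eps=1e-8):
        """Richardson-Lucy with a total-variation damping term on the estimate."""
        blurred = np.asarray(blurred, dtype=float)
        psf_mirror = psf[::-1, ::-1]
        x = np.full_like(blurred, blurred.mean())
        for _ in range(n_iter):
            # Standard RL correction factor: correlate(data / (psf * x), psf).
            hx = fftconvolve(x, psf, mode="same")
            correction = fftconvolve(blurred / np.maximum(hx, eps), psf_mirror, mode="same")
            # TV term: divergence of the normalized gradient of the current estimate.
            gy, gx = np.gradient(x)
            norm = np.sqrt(gx ** 2 + gy ** 2) + eps
            div = np.gradient(gx / norm, axis=1) + np.gradient(gy / norm, axis=0)
            x = x * correction / np.maximum(1.0 - lam * div, eps)
        return x
    ```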

  4. Ionospheric-thermospheric UV tomography: 1. Image space reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Dymond, K. F.; Budzien, S. A.; Hei, M. A.

    2017-03-01

    We present and discuss two algorithms of the class known as Image Space Reconstruction Algorithms (ISRAs) that we are applying to the solution of large-scale ionospheric tomography problems. ISRAs have several desirable features that make them useful for ionospheric tomography. In addition to producing nonnegative solutions, ISRAs are amenable to sparse-matrix formulations and are fast, stable, and robust. We present the results of our studies of two types of ISRA: the Least Squares Positive Definite and the Richardson-Lucy algorithms. We compare their performance to the Multiplicative Algebraic Reconstruction and Conjugate Gradient Least Squares algorithms. We then discuss the use of regularization in these algorithms and present our new approach to regularization, which is based on a partial differential equation.

  5. A joint Richardson-Lucy deconvolution algorithm for the reconstruction of multifocal structured illumination microscopy data.

    PubMed

    Ströhl, Florian; Kaminski, Clemens F

    2015-01-16

    We demonstrate the reconstruction of images obtained by multifocal structured illumination microscopy, MSIM, using a joint Richardson-Lucy, jRL-MSIM, deconvolution algorithm, which is based on an underlying widefield image-formation model. The method is efficient in the suppression of out-of-focus light and greatly improves image contrast and resolution. Furthermore, it is particularly well suited for the processing of noise corrupted data. The principle is verified on simulated as well as experimental data and a comparison of the jRL-MSIM approach with the standard reconstruction procedure, which is based on image scanning microscopy, ISM, is made. Our algorithm is efficient and freely available in a user friendly software package.

  6. Image quality improvement in optical coherence tomography using Lucy-Richardson deconvolution algorithm.

    PubMed

    Hojjatoleslami, S A; Avanaki, M R N; Podoleanu, A Gh

    2013-08-10

    Optical coherence tomography (OCT) has the potential for skin tissue characterization due to its high axial and transverse resolution and its acceptable depth penetration. In practice, OCT cannot reach the theoretical resolutions due to imperfections of some of the components used. One way to improve the quality of the images is to estimate the point spread function (PSF) of the OCT system and deconvolve it from the output images. In this paper, we investigate the use of solid phantoms to estimate the PSF of the imaging system. We then utilize an iterative Lucy-Richardson deconvolution algorithm to improve the quality of the images. The performance of the proposed algorithm is demonstrated on OCT images acquired from a variety of samples, such as epoxy-resin phantoms, fingertip skin and basaloid larynx and eyelid tissues.
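
    A hedged sketch of the general idea of estimating an empirical PSF from a phantom containing an isolated point-like reflector and then deconvolving with it; the crop size, background handling and synthetic test data below are assumptions, not the authors' procedure:

    ```python
    import numpy as np
    from skimage import restoration

    def psf_from_point_target(phantom_image, half_size=8):
        """Crop a window around the brightest pixel and normalize it to unit sum."""
        iy, ix = np.unravel_index(np.argmax(phantom_image), phantom_image.shape)
        win = phantom_image[iy - half_size:iy + half_size + 1,
                            ix - half_size:ix + half_size + 1].astype(float)
        win -= win.min()                      # crude background removal
        return win / win.sum()

    # Synthetic stand-in for a phantom scan of a point-like target.
    rng = np.random.default_rng(1)
    yy, xx = np.mgrid[-32:32, -32:32]
    phantom_scan = np.exp(-(xx ** 2 + yy ** 2) / (2 * 3.0 ** 2)) + rng.normal(0, 1e-4, (64, 64))

    psf = psf_from_point_target(phantom_scan)
    # restored = restoration.richardson_lucy(oct_image, psf, 25, clip=False)  # oct_image: assumed input
    print("estimated PSF shape:", psf.shape)
    ```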

  7. A Novel Richardson-Lucy Model with Dictionary Basis and Spatial Regularization for Isolating Isotropic Signals.

    PubMed

    Xu, Tiantian; Feng, Yuanjing; Wu, Ye; Zeng, Qingrun; Zhang, Jun; He, Jianzhong; Zhuge, Qichuan

    2017-01-01

    Diffusion-weighted magnetic resonance imaging is a non-invasive imaging method that has been increasingly used in neuroscience imaging over the last decade. Partial volume effects (PVEs) exist in the sampled signal for many physical and practical reasons, and they lead to inaccurate fiber imaging. We overcome the influence of PVEs by separating the isotropic signal from the diffusion-weighted signal, which provides a more accurate estimation of fiber orientations. In this work, we use a novel response function (RF) and the corresponding fiber orientation distribution function (fODF) to construct different signal models, in which the fODF is represented using a dictionary basis function. We then put forward a new index, Piso, which is a part of the fODF, to quantify white and gray matter. The classic Richardson-Lucy (RL) model is commonly used in the field of digital image processing to solve the spherical deconvolution problem, which is highly ill-posed for least-squares algorithms. Here, we propose an innovative model integrating the RL model with spatial regularization to solve the proposed dual models, which improves noise resistance and imaging accuracy. Experimental results on simulated and real data show that the proposed method, which we call iRL, can robustly reconstruct a more accurate fODF, and that the quantitative index Piso performs better than fractional anisotropy and generalized fractional anisotropy.

  8. A Novel Richardson-Lucy Model with Dictionary Basis and Spatial Regularization for Isolating Isotropic Signals

    PubMed Central

    Feng, Yuanjing; Wu, Ye; Zeng, Qingrun; Zhang, Jun; He, Jianzhong; Zhuge, Qichuan

    2017-01-01

    Diffusion-weighted magnetic resonance imaging is a non-invasive imaging method that has been increasingly used in neuroscience imaging over the last decade. Partial volume effects (PVEs) exist in the sampled signal for many physical and practical reasons, and they lead to inaccurate fiber imaging. We overcome the influence of PVEs by separating the isotropic signal from the diffusion-weighted signal, which provides a more accurate estimation of fiber orientations. In this work, we use a novel response function (RF) and the corresponding fiber orientation distribution function (fODF) to construct different signal models, in which the fODF is represented using a dictionary basis function. We then put forward a new index, Piso, which is a part of the fODF, to quantify white and gray matter. The classic Richardson-Lucy (RL) model is commonly used in the field of digital image processing to solve the spherical deconvolution problem, which is highly ill-posed for least-squares algorithms. Here, we propose an innovative model integrating the RL model with spatial regularization to solve the proposed dual models, which improves noise resistance and imaging accuracy. Experimental results on simulated and real data show that the proposed method, which we call iRL, can robustly reconstruct a more accurate fODF, and that the quantitative index Piso performs better than fractional anisotropy and generalized fractional anisotropy. PMID:28081561

  9. Spectral restoration in high resolution electron energy loss spectroscopy based on iterative semi-blind Lucy-Richardson algorithm applied to rutile surfaces.

    PubMed

    Lazzari, Rémi; Li, Jingfeng; Jupille, Jacques

    2015-01-01

    A new spectral restoration algorithm for reflection electron energy loss spectra is proposed. It is based on the maximum likelihood principle as implemented in the iterative Lucy-Richardson approach. Resolution is enhanced and the point spread function recovered in a semi-blind way by forcing cyclically the zero loss to converge towards a Dirac peak. Synthetic phonon spectra of TiO2 are used as a test bed to discuss resolution enhancement, convergence benefit, stability towards noise, and apparatus function recovery. Attention is focused on the interplay between spectral restoration and quasi-elastic broadening due to free carriers. A resolution enhancement by a factor of up to 6 on the elastic peak width can be obtained on experimental spectra of TiO2(110) and helps reveal mixed phonon/plasmon excitations.

  10. Scaled Heavy-Ball Acceleration of the Richardson-Lucy Algorithm for 3D Microscopy Image Restoration.

    PubMed

    Wang, Hongbin; Miller, Paul C

    2014-02-01

    The Richardson-Lucy algorithm is one of the most important algorithms in image deconvolution. However, a drawback is its slow convergence. A significant acceleration was obtained using the technique proposed by Biggs and Andrews (BA), which is implemented in the deconvlucy function of the MATLAB Image Processing Toolbox. The BA method was developed heuristically with no proof of convergence. In this paper, we introduce the heavy-ball (H-B) method for Poisson data optimization and extend it to a scaled H-B method, which includes the BA method as a special case. The method has a proven convergence rate of O(k^(-2)), where k is the number of iterations. We demonstrate the superior convergence performance, by a speedup factor of five, of the scaled H-B method on both synthetic and real 3D images.
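
    For illustration only, here is a plain Richardson-Lucy update wrapped with a simple heavy-ball (momentum) extrapolation; the paper's scaled heavy-ball method and the Biggs-Andrews scheme in MATLAB's deconvlucy are more elaborate than this sketch, and the momentum coefficient is an assumption:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def rl_momentum(blurred, psf, n_iter=30, beta=0.5, eps=1e-8):
        """Richardson-Lucy updates applied at a heavy-ball extrapolated point."""
        blurred = np.asarray(blurred, dtype=float)
        psf_mirror = psf[::-1, ::-1]
        x_prev = np.full_like(blurred, blurred.mean())
        x = x_prev.copy()
        for _ in range(n_iter):
            # Extrapolate with momentum, keeping the estimate non-negative.
            y = np.maximum(x + beta * (x - x_prev), eps)
            hx = fftconvolve(y, psf, mode="same")
            correction = fftconvolve(blurred / np.maximum(hx, eps), psf_mirror, mode="same")
            x_prev, x = x, y * correction
        return x
    ```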

  11. Spectral restoration in high resolution electron energy loss spectroscopy based on iterative semi-blind Lucy-Richardson algorithm applied to rutile surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lazzari, Rémi, E-mail: remi.lazzari@insp.jussieu.fr; Li, Jingfeng, E-mail: jingfeng.li@insp.jussieu.fr; Jupille, Jacques, E-mail: jacques.jupille@insp.jussieu.fr

    2015-01-15

    A new spectral restoration algorithm for reflection electron energy loss spectra is proposed. It is based on the maximum likelihood principle as implemented in the iterative Lucy-Richardson approach. Resolution is enhanced and the point spread function recovered in a semi-blind way by forcing cyclically the zero loss to converge towards a Dirac peak. Synthetic phonon spectra of TiO2 are used as a test bed to discuss resolution enhancement, convergence benefit, stability towards noise, and apparatus function recovery. Attention is focused on the interplay between spectral restoration and quasi-elastic broadening due to free carriers. A resolution enhancement by a factor of up to 6 on the elastic peak width can be obtained on experimental spectra of TiO2(110) and helps reveal mixed phonon/plasmon excitations.

  12. Optimal 2D-SIM reconstruction by two filtering steps with Richardson-Lucy deconvolution.

    PubMed

    Perez, Victor; Chang, Bo-Jui; Stelzer, Ernst Hans Karl

    2016-11-16

    Structured illumination microscopy relies on reconstruction algorithms to yield super-resolution images. Artifacts can arise in the reconstruction and affect the image quality. Current reconstruction methods involve a parametrized apodization function and a Wiener filter. Empirically tuning the parameters in these functions can minimize artifacts, but such an approach is subjective and produces volatile results. We present a robust and objective method that yields optimal results by two straightforward filtering steps with Richardson-Lucy-based deconvolutions. We provide a resource to identify artifacts in 2D-SIM images by analyzing two main reasons for artifacts, out-of-focus background and a fluctuating reconstruction spectrum. We show how the filtering steps improve images of test specimens, microtubules, yeast and mammalian cells.

  13. Optimal 2D-SIM reconstruction by two filtering steps with Richardson-Lucy deconvolution

    NASA Astrophysics Data System (ADS)

    Perez, Victor; Chang, Bo-Jui; Stelzer, Ernst Hans Karl

    2016-11-01

    Structured illumination microscopy relies on reconstruction algorithms to yield super-resolution images. Artifacts can arise in the reconstruction and affect the image quality. Current reconstruction methods involve a parametrized apodization function and a Wiener filter. Empirically tuning the parameters in these functions can minimize artifacts, but such an approach is subjective and produces volatile results. We present a robust and objective method that yields optimal results by two straightforward filtering steps with Richardson-Lucy-based deconvolutions. We provide a resource to identify artifacts in 2D-SIM images by analyzing two main reasons for artifacts, out-of-focus background and a fluctuating reconstruction spectrum. We show how the filtering steps improve images of test specimens, microtubules, yeast and mammalian cells.

  14. Richardson-Lucy/maximum likelihood image restoration algorithm for fluorescence microscopy: further testing.

    PubMed

    Holmes, T J; Liu, Y H

    1989-11-15

    A maximum likelihood based iterative algorithm adapted from nuclear medicine imaging for noncoherent optical imaging was presented in a previous publication with some initial computer-simulation testing. This algorithm is identical in form to that previously derived in a different way by W. H. Richardson "Bayesian-Based Iterative Method of Image Restoration," J. Opt. Soc. Am. 62, 55-59 (1972) and L. B. Lucy "An Iterative Technique for the Rectification of Observed Distributions," Astron. J. 79, 745-765 (1974). Foreseen applications include superresolution and 3-D fluorescence microscopy. This paper presents further simulation testing of this algorithm and a preliminary experiment with a defocused camera. The simulations show quantified resolution improvement as a function of iteration number, and they show qualitatively the trend in limitations on restored resolution when noise is present in the data. Also shown are results of a simulation in restoring missing-cone information for 3-D imaging. Conclusions are in support of the feasibility of using these methods with real systems, while computational cost and timing estimates indicate that it should be realistic to implement these methods. It is suggested in the Appendix that future extensions to the maximum likelihood based derivation of this algorithm will address some of the limitations that are experienced with the nonextended form of the algorithm presented here.
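
    For reference, the multiplicative update that the Richardson, Lucy and Poisson maximum-likelihood (EM) derivations all arrive at can be written as follows; the notation here is ours, not the paper's:

    ```latex
    % Richardson-Lucy / EM update for Poisson data:
    %   x_j : current object estimate at pixel j
    %   b_i : measured (blurred, noisy) image at pixel i
    %   h_ij: point spread function linking object pixel j to image pixel i
    \[
      x_j^{(k+1)} = x_j^{(k)} \sum_i \frac{h_{ij}\, b_i}{\sum_l h_{il}\, x_l^{(k)}},
      \qquad \text{assuming } \sum_i h_{ij} = 1 .
    \]
    ```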

  15. A joint Richardson-Lucy deconvolution algorithm for the reconstruction of multifocal structured illumination microscopy data

    NASA Astrophysics Data System (ADS)

    Ströhl, Florian; Kaminski, Clemens F.

    2015-03-01

    We demonstrate the reconstruction of images obtained by multifocal structured illumination microscopy, MSIM, using a joint Richardson-Lucy, jRL-MSIM, deconvolution algorithm, which is based on an underlying widefield image-formation model. The method is efficient in the suppression of out-of-focus light and greatly improves image contrast and resolution. Furthermore, it is particularly well suited for the processing of noise corrupted data. The principle is verified on simulated as well as experimental data and a comparison of the jRL-MSIM approach with the standard reconstruction procedure, which is based on image scanning microscopy, ISM, is made. Our algorithm is efficient and freely available in a user friendly software package.

  16. Comparison of image deconvolution algorithms on simulated and laboratory infrared images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Proctor, D.

    1994-11-15

    We compare Maximum Likelihood, Maximum Entropy, Accelerated Lucy-Richardson, Weighted Goodness of Fit, and Pixon reconstructions of simple scenes as a function of signal-to-noise ratio for simulated images with randomly generated noise. Reconstruction results of infrared images taken with the TAISIR (Temperature and Imaging System InfraRed) are also discussed.

  17. A new method to analyze UV stellar occultation data

    NASA Astrophysics Data System (ADS)

    Evdokimova, D.; Baggio, L.; Montmessin, F.; Belyaev, D.; Bertaux, J.-L.

    2017-09-01

    In this paper we present a new method of data processing and a classification of different types of stray light in SPICAV UV stellar occultations. The method was developed on the basis of the Richardson-Lucy algorithm and includes: (a) a deconvolution process for the measured star light and (b) separation of extra emissions registered by the spectrometer.

  18. A Comparative Study of Different Deblurring Methods Using Filters

    NASA Astrophysics Data System (ADS)

    Srimani, P. K.; Kavitha, S.

    2011-12-01

    This paper undertakes the study of restoring Gaussian-blurred images using four deblurring techniques, viz. the Wiener filter, the regularized filter, the Lucy-Richardson deconvolution algorithm and the blind deconvolution algorithm, given information about the point spread function (PSF) that corrupted the blurred image. The techniques are applied to a scanned image of a seven-month fetus in the womb and compared with one another, so as to choose the best technique for restoring or deblurring the image. The paper also studies restoration of the blurred image using a regular filter (RF) with no information about the point spread function (PSF), by applying the same four techniques after estimating a guess of the PSF. The number of iterations and the weight threshold needed to choose the best guesses for the restored or deblurred image are determined for these techniques.

  19. Toward 10 meV electron energy-loss spectroscopy resolution for plasmonics.

    PubMed

    Bellido, Edson P; Rossouw, David; Botton, Gianluigi A

    2014-06-01

    Energy resolution is one of the most important parameters in electron energy-loss spectroscopy. This is especially true for measurement of surface plasmon resonances, where high energy resolution is crucial for resolving individual resonance peaks, in particular close to the zero-loss peak. In this work, we improve the energy resolution of electron energy-loss spectra of surface plasmon resonances, acquired with a monochromated beam in a scanning transmission electron microscope, by the use of the Richardson-Lucy deconvolution algorithm. We test the performance of the algorithm in a simulated spectrum and then apply it to experimental energy-loss spectra of a lithographically patterned silver nanorod. By reducing the point spread function of the spectrum, we are able to identify low-energy surface plasmon peaks in the spectra, more localized features, and higher contrast in surface plasmon energy-filtered maps. Thanks to the combination of a monochromated beam and the Richardson-Lucy algorithm, we improve the effective resolution down to 30 meV, with evidence of success down to 10 meV resolution for losses below 1 eV. We also propose, implement, and test two methods to limit the number of iterations in the algorithm. The first method is based on noise measurement and analysis, while in the second we monitor the change of slope in the deconvolved spectrum.

  20. Richardson-Lucy deblurring for the star scene under a thinning motion path

    NASA Astrophysics Data System (ADS)

    Su, Laili; Shao, Xiaopeng; Wang, Lin; Wang, Haixin; Huang, Yining

    2015-05-01

    This paper puts emphasis on how to model and correct image blur that arises from a camera's ego motion while observing a distant star scene. Given the significance of accurate estimation of the point spread function (PSF), a new method is employed to obtain the blur kernel by thinning the star motion path. In particular, we present how the blurred star image can be corrected to reconstruct the clear scene with a thinning motion blur model that describes the camera's path. Building the blur kernel from this thinned motion path is more effective at modeling the spatially variant motion blur introduced by the camera's ego motion than conventional blind estimation with kernel-based PSF parameterization. To obtain the reconstructed image, an improved thinning algorithm is first used to obtain the star point trajectory, so as to extract the blur kernel of the motion-blurred star image. We then detail how the motion blur model can be incorporated into the Richardson-Lucy (RL) deblurring algorithm, which reveals its overall effectiveness. In addition, compared with the conventionally estimated blur kernel, experimental results show that the proposed method of using a thinning algorithm to obtain the motion blur kernel has lower complexity, higher efficiency and better accuracy, which contributes to better restoration of the motion-blurred star images.

  1. A Robust Gold Deconvolution Approach for LiDAR Waveform Data Processing to Characterize Vegetation Structure

    NASA Astrophysics Data System (ADS)

    Zhou, T.; Popescu, S. C.; Krause, K.; Sheridan, R.; Ku, N. W.

    2014-12-01

    Increasing attention has been paid in the remote sensing community to the next generation Light Detection and Ranging (lidar) waveform data systems for extracting information on topography and the vertical structure of vegetation. However, processing waveform lidar data raises some challenges compared to analyzing discrete return data. The overall goal of this study was to present a robust de-convolution algorithm, the Gold algorithm, used to de-convolve waveforms in a lidar dataset acquired within a 60 x 60 m study area located in the Harvard Forest in Massachusetts. The waveform lidar data were collected by the National Ecological Observatory Network (NEON). Specific objectives were to: (1) explore advantages and limitations of various waveform processing techniques to derive topography and canopy height information; (2) develop and implement a novel de-convolution algorithm, the Gold algorithm, to extract elevation and canopy metrics; and (3) compare results and assess accuracy. We modeled lidar waveforms with a mixture of Gaussian functions using the nonlinear least squares (NLS) algorithm implemented in R and derived a Digital Terrain Model (DTM) and canopy height. We compared our waveform-derived topography and canopy height measurements using the Gold de-convolution algorithm to results using the Richardson-Lucy algorithm. Our findings show that the Gold algorithm performed better than the Richardson-Lucy algorithm in terms of recovering the hidden echoes and detecting false echoes for generating a DTM, which indicates that the Gold algorithm could potentially be applied to processing of waveform lidar data to derive information on terrain elevation and canopy characteristics.
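
    The Gaussian-mixture modeling step mentioned above (fitting each waveform with a sum of Gaussians by nonlinear least squares) can be sketched in Python as a stand-in for the authors' R-based NLS fitting; the synthetic waveform and the fixed two-component model are illustrative assumptions:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def two_gaussians(t, a1, mu1, s1, a2, mu2, s2):
        """Sum of two Gaussian pulses, a toy stand-in for a lidar return waveform."""
        return (a1 * np.exp(-(t - mu1) ** 2 / (2 * s1 ** 2))
                + a2 * np.exp(-(t - mu2) ** 2 / (2 * s2 ** 2)))

    t = np.linspace(0, 100, 400)
    rng = np.random.default_rng(2)
    waveform = two_gaussians(t, 1.0, 35, 4, 0.6, 55, 6) + rng.normal(0, 0.02, t.size)

    p0 = [1, 30, 5, 0.5, 60, 5]                      # rough initial guesses
    params, _ = curve_fit(two_gaussians, t, waveform, p0=p0)
    print("fitted (amplitude, centre, width) pairs:", params.reshape(2, 3))
    ```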

  2. VizieR Online Data Catalog: Spatial deconvolution code (Quintero Noda+, 2015)

    NASA Astrophysics Data System (ADS)

    Quintero Noda, C.; Asensio Ramos, A.; Orozco Suarez, D.; Ruiz Cobo, B.

    2015-05-01

    This deconvolution method follows the scheme presented in Ruiz Cobo & Asensio Ramos (2013A&A...549L...4R). The Stokes parameters are projected onto a few spectral eigenvectors and the ensuing maps of coefficients are deconvolved using a standard Lucy-Richardson algorithm. This introduces a stabilization because the PCA filtering reduces the amount of noise. (1 data file).
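
    A hedged sketch of the scheme described above: project the spectra onto a few principal components and Richardson-Lucy deconvolve each coefficient map before re-projecting; the array layout, PSF argument and the shift used to keep the maps non-negative are illustrative assumptions, not the catalogued code:

    ```python
    import numpy as np
    from skimage import restoration

    def deconvolve_spectral_cube(cube, psf, n_components=5, n_iter=20):
        """cube: (ny, nx, nlambda) Stokes-parameter maps; returns the deconvolved cube."""
        ny, nx, nl = cube.shape
        flat = cube.reshape(-1, nl).astype(float)
        mean = flat.mean(axis=0)
        # PCA via SVD of the mean-subtracted spectra.
        u, s, vt = np.linalg.svd(flat - mean, full_matrices=False)
        coeffs = (u[:, :n_components] * s[:n_components]).reshape(ny, nx, n_components)
        for k in range(n_components):
            c = coeffs[:, :, k]
            shift = c.min() - 1e-6           # coefficient maps may be negative; RL needs >= 0
            c_dec = restoration.richardson_lucy(c - shift, psf, n_iter, clip=False)
            coeffs[:, :, k] = c_dec + shift
        recon = coeffs.reshape(-1, n_components) @ vt[:n_components] + mean
        return recon.reshape(ny, nx, nl)
    ```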

  3. Super-resolution technique for CW lidar using Fourier transform reordering and Richardson-Lucy deconvolution.

    PubMed

    Campbell, Joel F; Lin, Bing; Nehrir, Amin R; Harrison, F Wallace; Obland, Michael D

    2014-12-15

    An interpolation method is described for range measurements of high precision altimetry with repeating intensity modulated continuous wave (IM-CW) lidar waveforms using binary phase shift keying (BPSK), where the range profile is determined by means of a cross-correlation between the digital form of the transmitted signal and the digitized return signal collected by the lidar receiver. This method uses reordering of the array elements in the frequency domain to convert a repeating synthetic pulse signal to a single, highly interpolated pulse. This is then processed further using Richardson-Lucy deconvolution to greatly enhance the resolution of the pulse. We show that the sampling resolution and pulse width can be improved by about two orders of magnitude using the signal processing algorithms presented, thus breaking the fundamental resolution limit for BPSK modulation of a particular bandwidth and bit rate. We demonstrate the usefulness of this technique for determining cloud and tree canopy thicknesses far beyond this fundamental limit in a lidar not designed for this purpose.

  4. Study of the performance of image restoration under different wavefront aberrations

    NASA Astrophysics Data System (ADS)

    Wang, Xinqiu; Hu, Xinqi

    2016-10-01

    Image restoration is an effective way to improve the quality of images degraded by wave-front aberrations. If the wave-front aberration is too large, however, the performance of the image restoration will not be good. In this paper, the relationship between the performance of image restoration and the degree of wave-front aberration is studied. A set of different wave-front aberrations is constructed by Zernike polynomials, and the corresponding PSF under white-light illumination is calculated. A set of blurred images is then obtained through convolution methods. Next we recover the images with the regularized Richardson-Lucy algorithm and use the RMS difference between the original image and the corresponding deblurred image to evaluate the quality of restoration. Consequently, we determine the range of wave-front errors in which the recovered images are acceptable.

  5. Image restoration using aberration taken by a Hartmann wavefront sensor on extended object, towards real-time deconvolution

    NASA Astrophysics Data System (ADS)

    Darudi, Ahmad; Bakhshi, Hadi; Asgari, Reza

    2015-05-01

    In this paper we present the results of image restoration using data taken by a Hartmann sensor. The aberration is measured by a Hartmann sensor in which the object itself is used as the reference. The Point Spread Function (PSF) is then simulated and used for image reconstruction with the Lucy-Richardson technique. A method is presented for quantitative evaluation of the Lucy-Richardson deconvolution technique.

  6. A localized Richardson-Lucy algorithm for fiber orientation estimation in high angular resolution diffusion imaging.

    PubMed

    Liu, Xiaozheng; Yuan, Zhenming; Guo, Zhongwei; Xu, Dongrong

    2015-05-01

    Diffusion tensor imaging is widely used for studying neural fiber trajectories in white matter and for quantifying changes in tissue using diffusion properties at each voxel in the brain. To better model the nature of crossing fibers within complex architectures, rather than using a simplified tensor model that assumes only a single fiber direction at each image voxel, a model mixing multiple diffusion tensors is used to profile diffusion signals from high angular resolution diffusion imaging (HARDI) data. Based on the HARDI signal and a multiple tensors model, spherical deconvolution methods have been developed to overcome the limitations of the diffusion tensor model when resolving crossing fibers. The Richardson-Lucy algorithm is a popular spherical deconvolution method used in previous work. However, it is based on a Gaussian distribution, while HARDI data are always very noisy, and the distribution of HARDI data follows a Rician distribution. This current work aims to present a novel solution to address these issues. By simultaneously considering both the Rician bias and neighbor correlation in HARDI data, the authors propose a localized Richardson-Lucy (LRL) algorithm to estimate fiber orientations for HARDI data. The proposed method can simultaneously reduce noise and correct the Rician bias. Mean angular error (MAE) between the estimated fiber orientation distribution (FOD) field and the reference FOD field was computed to examine whether the proposed LRL algorithm offered any advantage over the conventional RL algorithm at various levels of noise. Normalized mean squared error (NMSE) was also computed to measure the similarity between the true FOD field and the estimated FOD field. For MAE comparisons, the proposed LRL approach obtained the best results in most of the cases at different levels of SNR and b-values. For NMSE comparisons, the proposed LRL approach obtained the best results in most of the cases at b-value = 3000 s/mm(2), which is the recommended scheme for HARDI data acquisition. In addition, the FOD fields estimated by the proposed LRL approach in fiber-crossing regions of real data sets also showed fiber structures that agreed with accepted knowledge of these regions. The novel spherical deconvolution method for improved accuracy in investigating crossing fibers can simultaneously reduce noise and correct Rician bias. With the noise smoothed and bias corrected, this algorithm is especially suitable for estimation of fiber orientations in HARDI data. Experimental results using both synthetic and real imaging data demonstrated the success and effectiveness of the proposed LRL algorithm.

  7. A general Bayesian image reconstruction algorithm with entropy prior: Preliminary application to HST data

    NASA Astrophysics Data System (ADS)

    Nunez, Jorge; Llacer, Jorge

    1993-10-01

    This paper describes a general Bayesian iterative algorithm with entropy prior for image reconstruction. It solves the cases of both pure Poisson data and Poisson data with Gaussian readout noise. The algorithm maintains positivity of the solution; it includes case-specific prior information (default map) and flatfield corrections; it removes background and can be accelerated to be faster than the Richardson-Lucy algorithm. In order to determine the hyperparameter that balances the entropy and likelihood terms in the Bayesian approach, we have used a likelihood cross-validation technique. Cross-validation is more robust than other methods because it is less demanding in terms of the knowledge of exact data characteristics and of the point-spread function. We have used the algorithm to reconstruct successfully images obtained in different space- and ground-based imaging situations. It has been possible to recover most of the original intended capabilities of the Hubble Space Telescope (HST) wide field and planetary camera (WFPC) and faint object camera (FOC) from images obtained in their present state. Semireal simulations for the future wide field planetary camera 2 show that even after the repair of the spherical aberration problem, image reconstruction can play a key role in improving the resolution of the cameras, well beyond the design of the Hubble instruments. We also show that ground-based images can be reconstructed successfully with the algorithm. A technique which consists of dividing the CCD observations into two frames, with one-half the exposure time each, emerges as a recommended procedure for the utilization of the described algorithms. We have compared our technique with two commonly used reconstruction algorithms: the Richardson-Lucy and the Cambridge maximum entropy algorithms.

  8. Richardson-Lucy deconvolution as a general tool for combining images with complementary strengths.

    PubMed

    Ingaramo, Maria; York, Andrew G; Hoogendoorn, Eelco; Postma, Marten; Shroff, Hari; Patterson, George H

    2014-03-17

    We use Richardson-Lucy (RL) deconvolution to combine multiple images of a simulated object into a single image in the context of modern fluorescence microscopy techniques. RL deconvolution can merge images with very different point-spread functions, such as in multiview light-sheet microscopes, while preserving the best resolution information present in each image. We show that RL deconvolution is also easily applied to merge high-resolution, high-noise images with low-resolution, low-noise images, relevant when complementing conventional microscopy with localization microscopy. We also use RL deconvolution to merge images produced by different simulated illumination patterns, relevant to structured illumination microscopy (SIM) and image scanning microscopy (ISM). The quality of our ISM reconstructions is at least as good as reconstructions using standard inversion algorithms for ISM data, but our method follows a simpler recipe that requires no mathematical insight. Finally, we apply RL deconvolution to merge a series of ten images with varying signal and resolution levels. This combination is relevant to gated stimulated-emission depletion (STED) microscopy, and shows that merges of high-quality images are possible even in cases for which a non-iterative inversion algorithm is unknown.
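
    The core idea of merging several views with RL can be sketched as cycling the multiplicative correction over the views, each with its own PSF; this is a minimal sketch assuming registered, same-sized, non-negative float images, not the authors' implementation:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def multiview_rl(images, psfs, n_iter=30, eps=1e-8):
        """Single estimate updated in turn against every (image, psf) pair."""
        x = np.full(images[0].shape, float(np.mean(images[0])))
        for _ in range(n_iter):
            for img, psf in zip(images, psfs):
                hx = fftconvolve(x, psf, mode="same")
                x = x * fftconvolve(img / np.maximum(hx, eps),
                                    psf[::-1, ::-1], mode="same")
        return x
    ```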

  9. A stopping criterion to halt iterations at the Richardson-Lucy deconvolution of radiographic images

    NASA Astrophysics Data System (ADS)

    Almeida, G. L.; Silvani, M. I.; Souza, E. S.; Lopes, R. T.

    2015-07-01

    Radiographic images, like any experimentally acquired ones, are affected by spoiling agents which degrade their final quality. The degradation caused by agents of a systematic character can be reduced by some kind of treatment such as an iterative deconvolution. This approach requires two parameters, namely the system resolution and the best number of iterations needed to achieve the best final image. This work proposes a novel procedure to estimate the best number of iterations, which replaces the cumbersome visual inspection by a comparison of numbers. These numbers are deduced from the image histograms, taking into account the global difference G between them for two subsequent iterations. The developed algorithm, including a Richardson-Lucy deconvolution procedure, has been embodied in a Fortran program capable of plotting the 1st derivative of G as the processing progresses and of stopping it automatically when this derivative - within the data dispersion - reaches zero. The radiograph of a specially chosen object, acquired with thermal neutrons from the Argonauta research reactor at the Instituto de Engenharia Nuclear - CNEN, Rio de Janeiro, Brazil, has undergone this treatment with fair results.
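
    A minimal sketch of the stopping idea described above: iterate RL while tracking the global difference G between the histograms of successive estimates, and stop once G stops changing; the hand-rolled RL core, bin count and tolerance are illustrative assumptions:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def rl_histogram_stop(image, psf, max_iter=200, bins=256, tol=1e-3, eps=1e-8):
        image = np.asarray(image, dtype=float)
        psf_mirror = psf[::-1, ::-1]
        x = np.full_like(image, image.mean())
        hist_range = (float(image.min()), float(image.max()))
        prev_hist = np.histogram(x, bins=bins, range=hist_range)[0]
        prev_G = None
        for it in range(1, max_iter + 1):
            hx = fftconvolve(x, psf, mode="same")
            x = x * fftconvolve(image / np.maximum(hx, eps), psf_mirror, mode="same")
            hist = np.histogram(x, bins=bins, range=hist_range)[0]
            G = float(np.abs(hist - prev_hist).sum())   # global histogram difference
            # Stop when the change in G (its "first derivative") is negligible.
            if prev_G is not None and abs(G - prev_G) <= tol * max(prev_G, 1.0):
                return x, it
            prev_G, prev_hist = G, hist
        return x, max_iter
    ```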

  10. SSULI/SSUSI UV Tomographic Images of Large-Scale Plasma Structuring

    NASA Astrophysics Data System (ADS)

    Hei, M. A.; Budzien, S. A.; Dymond, K.; Paxton, L. J.; Schaefer, R. K.; Groves, K. M.

    2015-12-01

    We present a new technique that creates tomographic reconstructions of atmospheric ultraviolet emission based on data from the Special Sensor Ultraviolet Limb Imager (SSULI) and the Special Sensor Ultraviolet Spectrographic Imager (SSUSI), both flown on the Defense Meteorological Satellite Program (DMSP) Block 5D3 series satellites. Until now, the data from these two instruments have been used independently of each other. The new algorithm combines SSULI/SSUSI measurements of 135.6 nm emission using the tomographic technique; the resultant data product - whole-orbit reconstructions of atmospheric volume emission within the satellite orbital plane - is substantially improved over the original data sets. Tests using simulated atmospheric emission verify that the algorithm performs well in a variety of situations, including daytime, nighttime, and even in the challenging terminator regions. A comparison with ALTAIR radar data validates that the volume emission reconstructions can be inverted to yield maps of electron density. The algorithm incorporates several innovative new features, including the use of both SSULI and SSUSI data to create tomographic reconstructions, the use of an inversion algorithm (Richardson-Lucy; RL) that explicitly accounts for the Poisson statistics inherent in optical measurements, and a pseudo-diffusion based regularization scheme implemented between iterations of the RL code. The algorithm also explicitly accounts for extinction due to absorption by molecular oxygen.

  11. Optical tomography by means of regularized MLEM

    NASA Astrophysics Data System (ADS)

    Majer, Charles L.; Urbanek, Tina; Peter, Jörg

    2015-09-01

    To solve the inverse problem involved in fluorescence mediated tomography, a regularized maximum likelihood expectation maximization (MLEM) reconstruction strategy is proposed. This technique has recently been applied to reconstruct galaxy clusters in astronomy and is adopted here. The MLEM algorithm is implemented as a Richardson-Lucy (RL) scheme and includes entropic regularization and a floating default prior. Hence, the strategy is very robust against measurement noise and also avoids converging into noise patterns. Normalized Gaussian filtering with a fixed standard deviation is applied for the floating default kernel. The reconstruction strategy is investigated using the XFM-2 homogeneous mouse phantom (Caliper LifeSciences Inc., Hopkinton, MA) with known optical properties. Prior to optical imaging, X-ray CT tomographic data of the phantom were acquired to provide structural context. Phantom inclusions were fitted with various fluorochrome inclusions (Cy5.5), for which optical data at 60 projections over 360 degrees were acquired. Fluorochrome excitation was accomplished by scanning laser point illumination in transmission mode (laser opposite to camera). Following data acquisition, a 3D triangulated mesh is derived from the reconstructed CT data, which is then matched with the various optical projection images through 2D linear interpolation, correlation and Fourier transformation in order to assess translational and rotational deviations between the optical and CT imaging systems. Preliminary results indicate that the proposed regularized MLEM algorithm, when driven with a constant initial condition, yields reconstructed images that tend to be smoother in comparison to classical MLEM without regularization. Once the floating default prior is included, this bias is significantly reduced.

  12. Spherical Deconvolution of Multichannel Diffusion MRI Data with Non-Gaussian Noise Models and Spatial Regularization

    PubMed Central

    Canales-Rodríguez, Erick J.; Caruyer, Emmanuel; Aja-Fernández, Santiago; Radua, Joaquim; Yurramendi Mendizabal, Jesús M.; Iturria-Medina, Yasser; Melie-García, Lester; Alemán-Gómez, Yasser; Thiran, Jean-Philippe; Sarró, Salvador; Pomarol-Clotet, Edith; Salvador, Raymond

    2015-01-01

    Spherical deconvolution (SD) methods are widely used to estimate the intra-voxel white-matter fiber orientations from diffusion MRI data. However, while some of these methods assume a zero-mean Gaussian distribution for the underlying noise, its real distribution is known to be non-Gaussian and to depend on many factors such as the number of coils and the methodology used to combine multichannel MRI signals. Indeed, the two prevailing methods for multichannel signal combination lead to noise patterns better described by Rician and noncentral Chi distributions. Here we develop a Robust and Unbiased Model-BAsed Spherical Deconvolution (RUMBA-SD) technique, intended to deal with realistic MRI noise, based on a Richardson-Lucy (RL) algorithm adapted to Rician and noncentral Chi likelihood models. To quantify the benefits of using proper noise models, RUMBA-SD was compared with dRL-SD, a well-established method based on the RL algorithm for Gaussian noise. Another aim of the study was to quantify the impact of including a total variation (TV) spatial regularization term in the estimation framework. To do this, we developed TV spatially-regularized versions of both RUMBA-SD and dRL-SD algorithms. The evaluation was performed by comparing various quality metrics on 132 three-dimensional synthetic phantoms involving different inter-fiber angles and volume fractions, which were contaminated with noise mimicking patterns generated by data processing in multichannel scanners. The results demonstrate that the inclusion of proper likelihood models leads to an increased ability to resolve fiber crossings with smaller inter-fiber angles and to better detect non-dominant fibers. The inclusion of TV regularization dramatically improved the resolution power of both techniques. The above findings were also verified in human brain data. PMID:26470024

  13. Fast restoration approach for motion blurred image based on deconvolution under the blurring paths

    NASA Astrophysics Data System (ADS)

    Shi, Yu; Song, Jie; Hua, Xia

    2015-12-01

    For real-time motion deblurring, it is of utmost importance to achieve a higher processing speed with about the same image quality. This paper presents a fast Richardson-Lucy motion deblurring approach that removes motion blur by rotating the blurred image along the blurring path. The computational time is thereby reduced sharply by using the one-dimensional Fast Fourier Transform in a one-dimensional Richardson-Lucy method. In order to obtain accurate transformation results, an interpolation method is incorporated to fetch the gray values. Experimental results demonstrate that the proposed approach is efficient and effective at reducing motion blur along the blur paths.

  14. Beyond maximum entropy: Fractal Pixon-based image reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, Richard C.; Pina, R. K.

    1994-01-01

    We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other competing methods, including Goodness-of-Fit methods such as Least-Squares fitting and Lucy-Richardson reconstruction, as well as Maximum Entropy (ME) methods such as those embodied in the MEMSYS algorithms. Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME. Our past work has shown how uniform information content pixons can be used to develop a 'Super-ME' method in which entropy is maximized exactly. Recently, however, we have developed a superior pixon basis for the image, the Fractal Pixon Basis (FPB). Unlike the Uniform Pixon Basis (UPB) of our 'Super-ME' method, the FPB basis is selected by employing fractal dimensional concepts to assess the inherent structure in the image. The Fractal Pixon Basis results in the best image reconstructions to date, superior to both UPB and the best ME reconstructions. In this paper, we review the theory of the UPB and FPB pixon and apply our methodology to the reconstruction of far-infrared imaging of the galaxy M51. The results of our reconstruction are compared to published reconstructions of the same data using the Lucy-Richardson algorithm, the Maximum Correlation Method developed at IPAC, and the MEMSYS ME algorithms. The results show that our reconstructed image has a spatial resolution a factor of two better than the best previous methods (and a factor of 20 finer than the width of the point response function), and detects sources two orders of magnitude fainter than other methods.

  15. Multi-limit unsymmetrical MLIBD image restoration algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Cheng, Yiping; Chen, Zai-wang; Bo, Chen

    2012-11-01

    A novel multi-limit unsymmetrical iterative blind deconvolution (MLIBD) algorithm was presented to enhance the performance of adaptive optics image restoration. The algorithm enhances the reliability of iterative blind deconvolution by introducing a bandwidth limit into the frequency domain of the point spread function (PSF), and adopts dynamic estimation of the PSF support region to improve the convergence speed. The unsymmetrical factor is computed automatically to improve its adaptivity. Image deconvolution experiments comparing Richardson-Lucy IBD and MLIBD were carried out, and the results indicate that the iteration number is reduced by 22.4% and the peak signal-to-noise ratio is improved by 10.18 dB with the MLIBD method. The performance of the MLIBD algorithm is outstanding in the restoration of the FK5-857 adaptive optics images and the double-star adaptive optics images.

  16. SU-E-I-08: Investigation of Deconvolution Methods for Blocker-Based CBCT Scatter Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, C; Jin, M; Ouyang, L

    2015-06-15

    Purpose: To investigate whether deconvolution methods can improve the scatter estimation under different blurring and noise conditions for blocker-based scatter correction methods for cone-beam X-ray computed tomography (CBCT). Methods: An "ideal" projection image with scatter was first simulated for blocker-based CBCT data acquisition by assuming no blurring effect and no noise. The ideal image was then convolved with long-tail point spread functions (PSF) with different widths to mimic the blurring effect from the finite focal spot and detector response. Different levels of noise were also added. Three deconvolution methods, 1) inverse filtering, 2) Wiener, and 3) Richardson-Lucy, were used to recover the scatter signal in the blocked region. The root mean square error (RMSE) of estimated scatter serves as a quantitative measure for the performance of different methods under different blurring and noise conditions. Results: Due to the blurring effect, the scatter signal in the blocked region is contaminated by the primary signal in the unblocked region. The direct use of the signal in the blocked region to estimate scatter (“direct method”) leads to large RMSE values, which increase with the increased width of PSF and increased noise. The inverse filtering is very sensitive to noise and practically useless. The Wiener and Richardson-Lucy deconvolution methods significantly improve scatter estimation compared to the direct method. For a typical medium PSF and medium noise condition, both methods (∼20 RMSE) can achieve 4-fold improvement over the direct method (∼80 RMSE). The Wiener method deals better with large noise and Richardson-Lucy works better on wide PSF. Conclusion: We investigated several deconvolution methods to recover the scatter signal in the blocked region for blocker-based scatter correction for CBCT. Our simulation results demonstrate that Wiener and Richardson-Lucy deconvolution can significantly improve the scatter estimation compared to the direct method.

  17. High-resolution reconstruction for terahertz imaging.

    PubMed

    Xu, Li-Min; Fan, Wen-Hui; Liu, Jia

    2014-11-20

    We present a high-resolution (HR) reconstruction model and algorithms for terahertz imaging, taking advantage of super-resolution methodology and algorithms. The algorithms used include a projection onto convex sets approach, an iterative backprojection approach, Lucy-Richardson iteration, and 2D wavelet decomposition reconstruction. Using the first two HR reconstruction methods, we successfully obtain HR terahertz images with improved definition and lower noise from four low-resolution (LR) 22×24 terahertz images taken from our homemade THz-TDS system under the same experimental conditions with a 1.0 mm pixel size. Using the last two HR reconstruction methods, we transform one relatively LR terahertz image into an HR terahertz image with decreased noise. This indicates potential application of HR reconstruction methods in terahertz imaging with pulsed and continuous wave terahertz sources.

  18. Dwell time method based on Richardson-Lucy algorithm

    NASA Astrophysics Data System (ADS)

    Jiang, Bo; Ma, Zhen

    2017-10-01

    When the noise in the surface error data given by the interferometer has no effect on the iterative convergence of the RL algorithm, the RL algorithm for deconvolution in image restoration can be applied to the CCOS model to solve for the dwell time. Extending the initial error function at the edges and denoising the surface error data given by the interferometer makes the result more usable. The simulation results show a final residual error of 10.7912 nm in PV and 0.4305 nm in RMS, when the initial surface error is 107.2414 nm in PV and 15.1331 nm in RMS. The convergence rates of the PV and RMS values can reach up to 89.9% and 96.0%, respectively. The algorithm satisfies the requirements of fabrication very well.

  19. Crowded field photometry with deconvolved images.

    NASA Astrophysics Data System (ADS)

    Linde, P.; Spännare, S.

    A local implementation of the Lucy-Richardson algorithm has been used to deconvolve a set of crowded stellar field images. The effects of deconvolution on detection limits as well as on photometric and astrometric properties have been investigated as a function of the number of deconvolution iterations. Results show that deconvolution improves detection of faint stars, although artifacts are also found. Deconvolution provides more stars measurable without significant degradation of positional accuracy. The photometric precision is affected by deconvolution in several ways. Errors due to unresolved images are notably reduced, while flux redistribution between stars and background increases the errors.

  20. Reconstruction of noisy and blurred images using blur kernel

    NASA Astrophysics Data System (ADS)

    Ellappan, Vijayan; Chopra, Vishal

    2017-11-01

    Blur is common in many digital images. It can be caused by motion of the camera or of objects in the scene. In this work we propose a new method for deblurring images that uses sparse representation to identify the blur kernel. By analyzing the image coordinates using a coarse-to-fine scheme, we obtain the kernel-based image coordinates and, from that observation, the motion angle of the shaken or blurred image. We then calculate the length of the motion kernel using the Radon transform and a Fourier-based length calculation for the image, and we use the Lucy-Richardson algorithm, a non-blind deconvolution (NBID) algorithm, to obtain a cleaner and less noisy output image. All these operations are performed in the MATLAB IDE.

  1. Studies of EGRET sources with a novel image restoration technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tajima, Hiroyasu; Cohen-Tanugi, Johann; Kamae, Tuneyoshi

    2007-07-12

    We have developed an image restoration technique based on the Richardson-Lucy algorithm optimized for GLAST-LAT image analysis. Our algorithm is original since it utilizes the PSF (point spread function) that is calculated for each event. This is critical for EGRET and GLAST-LAT image analysis since the PSF depends on the energy and angle of incident gamma-rays and varies by more than one order of magnitude. EGRET and GLAST-LAT image analysis also faces Poisson noise due to low photon statistics. Our technique incorporates wavelet filtering to minimize noise effects. We present studies of EGRET sources using this novel image restoration technique for possible identification of extended gamma-ray sources.

  2. A low-cost vector processor boosting compute-intensive image processing operations

    NASA Technical Reports Server (NTRS)

    Adorf, Hans-Martin

    1992-01-01

    Low-cost vector processing (VP) is within reach of everyone seriously engaged in scientific computing. The advent of affordable add-on VP-boards for standard workstations complemented by mathematical/statistical libraries is beginning to impact compute-intensive tasks such as image processing. A case in point is the restoration of distorted images from the Hubble Space Telescope. A low-cost implementation is presented of the standard Tarasko-Richardson-Lucy restoration algorithm on an Intel i860-based VP-board which is seamlessly interfaced to a commercial, interactive image processing system. First experience is reported (including some benchmarks for standalone FFTs) and some conclusions are drawn.

  3. Resolution of Transverse Electron Beam Measurements using Optical Transition Radiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ischebeck, Rasmus; Decker, Franz-Josef; Hogan, Mark

    2005-06-22

    In the plasma wakefield acceleration experiment E-167, optical transition radiation is used to measure the transverse profile of the electron bunches before and after the plasma acceleration. The distribution of the electric field from a single electron does not give a point-like distribution on the detector, but has a certain extension. Additionally, the resolution of the imaging system is affected by aberrations. The transverse profile of the bunch is thus convolved with a point spread function (PSF). Algorithms that deconvolve the image can help to improve the resolution. Imaged test patterns are used to determine the modulation transfer function of the lens. From this, the PSF can be reconstructed. The Lucy-Richardson algorithm is used to deconvolute this PSF from test images.

  4. Kinematic model for the space-variant image motion of star sensors under dynamical conditions

    NASA Astrophysics Data System (ADS)

    Liu, Chao-Shan; Hu, Lai-Hong; Liu, Guang-Bin; Yang, Bo; Li, Ai-Jun

    2015-06-01

    A kinematic description of a star spot in the focal plane is presented for star sensors under dynamical conditions, which involves all necessary parameters such as the image motion, velocity, and attitude parameters of the vehicle. Stars at different locations of the focal plane correspond to slightly different orientations and extents of motion blur, which characterize the space-variant point spread function. Finally, the image motion, the energy distribution, and centroid extraction are numerically investigated using the kinematic model under dynamic conditions. A centroid error of less than 0.002 pixel over eight successive iterations is used as the termination criterion for the Richardson-Lucy deconvolution algorithm. The kinematic model of a star sensor is useful for evaluating compensation algorithms for motion-blurred images.
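
    As a rough illustration of the stopping rule quoted above (terminate once the extracted centroid moves by less than 0.002 pixel over eight successive iterations), the following Python sketch wraps a plain Richardson-Lucy loop around such a criterion; the helper names, the flat initial estimate and the small damping constant are assumptions, not the authors' implementation.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve
    from scipy.ndimage import center_of_mass

    def rl_with_centroid_stop(blurred, psf, max_iter=200, tol=0.002, window=8):
        """Richardson-Lucy iterations stopped once the centroid estimate is stable."""
        psf = psf / psf.sum()
        psf_mirror = psf[::-1, ::-1]
        estimate = np.full(blurred.shape, blurred.mean(), dtype=float)
        previous = np.array(center_of_mass(blurred))
        stable = 0
        for _ in range(max_iter):
            reblurred = fftconvolve(estimate, psf, mode="same")
            estimate *= fftconvolve(blurred / (reblurred + 1e-12), psf_mirror, mode="same")
            centroid = np.array(center_of_mass(estimate))
            stable = stable + 1 if np.linalg.norm(centroid - previous) < tol else 0
            previous = centroid
            if stable >= window:   # centroid shift below tol for `window` iterations in a row
                break
        return estimate
    ```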

  5. LCD motion blur reduction: a signal processing approach.

    PubMed

    Har-Noy, Shay; Nguyen, Truong Q

    2008-02-01

    Liquid crystal displays (LCDs) have shown great promise in the consumer market for their use as both computer and television displays. Despite their many advantages, the inherent sample-and-hold nature of LCD image formation results in a phenomenon known as motion blur. In this work, we develop a method for motion blur reduction using the Richardson-Lucy deconvolution algorithm in concert with motion vector information from the scene. We further refine our approach by introducing a perceptual significance metric that allows us to weight the amount of processing performed on different regions in the image. In addition, we analyze the role of motion vector errors in the quality of our resulting image. Perceptual tests indicate that our algorithm reduces the amount of perceivable motion blur in LCDs.

  6. Real-time blind image deconvolution based on coordinated framework of FPGA and DSP

    NASA Astrophysics Data System (ADS)

    Wang, Ze; Li, Hang; Zhou, Hua; Liu, Hongjun

    2015-10-01

    Image restoration plays a crucial role in several important application domains. As algorithms become more complex and computational requirements grow, the need for accelerated implementations has risen significantly. In this paper, we focus on an efficient real-time image processing system for blind iterative deconvolution by means of the Richardson-Lucy (R-L) algorithm. We study the characteristics of the algorithm, and an image restoration processing system based on the coordinated framework of FPGA and DSP (CoFD) is presented. Single-precision floating-point processing units with small-scale cascades and dedicated FFT/IFFT processing modules are adopted to guarantee the accuracy of the processing. Finally, comparative experiments are carried out. The system can process a blurred image of 128×128 pixels within 32 milliseconds and is up to three or four times faster than traditional multi-DSP systems.

  7. A new approach to blind deconvolution of astronomical images

    NASA Astrophysics Data System (ADS)

    Vorontsov, S. V.; Jefferies, S. M.

    2017-05-01

    We readdress the strategy of finding approximate regularized solutions to the blind deconvolution problem, when both the object and the point-spread function (PSF) have finite support. Our approach consists in addressing fixed points of an iteration in which both the object x and the PSF y are approximated in an alternating manner, discarding the previous approximation for x when updating x (similarly for y), and considering the resultant fixed points as candidates for a sensible solution. Alternating approximations are performed by truncated iterative least-squares descents. The number of descents in the object- and in the PSF-space play a role of two regularization parameters. Selection of appropriate fixed points (which may not be unique) is performed by relaxing the regularization gradually, using the previous fixed point as an initial guess for finding the next one, which brings an approximation of better spatial resolution. We report the results of artificial experiments with noise-free data, targeted at examining the potential capability of the technique to deconvolve images of high complexity. We also show the results obtained with two sets of satellite images acquired using ground-based telescopes with and without adaptive optics compensation. The new approach brings much better results when compared with an alternating minimization technique based on positivity-constrained conjugate gradients, where the iterations stagnate when addressing data of high complexity. In the alternating-approximation step, we examine the performance of three different non-blind iterative deconvolution algorithms. The best results are provided by the non-negativity-constrained successive over-relaxation technique (+SOR) supplemented with an adaptive scheduling of the relaxation parameter. Results of comparable quality are obtained with steepest descents modified by imposing the non-negativity constraint, at the expense of higher numerical costs. The Richardson-Lucy (or expectation-maximization) algorithm fails to locate stable fixed points in our experiments, due apparently to inappropriate regularization properties.

  8. Accurate estimation of motion blur parameters in noisy remote sensing image

    NASA Astrophysics Data System (ADS)

    Shi, Xueyan; Wang, Lin; Shao, Xiaopeng; Wang, Huilin; Tao, Zhong

    2015-05-01

    The relative motion between a remote sensing satellite sensor and the observed objects is one of the most common causes of remote sensing image degradation. It seriously weakens image data interpretation and information extraction. In practice, the point spread function (PSF) should be estimated first for image restoration. Identifying the motion blur direction and length accurately is crucial for estimating the PSF and restoring the image precisely. In general, the regular light-and-dark stripes in the spectrum can be employed to obtain these parameters using the Radon transform. However, serious noise in actual remote sensing images often makes the stripes indistinct, so the parameters become difficult to calculate and the resulting error is relatively large. In this paper, an improved motion blur parameter identification method for noisy remote sensing images is proposed to solve this problem. The spectral characteristics of noisy remote sensing images are analyzed first. An interactive image segmentation method based on graph theory, called GrabCut, is adopted to effectively extract the edge of the light center in the spectrum. The motion blur direction is estimated by applying the Radon transform to the segmentation result. In order to reduce random error, a method based on whole-column statistics is used when calculating the blur length. Finally, the Lucy-Richardson algorithm is applied to restore remote sensing images of the moon after estimating the blur parameters. The experimental results verify the effectiveness and robustness of our algorithm.

  9. Research on adaptive optics image restoration algorithm based on improved joint maximum a posteriori method

    NASA Astrophysics Data System (ADS)

    Zhang, Lijuan; Li, Yang; Wang, Junnan; Liu, Ying

    2018-03-01

    In this paper, we propose a point spread function (PSF) reconstruction method and a joint maximum a posteriori (JMAP) estimation method for adaptive optics image restoration. Using the JMAP method as the basic principle, we establish the joint log-likelihood function of multi-frame adaptive optics (AO) images based on Gaussian image noise models. To begin with, combining the observing conditions and the AO system characteristics, a predicted PSF model for the wavefront phase effect is developed; then, we derive iterative solution formulas for the AO image based on our proposed algorithm and describe the implementation of the multi-frame AO image joint deconvolution method. We conduct a series of experiments on simulated and real degraded AO images to evaluate our proposed algorithm. Compared with the Wiener iterative blind deconvolution (Wiener-IBD) algorithm and the Richardson-Lucy IBD algorithm, our algorithm achieves better restoration, including higher peak signal-to-noise ratio (PSNR) and Laplacian sum (LS) values. The results have practical value for actual AO image restoration.

  10. Reducing the spatial resolution range of neutron radiographs cast by thick objects

    NASA Astrophysics Data System (ADS)

    Almeida, G. L.; Silvani, M. I.; Souza, E. S.; Lopes, R. T.

    2017-11-01

    The quality of a neutron radiograph is strongly dependent upon the features of the acquisition system. Most of them, such as detector resolution, electronic noise and statistical fluctuation, can hardly be improved. Yet, a main parameter ruling the image spatial resolution, namely the L/D ratio of the system, can be increased simply by lengthening the source-detector clearance. Such an option, however, may not be feasible due to the decrease in neutron flux or to engineering constraints. Under these circumstances, a radiograph can only be improved by some kind of post-acquisition procedure capable of retrieving, at least partially, the information concealed by the degradation process. Since the spoiling agent tied to the L/D ratio has a systematic character, its impact can be reduced by an unfolding procedure such as the Richardson-Lucy algorithm. However, that agent should be fully characterized and furnished to the algorithm as a point spread function (PSF). A main drawback of unfolding algorithms like Richardson-Lucy is that the PSF is fixed, i.e., it assumes a certain constant image spatial resolution, rather than a variable one as actually occurs for thick objects. This work presents a methodology to minimize this difficulty by making all planes of the inspected object cast a resolution within the shorter gap between the object central plane and the detector. The image can then be unfolded with a lower resolution within a tighter range, yielding better quality. The process is performed with two radiographs, one of which is acquired with the object turned by 180° about its vertical axis with respect to the other. After mirroring one of them about its vertical axis, the images are added. As the resolution increases linearly with the object-detector gap, it always remains lower than that of the central plane. Therefore, the overall resolution of the composite radiograph is enhanced. A further improvement can then be achieved through an efficient unfolding, since the object has been virtually shrunk along the neutron path.
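
    A minimal sketch of the compositing step described above, assuming the two radiographs are already registered NumPy arrays of the same shape; whether the mirrored pair is summed or averaged before the Richardson-Lucy unfolding is an implementation detail assumed here, not taken from the paper.

    ```python
    import numpy as np

    def composite_radiograph(radiograph_0deg, radiograph_180deg):
        """Mirror the 180-degree acquisition about its vertical axis and average it
        with the 0-degree acquisition, so every object plane contributes once from each side."""
        mirrored = np.fliplr(radiograph_180deg)   # mirror about the vertical axis
        return 0.5 * (radiograph_0deg + mirrored)
    ```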

  11. Deconvolution of interferometric data using interior point iterative algorithms

    NASA Astrophysics Data System (ADS)

    Theys, C.; Lantéri, H.; Aime, C.

    2016-09-01

    We address the problem of deconvolution of astronomical images that could be obtained with future large interferometers in space. The presentation is made in two complementary parts. The first part gives an introduction to the image deconvolution with linear and nonlinear algorithms. The emphasis is made on nonlinear iterative algorithms that verify the constraints of non-negativity and constant flux. The Richardson-Lucy algorithm appears there as a special case for photon counting conditions. More generally, the algorithm published recently by Lanteri et al. (2015) is based on scale invariant divergences without assumption on the statistic model of the data. The two proposed algorithms are interior-point algorithms, the latter being more efficient in terms of speed of calculation. These algorithms are applied to the deconvolution of simulated images corresponding to an interferometric system of 16 diluted telescopes in space. Two non-redundant configurations, one disposed around a circle and the other on an hexagonal lattice, are compared for their effectiveness on a simple astronomical object. The comparison is made in the direct and Fourier spaces. Raw "dirty" images have many artifacts due to replicas of the original object. Linear methods cannot remove these replicas while iterative methods clearly show their efficacy in these examples.

  12. Reconstructing Images in Astrophysics, an Inverse Problem Point of View

    NASA Astrophysics Data System (ADS)

    Theys, Céline; Aime, Claude

    2016-04-01

    After a short introduction, a first section provides a brief tutorial on the physics of image formation and its detection in the presence of noise. The rest of the chapter focuses on the resolution of the inverse problem. In the general form, the observed image is given by a Fredholm integral containing the object and the response of the instrument. Its inversion is formulated using linear algebra. The discretized object and image of size N × N are stored in vectors x and y of length N², which are related to one another by the linear relation y = Hx, where H is a matrix of size N² × N² that contains the elements of the instrument response. This matrix has particular properties for a shift-invariant point spread function, for which the Fredholm integral reduces to a convolution. The presence of noise complicates the resolution of the problem. It is shown that minimum-variance unbiased solutions fail to give good results because H is badly conditioned, leading to the need for a regularized solution. The relative strength of regularization versus fidelity to the data is discussed and briefly illustrated on an example using L-curves. The origins and construction of iterative algorithms are explained, and illustrations are given for the algorithms ISRA, for Gaussian additive noise, and Richardson-Lucy, for a pure photodetected image (Poisson statistics). In this latter case, the way the algorithm modifies the spatial frequencies of the reconstructed image is illustrated for a diluted array of apertures in space. Throughout the chapter, the inverse problem is formulated in matrix form for the general case of the Fredholm integral, while numerical illustrations are limited to the deconvolution case, allowing the use of discrete Fourier transforms because of computer limitations.
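
    For reference, the two iterations mentioned above can be written in the same y = Hx notation; these are the standard multiplicative forms, not expressions quoted from the chapter. For Gaussian additive noise, the ISRA update is

    $$ x_j^{(k+1)} = x_j^{(k)} \, \frac{(H^{T} y)_j}{(H^{T} H x^{(k)})_j}, $$

    while for Poisson statistics the Richardson-Lucy iteration reads

    $$ x_j^{(k+1)} = \frac{x_j^{(k)}}{\sum_i H_{ij}} \sum_i H_{ij} \, \frac{y_i}{(H x^{(k)})_i}. $$

    Both updates keep the estimate non-negative when started from a positive initial guess, and the Richardson-Lucy form conserves the total flux when the columns of H are normalized.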

  13. Application of the Lucy–Richardson Deconvolution Procedure to High Resolution Photoemission Spectra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rameau, J.; Yang, H.-B.; Johnson, P.D.

    2010-07-01

    Angle-resolved photoemission has developed into one of the leading probes of the electronic structure and associated dynamics of condensed matter systems. As with any experimental technique the ability to resolve features in the spectra is ultimately limited by the resolution of the instrumentation used in the measurement. Previously developed for sharpening astronomical images, the Lucy-Richardson deconvolution technique proves to be a useful tool for improving the photoemission spectra obtained in modern hemispherical electron spectrometers where the photoelectron spectrum is displayed as a 2D image in energy and momentum space.

  14. Scanning two-photon microscopy with upconverting lanthanide nanoparticles via Richardson-Lucy deconvolution.

    PubMed

    Gainer, Christian F; Utzinger, Urs; Romanowski, Marek

    2012-07-01

    The use of upconverting lanthanide nanoparticles in fast-scanning microscopy is hindered by a long luminescence decay time, which greatly blurs images acquired in a nondescanned mode. We demonstrate herein an image processing method based on Richardson-Lucy deconvolution that mitigates the detrimental effects of their luminescence lifetime. This technique generates images with lateral resolution on par with the system's performance, ∼1.2 μm, while maintaining an axial resolution of 5 μm or better at a scan rate comparable with traditional two-photon microscopy. Remarkably, this can be accomplished with near-infrared excitation power densities of 850 W/cm², several orders of magnitude below those used in two-photon imaging with molecular fluorophores. By way of illustration, we introduce the use of lipids to coat and functionalize these nanoparticles, rendering them water dispersible and readily conjugated to biologically relevant ligands, in this case epidermal growth factor receptor antibody. This deconvolution technique combined with the functionalized nanoparticles will enable three-dimensional functional tissue imaging at exceptionally low excitation power densities.

  15. HST image restoration: A comparison of pre- and post-servicing mission results

    NASA Technical Reports Server (NTRS)

    Hanisch, R. J.; Mo, J.

    1992-01-01

    A variety of image restoration techniques (e.g., Wiener filter, Lucy-Richardson, MEM) have been applied quite successfully to the aberrated HST images. The HST servicing mission (scheduled for late 1993 or early 1994) will install a corrective optics system (COSTAR) for the Faint Object Camera and spectrographs and replace the Wide Field/Planetary Camera with a second generation instrument (WF/PC-II) having its own corrective elements. The image quality is expected to be improved substantially with these new instruments. What then is the role of image restoration for the HST in the long term? Through a series of numerical experiments using model point-spread functions for both aberrated and unaberrated optics, we find that substantial improvements in image resolution can be obtained for post-servicing mission data using the same or similar algorithms as being employed now to correct aberrated images. Included in our investigations are studies of the photometric integrity of the restoration algorithms and explicit models for HST pointing errors (spacecraft jitter).

  16. On the Spatial Distribution of High Velocity Al-26 Near the Galactic Center

    NASA Technical Reports Server (NTRS)

    Sturner, Steven J.

    2000-01-01

    We present results of simulations of the distribution of 1809 keV radiation from the decay of Al-26 in the Galaxy. Recent observations of this emission line using the Gamma Ray Imaging Spectrometer (GRIS) have indicated that the bulk of the Al-26 must have a velocity of approx. 500 km/s. We have previously shown that a velocity this large could be maintained over the 10^6 year lifetime of the Al-26 if it is trapped in dust grains that are reaccelerated periodically in the ISM. Here we investigate whether a dust grain velocity of approx. 500 km/s will produce a distribution of 1809 keV emission in latitude that is consistent with the narrow distribution seen by COMPTEL. We find that dust grain velocities in the range 275 - 1000 km/s are able to reproduce the COMPTEL 1809 keV emission maps reconstructed using the Richardson-Lucy and Maximum Entropy image reconstruction methods, while the emission map reconstructed using the Multiresolution Regularized Expectation Maximization algorithm is not well fit by any of our models. The Al-26 production rate needed to reproduce the observed 1809 keV intensity yields a Galactic mass of Al-26 of approx. 1.5 - 2 solar masses, which is in good agreement with both other observations and theoretical production rates.

  17. New deconvolution method for microscopic images based on the continuous Gaussian radial basis function interpolation model.

    PubMed

    Chen, Zhaoxue; Chen, Hao

    2014-01-01

    A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed in the same continuous GRBF model, so that image degradation is simplified to the convolution of two continuous Gaussian functions, and image deconvolution is converted to calculating the weighted coefficients of the two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of the restored images. To overcome the drawback of long computation times, graphics processing unit multithreading or an increased spacing of the control points is adopted, respectively, to speed up the implementation of the GRBF method. The experiments show that, based on the continuous GRBF model, image deconvolution can be efficiently implemented by the method, which also has considerable reference value for the study of three-dimensional microscopic image deconvolution.
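
    The simplification invoked above rests on the standard identity that the convolution of two Gaussians is again a Gaussian whose variances add; per axis,

    $$ G_{\sigma_1} * G_{\sigma_2} = G_{\sqrt{\sigma_1^{2} + \sigma_2^{2}}}, $$

    so an object expressed in a continuous GRBF basis of width \(\sigma\), once convolved with a Gaussian PSF of width \(\sigma_{\mathrm{PSF}}\), stays within the same family with each basis function inflated to width \(\sqrt{\sigma^{2} + \sigma_{\mathrm{PSF}}^{2}}\). This is a general property of Gaussians, stated here for context rather than quoted from the paper.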

  18. Reconstruction of the two-dimensional gravitational potential of galaxy clusters from X-ray and Sunyaev-Zel'dovich measurements

    NASA Astrophysics Data System (ADS)

    Tchernin, C.; Bartelmann, M.; Huber, K.; Dekel, A.; Hurier, G.; Majer, C. L.; Meyer, S.; Zinger, E.; Eckert, D.; Meneghetti, M.; Merten, J.

    2018-06-01

    Context. The mass of galaxy clusters is not a direct observable, nonetheless it is commonly used to probe cosmological models. Based on the combination of all main cluster observables, that is, the X-ray emission, the thermal Sunyaev-Zel'dovich (SZ) signal, the velocity dispersion of the cluster galaxies, and gravitational lensing, the gravitational potential of galaxy clusters can be jointly reconstructed. Aims: We derive the two main ingredients required for this joint reconstruction: the potentials individually reconstructed from the observables and their covariance matrices, which act as a weight in the joint reconstruction. We show here the method to derive these quantities. The result of the joint reconstruction applied to a real cluster will be discussed in a forthcoming paper. Methods: We apply the Richardson-Lucy deprojection algorithm to data on a two-dimensional (2D) grid. We first test the 2D deprojection algorithm on a β-profile. Assuming hydrostatic equilibrium, we further reconstruct the gravitational potential of a simulated galaxy cluster based on synthetic SZ and X-ray data. We then reconstruct the projected gravitational potential of the massive and dynamically active cluster Abell 2142, based on the X-ray observations collected with XMM-Newton and the SZ observations from the Planck satellite. Finally, we compute the covariance matrix of the projected reconstructed potential of the cluster Abell 2142 based on the X-ray measurements collected with XMM-Newton. Results: The gravitational potentials of the simulated cluster recovered from synthetic X-ray and SZ data are consistent, even though the potential reconstructed from X-rays shows larger deviations from the true potential. Regarding Abell 2142, the projected gravitational cluster potentials recovered from SZ and X-ray data reproduce well the projected potential inferred from gravitational-lensing observations. We also observe that the covariance matrix of the potential for Abell 2142 reconstructed from XMM-Newton data sensitively depends on the resolution of the deprojected grid and on the smoothing scale used in the deprojection. Conclusions: We show that the Richardson-Lucy deprojection method can be effectively applied on a grid and that the projected potential is well recovered from real and simulated data based on X-ray and SZ signal. The comparison between the reconstructed potentials from the different observables provides additional information on the validity of the assumptions as function of the projected radius.

  19. Beyond maximum entropy: Fractal pixon-based image reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, R. C.; Pina, R. K.

    1994-01-01

    We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other methods, including Goodness-of-Fit (e.g. Least-Squares and Lucy-Richardson) and Maximum Entropy (ME). Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME.

  20. Blind deconvolution with principal components analysis for wide-field and small-aperture telescopes

    NASA Astrophysics Data System (ADS)

    Jia, Peng; Sun, Rongyu; Wang, Weinan; Cai, Dongmei; Liu, Huigen

    2017-09-01

    Telescopes with a wide field of view (greater than 1°) and small apertures (less than 2 m) are workhorses for observations such as sky surveys and fast-moving object detection, and play an important role in time-domain astronomy. However, images captured by these telescopes are contaminated by optical system aberrations, atmospheric turbulence, tracking errors and wind shear. To increase the quality of images and maximize their scientific output, we propose a new blind deconvolution algorithm based on statistical properties of the point spread functions (PSFs) of these telescopes. In this new algorithm, we first construct the PSF feature space through principal component analysis, and then classify PSFs from a different position and time using a self-organizing map. According to the classification results, we divide images of the same PSF types and select these PSFs to construct a prior PSF. The prior PSF is then used to restore these images. To investigate the improvement that this algorithm provides for data reduction, we process images of space debris captured by our small-aperture wide-field telescopes. Comparing the reduced results of the original images and the images processed with the standard Richardson-Lucy method, our method shows a promising improvement in astrometry accuracy.
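
    To make the PSF-grouping idea concrete, here is a hedged Python sketch using scikit-learn; the paper classifies PSFs with a self-organizing map, for which ordinary k-means is substituted below purely for brevity, and all names and parameter values are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    def group_psfs(psf_stamps, n_components=10, n_groups=5):
        """Project flattened PSF stamps into a PCA feature space, cluster them,
        and return one mean 'prior' PSF per cluster."""
        X = np.asarray([stamp.ravel() / stamp.sum() for stamp in psf_stamps])
        features = PCA(n_components=n_components).fit_transform(X)
        labels = KMeans(n_clusters=n_groups, n_init=10).fit_predict(features)
        shape = psf_stamps[0].shape
        priors = [X[labels == k].mean(axis=0).reshape(shape) for k in range(n_groups)]
        return labels, priors

    # Images sharing a label would then be deconvolved with the corresponding prior PSF.
    ```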

  1. What do you gain from deconvolution? - Observing faint galaxies with the Hubble Space Telescope Wide Field Camera

    NASA Technical Reports Server (NTRS)

    Schade, David J.; Elson, Rebecca A. W.

    1993-01-01

    We describe experiments with deconvolutions of simulations of deep HST Wide Field Camera images containing faint, compact galaxies to determine under what circumstances there is a quantitative advantage to image deconvolution, and explore whether it is (1) helpful for distinguishing between stars and compact galaxies, or between spiral and elliptical galaxies, and whether it (2) improves the accuracy with which characteristic radii and integrated magnitudes may be determined. The Maximum Entropy and Richardson-Lucy deconvolution algorithms give the same results. For medium and low S/N images, deconvolution does not significantly improve our ability to distinguish between faint stars and compact galaxies, nor between spiral and elliptical galaxies. Measurements from both raw and deconvolved images are biased and must be corrected; it is easier to quantify and remove the biases for cases that have not been deconvolved. We find no benefit from deconvolution for measuring luminosity profiles, but these results are limited to low S/N images of very compact (often undersampled) galaxies.

  2. End-to-end performance analysis using engineering confidence models and a ground processor prototype

    NASA Astrophysics Data System (ADS)

    Kruse, Klaus-Werner; Sauer, Maximilian; Jäger, Thomas; Herzog, Alexandra; Schmitt, Michael; Huchler, Markus; Wallace, Kotska; Eisinger, Michael; Heliere, Arnaud; Lefebvre, Alain; Maher, Mat; Chang, Mark; Phillips, Tracy; Knight, Steve; de Goeij, Bryan T. G.; van der Knaap, Frits; Van't Hof, Adriaan

    2015-10-01

    The European Space Agency (ESA) and the Japan Aerospace Exploration Agency (JAXA) are co-operating to develop the EarthCARE satellite mission with the fundamental objective of improving the understanding of the processes involving clouds, aerosols and radiation in the Earth's atmosphere. The EarthCARE Multispectral Imager (MSI) is relatively compact for a space borne imager. As a consequence, the immediate point-spread function (PSF) of the instrument will be mainly determined by the diffraction caused by the relatively small optical aperture. In order to still achieve a high contrast image, de-convolution processing is applied to remove the impact of diffraction on the PSF. A Lucy-Richardson algorithm has been chosen for this purpose. This paper will describe the system setup and the necessary data pre-processing and post-processing steps applied in order to compare the end-to-end image quality with the L1b performance required by the science community.

  3. Deconvolving instrumental and intrinsic broadening in core-shell x-ray spectroscopies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fister, T. T.; Seidler, G. T.; Rehr, J. J.

    2007-05-01

    Intrinsic and experimental mechanisms frequently lead to broadening of spectral features in core-shell spectroscopies. For example, intrinsic broadening occurs in x-ray absorption spectroscopy (XAS) measurements of heavy elements where the core-hole lifetime is very short. On the other hand, nonresonant x-ray Raman scattering (XRS) and other energy loss measurements are more limited by instrumental resolution. Here, we demonstrate that the Richardson-Lucy (RL) iterative algorithm provides a robust method for deconvolving instrumental and intrinsic resolutions from typical XAS and XRS data. For the K-edge XAS of Ag, we find nearly complete removal of ≈9.3 eV full width at half maximum broadening from the combined effects of the short core-hole lifetime and instrumental resolution. We are also able to remove nearly all instrumental broadening in an XRS measurement of diamond, with the resulting improved spectrum comparing favorably with prior soft x-ray XAS measurements. We present a practical methodology for implementing the RL algorithm in these problems, emphasizing the importance of testing for stability of the deconvolution process against noise amplification, perturbations in the initial spectra, and uncertainties in the core-hole lifetime.
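
    For readers who want to experiment with this kind of spectral sharpening, the following is a minimal one-dimensional Richardson-Lucy sketch in Python; the kernel is assumed to be the combined lifetime-plus-instrumental broadening sampled on the same energy grid, and, as the record stresses, the iteration count should be chosen by testing stability against noise amplification. This is an illustrative sketch, not the authors' implementation.

    ```python
    import numpy as np

    def rl_deconvolve_1d(measured, kernel, n_iter=50):
        """One-dimensional Richardson-Lucy deconvolution of a measured spectrum."""
        kernel = kernel / kernel.sum()
        mirror = kernel[::-1]
        estimate = np.full_like(measured, measured.mean(), dtype=float)
        for _ in range(n_iter):
            reblurred = np.convolve(estimate, kernel, mode="same")
            estimate *= np.convolve(measured / (reblurred + 1e-12), mirror, mode="same")
        return estimate
    ```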

  4. Image deblurring by motion estimation for remote sensing

    NASA Astrophysics Data System (ADS)

    Chen, Yueting; Wu, Jiagu; Xu, Zhihai; Li, Qi; Feng, Huajun

    2010-08-01

    The imagery resolution of imaging systems for remote sensing is often limited by image degradation resulting from unwanted motion disturbances of the platform during image exposures. Since the form of the platform vibration can be arbitrary, the lack of a priori knowledge about the motion function (the PSF) suggests blind restoration approaches. A deblurring method which combines motion estimation and image deconvolution, for both area-array and TDI remote sensing, is proposed in this paper. The image motion estimation is accomplished by an auxiliary high-speed detector and a sub-pixel correlation algorithm. The PSF is then reconstructed from the estimated image motion vectors. Finally, the clear image can be recovered by the Richardson-Lucy (RL) iterative deconvolution algorithm from the blurred image of the prime camera with the constructed PSF. Image deconvolution for the area-array detector is direct, while for the TDI CCD detector an integral distortion compensation step and a row-by-row deconvolution scheme are applied. Theoretical analyses and experimental results show that the performance of the proposed concept is convincing. Blurred and distorted images can be properly recovered, not only for visual observation but also with a significant improvement in objective evaluation metrics.

  5. Ionospheric-thermospheric UV tomography: 3. A multisensor technique for creating full-orbit reconstructions of atmospheric UV emission

    NASA Astrophysics Data System (ADS)

    Hei, Matthew A.; Budzien, Scott A.; Dymond, Kenneth F.; Nicholas, Andrew C.; Paxton, Larry J.; Schaefer, Robert K.; Groves, Keith M.

    2017-07-01

    We present the Volume Emission Rate Tomography (VERT) technique for inverting satellite-based, multisensor limb and nadir measurements of atmospheric ultraviolet emission to create whole-orbit reconstructions of atmospheric volume emission rate. The VERT approach is more general than previous ionospheric tomography methods because it can reconstruct the volume emission rate field irrespective of the particular excitation mechanisms (e.g., radiative recombination, photoelectron impact excitation, and energetic particle precipitation in auroras); physical models are then applied to interpret the airglow. The technique was developed and tested using data from the Special Sensor Ultraviolet Limb Imager and Special Sensor Ultraviolet Spectrographic Imager instruments aboard the Defense Meteorological Satellite Program F-18 spacecraft and planned for use with upcoming remote sensing missions. The technique incorporates several features to optimize the tomographic solutions, such as the use of a nonnegative algorithm (Richardson-Lucy, RL) that explicitly accounts for the Poisson statistics inherent in optical measurements, capability to include extinction effects due to resonant scattering and absorption of the photons from the lines of sight, a pseudodiffusion-based regularization scheme implemented between iterations of the RL code to produce smoother solutions, and the capability to estimate error bars on the solutions. Tests using simulated atmospheric emissions verify that the technique performs well in a variety of situations, including daytime, nighttime, and even in the challenging terminator regions. Lastly, we consider ionospheric nightglow and validate reconstructions of the nighttime electron density against Advanced Research Project Agency (ARPA) Long-range Tracking and Identification Radar (ALTAIR) incoherent scatter radar data.

  6. PSF reconstruction for Compton-based prompt gamma imaging

    NASA Astrophysics Data System (ADS)

    Jan, Meei-Ling; Lee, Ming-Wei; Huang, Hsuan-Ming

    2018-02-01

    Compton-based prompt gamma (PG) imaging has been proposed for in vivo range verification in proton therapy. However, several factors degrade the image quality of PG images, some of which are due to inherent properties of a Compton camera such as spatial resolution and energy resolution. Moreover, Compton-based PG imaging has a spatially variant resolution loss. In this study, we investigate the performance of the list-mode ordered subset expectation maximization algorithm with a shift-variant point spread function (LM-OSEM-SV-PSF) model. We also evaluate how well the PG images reconstructed using an SV-PSF model reproduce the distal falloff of the proton beam. The SV-PSF parameters were estimated from simulation data of point sources at various positions. Simulated PGs were produced in a water phantom irradiated with a proton beam. Compared to the LM-OSEM algorithm, the LM-OSEM-SV-PSF algorithm improved the quality of the reconstructed PG images and the estimation of PG falloff positions. In addition, the 4.44 and 5.25 MeV PG emissions can be accurately reconstructed using the LM-OSEM-SV-PSF algorithm. However, for the 2.31 and 6.13 MeV PG emissions, the LM-OSEM-SV-PSF reconstruction provides limited improvement. We also found that the LM-OSEM algorithm followed by a shift-variant Richardson-Lucy deconvolution could reconstruct images with quality visually similar to the LM-OSEM-SV-PSF-reconstructed images, while requiring shorter computation time.

  7. The interaction of the outflow with the molecular disk in the Active Galactic Nucleus of NGC 6951

    NASA Astrophysics Data System (ADS)

    May, D.; Steiner, J. E.; Ricci, T. V.; Menezes, R. B.; Andrade, I. S.

    2015-02-01

    Context: We present a study of the central 200 pc of NGC 6951, in the optical and NIR, taken with the Gemini North Telescope integral field spectrographs, with a resolution of ~0''.1. Methods: We used a set of image processing techniques, such as the filtering of high spatial and spectral frequencies, Richardson-Lucy deconvolution and PCA Tomography (Steiner et al. 2009), to map the distribution and kinematics of the emission lines. Results: We found a thick molecular disk, with the ionization cone highly misaligned.

  8. Deconvolution enhanced direction of arrival estimation using one- and three-component seismic arrays applied to ocean induced microseisms

    NASA Astrophysics Data System (ADS)

    Gal, M.; Reading, A. M.; Ellingsen, S. P.; Koper, K. D.; Burlacu, R.; Gibbons, S. J.

    2016-07-01

    Microseisms in the period of 2-10 s are generated in deep oceans and near coastal regions. It is common for microseisms from multiple sources to arrive at the same time at a given seismometer. It is therefore desirable to be able to measure multiple slowness vectors accurately. Popular ways to estimate the direction of arrival of ocean induced microseisms are the conventional (fk) or adaptive (Capon) beamformer. These techniques give robust estimates, but are limited in their resolution capabilities and hence do not always detect all arrivals. One of the limiting factors in determining direction of arrival with seismic arrays is the array response, which can strongly influence the estimation of weaker sources. In this work, we aim to improve the resolution for weaker sources and evaluate the performance of two deconvolution algorithms, Richardson-Lucy deconvolution and a new implementation of CLEAN-PSF. The algorithms are tested with three arrays of different aperture (ASAR, WRA and NORSAR) using 1 month of real data each and compared with the conventional approaches. We find an improvement over conventional methods from both algorithms and the best performance with CLEAN-PSF. We then extend the CLEAN-PSF framework to three components (3C) and evaluate 1 yr of data from the Pilbara Seismic Array in northwest Australia. The 3C CLEAN-PSF analysis is capable of resolving a previously undetected Sn phase.

  9. A comparison of waveform processing algorithms for single-wavelength LiDAR bathymetry

    NASA Astrophysics Data System (ADS)

    Wang, Chisheng; Li, Qingquan; Liu, Yanxiong; Wu, Guofeng; Liu, Peng; Ding, Xiaoli

    2015-03-01

    Due to the low-cost and lightweight units, single-wavelength LiDAR bathymetric systems are an ideal option for shallow-water (<12 m) bathymetry. However, one disadvantage of such systems is the lack of near-infrared and Raman channels, which results in difficulties in extracting the water surface. Therefore, the choice of a suitable waveform processing method is extremely important to guarantee the accuracy of the bathymetric retrieval. In this paper, we test six algorithms for single-wavelength bathymetric waveform processing, i.e. peak detection (PD), the average square difference function (ASDF), Gaussian decomposition (GD), quadrilateral fitting (QF), Richardson-Lucy deconvolution (RLD), and Wiener filter deconvolution (WD). To date, most of these algorithms have previously only been applied in topographic LiDAR waveforms captured over land. A simulated dataset and an Optech Aquarius dataset were used to assess the algorithms, with the focus being on their capability of extracting the depth and the bottom response. The influences of a number of water and equipment parameters were also investigated by the use of a Monte Carlo method. The results showed that the RLD method had a superior performance in terms of a high detection rate and low errors in the retrieved depth and magnitude. The attenuation coefficient, noise level, water depth, and bottom reflectance had significant influences on the measurement error of the retrieved depth, while the effects of scan angle and water surface roughness were not so obvious.

  10. The projected gravitational potential of the galaxy cluster MACS J1206 derived from galaxy kinematics

    NASA Astrophysics Data System (ADS)

    Stock, Dennis; Meyer, Sven; Sarli, Eleonora; Bartelmann, Matthias; Balestra, Italo; Grillo, Claudio; Koekemoer, Anton; Mercurio, Amata; Nonino, Mario; Rosati, Piero

    2015-12-01

    We reconstruct the radial profile of the projected gravitational potential of the galaxy cluster MACS J1206 from 592 spectroscopic measurements of velocities of cluster members. To accomplish this, we use a method we have developed recently based on the Richardson-Lucy deprojection algorithm and an inversion of the spherically-symmetric Jeans equation. We find that, within the uncertainties, our reconstruction agrees very well with a potential reconstruction from weak and strong gravitational lensing as well as with a potential obtained from X-ray measurements. In addition, our reconstruction is in good agreement with several common analytic profiles of the lensing potential. Varying the anisotropy parameter in the Jeans equation, we find that isotropy parameters, which are either small, β ≲ 0.2, or decrease with radius, yield potential profiles that strongly disagree with that obtained from gravitational lensing. We achieve the best agreement between our potential profile and the profile from gravitational lensing if the anisotropy parameter rises steeply to β ≈ 0.6 within ≈ 0.5 Mpc and stays constant further out.

  11. Spatial studies of planetary nebulae with IRAS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hawkins, G.W.; Zuckerman, B.

    1991-06-01

    The infrared sizes at the four IRAS wavelengths of 57 planetaries, most with 20-60 arcsec optical size, are derived from spatial deconvolution of one-dimensional survey mode scans. Survey observations from multiple detectors and hours confirmed (HCON) observations are combined to increase the sampling to a rate that is sufficient for successful deconvolution. The Richardson-Lucy deconvolution algorithm is used to obtain an increase in resolution of a factor of about 2 or 3 from the normal IRAS detector sizes of 45, 45, 90, and 180 arcsec at wavelengths 12, 25, 60, and 100 microns. Most of the planetaries deconvolve at 12 and 25 microns to sizes equal to or smaller than the optical size. Some of the planetaries with optical rings 60 arcsec or more in diameter show double-peaked IRAS profiles. Many, such as NGC 6720 and NGC 6543, show all infrared sizes equal to the optical size, while others indicate increasing infrared size with wavelength. Deconvolved IRAS profiles are presented for the 57 planetaries at nearly all wavelengths where IRAS flux densities are 1-2 Jy or higher. 60 refs.

  12. Reconstruction of the primordial power spectrum using temperature and polarisation data from multiple experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nicholson, Gavin; Contaldi, Carlo R., E-mail: gavin.nicholson05@imperial.ac.uk, E-mail: c.contaldi@imperial.ac.uk

    2009-07-01

    We develop a method to reconstruct the primordial power spectrum, P(k), using both temperature and polarisation data from the joint analysis of a number of Cosmic Microwave Background (CMB) observations. The method is an extension of the Richardson-Lucy algorithm, first applied in this context by Shafieloo and Souradeep [1]. We show how the inclusion of polarisation measurements can decrease the uncertainty in the reconstructed power spectrum. In particular, the polarisation data can constrain oscillations in the spectrum more effectively than total intensity only measurements. We apply the estimator to a compilation of current CMB results. The reconstructed spectrum is consistent with the best-fit power spectrum, although we find evidence for a 'dip' in the power on scales k ≈ 0.002 Mpc⁻¹. This feature appears to be associated with the WMAP power in the region 18 ≤ l ≤ 26, which is consistently below best-fit models. We also forecast the reconstruction for a simulated, Planck-like [2] survey including sample-variance-limited polarisation data.

  13. A spatially-variant deconvolution method based on total variation for optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Almasganj, Mohammad; Adabi, Saba; Fatemizadeh, Emad; Xu, Qiuyun; Sadeghi, Hamid; Daveluy, Steven; Nasiriavanaki, Mohammadreza

    2017-03-01

    Optical Coherence Tomography (OCT) has great potential to elicit clinically useful information from tissues due to its high axial and transverse resolution. In practice, an OCT setup cannot reach its theoretical resolution due to imperfections of its components, which make its images blurry. The blurriness differs across regions of the image; thus, it cannot be modeled by a single point spread function (PSF). In this paper, we investigate the use of solid phantoms to estimate the PSF of each sub-region of the imaging system. We then utilize Lucy-Richardson, Hybr and total variation (TV) based iterative deconvolution methods for mitigating the resulting spatially variant blur. It is shown that the TV-based method suppresses the speckle noise in OCT images better than the other two approaches. The performance of the proposed algorithm is tested on various samples, including several skin tissues as well as a test image blurred with a synthetic PSF map, demonstrating qualitatively and quantitatively the advantage of the TV-based deconvolution method using a spatially variant PSF for enhancing image quality.

  14. Bayesian Deconvolution for Angular Super-Resolution in Forward-Looking Scanning Radar

    PubMed Central

    Zha, Yuebo; Huang, Yulin; Sun, Zhichao; Wang, Yue; Yang, Jianyu

    2015-01-01

    Scanning radar is of notable importance for ground surveillance, terrain mapping and disaster rescue. However, the angular resolution of a scanning radar image is poor compared to the achievable range resolution. This paper presents a deconvolution algorithm for angular super-resolution in scanning radar based on Bayesian theory, which states that the angular super-resolution can be realized by solving the corresponding deconvolution problem with the maximum a posteriori (MAP) criterion. The algorithm considers that the noise is composed of two mutually independent parts, i.e., a Gaussian signal-independent component and a Poisson signal-dependent component. In addition, the Laplace distribution is used to represent the prior information about the targets under the assumption that the radar image of interest can be represented by the dominant scatterers in the scene. Experimental results demonstrate that the proposed deconvolution algorithm has higher precision for angular super-resolution compared with the conventional algorithms, such as the Tikhonov regularization algorithm, the Wiener filter and the Richardson–Lucy algorithm. PMID:25806871
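
    Schematically, the MAP criterion invoked above amounts to maximising the posterior, or equivalently minimising a penalised negative log-likelihood; with a Laplace prior on the target amplitudes x this takes the familiar l1-regularised form

    $$ \hat{x} = \arg\max_{x}\; p(y \mid x)\, p(x) \;=\; \arg\min_{x} \Big[ -\log p(y \mid x) + \lambda \lVert x \rVert_{1} \Big], $$

    where y is the received echo (the antenna pattern convolved with the scene), the data term encodes the mixed Gaussian-Poisson noise model, and \(\lambda\) is set by the scale of the Laplace prior. This is a generic statement of the approach, not the paper's exact objective.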

  15. Combining high fidelity simulations and real data for improved small-footprint waveform lidar assessment of vegetation structure (Invited)

    NASA Astrophysics Data System (ADS)

    van Aardt, J. A.; Wu, J.; Asner, G. P.

    2010-12-01

    Our understanding of vegetation complexity and biodiversity, from a remote sensing perspective, has evolved from 2D species diversity to also include 3D vegetation structural diversity. Attempts at using image-based approaches for structural assessment have met with reasonable success, but 3D remote sensing technologies, such as radar and light detection and ranging (lidar), are arguably more adept at sensing vegetation structure. While radar-derived structure metrics tend to break down at high biomass levels, novel waveform lidar systems present us with new opportunities for detailed and scalable structural characterization of vegetation. These sensors digitize the entire backscattered energy profile at high spatial and vertical resolutions and often at off-nadir angles. Research teams at Rochester Institute of Technology (RIT) and Carnegie Institution for Science have been using airborne data from the Carnegie Airborne Observatory (CAO) to assess vegetation structure and variation in savanna ecosystems in and around the Kruger National Park, South Africa. It quickly became evident that (i) pre-processing of small-footprint waveform data is a critical step prior to testing scientific hypotheses, (ii) a number of assumptions of how vegetation structure is expressed in these 3D signals need to be evaluated, and very importantly (iii) we need to re-evaluate our linkages between coarse in-field measurements, e.g., volume, biomass, leaf area index (LAI), and metrics derived from waveform lidar. Research has progressed to the stage where we have evaluated various pre-processing steps, e.g., deconvolution via the Wiener filter, Richardson-Lucy, and non-negative least squares algorithms, and the coupling of waveform voxels to tree structure in a simulation environment. This was done in the MODTRAN-based Digital Imaging and Remote Sensing Image Generation (DIRSIG) simulation environment, developed at RIT. We generated "truth" cross-section datasets of detailed virtual trees in this environment and evaluated inversion approaches to tree structure estimation. Various outgoing pulse widths, tree structures, and a noise component were included as part of the simulation effort. Results, for example, have shown that the Richardson-Lucy algorithm outperforms other approaches in terms of retrieval of known structural information and that our assumption regarding the position of the ground surface needs re-evaluation; they have also shed light on herbaceous biomass and waveform interactions and on the impact of outgoing pulse width on assessments. These efforts have gone a long way in providing a solid foundation for analysis and interpretation of actual waveform data from the savanna study area. We expect that newfound knowledge with respect to waveform-target interactions from these simulations will also aid efforts to reconstruct 3D trees from real data and better describe associated structural diversity. Results will be presented at the conference.

  16. Improving image quality in laboratory x-ray phase-contrast imaging

    NASA Astrophysics Data System (ADS)

    De Marco, F.; Marschner, M.; Birnbacher, L.; Viermetz, M.; Noël, P.; Herzen, J.; Pfeiffer, F.

    2017-03-01

    Grating-based X-ray phase-contrast (gbPC) is known to provide significant benefits for biomedical imaging. To investigate these benefits, a high-sensitivity gbPC micro-CT setup for small (≈5 cm) biological samples has been constructed. Unfortunately, high differential-phase sensitivity leads to an increased magnitude of data processing artifacts, limiting the quality of tomographic reconstructions. Most importantly, processing of phase-stepping data with incorrect stepping positions can introduce artifacts resembling Moiré fringes to the projections. Additionally, the focal spot size of the X-ray source limits resolution of tomograms. Here we present a set of algorithms to minimize artifacts, increase resolution and improve visual impression of projections and tomograms from the examined setup. We assessed two algorithms for artifact reduction: Firstly, a correction algorithm exploiting correlations of the artifacts and differential-phase data was developed and tested. Artifacts were reliably removed without compromising image data. Secondly, we implemented a new algorithm for flatfield selection, which was shown to exclude flat-fields with strong artifacts. Both procedures successfully improved image quality of projections and tomograms. Deconvolution of all projections of a CT scan can minimize blurring introduced by the finite size of the X-ray source focal spot. Application of the Richardson-Lucy deconvolution algorithm to gbPC-CT projections resulted in an improved resolution of phase-contrast tomograms. Additionally, we found that nearest-neighbor interpolation of projections can improve the visual impression of very small features in phase-contrast tomograms. In conclusion, we achieved an increase in image resolution and quality for the investigated setup, which may lead to an improved detection of very small sample features, thereby maximizing the setup's utility.

  17. Image enhancement in positron emission mammography

    NASA Astrophysics Data System (ADS)

    Slavine, Nikolai V.; Seiler, Stephen; McColl, Roderick W.; Lenkinski, Robert E.

    2017-02-01

    Purpose: To evaluate an efficient iterative deconvolution method (RSEMD) for improving the quantitative accuracy of breast images previously reconstructed by a commercial positron emission mammography (PEM) scanner. Materials and Methods: The RSEMD method was tested on breast phantom data and clinical PEM imaging data. Data acquisition was performed on a commercial Naviscan Flex Solo II PEM camera. This method was applied to patient breast images previously reconstructed with Naviscan software (MLEM) to determine improvements in resolution, signal to noise ratio (SNR) and contrast to noise ratio (CNR). Results: In all of the patients' breast studies the post-processed images proved to have higher resolution and lower noise as compared with images reconstructed by conventional methods. In general, the values of SNR reached a plateau at around 6 iterations with an improvement factor of about 2 for post-processed Flex Solo II PEM images. Improvements in image resolution after the application of RSEMD have also been demonstrated. Conclusions: A rapidly converging, iterative deconvolution algorithm with a novel resolution subsets-based approach RSEMD that operates on patient DICOM images has been used for quantitative improvement in breast imaging. The RSEMD method can be applied to clinical PEM images to improve image quality to diagnostically acceptable levels and will be crucial in order to facilitate diagnosis of tumor progression at the earliest stages. The RSEMD method can be considered as an extended Richardson-Lucy algorithm with multiple resolution levels (resolution subsets).

  18. Critical study of the distribution of rotational velocities of Be stars. I. Deconvolution methods, effects due to gravity darkening, macroturbulence, and binarity

    NASA Astrophysics Data System (ADS)

    Zorec, J.; Frémat, Y.; Domiciano de Souza, A.; Royer, F.; Cidale, L.; Hubert, A.-M.; Semaan, T.; Martayan, C.; Cochetti, Y. R.; Arias, M. L.; Aidelman, Y.; Stee, P.

    2016-11-01

    Context. Among intermediate-mass and massive stars, Be stars are the fastest rotators in the main sequence (MS) and, as such, these stars are a cornerstone to validate models of structure and evolution of rotating stars. Several phenomena, however, induce under- or overestimations of either their apparent Vsini or their true velocity V. Aims: In the present contribution we aim at obtaining distributions of true rotational velocities corrected for systematic effects induced by the rapid rotation itself, macroturbulent velocities, and binarity. Methods: We study a set of 233 Be stars by assuming they have inclination angles distributed at random. We critically discuss the methods of Cranmer and Lucy-Richardson, which enable us to transform a distribution of projected velocities into another distribution of true rotational velocities, where the gravitational darkening effect on the Vsini parameter is considered in different ways. We conclude that the Lucy-Richardson iterative algorithm best suits the purposes of the present work, but it requires a thorough determination of the stellar fundamental parameters. Results: We conclude that once the mode of ratios of the true velocities of Be stars attains the value V/Vc ≃ 0.77 in the main-sequence (MS) evolutionary phase, it remains unchanged up to the end of the MS lifespan. The statistical corrections found on the distribution of ratios V/Vc for overestimations of Vsini, due to macroturbulent motions and binarity, produce a shift of this distribution toward lower values of V/Vc when Be stars in all MS evolutionary stages are considered together. The mode of the final distribution obtained is at V/Vc ≃ 0.65. This distribution is nearly symmetric and shows that the Be phenomenon is characterized by a wide range of true velocity ratios 0.3 ≲ V/Vc ≲ 0.95. It thus suggests that the probability that Be stars are critical rotators is extremely low. Conclusions: The corrections attempted in the present work represent an initial step to infer indications about the nature of the Be-star surface rotation that will be studied in the second paper of this series. Full Tables 1 and 4 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/595/A132

  19. Contourlet domain multiband deblurring based on color correlation for fluid lens cameras.

    PubMed

    Tzeng, Jack; Liu, Chun-Chen; Nguyen, Truong Q

    2010-10-01

    Due to the novel fluid optics, unique image processing challenges are presented by the fluidic lens camera system. Developed for surgical applications, unique properties, such as no moving parts while zooming and better miniaturization than traditional glass optics, are advantages of the fluid lens. Despite these abilities, sharp color planes and blurred color planes are created by the nonuniform reaction of the liquid lens to different color wavelengths. Severe axial color aberrations are caused by this reaction. In order to deblur color images without estimating a point spread function, a contourlet filter bank system is proposed. Information from sharp color planes is used by this multiband deblurring method to improve blurred color planes. Compared to traditional Lucy-Richardson and Wiener deconvolution algorithms, significantly improved sharpness and reduced ghosting artifacts are produced by a previous wavelet-based method. Directional filtering is used by the proposed contourlet-based system to adjust to the contours of the image. An image is produced by the proposed method which has a similar level of sharpness to the previous wavelet-based method and has fewer ghosting artifacts. Conditions for when this algorithm will reduce the mean squared error are analyzed. While improving the blue color plane by using information from the green color plane is the primary focus of this paper, these methods could be adjusted to improve the red color plane. Many multiband systems such as global mapping, infrared imaging, and computer assisted surgery are natural extensions of this work. This information sharing algorithm is beneficial to any image set with high edge correlation. Improved results in the areas of deblurring, noise reduction, and resolution enhancement can be produced by the proposed algorithm.

  20. Axial resolution improvement in spectral domain optical coherence tomography using a depth-adaptive maximum-a-posterior framework

    NASA Astrophysics Data System (ADS)

    Boroomand, Ameneh; Tan, Bingyao; Wong, Alexander; Bizheva, Kostadinka

    2015-03-01

    The axial resolution of Spectral Domain Optical Coherence Tomography (SD-OCT) images degrades with scanning depth due to the limited number of pixels and the pixel size of the camera, aberrations in the spectrometer optics, and wavelength-dependent scattering and absorption in the imaged object [1]. Here we propose a novel algorithm which compensates for the depth-dependent axial Point Spread Function (PSF) blur that these factors introduce in SD-OCT images. The proposed method is based on a Maximum A Posteriori (MAP) reconstruction framework which takes advantage of a Stochastic Fully Connected Conditional Random Field (SFCRF) model. The aim is to compensate for the depth-dependent axial blur in SD-OCT images and simultaneously suppress the speckle noise which is inherent to all OCT images. Applying the proposed depth-dependent axial resolution enhancement technique to an OCT image of a cucumber considerably improved the axial resolution of the image, especially at greater imaging depths, and allowed better visualization of cellular membranes and nuclei. Comparing the result of our proposed method with the conventional Lucy-Richardson deconvolution algorithm clearly demonstrates the efficiency of our technique in visualizing and preserving fine details and structures in the imaged sample, as well as in suppressing speckle noise. This illustrates the potential usefulness of our technique as a replacement for hardware approaches, which are often very costly and complicated.
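
    For reference, the conventional Lucy-Richardson deconvolution used as the baseline in this record is a simple multiplicative iteration; a minimal Python sketch for a known, shift-invariant PSF (illustrative only, not the depth-adaptive MAP method proposed above) is:

        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy(observed, psf, n_iter=30, eps=1e-12):
            """Plain Richardson-Lucy iteration for a known, shift-invariant PSF."""
            estimate = np.full(observed.shape, observed.mean(), dtype=float)
            psf_mirror = psf[::-1, ::-1] if psf.ndim == 2 else psf[::-1]
            for _ in range(n_iter):
                blurred = fftconvolve(estimate, psf, mode="same")
                ratio = observed / (blurred + eps)              # data / current model
                estimate *= fftconvolve(ratio, psf_mirror, mode="same")
            return estimate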

  1. Reconstruction of the mass distribution of galaxy clusters from the inversion of the thermal Sunyaev-Zel'dovich effect

    NASA Astrophysics Data System (ADS)

    Majer, C. L.; Meyer, S.; Konrad, S.; Sarli, E.; Bartelmann, M.

    2016-07-01

    This paper continues a series in which we intend to show how all observables of galaxy clusters can be combined to recover the two-dimensional, projected gravitational potential of individual clusters. Our goal is to develop a non-parametric algorithm for joint cluster reconstruction taking all cluster observables into account. For this reason we focus on the line-of-sight projected gravitational potential, proportional to the lensing potential, in order to extend existing reconstruction algorithms. In this paper, we begin with the relation between the Compton-y parameter and the Newtonian gravitational potential, assuming hydrostatic equilibrium and a polytropic stratification of the intracluster gas. Extending our first publication we now consider a spheroidal rather than a spherical cluster symmetry. We show how a Richardson-Lucy deconvolution can be used to convert the intensity change of the CMB due to the thermal Sunyaev-Zel'dovich effect into an estimate for the two-dimensional gravitational potential. We apply our reconstruction method to a cluster based on an N-body/hydrodynamical simulation processed with the characteristics (resolution and noise) of the ALMA interferometer for which we achieve a relative error of ≲20 per cent for a large fraction of the virial radius. We further apply our method to an observation of the galaxy cluster RXJ1347 for which we can reconstruct the potential with a relative error of ≲20 per cent for the observable cluster range.

  2. Waveform LiDAR processing: comparison of classic approaches and optimized Gold deconvolution to characterize vegetation structure and terrain elevation

    NASA Astrophysics Data System (ADS)

    Zhou, T.; Popescu, S. C.; Krause, K.

    2016-12-01

    Waveform Light Detection and Ranging (LiDAR) data have advantages over discrete-return LiDAR data in accurately characterizing vegetation structure. However, we lack a comprehensive understanding of waveform data processing approaches under different topography and vegetation conditions. The objective of this paper is to highlight a novel deconvolution algorithm, the Gold algorithm, for processing waveform LiDAR data with optimal deconvolution parameters. Further, we present a comparative study of waveform processing methods to provide insight into selecting an approach for a given combination of vegetation and terrain characteristics. We employed two waveform processing methods: 1) direct decomposition, 2) deconvolution and decomposition. In method two, we utilized two deconvolution algorithms - the Richardson Lucy (RL) algorithm and the Gold algorithm. The comprehensive and quantitative comparisons were conducted in terms of the number of detected echoes, position accuracy, the bias of the end products (such as digital terrain model (DTM) and canopy height model (CHM)) from discrete LiDAR data, along with parameter uncertainty for these end products obtained from different methods. This study was conducted at three study sites that include diverse ecological regions, vegetation and elevation gradients. Results demonstrate that two deconvolution algorithms are sensitive to the pre-processing steps of input data. The deconvolution and decomposition method is more capable of detecting hidden echoes with a lower false echo detection rate, especially for the Gold algorithm. Compared to the reference data, all approaches generate satisfactory accuracy assessment results with small mean spatial difference (<1.22 m for DTMs, < 0.77 m for CHMs) and root mean square error (RMSE) (<1.26 m for DTMs, < 1.93 m for CHMs). More specifically, the Gold algorithm is superior to others with smaller root mean square error (RMSE) (< 1.01m), while the direct decomposition approach works better in terms of the percentage of spatial difference within 0.5 and 1 m. The parameter uncertainty analysis demonstrates that the Gold algorithm outperforms other approaches in dense vegetation areas, with the smallest RMSE, and the RL algorithm performs better in sparse vegetation areas in terms of RMSE.

  3. Adaption of the LUCI framework to account for detailed farm management: a case study exploring potential for achieving locally and nationally significant greenhouse gas, flooding and nutrient mitigation without compromising livelihoods on New Zealand farm

    NASA Astrophysics Data System (ADS)

    Jackson, Bethanna; Trodahl, Martha; Maxwell, Deborah; Easton, Stuart

    2016-04-01

    This talk discusses recent progress in adapting the Land Utilisation and Capability Indicator (LUCI) framework to take account of the impact of detailed farm management on greenhouse gas emissions and on water, sediment and nutrient delivery to waterways. LUCI is a land management decision support framework which examines the impact of current and potential interventions on a variety of outcomes, including flood mitigation, water supply, greenhouse gas emissions, biodiversity, erosion, sediment and nutrient delivery to waterways, and agricultural production. The potential of the landscape to provide benefits is a function of both the biophysical properties of individual landscape elements and their configuration. Both are respected in LUCI where possible. For example, the hydrology, sediment and chemical routing algorithms are based on physical principles of hillslope flow, taking information on the storage and permeability capacity of elements within the landscape from soil and land use data and honoring physical thresholds, mass and energy balance constraints. LUCI discretizes hydrological response units within the landscape according to similarity of their hydraulic properties and preserves spatially explicit topographical routing. Implications of keeping the "status quo" or potential scenarios of land management change can then be evaluated under different meteorological or climatic events (e.g. flood return periods, rainfall events, droughts), cascading water through the hydrological response units using a "fill and spill" approach. These and other component algorithms are designed to be fast-running while maintaining physical consistency and fine spatial detail. This allows it to operate from subfield level scale to catchment, or even national scale, simultaneously. It analyses and communicates the spatial pattern of individual provision and tradeoffs/synergies between desired outcomes at detailed resolutions and provides suggestions on where management change could be most efficiently targeted to meet water quality targets while maintaining production. Historically, LUCI has inferred land management from nationally available land cover categorisations, so lacked the capacity to discriminate between differences in more detailed management (tillage information, type of irrigation system, stocking numbers and type, etc). However, recently a collaboration with a farmer cooperative has commenced. LUCI is being further developed to take in a range of more detailed management information, which can be entered directly to LUCI or easily integrated via existing farm management files. Example output using a variety of management scenarios and ongoing "validation" of LUCI's performance at the farm scale will be presented using New Zealand crop, beef and dairy farms as case studies.

  4. Intelligent estimation of noise and blur variances using ANN for the restoration of ultrasound images.

    PubMed

    Uddin, Muhammad Shahin; Halder, Kalyan Kumar; Tahtali, Murat; Lambert, Andrew J; Pickering, Mark R; Marchese, Margaret; Stuart, Iain

    2016-11-01

    Ultrasound (US) imaging is a widely used clinical diagnostic tool among medical imaging techniques. It is a safe, economical, painless, portable, and noninvasive real-time tool compared to other imaging modalities. However, the image quality of US imaging is severely affected by the presence of speckle noise and blur during the acquisition process. In order to ensure a high-quality clinical diagnosis, US images must be restored by reducing their speckle noise and blur. In general, speckle noise is modeled as multiplicative noise following a Rayleigh distribution and blur as a Gaussian function. To this end, we propose an intelligent estimator based on artificial neural networks (ANNs) to estimate the variances of noise and blur, which, in turn, are used to obtain an image without discernible distortions. A set of statistical features computed from the image and its complex wavelet sub-bands is used as input to the ANN. In the proposed method, we solve the inverse Rayleigh function numerically for speckle reduction and use the Richardson-Lucy algorithm for de-blurring. The performance of this method is compared with that of traditional methods by applying them to synthetic, physical phantom, and clinical data, which confirms the better restoration results of the proposed method.
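
    To illustrate the de-blurring step described above, a minimal sketch is shown below; it assumes a recent scikit-image release (which exposes richardson_lucy with a num_iter argument) and an externally supplied blur standard deviation standing in for the ANN estimate, and all variable names are illustrative.

        import numpy as np
        from scipy.signal import fftconvolve
        from skimage.restoration import richardson_lucy

        def gaussian_psf(sigma, size=15):
            """Isotropic Gaussian blur kernel built from an estimated blur spread."""
            ax = np.arange(size) - size // 2
            xx, yy = np.meshgrid(ax, ax)
            psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
            return psf / psf.sum()

        rng = np.random.default_rng(0)
        clean = rng.random((128, 128))                 # stand-in for a despeckled US image
        sigma_hat = 2.0                                # would come from the ANN estimator
        blurred = fftconvolve(clean, gaussian_psf(sigma_hat), mode="same")
        restored = richardson_lucy(blurred, gaussian_psf(sigma_hat), num_iter=30)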

  5. Features in the primordial spectrum from WMAP: A wavelet analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shafieloo, Arman; Souradeep, Tarun; Manimaran, P.

    2007-06-15

    Precise measurements of the anisotropies in the cosmic microwave background enable us to do an accurate study of the form of the primordial power spectrum for a given set of cosmological parameters. In a previous paper [A. Shafieloo and T. Souradeep, Phys. Rev. D 70, 043523 (2004)], we implemented an improved (error-sensitive) Richardson-Lucy deconvolution algorithm on the measured angular power spectrum from the first year of WMAP data to determine the primordial power spectrum assuming a concordance cosmological model. This recovered spectrum has a likelihood far better than scale-invariant or 'best fit' scale-free spectra (Δ ln L ≈ 25 with respect to the Harrison-Zeldovich spectrum, and Δ ln L ≈ 11 with respect to the power-law spectrum with n_s = 0.95). In this paper we use the discrete wavelet transform (DWT) to decompose the local features of the recovered spectrum individually, to study their effect and significance on the recovered angular power spectrum and hence the likelihood. We show that, besides the infrared cutoff at the horizon scale, the associated features of the primordial power spectrum around the horizon have a significant effect on improving the likelihood. The strong features are localized at the horizon scale.

  6. Restoration of Motion-Blurred Image Based on Border Deformation Detection: A Traffic Sign Restoration Model

    PubMed Central

    Zeng, Yiliang; Lan, Jinhui; Ran, Bin; Wang, Qi; Gao, Jing

    2015-01-01

    Due to the rapid development of motor vehicle Driver Assistance Systems (DAS), the safety problems associated with automatic driving have become a hot issue in Intelligent Transportation. The traffic sign is one of the most important tools used to reinforce traffic rules. However, traffic sign image degradation based on computer vision is unavoidable during the vehicle movement process. In order to quickly and accurately recognize traffic signs in motion-blurred images in DAS, a new image restoration algorithm based on border deformation detection in the spatial domain is proposed in this paper. The border of a traffic sign is extracted using color information, and then the width of the border is measured in all directions. According to the width measured and the corresponding direction, both the motion direction and scale of the image can be confirmed, and this information can be used to restore the motion-blurred image. Finally, a gray mean grads (GMG) ratio is presented to evaluate the image restoration quality. Compared to the traditional restoration approach which is based on the blind deconvolution method and Lucy-Richardson method, our method can greatly restore motion blurred images and improve the correct recognition rate. Our experiments show that the proposed method is able to restore traffic sign information accurately and efficiently. PMID:25849350

  7. Restoration of motion-blurred image based on border deformation detection: a traffic sign restoration model.

    PubMed

    Zeng, Yiliang; Lan, Jinhui; Ran, Bin; Wang, Qi; Gao, Jing

    2015-01-01

    Due to the rapid development of motor vehicle Driver Assistance Systems (DAS), the safety problems associated with automatic driving have become a hot issue in Intelligent Transportation. The traffic sign is one of the most important tools used to reinforce traffic rules. However, traffic sign image degradation based on computer vision is unavoidable during the vehicle movement process. In order to quickly and accurately recognize traffic signs in motion-blurred images in DAS, a new image restoration algorithm based on border deformation detection in the spatial domain is proposed in this paper. The border of a traffic sign is extracted using color information, and then the width of the border is measured in all directions. According to the width measured and the corresponding direction, both the motion direction and scale of the image can be confirmed, and this information can be used to restore the motion-blurred image. Finally, a gray mean grads (GMG) ratio is presented to evaluate the image restoration quality. Compared to the traditional restoration approach which is based on the blind deconvolution method and Lucy-Richardson method, our method can greatly restore motion blurred images and improve the correct recognition rate. Our experiments show that the proposed method is able to restore traffic sign information accurately and efficiently.
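
    To show how an estimated blur direction and extent feed into restoration, a minimal sketch of a generic linear-motion blur kernel is given below; it is not the border-deformation detector of this record, and the length and angle values are illustrative.

        import numpy as np
        from scipy.ndimage import rotate

        def motion_psf(length, angle_deg):
            """Linear-motion blur kernel from an estimated blur extent (pixels) and direction."""
            size = length if length % 2 == 1 else length + 1      # odd kernel support
            psf = np.zeros((size, size))
            psf[size // 2, :length] = 1.0                         # horizontal streak of `length` px
            psf = rotate(psf, angle_deg, reshape=False, order=1)  # rotate to the blur direction
            psf = np.clip(psf, 0.0, None)
            return psf / psf.sum()

        psf = motion_psf(length=13, angle_deg=30.0)               # values are illustrative

    The resulting kernel can then be supplied to any non-blind deconvolution routine, such as the Lucy-Richardson baseline used for comparison in this record.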

  8. Image restoration and superresolution as probes of small scale far-IR structure in star forming regions

    NASA Technical Reports Server (NTRS)

    Lester, D. F.; Harvey, P. M.; Joy, M.; Ellis, H. B., Jr.

    1986-01-01

    Far-infrared continuum studies from the Kuiper Airborne Observatory are described that are designed to fully exploit the small-scale spatial information that this facility can provide. This work gives the clearest picture to date of the structure of galactic and extragalactic star forming regions in the far infrared. Work is presently being done with slit scans taken simultaneously at 50 and 100 microns, yielding one-dimensional data. Scans of sources in different directions have been used to obtain some information on two-dimensional structure. Planned work with linear arrays will allow us to generalize our techniques to two-dimensional image restoration. For faint sources, spatial information at the diffraction limit of the telescope is obtained, while for brighter sources, nonlinear deconvolution techniques have allowed us to improve over the diffraction limit by as much as a factor of four. Information on the details of the color temperature distribution is derived as well. This is made possible by the accuracy with which the instrumental point-source profile (PSP) is determined at both wavelengths. While these two PSPs are different, data at different wavelengths can be compared by proper spatial filtering. Considerable effort has been devoted to implementing deconvolution algorithms. Nonlinear deconvolution methods offer the potential of superresolution -- that is, inference of power at spatial frequencies that exceed D/lambda. This potential arises from the algorithm's implicit assumption that the deconvolved data are positive, a universally justifiable constraint for photon processes. We have tested two nonlinear deconvolution algorithms on our data: the Richardson-Lucy (R-L) method and the Maximum Entropy Method (MEM). The limits of image deconvolution techniques for achieving spatial resolution are addressed.

  9. Characterization of turbidity in Florida's Lake Okeechobee and Caloosahatchee and St. Lucie estuaries using MODIS-Aqua measurements.

    PubMed

    Wang, Menghua; Nim, Carl J; Son, Seunghyun; Shi, Wei

    2012-10-15

    This paper describes the use of ocean color remote sensing data from the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard the Aqua satellite to characterize turbidity in Lake Okeechobee and its primary drainage basins, the Caloosahatchee and St. Lucie estuaries from 2002 to 2010. Drainage modification and agricultural development in southern Florida transport sediments and nutrients from watershed agricultural areas to Lake Okeechobee. As a result of development around Lake Okeechobee and the estuaries that are connected to Lake Okeechobee, estuarine conditions have also been adversely impacted, resulting in salinity and nutrient fluctuations. The measurement of water turbidity in lacustrine and estuarine ecosystems allows researchers to understand important factors such as light limitation and the potential release of nutrients from re-suspended sediments. Based on a strong correlation between water turbidity and normalized water-leaving radiance at the near-infrared (NIR) band (nL(w)(869)), a new satellite water turbidity algorithm has been developed for Lake Okeechobee. This study has shown important applications with satellite-measured nL(w)(869) data for water quality monitoring and measurements for turbid inland lakes. MODIS-Aqua-measured water property data are derived using the shortwave infrared (SWIR)-based atmospheric correction algorithm in order to remotely obtain synoptic turbidity data in Lake Okeechobee and normalized water-leaving radiance using the red band (nL(w)(645)) in the Caloosahatchee and St. Lucie estuaries. We found varied, but distinct seasonal, spatial, and event driven turbidity trends in Lake Okeechobee and the Caloosahatchee and St. Lucie estuary regions. Wind waves and hurricanes have the largest influence on turbidity trends in Lake Okeechobee, while tides, currents, wind waves, and hurricanes influence the Caloosahatchee and St. Lucie estuarine areas. Published by Elsevier Ltd.

  10. A method to deconvolve stellar rotational velocities II. The probability distribution function via Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Christen, Alejandra; Escarate, Pedro; Curé, Michel; Rial, Diego F.; Cassetti, Julia

    2016-10-01

    Aims: Knowing the distribution of stellar rotational velocities is essential for understanding stellar evolution. Because we measure the projected rotational speed v sin i, we need to solve an ill-posed problem given by a Fredholm integral of the first kind to recover the "true" rotational velocity distribution. Methods: After discretization of the Fredholm integral we apply the Tikhonov regularization method to obtain the probability distribution function for stellar rotational velocities directly. We propose a simple and straightforward procedure to determine the Tikhonov parameter. We applied Monte Carlo simulations to prove that the Tikhonov method is a consistent estimator and asymptotically unbiased. Results: This method is applied to a sample of cluster stars. We obtain confidence intervals using a bootstrap method. Our results are in close agreement with those obtained using the Lucy method for recovering the probability density distribution of rotational velocities. Furthermore, the Lucy estimate lies inside our confidence interval. Conclusions: Tikhonov regularization is a highly robust method that deconvolves the rotational velocity probability density function from a sample of v sin i data directly, without the need for any convergence criteria.
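
    For orientation, Tikhonov regularization of a discretized Fredholm system of the first kind reduces to a damped least-squares solve; the following minimal Python sketch uses a generic smoothing kernel and a fixed regularization parameter, both illustrative rather than the kernel and parameter-selection procedure proposed in this record.

        import numpy as np

        def tikhonov_solve(A, b, lam):
            """Tikhonov-regularized solution x = argmin ||A x - b||^2 + lam * ||x||^2."""
            n = A.shape[1]
            return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

        # Toy discretized Fredholm problem: a smooth 1-D kernel blurs a peaked function.
        grid = np.linspace(0.0, 1.0, 80)
        A = np.exp(-((grid[:, None] - grid[None, :]) ** 2) / 0.005)
        truth = np.exp(-((grid - 0.5) ** 2) / 0.01)
        b = A @ truth + 1e-3 * np.random.default_rng(1).standard_normal(grid.size)
        estimate = tikhonov_solve(A, b, lam=1e-2)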

  11. The Effect of Special Reduction Procedures of IFU Observations from Gemini-NIFS on Dynamical Measurements of Nearby AGN

    NASA Astrophysics Data System (ADS)

    Pope, Crystal L.; Crenshaw, D. Michael; Fischer, Travis C.

    2016-01-01

    We present a preliminary analysis of the inflows and outflows in the narrow-line regions of nearby (z<0.1) AGN using observations from the Gemini-North telescope's Near-Infrared Integral Field Spectrograph (NIFS). In addition to the standard reduction procedure for NIFS data cubes, these observations were treated for multiple sources of noise and artifacts from the adaptive optics observations and the NIFS instrument. This procedure included the following steps: correction of the differential atmospheric refraction, spatial resampling, low-pass Butterworth spatial filtering, removal of the "instrumental fingerprint", and Richardson-Lucy deconvolution. We compare measurements from NIFS data cubes with and without the additional correction procedures to determine the effect of this data treatment on our scientific results.

  12. Gold - A novel deconvolution algorithm with optimization for waveform LiDAR processing

    NASA Astrophysics Data System (ADS)

    Zhou, Tan; Popescu, Sorin C.; Krause, Keith; Sheridan, Ryan D.; Putman, Eric

    2017-07-01

    Waveform Light Detection and Ranging (LiDAR) data have advantages over discrete-return LiDAR data in accurately characterizing vegetation structure. However, we lack a comprehensive understanding of waveform data processing approaches under different topography and vegetation conditions. The objective of this paper is to highlight a novel deconvolution algorithm, the Gold algorithm, for processing waveform LiDAR data with optimal deconvolution parameters. Further, we present a comparative study of waveform processing methods to provide insight into selecting an approach for a given combination of vegetation and terrain characteristics. We employed two waveform processing methods: (1) direct decomposition and (2) deconvolution and decomposition. In method two, we utilized two deconvolution algorithms - the Richardson-Lucy (RL) algorithm and the Gold algorithm. The comprehensive and quantitative comparisons were conducted in terms of the number of detected echoes, position accuracy, the bias of the end products (such as the digital terrain model (DTM) and canopy height model (CHM)) from the corresponding reference data, along with parameter uncertainty for these end products obtained from the different methods. This study was conducted at three study sites that include diverse ecological regions, vegetation and elevation gradients. Results demonstrate that both deconvolution algorithms are sensitive to the pre-processing steps of the input data. The deconvolution and decomposition method is more capable of detecting hidden echoes with a lower false echo detection rate, especially for the Gold algorithm. Compared to the reference data, all approaches generate satisfactory accuracy assessment results with small mean spatial differences (<1.22 m for DTMs, <0.77 m for CHMs) and root mean square error (RMSE) (<1.26 m for DTMs, <1.93 m for CHMs). More specifically, the Gold algorithm is superior to the others with a smaller RMSE (<1.01 m), while the direct decomposition approach works better in terms of the percentage of spatial difference within 0.5 and 1 m. The parameter uncertainty analysis demonstrates that the Gold algorithm outperforms the other approaches in dense vegetation areas, with the smallest RMSE, and the RL algorithm performs better in sparse vegetation areas in terms of RMSE. Additionally, higher levels of uncertainty occur mostly in areas with steep slopes and dense vegetation. This study provides an alternative and innovative approach for waveform processing that will benefit high-fidelity processing of waveform LiDAR data for characterizing vegetation structure.
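
    As background on the two deconvolution options compared in this record, the Gold iteration is often stated as a multiplicative update that preserves non-negativity; the sketch below uses that common form for a 1-D waveform, with the system response, waveform, and iteration count all illustrative, and it omits the pre-processing steps the study notes the algorithms are sensitive to.

        import numpy as np
        from scipy.linalg import toeplitz

        def gold_deconvolve(waveform, response, n_iter=500, eps=1e-12):
            """Gold deconvolution of a 1-D waveform with a known system response.

            Multiplicative update x <- x * (H^T y) / (H^T H x), which keeps the
            estimate non-negative for non-negative data.
            """
            n = waveform.size
            col = np.zeros(n)
            col[: response.size] = response
            H = toeplitz(col, np.zeros(n))        # lower-triangular convolution matrix
            HtH, Hty = H.T @ H, H.T @ waveform
            x = np.full(n, max(waveform.mean(), eps))
            for _ in range(n_iter):
                x *= Hty / (HtH @ x + eps)
            return x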

  13. LuciPHOr: Algorithm for Phosphorylation Site Localization with False Localization Rate Estimation Using Modified Target-Decoy Approach*

    PubMed Central

    Fermin, Damian; Walmsley, Scott J.; Gingras, Anne-Claude; Choi, Hyungwon; Nesvizhskii, Alexey I.

    2013-01-01

    The localization of phosphorylation sites in peptide sequences is a challenging problem in large-scale phosphoproteomics analysis. The intense neutral loss peaks and the coexistence of multiple serine/threonine and/or tyrosine residues are limiting factors for objectively scoring site patterns across thousands of peptides. Various computational approaches for phosphorylation site localization have been proposed, including Ascore, Mascot Delta score, and ProteinProspector, yet few address direct estimation of the false localization rate (FLR) in each experiment. Here we propose LuciPHOr, a modified target-decoy-based approach that uses mass accuracy and peak intensities for site localization scoring and FLR estimation. Accurate estimation of the FLR is a difficult task at the individual-site level because the degree of uncertainty in localization varies significantly across different peptides. LuciPHOr carries out simultaneous localization on all candidate sites in each peptide and estimates the FLR based on the target-decoy framework, where decoy phosphopeptides generated by placing artificial phosphorylation(s) on non-candidate residues compete with the non-decoy phosphopeptides. LuciPHOr also reports approximate site-level confidence scores for all candidate sites as a means to localize additional sites from multiphosphorylated peptides in which localization can be partially achieved. Unlike the existing tools, LuciPHOr is compatible with any search engine output processed through the Trans-Proteomic Pipeline. We evaluated the performance of LuciPHOr in terms of the sensitivity and accuracy of FLR estimates using two synthetic phosphopeptide libraries and a phosphoproteomic dataset generated from complex mouse brain samples. PMID:23918812

  14. Sheet-scanned dual-axis confocal microscopy using Richardson-Lucy deconvolution.

    PubMed

    Wang, D; Meza, D; Wang, Y; Gao, L; Liu, J T C

    2014-09-15

    We have previously developed a line-scanned dual-axis confocal (LS-DAC) microscope with subcellular resolution suitable for high-frame-rate diagnostic imaging at shallow depths. Due to the loss of confocality along one dimension, the contrast (signal-to-background ratio) of an LS-DAC microscope is degraded compared to a point-scanned DAC microscope. However, by using an sCMOS camera for detection, a short oblique light-sheet is imaged at each scanned position. Therefore, by scanning the light sheet in only one dimension, a thin 3D volume is imaged. Both sequential two-dimensional deconvolution and three-dimensional deconvolution are performed on the thin image volume to improve the resolution and contrast of one en face confocal image section at the center of the volume, a technique we call sheet-scanned dual-axis confocal (SS-DAC) microscopy.

  15. Dynamic deformation image de-blurring and image processing for digital imaging correlation measurement

    NASA Astrophysics Data System (ADS)

    Guo, X.; Li, Y.; Suo, T.; Liu, H.; Zhang, C.

    2017-11-01

    This paper proposes a method for de-blurring images captured during the dynamic deformation of materials. De-blurring is achieved with a dynamics-based approach that estimates the Point Spread Function (PSF) over the camera exposure window. The deconvolution process, which involves iterative matrix calculations over pixels, is then performed on the GPU to decrease the time cost. Compared to the Gauss method and the Lucy-Richardson method, it gives the best image restoration results. The proposed method has been evaluated using the Hopkinson bar loading system. In comparison to the blurry input image, the proposed method successfully restores the image. Image processing applications also demonstrate that the de-blurring method can improve the accuracy and stability of digital image correlation measurement.

  16. Yet one more dwell time algorithm

    NASA Astrophysics Data System (ADS)

    Haberl, Alexander; Rascher, Rolf

    2017-06-01

    The current demand for ever more powerful and efficient microprocessors, e.g. for deep learning, has led to an ongoing trend of reducing the feature size of integrated circuits. These processors are patterned with EUV lithography, which enables 7 nm chips [1]. Producing mirrors which satisfy the required specifications is a challenging task. Not only increasing requirements on the imaging properties, but also new lens shapes, such as aspheres or lenses with free-form surfaces, require innovative production processes. These lenses need the new deterministic sub-aperture polishing methods that have been established in the past few years. Such polishing methods are characterized by an empirically determined TIF and local stock removal. One such deterministic polishing method is ion-beam figuring (IBF). The beam profile of an ion beam is adjusted to a nearly ideal Gaussian shape by various parameters. With the known removal function, a dwell time profile can be generated for each measured error profile. Such a profile is generated pixel-accurately for the predetermined error profile, with the aim of minimizing the existing surface structures up to the cut-off frequency of the tool used [2]. The processing success of a correction-polishing run depends decisively on the accuracy of the previously computed dwell-time profile. The algorithm used to calculate the dwell time therefore has to reflect reality accurately. Furthermore, the machine operator should have no influence on the dwell-time calculation; consequently, there must not be any parameters that influence the calculation result. Finally, it should take a minimum of machining time to reach a minimum of remaining error structures. Unfortunately, current dwell time calculations are divergent, user-dependent, tend to produce long processing times, and need several parameters to be set. This paper describes a realistic, convergent, and user-independent dwell time algorithm. Typical processing times are reduced to about 80 %, in some cases down to 50 %, of those needed by conventional algorithms (Lucy-Richardson, Van-Cittert ...) as used in established machines. To verify its effectiveness, a plane surface was machined on an IBF.

  17. Fruit fly optimization based least square support vector regression for blind image restoration

    NASA Astrophysics Data System (ADS)

    Zhang, Jiao; Wang, Rui; Li, Junshan; Yang, Yawei

    2014-11-01

    The goal of image restoration is to reconstruct the original scene from a degraded observation. It is a critical and challenging task in image processing. Classical restoration requires explicit knowledge of the point spread function and a description of the noise as priors. However, this is not practical for much real-world image processing, so the recovery needs to be treated as a blind image restoration scenario. Since blind deconvolution is an ill-posed problem, many blind restoration methods need to make additional assumptions to construct restrictions. Due to differences in PSF and noise energy, blurred images can be quite different. It is difficult to achieve a good balance between proper assumptions and high restoration quality in blind deconvolution. Recently, machine learning techniques have been applied to blind image restoration. Least square support vector regression (LSSVR) has been proven to offer strong potential in estimation and forecasting problems. Therefore, this paper proposes an LSSVR-based image restoration method. However, selecting the optimal parameters for the support vector machine is essential to the training result. As a novel meta-heuristic algorithm, the fruit fly optimization algorithm (FOA) can be used to handle optimization problems and has the advantage of fast convergence to the global optimal solution. In the proposed method, the training samples are created by mapping a neighborhood in the degraded image to the central pixel in the original image. The mapping between the degraded image and the original image is learned by training the LSSVR. The two parameters of the LSSVR are optimized through FOA, with the fitness function of FOA calculated from the restoration error. With the acquired mapping, the degraded image can be recovered. Experimental results show that the proposed method obtains a satisfactory restoration effect. Compared with BP neural network regression, the SVR method, and the Lucy-Richardson algorithm, it speeds up the restoration and performs better. Both objective and subjective restoration performances are studied in the comparison experiments.

  18. Along-track calibration of SWIR push-broom hyperspectral imaging system

    NASA Astrophysics Data System (ADS)

    Jemec, Jurij; Pernuš, Franjo; Likar, Boštjan; Bürmen, Miran

    2016-05-01

    Push-broom hyperspectral imaging systems are increasingly used for various medical, agricultural and military purposes. The acquired images contain spectral information in every pixel of the imaged scene, providing additional information about the scene compared to classical RGB color imaging. Due to the misalignment and imperfections in the optical components comprising the push-broom hyperspectral imaging system, variable spectral and spatial misalignments and blur are present in the acquired images. To capture these distortions, a spatially and spectrally variant response function must be identified at each spatial and spectral position. In this study, we propose a procedure to characterize the variant response function of Short-Wavelength Infrared (SWIR) push-broom hyperspectral imaging systems in the across-track and along-track direction and remove its effect from the acquired images. Custom laser-machined spatial calibration targets are used for the characterization. The spatial and spectral variability of the response function in the across-track and along-track direction is modeled by a parametrized basis function. Finally, the characterization results are used to restore the distorted hyperspectral images in the across-track and along-track direction by a Richardson-Lucy deconvolution-based algorithm. The proposed calibration method in the across-track and along-track direction is thoroughly evaluated on images of targets with well-defined geometric properties. The results suggest that the proposed procedure is well suited for fast and accurate spatial calibration of push-broom hyperspectral imaging systems.

  19. Leapfrog variants of iterative methods for linear algebra equations

    NASA Technical Reports Server (NTRS)

    Saylor, Paul E.

    1988-01-01

    Two iterative methods are considered, Richardson's method and a general second order method. For both methods, a variant of the method is derived for which only even numbered iterates are computed. The variant is called a leapfrog method. Comparisons between the conventional form of the methods and the leapfrog form are made under the assumption that the number of unknowns is large. In the case of Richardson's method, it is possible to express the final iterate in terms of only the initial approximation, a variant of the iteration called the grand-leap method. In the case of the grand-leap variant, a set of parameters is required. An algorithm is presented to compute these parameters that is related to algorithms to compute the weights and abscissas for Gaussian quadrature. General algorithms to implement the leapfrog and grand-leap methods are presented. Algorithms for the important special case of the Chebyshev method are also given.

  20. Multi-PSF fusion in image restoration of range-gated systems

    NASA Astrophysics Data System (ADS)

    Wang, Canjin; Sun, Tao; Wang, Tingfeng; Miao, Xikui; Wang, Rui

    2018-07-01

    For the task of image restoration, an accurate estimate of the degrading PSF/kernel is the premise for recovering a visually superior image. The imaging process of a range-gated imaging system in the atmosphere involves many factors, such as back scattering, background radiation, the diffraction limit, and the vibration of the platform. On the one hand, because it is difficult to construct models for all factors, the kernels from physical-model-based methods are not strictly accurate or practical. On the other hand, there are few strong edges in the images, which introduces significant errors into most image-feature-based methods. Since different methods focus on different formation factors of the kernel, their results often complement each other. Therefore, we propose an approach which combines the physical model with image features. With a fusion strategy using a GCRF (Gaussian Conditional Random Fields) framework, we obtain a final kernel which is closer to the actual one. To address the problem that ground-truth images are difficult to obtain, we then propose a semi-data-driven fusion method in which different data sets are used to train the fusion parameters. Finally, a semi-blind restoration strategy based on the EM (Expectation Maximization) and RL (Richardson-Lucy) algorithms is proposed. Our method not only models how the laser propagates in the atmosphere and forms an image on the ICCD (Intensified CCD) plane, but also quantifies other unknown degradation factors using image-based methods, revealing how multiple kernel elements interact with each other. The experimental results demonstrate that our method achieves better performance than state-of-the-art restoration approaches.

  1. Directional MTF measurement using sphere phantoms for a digital breast tomosynthesis system

    NASA Astrophysics Data System (ADS)

    Lee, Changwoo; Baek, Jongduk

    2015-03-01

    Digital breast tomosynthesis (DBT) has been widely used as a diagnostic imaging modality for breast cancer because of its potential for structure noise reduction, better detectability, and less breast compression. Since the 3D modulation transfer function (MTF) is one of the quantitative metrics used to assess the spatial resolution of medical imaging systems, it is very important to measure the 3D MTF of a DBT system to evaluate its resolution performance. To do that, Samei et al. used sphere phantoms and applied Thornton's method to the DBT system. However, due to the limitations of Thornton's method, the low-frequency drop caused by the limited data acquisition angle and the reconstruction filters was not measured correctly. To overcome this limitation, we propose a Richardson-Lucy (RL) deconvolution based estimation method to measure the directional MTF. We reconstructed point and sphere objects using the FDK algorithm within a 40° data acquisition angle. The ideal 3D MTF is obtained by taking the Fourier transform of the reconstructed point object, and three directions (i.e., the fx-direction, fy-direction, and fxy-direction) of the ideal 3D MTF are used as a reference. To estimate the directional MTF, the plane integrals of the reconstructed and ideal sphere objects were calculated and used to estimate the directional PSF with the RL deconvolution technique. Finally, the directional MTF was calculated by taking the Fourier transform of the estimated PSF. Compared to the previous method, the proposed method showed good agreement with the ideal directional MTF, especially in the low-frequency region.
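
    For the final step described above, the relationship between an estimated PSF and the MTF is simply a normalized Fourier-transform magnitude; a minimal 1-D Python sketch is given below (illustrative only, covering neither the plane-integral nor the RL estimation steps of this record, and with an assumed Gaussian PSF and sampling pitch).

        import numpy as np

        def mtf_from_psf(psf, dx=1.0):
            """Normalized MTF: magnitude of the Fourier transform of a 1-D PSF."""
            otf = np.fft.rfft(psf / psf.sum())          # unit-area PSF -> MTF(0) = 1
            freqs = np.fft.rfftfreq(psf.size, d=dx)     # cycles per unit length
            return freqs, np.abs(otf)

        # Example: Gaussian PSF sampled at 0.1 mm pitch (values are illustrative).
        x = np.arange(-32, 32) * 0.1
        freqs, mtf = mtf_from_psf(np.exp(-x**2 / (2 * 0.3**2)), dx=0.1)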

  2. Lock No. 1 St. Lucie Canal. Upper gate structure, masonry ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Lock No. 1- St. Lucie Canal. Upper gate structure, masonry plan- masonry elevations. - St. Lucie Canal, St. Lucie Lock No. 1, St. Lucie, Cross State Canal, Okeechobee Intracoastal Waterway, Stuart, Martin County, FL

  3. Primordial power spectrum: a complete analysis with the WMAP nine-year data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hazra, Dhiraj Kumar; Shafieloo, Arman; Souradeep, Tarun, E-mail: dhiraj@apctp.org, E-mail: arman@apctp.org, E-mail: tarun@iucaa.ernet.in

    2013-07-01

    We have further improved the error-sensitive Richardson-Lucy deconvolution algorithm, making it applicable directly to the un-binned measured angular power spectrum of Cosmic Microwave Background observations to reconstruct the form of the primordial power spectrum. This improvement makes the application of the method significantly more straightforward by removing some intermediate stages of analysis, allowing a reconstruction of the primordial spectrum with higher efficiency and precision and with lower computational expense. Applying the modified algorithm, we fit the WMAP 9-year data using the optimized reconstructed form of the primordial spectrum, with an improvement of more than 300 in χ²_eff with respect to the best-fit power law. This is clearly beyond the reach of other alternative approaches, reflects the efficiency of the proposed method in the reconstruction process, and allows us to look for any possible feature in the primordial spectrum projected in the CMB data. Though the proposed method allows us to look at various possibilities for the form of the primordial spectrum, all having a good fit to the data, proper error analysis is needed to test the consistency of theoretical models since, along with possible physical artefacts, most of the features in the reconstructed spectrum might arise from fitting noise in the CMB data. The reconstructed error band for the form of the primordial spectrum, using many bootstrapped realizations of the WMAP 9-year data, shows proper consistency of the power-law form of the primordial spectrum with the WMAP 9 data at all wave numbers. Including WMAP polarization data in the analysis has not improved our results much due to its low quality, but we expect Planck data will allow us to make a full analysis of CMB observations in both temperature and polarization, separately and in combination.

  4. A novel data processing technique for image reconstruction of penumbral imaging

    NASA Astrophysics Data System (ADS)

    Xie, Hongwei; Li, Hongyun; Xu, Zeping; Song, Guzhou; Zhang, Faqiang; Zhou, Lin

    2011-06-01

    A CT image reconstruction technique was applied to the data processing of penumbral imaging. Compared with traditional processing techniques for penumbral coded pinhole images, such as the Wiener, Lucy-Richardson, and blind techniques, this approach is brand new. In this method, the coded aperture processing is, for the first time, independent of the point spread function of the imaging diagnostic system. In this way, the technical obstacle in traditional coded pinhole image processing caused by the uncertainty of the point spread function of the imaging diagnostic system was overcome. Based on this theoretical study, simulations of penumbral imaging and image reconstruction were carried out and gave fairly good results. In the visible-light experiment, a point source of light was used to irradiate a 5 mm × 5 mm object after diffuse scattering and volume scattering, and the penumbral image was acquired with an aperture size of ~20 mm. Finally, the CT image reconstruction technique was used for image reconstruction and provided a fairly good result.

  5. WAKES: Wavelet Adaptive Kinetic Evolution Solvers

    NASA Astrophysics Data System (ADS)

    Mardirian, Marine; Afeyan, Bedros; Larson, David

    2016-10-01

    We are developing a general capability to adaptively solve phase space evolution equations, mixing particle and continuum techniques. The multi-scale approach is achieved using wavelet decompositions, which allow phase space density estimation to occur with scale-dependent accuracy and variable time stepping. Possible improvements on the SFK method of Larson are discussed, including the use of multiresolution-analysis-based Richardson-Lucy iteration and adaptive step size control in explicit vs. implicit approaches. Examples will be shown with KEEN waves and KEEPN (Kinetic Electrostatic Electron Positron Nonlinear) waves, which are the pair plasma generalization of the former and have a much richer span of dynamical behavior. WAKES techniques are well suited for the study of driven and released nonlinear, non-stationary, self-organized structures in phase space which have neither a fluid limit nor a linear limit, and yet remain undamped and coherent well past the drive period. The work reported here is based on the Vlasov-Poisson model of plasma dynamics. Work supported by a Grant from the AFOSR.

  6. Prognostic Value of [18F]-Fluoromethylcholine Positron Emission Tomography/Computed Tomography Before Stereotactic Body Radiation Therapy for Oligometastatic Prostate Cancer.

    PubMed

    Cysouw, Matthijs; Bouman-Wammes, Esther; Hoekstra, Otto; van den Eertwegh, Alfons; Piet, Maartje; van Moorselaar, Jeroen; Boellaard, Ronald; Dahele, Max; Oprea-Lager, Daniela

    2018-06-01

    To investigate the predictive value of [18F]-fluoromethylcholine positron emission tomography/computed tomography (PET/CT)-derived parameters on progression-free survival (PFS) in oligometastatic prostate cancer patients treated with stereotactic body radiation therapy (SBRT). In [18F]-fluoromethylcholine PET/CT scans of 40 consecutive patients with ≤4 metachronous metastases treated with SBRT, we retrospectively measured the number of metastases, standardized uptake values (SUVmean, SUVmax, SUVpeak), metabolically active tumor volume (MATV), and total lesion choline uptake. Partial-volume correction was applied using the iterative deconvolution Lucy-Richardson algorithm. Thirty-seven lymph node and 13 bone metastases were treated with SBRT. Thirty-three patients (82.5%) had 1 lesion, 4 (10%) had 2 lesions, and 3 (7.5%) had 3 lesions. After a median follow-up of 32.6 months (interquartile range, 35.5 months), the median PFS was 11.5 months (95% confidence interval 8.4-14.6 months). Having more than a single metastasis was a significant prognostic factor (hazard ratio 2.74; P = .03), and there was a trend in risk of progression for large MATV (hazard ratio 1.86; P = .10). No SUV or total lesion choline uptake was significantly predictive for PFS, regardless of partial-volume correction. All PET semiquantitative parameters were significantly correlated with each other (P ≤ .013). The number of choline-avid metastases was a significant prognostic factor for progression after [18F]-fluoromethylcholine PET/CT-guided SBRT for recurrent oligometastatic prostate cancer, and there seemed to be a trend in risk of progression for patients with large MATVs. The lesional level of [18F]-fluoromethylcholine uptake was not prognostic for progression. Copyright © 2018 Elsevier Inc. All rights reserved.

  7. Improving Depth, Energy and Timing Estimation in PET Detectors with Deconvolution and Maximum Likelihood Pulse Shape Discrimination

    PubMed Central

    Berg, Eric; Roncali, Emilie; Hutchcroft, Will; Qi, Jinyi; Cherry, Simon R.

    2016-01-01

    In a scintillation detector, the light generated in the scintillator by a gamma interaction is converted to photoelectrons by a photodetector and produces a time-dependent waveform, the shape of which depends on the scintillator properties and the photodetector response. Several depth-of-interaction (DOI) encoding strategies have been developed that manipulate the scintillator’s temporal response along the crystal length and therefore require pulse shape discrimination techniques to differentiate waveform shapes. In this work, we demonstrate how maximum likelihood (ML) estimation methods can be applied to pulse shape discrimination to better estimate deposited energy, DOI and interaction time (for time-of-flight (TOF) PET) of a gamma ray in a scintillation detector. We developed likelihood models based on either the estimated detection times of individual photoelectrons or the number of photoelectrons in discrete time bins, and applied to two phosphor-coated crystals (LFS and LYSO) used in a previously developed TOF-DOI detector concept. Compared with conventional analytical methods, ML pulse shape discrimination improved DOI encoding by 27% for both crystals. Using the ML DOI estimate, we were able to counter depth-dependent changes in light collection inherent to long scintillator crystals and recover the energy resolution measured with fixed depth irradiation (~11.5% for both crystals). Lastly, we demonstrated how the Richardson-Lucy algorithm, an iterative, ML-based deconvolution technique, can be applied to the digitized waveforms to deconvolve the photodetector’s single photoelectron response and produce waveforms with a faster rising edge. After deconvolution and applying DOI and time-walk corrections, we demonstrated a 13% improvement in coincidence timing resolution (from 290 to 254 ps) with the LFS crystal and an 8% improvement (323 to 297 ps) with the LYSO crystal. PMID:27295658

  8. Improving Depth, Energy and Timing Estimation in PET Detectors with Deconvolution and Maximum Likelihood Pulse Shape Discrimination.

    PubMed

    Berg, Eric; Roncali, Emilie; Hutchcroft, Will; Qi, Jinyi; Cherry, Simon R

    2016-11-01

    In a scintillation detector, the light generated in the scintillator by a gamma interaction is converted to photoelectrons by a photodetector and produces a time-dependent waveform, the shape of which depends on the scintillator properties and the photodetector response. Several depth-of-interaction (DOI) encoding strategies have been developed that manipulate the scintillator's temporal response along the crystal length and therefore require pulse shape discrimination techniques to differentiate waveform shapes. In this work, we demonstrate how maximum likelihood (ML) estimation methods can be applied to pulse shape discrimination to better estimate deposited energy, DOI and interaction time (for time-of-flight (TOF) PET) of a gamma ray in a scintillation detector. We developed likelihood models based on either the estimated detection times of individual photoelectrons or the number of photoelectrons in discrete time bins, and applied to two phosphor-coated crystals (LFS and LYSO) used in a previously developed TOF-DOI detector concept. Compared with conventional analytical methods, ML pulse shape discrimination improved DOI encoding by 27% for both crystals. Using the ML DOI estimate, we were able to counter depth-dependent changes in light collection inherent to long scintillator crystals and recover the energy resolution measured with fixed depth irradiation (~11.5% for both crystals). Lastly, we demonstrated how the Richardson-Lucy algorithm, an iterative, ML-based deconvolution technique, can be applied to the digitized waveforms to deconvolve the photodetector's single photoelectron response and produce waveforms with a faster rising edge. After deconvolution and applying DOI and time-walk corrections, we demonstrated a 13% improvement in coincidence timing resolution (from 290 to 254 ps) with the LFS crystal and an 8% improvement (323 to 297 ps) with the LYSO crystal.

  9. Verifying the Presence of Low Levels of Neptunium in a Uranium Matrix with Electron Energy-Loss Spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buck, Edgar C.; Douglas, Matthew; Wittman, Richard S.

    2010-01-01

    This paper examines the problems associated with the analysis of low levels of neptunium (Np) in a uranium (U) matrix with electron energy-loss spectroscopy (EELS) on the transmission electron microscope (TEM). The detection of Np in a U matrix can be impeded by the occurrence of a plural scattering event from U (U-M5 + U-O4,5) that results in severe overlap on the Np-M5 edge at 3665 eV. Low levels (1600 - 6300 ppm) of Np can be detected in U solids by confirming the energy gap between the Np-M5 and Np-M4 edges is at 184 eV and showing that the M4/M5 ratio for the Np is smaller than that for U. The Richardson-Lucy deconvolution method was applied to energy-loss spectral images and was shown to increase the signal to noise. This method also improves the limits of detection for Np in a U matrix.

  10. An algorithm to estimate PBL heights from wind profiler data

    NASA Astrophysics Data System (ADS)

    Molod, A.; Salmun, H.

    2016-12-01

    An algorithm was developed to estimate planetary boundary layer (PBL) heights from hourly archived wind profiler data from the NOAA Profiler Network (NPN) sites located throughout the central United States for the period 1992-2012. The long period of record allows an analysis of climatological mean PBL heights as well as some estimates of year to year variability. Under clear conditions, summertime averaged hourly time series of PBL heights compare well with Richardson-number based estimates at the few NPN stations with hourly temperature measurements. Comparisons with clear sky MERRA estimates show that the wind profiler (WP) and the Richardson number based PBL heights are lower by approximately 250-500 m. The geographical distribution of daily maximum WP PBL heights corresponds well with the expected distribution based on patterns of surface temperature and soil moisture. Wind profiler PBL heights were also estimated under mostly cloudy conditions, but the WP estimates show a smaller clear-cloudy condition difference than either of the other two PBL height estimates. The algorithm presented here is shown to provide a reliable summer, fall and spring climatology of daytime hourly PBL heights throughout the central United States. The reliability of the algorithm has prompted its use to obtain hourly PBL heights from other archived wind profiler data located throughout the world.
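
    For context, the Richardson-number based heights used for comparison in records like this one are commonly derived from the bulk Richardson number of a profile, taking the PBL top as the lowest level where it exceeds a critical value (often around 0.25); the sketch below follows those common assumptions and is not the backscatter-based wind profiler algorithm itself.

        import numpy as np

        def bulk_richardson_pbl_height(z, theta_v, u, v, ri_crit=0.25):
            """PBL height as the lowest level where the bulk Richardson number
            Ri_b(z) = g*(theta_v(z)-theta_v(z0))*(z-z0) / (theta_v(z0)*(u^2+v^2))
            exceeds a critical value (commonly ~0.25)."""
            g = 9.81                                     # m s^-2
            denom = theta_v[0] * (u**2 + v**2)
            ri_b = g * (theta_v - theta_v[0]) * (z - z[0]) / np.maximum(denom, 1e-6)
            above = np.nonzero(ri_b > ri_crit)[0]
            return z[above[0]] if above.size else np.nan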

  11. 33 CFR 110.73c. - Okeechobee Waterway, St. Lucie River, Stuart, FL.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false Okeechobee Waterway, St. Lucie..., St. Lucie River, Stuart, FL. The following is a special anchorage area: Beginning on the Okeechobee Intracoastal Waterway between mile marker 7 and 8 on the St. Lucie River, bounded by a line beginning at 27°12...

  12. Estimating Planetary Boundary Layer Heights from NOAA Profiler Network Wind Profiler Data

    NASA Technical Reports Server (NTRS)

    Molod, Andrea M.; Salmun, H.; Dempsey, M

    2015-01-01

    An algorithm was developed to estimate planetary boundary layer (PBL) heights from hourly archived wind profiler data from the NOAA Profiler Network (NPN) sites located throughout the central United States. Unlike previous studies, the present algorithm has been applied to a long record of publicly available wind profiler signal backscatter data. Under clear conditions, summertime averaged hourly time series of PBL heights compare well with Richardson-number based estimates at the few NPN stations with hourly temperature measurements. Comparisons with clear sky reanalysis based estimates show that the wind profiler PBL heights are lower by approximately 250-500 m. The geographical distribution of daily maximum PBL heights corresponds well with the expected distribution based on patterns of surface temperature and soil moisture. Wind profiler PBL heights were also estimated under mostly cloudy conditions, and are generally higher than both the Richardson number based and reanalysis PBL heights, resulting in a smaller clear-cloudy condition difference. The algorithm presented here was shown to provide a reliable summertime climatology of daytime hourly PBL heights throughout the central United States.

  13. 78 FR 7670 - Safety Zone; Indian Street Bridge Construction, St. Lucie Canal, Palm City, FL

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-04

    ... 1625-AA00 Safety Zone; Indian Street Bridge Construction, St. Lucie Canal, Palm City, FL AGENCY: Coast... zone on the St. Lucie Canal, Palm City, Florida to provide for the safety of life and vessels on a... on a narrow waterway. The temporary safety zone encompasses all waters of the St. Lucie Canal in the...

  14. 77 FR 13102 - Availability of the Final Environmental Impact Statement for the St. Lucie South Beach and Dune...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-05

    ... Environmental Impact Statement for the St. Lucie South Beach and Dune Restoration Project located in St. Lucie... Public notice was posted in a St. Lucie County newspaper, and mailed to current stakeholder lists with... comment period ending 5 p.m. July 18, 2011. A public comment meeting was held June 29, 2011 at the St...

  15. LucY: A Versatile New Fluorescent Reporter Protein

    PubMed Central

    Auldridge, Michele E.; Franz, Laura P.; Bingman, Craig A.; Yennamalli, Ragothaman M.; Phillips, George N.; Mead, David; Steinmetz, Eric J.

    2015-01-01

    We report on the discovery, isolation, and use of a novel yellow fluorescent protein. Lucigen Yellow (LucY) binds one FAD molecule within its core, thus shielding it from water and maintaining its structure so that fluorescence is 10-fold higher than freely soluble FAD. LucY displays excitation and emission spectra characteristic of FAD, with 3 excitation peaks at 276nm, 377nm, and 460nm and a single emission peak at 530nm. These excitation and emission maxima provide the large Stokes shift beneficial to fluorescence experimentation. LucY belongs to the MurB family of UDP-N-acetylenolpyruvylglucosamine reductases. The high resolution crystal structure shows that in contrast to other structurally resolved MurB enzymes, LucY does not contain a potentially quenching aromatic residue near the FAD isoalloxazine ring, which may explain its increased fluorescence over related proteins. Using E. coli as a system in which to develop LucY as a reporter, we show that it is amenable to circular permutation and use as a reporter of protein-protein interaction. Fragmentation between its distinct domains renders LucY non-fluorescent, but fluorescence can be partially restored by fusion of the fragments to interacting protein domains. Thus, LucY may find application in Protein-fragment Complementation Assays for evaluating protein-protein interactions. PMID:25906065

  16. LucY: A Versatile New Fluorescent Reporter Protein.

    PubMed

    Auldridge, Michele E; Cao, Hongnan; Sen, Saurabh; Franz, Laura P; Bingman, Craig A; Yennamalli, Ragothaman M; Phillips, George N; Mead, David; Steinmetz, Eric J

    2015-01-01

    We report on the discovery, isolation, and use of a novel yellow fluorescent protein. Lucigen Yellow (LucY) binds one FAD molecule within its core, thus shielding it from water and maintaining its structure so that fluorescence is 10-fold higher than freely soluble FAD. LucY displays excitation and emission spectra characteristic of FAD, with 3 excitation peaks at 276 nm, 377 nm, and 460 nm and a single emission peak at 530 nm. These excitation and emission maxima provide the large Stokes shift beneficial to fluorescence experimentation. LucY belongs to the MurB family of UDP-N-acetylenolpyruvylglucosamine reductases. The high resolution crystal structure shows that in contrast to other structurally resolved MurB enzymes, LucY does not contain a potentially quenching aromatic residue near the FAD isoalloxazine ring, which may explain its increased fluorescence over related proteins. Using E. coli as a system in which to develop LucY as a reporter, we show that it is amenable to circular permutation and use as a reporter of protein-protein interaction. Fragmentation between its distinct domains renders LucY non-fluorescent, but fluorescence can be partially restored by fusion of the fragments to interacting protein domains. Thus, LucY may find application in Protein-fragment Complementation Assays for evaluating protein-protein interactions.

  17. LucY: A versatile new fluorescent reporter protein

    DOE PAGES

    Auldridge, Michele E.; Cao, Hongnan; Sen, Saurabh; ...

    2015-04-23

    We report on the discovery, isolation, and use of a novel yellow fluorescent protein. Lucigen Yellow (LucY) binds one FAD molecule within its core, thus shielding it from water and maintaining its structure so that fluorescence is 10-fold higher than freely soluble FAD. LucY displays excitation and emission spectra characteristic of FAD, with 3 excitation peaks at 276 nm, 377 nm, and 460 nm and a single emission peak at 530 nm. These excitation and emission maxima provide the large Stokes shift beneficial to fluorescence experimentation. LucY belongs to the MurB family of UDP-N-acetylenolpyruvylglucosamine reductases. The high resolution crystal structure shows that in contrast to other structurally resolved MurB enzymes, LucY does not contain a potentially quenching aromatic residue near the FAD isoalloxazine ring, which may explain its increased fluorescence over related proteins. Using E. coli as a system in which to develop LucY as a reporter, we show that it is amenable to circular permutation and use as a reporter of protein-protein interaction. Fragmentation between its distinct domains renders LucY non-fluorescent, but fluorescence can be partially restored by fusion of the fragments to interacting protein domains. Thus, LucY may find application in Protein-fragment Complementation Assays for evaluating protein-protein interactions.

  18. An experimental study on the near-source region of lazy turbulent plumes

    NASA Astrophysics Data System (ADS)

    Ciriello, Francesco; Hunt, Gary R.

    2017-11-01

    The near-source region of a `lazy' turbulent buoyant plume issuing from a circular source is examined for source Richardson numbers in the range of 10¹ to 10⁷. New data is acquired for the radial contraction and streamwise variation of volume flux through an experimental programme of dye visualisations and particle image velocimetry. This data reveals the limited applicability of traditional entrainment laws used in integral modelling approaches for the description of the near-source region for these source Richardson numbers. A revised entrainment function is proposed, based on which we introduce a classification of plume behaviour whereby the degree of `laziness' may be expressed in terms of the excess dilution that occurs compared to a `pure' constant Richardson number plume. The increased entrainment measured in lazy plumes is attributed to Rayleigh-Taylor instabilities developing along the contraction of the plume which promote the additional engulfment of ambient fluid into the plume. This work was funded by an EPSRC Industrial Case Award sponsored by Dyson Technology Ltd. Special thanks go to the members of the Dyson Environmental Control Group that regularly visit us in Cambridge for discussions about our work.

  19. BP Piscium: its flaring disc imaged with SPHERE/ZIMPOL★

    NASA Astrophysics Data System (ADS)

    de Boer, J.; Girard, J. H.; Canovas, H.; Min, M.; Sitko, M.; Ginski, C.; Jeffers, S. V.; Mawet, D.; Milli, J.; Rodenhuis, M.; Snik, F.; Keller, C. U.

    2017-03-01

    Whether BP Piscium (BP Psc) is either a pre-main sequence T Tauri star at d ≈ 80 pc, or a post-main sequence G giant at d ≈ 300 pc is still not clear. As a first-ascent giant, it is the first to be observed with a molecular and dust disc. Alternatively, BP Psc would be among the nearest T Tauri stars with a protoplanetary disc (PPD). We investigate whether the disc geometry resembles typical PPDs, by comparing polarimetric images with radiative transfer models. Our Very Large Telescope/Spectro-Polarimetric High-contrast Exoplanet REsearch (SPHERE)/Zurich IMaging Polarimeter (ZIMPOL) observations allow us to perform polarimetric differential imaging, reference star differential imaging, and Richardson-Lucy deconvolution. We present the first visible light polarization and intensity images of the disc of BP Psc. Our deconvolution confirms the disc shape as detected before, mainly showing the southern side of the disc. In polarized intensity the disc is imaged at larger detail and also shows the northern side, giving it the typical shape of high-inclination flared discs. We explain the observed disc features by retrieving the large-scale geometry with MCMAX radiative transfer modelling, which yields a strongly flared model, atypical for discs of T Tauri stars.

  20. Supermassive Black Holes with High Accretion Rates in Active Galactic Nuclei. VI. Velocity-resolved Reverberation Mapping of the Hβ Line

    NASA Astrophysics Data System (ADS)

    Du, Pu; Lu, Kai-Xing; Hu, Chen; Qiu, Jie; Li, Yan-Rong; Huang, Ying-Ke; Wang, Fang; Bai, Jin-Ming; Bian, Wei-Hao; Yuan, Ye-Fei; Ho, Luis C.; Wang, Jian-Min; SEAMBH Collaboration

    2016-03-01

    In the sixth of a series of papers reporting on a large reverberation mapping (RM) campaign of active galactic nuclei (AGNs) with high accretion rates, we present velocity-resolved time lags of Hβ emission lines for nine objects observed in the campaign during 2012-2013. In order to correct the line broadening caused by seeing and instruments before analyzing the velocity-resolved RM, we adopt the Richardson-Lucy deconvolution to reconstruct their Hβ profiles. The validity and effectiveness of the deconvolution are checked using Monte Carlo simulation. Five among the nine objects show clear dependence of the time delay on velocity. Mrk 335 and Mrk 486 show signatures of gas inflow whereas the clouds in the broad-line regions (BLRs) of Mrk 142 and MCG +06-26-012 tend to be radial outflowing. Mrk 1044 is consistent with having virialized motions. The lags of the remaining four are not velocity-resolvable. The velocity-resolved RM of super-Eddington accreting massive black holes (SEAMBHs) shows that they have diverse kinematics in their BLRs. Comparing with the AGNs with sub-Eddington accretion rates, we do not find significant differences in the BLR kinematics of SEAMBHs.

  1. LUCY: A New Path to Diversity

    ERIC Educational Resources Information Center

    Marrah, Arleezah; Mills, Roxanne

    2011-01-01

    This article describes the Librarianship Upgrades for Children and Youth Services (LUCY), a multifaceted multicultural continuing education program for librarians developed by the Library and Information Science Program at Old Dominion University. The Institute of Museum and Library Services (IMLS) funds LUCY through the Laura Bush 21st century…

  2. 75 FR 79293 - Amendment and Revocation of Class E Airspace; Vero Beach, FL

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-20

    ... geographic coordinates of the St. Lucie County International Airport to aid in the navigation of our National... the National Aeronautical Navigation Services to update the geographic coordinates of the St. Lucie.... Also, this action will update the geographic coordinates of the St. Lucie County International Airport...

  3. 75 FR 75941 - Proposed Flood Elevation Determinations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-07

    ... mile None +16 upstream of Peterson Road. Howard Creek Approximately 1,635 None +6 City of Port St... Road........ None +19 City of Port St. Lucie, Unincorporated Areas of St. Lucie County. [[Page 75943... 34950. City of Port St. Lucie Maps are available for inspection at City Hall, 121 Southwest Port St...

  4. Oblique view of southwest and southeast side of northwest machinery ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Oblique view of southwest and southeast side of northwest machinery house and hydro-electric power house, concrete pylon at upstream entrance to lock in foreground, view towards north - St. Lucie Canal, St. Lucie Lock No. 1, St. Lucie, Cross State Canal, Okeechobee Intracoastal Waterway, Stuart, Martin County, FL

  5. Estimation of primordial spectrum with post-WMAP 3-year data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shafieloo, Arman; Souradeep, Tarun

    2008-07-15

    In this paper we implement an improved (error-sensitive) Richardson-Lucy deconvolution algorithm on the measured angular power spectrum from the Wilkinson Microwave Anisotropy Probe (WMAP) 3 year data to determine the primordial power spectrum assuming different points in the cosmological parameter space for a flat ΛCDM cosmological model. We also present the preliminary results of the cosmological parameter estimation by assuming a free form of the primordial spectrum, for a reasonably large volume of the parameter space. The recovered spectrum for a considerably large number of the points in the cosmological parameter space has a likelihood far better than a 'best fit' power law spectrum up to Δχ²_eff ≈ -30. We use discrete wavelet transform (DWT) for smoothing the raw recovered spectrum from the binned data. The results obtained here reconfirm and sharpen the conclusion drawn from our previous analysis of the WMAP 1st year data. A sharp cut off around the horizon scale and a bump after the horizon scale seem to be a common feature for all of these reconstructed primordial spectra. We have shown that although the WMAP 3 year data prefers a lower value of matter density for a power law form of the primordial spectrum, for a free form of the spectrum, we can get a very good likelihood to the data for higher values of matter density. We have also shown that even a flat cold dark matter model, allowing a free form of the primordial spectrum, can give a very high likelihood fit to the data. Theoretical interpretation of the results is open to the cosmology community. However, this work provides strong evidence that the data retains discriminatory power in the cosmological parameter space even when there is full freedom in choosing the primordial spectrum.
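    For reference, the deconvolution referred to here builds on the standard Richardson-Lucy multiplicative update. Writing the measured band powers as d_i, the kernel relating the primordial power spectrum to the angular power spectrum as K_ij, and the k-th estimate of the primordial spectrum as f_j^(k), the unmodified iteration reads (the error-sensitive variant used in the paper adds noise weighting not shown here):

      f_j^{(k+1)} = f_j^{(k)} \frac{\sum_i K_{ij}\, d_i / (K f^{(k)})_i}{\sum_i K_{ij}},
      \qquad (K f^{(k)})_i = \sum_{j'} K_{ij'} f_{j'}^{(k)}.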

  6. Primordial power spectrum from Planck

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hazra, Dhiraj Kumar; Shafieloo, Arman; Souradeep, Tarun, E-mail: dhiraj@apctp.org, E-mail: arman@apctp.org, E-mail: tarun@iucaa.ernet.in

    2014-11-01

    Using a modified Richardson-Lucy algorithm we reconstruct the primordial power spectrum (PPS) from Planck Cosmic Microwave Background (CMB) temperature anisotropy data. In our analysis we use different combinations of angular power spectra from Planck to reconstruct the shape of the primordial power spectrum and locate possible features. Performing an extensive error analysis, we find that the dip near ℓ ∼ 750–850 represents the most prominent feature in the data. The feature near ℓ ∼ 1800–2000 is detectable with high confidence only in the 217 GHz spectrum and is apparently a consequence of a small systematic as described in the revised Planck 2013 papers. Fixing the background cosmological parameters and the foreground nuisance parameters to their best fit baseline values, we report that the best fit power law primordial power spectrum is consistent with the reconstructed form of the PPS at 2σ C.L. of the estimated errors (apart from the local features mentioned above). As a consistency test, we find that the reconstructed primordial power spectrum from Planck temperature data can also substantially improve the fit to the WMAP-9 angular power spectrum data (with respect to the power-law form of the PPS), allowing an overall amplitude shift of ∼ 2.5%. In this context, the low-ℓ and 100 GHz spectra from Planck, which have proper overlap in multipole range with the WMAP data, are found to be completely consistent with WMAP-9 (allowing an amplitude shift). As another important result of our analysis, we report evidence of gravitational lensing through the reconstruction analysis. Finally, we present two smooth forms of the PPS containing only the important features. These smooth forms of the PPS can provide significant improvements in fitting the data (with respect to the power law PPS) and can be helpful in giving hints for inflationary model building.

  7. The Day Autherine Lucy Dared To Integrate the University of Alabama.

    ERIC Educational Resources Information Center

    McWhorter, Diane

    2001-01-01

    Responding to a federal court order to integrate, the University of Alabama admitted Autherine Lucy in 1956. On campus, she was pelted with eggs and threatened with death. After staying locked in a university hall for 3 hours, she was taken away by the police. The following day, trustees suspended Lucy "for her own safety." (SM)

  8. Breeding biology of Lucy's Warbler in southwestern New Mexico

    Treesearch

    Scott H. Stoleson; Roland S. Shook; Deborah M. Finch

    2000-01-01

    We found Lucy's Warblers breeding abundantly in mid-elevation broadleaf riparian forests in the lower Gila River valley of southwestern New Mexico. They arrived en masse in the third week of March. Patterns of singing suggested that Lucy's Warblers might raise two broods. Few were heard or seen after late July. Estimated population densities ranged from 1. 7...

  9. Distant view from downstream of lock with southeast machinery house, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Distant view from downstream of lock with southeast machinery house, SF 109, and timber guide wall on left, exterior view of closed lower lock gates and hydro-electric power house and dam in background, view towards west - St. Lucie Canal, St. Lucie Lock No. 1, St. Lucie, Cross State Canal, Okeechobee Intracoastal Waterway, Stuart, Martin County, FL

  10. Lagrangian statistics of turbulent dispersion from 8192³ direct numerical simulation of isotropic turbulence

    NASA Astrophysics Data System (ADS)

    Buaria, Dhawal; Yeung, P. K.; Sawford, B. L.

    2016-11-01

    An efficient massively parallel algorithm has allowed us to obtain the trajectories of 300 million fluid particles in an 8192³ simulation of isotropic turbulence at Taylor-scale Reynolds number 1300. Conditional single-particle statistics are used to investigate the effect of extreme events in dissipation and enstrophy on turbulent dispersion. The statistics of pairs and tetrads, both forward and backward in time, are obtained via post-processing of single-particle trajectories. For tetrads, since memory of shape is known to be short, we focus, for convenience, on samples which are initially regular, with all sides of comparable length. The statistics of tetrad size show similar behavior as the two-particle relative dispersion, i.e., stronger backward dispersion at intermediate times with larger backward Richardson constant. In contrast, the statistics of tetrad shape show more robust inertial range scaling, in both forward and backward frames. However, the distortion of shape is stronger for backward dispersion. Our results suggest that the Reynolds number reached in this work is sufficient to settle some long-standing questions concerning Lagrangian scale similarity. Supported by NSF Grants CBET-1235906 and ACI-1036170.
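    For context (this is standard background rather than a statement from the abstract), the Richardson constant mentioned above is the prefactor g in the Richardson-Obukhov law for pair separation in the inertial range,

      \langle |\mathbf{r}(t)|^2 \rangle \simeq g\, \varepsilon\, t^3,

    where r(t) is the pair separation vector and ε the mean energy dissipation rate; forward and backward dispersion are characterised by different values of g, and the stronger backward dispersion reported here corresponds to a larger backward constant.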

  11. Spectral multigrid methods for the solution of homogeneous turbulence problems

    NASA Technical Reports Server (NTRS)

    Erlebacher, G.; Zang, T. A.; Hussaini, M. Y.

    1987-01-01

    New three-dimensional spectral multigrid algorithms are analyzed and implemented to solve the variable coefficient Helmholtz equation. Periodicity is assumed in all three directions which leads to a Fourier collocation representation. Convergence rates are theoretically predicted and confirmed through numerical tests. Residual averaging results in a spectral radius of 0.2 for the variable coefficient Poisson equation. In general, non-stationary Richardson must be used for the Helmholtz equation. The algorithms developed are applied to the large-eddy simulation of incompressible isotropic turbulence.
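    As a point of reference for the smoother mentioned above, the following is a minimal sketch of a (non-stationary) Richardson iteration for a generic linear system A x = b; it is not the spectral multigrid code of the paper, and the cyclically varying relaxation parameters are arbitrary example values.

      # Hedged sketch: Richardson iteration x <- x + omega_k (b - A x),
      # cycling through a small set of relaxation parameters.
      import numpy as np

      def richardson(A, b, omegas, num_sweeps, x0=None):
          x = np.zeros_like(b) if x0 is None else x0.copy()
          for k in range(num_sweeps):
              omega = omegas[k % len(omegas)]
              x = x + omega * (b - A @ x)
          return x

      # Example: a small symmetric positive definite (1D Poisson-like) system.
      n = 50
      A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
      b = np.ones(n)
      x = richardson(A, b, omegas=[0.45, 0.3, 0.2], num_sweeps=500)
      print(np.linalg.norm(b - A @ x))   # residual decreases with the sweeps

    In a multigrid setting only a few such sweeps are applied per level, with the parameters chosen to damp the high-frequency error components.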

  12. Multilingual Manipulation and Humor in "I Love Lucy"

    ERIC Educational Resources Information Center

    Kirschen, Bryan

    2013-01-01

    "I Love Lucy" is considered to have been one of the most humorous television programs in the United States as early as the 1950s. This paper explores the use of language by the protagonists, Lucy and Ricky Ricardo, in order to understand the source of the program's humor. Linguistic analysis of the Ricardos' speech is applied,…

  13. A novel partial volume effects correction technique integrating deconvolution associated with denoising within an iterative PET image reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merlin, Thibaut, E-mail: thibaut.merlin@telecom-bretagne.eu; Visvikis, Dimitris; Fernandez, Philippe

    2015-02-15

    Purpose: Partial volume effect (PVE) plays an important role in both qualitative and quantitative PET image accuracy, especially for small structures. A previously proposed voxelwise PVE correction method applied on PET reconstructed images involves the use of Lucy–Richardson deconvolution incorporating wavelet-based denoising to limit the associated propagation of noise. The aim of this study is to incorporate the deconvolution, coupled with the denoising step, directly inside the iterative reconstruction process to further improve PVE correction. Methods: The list-mode ordered subset expectation maximization (OSEM) algorithm has been modified accordingly with the application of the Lucy–Richardson deconvolution algorithm to the current estimation of the image, at each reconstruction iteration. Acquisitions of the NEMA NU2-2001 IQ phantom were performed on a GE DRX PET/CT system to study the impact of incorporating the deconvolution inside the reconstruction [with and without the point spread function (PSF) model] in comparison to its application postreconstruction and to standard iterative reconstruction incorporating the PSF model. The impact of the denoising step was also evaluated. Images were semiquantitatively assessed by studying the trade-off between the intensity recovery and the noise level in the background estimated as relative standard deviation. Qualitative assessments of the developed methods were additionally performed on clinical cases. Results: Incorporating the deconvolution without denoising within the reconstruction achieved superior intensity recovery in comparison to both standard OSEM reconstruction integrating a PSF model and application of the deconvolution algorithm in a postreconstruction process. The addition of the denoising step made it possible to limit the SNR degradation while preserving the intensity recovery. Conclusions: This study demonstrates the feasibility of incorporating the Lucy–Richardson deconvolution associated with a wavelet-based denoising in the reconstruction process to better correct for PVE. Future work includes further evaluations of the proposed method on clinical datasets and the use of improved PSF models.
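    A structural sketch of the idea may help: each pass of a multiplicative, EM-type reconstruction update is followed by a Richardson-Lucy deconvolution of the current image estimate. The snippet below is a toy illustration only; a Gaussian blur stands in for the scanner system matrix and resolution kernel, and nothing here reproduces the authors' list-mode OSEM implementation or their wavelet denoising step.

      # Hedged sketch: interleaving Richardson-Lucy deconvolution with a
      # generic multiplicative EM-style update (toy forward model).
      import numpy as np
      from scipy.ndimage import gaussian_filter
      from skimage.restoration import richardson_lucy

      rng = np.random.default_rng(1)
      truth = np.zeros((64, 64)); truth[24:40, 24:40] = 1.0       # toy hot region
      forward = lambda img: gaussian_filter(img, sigma=2.0)       # toy system blur
      data = rng.poisson(200.0 * forward(truth)) / 200.0          # noisy measurement

      psf = np.zeros((15, 15)); psf[7, 7] = 1.0
      psf = gaussian_filter(psf, sigma=2.0); psf /= psf.sum()     # resolution kernel

      estimate = np.ones_like(data)
      for it in range(10):
          ratio = data / np.maximum(forward(estimate), 1e-8)      # EM-style data ratio
          estimate *= forward(ratio)                              # adjoint of a symmetric blur is the same blur
          # Deconvolve the current estimate (the paper couples this step with
          # wavelet denoising to limit noise amplification).
          estimate = richardson_lucy(np.clip(estimate, 0.0, None), psf,
                                     num_iter=3, clip=False)
      print(float(estimate[28:36, 28:36].mean()))                 # recovered intensity in the hot region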

  14. The genital prolapse of Australopithecus Lucy?

    PubMed

    Chene, Gautier; Lamblin, Gery; Lebail-Carval, Karine; Chabert, Philippe; Marès, Pierre; Coppens, Yves; Mellier, Georges

    2015-07-01

    The female bony pelvis has to fulfil opposing functions: it has to be sufficiently closed to support the pelvic viscera in the upright position, while remaining sufficiently open to allow vaginal delivery. We aim to give an evolutionary perspective and the possible evolution of the bony pelvis from Lucy to the modern female with the implications in terms of genital prolapse. Thirteen pelvimetric measurements were performed on 178 bony pelves: 1 fossil pelvis from Australopithecus Lucy, 128 female Caucasian modern adult pelves and 49 female Catarrhine pelves (29 gorillas and 20 chimpanzees). Lucy's pelvis shape was the most transversely oval, short and broad, termed platypelloid. Modern female pelves were transversely oval only at the inlet. A protruding ischial spine, fairly small ischial tuberosities and a sacral concavity made Lucy closer to Homo sapiens and less like the great apes. In the last group, pelvic planes were anteroposteriorly oval, except in the gorilla, where the outlet was round or slightly transversely oval. The subpubic angle was narrowest in Lucy, whereas it was greater than 90° in the great apes. The female pelvis is involved in both visceral support and parturition and represents a compromise. The narrower pelvis of Australopithecus Lucy provided protection against genital prolapse, but resulted in complex obstetrical mechanics. From an evolutionary perspective, the pelvis of Homo sapiens became modified to make parturition easier, but increased the risk of genital prolapse: the ilia became wide open laterally and the sacrum broadened with a shorter distance between the sacroiliac and coxofemoral joints.

  15. Deep Search for Satellites Around the Lucy Mission Targets

    NASA Astrophysics Data System (ADS)

    Noll, Keith

    2017-08-01

    By performing the first deep search for Trojan satellites with HST we will obtain unique constraints on satellite-forming processes in this population. We have selected the targets from NASA's Lucy mission because they represent a taxonomically and physically diverse set of targets that allow intercomparisons from a small survey. Also, by searching now to identify any orbiting material around the Lucy targets, it will be possible to impact hardware decisions and plan for maximum scientific return from the mission. This search also is a necessary step to assure mission safety as the Lucy spacecraft will fly within 1000 km of the targets, well within the region where stable orbits can exist.

  16. The MITLL NIST LRE 2015 Language Recognition System

    DTIC Science & Technology

    2016-05-06

    The MITLL NIST LRE 2015 Language Recognition System Pedro Torres-Carrasquillo, Najim Dehak*, Elizabeth Godoy, Douglas Reynolds, Fred Richardson...most recent MIT Lincoln Laboratory language recognition system developed for the NIST 2015 Language Recognition Evaluation (LRE). The submission...Task The National Institute of Science and Technology ( NIST ) has conducted formal evaluations of language detection algorithms since 1994. In

  17. The MITLL NIST LRE 2015 Language Recognition system

    DTIC Science & Technology

    2016-02-05

    The MITLL NIST LRE 2015 Language Recognition System Pedro Torres-Carrasquillo, Najim Dehak*, Elizabeth Godoy, Douglas Reynolds, Fred Richardson...recent MIT Lincoln Laboratory language recognition system developed for the NIST 2015 Language Recognition Evaluation (LRE). The submission features a...National Institute of Science and Technology ( NIST ) has conducted formal evaluations of language detection algorithms since 1994. In previous

  18. Robust geostatistical analysis of spatial data

    NASA Astrophysics Data System (ADS)

    Papritz, Andreas; Künsch, Hans Rudolf; Schwierz, Cornelia; Stahel, Werner A.

    2013-04-01

    Most of the geostatistical software tools rely on non-robust algorithms. This is unfortunate, because outlying observations are rather the rule than the exception, in particular in environmental data sets. Outliers affect the modelling of the large-scale spatial trend, the estimation of the spatial dependence of the residual variation and the predictions by kriging. Identifying outliers manually is cumbersome and requires expertise because one needs parameter estimates to decide which observation is a potential outlier. Moreover, inference after the rejection of some observations is problematic. A better approach is to use robust algorithms that automatically prevent outlying observations from having undue influence. Former studies on robust geostatistics focused on robust estimation of the sample variogram and ordinary kriging without external drift. Furthermore, Richardson and Welsh (1995) proposed a robustified version of (restricted) maximum likelihood ([RE]ML) estimation for the variance components of a linear mixed model, which was later used by Marchant and Lark (2007) for robust REML estimation of the variogram. We propose here a novel method for robust REML estimation of the variogram of a Gaussian random field that is possibly contaminated by independent errors from a long-tailed distribution. It is based on robustification of estimating equations for the Gaussian REML estimation (Welsh and Richardson, 1997). Besides robust estimates of the parameters of the external drift and of the variogram, the method also provides standard errors for the estimated parameters, robustified kriging predictions at both sampled and non-sampled locations and kriging variances. Apart from presenting our modelling framework, we shall present selected simulation results by which we explored the properties of the new method. This will be complemented by an analysis of a data set on heavy metal contamination of the soil in the vicinity of a metal smelter. Marchant, B.P. and Lark, R.M. 2007. Robust estimation of the variogram by residual maximum likelihood. Geoderma 140: 62-72. Richardson, A.M. and Welsh, A.H. 1995. Robust restricted maximum likelihood in mixed linear models. Biometrics 51: 1429-1439. Welsh, A.H. and Richardson, A.M. 1997. Approaches to the robust estimation of mixed models. In: Handbook of Statistics Vol. 15, Elsevier, pp. 343-384.

  19. Improving space debris detection in GEO ring using image deconvolution

    NASA Astrophysics Data System (ADS)

    Núñez, Jorge; Núñez, Anna; Montojo, Francisco Javier; Condominas, Marta

    2015-07-01

    In this paper we present a method based on image deconvolution to improve the detection of space debris, mainly in the geostationary ring. Among the deconvolution methods we chose the iterative Richardson-Lucy (R-L), as the method that achieves better goals with a reasonable amount of computation. For this work, we used two sets of real 4096 × 4096 pixel test images obtained with the Telescope Fabra-ROA at Montsec (TFRM). Using the first set of data, we establish the optimal number of iterations as 7, and applying the R-L method with 7 iterations to the images, we show that the astrometric accuracy does not vary significantly while the limiting magnitude of the deconvolved images increases significantly compared to the original ones. The increase is on average about 1.0 magnitude, which means that objects up to 2.5 times fainter can be detected after deconvolution. The application of the method to the second set of test images, which includes several faint objects, shows that, after deconvolution, up to four previously undetected faint objects are detected in a single frame. Finally, we carried out a study of some economic aspects of applying the deconvolution method, showing that an important economic impact can be envisaged.
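    The factor quoted above follows directly from the magnitude scale: a gain of Δm = 1.0 magnitude corresponds to a flux ratio of 10^{0.4 Δm} = 10^{0.4} ≈ 2.512, i.e. objects roughly 2.5 times fainter become detectable after deconvolution.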

  20. Upconversion microparticles as time-resolved luminescent probes for multiphoton microscopy: desired signal extraction from the streaking effect

    NASA Astrophysics Data System (ADS)

    Pominova, Daria V.; Ryabova, Anastasia V.; Grachev, Pavel V.; Romanishkin, Igor D.; Kuznetsov, Sergei V.; Rozhnova, Julia A.; Yasyrkina, Daria S.; Fedorov, Pavel P.; Loschenov, Victor B.

    2016-09-01

    Great interest in upconversion nanoparticles exists due to their high efficiency under multiphoton excitation. However, when these particles are used in scanning microscopy, the upconversion luminescence causes a streaking effect due to the long lifetime. This article describes a method of upconversion microparticle luminescence lifetime determination with the help of a modified Lucy-Richardson deconvolution of laser scanning microscope (LSM) images obtained under near-IR excitation using nondescanned detectors. Determination of the upconversion luminescence intensity and decay time of separate microparticles was done by approximating the intensity profile along the image fast-scan axis. We studied upconversion submicroparticles based on fluoride hosts doped with Yb3+-Er3+ and Yb3+-Tm3+ rare earth ion pairs, and the characteristic decay times were 0.1 to 1.5 ms. We also compared the results of LSM measurements with the photon counting method results; the spread of values was about 13% and was associated with the approximation error. Data obtained from live cells showed the possibility of distinguishing the position of upconversion submicroparticles inside and outside the cells by the difference of their lifetime. The proposed technique allows using the upconversion microparticles without shells as probes for the presence of OH- ions and CO2 molecules.

  1. Application of a deconvolution method for identifying burst amplitudes and arrival times in Alcator C-Mod far SOL plasma fluctuations

    NASA Astrophysics Data System (ADS)

    Theodorsen, Audun; Garcia, Odd Erik; Kube, Ralph; Labombard, Brian; Terry, Jim

    2017-10-01

    In the far scrape-off layer (SOL), radial motion of filamentary structures leads to excess transport of particles and heat. Amplitudes and arrival times of these filaments have previously been studied by conditional averaging in single-point measurements from Langmuir Probes and Gas Puff Imaging (GPI). Conditional averaging can be problematic: the cutoff for large amplitudes is mostly chosen by convention; the conditional windows used may influence the arrival time distribution; and the amplitudes cannot be separated from a background. Previous work has shown that SOL fluctuations are well described by a stochastic model consisting of a super-position of pulses with fixed shape and randomly distributed amplitudes and arrival times. The model can be formulated as a pulse shape convolved with a train of delta pulses. By choosing a pulse shape consistent with the power spectrum of the fluctuation time series, Richardson-Lucy deconvolution can be used to recover the underlying amplitudes and arrival times of the delta pulses. We apply this technique to both L and H-mode GPI data from the Alcator C-Mod tokamak. The pulse arrival times are shown to be uncorrelated and uniformly distributed, consistent with a Poisson process, and the amplitude distribution has an exponential tail.
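    The model inversion described above can be sketched in a few lines: a synthetic signal is built as a train of delta pulses convolved with a fixed one-sided exponential pulse shape, and Richardson-Lucy deconvolution with that pulse shape recovers spikes near the original amplitudes and arrival times. The pulse duration, event rate, amplitude scale and noise level below are arbitrary illustration values, not Alcator C-Mod parameters, and scikit-image's generic routine stands in for the implementation actually used.

      # Hedged sketch: pulse train = delta pulses convolved with a fixed pulse
      # shape; Richardson-Lucy deconvolution recovers the underlying spikes.
      import numpy as np
      from skimage.restoration import richardson_lucy

      rng = np.random.default_rng(2)
      dt, n = 0.1, 4000
      arrivals = rng.integers(0, n, size=80)                  # uniformly distributed arrival indices
      amplitudes = rng.exponential(1.0, size=80)               # exponentially distributed amplitudes
      train = np.zeros(n)
      np.add.at(train, arrivals, amplitudes)

      shape = np.exp(-np.arange(200) * dt / 2.0)               # one-sided exponential pulse
      shape /= shape.sum()
      signal = np.convolve(train, shape, mode="full")[:n]
      signal += rng.normal(0.0, 0.002, n)                      # weak measurement noise

      # Pad the causal pulse shape into an odd-length, centred kernel for RL.
      kernel = np.concatenate([np.zeros(199), shape])
      recovered = richardson_lucy(np.clip(signal, 0.0, None), kernel,
                                  num_iter=200, clip=False)
      print(int((recovered > 0.2).sum()), "candidate pulse samples")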

  2. Lucy: Surveying the diversity of Trojans

    NASA Astrophysics Data System (ADS)

    Levison, H.; Olkin, C.; Noll, K.; Marchi, S.

    2017-09-01

    The Lucy mission, selected as part of NASA's Discovery Program, is the first reconnaissance of the Jupiter Trojans, objects that hold vital clues to deciphering the history of the Solar System. Due to an unusual and fortuitous orbital configuration, Lucy will perform a comprehensive investigation that visits six of these primitive bodies, covering both the L4 and L5 swarms, all the known taxonomic types, the largest remnant of a catastrophic collision, and a nearly equal mass binary. It will use a suite of high-heritage remote sensing instruments to map geologic, surface color and composition, thermal and other physical properties of its targets at close range. Lucy, like the human fossil for which it is named, will revolutionize the understanding of our origins.

  3. MSR performance enhancements and modifications at St. Lucie Power Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubano, V.F.; Ugelow, A.G.; Menocal, A.G.

    1989-01-01

    The St. Lucie Power Plant provides an excellent historical perspective on various moisture separator/reheater improvements. Between the two essentially identical units there is a total of 14 years of operating experience with various moisture separator/reheater configurations, with a combination of four different heat transfer surfaces and three moisture removal configurations. Through various modifications and enhancements, the performance and the reliability of the moisture separator/reheaters at the St. Lucie Power Plant, and consequently the overall plant performance, have been improved. This improvement has taken place over several years and involves changes in both the heat transfer and moisture removal areas. This paper provides an overview of the history and description of moisture separator/reheater modifications at the St. Lucie Power Plant with the resulting performance improvements.

  4. CRANE WINCH MECHANISM, UPPER LEVEL OF HYDROELECTRIC POWER HOUSE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    CRANE WINCH MECHANISM, UPPER LEVEL OF HYDROELECTRIC POWER HOUSE - St. Lucie Canal, Lock No. 1, Hydroelectric Power House, St. Lucie, Cross State Canal, Okeechobee Intracoastal Waterway, Stuart, Martin County, FL

  5. ELECTRICAL SWITCHBOARD IN UPPER LEVEL OF HYDROELECTRIC POWER HOUSE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    ELECTRICAL SWITCHBOARD IN UPPER LEVEL OF HYDROELECTRIC POWER HOUSE - St. Lucie Canal, Lock No. 1, Hydroelectric Power House, St. Lucie, Cross State Canal, Okeechobee Intracoastal Waterway, Stuart, Martin County, FL

  6. Numerical study of entropy generation due to coupled laminar and turbulent mixed convection and thermal radiation in an enclosure filled with a semitransparent medium.

    PubMed

    Goodarzi, M; Safaei, M R; Oztop, Hakan F; Karimipour, A; Sadeghinezhad, E; Dahari, M; Kazi, S N; Jomhari, N

    2014-01-01

    The effect of radiation on laminar and turbulent mixed convection heat transfer of a semitransparent medium in a square enclosure was studied numerically using the Finite Volume Method. A structured mesh and the SIMPLE algorithm were utilized to model the governing equations. Turbulence and radiation were modeled with the RNG k-ε model and Discrete Ordinates (DO) model, respectively. For Richardson numbers ranging from 0.1 to 10, simulations were performed for Rayleigh numbers in laminar flow (10⁴) and turbulent flow (10⁸). The model predictions were validated against previous numerical studies and good agreement was observed. The simulated results indicate that for laminar and turbulent motion states, computing the radiation heat transfer significantly enhanced the Nusselt number (Nu) as well as the heat transfer coefficient. Higher Richardson numbers did not noticeably affect the average Nusselt number and corresponding heat transfer rate. Besides, as expected, the heat transfer rate for the turbulent flow regime surpassed that in the laminar regime. The simulations additionally demonstrated that for a constant Richardson number, computing the radiation heat transfer majorly affected the heat transfer structure in the enclosure; however, its impact on the fluid flow structure was negligible.
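    For readers unfamiliar with the parameter (this definition is standard background, not restated in the abstract), the mixed-convection Richardson number is the ratio of buoyancy to inertial effects, Ri = Gr/Re² = gβ(T_h - T_c)L / U², so Ri = 0.1 corresponds to flow dominated by the forced component and Ri = 10 to flow dominated by buoyancy.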

  7. Numerical Study of Entropy Generation due to Coupled Laminar and Turbulent Mixed Convection and Thermal Radiation in an Enclosure Filled with a Semitransparent Medium

    PubMed Central

    Goodarzi, M.; Safaei, M. R.; Oztop, Hakan F.; Karimipour, A.; Sadeghinezhad, E.; Dahari, M.; Kazi, S. N.; Jomhari, N.

    2014-01-01

    The effect of radiation on laminar and turbulent mixed convection heat transfer of a semitransparent medium in a square enclosure was studied numerically using the Finite Volume Method. A structured mesh and the SIMPLE algorithm were utilized to model the governing equations. Turbulence and radiation were modeled with the RNG k-ε model and Discrete Ordinates (DO) model, respectively. For Richardson numbers ranging from 0.1 to 10, simulations were performed for Rayleigh numbers in laminar flow (10⁴) and turbulent flow (10⁸). The model predictions were validated against previous numerical studies and good agreement was observed. The simulated results indicate that for laminar and turbulent motion states, computing the radiation heat transfer significantly enhanced the Nusselt number (Nu) as well as the heat transfer coefficient. Higher Richardson numbers did not noticeably affect the average Nusselt number and corresponding heat transfer rate. Besides, as expected, the heat transfer rate for the turbulent flow regime surpassed that in the laminar regime. The simulations additionally demonstrated that for a constant Richardson number, computing the radiation heat transfer majorly affected the heat transfer structure in the enclosure; however, its impact on the fluid flow structure was negligible. PMID:24778601

  8. Orbit of the Patroclus-Menoetius Binary, a Lucy Mission Target

    NASA Astrophysics Data System (ADS)

    Noll, Keith

    2016-10-01

    We are proposing to observe Trojan binary asteroid (617) Patroclus-Menoetius, one of the targets of the Lucy mission. Lucy was selected as the next Discovery mission on January 4, 2017, for launch in October 2021. Observations this year are needed to establish the mutual orbit of the binary, which is of critical importance for mission planning. The mutual orbit phase is essentially undetermined from the accumulation of orbit period uncertainty since last measured in 2010. Orbital phase is needed in order to be able to predict the timing of mutual events that will begin late in 2017. These mutual events are essential to planning for the Lucy mission, especially in establishing the precise orientation of the mutual orbit plane and ascending node that is critical to early planning for flyby encounter design and capabilities.

  9. On a multigrid method for the coupled Stokes and porous media flow problem

    NASA Astrophysics Data System (ADS)

    Luo, P.; Rodrigo, C.; Gaspar, F. J.; Oosterlee, C. W.

    2017-07-01

    The multigrid solution of coupled porous media and Stokes flow problems is considered. The Darcy equation as the saturated porous medium model is coupled to the Stokes equations by means of appropriate interface conditions. We focus on an efficient multigrid solution technique for the coupled problem, which is discretized by finite volumes on staggered grids, giving rise to a saddle point linear system. Special treatment is required regarding the discretization at the interface. An Uzawa smoother is employed in multigrid, which is a decoupled procedure based on symmetric Gauss-Seidel smoothing for velocity components and a simple Richardson iteration for the pressure field. Since a relaxation parameter is part of a Richardson iteration, Local Fourier Analysis (LFA) is applied to determine the optimal parameters. Highly satisfactory multigrid convergence is reported, and, moreover, the algorithm performs well for small values of the hydraulic conductivity and fluid viscosity, that are relevant for applications.
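    A minimal sketch of the decoupled smoother described above, assuming a generic saddle-point system [[A, Bᵀ], [B, 0]] [u; p] = [f; g]: one symmetric Gauss-Seidel sweep on the velocity block followed by a Richardson update with relaxation parameter ω on the pressure. The tiny random matrices and the value of ω below are placeholders for illustration; in the paper the system comes from a staggered-grid finite volume discretization and ω is chosen by Local Fourier Analysis.

      # Hedged sketch: Uzawa-type smoothing step (symmetric Gauss-Seidel on the
      # velocity, Richardson on the pressure) for a generic saddle-point system.
      import numpy as np

      def uzawa_smoother(A, B, f, g, u, p, omega, sweeps=1):
          L, U = np.tril(A), np.triu(A)                       # GS splittings (both keep the diagonal)
          for _ in range(sweeps):
              rhs = f - B.T @ p
              u = np.linalg.solve(L, rhs - (A - L) @ u)       # forward Gauss-Seidel sweep
              u = np.linalg.solve(U, rhs - (A - U) @ u)       # backward Gauss-Seidel sweep
              p = p + omega * (B @ u - g)                     # Richardson step on the constraint equation
          return u, p

      rng = np.random.default_rng(3)
      n, m = 20, 8
      M = rng.normal(size=(n, n)); A = M @ M.T + n * np.eye(n)   # SPD velocity block
      B = rng.normal(size=(m, n))
      f, g = rng.normal(size=n), rng.normal(size=m)
      u, p = np.zeros(n), np.zeros(m)
      for _ in range(200):                                    # iterated alone here; in multigrid only a
          u, p = uzawa_smoother(A, B, f, g, u, p, omega=0.3)  # few sweeps are applied per level
      print(np.linalg.norm(A @ u + B.T @ p - f), np.linalg.norm(B @ u - g))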

  10. Diode laser vaporisation of the prostate vs. diode laser under cold irrigation: A randomised control trial.

    PubMed

    Pillai, Ravisankar G; Al Naieb, Ziad; Angamuthu, Stephen; Mundackal, Tintu

    2014-12-01

    To compare the perioperative morbidity and early follow-up after diode laser vaporisation of the prostate (LVP) and its modification, diode laser under cold irrigation (LUCI), in patients with symptomatic benign prostatic hyperplasia, as the main disadvantages of LVP are the postoperative pain, dysuria and storage urinary symptoms. This was a single-centre prospective randomised control trial in which 100 patients were randomised to receive LVP (50) or LUCI (50) from June 2011 until July 2012. LUCI is similar to LVP except that it is done under irrigation with normal saline at 4 °C instead of saline at room temperature. The primary outcome measures were the International Prostate Symptom Score (IPSS), IPSS-Dysuria, a pain scale (PS), maximum flow rate (Q max), a quality-of-life (QoL) score and the postvoid residual urine volume (PVR) after 1 month, then the IPSS, Q max, QoL, and PVR at 3 and 12 months. Secondary outcomes included intraoperative surgical variables, e.g., the decline in core temperature, bleeding, peri- and postoperative morbidity. The baseline characteristics of both groups were similar. For the primary outcome measures, there was a statistically significant difference between the groups in all variables except Q max after 1 month, in favour of LUCI. The mean (SD) IPSS at 1 month in the LVP group was 8.97 (1.68), statistically significantly different from that after LUCI, of 6.89 (1.5) (P < 0.05). The mean IPSS-Dysuria at 1 month was also significantly different, at -2.32 (0.91) for LVP and 3.54 (1.07) for LUCI (P < 0.05). The respective mean PS at 1 month was 7.84 (2.92) and 5.7 (2.1) (P < 0.05). The QoL and PVR at 1 month were also significantly different. Within the first month 17% of patients in the LVP group and 4% in the LUCI group complained of transient urgency or stress incontinence, and this difference was statistically significant (P < 0.05). There was no significant bleeding in either group. The mean operative time or applied energy of LVP was not statistically significantly different from that of LUCI, and there was no significant difference in the decline in core temperature between the groups (P > 0.05). LUCI is a good modification for reducing the pain, dysuria and storage symptoms associated with LVP. The procedure appears to be safe, with no significant decrease in core temperature in either group.

  11. 33 CFR 110.73c - Okeechobee Waterway, St. Lucie River, Stuart, FL.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... HOMELAND SECURITY ANCHORAGES ANCHORAGE REGULATIONS Special Anchorage Areas § 110.73c Okeechobee Waterway, St. Lucie River, Stuart, FL. The following is a special anchorage area: Beginning on the Okeechobee...

  12. 33 CFR 110.73c - Okeechobee Waterway, St. Lucie River, Stuart, FL.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... HOMELAND SECURITY ANCHORAGES ANCHORAGE REGULATIONS Special Anchorage Areas § 110.73c Okeechobee Waterway, St. Lucie River, Stuart, FL. The following is a special anchorage area: Beginning on the Okeechobee...

  13. 33 CFR 110.73c - Okeechobee Waterway, St. Lucie River, Stuart, FL.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... HOMELAND SECURITY ANCHORAGES ANCHORAGE REGULATIONS Special Anchorage Areas § 110.73c Okeechobee Waterway, St. Lucie River, Stuart, FL. The following is a special anchorage area: Beginning on the Okeechobee...

  14. 33 CFR 110.73c. - Okeechobee Waterway, St. Lucie River, Stuart, FL.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... HOMELAND SECURITY ANCHORAGES ANCHORAGE REGULATIONS Special Anchorage Areas § 110.73c. Okeechobee Waterway, St. Lucie River, Stuart, FL. The following is a special anchorage area: Beginning on the Okeechobee...

  15. VIEW OF SOUTHEAST SIDE OF HYDROELECTRIC POWER HOUSE, VIEW TOWARDS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    VIEW OF SOUTHEAST SIDE OF HYDROELECTRIC POWER HOUSE, VIEW TOWARDS NORTHWEST - St. Lucie Canal, Lock No. 1, Hydroelectric Power House, St. Lucie, Cross State Canal, Okeechobee Intracoastal Waterway, Stuart, Martin County, FL

  16. PLANAR VIEW OF NORTHWEST SIDE OF HYDROELECTRIC POWER HOUSE, VIEW ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    PLANAR VIEW OF NORTHWEST SIDE OF HYDROELECTRIC POWER HOUSE, VIEW TOWARDS SOUTHEAST - St. Lucie Canal, Lock No. 1, Hydroelectric Power House, St. Lucie, Cross State Canal, Okeechobee Intracoastal Waterway, Stuart, Martin County, FL

  17. HANDMADE WOODEN RACK FOR TOOL STORE, LOWER LEVEL OF HYDROELECTRIC ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    HANDMADE WOODEN RACK FOR TOOL STORE, LOWER LEVEL OF HYDROELECTRIC POWER HOUSE - St. Lucie Canal, Lock No. 1, Hydroelectric Power House, St. Lucie, Cross State Canal, Okeechobee Intracoastal Waterway, Stuart, Martin County, FL

  18. HARNESS END OF ELECTRIC TURBINE IN LOWER LEVEL OF HYDROELECTRIC ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    HARNESS END OF ELECTRIC TURBINE IN LOWER LEVEL OF HYDROELECTRIC POWER HOUSE - St. Lucie Canal, Lock No. 1, Hydroelectric Power House, St. Lucie, Cross State Canal, Okeechobee Intracoastal Waterway, Stuart, Martin County, FL

  19. DETAIL INTERIOR VIEW OF CONTROL PANEL IN CONTROL STATION, VIEW ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    DETAIL INTERIOR VIEW OF CONTROL PANEL IN CONTROL STATION, VIEW TOWARDS WEST - St. Lucie Canal, Lock No. 1, Control Station, St. Lucie, Cross State Canal, Okeechobee Intracoastal Waterway, Stuart, Martin County, FL

  20. OBLIQUE VIEW OF NORTHEAST AND SOUTHEAST SIDES OF HYDROELECTRIC POWER ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    OBLIQUE VIEW OF NORTHEAST AND SOUTHEAST SIDES OF HYDROELECTRIC POWER HOUSE, VIEW TOWARDS WEST - St. Lucie Canal, Lock No. 1, Hydroelectric Power House, St. Lucie, Cross State Canal, Okeechobee Intracoastal Waterway, Stuart, Martin County, FL

  1. DETAIL INTERIOR VIEW OF ELECTRIC GENERATOR ON UPPER LEVEL ON ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    DETAIL INTERIOR VIEW OF ELECTRIC GENERATOR ON UPPER LEVEL ON HYDROELECTRIC POWER HOUSE - St. Lucie Canal, Lock No. 1, Hydroelectric Power House, St. Lucie, Cross State Canal, Okeechobee Intracoastal Waterway, Stuart, Martin County, FL

  2. Pueblo Pottery: Continuity and Change. Lucy Lewis.

    ERIC Educational Resources Information Center

    Herzog, Melanie

    1991-01-01

    Describes Lucy Lewis' ceramic work which is inspired by the ancient pottery of her Acoma Pueblo artistic heritage. Discusses concepts of tradition, artistic heritage, and change over time. Outlines related ceramic and discussion activities for elementary and secondary students. (KM)

  3. V694 Mon (MWC 560) spectroscopy requested

    NASA Astrophysics Data System (ADS)

    Waagen, Elizabeth O.

    2017-05-01

    The observing campaign from 2016 on V694 Mon (MWC 560) (AAVSO Alert Notice 538) has been continued, but with different requirements. Photometry is no longer specifically requested on a regular basis (although ongoing observations that do not interfere with other obligations are welcome). Spectroscopy on a cadence of a week or two is requested to monitor changes in the disk outflow. Investigator Adrian Lucy writes: "Adrian Lucy and Dr. Jeno Sokoloski (Columbia University) have requested spectroscopic monitoring of the broad-absorption-line symbiotic star V694 Mon (MWC 560), as a follow-up to coordinated multi-wavelength observations obtained during its recent outburst (ATel #8653, #8832, #8957; #10281). This system is a perfect place in which to study the relationship between an accretion disk and disk winds/jets, and a high-value target for which even low-resolution spectra can be extraordinarily useful...Optical brightening in MWC 560 tends to predict higher-velocity absorption, but sometimes jumps in absorption velocity also appear during optical quiescence (e.g., Iijima 2001, ASPCS, 242, 187). If such a velocity jump occurs during photometric quiescence, it may prompt radio observations to confirm and test the proposed outflow origin for recently-discovered flat-spectrum radio emission (Lucy et al. ATel #10281)...Furthermore, volunteer spectroscopic monitoring of this system has proved useful in unpredictable ways. For example, 'amateur' spectra obtained by Somogyi Péter in 2015 December demonstrated that the velocity of absorption was very low only a month before an optical outburst peak prompted absorption troughs up to 3000 km/s, which constrains very well the timing of the changes to the outflow to a degree that would not have been otherwise possible. Any resolution can be useful. A wavelength range that can accommodate a blueshift of at least 140 angstroms (6000 km/s) from the rest wavelengths of H-alpha at 6562 angstroms and/or H-beta at 4861 angstroms is ideal, though spectra with a smaller range can still be useful. Photometry could potentially still be useful, but will be supplementary to medium-cadence photometry being collected by the ANS collaboration." "Spectroscopy may be uploaded to the ARAS database (http://www.astrosurf.com/aras/Aras_DataBase/DataBase.htm), or sent to Adrian and Jeno directly at . Finder charts with sequence may be created using the AAVSO Variable Star Plotter (https://www.aavso.org/vsp). Photometry should be submitted to the AAVSO International Database. See full Special Notice for more details.

  4. MO-FG-CAMPUS-TeP1-05: Rapid and Efficient 3D Dosimetry for End-To-End Patient-Specific QA of Rotational SBRT Deliveries Using a High-Resolution EPID

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y M; Han, B; Xing, L

    2016-06-15

    Purpose: EPID-based patient-specific quality assurance provides verification of the planning setup and delivery process that phantomless QA and log-file based virtual dosimetry methods cannot achieve. We present a method for EPID-based QA utilizing spatially-variant EPID response kernels that allows for direct calculation of the entrance fluence and 3D phantom dose. Methods: An EPID dosimetry system was utilized for 3D dose reconstruction in a cylindrical phantom for the purposes of end-to-end QA. Monte Carlo (MC) methods were used to generate pixel-specific point-spread functions (PSFs) characterizing the spatially non-uniform EPID portal response in the presence of phantom scatter. The spatially-variant PSFs were decomposed into spatially-invariant basis PSFs with the symmetric central-axis kernel as the primary basis kernel and off-axis representing orthogonal perturbations in pixel-space. This compact and accurate characterization enables the use of a modified Richardson-Lucy deconvolution algorithm to directly reconstruct entrance fluence from EPID images without iterative scatter subtraction. High-resolution phantom dose kernels were cogenerated in MC with the PSFs enabling direct recalculation of the resulting phantom dose by rapid forward convolution once the entrance fluence was calculated. A Delta4 QA phantom was used to validate the dose reconstructed in this approach. Results: The spatially-invariant representation of the EPID response accurately reproduced the entrance fluence with >99.5% fidelity with a simultaneous reduction of >60% in computational overhead. 3D dose for 10⁶ voxels was reconstructed for the entire phantom geometry. A 3D global gamma analysis demonstrated a >95% pass rate at 3%/3mm. Conclusion: Our approach demonstrates the capabilities of an EPID-based end-to-end QA methodology that is more efficient than traditional EPID dosimetry methods. Displacing the point of measurement external to the QA phantom reduces the necessary complexity of the phantom itself while offering a method that is highly scalable and inherently generalizable to rotational and trajectory based deliveries. This research was partially supported by Varian.

  5. TomoTherapy MLC verification using exit detector data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen Quan; Westerly, David; Fang Zhenyu

    2012-01-15

    Purpose: Treatment delivery verification (DV) is important in the field of intensity modulated radiation therapy (IMRT). While IMRT and image guided radiation therapy (IGRT) allow us to create more conformal plans and enable the use of tighter margins, an erroneously executed plan can have detrimental effects on the treatment outcome. The purpose of this study is to develop a DV technique to verify TomoTherapy's multileaf collimator (MLC) using the onboard mega-voltage CT detectors. Methods: The proposed DV method uses temporal changes in the MVCT detector signal to predict actual leaf open times delivered on the treatment machine. Penumbra and scattered radiation effects may produce confounding results when determining leaf open times from the raw detector data. To reduce the impact of the effects, an iterative, Richardson-Lucy (R-L) deconvolution algorithm is applied. Optical sensors installed on each MLC leaf are used to verify the accuracy of the DV technique. The robustness of the DV technique is examined by introducing different attenuation materials in the beam. Additionally, the DV technique has been used to investigate several clinical plans which failed to pass delivery quality assurance (DQA) and was successful in identifying MLC timing discrepancies as the root cause. Results: The leaf open time extracted from the exit detector showed good agreement with the optical sensors under a variety of conditions. Detector-measured leaf open times agreed with optical sensor data to within 0.2 ms, and 99% of the results agreed within 8.5 ms. These results changed little when attenuation was added in the beam. For the clinical plans failing DQA, the dose calculated from reconstructed leaf open times played an instrumental role in discovering the root cause of the problem. Throughout the retrospective study, it is found that the reconstructed dose always agrees with measured doses to within 1%. Conclusions: The exit detectors in the TomoTherapy treatment systems can provide valuable information about MLC behavior during delivery. A technique to estimate the TomoTherapy binary MLC leaf open time from exit detector signals is described. This technique is shown to be both robust and accurate for delivery verification.

  6. Improving the Accuracy and Scalability of Discriminative Learning Methods for Markov Logic Networks

    DTIC Science & Technology

    2011-05-01

    [Garbled extract; recoverable content: table-of-contents entries for Sections 2.2 (Inductive Logic Programming and Aleph) and 2.3 (MLNs and Alchemy), plus body fragments noting that Aleph allows users to customize each of these steps and thereby supports a variety of specific algorithms, and that by limiting the search to each unique structural motif, LSM is able to find good clauses in an efficient manner (Alchemy: Kok, Singla, Richardson).]

  7. Ground Observation of Asteroids at Mission ETA

    NASA Astrophysics Data System (ADS)

    Paganelli, F.; Conrad, A.

    2018-04-01

    We focused on the Lucy mission's target asteroids to derive information for the best ground-based observations at mission ETA. We used a workflow for data extraction through JPL Horizons, considering the LBT-MODS 1. The results outline observation opportunities suitable during the close approach at Lucy's ETA.

  8. OBLIQUE VIEW OF NORTHWEST AND NORTHEAST SIDES OF HYDROELECTRIC POWER ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    OBLIQUE VIEW OF NORTHWEST AND NORTHEAST SIDES OF HYDROELECTRIC POWER HOUSE, OLD BYPASS IN BACKGROUND, VIEW TOWARDS SOUTH - St. Lucie Canal, Lock No. 1, Hydroelectric Power House, St. Lucie, Cross State Canal, Okeechobee Intracoastal Waterway, Stuart, Martin County, FL

  9. DETAIL OF WESTINGHOUSE AND B. MORGAN SMITH NAMEPLATES ON ELECTRIC ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    DETAIL OF WESTINGHOUSE AND B. MORGAN SMITH NAMEPLATES ON ELECTRIC GENERATOR IN UPPER LEVEL OF HYDROELECTRIC POWER HOUSE - St. Lucie Canal, Lock No. 1, Hydroelectric Power House, St. Lucie, Cross State Canal, Okeechobee Intracoastal Waterway, Stuart, Martin County, FL

  10. 76 FR 77563 - Florida Power & Light Company; St. Lucie Plant, Unit No. 1; Exemption

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-13

    ....2, because the P-T limits developed for St. Lucie, Unit 1, use a finite element method to determine... Code for calculating K Im factors, and instead applies FEM [finite element modeling] methods for...

  11. OBLIQUE VIEW OF SOUTHEAST AND SOUTHWEST SIDES OF UPPER GATE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    OBLIQUE VIEW OF SOUTHEAST AND SOUTHWEST SIDES OF UPPER GATE MACHINERY HOUSES WITH HYDROELECTRIC POWER HOUSE IN BACKGROUND - St. Lucie Canal, Lock No. 1, Machinery Houses, St. Lucie, Cross State Canal, Okeechobee Intracoastal Waterway, Stuart, Martin County, FL

  12. Lucy: Navigating a Jupiter Trojan Tour

    NASA Technical Reports Server (NTRS)

    Stanbridge, Dale; Williams, Ken; Williams, Bobby; Jackman, Coralie; Weaver, Hal; Berry, Kevin; Sutter, Brian; Englander, Jacob

    2017-01-01

    In January 2017, NASA selected the Lucy mission to explore six Jupiter Trojan asteroids. These six bodies, remnants of the primordial material that formed the outer planets, were captured in the Sun-Jupiter L4 and L5 Lagrangian regions early in the solar system's formation. These particular bodies were chosen because of their diverse spectral properties and the chance to observe up close, for the first time, an orbiting pair of approximately equal-mass bodies, Patroclus and Menoetius. KinetX, Inc. is the primary navigation supplier for the Lucy mission. This paper describes preliminary navigation analyses of the approach phase for each Trojan encounter.

  13. The Lund University Checklist for Incipient Exhaustion-a cross-sectional comparison of a new instrument with similar contemporary tools.

    PubMed

    Persson, Roger; Österberg, Kai; Viborg, Njördur; Jönsson, Peter; Tenenbaum, Artur

    2016-04-21

    Stress-related health problems (e.g., work-related exhaustion) are a societal concern in many postindustrial countries. Experience suggests that early detection and intervention are crucial in preventing long-term negative consequences. In the present study, we benchmark a new tool for early identification of work-related exhaustion-the Lund University Checklist for Incipient Exhaustion (LUCIE)-against other contextually relevant inventories and two contemporary Swedish screening scales. A cross-sectional population sample (n = 1355) completed: LUCIE, Karolinska Exhaustion Disorder Scale (KEDS), Self-reported Exhaustion Disorder Scale (s-ED), Shirom-Melamed Burnout Questionnaire (SMBQ), Utrecht Work Engagement Scale (UWES-9), Job Content Questionnaire (JCQ), Big Five Inventory (BFI), and items concerning work-family interference and stress in private life. Increasing signs of exhaustion on LUCIE were positively associated with signs of exhaustion on KEDS and s-ED. The prevalence rates were 13.4, 13.8 and 7.8 %, respectively (3.8 % were identified by all three instruments). Increasing signs of exhaustion on LUCIE were also positively associated with reports of burnout, job demands, stress in private life, family-to-work interference and neuroticism, as well as negatively associated with reports of job control, job support and work engagement. LUCIE, which is intended to detect pre-stages of ED, exhibits logical and coherent positive relations with KEDS and s-ED as well as other conceptually similar inventories. The results suggest that LUCIE has the potential to detect mild states of exhaustion (possibly representing pre-stages of ED) that, if not brought to the attention of the healthcare system and treated, may develop into ED. The prospective validity remains to be evaluated.

  14. A generalized Condat's algorithm of 1D total variation regularization

    NASA Astrophysics Data System (ADS)

    Makovetskii, Artyom; Voronin, Sergei; Kober, Vitaly

    2017-09-01

    A common way of solving the denoising problem is to utilize total variation (TV) regularization. Many efficient numerical algorithms have been developed for solving the TV regularization problem. Condat described a fast direct algorithm to compute the processed 1D signal. There also exists a direct linear-time algorithm for 1D TV denoising, referred to as the taut string algorithm. Condat's algorithm is based on a dual problem to the 1D TV regularization. In this paper, we propose a variant of Condat's algorithm based on the direct 1D TV regularization problem. The usage of Condat's algorithm with the taut string approach leads to a clear geometric description of the extremal function. Computer simulation results are provided to illustrate the performance of the proposed algorithm for restoration of degraded signals.
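
    Condat's method and the taut string algorithm are exact, direct solvers; purely as a point of reference, the same 1D TV problem can also be solved with a few lines of projected gradient on its dual, which is slower but easy to verify. A minimal sketch (NumPy; the function name, step size, and iteration count are illustrative):

```python
import numpy as np

def tv_denoise_1d(y, lam, n_iter=1000):
    """Solve min_x 0.5*||x - y||^2 + lam * sum_i |x[i+1] - x[i]| by projected
    gradient on the dual problem (not Condat's direct algorithm)."""
    y = np.asarray(y, dtype=float)
    z = np.zeros(y.size - 1)                            # one dual variable per first difference
    for _ in range(n_iter):
        x = y - np.concatenate(([-z[0]], z[:-1] - z[1:], [z[-1]]))   # x = y - D^T z
        z = np.clip(z + 0.25 * np.diff(x), -lam, lam)   # step 0.25 (safe 1/L, L = ||D D^T|| <= 4), then project
    return y - np.concatenate(([-z[0]], z[:-1] - z[1:], [z[-1]]))
```

    For long signals the direct algorithms discussed in the abstract are preferable; the sketch is only meant to make the underlying optimization problem concrete.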

  15. Overview of a Hybrid Underwater Camera System

    DTIC Science & Technology

    2014-07-01

    [Garbled extract; recoverable content: ...meters), in increments of 200 ps. The camera is also equipped with a 6:1 motorized zoom lens and a precision miniature attitude and heading reference system (AHRS). Figure: LUCIE sub-systems, comprising the LUCIE control and power distribution system, AHRS, pulsed laser, gated camera, and sonar transducer. (Proc. of SPIE Vol. 9111)]

  16. Prospects for Near Ultraviolet Astronomical Observations from the Lunar Surface — LUCI

    NASA Astrophysics Data System (ADS)

    Mathew, J.; Kumar, B.; Sarpotdar, M.; Suresh, A.; Nirmal, K.; Sreejith, A. G.; Safonova, M.; Murthy, J.; Brosch, N.

    2018-04-01

    We have explored the prospects for UV observations from the lunar surface and developed a UV telescope (LUCI, the Lunar Ultraviolet Cosmic Imager) to be placed on the Moon, with the aim of detecting bright UV transients such as SNe, novae, TDEs, etc.

  17. OBLIQUE VIEW OF NORTHWEST SIDE OF HYDROELECTRIC POWER HOUSE AND ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    OBLIQUE VIEW OF NORTHWEST SIDE OF HYDROELECTRIC POWER HOUSE AND INTERIOR OF SOUTHWEST CORNER OF OLD BYPASS IN FOREGROUND, VIEW TOWARDS SOUTHWEST - St. Lucie Canal, Lock No. 1, Hydroelectric Power House, St. Lucie, Cross State Canal, Okeechobee Intracoastal Waterway, Stuart, Martin County, FL

  18. Face off: searching for truth and beauty in the clinical encounter. Based on the memoir, autobiography of a face by Lucy Grealy.

    PubMed

    Shannon, Mary T

    2012-08-01

    Based on Lucy Grealy's memoir, Autobiography of a Face, this article explores the relationship between gender and illness in our culture, as well as the paradox of "intimacy without intimacy" in the clinical encounter. Included is a brief review of how authenticity, vulnerability, and mutual recognition of suffering can foster the kind of empathic doctor-patient relationship that Lucy Grealy sorely needed, but never received. As she says at the end of her memoir, "All those years I'd handed my ugliness over to people, and seen only the different ways it was reflected back to me."

  19. Advanced Source Deconvolution Methods for Compton Telescopes

    NASA Astrophysics Data System (ADS)

    Zoglauer, Andreas

    The next generation of space telescopes utilizing Compton scattering for astrophysical observations is destined to one day unravel the mysteries behind Galactic nucleosynthesis, to determine the origin of the positron annihilation excess near the Galactic center, and to uncover the hidden emission mechanisms behind gamma-ray bursts. Besides astrophysics, Compton telescopes are establishing themselves in heliophysics, planetary sciences, medical imaging, accelerator physics, and environmental monitoring. Since the COMPTEL days, great advances in the achievable energy and position resolution have been made, creating an extremely vast, but also extremely sparsely sampled, data space. Unfortunately, the optimum way to analyze the data from the next generation of Compton telescopes, one that can retrieve all source parameters (location, spectrum, polarization, flux) and achieve the best possible resolution and sensitivity at the same time, has not yet been found. This is especially important for all science objectives looking at the inner Galaxy: the large number of expected sources, the high background (internal and Galactic diffuse emission), and the limited angular resolution make it the most taxing case for data analysis. In general, two key challenges exist: First, what are the best data space representations to answer the specific science questions? Second, what is the best way to deconvolve the data to fully retrieve the source parameters? For modern Compton telescopes, the existing data space representations can either correctly reconstruct the absolute flux (binned mode) or achieve the best possible resolution (list-mode); both together have not been possible up to now. Here we propose to develop a two-stage hybrid reconstruction method which combines the best aspects of both. Using a proof-of-concept implementation, we can for the first time show that it is possible to alternate during each deconvolution step between a binned-mode approach to get the flux right and a list-mode approach to get the best angular resolution, achieving both at the same time. The second open question concerns the best deconvolution algorithm. For example, several algorithms have been investigated for the famous COMPTEL 26Al map, which resulted in significantly different images. There is no clear answer as to which approach provides the most accurate result, largely due to the fact that detailed simulations to test and verify the approaches and their limitations were not possible at that time. This has changed, and therefore we propose to evaluate several deconvolution algorithms (e.g. Richardson-Lucy, Maximum-Entropy, MREM, and stochastic origin ensembles) with simulations of typical observations to find the best algorithm for each application and for each stage of the hybrid reconstruction approach. We will adapt, implement, and fully evaluate the hybrid source reconstruction approach as well as the various deconvolution algorithms with simulations of synthetic benchmarks and simulations of key science objectives, such as diffuse nuclear line science and continuum science of point sources, as well as with calibrations and observations of the COSI balloon telescope.
This proposal for "development of new data analysis methods for future satellite missions" will significantly improve the source deconvolution techniques for modern Compton telescopes and will allow unlocking the full potential of envisioned satellite missions using Compton-scatter technology in astrophysics, heliophysics and planetary sciences, and ultimately help them to "discover how the universe works" and to better "understand the sun". Ultimately it will also benefit ground based applications such as nuclear medicine and environmental monitoring as all developed algorithms will be made publicly available within the open-source Compton telescope analysis framework MEGAlib.

  20. Lucy Wheelock: Her Life and Work.

    ERIC Educational Resources Information Center

    DuCharme, Catherine C.

    2000-01-01

    Examines early influences of educator Lucy Wheelock's life, her contributions to the kindergarten movement, and her legacy to early childhood educators today. Discusses her early education, preparation as a kindergarten teacher, and her efforts to spread the "gospel of Froebel." Maintains that Wheelock's legacy is her belief in the…

  1. Working at a Joint-Use Library

    ERIC Educational Resources Information Center

    Robinson, Carla

    2007-01-01

    The St. Lucie West Library, also known as the FAU Treasure Coast Campus Library, is a joint-use library facility, with Florida Atlantic University partnering with Indian River Community College and the St. Lucie County (FL) Library System. This article will discuss the circulation, course reserves, interlibrary loan, and collection management…

  2. Who Was Lucy Sprague Mitchell...And Why Should You Know?

    ERIC Educational Resources Information Center

    Smith, Mary K.

    2000-01-01

    Recounts the life and accomplishments of educator and writer Lucy Sprague Mitchell, suggesting that her life can bring inspiration and renewed vigor to today's early childhood educators. Considers the social conditions of Mitchell's era, her formative years, formal education and career decisions, participation in the Progressive Movement and…

  3. 75 FR 4839 - Endangered and Threatened Wildlife and Plants; Permit, St. Lucie County, FL

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-29

    ...] Endangered and Threatened Wildlife and Plants; Permit, St. Lucie County, FL AGENCY: Fish and Wildlife Service...-jay) breeding, feeding, and sheltering habitat incidental to lot preparation for the construction of a...)) of Florida scrub-jay breeding, feeding and sheltering habitat incidental to land preparation for...

  4. OBLIQUE VIEW OF SOUTHWEST AND SOUTHEAST SIDES OF UPPER GATE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    OBLIQUE VIEW OF SOUTHWEST AND SOUTHEAST SIDES OF UPPER GATE MACHINERY HOUSE, NORTH OF LOCK, WITH HYDRO-ELECTRIC POWER HOUSE AND DAM IN BACKGROUND, VIEW TOWARD NORTH NORTHWEST - St. Lucie Canal, Lock No. 1, Machinery Houses, St. Lucie, Cross State Canal, Okeechobee Intracoastal Waterway, Stuart, Martin County, FL

  5. Lucy Maynard Salmon (1853-1927): Pioneering Views on Teaching History.

    ERIC Educational Resources Information Center

    Nelson, Murry R.

    1996-01-01

    Profiles the career and contributions of educator and historian Lucy Maynard Salmon. Salmon's work on uniform standards for college admission became the basis of the College Entrance Examination. She developed a curriculum for elementary school history instruction that incorporated classical literature, folk tales, and biographies. (MJP)

  6. Resolved Observations of the Patroclus-Menoetius Binary

    NASA Astrophysics Data System (ADS)

    Noll, Keith S.; Grundy, William M.; Buie, Marc W.; Levison, Harold F.

    2017-10-01

    The Trojan binary (617) Patroclus-Menoetius is one of the targets of the Lucy Discovery mission. Lucy is scheduled to launch in October 2021. We observed this system with the Hubble Space Telescope in May and June 2017 in order to resolve the individual components and use the relative positions to update the binary orbit. The updated orbit is required to predict the upcoming mutual event season. A precise determination of the orbit phase, period, orbit plane and pole position that will result from observations of mutual events is essential for planning the Lucy mission’s encounter with this system. We present results of the successful HST observations including preliminary predictions for mutual events observable in semester 2018A.

  7. History matching by spline approximation and regularization in single-phase areal reservoirs

    NASA Technical Reports Server (NTRS)

    Lee, T. Y.; Kravaris, C.; Seinfeld, J.

    1986-01-01

    An automatic history matching algorithm is developed based on bi-cubic spline approximations of permeability and porosity distributions and on the theory of regularization to estimate permeability or porosity in a single-phase, two-dimensional areal reservoir from well pressure data. The regularization feature of the algorithm is used to convert the ill-posed history matching problem into a well-posed problem. The algorithm employs the conjugate gradient method as its core minimization method. A number of numerical experiments are carried out to evaluate the performance of the algorithm. Comparisons with conventional (non-regularized) automatic history matching algorithms indicate the superiority of the new algorithm with respect to the parameter estimates obtained. A quasi-optimal regularization parameter is determined without requiring a priori information on the statistical properties of the observations.
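
    The pairing of regularization with a conjugate-gradient core described above can be illustrated on a generic linear inverse problem. The sketch below (NumPy/SciPy; the operator, data, and parameter names are illustrative and not the paper's reservoir model) solves a Tikhonov-regularized least-squares problem by applying CG to the regularized normal equations:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def tikhonov_cg(A, b, alpha):
    """Solve min_x ||A x - b||^2 + alpha * ||x||^2 with conjugate gradients
    on the regularized normal equations (A^T A + alpha * I) x = A^T b."""
    n = A.shape[1]
    normal_op = LinearOperator((n, n), matvec=lambda v: A.T @ (A @ v) + alpha * v)
    x, info = cg(normal_op, A.T @ b, maxiter=500)
    return x

# illustrative use on an under-determined (hence ill-posed) random problem
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 120))
x_true = np.zeros(120); x_true[:10] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(80)
x_est = tikhonov_cg(A, b, alpha=1e-2)      # alpha > 0 makes the system well-posed
```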

  8. VERTICAL DETAIL OBLIQUE VIEW OF NORTHEAST SIDE OF HYDROELECTRIC POWER ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    VERTICAL DETAIL OBLIQUE VIEW OF NORTHEAST SIDE OF HYDROELECTRIC POWER HOUSE WITH OLD BYPASS IN FOREGROUND, SHOWING GLASS BLOCKS PROVIDING LIGHT TO BASEMENT OF HYDROELECTRIC POWER HOUSE, VIEW TOWARDS WEST SOUTHWEST - St. Lucie Canal, Lock No. 1, Hydroelectric Power House, St. Lucie, Cross State Canal, Okeechobee Intracoastal Waterway, Stuart, Martin County, FL

  9. OBLIQUE VIEW OF SOUTHWEST AND SOUTHEAST SIDES OF HYDROELECTRIC POWER ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    OBLIQUE VIEW OF SOUTHWEST AND SOUTHEAST SIDES OF HYDROELECTRIC POWER HOUSE WITH DAM TO LEFT OF HYDROELECTRIC POWER HOUSE AND ENTRANCE TO OLD LOCK CHAMBER ON RIGHT, VIEW TOWARDS NORTH - St. Lucie Canal, Lock No. 1, Hydroelectric Power House, St. Lucie, Cross State Canal, Okeechobee Intracoastal Waterway, Stuart, Martin County, FL

  10. A species' Odyssey: evolution of obstetrical mechanics from Australopithecus Lucy to nowadays.

    PubMed

    Chene, G; Tardieu, A-S; Trombert, B; Amouzougan, A; Lamblin, G; Mellier, G; Coppens, Y

    2014-10-01

    Study of obstetrical mechanics of Australopithecus Lucy, Homo neanderthalensis and Homo erectus relative to modern Homo sapiens and the Catarrhines. The material comprised a total of 360 pelves: 3 fossil pelves reconstructed using casts (Australopithecus afarensis Lucy or AL 288-1, Homo erectus KNM-WT 15000, H. neanderthalensis or Kebara 2), 305 female modern adult pelves and 52 female Catarrhine pelves (29 gorillas, 18 chimpanzees, 5 orang-utans). All these pelves were reconstructed in order to carry out 11 pelvimetric measurements. Each measurement was carried out twice and by two different operators. The pelvis of Lucy was platypelloid at each pelvic plane. The pelvic inlet of H. neanderthalensis was anteroposteriorly oval whereas the midplane and the outlet were transversely oval. The pelvis of H. erectus was globally round. In modern women, the inlet was transversely oval. The pelvic midplane and outlet were anteroposteriorly oval. In the great apes, the shape of all three pelvic planes was anteroposteriorly oval. The discriminating value of the various pelvimetry measurements place Australopithecus Lucy, H. neanderthalensis Kebara 2, and H. erectus KNM-WT 15000 close to modern humans and less similar to the great apes. Obstetrical mechanics evolved from dystocic delivery with a transverse orientation in Australopithecus to delivery with a modern human-like rotational birth and an increase in the anteroposterior diameters in H. erectus, H. neanderthalensis and modern H. sapiens. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  11. A Cleavable N-Terminal Signal Peptide Promotes Widespread Olfactory Receptor Surface Expression in HEK293T Cells

    PubMed Central

    Shepard, Blythe D.; Natarajan, Niranjana; Protzko, Ryan J.; Acres, Omar W.; Pluznick, Jennifer L.

    2013-01-01

    Olfactory receptors (ORs) are G protein-coupled receptors that detect odorants in the olfactory epithelium, and comprise the largest gene family in the genome. Identification of OR ligands typically requires OR surface expression in heterologous cells; however, ORs rarely traffic to the cell surface when exogenously expressed. Therefore, most ORs are orphan receptors with no known ligands. To date, studies have utilized non-cleavable rhodopsin (Rho) tags and/or chaperones (i.e. Receptor Transporting Protein, RTP1S, Ric8b and Gαolf) to improve surface expression. However, even with these tools, many ORs still fail to reach the cell surface. We used a test set of fifteen ORs to examine the effect of a cleavable leucine-rich signal peptide sequence (Lucy tag) on OR surface expression in HEK293T cells. We report here that the addition of the Lucy tag to the N-terminus increases the number of ORs reaching the cell surface to 7 of the 15 ORs (as compared to 3/15 without Rho or Lucy tags). Moreover, when ORs tagged with both Lucy and Rho were co-expressed with previously reported chaperones (RTP1S, Ric8b and Gαolf), we observed surface expression for all 15 receptors examined. In fact, two-thirds of Lucy-tagged ORs are able to reach the cell surface synergistically with chaperones even when the Rho tag is removed (10/15 ORs), allowing for the potential assessment of OR function with only an 8-amino acid Flag tag on the mature protein. As expected for a signal peptide, the Lucy tag was cleaved from the mature protein and did not alter OR-ligand binding and signaling. Our studies demonstrate that widespread surface expression of ORs can be achieved in HEK293T cells, providing promise for future large-scale deorphanization studies. PMID:23840901

  12. 77 FR 26793 - Florida Power and Light Company, St. Lucie Plant, Unit No. 2, Exemption

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-07

    ... NUCLEAR REGULATORY COMMISSION [Docket No. 50-389; NRC-2011-0194] Florida Power and Light Company, St. Lucie Plant, Unit No. 2, Exemption 1.0 Background The Florida Power & Light Company (FPL, the licensee) is the holder of Renewed Facility Operating License No. NPF-16, which authorizes operation of St...

  13. Meeting the Challenge of Systemic Change in Geography Education: Lucy Sprague Mitchell's Young Geographers

    ERIC Educational Resources Information Center

    Downs, Roger M.

    2016-01-01

    The history of K-12 geography education has been characterized by recurrent high hopes and dashed expectations. There have, however, been moments when the trajectory of geography education might have changed to offer students the opportunity to develop a thorough working knowledge of geography. Lucy Sprague Mitchell's geography program developed…

  14. The Lucy Calkins Project: Parsing a Self-Proclaimed Literacy Guru

    ERIC Educational Resources Information Center

    Feinberg, Barbara

    2007-01-01

    This article discusses the work of Lucy McCormick Calkins, an educator and the visionary founding director of Teachers College Reading and Writing Project. Begun in 1981, the think tank and teacher training institute has since trained hundreds of thousands of educators across the country. Calkins is one of the original architects of the…

  15. Were Australopithecines Ape-Human Intermediates or Just Apes? A Test of Both Hypotheses Using the "Lucy" Skeleton

    ERIC Educational Resources Information Center

    Senter, Phil

    2010-01-01

    Mainstream scientists often claim that australopithecines such as the specimen nicknamed "Lucy" exhibit anatomy intermediate between that of apes and that of humans and use this as evidence that humans evolved from australopithecines, which evolved from apes. On the other hand, creationists reject evolution and claim that australopithecines are…

  16. Linguistic Relativity in Japanese and English: Is Language the Primary Determinant in Object Classification?

    ERIC Educational Resources Information Center

    Mazuka, Reiko; Friedman, Ronald S.

    2000-01-01

    Tested claims by Lucy (1992a, 1992b) that differences between the number marking systems used by Yucatec Maya and English lead speakers of these languages to differentially attend to either the material composition or the shape of objects. Replicated Lucy's critical objects' classification experiments using speakers of English and Japanese.…

  17. View west of the James and Lucy Alexander gravestone and ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    View west of the James and Lucy Alexander gravestone and family plot among other demarcated family plots in the Female Union Band Cemetery. - Mount Zion Cemetery/ Female Union Band Cemetery, Bounded by 27th Street right-of-way N.W. (formerly Lyons Mill Road), Q Street N.W., & Mill Road N.W., Washington, District of Columbia, DC

  18. Whose Language Is Legit? Intersections of Race, Ethnicity, and Language

    ERIC Educational Resources Information Center

    Zisselsberger, Margarita; Collins, Kristina

    2016-01-01

    This case describes St. Lucy School, a K-8 elementary school in a mid-sized urban center. St. Lucy has traditionally served African American students. In the past 10 years, the neighborhood has experienced a significant shift in population, such that many Latino/a families are now entering the school. In response to these changes, the school…

  19. 77 FR 40092 - License Amendment To Increase the Maximum Reactor Power Level, Florida Power & Light Company, St...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-06

    ... Increase the Maximum Reactor Power Level, Florida Power & Light Company, St. Lucie, Units 1 and 2 AGENCY... amendment for Renewed Facility Operating License Nos. DPR-67 and NPF-16, issued to Florida Power & Light... St. Lucie County, Florida. The proposed license amendment would increase the maximum thermal power...

  20. Numerical simulations of imaging satellites with optical interferometry

    NASA Astrophysics Data System (ADS)

    Ding, Yuanyuan; Wang, Chaoyan; Chen, Zhendong

    2015-08-01

    An optical interferometry imaging system, which is composed of multiple sub-apertures, is a type of sensor that can break through the aperture limit and realize high-resolution imaging. This technique can be utilized to precisely measure the shapes, sizes and positions of astronomical objects and satellites; it can also be applied to space exploration, space debris observation, and satellite monitoring and survey. A Fizeau-type optical aperture synthesis telescope has the advantages of short baselines, a common mount and multiple sub-apertures, so it is feasible for instantaneous direct imaging through focal-plane combination. Since 2002, researchers at the Shanghai Astronomical Observatory have studied optical interferometry techniques. For array configurations, two optimal configurations have been proposed instead of the symmetrical circular distribution: the asymmetrical circular distribution and the Y-type distribution. On this basis, two kinds of structure were proposed based on the Fizeau interferometric telescope: one is a Y-type independent sub-aperture telescope, the other is a segmented-mirror telescope with a common secondary mirror. In this paper, we give a description of the interferometric telescope and image acquisition. Then we mainly consider simulations of image restoration based on the Y-type telescope and the segmented-mirror telescope. The Richardson-Lucy (RL) method, the Wiener method and the Ordered Subsets Expectation Maximization (OS-EM) method are studied in this paper. We also analyze the influence of different stopping rules. At the end of the paper, we present the reconstruction results for images of some satellites.
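
    Of the three restoration methods named above, the Wiener method admits the shortest frequency-domain sketch (NumPy; a constant noise-to-signal ratio k stands in for a full noise model, and the function name is illustrative):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=1e-2):
    """Frequency-domain Wiener deconvolution with a constant noise-to-signal
    ratio k. Assumes the PSF origin sits at index (0, 0); a centered PSF
    should be shifted with np.roll before calling."""
    H = np.fft.fft2(psf, s=blurred.shape)            # transfer function of the blur
    W = np.conj(H) / (np.abs(H) ** 2 + k)            # Wiener filter
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))
```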

  1. Photometric Properties of Network and faculae derived by HMI data compensated for scattered-light

    NASA Astrophysics Data System (ADS)

    Criscuoli, Serena; Norton, Aimee Ann; Whitney, Taylor

    2017-08-01

    We report on the photometric properties of faculae and network as observed in full-disk, scattered-light corrected images from the Helioseismic Magnetic Imager (HMI). We use a Lucy-Richardson deconvolution routine that corrects a full-disk intensity image in less than one second. Faculae are distinguished from network through proximity to active regions in addition to continuum intensity and magnetogram thresholds. This is the first report that full-disk image data, including center-to-limb variations, reproduce the photometric properties of faculae and network observed previously only in sub-arcsecond resolution, small field-of-view studies, i.e. that network exhibits in general higher photometric contrasts. More specifically, for magnetic flux values larger than approximately 300 G, the network is always brighter than faculae and the contrast difference increases toward the limb, where the network contrast is about twice the facular one. For lower magnetic flux values, pixels in network regions always appear darker than facular ones. Contrary to reports from previous full-disk observations, we also found that network exhibits a higher center-to-limb variation. Our results are in agreement with reports from simulations that indicate magnetic flux alone is a poor proxy of the photometric properties of magnetic features. We estimate that the facular and network contribution to irradiance variability of the current Cycle 24 is overestimated by at least 11% due to the photometric properties of network and faculae not being recognized as distinctly different.

  2. Astrometric observations of Phobos with the SRC on Mars Express. New data and comparison of different measurement techniques

    NASA Astrophysics Data System (ADS)

    Pasewaldt, A.; Oberst, J.; Willner, K.; Beisembin, B.; Hoffmann, H.; Matz, K. D.; Roatsch, T.; Michael, G.; Cardesín-Moinelo, A.; Zubarev, A. E.

    2015-08-01

    Aims: From April 2008 to August 2011 Mars Express carried out 74 Phobos flybys at distances between 669 and 5579 km. Images taken with the Super Resolution Channel (SRC) were used to determine the spacecraft-centered right ascension and declination of this Martian moon. Methods: Image positions of Phobos were measured using the limb-fit and control-point measurement techniques. Camera pointing and pointing drift were controlled by means of background star observations that were compared to corresponding positions from reference catalogs. Blurred and noisy images were restored by applying an image-based point spread function in a Richardson-Lucy deconvolution. Results: Here, we report on a set of 158 Phobos astrometric observations with estimated accuracies between 0.224 and 3.405 km circular w.r.t. the line of sight to the satellite. Control point measurements yield slightly more accurate results than the limb fit ones. Our observations are in good agreement with the current Phobos ephemerides by the Jet Propulsion Laboratory (JPL) and the Royal Observatory of Belgium (ROB) with mean offsets of up to 335 m. Our data can be used for the maintenance and update of these models. Tables A.1 and A.2 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/580/A28

  3. Pricing and simulation for real estate index options: Radial basis point interpolation

    NASA Astrophysics Data System (ADS)

    Gong, Pu; Zou, Dong; Wang, Jiayue

    2018-06-01

    This study employs the meshfree radial basis point interpolation (RBPI) for pricing real estate derivatives contingent on real estate index. This method combines radial and polynomial basis functions, which can guarantee the interpolation scheme with Kronecker property and effectively improve accuracy. An exponential change of variables, a mesh refinement algorithm and the Richardson extrapolation are employed in this study to implement the RBPI. Numerical results are presented to examine the computational efficiency and accuracy of our method.
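
    Richardson extrapolation, one of the acceleration tools used above, combines the same approximation computed at two step sizes to cancel the leading error term. A minimal sketch (pure Python; the function names and the example function are illustrative) for a second-order central difference:

```python
import math

def d_central(f, x, h):
    """Second-order central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson_extrapolate(a_h, a_h2, p=2):
    """Combine an order-p approximation at step h (a_h) and at h/2 (a_h2)
    to cancel the leading O(h^p) error term."""
    return (2**p * a_h2 - a_h) / (2**p - 1)

h = 0.1
crude = d_central(math.sin, 1.0, h)
refined = richardson_extrapolate(crude, d_central(math.sin, 1.0, h / 2))
# refined is much closer to cos(1.0) than either central difference alone
```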

  4. Regional regularization method for ECT based on spectral transformation of Laplacian

    NASA Astrophysics Data System (ADS)

    Guo, Z. H.; Kan, Z.; Lv, D. C.; Shao, F. Q.

    2016-10-01

    Image reconstruction in electrical capacitance tomography is an ill-posed inverse problem, and regularization techniques are usually used to solve the problem and suppress noise. An anisotropic regional regularization algorithm for electrical capacitance tomography is constructed using a novel approach called spectral transformation. Its function is derived and applied to the weighted gradient magnitude of the sensitivity of the Laplacian as a regularization term. With the optimum regional regularizer, a priori knowledge of the local nonlinearity degree of the forward map is incorporated into the proposed online reconstruction algorithm. Simulation experiments were performed to verify the capability of the new regularization algorithm to reconstruct images of superior quality compared with two conventional Tikhonov regularization approaches. The advantage of the new algorithm in improving performance and reducing shape distortion is demonstrated with the experimental data.

  5. Diffusion of Ideas by 19th Century Feminists: The Growth of Women's Magazines.

    ERIC Educational Resources Information Center

    Jolliffe, Lee

    The communications of suffragist Lucy Stone illustrate the changes that the growth of women's magazines brought to nineteenth century feminists. As indicated in letters to friends and family, Lucy Stone became an active proponent of women's rights at a time when public speaking tours were the best means of reaching a wide audience. As the printing…

  6. Emma Willard Students Illustrate New Children's Book about Feisty Female Characters

    ERIC Educational Resources Information Center

    Gross, Karen

    2016-01-01

    "Lady Lucy's Quest" (Shires Press) is a story about a feisty young girl in the Middle Ages who wants to become a Knight of the Round Table. The initial reaction of her family, the townspeople, and the knights is, stated simply, "no way." But Lucy perseveres and meets the three challenges of knighthood presented to her, albeit…

  7. 75 FR 73134 - Florida Power and Light Company, St. Lucie Plant, Units 1 and 2; Environmental Assessment and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-29

    ... proposed action does not involve any physical changes to the reactor, fuel, plant, structures, support structures, water, or land at the St. Lucie Plant, Units 1 and 2, site. The proposed action is in accordance... Impact Statement for License Renewal of Nuclear Power Plants'' (GEIS). Supplement 11 of the GEIS, issued...

  8. Situation Comedy, Feminism and Freud: Discourses of Gracie and Lucy.

    ERIC Educational Resources Information Center

    Mellencamp, Patricia

    This paper is based on a general analysis of 40 episodes of The George Burns and Gracie Allen Show and 170 (of 179) episodes of I Love Lucy, both of which were aired on television during the 1950s. Character portrayals of the stars and supporting actors/actresses are described in detail and analyzed from the perspectives of gender and sex…

  9. The Business of Being Dean of Women: A Letter from Lucy Diggs Slowe to Howard University Board of Trustees.

    ERIC Educational Resources Information Center

    Bell-Scott, Patricia

    1991-01-01

    Describes background of Lucy Diggs Slowe, high school teacher, junior high school principal, dean of women, and writer who was important philosopher and advocate of African American women's higher education of her day. Presents unedited letter written by Slowe to Howard University Board of Trustees in which she outlines incidents detrimental to…

  10. UV Spectroscopy of Lucy Mission Targets

    NASA Astrophysics Data System (ADS)

    Thomas, Cristina

    2017-08-01

    The Trojan asteroids are a significant population of primitive bodies trapped in Jupiter's stable L4 and L5 Lagrange regions. Their physical properties and existence in these particular orbits constrain the chemical and dynamical processes in our early Solar System. NASA's recently selected Lucy mission will perform the first reconnaissance of these asteroids and will answer many fundamental questions about the population. The compositions of the Trojans are not well understood. Spectroscopy and spectrophotometry in visible and near-infrared wavelengths show red slopes (spectra with reflectivity increasing towards the long wavelength end of the spectrum) and no diagnostic spectral absorption features. However, past spectral and photometric observations suggest there are unobserved features in ultraviolet wavelengths. We propose to obtain ultraviolet spectroscopy with WFC3 of four Trojan asteroids that are targets of the Lucy mission. Lucy will not have the capability to obtain ultraviolet spectra. The proposed observations can only be made using Hubble. We will determine if there are UV spectral features, as suggested by visible wavelength observations, and connect these features to candidate compositional components. These observations will enable connections between the compositions of Trojans and dynamical models of the early Solar System.

  11. Regularization Paths for Cox's Proportional Hazards Model via Coordinate Descent.

    PubMed

    Simon, Noah; Friedman, Jerome; Hastie, Trevor; Tibshirani, Rob

    2011-03-01

    We introduce a pathwise algorithm for the Cox proportional hazards model, regularized by convex combinations of ℓ1 and ℓ2 penalties (elastic net). Our algorithm fits via cyclical coordinate descent, and employs warm starts to find a solution along a regularization path. We demonstrate the efficacy of our algorithm on real and simulated data sets, and find a considerable speedup of our algorithm over competing methods.
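
    The coordinate-descent updates in the paper are for the Cox partial likelihood; the same cyclical scheme is easiest to see for the Gaussian (squared-error) elastic net, where each coordinate update is a soft-thresholding step. A minimal sketch (NumPy; assumes the columns of X are standardized so that (1/n)*||x_j||^2 = 1; all names and defaults are illustrative):

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def elastic_net_cd(X, y, lam, alpha=0.5, n_sweeps=200):
    """Cyclical coordinate descent for the Gaussian elastic net
    (1/(2n))*||y - X b||^2 + lam*(alpha*||b||_1 + (1-alpha)/2*||b||_2^2)."""
    n, p = X.shape
    beta = np.zeros(p)
    r = y - X @ beta                                   # full residual
    for _ in range(n_sweeps):
        for j in range(p):
            r += X[:, j] * beta[j]                     # partial residual without feature j
            rho = X[:, j] @ r / n
            beta[j] = soft_threshold(rho, lam * alpha) / (1.0 + lam * (1.0 - alpha))
            r -= X[:, j] * beta[j]                     # restore full residual
    return beta
```

    A regularization path in the spirit of the paper is then obtained by calling this routine on a decreasing grid of lam values, warm-starting each fit from the previous solution.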

  12. Iterative Nonlocal Total Variation Regularization Method for Image Restoration

    PubMed Central

    Xu, Huanyu; Sun, Quansen; Luo, Nan; Cao, Guo; Xia, Deshen

    2013-01-01

    In this paper, a Bregman-iteration-based total variation image restoration algorithm is proposed. Based on the Bregman iteration, the algorithm splits the original total variation problem into sub-problems that are easy to solve. Moreover, non-local regularization is introduced into the proposed algorithm, and a method to choose the non-local filter parameter locally and adaptively is proposed. Experimental results show that the proposed algorithm outperforms some other regularization methods. PMID:23776560

  13. Improving predictions of the effects of extreme events, land use, and climate change on the hydrology of watersheds in the Philippines

    NASA Astrophysics Data System (ADS)

    Benavidez, Rubianca; Jackson, Bethanna; Maxwell, Deborah; Paringit, Enrico

    2016-05-01

    Due to its location within the typhoon belt, the Philippines is vulnerable to tropical cyclones that can cause destructive floods. Climate change is likely to exacerbate these risks through increases in tropical cyclone frequency and intensity. To protect populations and infrastructure, disaster risk management in the Philippines focuses on real-time flood forecasting and structural measures such as dikes and retaining walls. Real-time flood forecasting in the Philippines mostly utilises two models from the Hydrologic Engineering Center (HEC): the Hydrologic Modeling System (HMS) for watershed modelling, and the River Analysis System (RAS) for inundation modelling. This research focuses on using non-structural measures for flood mitigation, such as changing land use management or watershed rehabilitation. This is being done by parameterising and applying the Land Utilisation and Capability Indicator (LUCI) model to the Cagayan de Oro watershed (1400 km2) in southern Philippines. The LUCI model is capable of identifying areas providing ecosystem services such as flood mitigation and agricultural productivity, and analysing trade-offs between services. It can also assess whether management interventions could enhance or degrade ecosystem services at fine spatial scales. The LUCI model was used to identify areas within the watershed that are providing flood mitigating services and areas that would benefit from management interventions. For the preliminary comparison, LUCI and HEC-HMS were run under the same scenario: baseline land use and the extreme rainfall event of Typhoon Bopha. The hydrographs from both models were then input to HEC-RAS to produce inundation maps. The novelty of this research is two-fold: (1) this type of ecosystem service modelling has not been carried out in the Cagayan de Oro watershed; and (2) this is the first application of the LUCI model in the Philippines. Since this research is still ongoing, the results presented in this paper are preliminary. As the land use and soil parameterisation for this watershed are refined and more scenarios are run through the model, more robust comparisons can be made between the hydrographs produced by LUCI and HEC-HMS and how those differences affect the inundation map produced by HEC-RAS.

  14. Hydrologic data summary for the St. Lucie River Estuary, Martin and St. Lucie Counties, Florida, 1998-2001

    USGS Publications Warehouse

    Byrne, Michael J.; Patino, Eduardo

    2004-01-01

    A hydrologic analysis was made at three canal sites and four tidal sites along the St. Lucie River Estuary in southeastern Florida from 1998 to 2001. The data included for analysis are stage, 15-minute flow, salinity, water temperature, turbidity, and suspended-solids concentration. During the period of record, the estuary experienced a drought, major storm events, and high-water discharge from Lake Okeechobee. Flow mainly occurred through the South Fork of the St. Lucie River; however, when flow increased through control structures along the C-23 and C-24 Canals, the North Fork was a larger than usual contributor of total freshwater inflow to the estuary. At one tidal site (Steele Point), the majority of flow was southward toward the St. Lucie Inlet; at a second tidal site (Indian River Bridge), the majority of flow was northward into the Indian River Lagoon. Large-volume stormwater discharge events greatly affected the St. Lucie River Estuary. Increased discharge typically was accompanied by salinity decreases that resulted in water becoming and remaining fresh throughout the estuary until the discharge events ended. Salinity in the estuary usually returned to prestorm levels within a few days after the events. Turbidity decreased and salinity began to increase almost immediately when the gates at the control structures closed. Salinity ranged from less than 1 to greater than 35 parts per thousand during the period of record (1998-2001), and typically varied by several parts per thousand during a tidal cycle. Suspended-solids concentrations were observed at one canal site (S-80) and two tidal sites (Speedy Point and Steele Point) during a discharge event in April and May 2000. Results suggest that most deposition of suspended-solids concentration occurs between S-80 and Speedy Point. The turbidity data collected also support this interpretation. The ratio of inorganic to organic suspended-solids concentration observed at S-80, Speedy Point, and Steele Point during the discharge event indicates that most flocculation of suspended-solids concentration occurs between Speedy Point and Steele Point.

  15. Nitrogen limitation, toxin synthesis potential, and toxicity of cyanobacterial populations in Lake Okeechobee and the St. Lucie River Estuary, Florida, during the 2016 state of emergency event.

    PubMed

    Kramer, Benjamin J; Davis, Timothy W; Meyer, Kevin A; Rosen, Barry H; Goleski, Jennifer A; Dick, Gregory J; Oh, Genesok; Gobler, Christopher J

    2018-01-01

    Lake Okeechobee, FL, USA, has been subjected to intensifying cyanobacterial blooms that can spread to the adjacent St. Lucie River and Estuary via natural and anthropogenically-induced flooding events. In July 2016, a large, toxic cyanobacterial bloom occurred in Lake Okeechobee and throughout the St. Lucie River and Estuary, leading Florida to declare a state of emergency. This study reports on measurements and nutrient amendment experiments performed in this freshwater-estuarine ecosystem (salinity 0-25 PSU) during and after the bloom. In July, all sites along the bloom exhibited dissolved inorganic nitrogen-to-phosphorus ratios < 6, while Microcystis dominated (> 95%) phytoplankton inventories from the lake to the central part of the estuary. Chlorophyll a and microcystin concentrations peaked (100 and 34 μg L-1, respectively) within Lake Okeechobee and decreased eastwards. Metagenomic analyses indicated that genes associated with the production of microcystin (mcyE) and the algal neurotoxin saxitoxin (sxtA) originated from Microcystis and multiple diazotrophic genera, respectively. There were highly significant correlations between levels of total nitrogen, microcystin, and microcystin synthesis gene abundance across all surveyed sites (p < 0.001), suggesting high levels of nitrogen supported the production of microcystin during this event. Consistent with this, experiments performed with low salinity water from the St. Lucie River during the event indicated that algal biomass was nitrogen-limited. In the fall, densities of Microcystis and concentrations of microcystin were significantly lower, green algae co-dominated with cyanobacteria, and multiple algal groups displayed nitrogen-limitation. These results indicate that monitoring and regulatory strategies in Lake Okeechobee and the St. Lucie River and Estuary should consider managing loads of nitrogen to control future algal and microcystin-producing cyanobacterial blooms.

  16. Another Vision of Progressivism: Marion Richardson's Triumph and Tragedy.

    ERIC Educational Resources Information Center

    Smith, Peter

    1996-01-01

    Profiles the career and contributions of English art teacher Marion Richardson (1892-1946). A dynamic and assertive woman, Richardson's ideas and practices changed British primary and secondary art teaching for many years. She often used "word pictures" (narrative descriptions of scenes or emotions) to inspire her students. (MJP)

  17. "Anne of Green Gables": A One-Act Musical Based on Lucy Maud Montgomery's Novel. Cue Sheet for Teachers.

    ERIC Educational Resources Information Center

    Flynn, Rosalind

    This performance guide is designed for teachers to use with students before and after a performance of the one-act musical based on Lucy Maud Montgomery's novel, "Anne of Green Gables," with music by Richard DeRosa and book and lyrics by Greg Gunning. The guide is designed to help teachers foster students' appreciation of theatre, dance,…

  18. A forest transect of pine mountain, Kentucky: changes since E. Lucy Braun and chestnut blight

    Treesearch

    Tracy S. Hawkins

    2006-01-01

    In 1997, forest composition and structure were determined for Hi Lewis Pine Barrens State Nature Preserve, a 68-ha tract on the south slope of Pine Mountain, Harlan County, Kentucky. Data collected from 28 0.04-ha plots were used to delineate forest types. Percent canopy compositions were compared with those reported by Dr. E. Lucy Braun prior to the peak of chestnut...

  19. ℓ1-regularized recursive total least squares based sparse system identification for the error-in-variables.

    PubMed

    Lim, Jun-Seok; Pang, Hee-Suk

    2016-01-01

    In this paper an ℓ1-regularized recursive total least squares (RTLS) algorithm is considered for sparse system identification. Although recursive least squares (RLS) has been successfully applied in sparse system identification, the estimation performance of RLS-based algorithms becomes worse when both input and output are contaminated by noise (the error-in-variables problem). We propose an algorithm to handle the error-in-variables problem. The proposed ℓ1-RTLS algorithm is an RLS-like iteration using ℓ1 regularization. The proposed algorithm not only gives excellent performance but also reduces the required complexity through effective handling of the inversion matrix. Simulations demonstrate the superiority of the proposed ℓ1-regularized RTLS in the sparse system identification setting.

  20. Marion Richardson: "Art and the Child," a Forgotten Classic

    ERIC Educational Resources Information Center

    Armstrong, Michael

    2015-01-01

    Marion Richardson was a revolutionary art teacher and schools inspector. First published in 1948, her book "Art and the Child" is one of the most remarkable educational documents of the period between the first and second world wars. This article reviews Richardson's philosophy and practice of art and suggests its continuing…

  1. Noisy image magnification with total variation regularization and order-changed dictionary learning

    NASA Astrophysics Data System (ADS)

    Xu, Jian; Chang, Zhiguo; Fan, Jiulun; Zhao, Xiaoqiang; Wu, Xiaomin; Wang, Yanzi

    2015-12-01

    Noisy low-resolution (LR) images are often obtained in real applications, but many existing image magnification algorithms cannot obtain good results from a noisy LR image. We propose a two-step image magnification algorithm to solve this problem. The proposed algorithm takes advantage of both regularization-based and learning-based methods. The first step is based on total variation (TV) regularization and the second step is based on sparse representation. In the first step, we add a constraint to the TV regularization model to magnify the LR image and at the same time suppress the noise in it. In the second step, we propose an order-changed dictionary training algorithm to train dictionaries dominated by texture details. Experimental results demonstrate that the proposed algorithm performs better than many other algorithms when the noise is not severe. The proposed algorithm can also provide better visual quality on natural LR images.

  2. Perimortem fractures in Lucy suggest mortality from fall out of tall tree.

    PubMed

    Kappelman, John; Ketcham, Richard A; Pearce, Stephen; Todd, Lawrence; Akins, Wiley; Colbert, Matthew W; Feseha, Mulugeta; Maisano, Jessica A; Witzel, Adrienne

    2016-09-22

    The Pliocene fossil 'Lucy' (Australopithecus afarensis) was discovered in the Afar region of Ethiopia in 1974 and is among the oldest and most complete fossil hominin skeletons discovered. Here we propose, on the basis of close study of her skeleton, that her cause of death was a vertical deceleration event or impact following a fall from considerable height that produced compressive and hinge (greenstick) fractures in multiple skeletal elements. Impacts that are so severe as to cause concomitant fractures usually also damage internal organs; together, these injuries are hypothesized to have caused her death. Lucy has been at the centre of a vigorous debate about the role, if any, of arboreal locomotion in early human evolution. It is therefore ironic that her death can be attributed to injuries resulting from a fall, probably out of a tall tree, thus offering unusual evidence for the presence of arborealism in this species.

  3. Impedance computed tomography using an adaptive smoothing coefficient algorithm.

    PubMed

    Suzuki, A; Uchiyama, A

    2001-01-01

    In impedance computed tomography, a fixed-coefficient regularization algorithm has frequently been used to improve the ill-conditioning of the Newton-Raphson algorithm. However, a lot of experimental data and a long computation time are needed to determine a good smoothing coefficient, because a good coefficient has to be chosen manually from a number of candidates and is held constant for every iteration. Thus, the fixed-coefficient regularization algorithm sometimes distorts the information or fails to have any effect. In this paper, a new adaptive smoothing coefficient algorithm is proposed. This algorithm automatically calculates the smoothing coefficient from the eigenvalue of the ill-conditioned matrix. Therefore, effective images can be obtained within a short computation time. The smoothing coefficient is also automatically adjusted using information related to the real resistivity distribution and the data collection method. In our impedance system, we have reconstructed the resistivity distributions of two phantoms using this algorithm. As a result, this algorithm needs only one-fifth of the computation time of the fixed-coefficient regularization algorithm. Compared to the fixed-coefficient regularization algorithm, the image is obtained more rapidly, making the method applicable to real-time monitoring of blood vessels.
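
    The paper's adaptive rule computes the smoothing coefficient from the eigenvalues of the ill-conditioned matrix arising in the Newton-Raphson iteration. The sketch below shows the generic shape of such a damped (Tikhonov-regularized) Gauss-Newton update, with the coefficient tied, purely for illustration, to the largest eigenvalue of J^T J; the fraction and the function name are assumptions, not the paper's rule:

```python
import numpy as np

def regularized_newton_step(J, residual, coeff_fraction=1e-2):
    """One damped Gauss-Newton update for an ill-conditioned inverse problem:
    solves (J^T J + lam * I) delta = J^T residual, with the smoothing
    coefficient lam taken as a fraction of the largest eigenvalue of J^T J
    (an illustrative stand-in for the paper's eigenvalue-based adaptive rule)."""
    JtJ = J.T @ J
    lam = coeff_fraction * np.linalg.eigvalsh(JtJ).max()
    return np.linalg.solve(JtJ + lam * np.eye(JtJ.shape[0]), J.T @ residual)
```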

  4. Photometric Properties of Network and Faculae Derived from HMI Data Compensated for Scattered Light

    NASA Astrophysics Data System (ADS)

    Criscuoli, Serena; Norton, Aimee; Whitney, Taylor

    2017-10-01

    We report on the photometric properties of faculae and network, as observed in full-disk, scattered-light-corrected images from the Helioseismic Magnetic Imager. We use a Lucy-Richardson deconvolution routine that corrects an image in less than one second. Faculae are distinguished from network through proximity to active regions. This is the first report that full-disk observations, including center-to-limb variations, reproduce the photometric properties of faculae and network observed previously only in sub-arcsecond-resolution, small field-of-view studies, i.e. that network, as defined by distance from active regions, exhibits higher photometric contrasts. Specifically, for magnetic flux values larger than approximately 300 G, the network is brighter than faculae and the contrast differences increase toward the limb, where the network contrast is about twice the facular one. For lower magnetic flux values, network appears darker than faculae. Contrary to reports from previous full-disk observations, we also found that network exhibits a higher center-to-limb variation. Our results are in agreement with reports from simulations that indicate magnetic flux alone is a poor proxy of the photometric properties of magnetic features. We estimate that the contribution of faculae and network to Total Solar Irradiance variability of the current Cycle 24 is overestimated by at least 11%, due to the photometric properties of network and faculae not being recognized as different. This estimate is specific to the method employed in this study to reconstruct irradiance variations, so caution should be paid when extending it to other techniques.

  5. Regularization Parameter Selection for Nonlinear Iterative Image Restoration and MRI Reconstruction Using GCV and SURE-Based Methods

    PubMed Central

    Ramani, Sathish; Liu, Zhihao; Rosen, Jeffrey; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.

    2012-01-01

    Regularized iterative reconstruction algorithms for imaging inverse problems require selection of appropriate regularization parameter values. We focus on the challenging problem of tuning regularization parameters for nonlinear algorithms for the case of additive (possibly complex) Gaussian noise. Generalized cross-validation (GCV) and (weighted) mean-squared error (MSE) approaches (based on Stein's Unbiased Risk Estimate— SURE) need the Jacobian matrix of the nonlinear reconstruction operator (representative of the iterative algorithm) with respect to the data. We derive the desired Jacobian matrix for two types of nonlinear iterative algorithms: a fast variant of the standard iterative reweighted least-squares method and the contemporary split-Bregman algorithm, both of which can accommodate a wide variety of analysis- and synthesis-type regularizers. The proposed approach iteratively computes two weighted SURE-type measures: Predicted-SURE and Projected-SURE (that require knowledge of noise variance σ2), and GCV (that does not need σ2) for these algorithms. We apply the methods to image restoration and to magnetic resonance image (MRI) reconstruction using total variation (TV) and an analysis-type ℓ1-regularization. We demonstrate through simulations and experiments with real data that minimizing Predicted-SURE and Projected-SURE consistently lead to near-MSE-optimal reconstructions. We also observed that minimizing GCV yields reconstruction results that are near-MSE-optimal for image restoration and slightly sub-optimal for MRI. Theoretical derivations in this work related to Jacobian matrix evaluations can be extended, in principle, to other types of regularizers and reconstruction algorithms. PMID:22531764
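
    For the linear Tikhonov (ridge) case, the GCV criterion discussed above has a closed form through the SVD, which makes the quantity being minimized concrete. A short sketch of that simpler case (NumPy; names are illustrative, and this is not the nonlinear Jacobian-based estimator of the paper):

```python
import numpy as np

def gcv_ridge(X, y, lambdas):
    """Generalized cross-validation for the ridge (Tikhonov) parameter:
    GCV(lam) = n * ||(I - A(lam)) y||^2 / (n - trace A(lam))^2,
    where A(lam) is the ridge hat matrix. Returns the minimizing lambda."""
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    Uty = U.T @ y
    n = y.size
    scores = []
    for lam in lambdas:
        f = s**2 / (s**2 + lam)              # filter factors = eigenvalues of A(lam)
        resid = y - U @ (f * Uty)            # (I - A(lam)) y
        scores.append(n * (resid @ resid) / (n - f.sum())**2)
    return lambdas[int(np.argmin(scores))]

# e.g. best_lam = gcv_ridge(X, y, np.logspace(-4, 2, 50))
```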

  6. Nested Conjugate Gradient Algorithm with Nested Preconditioning for Non-linear Image Restoration.

    PubMed

    Skariah, Deepak G; Arigovindan, Muthuvel

    2017-06-19

    We develop a novel optimization algorithm, which we call the Nested Non-Linear Conjugate Gradient algorithm (NNCG), for image restoration based on quadratic data fitting and smooth non-quadratic regularization. The algorithm is constructed as a nesting of two conjugate gradient (CG) iterations. The outer iteration is constructed as a preconditioned non-linear CG algorithm; the preconditioning is performed by the inner CG iteration, which is linear. The inner CG iteration, which performs the preconditioning for the outer CG iteration, is itself accelerated by another FFT-based, non-iterative preconditioner. We prove that the method converges to a stationary point for both convex and non-convex regularization functionals. We demonstrate experimentally that the proposed method outperforms the well-known majorization-minimization method used for convex regularization, and a non-convex inertial-proximal method for non-convex regularization functionals.
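
    The inner building block of such a nested scheme is an ordinary preconditioned (linear) CG solve; a minimal sketch is given below for orientation. It is not the NNCG algorithm itself, and apply_A / apply_M_inv are placeholders for the Hessian-like operator and the FFT-based preconditioner, respectively.

    ```python
    import numpy as np

    def pcg(apply_A, b, apply_M_inv, x0=None, tol=1e-8, max_iter=200):
        """Preconditioned conjugate gradient for an SPD operator A; apply_M_inv approximates A^{-1}."""
        x = np.zeros_like(b) if x0 is None else x0.copy()
        r = b - apply_A(x)
        z = apply_M_inv(r)
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = apply_A(p)
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol * np.linalg.norm(b):
                break
            z = apply_M_inv(r)
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x
    ```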

  7. The contributions of Lewis Fry Richardson to drainage theory, soil physics, and the soil-plant-atmosphere continuum

    NASA Astrophysics Data System (ADS)

    Knight, John; Raats, Peter

    2016-04-01

    The EGU Division on Nonlinear Processes in Geophysics awards the Lewis Fry Richardson Medal. Richardson's significance is highlighted in http://www.egu.eu/awards-medals/portrait-lewis-fry-richardson/, but his contributions to soil physics and to numerical solutions of heat and diffusion equations are not mentioned. We would like to draw attention to those little-known contributions. Lewis Fry Richardson (1881-1953) made important contributions to many fields including numerical weather prediction, finite difference solutions of partial differential equations, turbulent flow and diffusion, fractals, quantitative psychology and studies of conflict. He invented numerical weather prediction during World War I, although his methods were not successfully applied until 1950, after the invention of fast digital computers. In 1922 he published the book 'Weather Prediction by Numerical Process', of which few copies were sold and even fewer were read until the 1950s. To model heat and mass transfer in the atmosphere, he did much original work on turbulent flow and defined what is now known as the Richardson number. His technique for improving the convergence of a finite difference calculation is known as Richardson extrapolation, and was used by John Philip in his 1957 semi-analytical solution of the Richards equation for water movement in unsaturated soil. Richardson's first papers in 1908 concerned the numerical solution of the free surface problem of unconfined flow of water in saturated soil, arising in the design of drain spacing in peat. Later, for the lower boundary of his atmospheric model he needed to understand the movement of heat, liquid water and water vapor in what is now called the vadose zone and the soil-plant-atmosphere system, and to model coupled transfer of heat and flow of water in unsaturated soil. Finding little previous work, he formulated partial differential equations for transient, vertical flow of liquid water and for transfer of heat and water vapor. He paid considerable attention to the balances of water and energy at the soil-atmosphere and plant-atmosphere interfaces, making use of the concept of transfer resistance introduced by Brown and Escombe (1900) for leaf-atmosphere interfaces. He incorporated finite difference versions of all equations into his numerical weather forecasting model. From 1916, Richardson drove an ambulance in France in World War I, did weather computations in his spare time, and wrote a draft of his book. Later researchers such as L.A. Richards, D.A. de Vries and J.R. Philip from the 1930s to the 1950s were unaware that Richardson had anticipated many of their ideas on soil liquid water, heat, water vapor, and the soil-plant-atmosphere system. The Richards (1931) equation could rightly be called the Richardson (1922) equation! Richardson (1910) developed what we now call the Crank-Nicolson implicit method for the heat or diffusion equation. To save effort, he used an explicit three-level method after the first time step. Crank and Nicolson (1947) pointed out the instability in the explicit method, and used his implicit method for all time steps. Hanks and Bowers (1962) adapted the Crank-Nicolson method to solve the Richards equation. So we could say that Hanks and Bowers used the Richardson finite difference method to solve the Richardson equation for soil water flow!
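
    For readers who have not met Richardson extrapolation, a tiny self-contained illustration (our own example, not taken from this record) of cancelling the leading error term of a second-order approximation:

    ```python
    import numpy as np

    def richardson_extrapolate(f_h, f_h2, order=2):
        """Combine step-h and step-h/2 approximations to cancel the leading O(h^order) error."""
        return (2**order * f_h2 - f_h) / (2**order - 1)

    # Central-difference derivative of sin at x = 1, which is O(h^2) accurate
    d = lambda h: (np.sin(1 + h) - np.sin(1 - h)) / (2 * h)
    h = 0.1
    print(abs(d(h) - np.cos(1)))                                     # error of the plain estimate
    print(abs(richardson_extrapolate(d(h), d(h / 2)) - np.cos(1)))   # markedly smaller error
    ```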

  8. Experimental Investigation of the Behavior of Sub-Grid Scale Motions in Turbulent Shear Flow

    NASA Technical Reports Server (NTRS)

    Cantwell, Brian

    1992-01-01

    Experiments have been carried out on a vertical jet of helium issuing into a co-flow of air at a fixed exit velocity ratio of 2.0. At all the experimental conditions studied, the flow exhibits a strong self excited periodicity. The natural frequency behavior of the jet, the underlying fine-scale flow structure, and the transition to turbulence have been studied over a wide range of flow conditions. The experiments were conducted in a variable pressure facility which made it possible to vary the Reynolds number and Richardson number independently. A stroboscopic schlieren system was used for flow visualization and single-component Laser Doppler Anemometry was used to measure the axial component of velocity. The flow exhibits several interesting features. The presence of co-flow eliminates the random meandering typical of buoyant plumes in a quiescent environment and the periodicity of the helium jet under high Richardson number conditions is striking. Under these conditions transition to turbulence consists of a rapid but highly structured and repeatable breakdown and intermingling of jet and freestream fluid. At Ri = 1.6 the three-dimensional structure of the flow is seen to repeat from cycle to cycle. The point of transition moves closer to the jet exit as either the Reynolds number or the Richardson number increases. The wavelength of the longitudinal instability increases with Richardson number. At low Richardson numbers, the natural frequency scales on an inertial time scale. At high Richardson number the natural frequency scales on a buoyancy time scale. The transition from one flow regime to another occurs over a narrow range of Richardson numbers from 0.7 to 1. A buoyancy Strouhal number is used to correlate the high Richardson number frequency behavior.

  9. Novel cooperative neural fusion algorithms for image restoration and image fusion.

    PubMed

    Xia, Youshen; Kamel, Mohamed S

    2007-02-01

    To deal with the problem of restoring degraded images with non-Gaussian noise, this paper proposes a novel cooperative neural fusion regularization (CNFR) algorithm for image restoration. Compared with conventional regularization algorithms for image restoration, the proposed CNFR algorithm relaxes the need to estimate an optimal regularization parameter. Furthermore, to enhance the quality of restored images, this paper presents a cooperative neural fusion (CNF) algorithm for image fusion. Compared with existing signal-level image fusion algorithms, the proposed CNF algorithm can greatly reduce the loss of contrast information under blind Gaussian noise environments. The performance analysis shows that the two proposed neural fusion algorithms converge globally to the robust and optimal image estimate. Simulation results confirm that, in different noise environments, the two proposed neural fusion algorithms can obtain a better image estimate than several well-known image restoration and image fusion methods.

  10. From Lucy to Kadanuumuu: balanced analyses of Australopithecus afarensis assemblages confirm only moderate skeletal dimorphism.

    PubMed

    Reno, Philip L; Lovejoy, C Owen

    2015-01-01

    Sexual dimorphism in body size is often used as a correlate of social and reproductive behavior in Australopithecus afarensis. In addition to a number of isolated specimens, the sample for this species includes two small associated skeletons (A.L. 288-1 or "Lucy" and A.L. 128/129) and a geologically contemporaneous death assemblage of several larger individuals (A.L. 333). These have driven both perceptions and quantitative analyses concluding that Au. afarensis was markedly dimorphic. The Template Method enables simultaneous evaluation of multiple skeletal sites, thereby greatly expanding sample size, and reveals that Au. afarensis dimorphism was similar to that of modern humans. A new very large partial skeleton (KSD-VP-1/1 or "Kadanuumuu") can now also be used, like Lucy, as a template specimen. In addition, the recently developed Geometric Mean Method has been used to argue that Au. afarensis was equally or even more dimorphic than gorillas. However, in its previous application Lucy and A.L. 128/129 accounted for 10 of 11 estimates of female size. Here we directly compare the two methods and demonstrate that including multiple measurements from the same partial skeleton that falls at the margin of the species size range dramatically inflates dimorphism estimates. Prevention of the dominance of a single specimen's contribution to calculations of multiple dimorphism estimates confirms that Au. afarensis was only moderately dimorphic.

  11. Cyanobacteria of the 2016 Lake Okeechobee and Okeechobee Waterway harmful algal bloom

    USGS Publications Warehouse

    Rosen, Barry H.; Davis, Timothy W.; Gobler, Christopher J.; Kramer, Benjamin J.; Loftin, Keith A.

    2017-05-31

    Lake Okeechobee and the Okeechobee Waterway (the St. Lucie Canal and River and the Caloosahatchee River) experienced an extensive harmful algal bloom in 2016. In addition to the very visible bloom of the cyanobacterium Microcystis aeruginosa, several other cyanobacteria were present. These other species were less conspicuous; however, they have the potential to produce a variety of cyanotoxins, including anatoxins, cylindrospermopsins, and saxitoxins, in addition to the microcystins commonly associated with Microcystis. Some of these species were found before, during, and 2 weeks after the large Microcystis bloom and could provide a better understanding of bloom dynamics and succession. This report provides photographic documentation and taxonomic assessment of the cyanobacteria present in Lake Okeechobee, the Caloosahatchee River, and the St. Lucie Canal, with samples collected June 1st from the Caloosahatchee River and Lake Okeechobee and in July from the St. Lucie Canal. The majority of the images were of live organisms, allowing their natural complement of pigmentation to be captured. The report provides a digital image-based taxonomic record of the Lake Okeechobee and Okeechobee Waterway microscopic flora. It is anticipated that these images will facilitate current and future studies on this system, such as understanding the timing of cyanobacteria blooms and their potential toxin production.

  12. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation

    PubMed Central

    Zhang, Jie; Fan, Shangang; Xiong, Jian; Cheng, Xiefeng; Sari, Hikmet; Adachi, Fumiyuki

    2017-01-01

    L1/2 and L2/3 are two typical non-convex regularizations of Lp (0 < p < 1).

  13. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation.

    PubMed

    Li, Yunyi; Zhang, Jie; Fan, Shangang; Yang, Jie; Xiong, Jian; Cheng, Xiefeng; Sari, Hikmet; Adachi, Fumiyuki; Gui, Guan

    2017-12-15

    L1/2 and L2/3 are two typical non-convex regularizations of Lp (0 < p < 1).
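
    The record above is truncated, so the sketch below is only a generic illustration of iteratively reweighted soft thresholding for an Lp (0 < p < 1) penalty; it is not the SAITA algorithm or its multiple sub-dictionary representation, and all names are placeholders.

    ```python
    import numpy as np

    def soft_threshold(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def irl1_lp(A, b, lam=0.1, p=0.5, n_iter=200, eps=1e-6):
        """Generic sketch: minimize 0.5*||Ax - b||^2 + lam*||x||_p^p via reweighted soft thresholding."""
        x = np.zeros(A.shape[1])
        step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of the quadratic term
        for _ in range(n_iter):
            grad = A.T @ (A @ x - b)
            w = p * (np.abs(x) + eps) ** (p - 1.0)  # weights from linearizing the Lp penalty
            x = soft_threshold(x - step * grad, step * lam * w)
        return x
    ```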

  14. Reynolds number dependence of relative dispersion statistics in isotropic turbulence

    NASA Astrophysics Data System (ADS)

    Sawford, Brian L.; Yeung, P. K.; Hackl, Jason F.

    2008-06-01

    Direct numerical simulation results for a range of relative dispersion statistics over Taylor-scale Reynolds numbers up to 650 are presented in an attempt to observe and quantify inertial subrange scaling and, in particular, Richardson's t³ law. The analysis includes the mean-square separation and a range of important but less-studied differential statistics for which the motion is defined relative to that at time t = 0. It seeks to unambiguously identify and quantify the Richardson scaling by demonstrating convergence with both the Reynolds number and initial separation. According to these criteria, the standard compensated plots for these statistics in inertial subrange scaling show clear evidence of a Richardson range but with an imprecise estimate for the Richardson constant. A modified version of the cube-root plots introduced by Ott and Mann [J. Fluid Mech. 422, 207 (2000)] confirms such convergence. It has been used to yield more precise estimates for Richardson's constant g, which decrease with Taylor-scale Reynolds number over the range 140-650. Extrapolation to the large Reynolds number limit gives an asymptotic value for Richardson's constant in the range g = 0.55-0.57, depending on the functional form used to make the extrapolation.
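
    For reference, the Richardson scaling discussed above can be written (in standard notation, not quoted from the record) as

    ```latex
    \langle r^{2}(t) \rangle \;=\; g\,\varepsilon\, t^{3},
    ```

    where r is the pair separation, ε the mean energy dissipation rate, and g Richardson's constant, which the study above estimates as g ≈ 0.55-0.57 in the large-Reynolds-number limit.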

  15. Medical women of the West.

    PubMed Central

    Scully, A L

    1988-01-01

    The presence in the West of women physicians with degrees from regular medical schools spans a period of approximately 130 years. Women's Medical College of Pennsylvania graduated many of these early women physicians. The first woman medical graduate of a western school was Lucy M. Field Wanzer, who finished in 1876 at the Department of Medicine, University of California in San Francisco. Soon thereafter, schools that would become Stanford University and the Oregon Health Sciences University schools of medicine, as well as the newly founded University of Southern California, were contributing to the pool of women physicians. The University of Michigan Medical School, the first coeducational state medical school, also educated some of the western women physicians, who by 1910 numbered about 155. This regional account of the progress of women physicians as they strove to become an integral part of the profession emphasizes the familiar themes of altruism, ingenuity, and perseverance that characterized their efforts. PMID:3074578

  16. An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng Jinchao; Qin Chenghu; Jia Kebin

    2011-11-15

    Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise. Therefore, regularization methods are commonly used to find a regularized solution. Nevertheless, for the quality of the reconstructed bioluminescent source obtained by regularization methods, the choice of the regularization parameters is crucial. To date, the selection of regularization parameters remains challenging. To address these problems, the authors propose a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation for modeling the bioluminescent photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. Then, the relationship between the unknown source distribution and multiview and multispectral boundary measurements is established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated as an ℓ2 data fidelity and a general regularization term. When choosing the regularization parameters for BLT, an efficient model function approach is proposed, which does not require knowledge of the noise level. This approach only requires the computation of the residual and regularized solution norm. With this knowledge, we construct the model function to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, the micro-CT based mouse phantom was used for simulation verification. Simulation experiments were used to illustrate why multispectral data were used rather than monochromatic data. Furthermore, the study conducted using an adaptive regularization parameter demonstrated our ability to accurately localize the bioluminescent source. With the adaptively estimated regularization parameter, the reconstructed center position of the source was (20.37, 31.05, 12.95) mm, and the distance to the real source was 0.63 mm. The results of the dual-source experiments further showed that our algorithm could localize the bioluminescent sources accurately. The authors then presented experimental evidence that the proposed algorithm is computationally more efficient than the heuristic method. The effectiveness of the new algorithm was also confirmed by comparing it with the L-curve method. Furthermore, various initial guesses for the regularization parameter were used to illustrate the convergence of our algorithm. Finally, an in vivo mouse experiment further illustrates the effectiveness of the proposed algorithm. Conclusions: Utilizing numerical, physical phantom and in vivo examples, we demonstrated that the bioluminescent sources could be reconstructed accurately with automatic regularization parameters. The proposed algorithm exhibited superior performance to both the heuristic regularization parameter choice method and the L-curve method in terms of computational speed and localization error.

  17. A high order accurate finite element algorithm for high Reynolds number flow prediction

    NASA Technical Reports Server (NTRS)

    Baker, A. J.

    1978-01-01

    A Galerkin-weighted residuals formulation is employed to establish an implicit finite element solution algorithm for generally nonlinear initial-boundary value problems. Solution accuracy, and convergence rate with discretization refinement, are quantized in several error norms, by a systematic study of numerical solutions to several nonlinear parabolic and a hyperbolic partial differential equation characteristic of the equations governing fluid flows. Solutions are generated using selective linear, quadratic and cubic basis functions. Richardson extrapolation is employed to generate a higher-order accurate solution to facilitate isolation of truncation error in all norms. Extension of the mathematical theory underlying accuracy and convergence concepts for linear elliptic equations is predicted for equations characteristic of laminar and turbulent fluid flows at nonmodest Reynolds number. The nondiagonal initial-value matrix structure introduced by the finite element theory is determined intrinsic to improved solution accuracy and convergence. A factored Jacobian iteration algorithm is derived and evaluated to yield a consequential reduction in both computer storage and execution CPU requirements while retaining solution accuracy.

  18. Higher Order Time Integration Schemes for the Unsteady Navier-Stokes Equations on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Jothiprasad, Giridhar; Mavriplis, Dimitri J.; Caughey, David A.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The efficiency gains obtained using higher-order implicit Runge-Kutta schemes as compared with the second-order accurate backward difference schemes for the unsteady Navier-Stokes equations are investigated. Three different algorithms for solving the nonlinear system of equations arising at each timestep are presented. The first algorithm (NMG) is a pseudo-time-stepping scheme which employs a non-linear full approximation storage (FAS) agglomeration multigrid method to accelerate convergence. The other two algorithms are based on inexact Newton methods. The linear system arising at each Newton step is solved using iterative/Krylov techniques and left preconditioning is used to accelerate convergence of the linear solvers. One of the methods (LMG) uses Richardson's iterative scheme for solving the linear system at each Newton step while the other (PGMRES) uses the Generalized Minimal Residual method. Results demonstrating the relative superiority of these Newton-based schemes are presented. Efficiency gains as high as 10 are obtained by combining the higher-order time integration schemes with the more efficient nonlinear solvers.
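
    As background for the LMG variant mentioned above, a stationary Richardson iteration for a linear system takes only a few lines; the sketch below is a generic illustration (apply_A and the damping parameter omega are placeholders), not the preconditioned solver used in the paper.

    ```python
    import numpy as np

    def richardson_iteration(apply_A, b, omega, x0=None, tol=1e-8, max_iter=500):
        """Stationary Richardson iteration x <- x + omega*(b - A x); for SPD A it converges when 0 < omega < 2/lambda_max."""
        x = np.zeros_like(b) if x0 is None else x0.copy()
        for _ in range(max_iter):
            r = b - apply_A(x)
            if np.linalg.norm(r) < tol * np.linalg.norm(b):
                break
            x = x + omega * r
        return x
    ```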

  19. 13. 'WAITING AT THE DRAWBRIDGE.' THE COAL SCHOONER LUCY MAY ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    13. 'WAITING AT THE DRAWBRIDGE.' THE COAL SCHOONER LUCY MAY WAITING AT THE DRAW, JUNE 19, 1896. Photocopy of photograph (original glass plate negative #T89 in the collection of the Annisquam Historical Society, Annisquam, Massachusetts). Photographer: Martha Harvey (1862-1949). (The handwritten legend along the top edge of the photograph is scratched in the emulsion of the original glass plate negative. Consequently it reads in reverse when printed.) - Annisquam Bridge, Spanning Lobster Cove between Washington & River Streets, Gloucester, Essex County, MA

  20. Emma Lucy Braun's forest plots in eastern North America.

    PubMed

    Ricklefs, Robert E

    2018-02-01

    Relative abundances of tree species are presented for the 348 forest plots described in E. Lucy Braun's (1950) book, Deciduous Forests of Eastern North America (Hafner, New York, facsimile reprint 1972). Information about the plots includes forest type, location with latitude and longitude, WorldClim climate variables, and sources of original studies where applicable. No copyright restrictions are associated with the use of this data set. Please cite this article when the data are used in other publications. © 2017 by the Ecological Society of America.

  1. A Class of Manifold Regularized Multiplicative Update Algorithms for Image Clustering.

    PubMed

    Yang, Shangming; Yi, Zhang; He, Xiaofei; Li, Xuelong

    2015-12-01

    Multiplicative update algorithms are important tools for information retrieval, image processing, and pattern recognition. However, when graph regularization is added to the cost function, different classes of sample data may be mapped to the same subspace, which leads to an increase in the data clustering error rate. In this paper, an improved nonnegative matrix factorization (NMF) cost function is introduced. Based on this cost function, a class of novel graph regularized NMF algorithms is developed, which results in a class of extended multiplicative update algorithms with manifold structure regularization. Analysis shows that, during learning, the proposed algorithms can efficiently minimize the rank of the data representation matrix. Theoretical results presented in this paper are confirmed by simulations. For different initializations and data sets, variation curves of cost functions and decomposition data are presented to show the convergence features of the proposed update rules. Basis images, reconstructed images, and clustering results are utilized to present the efficiency of the new algorithms. Finally, the clustering accuracies of different algorithms are also investigated, which shows that the proposed algorithms can achieve state-of-the-art performance in applications of image clustering.
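
    As a rough orientation, the sketch below shows the standard multiplicative updates for graph-regularized NMF in the style of GNMF (X ≈ U Vᵀ with a Tr(Vᵀ L V) smoothness term on the sample graph W); the update rules actually proposed in this record use a modified cost function, so treat this purely as an assumed baseline.

    ```python
    import numpy as np

    def gnmf(X, W, k, lam=1.0, n_iter=200, eps=1e-10):
        """Graph-regularized NMF sketch: X ≈ U V^T with graph Laplacian smoothing on the columns of X."""
        m, n = X.shape
        rng = np.random.default_rng(0)
        U = rng.random((m, k))
        V = rng.random((n, k))
        D = np.diag(W.sum(axis=1))                  # degree matrix; L = D - W
        for _ in range(n_iter):
            U *= (X @ V) / (U @ (V.T @ V) + eps)
            V *= (X.T @ U + lam * (W @ V)) / (V @ (U.T @ U) + lam * (D @ V) + eps)
        return U, V
    ```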

  2. Concentration Measurements in Self-Excited, Momentum-Dominated Helium Jets

    NASA Technical Reports Server (NTRS)

    Yildirim, Bekir Sedat

    2004-01-01

    Flow structure of momentum-dominated pure helium jets discharged vertically into ambient air was investigated using the high-speed rainbow schlieren deflectometry (RSD) technique. Effects of the operating parameters, i.e., Reynolds number (Re) and Richardson number (Ri), on the oscillatory behavior of the flow were examined over a range of experimental conditions. To seek the individual effect of these parameters, one of them was fixed and the other was varied with certain constraints. Measurements revealed highly periodic oscillations in the laminar region as well as high regularity in the transition and turbulent regions. Maximum spectral power profiles at different axial locations indicated the oscillation amplitude increasing until the breakdown of the jet in the turbulent regime. The transition from laminar to turbulent flow was also investigated. Fast Fourier transform analysis performed in the transition regime showed that the flow oscillates at a unique frequency, which was the same in the upstream laminar flow region. Measured deflection angle data were used in an Abel inversion algorithm to construct the helium concentration fields. Instantaneous helium concentration contours revealed changes in the flow structure and evolution of vortical structures during an oscillation cycle. Temporal evolution plots of helium concentration at different axial locations showed repeatable oscillations at all axial and radial locations up to the turbulent regime. A cross-correlation technique, applied to find the spatial displacements of the vortical structures, provided correlation coefficient peaks between consecutive schlieren images. Results show that the vortical structures convected and accelerated only in the axial direction.

  3. Robust dynamic myocardial perfusion CT deconvolution using adaptive-weighted tensor total variation regularization

    NASA Astrophysics Data System (ADS)

    Gong, Changfei; Zeng, Dong; Bian, Zhaoying; Huang, Jing; Zhang, Xinyu; Zhang, Hua; Lu, Lijun; Feng, Qianjin; Liang, Zhengrong; Ma, Jianhua

    2016-03-01

    Dynamic myocardial perfusion computed tomography (MPCT) is a promising technique for diagnosis and risk stratification of coronary artery disease by assessing the myocardial perfusion hemodynamic maps (MPHM). Meanwhile, the repeated scanning of the same region potentially results in a relatively large radiation dose to patients. In this work, we present a robust MPCT deconvolution algorithm with adaptive-weighted tensor total variation regularization to estimate the residue function accurately in the low-dose context, termed `MPD-AwTTV'. More specifically, the AwTTV regularization takes into account the anisotropic edge property of the MPCT images compared with the conventional total variation (TV) regularization, which can mitigate the drawbacks of TV regularization. Subsequently, an effective iterative algorithm was adopted to minimize the associated objective function. Experimental results on a modified XCAT phantom demonstrated that the present MPD-AwTTV algorithm outperforms other existing deconvolution algorithms in terms of noise-induced artifact suppression, edge detail preservation and accurate MPHM estimation.

  4. Basis Expansion Approaches for Regularized Sequential Dictionary Learning Algorithms With Enforced Sparsity for fMRI Data Analysis.

    PubMed

    Seghouane, Abd-Krim; Iqbal, Asif

    2017-09-01

    Sequential dictionary learning algorithms have been successfully applied to functional magnetic resonance imaging (fMRI) data analysis. fMRI data sets are, however, structured data matrices with a notion of temporal smoothness in the column direction. This prior information, which can be converted into a constraint of smoothness on the learned dictionary atoms, has seldom been included in classical dictionary learning algorithms when applied to fMRI data analysis. In this paper, we tackle this problem by proposing two new sequential dictionary learning algorithms dedicated to fMRI data analysis by accounting for this prior information. These algorithms differ from the existing ones in their dictionary update stage. The steps of this stage are derived as a variant of the power method for computing the SVD. The proposed algorithms generate regularized dictionary atoms via the solution of a left regularized rank-one matrix approximation problem where temporal smoothness is enforced via regularization through basis expansion and sparse basis expansion in the dictionary update stage. Applications to synthetic data experiments and real fMRI data sets illustrating the performance of the proposed algorithms are provided.

  5. Kernel Recursive Least-Squares Temporal Difference Algorithms with Sparsification and Regularization

    PubMed Central

    Zhu, Qingxin; Niu, Xinzheng

    2016-01-01

    By combining with sparse kernel methods, least-squares temporal difference (LSTD) algorithms can construct the feature dictionary automatically and obtain a better generalization ability. However, the previous kernel-based LSTD algorithms do not consider regularization and their sparsification processes are batch or offline, which hinders their widespread application to online learning problems. In this paper, we combine the following five techniques and propose two novel kernel recursive LSTD algorithms: (i) online sparsification, which can cope with unknown state regions and be used for online learning, (ii) L2 and L1 regularization, which can avoid overfitting and eliminate the influence of noise, (iii) recursive least squares, which can eliminate matrix-inversion operations and reduce computational complexity, (iv) a sliding-window approach, which can avoid caching all history samples and reduce the computational cost, and (v) the fixed-point subiteration and online pruning, which can make L1 regularization easy to implement. Finally, simulation results on two 50-state chain problems demonstrate the effectiveness of our algorithms. PMID:27436996

  6. Kernel Recursive Least-Squares Temporal Difference Algorithms with Sparsification and Regularization.

    PubMed

    Zhang, Chunyuan; Zhu, Qingxin; Niu, Xinzheng

    2016-01-01

    By combining with sparse kernel methods, least-squares temporal difference (LSTD) algorithms can construct the feature dictionary automatically and obtain a better generalization ability. However, the previous kernel-based LSTD algorithms do not consider regularization and their sparsification processes are batch or offline, which hinders their widespread application to online learning problems. In this paper, we combine the following five techniques and propose two novel kernel recursive LSTD algorithms: (i) online sparsification, which can cope with unknown state regions and be used for online learning, (ii) L2 and L1 regularization, which can avoid overfitting and eliminate the influence of noise, (iii) recursive least squares, which can eliminate matrix-inversion operations and reduce computational complexity, (iv) a sliding-window approach, which can avoid caching all history samples and reduce the computational cost, and (v) the fixed-point subiteration and online pruning, which can make L1 regularization easy to implement. Finally, simulation results on two 50-state chain problems demonstrate the effectiveness of our algorithms.

  7. Spectral Regularization Algorithms for Learning Large Incomplete Matrices.

    PubMed

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-03-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm, Soft-Impute, iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example, it can obtain a rank-80 approximation of a 10⁶ × 10⁶ incomplete matrix with 10⁵ observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques.
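
    The core Soft-Impute step, soft-thresholding the singular values of the currently imputed matrix, can be sketched in a few lines; this dense-SVD version omits the low-rank and warm-start machinery that gives the paper its scalability, and the names are placeholders.

    ```python
    import numpy as np

    def soft_impute(X, mask, lam, n_iter=100):
        """Soft-Impute sketch: X holds observed entries where mask is True; lam is the nuclear-norm threshold."""
        Z = np.zeros_like(X)
        for _ in range(n_iter):
            filled = np.where(mask, X, Z)               # observed entries plus current estimate elsewhere
            U, s, Vt = np.linalg.svd(filled, full_matrices=False)
            s_thresh = np.maximum(s - lam, 0.0)         # soft-threshold the singular values
            Z = (U * s_thresh) @ Vt
        return Z
    ```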

  8. Spectral Regularization Algorithms for Learning Large Incomplete Matrices

    PubMed Central

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-01-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm, Soft-Impute, iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example, it can obtain a rank-80 approximation of a 10⁶ × 10⁶ incomplete matrix with 10⁵ observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques. PMID:21552465

  9. Efficient L1 regularization-based reconstruction for fluorescent molecular tomography using restarted nonlinear conjugate gradient.

    PubMed

    Shi, Junwei; Zhang, Bin; Liu, Fei; Luo, Jianwen; Bai, Jing

    2013-09-15

    For the ill-posed fluorescent molecular tomography (FMT) inverse problem, L1 regularization can protect high-frequency information like edges while effectively reducing image noise. However, the state-of-the-art L1 regularization-based algorithms for FMT reconstruction are expensive in memory, especially for large-scale problems. An efficient L1 regularization-based reconstruction algorithm based on nonlinear conjugate gradient with a restart strategy is proposed to increase computational speed with low memory consumption. The reconstruction results from phantom experiments demonstrate that the proposed algorithm can obtain high spatial resolution and high signal-to-noise ratio, as well as high localization accuracy for fluorescence targets.

  10. Generalization Performance of Regularized Ranking With Multiscale Kernels.

    PubMed

    Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin

    2016-05-01

    The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. The previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of the regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish the upper bound of the generalization error in terms of the complexity of hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.

  11. Further investigation on "A multiplicative regularization for force reconstruction"

    NASA Astrophysics Data System (ADS)

    Aucejo, M.; De Smet, O.

    2018-05-01

    We have recently proposed a multiplicative regularization to reconstruct mechanical forces acting on a structure from vibration measurements. This method does not require any selection procedure for choosing the regularization parameter, since the amount of regularization is automatically adjusted throughout an iterative resolution process. The proposed iterative algorithm has been developed with performance and efficiency in mind, but it is actually a simplified version of a full iterative procedure not described in the original paper. The present paper aims at introducing the full resolution algorithm and comparing it with its simplified version in terms of computational efficiency and solution accuracy. In particular, it is shown that both algorithms lead to very similar identified solutions.

  12. Toxin composition of the 2016 Microcystis aeruginosa bloom in the St. Lucie Estuary, Florida.

    PubMed

    Oehrle, Stuart; Rodriguez-Matos, Marliette; Cartamil, Michael; Zavala, Cristian; Rein, Kathleen S

    2017-11-01

    A bloom of the cyanobacterium Microcystis aeruginosa occurred in the St. Lucie Estuary during the summer of 2016, stimulated by the release of waters from Lake Okeechobee. This cyanobacterium produces the microcystins, a suite of heptapeptide hepatotoxins. The toxin composition of the bloom was analyzed and compared to an archived bloom sample from 2005. Microcystin-LR was the most abundant toxin, with lesser amounts of microcystin variants. Nodularin, cylindrospermopsin and anatoxin-a were not detected. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. MRI reconstruction with joint global regularization and transform learning.

    PubMed

    Tanc, A Korhan; Eksioglu, Ender M

    2016-10-01

    Sparsity based regularization has been a popular approach to remedy the measurement scarcity in image reconstruction. Recently, sparsifying transforms learned from image patches have been utilized as an effective regularizer for Magnetic Resonance Imaging (MRI) reconstruction. Here, we infuse additional global regularization terms into the patch-based transform learning. We develop an algorithm to solve the resulting novel cost function, which includes both patchwise and global regularization terms. Extensive simulation results indicate that the introduced mixed approach has improved MRI reconstruction performance, when compared to algorithms which use either the patchwise transform learning or the global regularization terms alone. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Modified truncated randomized singular value decomposition (MTRSVD) algorithms for large scale discrete ill-posed problems with general-form regularization

    NASA Astrophysics Data System (ADS)

    Jia, Zhongxiao; Yang, Yanfei

    2018-05-01

    In this paper, we propose new randomization-based algorithms for large-scale linear discrete ill-posed problems with general-form regularization, in which L is a regularization matrix. Our algorithms are inspired by the modified truncated singular value decomposition (MTSVD) method, which suits only small- to medium-scale problems, and by randomized SVD (RSVD) algorithms that generate good low-rank approximations to A. We use rank-k truncated randomized SVD (TRSVD) approximations to A, obtained by truncating rank-(k + q) RSVD approximations to A, where q is an oversampling parameter. The resulting algorithms are called modified TRSVD (MTRSVD) methods. At every step, we use the LSQR algorithm to solve the resulting inner least squares problem, which is proved to become better conditioned as k increases so that LSQR converges faster. We present sharp bounds for the approximation accuracy of the RSVDs and TRSVDs for severely, moderately and mildly ill-posed problems, and substantially improve a known basic bound for TRSVD approximations. We prove how to choose the stopping tolerance for LSQR in order to guarantee that the computed and exact best regularized solutions have the same accuracy. Numerical experiments illustrate that the best regularized solutions by MTRSVD are as accurate as the ones by the truncated generalized singular value decomposition (TGSVD) algorithm, and at least as accurate as those by some existing truncated randomized generalized singular value decomposition (TRGSVD) algorithms. This work was supported in part by the National Science Foundation of China (Nos. 11771249 and 11371219).
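
    The RSVD building block referred to above is the standard range-finder construction; a minimal dense sketch (without the truncation step, the LSQR inner solves, or the regularization matrix L) might look like this.

    ```python
    import numpy as np

    def randomized_svd(A, k, oversample=10, rng=None):
        """Basic randomized SVD (range finder plus small SVD), returning a rank-k approximation."""
        rng = np.random.default_rng() if rng is None else rng
        m, n = A.shape
        Omega = rng.standard_normal((n, k + oversample))
        Q, _ = np.linalg.qr(A @ Omega)          # orthonormal basis for an approximate range of A
        B = Q.T @ A                             # small (k + oversample) x n matrix
        Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
        U = Q @ Ub
        return U[:, :k], s[:k], Vt[:k, :]
    ```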

  15. Improved liver R2* mapping by pixel-wise curve fitting with adaptive neighborhood regularization.

    PubMed

    Wang, Changqing; Zhang, Xinyuan; Liu, Xiaoyun; He, Taigang; Chen, Wufan; Feng, Qianjin; Feng, Yanqiu

    2018-08-01

    To improve liver R2* mapping by incorporating adaptive neighborhood regularization into pixel-wise curve fitting. Magnetic resonance imaging R2* mapping remains challenging because of the low signal-to-noise ratio of the serial images. In this study, we proposed to exploit the neighboring pixels as regularization terms and adaptively determine the regularization parameters according to the interpixel signal similarity. The proposed algorithm, called pixel-wise curve fitting with adaptive neighborhood regularization (PCANR), was compared with the conventional nonlinear least squares (NLS) and nonlocal means filter-based NLS algorithms on simulated, phantom, and in vivo data. Visually, the PCANR algorithm generates R2* maps with significantly reduced noise and well-preserved tiny structures. Quantitatively, the PCANR algorithm produces R2* maps with lower root mean square errors at varying R2* values and signal-to-noise-ratio levels compared with the NLS and nonlocal means filter-based NLS algorithms. For high R2* values under low signal-to-noise-ratio levels, the PCANR algorithm outperforms the NLS and nonlocal means filter-based NLS algorithms in accuracy and precision, in terms of the mean and standard deviation of R2* measurements in selected regions of interest, respectively. The PCANR algorithm can reduce the effect of noise on liver R2* mapping, and the improved measurement precision will benefit the assessment of hepatic iron in clinical practice. Magn Reson Med 80:792-801, 2018. © 2018 International Society for Magnetic Resonance in Medicine.

  16. Perturbations of the Richardson number field by gravity waves

    NASA Technical Reports Server (NTRS)

    Wurtele, M. G.; Sharman, R. D.

    1985-01-01

    An analytic solution is presented for a stratified fluid of arbitrary constant Richardson number. By computer-aided analysis, the perturbation fields, including that of the Richardson number, can be calculated. The results of the linear analytic model were compared with nonlinear simulations, leading to the following conclusions: (1) the perturbations in the Richardson number field, when small, are produced primarily by the perturbations of the shear; (2) perturbations in the Richardson number field, even when small, are not symmetric, the increase being significantly larger than the decrease (the linear analytic solution and the nonlinear simulations both confirm this result); (3) as the perturbations grow, this asymmetry increases, but more so in the nonlinear simulations than in the linear analysis; (4) for large perturbations of the shear flow, the static stability, as represented by N², is the dominating mechanism, becoming zero or negative, and producing convective overturning; and (5) the conventional measure of linearity in lee wave theory, NH/U, is no longer the critical parameter (it is suggested that (H/u₀)(du₀/dz) takes on this role in a shearing flow).
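
    For context, the gradient Richardson number being perturbed here is, in the usual notation (not quoted from the record),

    ```latex
    \mathrm{Ri} \;=\; \frac{N^{2}}{\left(\partial U/\partial z\right)^{2}},
    \qquad
    N^{2} \;=\; -\frac{g}{\rho_{0}}\,\frac{\partial \rho}{\partial z},
    ```

    so perturbations can enter either through the shear ∂U/∂z or through the static stability N²; the abstract above finds the shear term dominant for small perturbations and N² dominant for large ones.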

  17. Advanced intensity-modulation continuous-wave lidar techniques for ASCENDS CO2 column measurements

    NASA Astrophysics Data System (ADS)

    Campbell, Joel F.; Lin, Bing; Nehrir, Amin R.; Harrison, F. W.; Obland, Michael D.; Meadows, Byron

    2015-10-01

    Global atmospheric carbon dioxide (CO2) measurements for the NASA Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) space mission are critical for improving our understanding of global CO2 sources and sinks. Advanced Intensity-Modulated Continuous-Wave (IM-CW) lidar techniques are investigated as a means of facilitating CO2 measurements from space to meet the ASCENDS measurement requirements. In recent numerical, laboratory and flight experiments we have successfully used the Binary Phase Shift Keying (BPSK) modulation technique to uniquely discriminate surface lidar returns from intermediate aerosol and cloud contamination. We demonstrate the utility of BPSK to eliminate sidelobes in the range profile as a means of making Integrated Path Differential Absorption (IPDA) column CO2 measurements in the presence of optically thin clouds, thereby eliminating the need to correct for sidelobe bias errors caused by the clouds. Furthermore, high accuracy and precision ranging to the surface as well as to the top of intermediate cloud layers, which is a requirement for the inversion of column CO2 number density measurements to column CO2 mixing ratios, has been demonstrated using new hyperfine interpolation techniques that take advantage of the periodicity of the modulation waveforms. This approach works well for both BPSK and linear swept-frequency modulation techniques. The BPSK technique under investigation has excellent auto-correlation properties while possessing a finite bandwidth. A comparison of BPSK and linear swept-frequency is also discussed in this paper. These results are extended to include Richardson-Lucy deconvolution techniques that push the resolution of the lidar beyond the limit implied by the bandwidth of the modulation, which is shown to be useful for making tree canopy measurements.

  18. Advanced IMCW Lidar Techniques for ASCENDS CO2 Column Measurements

    NASA Astrophysics Data System (ADS)

    Campbell, Joel; Lin, Bing; Nehrir, Amin; Harrison, Fenton; Obland, Michael

    2015-04-01

    Global atmospheric carbon dioxide (CO2) measurements for the NASA Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) space mission are critical for improving our understanding of global CO2 sources and sinks. Advanced Intensity-Modulated Continuous-Wave (IM-CW) lidar techniques are investigated as a means of facilitating CO2 measurements from space to meet the ASCENDS measurement requirements. In recent numerical, laboratory and flight experiments we have successfully used the Binary Phase Shift Keying (BPSK) modulation technique to uniquely discriminate surface lidar returns from intermediate aerosol and cloud contamination. We demonstrate the utility of BPSK to eliminate sidelobes in the range profile as a means of making Integrated Path Differential Absorption (IPDA) column CO2 measurements in the presence of optically thin clouds, thereby eliminating the need to correct for sidelobe bias errors caused by the clouds. Furthermore, high accuracy and precision ranging to the surface as well as to the top of intermediate cloud layers, which is a requirement for the inversion of column CO2 number density measurements to column CO2 mixing ratios, has been demonstrated using new hyperfine interpolation techniques that take advantage of the periodicity of the modulation waveforms. This approach works well for both BPSK and linear swept-frequency modulation techniques. The BPSK technique under investigation has excellent auto-correlation properties while possessing a finite bandwidth. A comparison of BPSK and linear swept-frequency is also discussed in this paper. These results are extended to include Richardson-Lucy deconvolution techniques that push the resolution of the lidar beyond the limit implied by the bandwidth of the modulation.

  19. Advanced Intensity-Modulation Continuous-Wave Lidar Techniques for ASCENDS CO2 Column Measurements

    NASA Technical Reports Server (NTRS)

    Campbell, Joel F.; Lin, Bing; Nehrir, Amin R.; Harrison, F. Wallace; Obland, Michael D.; Meadows, Byron

    2015-01-01

    Global atmospheric carbon dioxide (CO2) measurements for the NASA Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) space mission are critical for improving our understanding of global CO2 sources and sinks. Advanced Intensity-Modulated Continuous-Wave (IM-CW) lidar techniques are investigated as a means of facilitating CO2 measurements from space to meet the ASCENDS measurement requirements. In recent numerical, laboratory and flight experiments we have successfully used the Binary Phase Shift Keying (BPSK) modulation technique to uniquely discriminate surface lidar returns from intermediate aerosol and cloud contamination. We demonstrate the utility of BPSK to eliminate sidelobes in the range profile as a means of making Integrated Path Differential Absorption (IPDA) column CO2 measurements in the presence of optically thin clouds, thereby eliminating the need to correct for sidelobe bias errors caused by the clouds. Furthermore, high accuracy and precision ranging to the surface as well as to the top of intermediate cloud layers, which is a requirement for the inversion of column CO2 number density measurements to column CO2 mixing ratios, has been demonstrated using new hyperfine interpolation techniques that take advantage of the periodicity of the modulation waveforms. This approach works well for both BPSK and linear swept-frequency modulation techniques. The BPSK technique under investigation has excellent auto-correlation properties while possessing a finite bandwidth. A comparison of BPSK and linear swept-frequency is also discussed in this paper. These results are extended to include Richardson-Lucy deconvolution techniques that push the resolution of the lidar beyond the limit implied by the bandwidth of the modulation, which is shown to be useful for making tree canopy measurements.

  20. Revolutions in Science and Technology: Future Threats to US National Security

    DTIC Science & Technology

    2011-04-01

    "Ultrafast ytterbium-doped bulk lasers and laser amplifiers," Applied Physics B, Vol. 69, 1999, pp. 3-17. See Martin Richardson et al., page 15 ... breakout and surprise ... Martin Richardson, Timothy McComb, and Vikas Sudesh, "High Power Fiber Lasers and Applications to Manufacturing," Conference ... Journal of Applied Physics, Vol. 49, 2010. In 2008, Martin Richardson et al. stated that the "high power fiber laser market, currently estimated to ...

  1. Effects of Mixed Layer Shear on Vertical Heat Flux

    DTIC Science & Technology

    2016-12-01

    ... correlation of ice speed to heat flux (r = .312, p < .001). Relationships between ice speed and shear (r = .107, p < .001), ice speed and inverse Richardson number (r = .035, p = .256), inverse Richardson number and heat flux (r = .3, p < .001), and heat content and heat flux (r = .084, p < .001) were ...

  2. A Large-Scale Multi-Hop Localization Algorithm Based on Regularized Extreme Learning for Wireless Networks.

    PubMed

    Zheng, Wei; Yan, Xiaoyong; Zhao, Wei; Qian, Chengshan

    2017-12-20

    A novel large-scale multi-hop localization algorithm based on regularized extreme learning is proposed in this paper. The large-scale multi-hop localization problem is formulated as a learning problem. Unlike other similar localization algorithms, the proposed algorithm overcomes the shortcoming of traditional algorithms that are applicable only to isotropic networks, and therefore has strong adaptability to complex deployment environments. The proposed algorithm is composed of three stages: data acquisition, modeling, and location estimation. In the data acquisition stage, the training information between nodes of the given network is collected. In the modeling stage, the model relating hop counts to the physical distances between nodes is constructed using regularized extreme learning. In the location estimation stage, each node finds its specific location in a distributed manner. Theoretical analysis and several experiments show that the proposed algorithm can adapt to different topological environments with low computational cost. Furthermore, high accuracy can be achieved by this method without setting complex parameters.
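
    The "regularized extreme learning" core of the modeling stage amounts to a random hidden layer followed by a ridge-regularized least-squares fit of the output weights; the sketch below is a generic version of that idea (hop-count features and distance targets are placeholders), not the authors' localization pipeline.

    ```python
    import numpy as np

    def regularized_elm_fit(X, T, n_hidden=100, lam=1e-2, rng=None):
        """Regularized extreme learning machine: random hidden layer, ridge-regularized output weights."""
        rng = np.random.default_rng(0) if rng is None else rng
        W = rng.standard_normal((X.shape[1], n_hidden))
        b = rng.standard_normal(n_hidden)
        H = np.tanh(X @ W + b)                                       # random feature map
        beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ T)
        return W, b, beta

    def regularized_elm_predict(X, W, b, beta):
        return np.tanh(X @ W + b) @ beta
    ```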

  3. Rotationally resolved colors of the targets of NASA's Lucy mission

    NASA Astrophysics Data System (ADS)

    Emery, Joshua; Mottola, Stefano; Brown, Mike; Noll, Keith; Binzel, Richard

    2018-05-01

    We propose rotationally resolved photometry at 3.6 and 4.5 um of five Trojan asteroids and one Main Belt asteroid, the targets of NASA's Lucy mission. The proposed Spitzer observations are designed to meet a combination of science goals and mission support objectives. The science goals are: (1) search for signatures of volatiles and/or organics on the surfaces, which includes resolving a discrepancy between previous WISE and Spitzer measurements of Trojans; (2) provide new constraints on the cause of the rotational spectral heterogeneity detected on 3548 Eurybates at shorter wavelengths, in particular whether the heterogeneity extends to the 3-5 um region; (3) assess the possibility of spectral heterogeneity on the other targets, which will help test the hypothesis of Wong and Brown (2015) that the near-surface interiors of Trojans differ from their surfaces; and (4) obtain thermal data at 4.5 um for the Main Belt target Donaldjohanson to refine estimates of size and albedo and to provide the first estimate of thermal inertia. The mission support objectives are: (1) assess scientifically optimal encounter times (viewing geometries) for the fly-bys, since characterizing rotational spectral units now will enable the team to choose the most scientifically valuable part of each asteroid to view; (2) gather data to optimize observing parameters for Lucy instruments, since measuring brightness in the 3-5 um region and resolving the discrepancy between WISE and Spitzer will enable better planning of the Lucy spectral observations in this wavelength range; and (3) use the size, albedo, and thermal inertia of Donaldjohanson as fundamental data for planning the encounter with that Main Belt asteroid.

  4. Transport synthetic acceleration with opposing reflecting boundary conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zika, M.R.; Adams, M.L.

    2000-02-01

    The transport synthetic acceleration (TSA) scheme is extended to problems with opposing reflecting boundary conditions. This synthetic method employs a simplified transport operator as its low-order approximation. A procedure is developed that allows the use of the conjugate gradient (CG) method to solve the resulting low-order system of equations. Several well-known transport iteration algorithms are cast in a linear algebraic form to show their equivalence to standard iterative techniques. Source iteration in the presence of opposing reflecting boundary conditions is shown to be equivalent to a (poorly) preconditioned stationary Richardson iteration, with the preconditioner defined by the method of iterating on the incident fluxes on the reflecting boundaries. The TSA method (and any synthetic method) amounts to a further preconditioning of the Richardson iteration. The presence of opposing reflecting boundary conditions requires special consideration when developing a procedure to realize the CG method for the proposed system of equations. The CG iteration may be applied only to symmetric positive definite matrices; this condition requires the algebraic elimination of the boundary angular corrections from the low-order equations. As a consequence of this elimination, evaluating the action of the resulting matrix on an arbitrary vector involves two transport sweeps and a transmission iteration. Results of applying the acceleration scheme to a simple test problem are presented.

  5. Generalization Analysis of Fredholm Kernel Regularized Classifiers.

    PubMed

    Gong, Tieliang; Xu, Zongben; Chen, Hong

    2017-07-01

    Recently, a new framework, Fredholm learning, was proposed for semisupervised learning problems based on solving a regularized Fredholm integral equation. It allows a natural way to incorporate unlabeled data into learning algorithms to improve their prediction performance. Despite rapid progress on implementable algorithms with theoretical guarantees, the generalization ability of Fredholm kernel learning has not been studied. In this letter, we focus on investigating the generalization performance of a family of classification algorithms, referred to as Fredholm kernel regularized classifiers. We prove that the corresponding learning rate can achieve [Formula: see text] ([Formula: see text] is the number of labeled samples) in a limiting case. In addition, a representer theorem is provided for the proposed regularized scheme, which underlies its applications.

  6. Total variation regularization of the 3-D gravity inverse problem using a randomized generalized singular value decomposition

    NASA Astrophysics Data System (ADS)

    Vatankhah, Saeed; Renaut, Rosemary A.; Ardestani, Vahid E.

    2018-04-01

    We present a fast algorithm for the total variation regularization of the 3-D gravity inverse problem. Through imposition of the total variation regularization, subsurface structures presenting with sharp discontinuities are preserved better than when using a conventional minimum-structure inversion. The associated problem formulation for the regularization is nonlinear but can be solved using an iteratively reweighted least-squares algorithm. For small-scale problems the regularized least-squares problem at each iteration can be solved using the generalized singular value decomposition. This is not feasible for large-scale, or even moderate-scale, problems. Instead we introduce the use of a randomized generalized singular value decomposition in order to reduce the dimensions of the problem and provide an effective and efficient solution technique. For further efficiency an alternating direction algorithm is used to implement the total variation weighting operator within the iteratively reweighted least-squares algorithm. Presented results for synthetic examples demonstrate that the novel randomized decomposition provides good accuracy for reduced computational and memory demands as compared to use of classical approaches.

  7. Fast parallel MR image reconstruction via B1-based, adaptive restart, iterative soft thresholding algorithms (BARISTA).

    PubMed

    Muckley, Matthew J; Noll, Douglas C; Fessler, Jeffrey A

    2015-02-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms.

  8. Fast Parallel MR Image Reconstruction via B1-based, Adaptive Restart, Iterative Soft Thresholding Algorithms (BARISTA)

    PubMed Central

    Noll, Douglas C.; Fessler, Jeffrey A.

    2014-01-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms. PMID:25330484
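
    The momentum-plus-adaptive-restart ingredient common to both versions of this work can be illustrated, in heavily simplified form, on a toy l1-regularized least-squares problem. The sketch below uses a single scalar Lipschitz constant instead of the B1-based majorizing matrices described in the abstracts, and restarts the momentum whenever the objective increases; all problem data are synthetic.

      # Proximal gradient (ISTA) with Nesterov-style momentum and function-value restart for
      # min_x 0.5*||A x - y||^2 + lam*||x||_1. The scalar Lipschitz step is an assumption
      # standing in for the shift-variant majorizing matrices of the actual method.
      import numpy as np

      def soft_threshold(z, t):
          return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

      def fista_restart(A, y, lam, iters=300):
          L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the data term
          x = np.zeros(A.shape[1])
          z, t, obj_prev = x.copy(), 1.0, np.inf
          for _ in range(iters):
              x_new = soft_threshold(z - A.T @ (A @ z - y) / L, lam / L)
              obj = 0.5 * np.sum((A @ x_new - y) ** 2) + lam * np.sum(np.abs(x_new))
              if obj > obj_prev:                        # adaptive restart: drop the momentum
                  z, t = x.copy(), 1.0
                  x_new = soft_threshold(z - A.T @ (A @ z - y) / L, lam / L)
                  obj = 0.5 * np.sum((A @ x_new - y) ** 2) + lam * np.sum(np.abs(x_new))
              t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
              z = x_new + ((t - 1.0) / t_new) * (x_new - x)
              x, t, obj_prev = x_new, t_new, obj
          return x

      rng = np.random.default_rng(1)
      A = rng.standard_normal((100, 200))
      x_true = np.zeros(200); x_true[:5] = 3.0
      y = A @ x_true + 0.01 * rng.standard_normal(100)
      print("recovered support:", np.nonzero(np.abs(fista_restart(A, y, lam=1.0)) > 0.5)[0])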

  9. Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms

    NASA Astrophysics Data System (ADS)

    Lee, Chien-Cheng; Huang, Shin-Sheng; Shih, Cheng-Yuan

    2010-12-01

    This paper presents a novel and effective method for facial expression recognition including happiness, disgust, fear, anger, sadness, surprise, and neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB) with effective Gabor features to recognize the facial expressions. An entropy criterion is applied to select an effective subset of informative and nonredundant Gabor features. The proposed RDAB algorithm uses RDA as a learner in the boosting algorithm. The RDA combines strengths of linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA). It solves the small sample size and ill-posed problems suffered by QDA and LDA through a regularization technique. Additionally, this study uses the particle swarm optimization (PSO) algorithm to estimate optimal parameters in RDA. Experimental results demonstrate that our approach can accurately and robustly recognize facial expressions.

  10. Efficient operator splitting algorithm for joint sparsity-regularized SPIRiT-based parallel MR imaging reconstruction.

    PubMed

    Duan, Jizhong; Liu, Yu; Jing, Peiguang

    2018-02-01

    Self-consistent parallel imaging (SPIRiT) is an auto-calibrating model for the reconstruction of parallel magnetic resonance imaging, which can be formulated as a regularized SPIRiT problem. The projection onto convex sets (POCS) method was used to solve the formulated regularized SPIRiT problem. However, the quality of the reconstructed image still needs to be improved. Although methods such as nonlinear conjugate gradients (NLCG) can achieve higher spatial resolution, they typically demand complex computation and converge slowly. In this paper, we propose a new algorithm to solve the formulated Cartesian SPIRiT problem with the JTV and JL1 regularization terms. The proposed algorithm uses the operator splitting (OS) technique to decompose the problem into a gradient problem and a denoising problem with two regularization terms, which is solved by our proposed split Bregman based denoising algorithm, and adopts the Barzilai-Borwein method to update the step size. Simulation experiments on two in vivo data sets demonstrate that the proposed algorithm is 1.3 times faster than ADMM for datasets with 8 channels. In particular, it is 2 times faster than ADMM for the dataset with 32 channels. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. A comparison of optimization algorithms for localized in vivo B0 shimming.

    PubMed

    Nassirpour, Sahar; Chang, Paul; Fillmer, Ariane; Henning, Anke

    2018-02-01

    To compare several different optimization algorithms currently used for localized in vivo B0 shimming, and to introduce a novel, fast, and robust constrained regularized algorithm (ConsTru) for this purpose. Ten different optimization algorithms (including samples from both generic and dedicated least-squares solvers, and a novel constrained regularized inversion method) were implemented and compared for shimming in five different shimming volumes on 66 in vivo data sets from both 7 T and 9.4 T. The best algorithm was chosen to perform single-voxel spectroscopy at 9.4 T in the frontal cortex of the brain on 10 volunteers. The performance tests showed that a shimming algorithm is prone to unstable solutions if it depends on the value of a starting point and is not regularized to handle ill-conditioned problems. The ConsTru algorithm proved to be the most robust, fast, and efficient algorithm among all of the chosen algorithms. It enabled acquisition of spectra of reproducible high quality in the frontal cortex at 9.4 T. For localized in vivo B0 shimming, the use of a dedicated linear least-squares solver instead of a generic nonlinear one is highly recommended. Among all of the linear solvers, the constrained regularized method (ConsTru) was found to be both fast and most robust. Magn Reson Med 79:1145-1156, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
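
    The constrained, regularized linear least-squares formulation favored by this comparison can be sketched generically as below; the shim basis, field map, Tikhonov weight and current limits are synthetic placeholders, and the actual ConsTru algorithm is not reproduced here.

      # Toy constrained, regularized least-squares shimming: find shim currents c minimizing
      # ||b0 + S c||^2 + alpha*||c||^2 subject to per-channel current limits. S, b0, alpha and
      # the bounds are synthetic placeholders for illustration only.
      import numpy as np
      from scipy.optimize import lsq_linear

      rng = np.random.default_rng(2)
      n_vox, n_ch = 500, 8
      S = rng.standard_normal((n_vox, n_ch))             # field per unit current of each channel
      b0 = S @ rng.uniform(-2, 2, n_ch) + 0.1 * rng.standard_normal(n_vox)   # measured offset

      alpha = 1e-2                                       # regularization against ill-conditioning
      A = np.vstack([S, np.sqrt(alpha) * np.eye(n_ch)])  # stacked form of the Tikhonov problem
      y = np.concatenate([-b0, np.zeros(n_ch)])
      result = lsq_linear(A, y, bounds=(-3.0, 3.0))      # box constraints = current limits
      print("residual field std:", np.std(b0 + S @ result.x))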

  12. High-resolution seismic data regularization and wavefield separation

    NASA Astrophysics Data System (ADS)

    Cao, Aimin; Stump, Brian; DeShon, Heather

    2018-04-01

    We present a new algorithm, non-equispaced fast antileakage Fourier transform (NFALFT), for irregularly sampled seismic data regularization. Synthetic tests from 1-D to 5-D show that the algorithm may efficiently remove leaked energy in the frequency wavenumber domain, and its corresponding regularization process is accurate and fast. Taking advantage of the NFALFT algorithm, we suggest a new method (wavefield separation) for the detection of the Earth's inner core shear wave with irregularly distributed seismic arrays or networks. All interfering seismic phases that propagate along the minor arc are removed from the time window around the PKJKP arrival. The NFALFT algorithm is developed for seismic data, but may also be used for other irregularly sampled temporal or spatial data processing.

  13. Coverage-maximization in networks under resource constraints.

    PubMed

    Nandi, Subrata; Brusch, Lutz; Deutsch, Andreas; Ganguly, Niloy

    2010-06-01

    Efficient coverage algorithms are essential for information search or dispersal in all kinds of networks. We define an extended coverage problem which accounts for constrained resources of consumed bandwidth B and time T. Our solution to the network challenge is here studied for regular grids only. Using methods from statistical mechanics, we develop a coverage algorithm with proliferating message packets and temporally modulated proliferation rate. The algorithm performs as efficiently as a single random walker but O(B^((d-2)/d)) times faster, resulting in significant service speed-up on a regular grid of dimension d. The algorithm is numerically compared to a class of generalized proliferating random walk strategies and on regular grids shown to perform best in terms of the product metric of speed and efficiency.

  14. Low Dose CT Reconstruction via Edge-preserving Total Variation Regularization

    PubMed Central

    Tian, Zhen; Jia, Xun; Yuan, Kehong; Pan, Tinsu; Jiang, Steve B.

    2014-01-01

    High radiation dose in CT scans increases the lifetime risk of cancer and has become a major clinical concern. Recently, iterative reconstruction algorithms with Total Variation (TV) regularization have been developed to reconstruct CT images from highly undersampled data acquired at low mAs levels in order to reduce the imaging dose. Nonetheless, low contrast structures tend to be smoothed out by the TV regularization, posing a great challenge for the TV method. To solve this problem, in this work we develop an iterative CT reconstruction algorithm with edge-preserving TV regularization to reconstruct CT images from highly undersampled data obtained at low mAs levels. The CT image is reconstructed by minimizing an energy consisting of an edge-preserving TV norm and a data fidelity term posed by the x-ray projections. The edge-preserving TV term is proposed to preferentially perform smoothing only on the non-edge part of the image in order to better preserve the edges, which is realized by introducing a penalty weight to the original total variation norm. During the reconstruction process, the pixels at edges are gradually identified and given a small penalty weight. Our iterative algorithm is implemented on GPU to improve its speed. We test our reconstruction algorithm on a digital NCAT phantom, a physical chest phantom, and a Catphan phantom. Reconstruction results from a conventional FBP algorithm and a TV regularization method without the edge-preserving penalty are also presented for comparison purposes. The experimental results illustrate that both the TV-based algorithm and our edge-preserving TV algorithm outperform the conventional FBP algorithm in suppressing streaking artifacts and image noise in the low-dose context. Our edge-preserving algorithm is superior to the TV-based algorithm in that it preserves more information from low contrast structures and therefore maintains acceptable spatial resolution. PMID:21860076
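
    The edge-preserving weighting idea can be illustrated on a small denoising toy; the real method embeds it in an iterative reconstruction whose fidelity term involves the x-ray projections, and the weight function and parameters used below are assumptions made for illustration only.

      # 2-D denoising toy of the edge-preserving TV idea: diffusion-style descent on a TV-like
      # penalty whose per-pixel weight is small where the current gradient is large, so edges
      # are smoothed less; the data term here is simple denoising, not CT projections.
      import numpy as np

      def edge_preserving_tv_denoise(f, beta=0.3, kappa=0.2, tau=0.2, eps=1e-6, iters=200):
          u = f.copy()
          for _ in range(iters):
              dN = np.roll(u, -1, axis=0) - u
              dS = np.roll(u,  1, axis=0) - u
              dE = np.roll(u, -1, axis=1) - u
              dW = np.roll(u,  1, axis=1) - u
              gmag = np.sqrt(dN ** 2 + dE ** 2)
              w = 1.0 / (1.0 + (gmag / kappa) ** 2)              # small penalty weight at edges
              flux = sum(w * d / np.sqrt(d ** 2 + eps) for d in (dN, dS, dE, dW))
              u = u + tau * (beta * flux - (u - f))              # penalty descent + data fidelity
          return u

      rng = np.random.default_rng(3)
      img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0          # piecewise-constant phantom
      noisy = img + 0.15 * rng.standard_normal(img.shape)
      print("mean abs error after denoising:", np.abs(edge_preserving_tv_denoise(noisy) - img).mean())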

  15. Sediment loads in canals 18, 23, and 24 in southeastern Florida

    USGS Publications Warehouse

    Pitt, William A. J.

    1971-01-01

    Suspended-sediment concentrations and suspended-sediment discharges were determined in selected canals in St. Lucie, Martin, and Palm Beach Counties, in southeastern Florida. Sediment rating curves were developed to relate water discharge to sediment concentration at the three sites sampled. An evaluation of the concentration and sediment loads shows that larger amounts of suspended sediment were being carried into the St. Lucie River estuary than were being carried into the Loxahatchee River estuary. Peat and muck soils in areas drained for agricultural planting and citrus cultivation are readily carried by runoff water into major canals that traverse the region.

  16. Array architectures for iterative algorithms

    NASA Technical Reports Server (NTRS)

    Jagadish, Hosagrahar V.; Rao, Sailesh K.; Kailath, Thomas

    1987-01-01

    Regular mesh-connected arrays are shown to be isomorphic to a class of so-called regular iterative algorithms. For a wide variety of problems it is shown how to obtain appropriate iterative algorithms and then how to translate these algorithms into arrays in a systematic fashion. Several 'systolic' arrays presented in the literature are shown to be specific cases of the variety of architectures that can be derived by the techniques presented here. These include arrays for Fourier Transform, Matrix Multiplication, and Sorting.

  17. A dynamical regularization algorithm for solving inverse source problems of elliptic partial differential equations

    NASA Astrophysics Data System (ADS)

    Zhang, Ye; Gong, Rongfang; Cheng, Xiaoliang; Gulliksson, Mårten

    2018-06-01

    This study considers the inverse source problem for elliptic partial differential equations with both Dirichlet and Neumann boundary data. The unknown source term is to be determined by additional boundary conditions. Unlike the existing methods found in the literature, which usually employ the first-order in time gradient-like system (such as the steepest descent methods) for numerically solving the regularized optimization problem with a fixed regularization parameter, we propose a novel method with a second-order in time dissipative gradient-like system and a dynamical selected regularization parameter. A damped symplectic scheme is proposed for the numerical solution. Theoretical analysis is given for both the continuous model and the numerical algorithm. Several numerical examples are provided to show the robustness of the proposed algorithm.
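
    The flavor of a second-order in time, dissipative gradient-like flow with a semi-implicit (symplectic-style) time discretization can be sketched on a Tikhonov-regularized least-squares toy; the dynamically selected regularization parameter of the paper is replaced by a fixed one, and the operator and data below are synthetic stand-ins, not the elliptic source problem.

      # Damped second-order flow m'' + eta*m' = -grad J(m), discretized with a semi-implicit
      # scheme, minimizing J(m) = 0.5*||A m - d||^2 + 0.5*lam*||m||^2 (fixed lam as a
      # simplification). A, d and all parameters are synthetic.
      import numpy as np

      rng = np.random.default_rng(4)
      A = rng.standard_normal((30, 30)); A = A @ A.T / 30.0       # mildly ill-conditioned operator
      m_true = np.sin(np.linspace(0, 3 * np.pi, 30))
      d = A @ m_true + 1e-3 * rng.standard_normal(30)

      def damped_flow(A, d, lam=1e-3, eta=1.5, dt=0.05, steps=5000):
          m = np.zeros(A.shape[1])
          v = np.zeros_like(m)
          for _ in range(steps):
              g = A.T @ (A @ m - d) + lam * m        # gradient of the regularized functional
              v = (v - dt * g) / (1.0 + dt * eta)    # dissipative (damped) velocity update
              m = m + dt * v                         # position update
          return m

      m_rec = damped_flow(A, d)
      print("relative error:", np.linalg.norm(m_rec - m_true) / np.linalg.norm(m_true))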

  18. Micro-CT image reconstruction based on alternating direction augmented Lagrangian method and total variation.

    PubMed

    Gopi, Varun P; Palanisamy, P; Wahid, Khan A; Babyn, Paul; Cooper, David

    2013-01-01

    Micro-computed tomography (micro-CT) plays an important role in pre-clinical imaging. The radiation from micro-CT can result in excess radiation exposure to the specimen under test, hence the reduction of radiation from micro-CT is essential. The proposed research focused on analyzing and testing an alternating direction augmented Lagrangian (ADAL) algorithm to recover images from random projections using total variation (TV) regularization. The use of TV regularization in compressed sensing problems makes the recovered image sharper by preserving the edges or boundaries more accurately. In this work, the TV regularization problem is addressed by ADAL, which is a variant of the classic augmented Lagrangian method for structured optimization. The per-iteration computational complexity of the algorithm is two fast Fourier transforms, two matrix-vector multiplications and a linear-time shrinkage operation. Comparison of experimental results indicates that the proposed algorithm is stable, efficient and competitive with the existing algorithms for solving TV regularization problems. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. A combined reconstruction-classification method for diffuse optical tomography.

    PubMed

    Hiltunen, P; Prince, S J D; Arridge, S

    2009-11-07

    We present a combined classification and reconstruction algorithm for diffuse optical tomography (DOT). DOT is a nonlinear ill-posed inverse problem. Therefore, some regularization is needed. We present a mixture of Gaussians prior, which regularizes the DOT reconstruction step. During each iteration, the parameters of a mixture model are estimated. These associate each reconstructed pixel with one of several classes based on the current estimate of the optical parameters. This classification is exploited to form a new prior distribution to regularize the reconstruction step and update the optical parameters. The algorithm can be described as an iteration between an optimization scheme with zeroth-order variable mean and variance Tikhonov regularization and an expectation-maximization scheme for estimation of the model parameters. We describe the algorithm in a general Bayesian framework. Results from simulated test cases and phantom measurements show that the algorithm enhances the contrast of the reconstructed images with good spatial accuracy. The probabilistic classifications of each image contain only a few misclassified pixels.

  20. Prophet or Professor? The Life and Work of Lewis Fry Richardson

    NASA Astrophysics Data System (ADS)

    Smagorinsky, Joseph

    This book focuses on a man who, in his lifetime, was scarcely known to the general public. Yet within certain circles, Richardson has had enormous impact within recent years. Although there are many scientists and humanists who exercise influence in their own respective fields, rarely do they bridge disciplines. It is this combination that has made Lewis Fry Richardson a figure worthy of a full-length biography, not just to record his contributions to each field but to provide an analysis and understanding of what motivated his diversity. In another age, Richardson would have been counted as a Renaissance man. He has variously been referred to as a chemist, physicist, mathematician, psychologist, meteorologist, economist, and biologist. In retrospect, he clearly was well ahead of his time, whether the subject in question was his work in numerical weather prediction or in war studies.

  1. Current status of the facility instrumentation suite at the Large Binocular Telescope Observatory

    NASA Astrophysics Data System (ADS)

    Rothberg, Barry; Kuhn, Olga; Edwards, Michelle L.; Hill, John M.; Thompson, David; Veillet, Christian; Wagner, R. Mark

    2016-07-01

    The current status of the facility instrumentation for the Large Binocular Telescope (LBT) is reviewed. The LBT encompasses two 8.4 meter primary mirrors on a single mount, yielding the collecting area of a single 11.8 meter aperture, or an effective 23 meter aperture when interferometrically combined. The three facility instruments at LBT include: 1) the Large Binocular Cameras (LBCs), each with a 23'×25' field of view (FOV). The blue-optimized and red-optimized optical wavelength LBCs are mounted at the prime focus of the SX (left) and DX (right) primary mirrors, respectively. Combined, the filter suite of the two LBCs covers 0.3-1.1 μm, including the addition of new medium-band filters centered on TiO (0.78 μm) and CN (0.82 μm); 2) the Multi-Object Double Spectrograph (MODS), two identical optical spectrographs each mounted at the straight-through f/15 Gregorian focus of the primary mirrors. The capabilities of MODS-1 and -2 include imaging with Sloan filters (u, g, r, i, and z) and medium-resolution (R ˜ 2000) spectroscopy, each with 24 interchangeable masks (multi-object or longslit) over a 6'×6' FOV. Each MODS is capable of blue (0.32-0.6 μm) or red (0.5-1.05 μm) wavelength-only spectroscopic coverage, or both can employ a dichroic for 0.32-1.05 μm wavelength coverage (with reduced coverage from 0.56-0.57 μm); and 3) the two LBT Utility Camera in the Infrared instruments (LUCIs), each mounted at a bent-front Gregorian f/15 focus of a primary mirror. LUCI-1 and -2 are designed for seeing-limited (4'×4' FOV) and adaptive-optics (0.5'×0.5' FOV, using thin-shell adaptive secondary mirrors) imaging and spectroscopy over the wavelength range of 0.95-2.5 μm, with spectroscopic resolutions of 400 <= R <= 11000 (depending on the combination of grating, slits, and cameras used). The spectroscopic capabilities also include 32 interchangeable multi-object or longslit masks which are cryogenically cooled. Currently all facility instruments are in place at the LBT and, for the first time, have been on sky for science observations. In Summer 2015 LUCI-1 was refurbished to replace the infrared detector, to install a high-resolution camera to take advantage of the adaptive SX secondary, and to install a grating designed primarily for use with high-resolution adaptive optics. Thus, like MODS-1 and -2, both LUCIs now have specifications nearly identical to each other. The software interface for both LUCIs has also been replaced, allowing both instruments to be run together from a single interface. With the installation of all facility instruments finally complete, we also report on the first science use of "mixed-mode" operations, defined as the combination of different paired instruments with each mirror (i.e. LBC+MODS, LBC+LUCI, LUCI+MODS). Although both primary mirrors reside on a single fixed mount, they are capable of operating as independent entities within a defined "co-pointing" limit. This provides users with the additional capability to independently dither each mirror or center observations on two different sets of spatial coordinates within this limit.

  2. Phase retrieval using regularization method in intensity correlation imaging

    NASA Astrophysics Data System (ADS)

    Li, Xiyu; Gao, Xin; Tang, Jia; Lu, Changming; Wang, Jianli; Wang, Bin

    2014-11-01

    Intensity correlation imaging (ICI) can obtain high-resolution images with ground-based, low-precision mirrors. In the imaging process, a phase retrieval algorithm must be used to reconstruct the object's image, but the algorithms commonly used (such as the hybrid input-output algorithm) are sensitive to noise and prone to stagnation, while the signal-to-noise ratio of intensity interferometry is low, especially when imaging astronomical objects. In this paper, we build a mathematical model of phase retrieval and simplify it into a constrained optimization problem for a multi-dimensional function. A new error function is designed from the noise distribution and prior information using a regularization method. The simulation results show that the regularization method can improve the performance of the phase retrieval algorithm and yield better images, especially under low-SNR conditions.
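
    For orientation, the classical error-reduction loop (the kind of baseline that a regularized error function aims to improve upon) can be sketched as follows; the object, support and "measured" Fourier magnitudes are synthetic, and the paper's regularized error function is not reproduced.

      # Minimal error-reduction phase retrieval: alternate between imposing the measured
      # Fourier magnitude and a non-negative support constraint in object space.
      import numpy as np

      rng = np.random.default_rng(5)
      obj = np.zeros((64, 64)); obj[24:40, 20:44] = rng.uniform(0.5, 1.0, (16, 24))
      support = obj > 0
      mag = np.abs(np.fft.fft2(obj))                      # synthetic "measured" magnitudes

      x = rng.uniform(size=obj.shape) * support           # random start inside the support
      for _ in range(500):
          F = np.fft.fft2(x)
          F = mag * np.exp(1j * np.angle(F))              # impose the measured magnitude
          x = np.real(np.fft.ifft2(F))
          x = np.where(support & (x > 0), x, 0.0)         # impose support and non-negativity
      misfit = np.linalg.norm(np.abs(np.fft.fft2(x)) - mag) / np.linalg.norm(mag)
      print("relative Fourier-magnitude misfit:", misfit)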

  3. Lucy's back: Reassessment of fossils associated with the A.L. 288-1 vertebral column.

    PubMed

    Meyer, Marc R; Williams, Scott A; Smith, Michael P; Sawyer, Gary J

    2015-08-01

    The Australopithecus afarensis partial skeleton A.L. 288-1, popularly known as "Lucy" is associated with nine vertebrae. The vertebrae were given provisional level assignments to locations within the vertebral column by their discoverers and later workers. The continuity of the thoracic series differs in these assessments, which has implications for functional interpretations and comparative studies with other fossil hominins. Johanson and colleagues described one vertebral element (A.L. 288-1am) as uniquely worn amongst the A.L. 288-1 fossil assemblage, a condition unobservable on casts of the fossils. Here, we reassess the species attribution and serial position of this vertebral fragment and other vertebrae in the A.L. 288-1 series. When compared to the other vertebrae, A.L. 288-1am falls well below the expected size within a given spinal column. Furthermore, we demonstrate this vertebra exhibits non-metric characters absent in hominoids but common in large-bodied papionins. Quantitative analyses situate this vertebra within the genus Theropithecus, which today is solely represented by the gelada baboon but was the most abundant cercopithecoid in the KH-1s deposit at Hadar where Lucy was discovered. Our additional analyses confirm that the remainder of the A.L. 288-1 vertebral material belongs to A. afarensis, and we provide new level assignments for some of the other vertebrae, resulting in a continuous articular series of thoracic vertebrae, from T6 to T11. This work does not refute previous work on Lucy or its importance for human evolution, but rather highlights the importance of studying original fossils, as well as the efficacy of the scientific method. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. "Lucy" (A.L. 288-1) had five sacral vertebrae.

    PubMed

    Russo, Gabrielle A; Williams, Scott A

    2015-02-01

    A "long-backed" scenario of hominin vertebral evolution posits that early hominins possessed six lumbar vertebrae coupled with a high frequency of four sacral vertebrae (7:12-13:6:4), a configuration acquired from a hominin-panin last common ancestor (PLCA) having a vertebral formula of 7:13:6-7:4. One founding line of evidence for this hypothesis is the recent assertion that the "Lucy" sacrum (A.L. 288-1an, Australopithecus afarensis) consists of four sacral vertebrae and a partially-fused first coccygeal vertebra (Co1), rather than five sacral vertebrae as in modern humans. This study reassesses the number of sacral vertebrae in Lucy by reexamining the distal end of A.L.288-1an in the context of a comparative sample of modern human sacra and Co1 vertebrae, and the sacrum of A. sediba (MH2). Results demonstrate that, similar to S5 in modern humans and A. sediba, the last vertebra in A.L. 288-1an exhibits inferiorly-projecting (right side) cornua and a kidney-shaped inferior body articular surface. This morphology is inconsistent with that of fused or isolated Co1 vertebrae in humans, which either lack cornua or possess only superiorly-projecting cornua, and have more circularly-shaped inferior body articular surfaces. The level at which the hiatus' apex is located is also more compatible with typical five-element modern human sacra and A. sediba than if only four sacral vertebrae are present. Our observations suggest that A.L. 288-1 possessed five sacral vertebrae as in modern humans; thus, sacral number in "Lucy" does not indicate a directional change in vertebral count that can provide information on the PLCA ancestral condition. © 2015 Wiley Periodicals, Inc.

  5. Developmental identity versus typology: Lucy has only four sacral segments.

    PubMed

    Machnicki, Allison L; Lovejoy, C Owen; Reno, Philip L

    2016-08-01

    Both interspecific and intraspecific variation in vertebral counts reflect the action of patterning control mechanisms such as Hox. The preserved A.L. 288-1 ("Lucy") sacrum contains five fused elements. However, the transverse processes of the most caudal element do not contact those of the segment immediately craniad to it, leaving incomplete sacral foramina on both sides. This conforms to the traditional definition of four-segmented sacra, which are very rare in humans and African apes. It was recently suggested that fossilization damage precludes interpretation of this specimen and that additional sacral-like features of its last segment (e.g., the extent of the sacral hiatus) suggest a general Australopithecus pattern of five sacral vertebrae. We provide updated descriptions of the original Lucy sacrum. We evaluate sacral/coccygeal variation in a large sample of extant hominoids and place it within the context of developmental variation in the mammalian vertebral column. We report that fossilization damage did not shorten the transverse processes of the fifth segment of Lucy's sacrum. In addition, we find that the extent of the sacral hiatus is too variable in apes and hominids to provide meaningful information on segment identity. Most importantly, a combination of sacral and coccygeal features is to be expected in vertebrae at regional boundaries. The sacral/caudal boundary appears to be displaced cranially in early hominids relative to extant African apes and humans, a condition consistent with the likely ancestral condition for Miocene hominoids. While not definitive in itself, a four-segmented sacrum accords well with the "long-back" model for the Pan/Homo last common ancestor. Am J Phys Anthropol 160:729-739, 2016. © 2016 Wiley Periodicals, Inc. © 2016 Wiley Periodicals, Inc.

  6. Enhanced spatial resolution in fluorescence molecular tomography using restarted L1-regularized nonlinear conjugate gradient algorithm.

    PubMed

    Shi, Junwei; Liu, Fei; Zhang, Guanglei; Luo, Jianwen; Bai, Jing

    2014-04-01

    Owing to the high degree of scattering of light through tissues, the ill-posedness of fluorescence molecular tomography (FMT) inverse problem causes relatively low spatial resolution in the reconstruction results. Unlike L2 regularization, L1 regularization can preserve the details and reduce the noise effectively. Reconstruction is obtained through a restarted L1 regularization-based nonlinear conjugate gradient (re-L1-NCG) algorithm, which has been proven to be able to increase the computational speed with low memory consumption. The algorithm consists of inner and outer iterations. In the inner iteration, L1-NCG is used to obtain the L1-regularized results. In the outer iteration, the restarted strategy is used to increase the convergence speed of L1-NCG. To demonstrate the performance of re-L1-NCG in terms of spatial resolution, simulation and physical phantom studies with fluorescent targets located with different edge-to-edge distances were carried out. The reconstruction results show that the re-L1-NCG algorithm has the ability to resolve targets with an edge-to-edge distance of 0.1 cm at a depth of 1.5 cm, which is a significant improvement for FMT.

  7. A two-component Matched Interface and Boundary (MIB) regularization for charge singularity in implicit solvation

    NASA Astrophysics Data System (ADS)

    Geng, Weihua; Zhao, Shan

    2017-12-01

    We present a new Matched Interface and Boundary (MIB) regularization method for treating charge singularity in solvated biomolecules whose electrostatics are described by the Poisson-Boltzmann (PB) equation. In a regularization method, by decomposing the potential function into two or three components, the singular component can be analytically represented by the Green's function, while other components possess a higher regularity. Our new regularization combines the efficiency of two-component schemes with the accuracy of the three-component schemes. Based on this regularization, a new MIB finite difference algorithm is developed for solving both linear and nonlinear PB equations, where the nonlinearity is handled by using the inexact-Newton's method. Compared with the existing MIB PB solver based on a three-component regularization, the present algorithm is simpler to implement by circumventing the work to solve a boundary value Poisson equation inside the molecular interface and to compute related interface jump conditions numerically. Moreover, the new MIB algorithm becomes computationally less expensive, while maintains the same second order accuracy. This is numerically verified by calculating the electrostatic potential and solvation energy on the Kirkwood sphere on which the analytical solutions are available and on a series of proteins with various sizes.

  8. Wavelet-based edge correlation incorporated iterative reconstruction for undersampled MRI.

    PubMed

    Hu, Changwei; Qu, Xiaobo; Guo, Di; Bao, Lijun; Chen, Zhong

    2011-09-01

    Undersampling k-space is an effective way to decrease acquisition time for MRI. However, aliasing artifacts introduced by undersampling may blur the edges of magnetic resonance images, which often contain important information for clinical diagnosis. Moreover, k-space data is often contaminated by the noise signals of unknown intensity. To better preserve the edge features while suppressing the aliasing artifacts and noises, we present a new wavelet-based algorithm for undersampled MRI reconstruction. The algorithm solves the image reconstruction as a standard optimization problem including an ℓ2 data fidelity term and an ℓ1 sparsity regularization term. Rather than manually setting the regularization parameter for the ℓ1 term, which is directly related to the threshold, an automatic estimated threshold adaptive to noise intensity is introduced in our proposed algorithm. In addition, a prior matrix based on edge correlation in wavelet domain is incorporated into the regularization term. Compared with nonlinear conjugate gradient descent algorithm, iterative shrinkage/thresholding algorithm, fast iterative soft-thresholding algorithm and the iterative thresholding algorithm using exponentially decreasing threshold, the proposed algorithm yields reconstructions with better edge recovery and noise suppression. Copyright © 2011 Elsevier Inc. All rights reserved.
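
    The automatic, noise-adaptive threshold ingredient can be sketched with a single-level Haar transform and the standard MAD/universal-threshold rule; this is a generic 1-D illustration and does not include the edge-correlation prior matrix of the paper.

      # Noise-adaptive soft threshold: estimate the noise level from the finest-scale (Haar)
      # detail coefficients via the median absolute deviation, then apply the universal
      # threshold sigma*sqrt(2*log n) with soft thresholding.
      import numpy as np

      def haar_level1(x):
          a = (x[0::2] + x[1::2]) / np.sqrt(2.0)          # approximation coefficients
          d = (x[0::2] - x[1::2]) / np.sqrt(2.0)          # detail coefficients
          return a, d

      def inverse_haar_level1(a, d):
          x = np.empty(2 * a.size)
          x[0::2] = (a + d) / np.sqrt(2.0)
          x[1::2] = (a - d) / np.sqrt(2.0)
          return x

      def denoise(x):
          a, d = haar_level1(x)
          sigma = np.median(np.abs(d)) / 0.6745           # robust noise estimate (MAD)
          thr = sigma * np.sqrt(2.0 * np.log(x.size))
          d = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)   # soft thresholding
          return inverse_haar_level1(a, d)

      rng = np.random.default_rng(6)
      clean = np.sin(np.linspace(0, 4 * np.pi, 1024))
      noisy = clean + 0.2 * rng.standard_normal(1024)
      print("rms error before/after:", np.std(noisy - clean), np.std(denoise(noisy) - clean))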

  9. A Demons algorithm for image registration with locally adaptive regularization.

    PubMed

    Cahill, Nathan D; Noble, J Alison; Hawkes, David J

    2009-01-01

    Thirion's Demons is a popular algorithm for nonrigid image registration because of its linear computational complexity and ease of implementation. It approximately solves the diffusion registration problem by successively estimating force vectors that drive the deformation toward alignment and smoothing the force vectors by Gaussian convolution. In this article, we show how the Demons algorithm can be generalized to allow image-driven locally adaptive regularization in a manner that preserves both the linear complexity and ease of implementation of the original Demons algorithm. We show that the proposed algorithm exhibits lower target registration error and requires less computational effort than the original Demons algorithm on the registration of serial chest CT scans of patients with lung nodules.
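
    A rough two-dimensional sketch of the Demons iteration is given below. The locally adaptive regularization of the article is only approximated here by blending a weak and a strong Gaussian smoothing of the displacement field with an image-driven weight; the images, weight rule and parameters are assumptions for illustration.

      # Demons-style registration sketch: force from the intensity difference and fixed-image
      # gradient, followed by smoothing of the displacement field. The adaptive part is a
      # crude blend of weak smoothing near edges and strong smoothing elsewhere.
      import numpy as np
      from scipy import ndimage

      def demons(fixed, moving, iters=100, sigma_small=1.0, sigma_large=4.0):
          yy, xx = np.meshgrid(np.arange(fixed.shape[0]), np.arange(fixed.shape[1]), indexing="ij")
          uy, ux = np.zeros_like(fixed), np.zeros_like(fixed)
          gy, gx = np.gradient(fixed)
          gmag = np.hypot(gy, gx)
          w = gmag / (gmag.max() + 1e-12)                 # close to 1 near fixed-image edges
          for _ in range(iters):
              warped = ndimage.map_coordinates(moving, [yy + uy, xx + ux], order=1, mode="nearest")
              diff = warped - fixed
              denom = gy ** 2 + gx ** 2 + diff ** 2 + 1e-12
              uy -= diff * gy / denom                     # Demons force update
              ux -= diff * gx / denom
              uy = w * ndimage.gaussian_filter(uy, sigma_small) + (1 - w) * ndimage.gaussian_filter(uy, sigma_large)
              ux = w * ndimage.gaussian_filter(ux, sigma_small) + (1 - w) * ndimage.gaussian_filter(ux, sigma_large)
          return uy, ux

      fixed = np.zeros((64, 64)); fixed[20:44, 20:44] = 1.0
      moving = np.zeros((64, 64)); moving[24:48, 23:47] = 1.0     # same square, shifted
      uy, ux = demons(ndimage.gaussian_filter(fixed, 1.0), ndimage.gaussian_filter(moving, 1.0))
      print("mean displacement magnitude:", np.hypot(uy, ux).mean())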

  10. Fast ℓ1-regularized space-time adaptive processing using alternating direction method of multipliers

    NASA Astrophysics Data System (ADS)

    Qin, Lilong; Wu, Manqing; Wang, Xuan; Dong, Zhen

    2017-04-01

    Motivated by the sparsity of filter coefficients in full-dimension space-time adaptive processing (STAP) algorithms, this paper proposes a fast ℓ1-regularized STAP algorithm based on the alternating direction method of multipliers to accelerate the convergence and reduce the calculations. The proposed algorithm uses a splitting variable to obtain an equivalent optimization formulation, which is addressed with an augmented Lagrangian method. Using the alternating recursive algorithm, the method can rapidly result in a low minimum mean-square error without a large number of calculations. Through theoretical analysis and experimental verification, we demonstrate that the proposed algorithm provides a better output signal-to-clutter-noise ratio performance than other algorithms.

  11. EIT image regularization by a new Multi-Objective Simulated Annealing algorithm.

    PubMed

    Castro Martins, Thiago; Sales Guerra Tsuzuki, Marcos

    2015-01-01

    Multi-Objective Optimization can be used to produce regularized Electrical Impedance Tomography (EIT) images where the weight of the regularization term is not known a priori. This paper proposes a novel Multi-Objective Optimization algorithm based on Simulated Annealing tailored for EIT image reconstruction. Images are reconstructed from experimental data and compared with images from other Multi and Single Objective optimization methods. A significant performance enhancement from traditional techniques can be inferred from the results.

  12. Lovers, enemies, and friends: The complex and coded early history of lesbian comic strip characters.

    PubMed

    McGurk, Caitlin

    2018-05-31

    This article seeks to recuperate four previously unexamined early newspaper comic strip characters that could lay the groundwork for queer comic studies. The titular characters in Lucy and Sophie Say Goodbye (1905), Sanjak in Terry and the Pirates (1939) by Milton Caniff, and Hank O'Hair in Brenda Starr, Reporter (1940) by Dale Messick are analyzed through close readings, supporting archival material, and interviews. The article also theorizes the identification of the creator of Lucy and Sophie Say Goodbye as George O. Frink, and offers an overview of LGBTQ comics holdings at institutions in North America.

  13. Digital enhancement of sub-quality bitemark photographs.

    PubMed

    Karazalus, C P; Palmbach, T T; Lee, H C

    2001-07-01

    Digital enhancement software was used to enhance bitemark photographs. This enhancement technique improved the resolution of the bitemark images. Lucis was the software program utilized in this study and case applications. First, this technique was applied on known bitemark images to evaluate the potential effectiveness of this digital enhancement method. Subsequently, Lucis was utilized on two separate unsolved cases involving enhancement of bitemark evidence. One case involved a severely beaten infant with a bitemark on the upper thigh. The second case involves a bitemark observed on the breast of a female sexual assault strangulation victim. In both cases, bitemark images were significantly improved after digital enhancement.

  14. First on-sky results with ARGOS at LBT

    NASA Astrophysics Data System (ADS)

    Orban de Xivry, G.; Rabien, S.; Busoni, L.; Gaessler, W.; Bonaglia, M.; Borelli, J.; Deysenroth, M.; Esposito, S.; Gemperlein, H.; Kulas, M.; Lefebvre, M.; Mazzoni, T.; Peter, D.; Puglisi, A.; Raab, W.; Rahmer, G.; Sivitilli, A.; Storm, J.; Ziegleder, J.

    2016-07-01

    One year and a half after ARGOS first light, the Large Binocular Telescope (LBT) laser-guided ground-layer adaptive optics (GLAO) system has been operated on both sides of the LBT. The system fulfills the GLAO promise and typically delivers an improvement by a factor of 2 in FWHM over the 4'×4' field of view of both Luci instruments, the two near-infrared imagers and multi-object spectrographs. In this paper, we report on the first on-sky results and analyze the performance based on the data collected so far. We also discuss adaptive optics procedures and the joint operations with Luci for science observations.

  15. Idiosyncratic gesture use in atypical language development, and its interaction with speech rhythm, word juncture, syntax, pragmatics and discourse: a case study.

    PubMed

    Howard, Sara J; Perkins, Michael R; Sowden, Hannah

    2012-10-01

    Very little is known about the use of gesture by children with developmental language disorders (DLDs). This case study of 'Lucy', a child aged 4;10 with a DLD, expands on what is known and in particular focuses on a type of idiosyncratic "rhythmic gesture" (RG) not previously reported. A fine-grained qualitative analysis was carried out of video recordings of Lucy in conversation with the first author. This revealed that Lucy's RG was closely integrated in complex ways with her use of other gesture types, speech rhythm, word juncture, syntax, pragmatics, discourse, visual processing and processing demands generally. Indeed, the only satisfactory way to explain it was as a partial byproduct of such interactions. These findings support the theoretical accounts of gesture which see it as just one component of a multimodal, integrated signalling system (e.g. Goldin-Meadow, S. (2000). Beyond words: The importance of gesture to researchers and learners. Child Development, 71(1), 231-239), and emergentist accounts of communication impairment which regard compensatory adaptation as integral (e.g. Perkins, M. R. (2007). Pragmatic Impairment. Cambridge: Cambridge University Press.).

  16. Image restoration by minimizing zero norm of wavelet frame coefficients

    NASA Astrophysics Data System (ADS)

    Bao, Chenglong; Dong, Bin; Hou, Likun; Shen, Zuowei; Zhang, Xiaoqun; Zhang, Xue

    2016-11-01

    In this paper, we propose two algorithms, namely the extrapolated proximal iterative hard thresholding (EPIHT) algorithm and the EPIHT algorithm with line-search, for solving the ℓ0-norm regularized wavelet frame balanced approach for image restoration. Under the theoretical framework of the Kurdyka-Łojasiewicz property, we show that the sequences generated by the two algorithms converge to a local minimizer with a linear convergence rate. Moreover, extensive numerical experiments on sparse signal reconstruction and wavelet frame based image restoration problems, including CT reconstruction and image deblurring, demonstrate the improvement of ℓ0-norm based regularization models over some prevailing ones, as well as the computational efficiency of the proposed algorithms.
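
    The extrapolated proximal iterative hard thresholding idea can be sketched on a toy ℓ0-regularized least-squares problem; a generic Gaussian sensing matrix stands in for the wavelet frame balanced model, the extrapolation weight is fixed, and the line-search variant is not shown.

      # Extrapolated proximal iterative hard thresholding for
      # min_x 0.5*||A x - y||^2 + lam*||x||_0; the prox of the l0 term is hard thresholding.
      import numpy as np

      def hard_threshold(z, t):
          out = z.copy()
          out[np.abs(z) < t] = 0.0
          return out

      def epiht(A, y, lam=0.05, beta=0.5, iters=300):
          L = np.linalg.norm(A, 2) ** 2                   # gradient Lipschitz constant
          x_prev = np.zeros(A.shape[1])
          x = x_prev.copy()
          for _ in range(iters):
              z = x + beta * (x - x_prev)                 # extrapolation step
              g = A.T @ (A @ z - y)
              x_prev, x = x, hard_threshold(z - g / L, np.sqrt(2.0 * lam / L))
          return x

      rng = np.random.default_rng(7)
      A = rng.standard_normal((80, 160)) / np.sqrt(80)
      x_true = np.zeros(160); x_true[rng.choice(160, 6, replace=False)] = 2.0 + rng.standard_normal(6)
      y = A @ x_true + 0.01 * rng.standard_normal(80)
      print("recovered support:", np.nonzero(epiht(A, y))[0])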

  17. Analysis of Monte Carlo accelerated iterative methods for sparse linear systems: Analysis of Monte Carlo accelerated iterative methods for sparse linear systems

    DOE PAGES

    Benzi, Michele; Evans, Thomas M.; Hamilton, Steven P.; ...

    2017-03-05

    Here, we consider hybrid deterministic-stochastic iterative algorithms for the solution of large, sparse linear systems. Starting from a convergent splitting of the coefficient matrix, we analyze various types of Monte Carlo acceleration schemes applied to the original preconditioned Richardson (stationary) iteration. We expect that these methods will have considerable potential for resiliency to faults when implemented on massively parallel machines. We also establish sufficient conditions for the convergence of the hybrid schemes, and we investigate different types of preconditioners including sparse approximate inverses. Numerical experiments on linear systems arising from the discretization of partial differential equations are presented.

  18. SPAMCART: a code for smoothed particle Monte Carlo radiative transfer

    NASA Astrophysics Data System (ADS)

    Lomax, O.; Whitworth, A. P.

    2016-10-01

    We present a code for generating synthetic spectral energy distributions and intensity maps from smoothed particle hydrodynamics simulation snapshots. The code is based on the Lucy Monte Carlo radiative transfer method, i.e. it follows discrete luminosity packets as they propagate through a density field, and then uses their trajectories to compute the radiative equilibrium temperature of the ambient dust. The sources can be extended and/or embedded, and discrete and/or diffuse. The density is not mapped onto a grid, and therefore the calculation is performed at exactly the same resolution as the hydrodynamics. We present two example calculations using this method. First, we demonstrate that the code strictly adheres to Kirchhoff's law of radiation. Secondly, we present synthetic intensity maps and spectra of an embedded protostellar multiple system. The algorithm uses data structures that are already constructed for other purposes in modern particle codes. It is therefore relatively simple to implement.

  19. Weather Forecasting From Woolly Art to Solid Science

    NASA Astrophysics Data System (ADS)

    Lynch, P.

    Contents: The Prehistory of Scientific Forecasting (Vilhelm Bjerknes; Lewis Fry Richardson; Richardson's Forecast); The Beginning of Modern Numerical Weather Prediction (John von Neumann and the Meteorology Project; The ENIAC Integrations; The Barotropic Model; Primitive Equation Models); Numerical Weather Prediction Today (ECMWF; HIRLAM); Conclusions; References.

  20. Sand Dune Field in Richardson Crater

    NASA Image and Video Library

    2010-07-13

    This image from NASA Mars Reconnaissance Orbiter is a view of the sand dune field in Richardson Crater covered with seasonal frost. The frost is a combination of frozen carbon dioxide and some water ice that covers the dunes in the winter and spring.

  1. Lessons from Embryos: Haeckel's Embryo Drawings, Evolution, and Secondary Biology Textbooks

    ERIC Educational Resources Information Center

    Wellner, Karen L.

    2014-01-01

    In 1997, developmental biologist Michael Richardson compared his research team's embryo photographs to Ernst Haeckel's 1874 embryo drawings and called Haeckel's work "noncredible". "Science" soon published "Haeckel's Embryos: Fraud Rediscovered," and Richardson's comments further reinvigorated criticism of Haeckel by…

  2. Interprocedural Analysis and the Verification of Concurrent Programs

    DTIC Science & Technology

    2009-01-01

    (SSPE) problem is to compute a regular expression that represents paths(s, v) for all vertices v in the graph. The syntax of regular expressions is as...follows: r ::= ∅ | ε | e | r1 ∪ r2 | r1.r2 | r*, where e stands for an edge in G. We can use any algorithm for SSPE to compute regular expressions for...a closed representation of loops provides an exponential speedup. Tarjan's path-expression algorithm solves the SSPE problem efficiently. It uses

  3. FPGA-accelerated algorithm for the regular expression matching system

    NASA Astrophysics Data System (ADS)

    Russek, P.; Wiatr, K.

    2015-01-01

    This article describes an algorithm to support a regular expression matching system. The goal was to achieve an attractive level of performance with low energy consumption. The basic idea of the algorithm comes from the concept of the Bloom filter. It starts from the extraction of static sub-strings from the strings of the regular expressions. The algorithm is devised to gain from its decomposition into parts which are intended to be executed by custom hardware and the central processing unit (CPU). A pipelined custom processor architecture is proposed and the software algorithm is explained accordingly. The software part of the algorithm was coded in C and runs on a processor from the ARM family. The hardware architecture was described in VHDL and implemented in a field programmable gate array (FPGA). The performance results and required resources of the above experiments are given. Example target applications for the presented solution are computer and network security systems. The idea was tested on nearly 100,000 body-based viruses from the ClamAV virus database. The solution is intended for the emerging technology of clusters of low-energy computing nodes.
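
    The Bloom-filter prefiltering idea can be sketched in software as follows; the hash functions, filter size, window length and example patterns are illustrative assumptions, and the FPGA pipeline itself is not modeled.

      # Bloom-filter prefilter: fixed-length static substrings extracted from the regular
      # expressions are hashed into a bit array; only input windows that hit the filter are
      # handed to the full (CPU-side) regex matching.
      import re
      from hashlib import blake2b

      M, K, W = 1 << 16, 3, 7                             # filter bits, hash count, window length

      def hashes(chunk):
          for salt in range(K):
              digest = blake2b(chunk, digest_size=4, salt=bytes([salt])).digest()
              yield int.from_bytes(digest, "big") % M

      def build_filter(static_substrings):
          bits = bytearray(M // 8)
          for sub in static_substrings:
              for h in hashes(sub):
                  bits[h // 8] |= 1 << (h % 8)
          return bits

      def hits(bits, chunk):
          return all(bits[h // 8] & (1 << (h % 8)) for h in hashes(chunk))

      patterns = [rb"virus[0-9]+payload", rb"malware[a-z]{4}core"]
      statics = [b"payload", b"malware"]                  # static substrings of the patterns
      bloom = build_filter(statics)

      data = b"...benign text... virus123payload ...benign text..."
      candidate = any(hits(bloom, data[i:i + W]) for i in range(len(data) - W + 1))
      if candidate:                                       # only now run the expensive full match
          print([p.pattern for p in map(re.compile, patterns) if p.search(data)])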

  4. Multifractal surrogate-data generation algorithm that preserves pointwise Hölder regularity structure, with initial applications to turbulence

    NASA Astrophysics Data System (ADS)

    Keylock, C. J.

    2017-03-01

    An algorithm is described that can generate random variants of a time series while preserving the probability distribution of original values and the pointwise Hölder regularity. Thus, it preserves the multifractal properties of the data. Our algorithm is similar in principle to well-known algorithms based on the preservation of the Fourier amplitude spectrum and original values of a time series. However, it is underpinned by a dual-tree complex wavelet transform rather than a Fourier transform. Our method, which we term the iterated amplitude adjusted wavelet transform, can be used to generate bootstrapped versions of multifractal data, and because it preserves the pointwise Hölder regularity but not the local Hölder regularity, it can be used to test hypotheses concerning the presence of oscillating singularities in a time series, an important feature of turbulence and econophysics data. Because the locations of the data values are randomized with respect to the multifractal structure, hypotheses about their mutual coupling can be tested, which is important for the velocity-intermittency structure of turbulence and self-regulating processes.
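
    For context, the Fourier-based ancestor of the method (the iterated amplitude adjusted Fourier transform) can be sketched as below; it preserves the amplitude spectrum and the original value distribution but, unlike the wavelet-based variant described above, not the pointwise Hölder regularity.

      # Iterated amplitude adjusted Fourier transform (IAAFT) surrogates: alternate between
      # imposing the original Fourier amplitudes and restoring the original values by rank
      # ordering. The article's method replaces the Fourier transform with a dual-tree
      # complex wavelet transform.
      import numpy as np

      def iaaft(x, iters=100, seed=0):
          rng = np.random.default_rng(seed)
          amplitudes = np.abs(np.fft.rfft(x))
          sorted_vals = np.sort(x)
          y = rng.permutation(x)                          # random shuffle keeps the distribution
          for _ in range(iters):
              Y = np.fft.rfft(y)
              y = np.fft.irfft(amplitudes * np.exp(1j * np.angle(Y)), n=x.size)
              ranks = np.argsort(np.argsort(y))
              y = sorted_vals[ranks]                      # restore the exact original values
          return y

      rng = np.random.default_rng(8)
      x = np.cumsum(rng.standard_normal(512))             # correlated test series
      s = iaaft(x)
      print("same values:", np.allclose(np.sort(s), np.sort(x)))
      print("spectrum match:", np.corrcoef(np.abs(np.fft.rfft(s)), np.abs(np.fft.rfft(x)))[0, 1])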

  5. Accurate mask-based spatially regularized correlation filter for visual tracking

    NASA Astrophysics Data System (ADS)

    Gu, Xiaodong; Xu, Xinping

    2017-01-01

    Recently, discriminative correlation filter (DCF)-based trackers have achieved extremely successful results in many competitions and benchmarks. These methods utilize a periodic assumption on the training samples to efficiently learn a classifier. However, this assumption produces unwanted boundary effects, which severely degrade the tracking performance. Correlation filters with limited boundaries and spatially regularized DCFs were proposed to reduce boundary effects. However, these methods use a fixed mask or a predesigned weight function, respectively, which is unsuitable for large appearance variations. We propose an accurate mask-based spatially regularized correlation filter for visual tracking. Our augmented objective can reduce the boundary effect even under large appearance variations. In our algorithm, the masking matrix is converted into a regularization function that acts on the correlation filter in the frequency domain, which allows the algorithm to converge quickly. Our online tracking algorithm performs favorably against state-of-the-art trackers on the OTB-2015 benchmark in terms of efficiency, accuracy, and robustness.

  6. Improving HybrID: How to best combine indirect and direct encoding in evolutionary algorithms.

    PubMed

    Helms, Lucas; Clune, Jeff

    2017-01-01

    Many challenging engineering problems are regular, meaning solutions to one part of a problem can be reused to solve other parts. Evolutionary algorithms with indirect encoding perform better on regular problems because they reuse genomic information to create regular phenotypes. However, on problems that are mostly regular, but contain some irregularities, which describes most real-world problems, indirect encodings struggle to handle the irregularities, hurting performance. Direct encodings are better at producing irregular phenotypes, but cannot exploit regularity. An algorithm called HybrID combines the best of both: it first evolves with indirect encoding to exploit problem regularity, then switches to direct encoding to handle problem irregularity. While HybrID has been shown to outperform both indirect and direct encoding, its initial implementation required the manual specification of when to switch from indirect to direct encoding. In this paper, we test two new methods to improve HybrID by eliminating the need to manually specify this parameter. Auto-Switch-HybrID automatically switches from indirect to direct encoding when fitness stagnates. Offset-HybrID simultaneously evolves an indirect encoding with directly encoded offsets, eliminating the need to switch. We compare the original HybrID to these alternatives on three different problems with adjustable regularity. The results show that both Auto-Switch-HybrID and Offset-HybrID outperform the original HybrID on different types of problems, and thus offer more tools for researchers to solve challenging problems. The Offset-HybrID algorithm is particularly interesting because it suggests a path forward for automatically and simultaneously combining the best traits of indirect and direct encoding.

  7. Image denoising via fundamental anisotropic diffusion and wavelet shrinkage: a comparative study

    NASA Astrophysics Data System (ADS)

    Bayraktar, Bulent; Analoui, Mostafa

    2004-05-01

    Noise removal faces a challenge: keeping the image details. Resolving the dilemma of two purposes (smoothing and keeping image features intact) working against each other was an almost impossible task until anisotropic diffusion (AD) was formally introduced by Perona and Malik (PM). AD favors intra-region smoothing over inter-region smoothing in piecewise smooth images. Many authors regularized the original PM algorithm to overcome its drawbacks. We compared the performance of denoising using such 'fundamental' AD algorithms and one of the most powerful multiresolution tools available today, namely, wavelet shrinkage. The AD algorithms here are called 'fundamental' in the sense that the regularized versions center around the original PM algorithm with minor changes to the logic. The algorithms are tested with different noise types and levels. On top of the visual inspection, two mathematical metrics are used for performance comparison: signal-to-noise ratio (SNR) and the universal image quality index (UIQI). We conclude that some of the regularized versions of the PM algorithm (AD) perform comparably with wavelet shrinkage denoising. This saves a lot of computational power. With this conclusion, we applied the better-performing fundamental AD algorithms to a new imaging modality: Optical Coherence Tomography (OCT).
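
    The baseline of the comparison, the original Perona-Malik scheme, can be sketched compactly; the conduction function and parameters below are common textbook choices, not necessarily those used in the study.

      # Original Perona-Malik anisotropic diffusion with conduction g(s) = 1/(1 + (s/K)^2):
      # smoothing is suppressed across strong gradients (edges) and encouraged inside regions.
      import numpy as np

      def perona_malik(img, iters=50, K=0.1, lam=0.2):
          u = img.astype(float).copy()
          for _ in range(iters):
              dN = np.roll(u, -1, axis=0) - u
              dS = np.roll(u,  1, axis=0) - u
              dE = np.roll(u, -1, axis=1) - u
              dW = np.roll(u,  1, axis=1) - u
              u = u + lam * sum(d / (1.0 + (d / K) ** 2) for d in (dN, dS, dE, dW))
          return u

      rng = np.random.default_rng(9)
      img = np.zeros((64, 64)); img[:, 32:] = 1.0          # vertical step edge
      noisy = img + 0.1 * rng.standard_normal(img.shape)
      smoothed = perona_malik(noisy)
      print("column-wise std before/after:", noisy.std(axis=0).mean(), smoothed.std(axis=0).mean())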

  8. Gloria Richardson: Her Life and Work in SNCC.

    ERIC Educational Resources Information Center

    Cook, Melanie B.

    1988-01-01

    Gloria Richardson was one of the Black women activists whose work is largely unnoticed but who was an inspirational worker for civil rights. As the coordinator of the Cambridge branch of the Student Nonviolent Coordinating Committee, she challenged local racist laws and policies. (VM)

  9. Fast Quantitative Susceptibility Mapping with L1-Regularization and Automatic Parameter Selection

    PubMed Central

    Bilgic, Berkin; Fan, Audrey P.; Polimeni, Jonathan R.; Cauley, Stephen F.; Bianciardi, Marta; Adalsteinsson, Elfar; Wald, Lawrence L.; Setsompop, Kawin

    2014-01-01

    Purpose: To enable fast reconstruction of quantitative susceptibility maps with Total Variation penalty and automatic regularization parameter selection. Methods: ℓ1-regularized susceptibility mapping is accelerated by variable-splitting, which allows closed-form evaluation of each iteration of the algorithm by soft thresholding and FFTs. This fast algorithm also renders automatic regularization parameter estimation practical. A weighting mask derived from the magnitude signal can be incorporated to allow edge-aware regularization. Results: Compared to the nonlinear Conjugate Gradient (CG) solver, the proposed method offers 20× speed-up in reconstruction time. A complete pipeline including Laplacian phase unwrapping, background phase removal with SHARP filtering and ℓ1-regularized dipole inversion at 0.6 mm isotropic resolution is completed in 1.2 minutes using Matlab on a standard workstation, compared to 22 minutes using the Conjugate Gradient solver. This fast reconstruction allows estimation of regularization parameters with the L-curve method in 13 minutes, which would have taken 4 hours with the CG algorithm. The proposed method also permits magnitude-weighted regularization, which prevents smoothing across edges identified on the magnitude signal. This more complicated optimization problem is solved 5× faster than the nonlinear CG approach. The utility of the proposed method is also demonstrated in functional BOLD susceptibility mapping, where processing of the massive time-series dataset would otherwise be prohibitive with the CG solver. Conclusion: Online reconstruction of regularized susceptibility maps may become feasible with the proposed dipole inversion. PMID:24259479

  10. Nonsmooth, nonconvex regularizers applied to linear electromagnetic inverse problems

    NASA Astrophysics Data System (ADS)

    Hidalgo-Silva, H.; Gomez-Trevino, E.

    2017-12-01

    Tikhonov's regularization method is the standard technique applied to obtain models of the subsurface conductivity distribution from electric or electromagnetic measurements by minimizing U(m) = ||F(m) - d||^2 + λ P(m). The second term corresponds to the stabilizing functional, with P(m) = ||∇m||^2 the usual choice, and λ the regularization parameter. Due to the inclusion of the roughness penalizer, the model developed by Tikhonov's algorithm tends to smear discontinuities, a feature that may be undesirable. An important requirement for the regularizer is to allow the recovery of edges while smoothing the homogeneous parts. As is well known, Total Variation (TV) is now the standard approach to meet this requirement. Recently, Wang et al. proved convergence of the alternating direction method of multipliers in nonconvex, nonsmooth optimization. In this work we present a study of several algorithms for model recovery from geosounding data based on infimal convolution, and also on hybrid, TV, second-order TV, and nonsmooth, nonconvex regularizers, observing their performance on synthetic and real data. The algorithms are based on Bregman iteration and the split Bregman method, and the geosounding method is the low-induction-number magnetic dipole method. Nonsmooth regularizers are handled using the Legendre-Fenchel transform.

  11. Computerized tomography with total variation and with shearlets

    NASA Astrophysics Data System (ADS)

    Garduño, Edgar; Herman, Gabor T.

    2017-04-01

    To reduce the x-ray dose in computerized tomography (CT), many constrained optimization approaches have been proposed aiming at minimizing a regularizing function that measures a lack of consistency with some prior knowledge about the object that is being imaged, subject to a (predetermined) level of consistency with the detected attenuation of x-rays. One commonly investigated regularizing function is total variation (TV), while other publications advocate the use of some type of multiscale geometric transform in the definition of the regularizing function; a particular recent choice for this is the shearlet transform. Proponents of the shearlet transform in the regularizing function claim that the reconstructions so obtained are better than those produced using TV for texture preservation (but may be worse for noise reduction). In this paper we report results related to this claim. In our reported experiments using simulated CT data collection of the head, reconstructions whose shearlet transform has a small ℓ1-norm are not more efficacious than reconstructions that have a small TV value. Our experiments for making such comparisons use the recently developed superiorization methodology for both regularizing functions. Superiorization is an automated procedure for turning an iterative algorithm for producing images that satisfy a primary criterion (such as consistency with the observed measurements) into its superiorized version that will produce results that, according to the primary criterion, are as good as those produced by the original algorithm, but in addition are superior to them according to a secondary (regularizing) criterion. The method presented for superiorization involving the ℓ1-norm of the shearlet transform is novel and is quite general: it can be used for any regularizing function that is defined as the ℓ1-norm of a transform specified by the application of a matrix. Because in the previous literature the split Bregman algorithm is used for similar purposes, a section is included comparing the results of the superiorization algorithm with the split Bregman algorithm.
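
    The superiorization idea summarised above can be sketched generically: interleave bounded, shrinking perturbations that do not increase a secondary (regularizing) criterion with the iterations of the basic algorithm for the primary criterion. The snippet below is such a sketch under assumed interfaces (algorithmic_step, phi, grad_phi supplied by the user) and an assumed geometric steplength schedule; it is not the paper's exact procedure.

      import numpy as np

      def superiorize(x0, algorithmic_step, phi, grad_phi, n_outer=50, a=0.99):
          # x0: initial image; algorithmic_step: one iteration of the basic algorithm
          # phi, grad_phi: secondary criterion (e.g. TV) and its (sub)gradient
          x = x0.copy()
          k = 0
          for _ in range(n_outer):
              g = grad_phi(x)
              nrm = np.linalg.norm(g)
              if nrm > 0:
                  candidate = x - (a**k) * g / nrm        # bounded perturbation
                  if phi(candidate) <= phi(x):            # accept only non-ascent steps
                      x = candidate
                  k += 1
              x = algorithmic_step(x)                     # step of the basic algorithm
          return x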

  12. Unsupervised seismic facies analysis with spatial constraints using regularized fuzzy c-means

    NASA Astrophysics Data System (ADS)

    Song, Chengyun; Liu, Zhining; Cai, Hanpeng; Wang, Yaojun; Li, Xingming; Hu, Guangmin

    2017-12-01

    Seismic facies analysis techniques combine classification algorithms and seismic attributes to generate a map that describes the main reservoir heterogeneities. However, most current classification algorithms view the seismic attributes as isolated data points regardless of their spatial locations, and the resulting map is generally sensitive to noise. In this paper, a regularized fuzzy c-means (RegFCM) algorithm is used for unsupervised seismic facies analysis. Due to the regularization term of the RegFCM algorithm, data whose adjacent locations belong to the same class play a more important role in the iterative process than other data. Therefore, this method can reduce the effect of seismic noise, which appears as discontinuous regions in the map. Synthetic data with different signal-to-noise ratios are used to demonstrate the noise tolerance of the RegFCM algorithm. Meanwhile, the fuzzy factor, the neighbour window size and the regularization weight are tested over a range of values, to provide a reference for how to set these parameters. The new approach is also applied to a real seismic data set from the F3 block of the Netherlands. The results show improved spatial continuity, with clear facies boundaries and channel morphology, which reveals that the method is an effective seismic facies analysis tool.
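
    One simple way to realise the spatial constraint described above is to follow each standard fuzzy c-means update with a neighbourhood smoothing of the memberships, so that samples whose neighbours agree on a class are reinforced. The sketch below takes that route; it illustrates the general idea only and is not the RegFCM objective, and the mixing weight, window size, and other parameters are assumptions.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def spatially_smoothed_fcm(attr, n_classes=4, m=2.0, alpha=0.5, n_iter=30, rng=None):
          # attr: 2-D map of a single seismic attribute
          rng = np.random.default_rng() if rng is None else rng
          h, w = attr.shape
          x = attr.reshape(-1, 1).astype(float)
          u = rng.random((x.shape[0], n_classes))
          u /= u.sum(axis=1, keepdims=True)
          for _ in range(n_iter):
              um = u**m
              centers = (um.T @ x) / um.sum(axis=0)[:, None]          # centroid update
              d2 = (x - centers.T)**2 + 1e-12                          # squared distances (N, C)
              u = 1.0 / d2**(1.0/(m-1.0))
              u /= u.sum(axis=1, keepdims=True)                        # standard FCM memberships
              # spatial constraint: blend each membership with its 3x3 neighbourhood mean
              u_img = u.reshape(h, w, n_classes)
              u_img = (1.0-alpha)*u_img + alpha*uniform_filter(u_img, size=(3, 3, 1))
              u = u_img.reshape(-1, n_classes)
              u /= u.sum(axis=1, keepdims=True)
          return u.argmax(axis=1).reshape(h, w)                        # facies map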

  13. A Distributed Learning Method for ℓ1-Regularized Kernel Machine over Wireless Sensor Networks

    PubMed Central

    Ji, Xinrong; Hou, Cuiqin; Hou, Yibin; Gao, Fang; Wang, Shulong

    2016-01-01

    In wireless sensor networks, centralized learning methods have very high communication costs and energy consumption. These are caused by the need to transmit scattered training examples from various sensor nodes to the central fusion center where a classifier or a regression machine is trained. To reduce the communication cost, a distributed learning method for a kernel machine that incorporates ℓ1 norm regularization (ℓ1-regularized) is investigated, and a novel distributed learning algorithm for the ℓ1-regularized kernel minimum mean squared error (KMSE) machine is proposed. The proposed algorithm relies on in-network processing and a collaboration that transmits the sparse model only between single-hop neighboring nodes. This paper evaluates the proposed algorithm with respect to the prediction accuracy, the sparsity of the model, the communication cost and the number of iterations on synthetic and real datasets. The simulation results show that the proposed algorithm can obtain approximately the same prediction accuracy as that obtained by the batch learning method. Moreover, it is significantly superior in terms of model sparsity and communication cost, and it can converge with fewer iterations. Finally, an experiment conducted on a wireless sensor network (WSN) test platform further shows the advantages of the proposed algorithm with respect to communication cost. PMID:27376298

  14. A fast hybrid algorithm combining regularized motion tracking and predictive search for reducing the occurrence of large displacement errors.

    PubMed

    Jiang, Jingfeng; Hall, Timothy J

    2011-04-01

    A hybrid approach that inherits both the robustness of the regularized motion tracking approach and the efficiency of the predictive search approach is reported. The basic idea is to use regularized speckle tracking to obtain high-quality seeds in an explorative search that can be used in the subsequent intelligent predictive search. The performance of the hybrid speckle-tracking algorithm was compared with three published speckle-tracking methods using in vivo breast lesion data. We found that the hybrid algorithm provided higher displacement quality metric values, lower root mean squared errors compared with a locally smoothed displacement field, and higher improvement ratios compared with the classic block-matching algorithm. On the basis of these comparisons, we concluded that the hybrid method can further enhance the accuracy of speckle tracking compared with its real-time counterparts, at the expense of slightly higher computational demands. © 2011 IEEE

  15. Optimization Review, Optimization Review, Sidney and Richardson Hill Road Landfills, Delaware County, New York

    EPA Pesticide Factsheets

    The Sidney Landfill site is located on Richardson Hill Road approximately 10 miles southeast of Sidney, New York. In March 1989, the site was added to the National Priorities List (NPL) based on investigations completed by the New York State Department...

  16. Trends in abundance of collared lemmings near Cape Churchill, Manitoba, Canada

    USGS Publications Warehouse

    Reiter, M.E.; Andersen, D.E.

    2008-01-01

    Regular, multiannual cycles observed in the population abundance of small mammals in many arctic and subarctic ecosystems have stimulated substantial research, particularly among population ecologists. Hypotheses of mechanisms generating regular cycles include predator-prey interactions, limitation of food resources, and migration or dispersal, as well as abiotic factors such as cyclic climatic variation and environmental stochasticity. In 2004 and 2005, we used indirect methods to estimate trends in population size of Richardson's collared lemmings (Dicrostonyx richardsoni) retrospectively, and evaluated the extent of synchrony between lemming populations at 2 coastal tundra study areas separated by approximately 60 km near Cape Churchill, Manitoba, Canada. We collected scars on willow plants (Salix) resulting from lemming feeding. Ages of scars ranged from 0 to 13 years at both study areas. Scar-age frequency appeared cyclic and we used nonlinear Poisson regression to model the observed scar-age frequency. Lemming populations cycled with 2.8-year periodicity and the phase of the cycle was synchronous between the 2 study areas. We suggest that our approach could be applied in multiple settings and may provide the most efficient way to gather data on small mammals across both space and time in a diversity of landscapes. ?? 2008 American Society of Mammalogists.

  17. Low-dose CT reconstruction via L1 dictionary learning regularization using iteratively reweighted least-squares.

    PubMed

    Zhang, Cheng; Zhang, Tao; Li, Ming; Peng, Chengtao; Liu, Zhaobang; Zheng, Jian

    2016-06-18

    In order to reduce the radiation dose of CT (computed tomography), compressed sensing theory has been a hot topic since it provides the possibility of a high quality recovery from sparsely sampled data. Recently, an algorithm based on DL (dictionary learning) was developed to deal with the sparse CT reconstruction problem. However, the existing DL algorithm focuses on the minimization problem with an L2-norm regularization term, which leads to deteriorating reconstruction quality as the sampling rate declines further. Therefore, it is essential to improve the DL method to meet the demand for further dose reduction. In this paper, we replaced the L2-norm regularization term with an L1-norm one. It is expected that the proposed L1-DL method could alleviate the over-smoothing effect of the L2-minimization and preserve more image detail. The proposed algorithm solves the L1-minimization problem by a weighting strategy, solving the new weighted L2-minimization problem based on IRLS (iteratively reweighted least squares). Through numerical simulation, the proposed algorithm is compared with the existing DL method (adaptive dictionary based statistical iterative reconstruction, ADSIR) and two other typical compressed sensing algorithms. It is revealed that the proposed algorithm is more accurate than the other algorithms, especially when further reducing the sampling rate or increasing the noise. The proposed L1-DL algorithm can utilize more prior information of image sparsity than ADSIR. By transforming the L2-norm regularization term of ADSIR into an L1-norm one and solving the L1-minimization problem by the IRLS strategy, L1-DL can reconstruct the image more accurately.
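
    The IRLS weighting strategy mentioned above is easy to illustrate on a generic sparse recovery problem: the L1 penalty is replaced by a reweighted L2 penalty with weights 1/(|x_i| + eps), and a weighted least-squares problem is solved at each iteration. The sketch below shows only this generic pattern, not the paper's dictionary-learning reconstruction; the matrix A, data b, and parameters are assumptions.

      import numpy as np

      def irls_l1(A, b, lam, n_iter=30, eps=1e-6):
          # approximately minimise ||A x - b||^2 + lam * ||x||_1 by IRLS
          n = A.shape[1]
          x = np.zeros(n)
          for _ in range(n_iter):
              w = 1.0 / (np.abs(x) + eps)              # IRLS weights: lam*w_i*x_i^2 ~ lam*|x_i|
              H = A.T @ A + lam * np.diag(w)           # weighted L2 normal equations
              x = np.linalg.solve(H, A.T @ b)
          return x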

  18. Using Interactive Whiteboards to Enhance Mathematics Teaching

    ERIC Educational Resources Information Center

    Kent, Peter

    2006-01-01

    Over the past three years, Richardson Primary School has transformed its entire educational program based around the widespread introduction of interactive whiteboards (IWBs) into the school. A review of this initiative states that "Richardson is the first school in the ACT, and probably Australia, where the total school community, the…

  19. Improving HybrID: How to best combine indirect and direct encoding in evolutionary algorithms

    PubMed Central

    Helms, Lucas; Clune, Jeff

    2017-01-01

    Many challenging engineering problems are regular, meaning solutions to one part of a problem can be reused to solve other parts. Evolutionary algorithms with indirect encoding perform better on regular problems because they reuse genomic information to create regular phenotypes. However, on problems that are mostly regular, but contain some irregularities, which describes most real-world problems, indirect encodings struggle to handle the irregularities, hurting performance. Direct encodings are better at producing irregular phenotypes, but cannot exploit regularity. An algorithm called HybrID combines the best of both: it first evolves with indirect encoding to exploit problem regularity, then switches to direct encoding to handle problem irregularity. While HybrID has been shown to outperform both indirect and direct encoding, its initial implementation required the manual specification of when to switch from indirect to direct encoding. In this paper, we test two new methods to improve HybrID by eliminating the need to manually specify this parameter. Auto-Switch-HybrID automatically switches from indirect to direct encoding when fitness stagnates. Offset-HybrID simultaneously evolves an indirect encoding with directly encoded offsets, eliminating the need to switch. We compare the original HybrID to these alternatives on three different problems with adjustable regularity. The results show that both Auto-Switch-HybrID and Offset-HybrID outperform the original HybrID on different types of problems, and thus offer more tools for researchers to solve challenging problems. The Offset-HybrID algorithm is particularly interesting because it suggests a path forward for automatically and simultaneously combining the best traits of indirect and direct encoding. PMID:28334002
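
    A minimal sketch of the switching test suggested by the Auto-Switch-HybrID description follows: switch from the indirect to the direct encoding once the best fitness has stopped improving over a window of generations. The window length and tolerance are illustrative assumptions, not values from the paper.

      def should_switch_to_direct(best_fitness_history, window=20, tol=1e-6):
          # best_fitness_history: best-so-far fitness per generation (to be maximised)
          if len(best_fitness_history) <= window:
              return False
          return best_fitness_history[-1] - best_fitness_history[-1 - window] <= tol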

  20. Introduction of Total Variation Regularization into Filtered Backprojection Algorithm

    NASA Astrophysics Data System (ADS)

    Raczyński, L.; Wiślicki, W.; Klimaszewski, K.; Krzemień, W.; Kowalski, P.; Shopa, R. Y.; Białas, P.; Curceanu, C.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gorgol, M.; Hiesmayr, B.; Jasińska, B.; Kisielewska-Kamińska, D.; Korcyl, G.; Kozik, T.; Krawczyk, N.; Kubicz, E.; Mohammed, M.; Pawlik-Niedźwiecka, M.; Niedźwiecki, S.; Pałka, M.; Rudy, Z.; Sharma, N. G.; Sharma, S.; Silarski, M.; Skurzok, M.; Wieczorek, A.; Zgardzińska, B.; Zieliński, M.; Moskal, P.

    In this paper we extend the state-of-the-art filtered backprojection (FBP) method with application of the concept of Total Variation regularization. We compare the performance of the new algorithm with the most common form of regularization in FBP image reconstruction, namely apodizing functions. The methods are validated in terms of the cross-correlation coefficient between the reconstructed and the real image of the radioactive tracer distribution, using a standard Derenzo-type phantom. We demonstrate that the proposed approach results in higher cross-correlation values with respect to the standard FBP method.

  1. Image Reconstruction from Under sampled Fourier Data Using the Polynomial Annihilation Transform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archibald, Richard K.; Gelb, Anne; Platte, Rodrigo

    Fourier samples are collected in a variety of applications including magnetic resonance imaging and synthetic aperture radar. The data are typically under-sampled and noisy. In recent years, ℓ1 regularization has received considerable attention in designing image reconstruction algorithms from under-sampled and noisy Fourier data. The underlying image is assumed to have some sparsity features, that is, some measurable features of the image have sparse representation. The reconstruction algorithm is typically designed to solve a convex optimization problem, which consists of a fidelity term penalized by one or more ℓ1 regularization terms. The Split Bregman Algorithm provides a fast explicit solution for the case when TV is used for the ℓ1 regularization terms. Due to its numerical efficiency, it has been widely adopted for a variety of applications. A well known drawback in using TV as an ℓ1 regularization term is that the reconstructed image will tend to default to a piecewise constant image. This issue has been addressed in several ways. Recently, the polynomial annihilation edge detection method was used to generate a higher order sparsifying transform, and was coined the “polynomial annihilation (PA) transform.” This paper adapts the Split Bregman Algorithm for the case when the PA transform is used as the ℓ1 regularization term. In so doing, we achieve a more accurate image reconstruction method from under-sampled and noisy Fourier data. Our new method compares favorably to the TV Split Bregman Algorithm, as well as to the popular TGV combined with shearlet approach.
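
    The algorithmic pattern of the Split Bregman method can be shown on a much smaller problem than the one treated above. The sketch below applies it to 1-D TV denoising, with a splitting variable for the finite differences and a Bregman variable; it illustrates the splitting/shrinkage/Bregman-update cycle only, not the paper's Fourier-data reconstruction with the PA transform, and all parameter values are assumptions.

      import numpy as np

      def split_bregman_tv1d(f, lam=1.0, mu=5.0, n_iter=50):
          # minimise ||D u||_1 + lam/2 * ||u - f||^2 with splitting d ~ D u
          n = f.size
          D = np.diff(np.eye(n), axis=0)                 # forward-difference operator
          A = lam*np.eye(n) + mu*(D.T @ D)
          u = f.astype(float).copy()
          d = np.zeros(n-1)
          b = np.zeros(n-1)
          shrink = lambda x, t: np.sign(x)*np.maximum(np.abs(x)-t, 0.0)
          for _ in range(n_iter):
              u = np.linalg.solve(A, lam*f + mu*(D.T @ (d - b)))   # quadratic u-update
              d = shrink(D @ u + b, 1.0/mu)                        # shrinkage (d-update)
              b = b + D @ u - d                                    # Bregman update
          return u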

  2. Gene selection in cancer classification using sparse logistic regression with Bayesian regularization.

    PubMed

    Cawley, Gavin C; Talbot, Nicola L C

    2006-10-01

    Gene selection algorithms for cancer classification, based on the expression of a small number of biomarker genes, have been the subject of considerable research in recent years. Shevade and Keerthi propose a gene selection algorithm based on sparse logistic regression (SLogReg) incorporating a Laplace prior to promote sparsity in the model parameters, and provide a simple but efficient training procedure. The degree of sparsity obtained is determined by the value of a regularization parameter, which must be carefully tuned in order to optimize performance. This normally involves a model selection stage, based on a computationally intensive search for the minimizer of the cross-validation error. In this paper, we demonstrate that a simple Bayesian approach can be taken to eliminate this regularization parameter entirely, by integrating it out analytically using an uninformative Jeffreys prior. The improved algorithm (BLogReg) is then typically two or three orders of magnitude faster than the original algorithm, as there is no longer a need for a model selection step. The BLogReg algorithm is also free from selection bias in performance estimation, a common pitfall in the application of machine learning algorithms in cancer classification. The SLogReg, BLogReg and Relevance Vector Machine (RVM) gene selection algorithms are evaluated over the well-studied colon cancer and leukaemia benchmark datasets. The leave-one-out estimates of the probability of test error and cross-entropy of the BLogReg and SLogReg algorithms are very similar; however, the BLogReg algorithm is found to be considerably faster than the original SLogReg algorithm. Using nested cross-validation to avoid selection bias, performance estimation for SLogReg on the leukaemia dataset takes almost 48 h, whereas the corresponding result for BLogReg is obtained in only 1 min 24 s, making BLogReg by far the more practical algorithm. BLogReg also demonstrates better estimates of conditional probability than the RVM, which are of great importance in medical applications, with similar computational expense. A MATLAB implementation of the sparse logistic regression algorithm with Bayesian regularization (BLogReg) is available from http://theoval.cmp.uea.ac.uk/~gcc/cbl/blogreg/

  3. Motion-adaptive spatio-temporal regularization for accelerated dynamic MRI.

    PubMed

    Asif, M Salman; Hamilton, Lei; Brummer, Marijn; Romberg, Justin

    2013-09-01

    Accelerated magnetic resonance imaging techniques reduce signal acquisition time by undersampling k-space. A fundamental problem in accelerated magnetic resonance imaging is the recovery of quality images from undersampled k-space data. Current state-of-the-art recovery algorithms exploit the spatial and temporal structures in underlying images to improve the reconstruction quality. In recent years, compressed sensing theory has helped formulate mathematical principles and conditions that ensure recovery of (structured) sparse signals from undersampled, incoherent measurements. In this article, a new recovery algorithm, motion-adaptive spatio-temporal regularization, is presented that uses spatial and temporal structured sparsity of MR images in the compressed sensing framework to recover dynamic MR images from highly undersampled k-space data. In contrast to existing algorithms, our proposed algorithm models temporal sparsity using motion-adaptive linear transformations between neighboring images. The efficiency of motion-adaptive spatio-temporal regularization is demonstrated with experiments on cardiac magnetic resonance imaging for a range of reduction factors. Results are also compared with k-t FOCUSS with motion estimation and compensation, another recently proposed recovery algorithm for dynamic magnetic resonance imaging. Copyright © 2012 Wiley Periodicals, Inc.

  4. An overview and the current status of instrumentation at the Large Binocular Telescope Observatory

    NASA Astrophysics Data System (ADS)

    Wagner, R. Mark; Edwards, Michelle L.; Kuhn, Olga; Thompson, David; Veillet, Christian

    2014-07-01

    An overview of instrumentation for the Large Binocular Telescope (LBT) is presented. Optical instrumentation includes the Large Binocular Camera (LBC), a pair of wide-field (24' × 24') mosaic CCD imagers at the prime focus, and the Multi-Object Double Spectrograph (MODS), a pair of dual-beam blue-red optimized long-slit spectrographs mounted at the left and right direct F/15 Gregorian foci incorporating multiple slit masks for multi-object spectroscopy over a 6' field and spectral resolutions of up to 2000. Infrared instrumentation includes the LBT Near-IR Spectrometer (LUCI), a modular near-infrared (0.9-2.5 μm) imager and spectrograph pair mounted at the left and right front-bent F/15 Gregorian foci and designed for seeing-limited (FOV: 4' × 4') imaging, long-slit spectroscopy, and multi-object spectroscopy utilizing cooled slit masks, and diffraction-limited (FOV: 0.5' × 0.5') imaging and long-slit spectroscopy. Strategic instruments under development that can utilize the full 23 m baseline of the LBT include an interferometric cryogenic beam combiner with near-infrared and thermal-infrared instruments for Fizeau imaging and nulling interferometry (LBTI) and an optical bench near-infrared beam combiner utilizing multi-conjugate adaptive optics for high angular resolution and sensitivity (LINC-NIRVANA). LBTI is currently undergoing commissioning and performing science observations on the LBT utilizing the installed adaptive secondary mirrors in both single-sided and two-sided beam combination modes. In addition, a fiber-fed bench spectrograph (PEPSI) capable of ultra-high-resolution spectroscopy and spectropolarimetry (R = 40,000-300,000) will be available as a principal investigator instrument. Installation and testing of the bench spectrograph will begin in July 2014. Over the past four years the LBC pair, LUCI1, and MODS1 have been commissioned and are now scheduled for routine partner science observations. Both LUCI2 and MODS2 passed their laboratory acceptance milestones in the summer of 2013 and have been installed on the LBT. LUCI2 is currently being commissioned and the data analysis is well underway. Diffraction-limited commissioning of its adaptive optics modes will begin in the 2014B semester. MODS2 commissioning began in May 2014 and will be completed in the 2014B semester as well. Binocular testing and commissioning of both the LUCI and MODS pairs will begin in 2014B with the goal that this capability could be offered sometime in 2015. The availability of all these instruments mounted simultaneously on the LBT permits unique science, flexible scheduling, and improved operational support.

  5. A Tour of a New Paradigm: Relationships and Work

    ERIC Educational Resources Information Center

    Blustein, David L.; Medvide, Mary Beth; Kozan, Saliha

    2012-01-01

    The authors provide a reaction to the Major Contribution by Richardson in this issue of "The Counseling Psychologist" on the counseling for work and relationships perspective. The authors examine the trajectory of Richardson's work, beginning with her seminal article in 1993, which set the stage for a new paradigm for vocational psychology.…

  6. Debugging and Analysis of Large-Scale Parallel Programs

    DTIC Science & Technology

    1989-09-01

    Przybylski, T. Riordan, C. Rowen, and D. Van’t Hof, "A CMOS RISC Processor with Integrated System Functions," In Proc. of the 1986 COMPCON. IEEE, March 1986...Sequencers," Communications of the ACM, 22(2):115-123, 1979. [Richardson, 1988] Rick Richardson, "Dhrystone 2.1 Benchmark," Usenet Distribution

  7. Discovering Art through Science: Elwyn Richardson's Environmental Curriculum

    ERIC Educational Resources Information Center

    MacDonald, Margaret

    2016-01-01

    Elwyn Richardson's work at Oruaiti School from 1949 to 1962 has been almost exclusively interpreted as a unique experiment in art and craft education, partially as a result of the impact of his book, "In The Early World." The book is viewed as evidence of innovative departmental policies that allowed teachers wide latitude for…

  8. Holocene sedimentation in Richardson Bay, California

    USGS Publications Warehouse

    Connor, Cathy L.

    1983-01-01

    Examination of foraminifers, diatoms, ostracodes, clay mineralogy, and sediment-size variation from 9 borehole sites along the salt-marsh margins of Richardson Bay reveals a record of gradual infilling of fine-grained estuarine sediments. Over the past 10,000 years this area was transformed from a V-shaped Pleistocene stream valley to a flat-floored arm of the San Francisco Bay estuary. A radiocarbon date obtained from a basal peat overlying nonmarine alluvial sand near the town of Mill Valley indicates that stable salt-marsh vegetation was present in the northwestern arm of Richardson Bay 4600±165 years ago and agrees within error limits with a Holocene sea-level curve developed by Atwater, Hedel, and Helley in 1977 for southern San Francisco Bay. The average sedimentation rate over the last 4600 years is estimated to be 0.2 cm/yr for the inner part of the bay. Comparison of early maps with updated versions as well as studies of marsh plant zonations in disturbed and nondisturbed areas shows that almost half of the marsh in Richardson Bay has been leveed or filled since 1899.

  9. Comparison of GEOS-5 AGCM planetary boundary layer depths computed with various definitions

    NASA Astrophysics Data System (ADS)

    McGrath-Spangler, E. L.; Molod, A.

    2014-07-01

    Accurate models of planetary boundary layer (PBL) processes are important for forecasting weather and climate. The present study compares seven methods of calculating PBL depth in the GEOS-5 atmospheric general circulation model (AGCM) over land. These methods depend on the eddy diffusion coefficients, bulk and local Richardson numbers, and the turbulent kinetic energy. The computed PBL depths are aggregated to the Köppen-Geiger climate classes, and some limited comparisons are made using radiosonde profiles. Most methods produce similar midday PBL depths, although in the warm, moist climate classes the bulk Richardson number method gives midday results that are lower than those given by the eddy diffusion coefficient methods. Additional analysis revealed that methods sensitive to turbulence driven by radiative cooling produce greater PBL depths, this effect being most significant during the evening transition. Nocturnal PBLs based on Richardson number methods are generally shallower than eddy diffusion coefficient based estimates. The bulk Richardson number estimate is recommended as the PBL height to inform the choice of the turbulent length scale, based on the similarity to other methods during the day, and the improved nighttime behavior.
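
    The bulk Richardson number diagnostic recommended above can be sketched for a single sounding: compute Ri_b(z) = g (theta_v(z) - theta_v,s)(z - z_s) / (theta_v,s [(u - u_s)^2 + (v - v_s)^2]) and take the PBL top as the first level where Ri_b exceeds a critical value. The critical value of 0.25 and the surface-level definitions below are common illustrative choices, not the GEOS-5 settings.

      import numpy as np

      def bulk_richardson_pbl_height(z, theta_v, u, v, ri_crit=0.25):
          # z, theta_v, u, v: 1-D profiles from the surface upward
          g = 9.81
          zs, ths, us, vs = z[0], theta_v[0], u[0], v[0]
          shear2 = np.maximum((u - us)**2 + (v - vs)**2, 1e-6)   # avoid division by zero
          rib = g * (theta_v - ths) * (z - zs) / (ths * shear2)
          above = np.where(rib > ri_crit)[0]
          return z[above[0]] if above.size else z[-1]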

  10. Comparison of GEOS-5 AGCM Planetary Boundary Layer Depths Computed with Various Definitions

    NASA Technical Reports Server (NTRS)

    Mcgrath-Spangler, E. L.; Molod, A.

    2014-01-01

    Accurate models of planetary boundary layer (PBL) processes are important for forecasting weather and climate. The present study compares seven methods of calculating PBL depth in the GEOS-5 atmospheric general circulation model (AGCM) over land. These methods depend on the eddy diffusion coefficients, bulk and local Richardson numbers, and the turbulent kinetic energy. The computed PBL depths are aggregated to the Köppen climate classes, and some limited comparisons are made using radiosonde profiles. Most methods produce similar midday PBL depths, although in the warm, moist climate classes, the bulk Richardson number method gives midday results that are lower than those given by the eddy diffusion coefficient methods. Additional analysis revealed that methods sensitive to turbulence driven by radiative cooling produce greater PBL depths, this effect being most significant during the evening transition. Nocturnal PBLs based on Richardson number are generally shallower than eddy diffusion coefficient based estimates. The bulk Richardson number estimate is recommended as the PBL height to inform the choice of the turbulent length scale, based on the similarity to other methods during the day, and the improved nighttime behavior.

  11. Comparison of GEOS-5 AGCM planetary boundary layer depths computed with various definitions

    NASA Astrophysics Data System (ADS)

    McGrath-Spangler, E. L.; Molod, A.

    2014-03-01

    Accurate models of planetary boundary layer (PBL) processes are important for forecasting weather and climate. The present study compares seven methods of calculating PBL depth in the GEOS-5 atmospheric general circulation model (AGCM) over land. These methods depend on the eddy diffusion coefficients, bulk and local Richardson numbers, and the turbulent kinetic energy. The computed PBL depths are aggregated to the Köppen climate classes, and some limited comparisons are made using radiosonde profiles. Most methods produce similar midday PBL depths, although in the warm, moist climate classes, the bulk Richardson number method gives midday results that are lower than those given by the eddy diffusion coefficient methods. Additional analysis revealed that methods sensitive to turbulence driven by radiative cooling produce greater PBL depths, this effect being most significant during the evening transition. Nocturnal PBLs based on Richardson number are generally shallower than eddy diffusion coefficient based estimates. The bulk Richardson number estimate is recommended as the PBL height to inform the choice of the turbulent length scale, based on the similarity to other methods during the day, and the improved nighttime behavior.

  12. Impacts of rural land-use on overland flow and sediment transport

    NASA Astrophysics Data System (ADS)

    Fraser, S. L.; Jackson, B. M.; Norton, K. P.

    2013-12-01

    The loss of fertile topsoil over time, due to erosive processes, could have a major impact on New Zealand's economy as well as being devastating to individual land owners. Improved management of land use is needed to provide protection of soil from erosion by overland flow and aeolian processes. Effects of soil erosion and sedimentation result in an annual nationwide cost of NZ$123 million. Many previous New Zealand studies have focused on large scale soil movement from land sliding and gully erosion, including identifying risk areas. However, long term small scale erosion and degradation has been largely overlooked in the literature. Although small scale soil erosion is less apparent than mass movement, cumulative small scale soil loss over many years may have a significant impact on future land productivity. One approach to assessing the role of soil degradation is through the application of landscape models. Due to the time consuming collection of data and limited scales over which data can be collected, many models created are unique to a particular land type, land use or locality. Collection of additional datasets can broaden the use of such models by informing model representation and enhancing parameterisation. The Land Use Capability Index (LUCI), developed by Jackson et al. (2013), is an example of a model that will benefit from additional data sets. LUCI is a multi-criteria GIS tool, designed to inform land management decisions by identifying areas of potential change, based on land characteristics and land use options. LUCI topographically routes overland flow and sediment using existing land characteristic maps and additionally incorporating sub-field scale data. The model then has the ability to utilise these data to enhance prediction at landscape scale. This study focuses on the influence of land use on small scale sediment transport and enhancing process representation and parameterisation to improve the predictive ability of models such as LUCI. Data are currently being collected in a small catchment at the foothills of the Tararua ranges, lower North Island of New Zealand. Gerlach traps are utilised in a step-like array on a number of hillslopes to provide a comprehensive dataset of overland flow and sediment volume for different magnitude rainfall events. ArcGIS is used to calculate a contributing area for each trap. The study provides quantitative data linking overland flow to event magnitude for the rural land uses of pasture versus regenerating native forest at multiple slope angles. These data, along with measured soil depth/slope relationships and stream monitoring data, are used to inform process representation and parameterisation of LUCI at hillslope scale. LUCI is then used to explore implications at landscape scale. The data and modelling are intended to provide information to help in long-term land management decisions. Jackson, B., Pagella, T., Sinclair, F., Orellana, B., Henshaw, A., Reynolds, B., McIntyre, N., Wheater, H., and Eycott, A. 2013. Polyscape: A GIS mapping framework providing efficient and spatially explicit landscape-scale valuation of multiple ecosystem services. Landscape and Urban Planning, 112(0): 74-88

  13. Concentration Measurements in Self-Excited Momentum Dominated Low-Density Gas Jets

    NASA Technical Reports Server (NTRS)

    Yildirim, B. S.; Pasumarthi, K. S.; Agrawal, A. K.

    2004-01-01

    The flow structure of self-excited, laminar, axisymmetric, momentum-dominated helium jets discharged vertically into ambient air was investigated using a high-speed rainbow schlieren deflectometry technique. Measurements were obtained at a temporal resolution of 1 ms and a spatial resolution of 0.19 mm for two test cases with Richardson numbers of 0.034 and 0.018. Power spectra revealed that the oscillation frequency was independent of spatial coordinates, suggesting global oscillations in the flow. An Abel inversion algorithm was used to reconstruct the concentration field of helium. Instantaneous concentration contours revealed changes in the flow field and the evolution of vortical structures during an oscillation cycle. Temporal evolution plots of helium concentration at different axial locations provided detailed information about the instability in the flow field.

  14. Computationally efficient finite-difference modal method for the solution of Maxwell's equations.

    PubMed

    Semenikhin, Igor; Zanuccoli, Mauro

    2013-12-01

    In this work, a new implementation of the finite-difference (FD) modal method (FDMM) based on an iterative approach to calculate the eigenvalues and corresponding eigenfunctions of the Helmholtz equation is presented. Two relevant enhancements that significantly increase the speed and accuracy of the method are introduced. First of all, the solution of the complete eigenvalue problem is avoided in favor of finding only the meaningful part of eigenmodes by using iterative methods. Second, a multigrid algorithm and Richardson extrapolation are implemented. Simultaneous use of these techniques leads to an enhancement in terms of accuracy, which allows a simple method such as the FDMM with a typical three-point difference scheme to be significantly competitive with an analytical modal method.
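
    Richardson extrapolation, one of the two enhancements mentioned above, combines two approximations computed at step sizes h and h/2 to cancel the leading O(h^p) error term. The sketch below shows the generic combination and a small usage example on a central difference; it is not tied to the FDMM eigenvalue computation itself.

      import math

      def richardson_extrapolate(f_h, f_h2, p=2):
          # f_h, f_h2: approximations at steps h and h/2 with leading error O(h^p)
          return (2**p * f_h2 - f_h) / (2**p - 1)

      # usage example: second-order central difference of sin at 0 (exact derivative = 1)
      central = lambda h: (math.sin(h) - math.sin(-h)) / (2*h)
      improved = richardson_extrapolate(central(0.1), central(0.05), p=2)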

  15. Discovering Structural Regularity in 3D Geometry

    PubMed Central

    Pauly, Mark; Mitra, Niloy J.; Wallner, Johannes; Pottmann, Helmut; Guibas, Leonidas J.

    2010-01-01

    We introduce a computational framework for discovering regular or repeated geometric structures in 3D shapes. We describe and classify possible regular structures and present an effective algorithm for detecting such repeated geometric patterns in point- or mesh-based models. Our method assumes no prior knowledge of the geometry or spatial location of the individual elements that define the pattern. Structure discovery is made possible by a careful analysis of pairwise similarity transformations that reveals prominent lattice structures in a suitable model of transformation space. We introduce an optimization method for detecting such uniform grids specifically designed to deal with outliers and missing elements. This yields a robust algorithm that successfully discovers complex regular structures amidst clutter, noise, and missing geometry. The accuracy of the extracted generating transformations is further improved using a novel simultaneous registration method in the spatial domain. We demonstrate the effectiveness of our algorithm on a variety of examples and show applications to compression, model repair, and geometry synthesis. PMID:21170292

  16. Limited angle CT reconstruction by simultaneous spatial and Radon domain regularization based on TV and data-driven tight frame

    NASA Astrophysics Data System (ADS)

    Zhang, Wenkun; Zhang, Hanming; Wang, Linyuan; Cai, Ailong; Li, Lei; Yan, Bin

    2018-02-01

    Limited angle computed tomography (CT) reconstruction is widely performed in medical diagnosis and industrial testing because of the size of objects, engine/armor inspection requirements, and limited scan flexibility. Limited angle reconstruction necessitates the use of optimization-based methods that utilize additional sparse priors. However, most conventional methods exploit sparsity priors only in the spatial domain. When the CT projections suffer from serious data deficiency or noise, obtaining reconstructed images of acceptable quality becomes difficult and challenging. To solve this problem, this paper develops an adaptive reconstruction method for the limited angle CT problem. The proposed method simultaneously uses a spatial and Radon domain regularization model based on total variation (TV) and a data-driven tight frame. The data-driven tight frame, derived from a wavelet transform, aims at exploiting sparsity priors of the sinogram in the Radon domain. Unlike existing works that utilize a pre-constructed sparse transform, the framelets of the data-driven regularization model can be adaptively learned from the latest projection data during iterative reconstruction to provide optimal sparse approximations for the given sinogram. At the same time, an effective alternating direction method is designed to solve the simultaneous spatial and Radon domain regularization model. Experiments on both simulated and real data demonstrate that the proposed algorithm shows better performance in artifact suppression and detail preservation than algorithms using only a spatial-domain regularization model. Quantitative evaluations of the results also indicate that the proposed algorithm, which applies a learning strategy, performs better than dual-domain algorithms without a learned regularization model.

  17. Robust dynamic myocardial perfusion CT deconvolution for accurate residue function estimation via adaptive-weighted tensor total variation regularization: a preclinical study.

    PubMed

    Zeng, Dong; Gong, Changfei; Bian, Zhaoying; Huang, Jing; Zhang, Xinyu; Zhang, Hua; Lu, Lijun; Niu, Shanzhou; Zhang, Zhang; Liang, Zhengrong; Feng, Qianjin; Chen, Wufan; Ma, Jianhua

    2016-11-21

    Dynamic myocardial perfusion computed tomography (MPCT) is a promising technique for quick diagnosis and risk stratification of coronary artery disease. However, one major drawback of dynamic MPCT imaging is the heavy radiation dose to patients due to its dynamic image acquisition protocol. In this work, to address this issue, we present a robust dynamic MPCT deconvolution algorithm via adaptive-weighted tensor total variation (AwTTV) regularization for accurate residue function estimation with low-mA s data acquisitions. For simplicity, the presented method is termed 'MPD-AwTTV'. More specifically, the gains of the AwTTV regularization over the original tensor total variation regularization are from the anisotropic edge property of the sequential MPCT images. To minimize the associative objective function we propose an efficient iterative optimization strategy with fast convergence rate in the framework of an iterative shrinkage/thresholding algorithm. We validate and evaluate the presented algorithm using both digital XCAT phantom and preclinical porcine data. The preliminary experimental results have demonstrated that the presented MPD-AwTTV deconvolution algorithm can achieve remarkable gains in noise-induced artifact suppression, edge detail preservation, and accurate flow-scaled residue function and MPHM estimation as compared with the other existing deconvolution algorithms in digital phantom studies, and similar gains can be obtained in the porcine data experiment.

  18. Robust dynamic myocardial perfusion CT deconvolution for accurate residue function estimation via adaptive-weighted tensor total variation regularization: a preclinical study

    NASA Astrophysics Data System (ADS)

    Zeng, Dong; Gong, Changfei; Bian, Zhaoying; Huang, Jing; Zhang, Xinyu; Zhang, Hua; Lu, Lijun; Niu, Shanzhou; Zhang, Zhang; Liang, Zhengrong; Feng, Qianjin; Chen, Wufan; Ma, Jianhua

    2016-11-01

    Dynamic myocardial perfusion computed tomography (MPCT) is a promising technique for quick diagnosis and risk stratification of coronary artery disease. However, one major drawback of dynamic MPCT imaging is the heavy radiation dose to patients due to its dynamic image acquisition protocol. In this work, to address this issue, we present a robust dynamic MPCT deconvolution algorithm via adaptive-weighted tensor total variation (AwTTV) regularization for accurate residue function estimation with low-mA s data acquisitions. For simplicity, the presented method is termed ‘MPD-AwTTV’. More specifically, the gains of the AwTTV regularization over the original tensor total variation regularization are from the anisotropic edge property of the sequential MPCT images. To minimize the associative objective function we propose an efficient iterative optimization strategy with fast convergence rate in the framework of an iterative shrinkage/thresholding algorithm. We validate and evaluate the presented algorithm using both digital XCAT phantom and preclinical porcine data. The preliminary experimental results have demonstrated that the presented MPD-AwTTV deconvolution algorithm can achieve remarkable gains in noise-induced artifact suppression, edge detail preservation, and accurate flow-scaled residue function and MPHM estimation as compared with the other existing deconvolution algorithms in digital phantom studies, and similar gains can be obtained in the porcine data experiment.

  19. NASA's New Discovery Missions

    NASA Image and Video Library

    2017-01-04

    On Jan. 4, 2017 NASA announced the selection of two missions to explore previously unexplored asteroids. The first mission, called Lucy, will study asteroids, known as Trojan asteroids, trapped by Jupiter’s gravity. The Psyche mission will explore a very large and rare object in the solar system’s asteroid belt that’s made of metal, and scientists believe might be the exposed core of a planet that lost its rocky outer layers from a series of violent collisions. Lucy is targeted for launch in 2021 and Psyche in 2023. Both missions have the potential to open new windows on one of the earliest eras in the history of our solar system – a time less than 10 million years after the birth of our sun.

  20. Poisson image reconstruction with Hessian Schatten-norm regularization.

    PubMed

    Lefkimmiatis, Stamatios; Unser, Michael

    2013-11-01

    Poisson inverse problems arise in many modern imaging applications, including biomedical and astronomical ones. The main challenge is to obtain an estimate of the underlying image from a set of measurements degraded by a linear operator and further corrupted by Poisson noise. In this paper, we propose an efficient framework for Poisson image reconstruction, under a regularization approach, which depends on matrix-valued regularization operators. In particular, the employed regularizers involve the Hessian as the regularization operator and Schatten matrix norms as the potential functions. For the solution of the problem, we propose two optimization algorithms that are specifically tailored to the Poisson nature of the noise. These algorithms are based on an augmented-Lagrangian formulation of the problem and correspond to two variants of the alternating direction method of multipliers. Further, we derive a link that relates the proximal map of an ℓp norm with the proximal map of a Schatten matrix norm of order p. This link plays a key role in the development of one of the proposed algorithms. Finally, we provide experimental results on natural and biological images for the task of Poisson image deblurring and demonstrate the practical relevance and effectiveness of the proposed framework.

  1. Low-rank regularization for learning gene expression programs.

    PubMed

    Ye, Guibo; Tang, Mengfan; Cai, Jian-Feng; Nie, Qing; Xie, Xiaohui

    2013-01-01

    Learning gene expression programs directly from a set of observations is challenging due to the complexity of gene regulation, high noise of experimental measurements, and insufficient number of experimental measurements. Imposing additional constraints with strong and biologically motivated regularizations is critical in developing reliable and effective algorithms for inferring gene expression programs. Here we propose a new form of regularization that constrains the number of independent connectivity patterns between regulators and targets, motivated by the modular design of gene regulatory programs and the belief that the total number of independent regulatory modules should be small. We formulate a multi-target linear regression framework to incorporate this type of regularization, in which the number of independent connectivity patterns is expressed as the rank of the connectivity matrix between regulators and targets. We then generalize the linear framework to nonlinear cases, and prove that the generalized low-rank regularization model is still convex. Efficient algorithms are derived to solve both the linear and nonlinear low-rank regularized problems. Finally, we test the algorithms on three gene expression datasets, and show that the low-rank regularization improves the accuracy of gene expression prediction in these three datasets.
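
    A common convex surrogate for the rank constraint described above is the nuclear norm, whose proximal map is singular value thresholding. The sketch below runs a plain proximal-gradient loop for nuclear-norm-regularized multi-target linear regression; it is a generic illustration of low-rank regularization, not the paper's algorithm or its nonlinear extension, and the step size and iteration count are assumptions.

      import numpy as np

      def svt(W, tau):
          # singular value thresholding: proximal map of tau * ||W||_*
          U, s, Vt = np.linalg.svd(W, full_matrices=False)
          return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

      def low_rank_regression(X, Y, lam=1.0, n_iter=200):
          # minimise ||X W - Y||_F^2 + lam * ||W||_* by proximal gradient
          step = 1.0 / (2.0 * np.linalg.norm(X, 2)**2)   # 1 / Lipschitz constant of the gradient
          W = np.zeros((X.shape[1], Y.shape[1]))
          for _ in range(n_iter):
              grad = 2.0 * X.T @ (X @ W - Y)
              W = svt(W - step * grad, step * lam)
          return W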

  2. Significance of pH on the Cytotoxic Potential of the Water Disinfection By-Product Iodoacetic Acid

    EPA Science Inventory

    Significance of pH on the Cytotoxic Potential of the Water Disinfection By-Product Iodoacetic Acid Vicki Richardson1, Susan D. Richardson2, Mary Moyer3, Jane Ellen Simmons1, and Anthony DeAngelo1, 1U.S. Environmental Protection Agency, Research Triangle Park, NC, 2University of...

  3. 75 FR 53266 - United States Army Restricted Area, Designated Portions of Eagle Bay and Eagle River, Fort...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-31

    ... subjected to hazardous levels of noise during certain training exercises; Army control of this area is also... within Fort Richardson. The restricted area is necessary to protect the public against hazardous noise... Flats Weapons Training Range Impact Area, Fort Richardson, Alaska; Restricted Area. (a) The area. The...

  4. 76 FR 48777 - Endangered and Threatened Wildlife and Plants; 12-Month Finding on a Petition To List the Nueces...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-09

    ... Frio River were very similar genetically to specimens collected in the Sabinal River (Richardson and... genetically from specimens collected in the Nueces River (Richardson and Gold 1995, p. 31). The genetic... in the Sabinal and Frio Rivers are genetically separate and distinct from the Cyprinella sp. found in...

  5. Buoyancy effects on the vapor condensation rate on a horizontal liquid surface

    NASA Technical Reports Server (NTRS)

    Hasan, Mohammad M.; Lin, Chin-Shun

    1989-01-01

    The results of a numerical study of the effects of buoyancy on the direct condensation of saturated or nearly saturated vapor on a horizontal liquid surface in a cylindrical tank are presented. The liquid motion beneath the liquid-vapor interface is induced by an axisymmetric laminar jet of subcooled liquid. Analysis and numerical results show that the dominant parameter which determines the influence of buoyancy on the condensation rate is the Richardson number. However, the effect of buoyancy on the condensation rate cannot be quantified in terms of the Richardson number alone. The critical value of the Richardson number below which the condensation rate is not significantly reduced depends on the Reynolds number as well as the Prandtl number.

  6. Analytic continuation of quantum Monte Carlo data by stochastic analytical inference.

    PubMed

    Fuchs, Sebastian; Pruschke, Thomas; Jarrell, Mark

    2010-05-01

    We present an algorithm for the analytic continuation of imaginary-time quantum Monte Carlo data which is strictly based on principles of Bayesian statistical inference. Within this framework we are able to obtain an explicit expression for the calculation of a weighted average over possible energy spectra, which can be evaluated by standard Monte Carlo simulations, yielding as a by-product the distribution function as a function of the regularization parameter. Our algorithm thus avoids the usual ad hoc assumptions introduced in similar algorithms to fix the regularization parameter. We apply the algorithm to imaginary-time quantum Monte Carlo data and compare the resulting energy spectra with those from a standard maximum-entropy calculation.

  7. ADAPTIVE FINITE ELEMENT MODELING TECHNIQUES FOR THE POISSON-BOLTZMANN EQUATION

    PubMed Central

    HOLST, MICHAEL; MCCAMMON, JAMES ANDREW; YU, ZEYUN; ZHOU, YOUNGCHENG; ZHU, YUNRONG

    2011-01-01

    We consider the design of an effective and reliable adaptive finite element method (AFEM) for the nonlinear Poisson-Boltzmann equation (PBE). We first examine the two-term regularization technique for the continuous problem recently proposed by Chen, Holst, and Xu based on the removal of the singular electrostatic potential inside biomolecules; this technique made possible the development of the first complete solution and approximation theory for the Poisson-Boltzmann equation, the first provably convergent discretization, and also allowed for the development of a provably convergent AFEM. However, in practical implementation, this two-term regularization exhibits numerical instability. Therefore, we examine a variation of this regularization technique which can be shown to be less susceptible to such instability. We establish a priori estimates and other basic results for the continuous regularized problem, as well as for Galerkin finite element approximations. We show that the new approach produces regularized continuous and discrete problems with the same mathematical advantages of the original regularization. We then design an AFEM scheme for the new regularized problem, and show that the resulting AFEM scheme is accurate and reliable, by proving a contraction result for the error. This result, which is one of the first results of this type for nonlinear elliptic problems, is based on using continuous and discrete a priori L∞ estimates to establish quasi-orthogonality. To provide a high-quality geometric model as input to the AFEM algorithm, we also describe a class of feature-preserving adaptive mesh generation algorithms designed specifically for constructing meshes of biomolecular structures, based on the intrinsic local structure tensor of the molecular surface. All of the algorithms described in the article are implemented in the Finite Element Toolkit (FETK), developed and maintained at UCSD. The stability advantages of the new regularization scheme are demonstrated with FETK through comparisons with the original regularization approach for a model problem. The convergence and accuracy of the overall AFEM algorithm is also illustrated by numerical approximation of electrostatic solvation energy for an insulin protein. PMID:21949541

  8. On structure-exploiting trust-region regularized nonlinear least squares algorithms for neural-network learning.

    PubMed

    Mizutani, Eiji; Demmel, James W

    2003-01-01

    This paper briefly introduces our numerical linear algebra approaches for solving structured nonlinear least squares problems arising from 'multiple-output' neural-network (NN) models. Our algorithms feature trust-region regularization, and exploit sparsity of either the 'block-angular' residual Jacobian matrix or the 'block-arrow' Gauss-Newton Hessian (or Fisher information matrix in statistical sense) depending on problem scale so as to render a large class of NN-learning algorithms 'efficient' in both memory and operation costs. Using a relatively large real-world nonlinear regression application, we shall explain algorithmic strengths and weaknesses, analyzing simulation results obtained by both direct and iterative trust-region algorithms with two distinct NN models: 'multilayer perceptrons' (MLP) and 'complementary mixtures of MLP-experts' (or neuro-fuzzy modular networks).
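
    The trust-region-regularized Gauss-Newton step that underlies such least-squares learning can be written in a few lines for the dense case: solve (J^T J + lam*I) delta = -J^T r for the parameter update, with lam controlled by the trust-region logic. The sketch below shows only this dense step and none of the block-angular or block-arrow structure exploitation that is the paper's contribution; the names and the outer update rule for lam are assumptions.

      import numpy as np

      def regularized_gauss_newton_step(J, r, lam):
          # solve (J^T J + lam * I) delta = -J^T r  (Levenberg-Marquardt style step)
          n = J.shape[1]
          H = J.T @ J + lam * np.eye(n)
          return np.linalg.solve(H, -(J.T @ r))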

  9. Angular dependence of multiangle dynamic light scattering for particle size distribution inversion using a self-adapting regularization algorithm

    NASA Astrophysics Data System (ADS)

    Li, Lei; Yu, Long; Yang, Kecheng; Li, Wei; Li, Kai; Xia, Min

    2018-04-01

    The multiangle dynamic light scattering (MDLS) technique can better estimate particle size distributions (PSDs) than single-angle dynamic light scattering. However, determining the inversion range, angular weighting coefficients, and scattering angle combination is difficult but fundamental to the reconstruction of both unimodal and multimodal distributions. In this paper, we propose a self-adapting regularization method called the wavelet iterative recursion nonnegative Tikhonov-Phillips-Twomey (WIRNNT-PT) algorithm. This algorithm combines a wavelet multiscale strategy with an appropriate inversion method and can self-adaptively handle several key issues, including the choice of the weighting coefficients, the inversion range, and the optimal inversion method (selected from two regularization algorithms), for estimating the PSD from MDLS measurements. In addition, the angular dependence of MDLS for estimating the PSDs of polymeric latexes is thoroughly analyzed. The dependence of the results on the number and range of measurement angles was analyzed in depth to identify the optimal scattering angle combination. Numerical simulations and experimental results for unimodal and multimodal distributions are presented to demonstrate both the validity of the WIRNNT-PT algorithm and the angular dependence of MDLS, and show that the proposed algorithm with a six-angle analysis in the 30-130° range can be satisfactorily applied to retrieve PSDs from MDLS measurements.

  10. Development of an atmospheric infrared radiation model with high clouds for target detection

    NASA Astrophysics Data System (ADS)

    Bellisario, Christophe; Malherbe, Claire; Schweitzer, Caroline; Stein, Karin

    2016-10-01

    In the field of target detection, the simulation of the camera FOV (field of view) background is a significant issue. The presence of heterogeneous clouds might have a strong impact on a target detection algorithm. In order to address this issue, we present here the construction of the CERAMIC package (Cloudy Environment for RAdiance and MIcrophysics Computation), which combines cloud microphysical computation and 3D radiance computation to produce 3D atmospheric infrared radiance in the presence of clouds. The input of CERAMIC starts with an observer with a spatial position and a defined FOV (by means of a zenith angle and an azimuth angle). We introduce a 3D cloud generator provided by the French LaMP for statistical and simplified physics. The cloud generator is implemented with atmospheric profiles including a heterogeneity factor for 3D fluctuations. CERAMIC also includes a cloud database from the French CNRM for a physical approach. We present here some statistics on the spatial and temporal evolution of the clouds. Molecular optical properties are provided by the model MATISSE (Modélisation Avancée de la Terre pour l'Imagerie et la Simulation des Scènes et de leur Environnement). The 3D radiance is computed with the model LUCI (for LUminance de CIrrus). It takes into account 3D microphysics with a resolution of 5 cm-1 over a SWIR bandwidth. In order to keep the computation time short, most of the radiance contributors are calculated with analytical expressions. The multiple scattering phenomena are more difficult to model; here a discrete ordinate method with correlated-k precision is used to compute the average radiance. We add a 3D fluctuation model (based on a behavioral model) taking into account microphysics variations. Finally, the following quantities are calculated: transmission, thermal radiance, single scattering radiance, radiance observed through the cloud and multiple scattering radiance. Spatial images are produced, with a dimension of 10 km x 10 km and a resolution of 0.1 km, with each contribution to the radiance separated. We present here the first results for a typical scenario. A 1D comparison with the MATISSE model, with each calculated radiance contribution separated, is made in order to validate the outputs. The 3D performance of the code is shown by comparing LUCI to the SHDOM model, a reference code which uses the Spherical Harmonic Discrete Ordinate Method for 3D atmospheric radiative transfer. The results obtained by the different codes show strong agreement, and the sources of the small differences are discussed. A significant gain in computation time is observed for LUCI versus SHDOM. We finally conclude with various scenarios for case analysis.

  11. Non-Cartesian MRI Reconstruction With Automatic Regularization Via Monte-Carlo SURE

    PubMed Central

    Weller, Daniel S.; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.

    2013-01-01

    Magnetic resonance image (MRI) reconstruction from undersampled k-space data requires regularization to reduce noise and aliasing artifacts. Proper application of regularization however requires appropriate selection of associated regularization parameters. In this work, we develop a data-driven regularization parameter adjustment scheme that minimizes an estimate (based on the principle of Stein’s unbiased risk estimate—SURE) of a suitable weighted squared-error measure in k-space. To compute this SURE-type estimate, we propose a Monte-Carlo scheme that extends our previous approach to inverse problems (e.g., MRI reconstruction) involving complex-valued images. Our approach depends only on the output of a given reconstruction algorithm and does not require knowledge of its internal workings, so it is capable of tackling a wide variety of reconstruction algorithms and nonquadratic regularizers including total variation and those based on the ℓ1-norm. Experiments with simulated and real MR data indicate that the proposed approach is capable of providing near mean squared-error (MSE) optimal regularization parameters for single-coil undersampled non-Cartesian MRI reconstruction. PMID:23591478
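    A minimal sketch of the Monte-Carlo SURE idea follows, assuming a known noise level and using a simple real-valued Tikhonov smoothing denoiser as the black-box "reconstruction algorithm" in place of the paper's non-Cartesian MRI reconstruction; the divergence is estimated with a single random probe, and all sizes and parameters are illustrative.

    ```python
    import numpy as np

    # Hedged sketch of Monte-Carlo SURE for regularization parameter selection:
    # treat the reconstruction f_lam(.) as a black box, estimate its divergence with a
    # random probe, and evaluate SURE(lam) over a grid of candidate parameters.

    rng = np.random.default_rng(11)
    n = 256
    sigma = 0.3                                  # noise standard deviation (assumed known)
    x_true = np.sin(np.linspace(0, 4 * np.pi, n))
    y = x_true + sigma * rng.standard_normal(n)

    D = (np.eye(n, k=1) - np.eye(n))[:-1]        # first-difference roughness operator

    def reconstruct(y, lam):
        """Black-box reconstruction: quadratic smoothing (Tikhonov) denoiser."""
        return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

    def mc_sure(y, lam, eps=1e-3):
        """Monte-Carlo SURE estimate of the MSE risk for one lambda."""
        fy = reconstruct(y, lam)
        b = rng.standard_normal(n)               # random probe for the divergence
        div = b @ (reconstruct(y + eps * b, lam) - fy) / eps
        return np.sum((fy - y) ** 2) - n * sigma**2 + 2 * sigma**2 * div

    lams = np.logspace(-1, 3, 30)
    sure_vals = [mc_sure(y, lam) for lam in lams]
    true_mse = [np.sum((reconstruct(y, lam) - x_true) ** 2) for lam in lams]
    print("lambda minimizing SURE    :", lams[int(np.argmin(sure_vals))])
    print("lambda minimizing true MSE:", lams[int(np.argmin(true_mse))])
    ```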

  12. Penalized weighted least-squares approach for multienergy computed tomography image reconstruction via structure tensor total variation regularization.

    PubMed

    Zeng, Dong; Gao, Yuanyuan; Huang, Jing; Bian, Zhaoying; Zhang, Hua; Lu, Lijun; Ma, Jianhua

    2016-10-01

    Multienergy computed tomography (MECT) allows identifying and differentiating different materials through simultaneous capture of multiple sets of energy-selective data belonging to specific energy windows. However, because sufficient photon counts are not available in each energy window compared with the whole energy window, MECT images reconstructed by analytical approaches often suffer from a poor signal-to-noise ratio and strong streak artifacts. To address this challenge, this work presents a penalized weighted least-squares (PWLS) scheme incorporating the new concept of structure tensor total variation (STV) regularization, henceforth referred to as 'PWLS-STV' for simplicity. Specifically, the STV regularization is derived by penalizing higher-order derivatives of the desired MECT images. It thus provides more robust measures of image variation, which can eliminate the patchy artifacts often observed with total variation (TV) regularization. An alternating optimization algorithm was adopted to minimize the objective function. Extensive experiments with a digital XCAT phantom and a meat specimen clearly demonstrate that the present PWLS-STV algorithm achieves greater gains than existing TV-based algorithms and the conventional filtered backprojection (FBP) algorithm in terms of both quantitative and visual quality evaluations. Copyright © 2016 Elsevier Ltd. All rights reserved.
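    The following is a minimal sketch of the PWLS structure only, with a simple quadratic first-difference roughness penalty standing in for the STV term; the toy system matrix, noise model, and penalty weight are assumptions, not the paper's.

    ```python
    import numpy as np

    # Minimal sketch of the penalized weighted least-squares (PWLS) structure:
    #   x_hat = argmin_x (y - A x)^T W (y - A x) + beta * x^T R x,
    # with a simple first-difference roughness penalty R = D^T D standing in for the
    # structure tensor total variation (STV) term used in the paper.

    rng = np.random.default_rng(1)
    n, m = 64, 48
    A = rng.standard_normal((m, n)) / np.sqrt(m)     # toy system matrix
    x_true = np.cumsum(rng.standard_normal(n)) * 0.1 # smooth-ish ground truth
    noise_var = 0.01 + 0.02 * rng.random(m)          # heteroscedastic noise (assumed known)
    y = A @ x_true + np.sqrt(noise_var) * rng.standard_normal(m)

    W = np.diag(1.0 / noise_var)                     # statistical weights = inverse variances
    D = np.eye(n) - np.eye(n, k=1)                   # first-difference operator
    D = D[:-1]                                       # keep the n-1 proper difference rows
    R = D.T @ D

    beta = 5.0
    x_hat = np.linalg.solve(A.T @ W @ A + beta * R, A.T @ W @ y)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
    ```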

  13. Iterative image reconstruction for multienergy computed tomography via structure tensor total variation regularization

    NASA Astrophysics Data System (ADS)

    Zeng, Dong; Bian, Zhaoying; Gong, Changfei; Huang, Jing; He, Ji; Zhang, Hua; Lu, Lijun; Feng, Qianjin; Liang, Zhengrong; Ma, Jianhua

    2016-03-01

    Multienergy computed tomography (MECT) has the potential to simultaneously offer multiple sets of energy-selective data belonging to specific energy windows. However, because sufficient photon counts are not available in the specific energy windows compared with the whole energy window, MECT images reconstructed by analytical approaches often suffer from a poor signal-to-noise ratio (SNR) and strong streak artifacts. To eliminate this drawback, in this work we present a penalized weighted least-squares (PWLS) scheme incorporating the new concept of structure tensor total variation (STV) regularization to improve MECT image quality from low-milliampere-seconds (low-mAs) data acquisitions. Henceforth the present scheme is referred to as 'PWLS-STV' for simplicity. Specifically, the STV regularization is derived by penalizing the eigenvalues of the structure tensor of every point in the MECT images. It thus provides more robust measures of image variation, which can eliminate the patchy artifacts often observed with total variation regularization. An alternating optimization algorithm was adopted to minimize the objective function. Experiments with a digital XCAT phantom clearly demonstrate that the present PWLS-STV algorithm achieves greater gains than existing TV-based algorithms and the conventional filtered backprojection (FBP) algorithm in terms of noise-induced artifact suppression, resolution preservation, and material decomposition assessment.

  14. Ground-based characterization of Eurybates and Orus, two fly-by targets of the Lucy Discovery mission

    NASA Astrophysics Data System (ADS)

    Mottola, Stefano; Marchi, Simone; Buie, Marc W.; Hellmich, Stephan; Di Martino, Mario; Proffe, Gerrit; Levison, Harold F.; Zangari, Amanda Marie

    2016-10-01

    Lucy is a proposed NASA Discovery mission designed to perform close fly-bys of six Jupiter Trojan asteroids. The mission, currently in Phase A development, is planned to launch in 2021 and arrive at the Trojan L4 cloud in 2027. We report on ground-based light curve observations of two of Lucy's fly-by target candidates: (3548) Eurybates and (21900) Orus. The goal is to characterize their shape, spin state and photometric properties, both to aid in the planning of the mission and to complement the space-borne data. Each object has been observed over five apparitions in a wide range of geocentric ecliptic longitudes. Shape and spin state modeling was performed using the convex shape inversion method (Kaasalainen, Mottola & Fulchignoni, 2002). Eurybates is a retrograde rotator with a sidereal rotation period Psid=8.702724±0.000009 h. It has a moderately elongated shape with equivalent axial ratios a/b=1.08, b/c=1.16. No obvious signs of global non-convexities and/or albedo variegation are detected in its light curves. Orus is also a retrograde rotator, with a period Psid=13.48617±0.00007 h. Its approximate axial ratios are a/b=1.14, b/c=1.12. The presence of a large, planar facet in the proximity of the model's North Pole suggests a large polar crater.

  15. From Lucy to Kadanuumuu: balanced analyses of Australopithecus afarensis assemblages confirm only moderate skeletal dimorphism

    PubMed Central

    Lovejoy, C. Owen

    2015-01-01

    Sexual dimorphism in body size is often used as a correlate of social and reproductive behavior in Australopithecus afarensis. In addition to a number of isolated specimens, the sample for this species includes two small associated skeletons (A.L. 288-1 or “Lucy” and A.L. 128/129) and a geologically contemporaneous death assemblage of several larger individuals (A.L. 333). These have driven both perceptions and quantitative analyses concluding that Au. afarensis was markedly dimorphic. The Template Method enables simultaneous evaluation of multiple skeletal sites, thereby greatly expanding sample size, and reveals that Au. afarensis dimorphism was similar to that of modern humans. A new very large partial skeleton (KSD-VP-1/1 or “Kadanuumuu”) can now also be used, like Lucy, as a template specimen. In addition, the recently developed Geometric Mean Method has been used to argue that Au. afarensis was equally or even more dimorphic than gorillas. However, in its previous application Lucy and A.L. 128/129 accounted for 10 of 11 estimates of female size. Here we directly compare the two methods and demonstrate that including multiple measurements from the same partial skeleton that falls at the margin of the species size range dramatically inflates dimorphism estimates. Preventing a single specimen from dominating the calculation of multiple dimorphism estimates confirms that Au. afarensis was only moderately dimorphic. PMID:25945314

  16. Words on Women

    ERIC Educational Resources Information Center

    Civil Rights Digest, 1974

    1974-01-01

    Incisive quotes from Lucy Stone, Florynce Kennedy, John Kenneth Galbraith, Elizabeth Janeway, Eldridge Cleaver, Marge Piercy and others on the need and right of women to be liberated from sex discrimination. (SF)

  17. Reflections on Elwyn Richardson Commemoration

    ERIC Educational Resources Information Center

    Devine, Nesta

    2016-01-01

    This article was written as the final presentation to be delivered at our day of reflection on the educational work of Elwyn Richardson. As such, the tone is somewhat different to that which is usual for this journal, but I elect to leave it substantially the same as it was when delivered. I address first the question of what we do when we mourn…

  18. Richardson Instructional Management System (RIMS). How to Blend a Computerized Objectives-Referenced Testing System, Distributive Data Processing, and Systemwide Evaluation.

    ERIC Educational Resources Information Center

    Riegel, N. Blyth

    Recent changes in the structure of curriculum and the instructional system in Texas have required a major reorganization of teaching, evaluating, budgeting, and planning activities in the local education agencies, which has created the need for a database. The history of Richardson Instructional Management System (RIMS), its data processing…

  19. Press Conference with Elliot L. Richardson, Secretary of HEW.

    ERIC Educational Resources Information Center

    Department of Health, Education, and Welfare, Washington, DC.

    Two documents were released to the press on January 18, 1973, by Secretary Richardson, one summarizing his term of office as Secretary of Health, Education, and Welfare, and one reporting on HEW potential for the seventies (SO 005 666, SO 005 699). In an introductory statement prior to the press conference, the question of whether or not we as a…

  20. An experimental comparison of various methods of nearfield acoustic holography

    DOE PAGES

    Chelliah, Kanthasamy; Raman, Ganesh; Muehleisen, Ralph T.

    2017-05-19

    An experimental comparison of four different methods of nearfield acoustic holography (NAH) is presented in this study for planar acoustic sources. The four NAH methods considered are based on: (1) the spatial Fourier transform, (2) the equivalent sources model, (3) boundary element methods and (4) statistically optimized NAH. Two-dimensional measurements were obtained at different distances in front of a tonal sound source, and the NAH methods were used to reconstruct the sound field at the source surface. The reconstructed particle velocity and acoustic pressure fields show that the equivalent sources model based algorithm, together with Tikhonov regularization, provided the best localization of the sources. Reconstruction errors were found to be smaller for the equivalent sources model based algorithm and the statistically optimized NAH algorithm. The effect of hologram distance on the performance of the various algorithms is discussed in detail, and the computational time required by each algorithm is compared. Four different regularization parameter choice methods were also compared. The L-curve method provided more accurate reconstructions than generalized cross validation and the Morozov discrepancy principle. Finally, the performance of fixed-parameter regularization was comparable to that of the L-curve method.
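    A hedged sketch of Tikhonov regularization with L-curve parameter selection on a generic ill-conditioned linear system (not the authors' acoustic transfer matrices) is given below; the corner is located with a simple maximum-curvature heuristic on the log-log residual-norm versus solution-norm curve.

    ```python
    import numpy as np

    # Hedged sketch: Tikhonov regularization of an ill-conditioned system G q = p
    # (standing in for an equivalent-source NAH transfer matrix), with the parameter
    # chosen at the corner of the L-curve. The matrix below is a generic toy problem.

    rng = np.random.default_rng(2)
    n = 40
    U, _ = np.linalg.qr(rng.standard_normal((n, n)))
    V, _ = np.linalg.qr(rng.standard_normal((n, n)))
    s = np.logspace(0, -6, n)                 # rapidly decaying singular values
    G = U @ np.diag(s) @ V.T
    q_true = rng.standard_normal(n)
    p = G @ q_true + 1e-4 * rng.standard_normal(n)

    Ub, sb, Vbt = np.linalg.svd(G)
    utp = Ub.T @ p

    lams = np.logspace(-8, 0, 200)
    res, sol = [], []
    for lam in lams:
        filt = sb / (sb**2 + lam**2)          # Tikhonov filter factors
        x = Vbt.T @ (filt * utp)
        res.append(np.log(np.linalg.norm(G @ x - p)))
        sol.append(np.log(np.linalg.norm(x)))
    res, sol = np.array(res), np.array(sol)

    # Curvature of the L-curve (log residual norm vs log solution norm).
    d1r, d1s = np.gradient(res), np.gradient(sol)
    d2r, d2s = np.gradient(d1r), np.gradient(d1s)
    kappa = (d1r * d2s - d1s * d2r) / (d1r**2 + d1s**2) ** 1.5
    lam_best = lams[int(np.argmax(np.abs(kappa)))]
    print("L-curve corner at lambda ~", lam_best)
    ```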

  1. An experimental comparison of various methods of nearfield acoustic holography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chelliah, Kanthasamy; Raman, Ganesh; Muehleisen, Ralph T.

    An experimental comparison of four different methods of nearfield acoustic holography (NAH) is presented in this study for planar acoustic sources. The four NAH methods considered are based on: (1) the spatial Fourier transform, (2) the equivalent sources model, (3) boundary element methods and (4) statistically optimized NAH. Two-dimensional measurements were obtained at different distances in front of a tonal sound source, and the NAH methods were used to reconstruct the sound field at the source surface. The reconstructed particle velocity and acoustic pressure fields show that the equivalent sources model based algorithm, together with Tikhonov regularization, provided the best localization of the sources. Reconstruction errors were found to be smaller for the equivalent sources model based algorithm and the statistically optimized NAH algorithm. The effect of hologram distance on the performance of the various algorithms is discussed in detail, and the computational time required by each algorithm is compared. Four different regularization parameter choice methods were also compared. The L-curve method provided more accurate reconstructions than generalized cross validation and the Morozov discrepancy principle. Finally, the performance of fixed-parameter regularization was comparable to that of the L-curve method.

  2. Semisupervised Support Vector Machines With Tangent Space Intrinsic Manifold Regularization.

    PubMed

    Sun, Shiliang; Xie, Xijiong

    2016-09-01

    Semisupervised learning has been an active research topic in machine learning and data mining. One main reason is that labeling examples is expensive and time-consuming, while large numbers of unlabeled examples are available in many practical problems. So far, Laplacian regularization has been widely used in semisupervised learning. In this paper, we propose a new regularization method called tangent space intrinsic manifold regularization. It is intrinsic to the data manifold and favors linear functions on the manifold. The fundamental elements involved in the formulation of the regularization are local tangent space representations, which are estimated by local principal component analysis, and the connections that relate adjacent tangent spaces. We also explore its application to semisupervised classification and propose two new learning algorithms, tangent space intrinsic manifold regularized support vector machines (TiSVMs) and tangent space intrinsic manifold regularized twin SVMs (TiTSVMs), which effectively integrate the tangent space intrinsic manifold regularization term. The optimization of TiSVMs can be solved by a standard quadratic program, while the optimization of TiTSVMs can be solved by a pair of standard quadratic programs. Experimental results on semisupervised classification problems show the effectiveness of the proposed semisupervised learning algorithms.

  3. Thermal depth profiling of vascular lesions: automated regularization of reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Verkruysse, Wim; Choi, Bernard; Zhang, Jenny R.; Kim, Jeehyun; Nelson, J. Stuart

    2008-03-01

    Pulsed photo-thermal radiometry (PPTR) is a non-invasive, non-contact diagnostic technique used to locate cutaneous chromophores such as melanin (epidermis) and hemoglobin (vascular structures). Clinical utility of PPTR is limited because it typically requires trained user intervention to regularize the inversion solution. Herein, the feasibility of automated regularization was studied. A second objective of this study was to depart from modeling port wine stain (PWS), a vascular skin lesion frequently studied with PPTR, as a strictly layered structure, since this may influence conclusions regarding PPTR reconstruction quality. Average blood vessel depths, diameters and densities derived from the histology of 30 PWS patients were used to generate 15 randomized lesion geometries for which we simulated PPTR signals. Reconstruction accuracy for subjective regularization was compared with that for automated regularization methods. The automated regularization approach performed better. However, the average difference was much smaller than the variation between the 15 simulated profiles. Reconstruction quality depended more on the actual profile to be reconstructed than on the reconstruction algorithm or regularization method. Similar or better reconstruction accuracy can be achieved with an automated regularization procedure, which enhances prospects for a user-friendly implementation of PPTR to optimize laser therapy on an individual patient basis.

  4. A New Challenge for Compression Algorithms: Genetic Sequences.

    ERIC Educational Resources Information Center

    Grumbach, Stephane; Tahi, Fariza

    1994-01-01

    Analyzes the properties of genetic sequences that cause the failure of classical algorithms used for data compression. A lossless algorithm, which compresses the information contained in DNA and RNA sequences by detecting regularities such as palindromes, is presented. This algorithm combines substitutional and statistical methods and appears to…

  5. Hierarchical image segmentation via recursive superpixel with adaptive regularity

    NASA Astrophysics Data System (ADS)

    Nakamura, Kensuke; Hong, Byung-Woo

    2017-11-01

    A fast and accurate hierarchical segmentation algorithm based on a recursive superpixel technique is presented. We propose a superpixel energy formulation in which the trade-off between data fidelity and regularization is dynamically determined from the local residual during the energy optimization procedure. We also present an energy optimization algorithm that allows a pixel to be shared by multiple regions, improving accuracy and yielding an appropriate number of segments. Qualitative and quantitative evaluations demonstrate that our algorithm, combining the proposed energy and optimization, outperforms the conventional k-means algorithm by up to 29.10% in F-measure. We also perform a comparative analysis with state-of-the-art hierarchical segmentation algorithms. Our algorithm yields smooth regions throughout the hierarchy, as opposed to the others, which include insignificant details, and it offers a better balance between accuracy and computational time. Specifically, our method runs 36.48% faster than the region-merging approach, the fastest of the compared algorithms, while achieving comparable accuracy.

  6. A Locally Adaptive Regularization Based on Anisotropic Diffusion for Deformable Image Registration of Sliding Organs

    PubMed Central

    Pace, Danielle F.; Aylward, Stephen R.; Niethammer, Marc

    2014-01-01

    We propose a deformable image registration algorithm that uses anisotropic smoothing for regularization to find correspondences between images of sliding organs. In particular, we apply the method for respiratory motion estimation in longitudinal thoracic and abdominal computed tomography scans. The algorithm uses locally adaptive diffusion tensors to determine the direction and magnitude with which to smooth the components of the displacement field that are normal and tangential to an expected sliding boundary. Validation was performed using synthetic, phantom, and 14 clinical datasets, including the publicly available DIR-Lab dataset. We show that motion discontinuities caused by sliding can be effectively recovered, unlike conventional regularizations that enforce globally smooth motion. In the clinical datasets, target registration error showed improved accuracy for lung landmarks compared to the diffusive regularization. We also present a generalization of our algorithm to other sliding geometries, including sliding tubes (e.g., needles sliding through tissue, or contrast agent flowing through a vessel). Potential clinical applications of this method include longitudinal change detection and radiotherapy for lung or abdominal tumours, especially those near the chest or abdominal wall. PMID:23899632

  7. A locally adaptive regularization based on anisotropic diffusion for deformable image registration of sliding organs.

    PubMed

    Pace, Danielle F; Aylward, Stephen R; Niethammer, Marc

    2013-11-01

    We propose a deformable image registration algorithm that uses anisotropic smoothing for regularization to find correspondences between images of sliding organs. In particular, we apply the method for respiratory motion estimation in longitudinal thoracic and abdominal computed tomography scans. The algorithm uses locally adaptive diffusion tensors to determine the direction and magnitude with which to smooth the components of the displacement field that are normal and tangential to an expected sliding boundary. Validation was performed using synthetic, phantom, and 14 clinical datasets, including the publicly available DIR-Lab dataset. We show that motion discontinuities caused by sliding can be effectively recovered, unlike conventional regularizations that enforce globally smooth motion. In the clinical datasets, target registration error showed improved accuracy for lung landmarks compared to the diffusive regularization. We also present a generalization of our algorithm to other sliding geometries, including sliding tubes (e.g., needles sliding through tissue, or contrast agent flowing through a vessel). Potential clinical applications of this method include longitudinal change detection and radiotherapy for lung or abdominal tumours, especially those near the chest or abdominal wall.

  8. An interior-point method for total variation regularized positron emission tomography image reconstruction

    NASA Astrophysics Data System (ADS)

    Bai, Bing

    2012-03-01

    There has recently been a great deal of work on total variation (TV) regularized tomographic image reconstruction. Much of it uses gradient-based optimization algorithms with a differentiable approximation of the TV functional. In this paper we apply TV regularization to positron emission tomography (PET) image reconstruction. We reconstruct the PET image in a Bayesian framework, using a Poisson noise model and a TV prior functional. The original optimization problem is transformed into an equivalent problem with inequality constraints by adding auxiliary variables. We then use an interior point method with logarithmic barrier functions to solve the constrained optimization problem. In this method, a series of points approaching the solution from inside the feasible region is found by solving a sequence of subproblems characterized by an increasing positive parameter. We use a preconditioned conjugate gradient (PCG) algorithm to solve the subproblems directly. The nonnegativity constraint is enforced by a bent line search. The exact expression of the TV functional is used in our calculations. Simulation results show that the algorithm converges quickly and that the convergence is insensitive to the values of the regularization and reconstruction parameters.
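    As a toy stand-in for the approach (not the Poisson/PET formulation itself), the sketch below applies a logarithmic-barrier interior-point loop to a nonnegativity-constrained, smoothed-TV-regularized least-squares problem, with crude gradient-descent inner iterations in place of PCG; all sizes and parameters are assumptions.

    ```python
    import numpy as np

    # Toy stand-in (not the paper's Poisson/PET formulation): interior-point treatment of
    #   min_{x >= 0}  ||A x - b||^2 + beta * TV_eps(x),
    # where TV_eps is a smoothed total variation. The nonnegativity constraint is handled
    # with a logarithmic barrier whose weight mu is driven to zero over outer iterations;
    # each subproblem is solved crudely by damped gradient descent.

    rng = np.random.default_rng(3)
    n, m = 50, 40
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n); x_true[15:30] = 1.0          # piecewise-constant ground truth
    b = A @ x_true + 0.02 * rng.standard_normal(m)

    beta, eps = 0.05, 1e-2
    D = (np.eye(n, k=1) - np.eye(n))[:-1]              # forward differences

    def grad_objective(x, mu):
        dx = D @ x
        g_tv = D.T @ (dx / np.sqrt(dx**2 + eps**2))    # gradient of the smoothed TV term
        return 2 * A.T @ (A @ x - b) + beta * g_tv - mu / x

    x = np.full(n, 0.5)                                # strictly feasible starting point
    mu = 1.0
    for outer in range(12):                            # decreasing barrier parameter
        for _ in range(300):                           # inner gradient descent
            g = grad_objective(x, mu)
            step = 1e-2
            # fraction-to-boundary rule: never step onto (or past) the boundary x = 0
            dec = g > 0
            if np.any(dec):
                step = min(step, 0.5 * np.min(x[dec] / g[dec]))
            x = x - step * g
        mu *= 0.3

    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
    ```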

  9. The convergence analysis of SpikeProp algorithm with smoothing L1∕2 regularization.

    PubMed

    Zhao, Junhong; Zurada, Jacek M; Yang, Jie; Wu, Wei

    2018-07-01

    Unlike first- and second-generation artificial neural networks, spiking neural networks (SNNs) model the human brain by incorporating not only synaptic state but also a temporal component into their operating model. However, their intrinsic properties require expensive computation during training. This paper presents a novel variant of the SpikeProp algorithm for SNNs that introduces a smoothing L1/2 regularization term into the error function. The algorithm makes the network structure sparse, producing small weights that can eventually be removed. The convergence of the algorithm is proved under reasonable conditions. The proposed algorithms have been tested for convergence speed, convergence rate and generalization on the classical XOR problem, the Iris problem and Wisconsin Breast Cancer classification. Copyright © 2018 Elsevier Ltd. All rights reserved.

  10. Locating the Discontinuities of a Bounded Function by the Partial Sums of its Fourier Series I: Periodical Case

    NASA Technical Reports Server (NTRS)

    Kvernadze, George; Hagstrom, Thomas; Shapiro, Henry

    1997-01-01

    A key step for some methods dealing with the reconstruction of a function with jump discontinuities is the accurate approximation of the jumps and their locations. Various methods have been suggested in the literature to obtain this valuable information. In the present paper, we develop an algorithm based on identities which determine the jumps of a 2π-periodic, bounded, not-too-highly oscillating function from the partial sums of its differentiated Fourier series. The algorithm enables one to approximate the locations of discontinuities and the magnitudes of jumps of a bounded function. We study the accuracy of approximation and establish asymptotic expansions for the approximations of a 2π-periodic piecewise smooth function with one discontinuity. By an appropriate linear combination, obtained via derivatives of different order, we significantly improve the accuracy. Next, we use Richardson's extrapolation method to enhance the accuracy even more. For a function with multiple discontinuities we establish simple formulae which "eliminate" all discontinuities of the function but one. Then we treat the function as if it had one singularity, following the method described above.
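    A generic Richardson-extrapolation sketch follows, showing how combining a central-difference derivative estimate at two step sizes cancels the leading error term; it illustrates the extrapolation idea only, not the paper's jump-detection identities.

    ```python
    import numpy as np

    # Generic Richardson extrapolation: combine a central-difference estimate at step h
    # and at h/2 to cancel the leading O(h^2) error term, giving an O(h^4) estimate.
    #   D(h) = f'(x) + c2*h^2 + c4*h^4 + ...
    #   (4*D(h/2) - D(h)) / 3 = f'(x) + O(h^4)

    def central_diff(f, x, h):
        return (f(x + h) - f(x - h)) / (2 * h)

    def richardson_derivative(f, x, h):
        d_h = central_diff(f, x, h)
        d_h2 = central_diff(f, x, h / 2)
        return (4 * d_h2 - d_h) / 3

    f, x, h = np.sin, 1.0, 0.1
    exact = np.cos(x)
    print("central difference error:", abs(central_diff(f, x, h) - exact))
    print("Richardson error        :", abs(richardson_derivative(f, x, h) - exact))
    ```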

  11. Obtaining sparse distributions in 2D inverse problems.

    PubMed

    Reci, A; Sederman, A J; Gladden, L F

    2017-08-01

    The mathematics of inverse problems has relevance across numerous estimation problems in science and engineering. L1 regularization has attracted recent attention in reconstructing system properties in the case of sparse inverse problems; i.e., when the true property sought is not adequately described by a continuous distribution, in particular in Compressed Sensing image reconstruction. In this work, we focus on the application of L1 regularization to a class of inverse problems: relaxation-relaxation (T1-T2) and diffusion-relaxation (D-T2) correlation experiments in NMR, which have found widespread application in a number of areas including probing surface interactions in catalysis and characterizing fluid composition and pore structures in rocks. We introduce a robust algorithm for solving the L1 regularization problem and provide a guide to implementing it, including the choice of the amount of regularization used and the assignment of error estimates. We then show experimentally that L1 regularization has significant advantages over both the Non-Negative Least Squares (NNLS) algorithm and Tikhonov regularization. It is shown that the L1 regularization algorithm stably recovers a distribution at a signal-to-noise ratio < 20 and that it resolves relaxation time constants and diffusion coefficients differing by as little as 10%. The enhanced resolving capability is used to measure the inter- and intra-particle concentrations of a mixture of hexane and dodecane present within porous silica beads immersed within a bulk liquid phase; neither NNLS nor Tikhonov regularization is able to provide this resolution. This experimental study shows that the approach enables discrimination between different chemical species when direct spectroscopic discrimination is impossible, and hence measurement of chemical composition within porous media, such as catalysts or rocks, is possible while still being stable to high levels of noise. Copyright © 2017. Published by Elsevier Inc.
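    The sketch below is a minimal ISTA solver for an L1-regularized multi-exponential inversion of the T2-relaxation type; it is not the authors' algorithm, and the grids, noise level and regularization weight are illustrative assumptions.

    ```python
    import numpy as np

    # Minimal ISTA sketch (not the authors' solver) for the L1-regularized inversion
    #   min_x 0.5 * ||K x - s||^2 + lam * ||x||_1,
    # with a multi-exponential decay kernel of the kind used in T2 relaxation inversions.

    rng = np.random.default_rng(4)
    T2 = np.logspace(-3, 0, 100)              # candidate relaxation times (s)
    t = np.linspace(1e-4, 1.0, 200)           # echo times (s)
    K = np.exp(-np.outer(t, 1.0 / T2))

    x_true = np.zeros(100); x_true[[30, 60]] = [1.0, 0.7]   # two discrete T2 components
    s = K @ x_true + 0.01 * rng.standard_normal(len(t))

    def ista(K, s, lam, iters=3000):
        L = np.linalg.norm(K, 2) ** 2          # Lipschitz constant of the smooth part
        x = np.zeros(K.shape[1])
        for _ in range(iters):
            z = x - (K.T @ (K @ x - s)) / L    # gradient step on the data-fidelity term
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft thresholding
        return x

    x_hat = ista(K, s, lam=0.1)
    mask = np.abs(x_hat) > 0.1 * np.abs(x_hat).max()
    print("significant components near T2 =", T2[mask])
    ```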

  12. Obtaining sparse distributions in 2D inverse problems

    NASA Astrophysics Data System (ADS)

    Reci, A.; Sederman, A. J.; Gladden, L. F.

    2017-08-01

    The mathematics of inverse problems has relevance across numerous estimation problems in science and engineering. L1 regularization has attracted recent attention in reconstructing the system properties in the case of sparse inverse problems; i.e., when the true property sought is not adequately described by a continuous distribution, in particular in Compressed Sensing image reconstruction. In this work, we focus on the application of L1 regularization to a class of inverse problems; relaxation-relaxation, T1-T2, and diffusion-relaxation, D-T2, correlation experiments in NMR, which have found widespread applications in a number of areas including probing surface interactions in catalysis and characterizing fluid composition and pore structures in rocks. We introduce a robust algorithm for solving the L1 regularization problem and provide a guide to implementing it, including the choice of the amount of regularization used and the assignment of error estimates. We then show experimentally that L1 regularization has significant advantages over both the Non-Negative Least Squares (NNLS) algorithm and Tikhonov regularization. It is shown that the L1 regularization algorithm stably recovers a distribution at a signal to noise ratio < 20 and that it resolves relaxation time constants and diffusion coefficients differing by as little as 10%. The enhanced resolving capability is used to measure the inter and intra particle concentrations of a mixture of hexane and dodecane present within porous silica beads immersed within a bulk liquid phase; neither NNLS nor Tikhonov regularization are able to provide this resolution. This experimental study shows that the approach enables discrimination between different chemical species when direct spectroscopic discrimination is impossible, and hence measurement of chemical composition within porous media, such as catalysts or rocks, is possible while still being stable to high levels of noise.

  13. Least square regularized regression in sum space.

    PubMed

    Xu, Yong-Li; Chen, Di-Rong; Li, Han-Xiong; Liu, Lu

    2013-04-01

    This paper proposes a least squares regularized regression algorithm in a sum space of reproducing kernel Hilbert spaces (RKHSs) for nonflat function approximation, and obtains the solution of the algorithm by solving a system of linear equations. The algorithm can approximate the low- and high-frequency components of the target function with large- and small-scale kernels, respectively. The convergence and learning rate are analyzed. We measure the complexity of the sum space by its covering number and demonstrate that the covering number can be bounded by the product of the covering numbers of the basic RKHSs. For a sum space of RKHSs with Gaussian kernels, by choosing appropriate parameters we trade off the sample error and regularization error and obtain a polynomial learning rate, which is better than that in any single RKHS. The utility of this method is illustrated with two simulated data sets and five real-life databases.
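    A minimal sketch of sum-space regularized regression with two Gaussian kernels follows. Under the representer theorem, the two-RKHS problem reduces algebraically to kernel ridge regression with the combined kernel K1/lam1 + K2/lam2; the kernel widths, regularization weights and target function below are assumptions.

    ```python
    import numpy as np

    # Minimal sketch of regularized least-squares regression in a sum of two Gaussian RKHSs.
    # Minimizing  sum_i (y_i - f1(x_i) - f2(x_i))^2 + lam1*||f1||^2 + lam2*||f2||^2
    # reduces (via the representer theorem) to kernel ridge regression with the combined
    # kernel K1/lam1 + K2/lam2 and unit ridge parameter.

    rng = np.random.default_rng(5)

    def gauss_kernel(X, Y, sigma):
        d2 = (X[:, None] - Y[None, :]) ** 2
        return np.exp(-d2 / (2 * sigma**2))

    x = np.sort(rng.uniform(0, 1, 80))
    y = np.sin(2 * np.pi * x) + 0.3 * np.sin(20 * np.pi * x) + 0.05 * rng.standard_normal(80)

    sigma_big, sigma_small = 0.2, 0.02        # large scale for low frequency, small for high
    lam1, lam2 = 1e-2, 1e-1

    K1 = gauss_kernel(x, x, sigma_big)
    K2 = gauss_kernel(x, x, sigma_small)
    Kc = K1 / lam1 + K2 / lam2
    r = np.linalg.solve(np.eye(len(x)) + Kc, y)   # combined-kernel ridge coefficients

    xt = np.linspace(0, 1, 400)
    f_hat = (gauss_kernel(xt, x, sigma_big) / lam1
             + gauss_kernel(xt, x, sigma_small) / lam2) @ r
    print("training RMSE:", np.sqrt(np.mean((Kc @ r - y) ** 2)))
    ```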

  14. Primal-dual convex optimization in large deformation diffeomorphic metric mapping: LDDMM meets robust regularizers

    NASA Astrophysics Data System (ADS)

    Hernandez, Monica

    2017-12-01

    This paper proposes a method for primal-dual convex optimization in variational large deformation diffeomorphic metric mapping problems formulated with robust regularizers and robust image similarity metrics. The method is based on the Chambolle-Pock primal-dual algorithm for solving general convex optimization problems. Diagonal preconditioning is used to ensure convergence of the algorithm to the global minimum. We consider three robust regularizers likely to provide acceptable results in diffeomorphic registration: Huber, V-Huber and total generalized variation. The Huber norm is used in the image similarity term. The primal-dual equations are derived for the stationary and non-stationary parameterizations of diffeomorphisms. The resulting algorithms have been implemented to run on the GPU using CUDA. For the most memory-consuming methods, we have developed a multi-GPU implementation. The GPU implementations allowed us to perform an exhaustive evaluation study on the NIREP and LPBA40 databases. The experiments showed that, for all the considered regularizers, the proposed method converges to diffeomorphic solutions while better preserving discontinuities at the boundaries of objects compared with baseline diffeomorphic registration methods. In most cases, the evaluation showed competitive performance for the robust regularizers, close to that of the baseline diffeomorphic registration methods.
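    To illustrate the primal-dual template the paper builds on (not the registration problem itself), the sketch below runs the Chambolle-Pock iteration on a 1D ROF/TV-denoising problem with the standard step-size condition sigma*tau*||D||^2 <= 1; the signal and parameters are illustrative.

    ```python
    import numpy as np

    # Minimal Chambolle-Pock sketch on a 1D ROF / TV-denoising problem,
    #   min_x 0.5 * ||x - f||^2 + lam * ||D x||_1,
    # shown only to illustrate the primal-dual template the paper builds on.

    rng = np.random.default_rng(6)
    n = 200
    x_true = np.zeros(n); x_true[60:120] = 1.0; x_true[150:] = -0.5
    f = x_true + 0.1 * rng.standard_normal(n)

    lam = 0.3
    D = (np.eye(n, k=1) - np.eye(n))[:-1]     # forward-difference operator, ||D||^2 <= 4
    sigma = tau = 0.25                        # step sizes with sigma * tau * ||D||^2 <= 1

    x = f.copy()
    x_bar = x.copy()
    p = np.zeros(n - 1)                       # dual variable
    for _ in range(500):
        p = np.clip(p + sigma * (D @ x_bar), -lam, lam)        # dual ascent + projection
        x_new = (x - tau * (D.T @ p) + tau * f) / (1 + tau)    # prox of 0.5*||x - f||^2
        x_bar = 2 * x_new - x                                  # over-relaxation
        x = x_new

    print("RMSE vs ground truth:", np.sqrt(np.mean((x - x_true) ** 2)))
    ```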

  15. Shape regularized active contour based on dynamic programming for anatomical structure segmentation

    NASA Astrophysics Data System (ADS)

    Yu, Tianli; Luo, Jiebo; Singhal, Amit; Ahuja, Narendra

    2005-04-01

    We present a method to incorporate nonlinear shape prior constraints into the segmentation of different anatomical structures in medical images. Kernel space density estimation (KSDE) is used to derive the nonlinear shape statistics and enables building a single model for a class of objects with nonlinearly varying shapes. The object contour is coerced by image-based energy into the correct shape sub-distribution (e.g., left or right lung), without the need for model selection. In contrast to an earlier algorithm that uses a local gradient-descent search (susceptible to local minima), we propose an algorithm that iterates between dynamic programming (DP) and shape regularization. DP is capable of finding an optimal contour in the search space that maximizes a cost function related to the difference between the interior and exterior of the object. To enforce the nonlinear shape prior, we propose two shape regularization methods, global and local regularization. Global regularization is applied after each DP search to move the entire shape vector in the shape space, in a gradient descent fashion, toward the positions of probable shapes learned from training. The regularized shape is used as the starting shape for the next iteration. Local regularization is accomplished by modifying the search space of the DP; the modified search space only allows a certain amount of deformation of the local shape from the starting shape. Both regularization methods ensure consistency between the resulting shape and the training shapes, while still preserving DP's ability to search over a large range and avoid local minima. Our algorithm was applied to two different segmentation tasks for radiographic images: lung field and clavicle segmentation. Both applications have shown that our method is effective and versatile in segmenting various anatomical structures under prior shape constraints, and it is robust to noise and local minima caused by clutter (e.g., blood vessels) and other similar structures (e.g., ribs). We believe that the proposed algorithm represents a major step in the paradigm shift to object segmentation under nonlinear shape constraints.

  16. The founder of Vicks: Lunsford Richardson (1854-1919).

    PubMed

    Al Aboud, Khalid

    2010-01-01

    Vicks VapoRub (Procter & Gamble, Cincinnati, OH) is one of the most popular over-the-counter therapies in the world, used to provide relief from the symptoms of the common cold and non-life-threatening respiratory infections. Even as more advanced products have come and gone, VapoRub continues to dominate the market almost 9 decades after the death of its formulator, Lunsford Richardson.

  17. Interrogating the Trope of the Door in Multicultural Education: Framing Diplomatic Relations to Indigenous Political and Legal Difference

    ERIC Educational Resources Information Center

    Richardson, Troy A.

    2011-01-01

    In this essay Troy Richardson works to develop a conceptual framework and set of terms by which a diplomatic reception of different forms of law can be developed in multicultural education. Taking up the trope of the door in multiculturalist discourse as a site in which a welcoming of the difference of others is organized, Richardson interrogates…

  18. Grady Highway Extension (Ship Creek Crossing) Elmendorf Air Force Base and Fort Richardson, Alaska

    DTIC Science & Technology

    2005-06-01

    ... eastern alignment ("Park Route") upstream of the Proposed Action connecting to Fort Richardson at Fifth Street; and, use of access at Arctic Valley... (Figures: 3-7 Wetland A; 3-8 Large Ponded Area on Eastern Portion of Wetland B)

  19. Adaptive multi-view clustering based on nonnegative matrix factorization and pairwise co-regularization

    NASA Astrophysics Data System (ADS)

    Zhang, Tianzhen; Wang, Xiumei; Gao, Xinbo

    2018-04-01

    Nowadays, many datasets are represented by multiple views, which usually include shared and complementary information. Multi-view clustering methods integrate the information from multiple views to obtain better clustering results. Nonnegative matrix factorization has become an essential and popular tool in clustering methods because of its interpretability. However, existing nonnegative matrix factorization based multi-view clustering algorithms do not consider the disagreement between views and neglect the fact that different views contribute differently to the data distribution. In this paper, we propose a new multi-view clustering method, named adaptive multi-view clustering based on nonnegative matrix factorization and pairwise co-regularization. The proposed algorithm obtains a parts-based representation of the multi-view data by nonnegative matrix factorization. Pairwise co-regularization is then used to measure the disagreement between views. Only one parameter is needed to automatically learn the weight of each view according to its contribution to the data distribution. Experimental results show that the proposed algorithm outperforms several state-of-the-art algorithms for multi-view clustering.
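    As a single-view building block only (not the adaptive multi-view co-regularized method), the sketch below applies the classic Lee-Seung multiplicative updates for NMF; the data, rank and iteration count are illustrative assumptions.

    ```python
    import numpy as np

    # Single-view building block only: classic multiplicative-update NMF,
    # X ~ W H with W, H >= 0, minimizing ||X - W H||_F^2 (Lee-Seung updates).

    rng = np.random.default_rng(7)
    X = np.abs(rng.standard_normal((100, 40)))     # toy nonnegative data, samples x features
    k = 5                                          # number of latent components (assumed)
    eps = 1e-10                                    # guards against division by zero

    W = np.abs(rng.standard_normal((100, k)))
    H = np.abs(rng.standard_normal((k, 40)))
    for _ in range(500):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)

    print("relative reconstruction error:", np.linalg.norm(X - W @ H) / np.linalg.norm(X))
    ```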

  20. Bypassing the Limits of ℓ1 Regularization: Convex Sparse Signal Processing Using Non-Convex Regularization

    NASA Astrophysics Data System (ADS)

    Parekh, Ankit

    Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be `modern least-squares'. The use of ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima, well developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed to only a stationary point, problem specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal decomposition technique for an important biomedical signal processing problem: the detection of sleep spindles and K-complexes in human sleep electroencephalography (EEG). We propose a non-linear model for the EEG consisting of three components: (1) a transient (sparse piecewise constant) component, (2) a low-frequency component, and (3) an oscillatory component. The oscillatory component admits a sparse time-frequency representation. Using a convex objective function, we propose a fast non-linear optimization algorithm to estimate the three components in the proposed signal model. The low-frequency and oscillatory components are then used to estimate the K-complexes and sleep spindles respectively. The proposed detection method is shown to outperform several state-of-the-art automated sleep spindles detection methods.
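    A small illustration of the bias issue the thesis targets is given below: soft thresholding (the prox of the ℓ1 norm) shrinks every surviving coefficient, whereas a firm-threshold rule (the prox of a parameterized non-convex penalty such as the MCP) leaves large coefficients untouched; the threshold and non-convexity parameter are illustrative choices, not values from the thesis.

    ```python
    import numpy as np

    # Soft thresholding (l1 prox) shrinks surviving coefficients by lam; the firm
    # threshold (MCP prox with non-convexity parameter gamma > 1) keeps large ones exact.

    def soft_threshold(z, lam):
        return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

    def firm_threshold(z, lam, gamma=3.0):
        az = np.abs(z)
        return np.where(az <= lam, 0.0,
               np.where(az <= gamma * lam,
                        np.sign(z) * gamma * (az - lam) / (gamma - 1.0),
                        z))

    rng = np.random.default_rng(8)
    x = np.zeros(50); x[[5, 20, 35]] = [4.0, -3.0, 2.5]     # sparse signal
    y = x + 0.2 * rng.standard_normal(50)                   # noisy observation

    lam = 0.6
    for name, est in [("soft", soft_threshold(y, lam)), ("firm", firm_threshold(y, lam))]:
        err = np.linalg.norm(est - x)
        print(f"{name} threshold: error = {err:.3f}, value at index 5 = {est[5]:.3f}")
    ```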

  1. 76 FR 78007 - Ocean Transportation Intermediary License; Applicants

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-15

    ..., (Qualifying Individual), Ruba Hindi, Member, Application Type: New OFF License. Kemka USA Limited Liability Company (NVO & OFF), 421 Lucy Court, South Plainfield, NJ 07080, Officer: Hsiang (Rita) Y. Hsiao, Member...

  2. 1. Photocopied January 1973 from the Keystone Bridge Company Album, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. Photocopied January 1973 from the Keystone Bridge Company Album, 1874. THE KEYSTONE BRIDGE COMPANY: LUCY FURNACE. - Baltimore & Ohio Railroad, Parkersburg Bridge, Ohio River, Parkersburg, Wood County, WV

  3. Proposal Writing.

    ERIC Educational Resources Information Center

    Grant, Andrew; And Others

    1988-01-01

    The basics of effective proposal writing, from content to structure to length, are presented in three articles: "Knowledge Is Power" (Andrew Grant, Emily S. Berkowitz), "Write on the Money" (Lucy Knight); and "The Problem Proposal." (MLW)

  4. Analytical Modeling of Groundwater Seepages to St. Lucie Estuary

    NASA Astrophysics Data System (ADS)

    Lee, J.; Yeh, G.; Hu, G.

    2008-12-01

    In this paper, six analytical models describing the hydraulic interaction of stream-aquifer systems were applied to the St. Lucie Estuary (SLE). These are analytical solutions for: (1) flow from a finite aquifer to a canal, (2) flow from an infinite aquifer to a canal, (3) the linearized Laplace system in a seepage surface, (4) wave propagation in the aquifer, (5) potential flow through stratified unconfined aquifers, and (6) flow through stratified confined aquifers. Input data for the analytical solutions were obtained from monitoring wells and river stages at seepage-meter sites. Four transects in the study area are available: Club Med, Harbour Ridge, Lutz/MacMillan, and Pendarvis Cove, located in the St. Lucie River. The analytical models were first calibrated with seepage meter measurements and then used to estimate groundwater discharges into the St. Lucie River. From this process, analytical relationships between the seepage rate and river stages and/or groundwater tables were established to predict the seasonal and monthly variation in groundwater seepage into the SLE. The seepage rates estimated by the analytical models agreed well with measured data in some cases but only fairly in others. This is not unexpected, because analytical solutions rest on simplifying assumptions that are more valid for some cases than others. From the analytical calculations, it is possible to predict approximate seepage rates in the study domain when the assumptions underlying these analytical models are valid. The finite and infinite aquifer models and the linearized Laplace method work well for the Pendarvis Cove and Lutz/MacMillan sites, but only fairly for the other two. The wave propagation model gave very good agreement in phase but only fair agreement in magnitude for all four sites. The stratified unconfined and confined aquifer models gave similarly good agreement with measurements at three sites but poor agreement at the Club Med site; none of the analytical models presented here can fit the data at this site. To give better estimates at all sites, numerical modeling that couples river hydraulics and groundwater flow, involving fewer simplifications and assumptions, may have to be adopted.

  5. Chlorinated hydrocarbon pesticides and polychlorinated biphenyls in sediment cores from San Francisco Bay

    USGS Publications Warehouse

    Venkatesan, M.I.; De Leon, R. P.; VanGeen, A.; Luoma, S.N.

    1999-01-01

    Sediment cores of known chronology from Richardson and San Pablo Bays in San Francisco Bay, CA, were analyzed for a suite of chlorinated hydrocarbon pesticides and polychlorinated biphenyls to reconstruct a historic record of inputs. Total DDTs (DDT = 2,4'- and 4,4'-dichlorodiphenyltrichloroethane and the metabolites 2,4'- and 4,4'-DDE, -DDD) range in concentration from 4-21 ng/g and constitute a major fraction (> 84%) of the total pesticides in the top 70 cm of Richardson Bay sediment. A subsurface maximum corresponds to a peak deposition date of 1969-1974. The first measurable DDT levels are found in sediment deposited in the late 1930's. The higher DDT inventory in the San Pablo relative to the Richardson Bay core probably reflects the greater proximity of San Pablo Bay to agricultural activities in the watershed of the Sacramento and San Joaquin rivers. Total polychlorinated biphenyls (PCBs) occur at comparable levels in the two Bays (< 1-34 ng/g). PCBs are first detected in sediment deposited during the 1930's in Richardson Bay, about a decade earlier than the onset of detectable levels of DDTs. PCB inventories in San Pablo Bay are about a factor of four higher in the last four decades than in Richardson Bay, suggesting a distribution of inputs not as strongly weighted towards the upper reaches of the estuary as for DDTs. The shallower subsurface maximum in PCBs compared to DDT in the San Pablo Bay core is consistent with the imposition of drastic source control measures for these constituents in 1970 and 1977, respectively. The observed decline in DDT and PCB levels towards the surface of both cores is consistent with a dramatic drop in the input of these pollutants once the effect of sediment resuspension and mixing is taken into account.

  6. Theoretical analyses of Baroclinic flows

    NASA Technical Reports Server (NTRS)

    Antar, B.

    1982-01-01

    A stability analysis of a thin horizontal rotating fluid layer which is subjected to arbitrary horizontal and vertical temperature gradients is presented. The basic state is a nonlinear Hadley cell which contains both Ekman and thermal boundary layers; it is given in closed form. The stability analysis is based on the linearized Navier-Stokes equations, and zonally symmetric perturbations in the form of waves propagating in the meridional direction are considered. Numerical methods were used for the stability problem. It was found that the instability sets in when the Richardson number is close to unity and that the critical Richardson number is a non-monotonic function of the Prandtl number. Further, it was found that the critical Richardson number decreases with increasing Ekman number until a critical value of the Ekman number is reached beyond which the fluid is stable.
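    As a concrete reference for the stability criterion discussed above, the sketch below computes the gradient Richardson number Ri = N^2/(dU/dz)^2 from discrete synthetic profiles; the profiles and constants are illustrative, not those of the paper.

    ```python
    import numpy as np

    # Gradient Richardson number Ri = N^2 / (dU/dz)^2 from discrete profiles,
    # with N^2 = (g / theta0) * d(theta)/dz. Profiles below are synthetic.

    g, theta0 = 9.81, 300.0
    z = np.linspace(0.0, 1000.0, 101)                     # height (m)
    theta = theta0 + 0.003 * z                            # potential temperature profile (K)
    U = 10.0 * np.tanh((z - 500.0) / 200.0)               # sheared zonal wind (m/s)

    dU_dz = np.gradient(U, z)
    dtheta_dz = np.gradient(theta, z)
    N2 = (g / theta0) * dtheta_dz
    Ri = N2 / np.maximum(dU_dz**2, 1e-12)                 # avoid division by zero

    print("minimum Ri in the profile:", Ri.min())
    ```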

  7. Computation of repetitions and regularities of biologically weighted sequences.

    PubMed

    Christodoulakis, M; Iliopoulos, C; Mouchard, L; Perdikuri, K; Tsakalidis, A; Tsichlas, K

    2006-01-01

    Biological weighted sequences are used extensively in molecular biology as profiles for protein families, in the representation of binding sites and often for the representation of sequences produced by a shotgun sequencing strategy. In this paper, we address three fundamental problems in the area of biologically weighted sequences: (i) computation of repetitions, (ii) pattern matching, and (iii) computation of regularities. Our algorithms can be used as basic building blocks for more sophisticated algorithms applied on weighted sequences.

  8. Two-level structural sparsity regularization for identifying lattices and defects in noisy images

    DOE PAGES

    Li, Xin; Belianinov, Alex; Dyck, Ondrej E.; ...

    2018-03-09

    Here, this paper presents a regularized regression model with a two-level structural sparsity penalty applied to locate individual atoms in a noisy scanning transmission electron microscopy (STEM) image. In crystals, the locations of atoms are symmetric, condensed into a few lattice groups. Therefore, by identifying the underlying lattice in a given image, individual atoms can be accurately located. We propose to formulate the identification of the lattice groups as a sparse group selection problem. Furthermore, real atomic-scale images contain defects and vacancies, so atomic identification based solely on a lattice group may result in false positives and false negatives. To minimize error, the model includes an individual sparsity regularization in addition to the group sparsity for within-group selection, which results in a regression model with a two-level sparsity regularization. We propose a modification of the group orthogonal matching pursuit (gOMP) algorithm with a thresholding step to solve the atom finding problem. The convergence and statistical analyses of the proposed algorithm are presented. The proposed algorithm is also evaluated through numerical experiments with simulated images. The applicability of the algorithm to the determination of atom structures and the identification of imaging distortions and atomic defects was demonstrated using three real STEM images. In conclusion, we believe this is an important step toward automatic phase identification and assignment with the advent of genomic databases for materials.
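    As a building block only (plain orthogonal matching pursuit, not the paper's two-level group variant with thresholding), the following sketch greedily selects dictionary atoms and re-fits by least squares at each step; the dictionary size and sparsity level are assumptions.

    ```python
    import numpy as np

    # Plain orthogonal matching pursuit: greedily pick the column most correlated with
    # the residual, then re-solve the least-squares fit on the selected support.

    def omp(A, y, n_nonzero):
        residual = y.copy()
        support = []
        x = np.zeros(A.shape[1])
        for _ in range(n_nonzero):
            j = int(np.argmax(np.abs(A.T @ residual)))     # most correlated atom
            if j not in support:
                support.append(j)
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            x = np.zeros(A.shape[1]); x[support] = coef
            residual = y - A @ x
        return x

    rng = np.random.default_rng(9)
    m, n, k = 60, 120, 4
    A = rng.standard_normal((m, n)); A /= np.linalg.norm(A, axis=0)
    x_true = np.zeros(n); x_true[rng.choice(n, k, replace=False)] = 3 * rng.standard_normal(k)
    y = A @ x_true + 0.01 * rng.standard_normal(m)

    x_hat = omp(A, y, n_nonzero=k)
    print("recovered support:", np.flatnonzero(np.abs(x_hat) > 1e-6))
    print("true support     :", np.flatnonzero(x_true))
    ```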

  9. Two-level structural sparsity regularization for identifying lattices and defects in noisy images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Xin; Belianinov, Alex; Dyck, Ondrej E.

    Here, this paper presents a regularized regression model with a two-level structural sparsity penalty applied to locate individual atoms in a noisy scanning transmission electron microscopy (STEM) image. In crystals, the locations of atoms are symmetric, condensed into a few lattice groups. Therefore, by identifying the underlying lattice in a given image, individual atoms can be accurately located. We propose to formulate the identification of the lattice groups as a sparse group selection problem. Furthermore, real atomic-scale images contain defects and vacancies, so atomic identification based solely on a lattice group may result in false positives and false negatives. To minimize error, the model includes an individual sparsity regularization in addition to the group sparsity for within-group selection, which results in a regression model with a two-level sparsity regularization. We propose a modification of the group orthogonal matching pursuit (gOMP) algorithm with a thresholding step to solve the atom finding problem. The convergence and statistical analyses of the proposed algorithm are presented. The proposed algorithm is also evaluated through numerical experiments with simulated images. The applicability of the algorithm to the determination of atom structures and the identification of imaging distortions and atomic defects was demonstrated using three real STEM images. In conclusion, we believe this is an important step toward automatic phase identification and assignment with the advent of genomic databases for materials.

  10. Fluorescence molecular tomography reconstruction via discrete cosine transform-based regularization

    NASA Astrophysics Data System (ADS)

    Shi, Junwei; Liu, Fei; Zhang, Jiulou; Luo, Jianwen; Bai, Jing

    2015-05-01

    Fluorescence molecular tomography (FMT) as a noninvasive imaging modality has been widely used for biomedical preclinical applications. However, FMT reconstruction suffers from severe ill-posedness, especially when a limited number of projections are used. In order to improve the quality of FMT reconstruction results, a discrete cosine transform (DCT) based reweighted L1-norm regularization algorithm is proposed. In each iteration of the reconstruction process, different reweighted regularization parameters are adaptively assigned according to the values of DCT coefficients to suppress the reconstruction noise. In addition, the permission region of the reconstructed fluorophores is adaptively constructed to increase the convergence speed. In order to evaluate the performance of the proposed algorithm, physical phantom and in vivo mouse experiments with a limited number of projections are carried out. For comparison, different L1-norm regularization strategies are employed. By quantifying the signal-to-noise ratio (SNR) of the reconstruction results in the phantom and in vivo mouse experiments with four projections, the proposed DCT-based reweighted L1-norm regularization shows higher SNR than other L1-norm regularizations employed in this work.
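    A hedged sketch of the generic reweighted-L1 idea in an orthonormal DCT basis follows; it is not the authors' FMT-specific weighting or permission-region scheme, and the system matrix, signal, and parameters are illustrative assumptions.

    ```python
    import numpy as np

    # Generic iteratively reweighted L1 (IRL1) in an orthonormal DCT basis:
    #   outer loop: w_i = 1 / (|c_i| + eps_w)
    #   inner loop: ISTA on min_c 0.5*||y - A C^T c||^2 + lam * sum_i w_i |c_i|,
    # where C is an orthonormal DCT-II matrix and x = C^T c.

    rng = np.random.default_rng(10)
    n, m = 64, 32

    # Orthonormal DCT-II matrix built explicitly.
    k = np.arange(n)[:, None]; i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (i + 0.5) * k / n)
    C[0, :] /= np.sqrt(2.0)

    c_true = np.zeros(n); c_true[[3, 10]] = [2.0, -1.5]    # sparse DCT spectrum
    x_true = C.T @ c_true
    A = rng.standard_normal((m, n)) / np.sqrt(m)           # underdetermined measurements
    y = A @ x_true + 0.01 * rng.standard_normal(m)

    B = A @ C.T                                            # effective system on coefficients
    L = np.linalg.norm(B, 2) ** 2
    lam, eps_w = 0.01, 1e-3
    c = np.zeros(n)
    w = np.ones(n)                                         # first pass: unweighted L1
    for outer in range(5):
        for _ in range(500):                               # ISTA with per-coefficient weights
            z = c - (B.T @ (B @ c - y)) / L
            c = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)
        w = 1.0 / (np.abs(c) + eps_w)                      # reweight for the next pass

    x_hat = C.T @ c
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
    ```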

  11. A novel scatter-matrix eigenvalues-based total variation (SMETV) regularization for medical image restoration

    NASA Astrophysics Data System (ADS)

    Huang, Zhenghua; Zhang, Tianxu; Deng, Lihua; Fang, Hao; Li, Qian

    2015-12-01

    Total variation (TV) based regularization has proven to be a popular and effective model for image restoration because of its ability to preserve edges. However, as TV favors a piecewise constant solution, processing the flat regions of the image easily produces staircase effects, and the amplitude of the edges is underestimated; the underlying cause of the problem is that the regularization parameter cannot adapt to the local spatial information of the image. In this paper, we propose a novel scatter-matrix eigenvalues-based TV (SMETV) regularization with an image blind restoration algorithm for deblurring medical images. Spatial information in different image regions is incorporated into the regularization by using an edge indicator, called the difference eigenvalue, to distinguish edges from flat areas. The proposed algorithm can effectively reduce the noise in flat regions as well as preserve edge and detailed information. Moreover, it is more robust to changes in the regularization parameter. Extensive experiments demonstrate that the proposed approach produces results superior to most methods in both visual image quality and quantitative measures.

  12. Semi-supervised learning via regularized boosting working on multiple semi-supervised assumptions.

    PubMed

    Chen, Ke; Wang, Shihai

    2011-01-01

    Semi-supervised learning concerns the problem of learning in the presence of labeled and unlabeled data. Several boosting algorithms have been extended to semi-supervised learning with various strategies. To our knowledge, however, none of them takes all three semi-supervised assumptions, i.e., the smoothness, cluster, and manifold assumptions, into account together during boosting learning. In this paper, we propose a novel cost functional consisting of the margin cost on labeled data and a regularization penalty on unlabeled data based on the three fundamental semi-supervised assumptions. Minimizing the proposed cost functional with a greedy yet stagewise functional optimization procedure thus leads to a generic boosting framework for semi-supervised learning. Extensive experiments demonstrate that our algorithm yields favorable results on benchmark and real-world classification tasks in comparison to state-of-the-art semi-supervised learning algorithms, including newly developed boosting algorithms. Finally, we discuss relevant issues and relate our algorithm to previous work.

  13. 1913 Richardson Scales in 1913 head house, looking east. The ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1913 Richardson Scales in 1913 head house, looking east. The balance beam on the right side of the photograph is for the west hopper scale. To the left and right are two discharge charges from the hopper scales. On the left is an auger that fills the bins. In the background is the grain cleaner. - Stewart Company Grain Elevator, 16 West Carson Street, Pittsburgh, Allegheny County, PA

  14. Environmental Assessment for Watershed Enhancements at Joint Base Elmendorf-Richardson

    DTIC Science & Technology

    2013-07-03

    Potassium permanganate would be utilized to prevent a lethal dose of rotenone from migrating beyond the largest beaver dam on Otter Creek. Lowering the lake level...

  15. Ultrafast Laser Interaction Processes for Libs and Other Sensing Technologies

    DTIC Science & Technology

    2013-04-05

    Wang. Propagation of ultrashort pulses through water, Optics Express (02 2007). Z. Chen, S. Mao. Femtosecond laser-induced...Richardson, "Nd:YAG-CO2 double-pulse laser-induced breakdown spectroscopy for explosive residues detection," SPIE Defense, Security, Sensing; Orlando, FL... Ultrashort Pulse Laser Workshop, Directed Energy Professional Society; Newton, MA, USA; 06/29/2009. 63. Martin C. Richardson, Michael Sigman

  16. Ground temperatures across the old and new roads at mile 130, Richardson highway during 1954-62

    USGS Publications Warehouse

    Jin, H.; Brewer, M.C.; Perkins, R.A.; ,

    2002-01-01

    Year-round studies of the geothermal impacts of road construction in a "warm" permafrost area were undertaken during 1954-1962 at six road sections across the Richardson and Glenn Highways, in the vicinity of Glennallen, Alaska. As a result, significant information was obtained regarding the temperatures, and changes in temperatures, in the permafrost beneath and adjacent to the highway sections.

  17. Class-first analysis in a continuum: an approach to the complexities of schools, society, and insurgent science

    NASA Astrophysics Data System (ADS)

    Valdiviezo, Laura Alicia

    2010-06-01

    This essay addresses Katherine Richardson Bruna's paper: Mexican Immigrant Transnational Social Capital and Class Transformation: Examining the Role of Peer Mediation in Insurgent Science, through five main points. First, I offer a comparison between the traditional analysis of classism in Latin America and Richardson Bruna's call for a class-first analysis in the North American social sciences, where there has been a tendency to obviate the specific examination of class relations and class issues. Second, I argue that a class-first analysis alone cannot suffice to depict the complex dimensions in the relations of schools and society. Thus, I suggest a continuum in the class-first analysis. Third, I argue that social constructions surrounding issues of language, ethnicity, and gender necessarily intersect with issues of class and that, in fact, those other constructions offer compatible epistemologies that aid in representing the complexity of social and institutional practices in the capitalist society. Richardson Bruna's analysis of Augusto's interactions with his teacher and peers in the science class provides a fourth point of discussion in this essay. As a final point in my response I discuss Richardson Bruna's idea of making class-first analysis knowledge accessible to educators and especially to science teachers.

  18. Energy functions for regularization algorithms

    NASA Technical Reports Server (NTRS)

    Delingette, H.; Hebert, M.; Ikeuchi, K.

    1991-01-01

    Regularization techniques are widely used for inverse problem solving in computer vision, such as surface reconstruction, edge detection, or optical flow estimation. Energy functions used for regularization algorithms measure how smooth a curve or surface is, and to render acceptable solutions these energies must satisfy certain properties such as invariance under Euclidean transformations or invariance with respect to parameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that, to avoid the systematic underestimation of curvature in planar curve fitting, it is necessary that circles be the curves of maximum smoothness. A set of stabilizers is proposed that meets this condition as well as invariance under rotation and parameterization.

  19. Differentially Private Empirical Risk Minimization

    PubMed Central

    Chaudhuri, Kamalika; Monteleoni, Claire; Sarwate, Anand D.

    2011-01-01

    Privacy-preserving machine learning algorithms are crucial for the increasingly common setting in which personal data, such as medical or financial records, are analyzed. We provide general techniques to produce privacy-preserving approximations of classifiers learned via (regularized) empirical risk minimization (ERM). These algorithms are private under the ε-differential privacy definition due to Dwork et al. (2006). First we apply the output perturbation ideas of Dwork et al. (2006), to ERM classification. Then we propose a new method, objective perturbation, for privacy-preserving machine learning algorithm design. This method entails perturbing the objective function before optimizing over classifiers. If the loss and regularizer satisfy certain convexity and differentiability criteria, we prove theoretical results showing that our algorithms preserve privacy, and provide generalization bounds for linear and nonlinear kernels. We further present a privacy-preserving technique for tuning the parameters in general machine learning algorithms, thereby providing end-to-end privacy guarantees for the training process. We apply these results to produce privacy-preserving analogues of regularized logistic regression and support vector machines. We obtain encouraging results from evaluating their performance on real demographic and benchmark data sets. Our results show that both theoretically and empirically, objective perturbation is superior to the previous state-of-the-art, output perturbation, in managing the inherent tradeoff between privacy and learning performance. PMID:21892342
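
    A toy numpy illustration of the output-perturbation idea for L2-regularized logistic regression, not the paper's exact mechanism: the sensitivity bound 2/(n*lambda) assumes rows with unit norm, and per-coordinate Laplace noise is used here for simplicity, whereas the original mechanism draws spherically symmetric noise with a Gamma-distributed norm. Function names and values are assumptions.

```python
import numpy as np

def train_logreg(X, y, lam=0.1, lr=0.1, n_iter=500):
    """Plain gradient descent on L2-regularized logistic loss (labels in {-1,+1})."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        margins = y * (X @ w)
        grad = -(X.T @ (y / (1.0 + np.exp(margins)))) / n + lam * w
        w -= lr * grad
    return w

def output_perturbation(X, y, lam=0.1, epsilon=1.0, rng=None):
    """Release the trained weights plus noise scaled to the minimizer's
    sensitivity <= 2/(n*lam), assuming each row of X has norm <= 1."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    w = train_logreg(X, y, lam=lam)
    scale = 2.0 / (n * lam * epsilon)
    # simplification: independent Laplace noise per coordinate (illustrative only)
    return w + rng.laplace(scale=scale, size=d)
```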

  20. Dimension-Factorized Range Migration Algorithm for Regularly Distributed Array Imaging

    PubMed Central

    Guo, Qijia; Wang, Jie; Chang, Tianying

    2017-01-01

    The two-dimensional planar MIMO array is a popular approach for millimeter wave imaging applications. As a promising practical alternative, sparse MIMO arrays have been devised to reduce the number of antenna elements and transmitting/receiving channels with predictable and acceptable loss in image quality. In this paper, a high precision three-dimensional imaging algorithm is proposed for MIMO arrays of the regularly distributed type, especially the sparse varieties. Termed the Dimension-Factorized Range Migration Algorithm, the new imaging approach factorizes the conventional MIMO Range Migration Algorithm into multiple operations across the sparse dimensions. The thinner the sparse dimensions of the array, the more efficient the new algorithm will be. Advantages of the proposed approach are demonstrated by comparison with the conventional MIMO Range Migration Algorithm and its non-uniform fast Fourier transform based variant in terms of all the important characteristics of the approaches, especially the anti-noise capability. The computation cost is analyzed as well to evaluate the efficiency quantitatively. PMID:29113083

  1. 7 CFR 905.13 - District.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... County of St. Lucie and that part of the Counties of Brevard, Indian River, Martin, and Palm Beach... Counties of Palm Beach and Martin not included in Regulation Area II. (e) Citrus District Five shall...

  2. 7 CFR 905.13 - District.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... County of St. Lucie and that part of the Counties of Brevard, Indian River, Martin, and Palm Beach... Counties of Palm Beach and Martin not included in Regulation Area II. (e) Citrus District Five shall...

  3. 7 CFR 905.13 - District.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... County of St. Lucie and that part of the Counties of Brevard, Indian River, Martin, and Palm Beach... Counties of Palm Beach and Martin not included in Regulation Area II. (e) Citrus District Five shall...

  4. 7 CFR 905.13 - District.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... County of St. Lucie and that part of the Counties of Brevard, Indian River, Martin, and Palm Beach... Counties of Palm Beach and Martin not included in Regulation Area II. (e) Citrus District Five shall...

  5. 25. Historic American Buildings Survey, Donald W. Dickensheets, Photographer. April ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    25. Historic American Buildings Survey, Donald W. Dickensheets, Photographer. April 8, 1940. ST. 'LUCY' or 'LUCIA'. (SOUTH ELEVATION). FACADE - San Xavier del Bac Mission, Mission Road, Tucson, Pima County, AZ

  6. Lucy Wills (1888-1964): the life and research of an adventurous independent woman.

    PubMed

    Bastian, H

    2008-04-01

    Lucy Wills was one of a pioneering generation of women in medicine and medical research in England. After a double first honours degree in botany and geology from Cambridge in 1911, she travelled to South Africa, where she worked as a nurse during the First World War. Wills then gained a medical degree in London in 1920. By the late 1920s she had developed an interest in haematology and began travelling to India to investigate pernicious anaemia in pregnancy. There she identified a substance often called 'the Wills' factor', which was later recognised as folic acid. Wills undertook a placebo trial of routine iron supplementation in pregnant women during the Second World War, hampered, but not stopped, by bombing. In retirement, she continued to study nutritional effects on health in South Africa and Fiji.

  7. An iterated Laplacian based semi-supervised dimensionality reduction for classification of breast cancer on ultrasound images.

    PubMed

    Liu, Xiao; Shi, Jun; Zhou, Shichong; Lu, Minhua

    2014-01-01

    Dimensionality reduction is an important step in ultrasound-image-based computer-aided diagnosis (CAD) for breast cancer. A newly proposed l2,1 regularized correntropy algorithm for robust feature selection (CRFS) has achieved good performance for noise-corrupted data. Therefore, it has the potential to reduce the dimensions of ultrasound image features. However, in clinical practice, the collection of labeled instances is usually expensive and time-consuming, while it is relatively easy to acquire unlabeled or undetermined instances. Therefore, semi-supervised learning is well suited to clinical CAD. Iterated Laplacian regularization (Iter-LR) is a new regularization method that has been shown to outperform traditional graph Laplacian regularization in semi-supervised classification and ranking. In this study, to improve the classification accuracy of breast ultrasound CAD based on texture features, we propose an Iter-LR-based semi-supervised CRFS (Iter-LR-CRFS) algorithm, and then apply it to reduce the feature dimensions of ultrasound images for breast CAD. We compared the Iter-LR-CRFS with LR-CRFS, the original supervised CRFS, and principal component analysis. The experimental results indicate that the proposed Iter-LR-CRFS significantly outperforms all other algorithms.

  8. PRIFIRA: General regularization using prior-conditioning for fast radio interferometric imaging†

    NASA Astrophysics Data System (ADS)

    Naghibzadeh, Shahrzad; van der Veen, Alle-Jan

    2018-06-01

    Image formation in radio astronomy is a large-scale inverse problem that is inherently ill-posed. We present a general algorithmic framework based on a Bayesian-inspired regularized maximum likelihood formulation of the radio astronomical imaging problem with a focus on diffuse emission recovery from limited noisy correlation data. The algorithm is dubbed PRIor-conditioned Fast Iterative Radio Astronomy (PRIFIRA) and is based on a direct embodiment of the regularization operator into the system by right preconditioning. The resulting system is then solved using an iterative method based on projections onto Krylov subspaces. We motivate the use of a beamformed image (which includes the classical "dirty image") as an efficient prior-conditioner. Iterative reweighting schemes generalize the algorithmic framework and can account for different regularization operators that encourage sparsity of the solution. The performance of the proposed method is evaluated based on simulated one- and two-dimensional array arrangements as well as actual data from the core stations of the Low Frequency Array radio telescope antenna configuration, and compared to state-of-the-art imaging techniques. We show the generality of the proposed method in terms of regularization schemes while maintaining a competitive reconstruction quality with the current reconstruction techniques. Furthermore, we show that exploiting Krylov subspace methods together with the proper noise-based stopping criteria results in a great improvement in imaging efficiency.

  9. RBOOST: RIEMANNIAN DISTANCE BASED REGULARIZED BOOSTING

    PubMed Central

    Liu, Meizhu; Vemuri, Baba C.

    2011-01-01

    Boosting is a versatile machine learning technique that has numerous applications including but not limited to image processing, computer vision, data mining etc. It is based on the premise that the classification performance of a set of weak learners can be boosted by some weighted combination of them. There have been a number of boosting methods proposed in the literature, such as the AdaBoost, LPBoost, SoftBoost and their variations. However, the learning update strategies used in these methods usually lead to overfitting and instabilities in the classification accuracy. Improved boosting methods via regularization can overcome such difficulties. In this paper, we propose a Riemannian distance regularized LPBoost, dubbed RBoost. RBoost uses Riemannian distance between two square-root densities (in closed form) – used to represent the distribution over the training data and the classification error respectively – to regularize the error distribution in an iterative update formula. Since this distance is in closed form, RBoost requires much less computational cost compared to other regularized Boosting algorithms. We present several experimental results depicting the performance of our algorithm in comparison to recently published methods, LP-Boost and CAVIAR, on a variety of datasets including the publicly available OASIS database, a home grown Epilepsy database and the well known UCI repository. Results depict that the RBoost algorithm performs better than the competing methods in terms of accuracy and efficiency. PMID:21927643

  10. A Research Program in Computer Technology. 1985 Annual Technical Report

    DTIC Science & Technology

    1986-12-01

    The essence of the problem is that the modes of communication normally used between people are considerably richer than those between...Hansford Victoria Svoboda David Hollenberg Janna Tuckett Shih-Lien Lu Jasmin Witthoft Lee Richardson Craig Rogers Barden Smith Vance Tyree 10.1 PROBLEM...Jeff Deifik Lee Magnone Victoria Svoboda Joel Goldberg Janna Tuckett Wes Hansford Jasmin Witthoft Lee Richardson Craig Rogers Barden Smith Vance

  11. 44. #3 ARRESTING GEAR ENGINE AFT LOOKING FORWARD SHOWING ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    44. #3 ARRESTING GEAR ENGINE - AFT LOOKING FORWARD SHOWING MURAL OF LUCY AND CHARLIE BROWN ON HYDRAULIC OIL ACCUMULATOR. - U.S.S. HORNET, Puget Sound Naval Shipyard, Sinclair Inlet, Bremerton, Kitsap County, WA

  12. Inverse problems with nonnegative and sparse solutions: algorithms and application to the phase retrieval problem

    NASA Astrophysics Data System (ADS)

    Quy Muoi, Pham; Nho Hào, Dinh; Sahoo, Sujit Kumar; Tang, Dongliang; Cong, Nguyen Huu; Dang, Cuong

    2018-05-01

    In this paper, we study a gradient-type method and a semismooth Newton method for minimization problems in regularizing inverse problems with nonnegative and sparse solutions. We propose a special penalty functional forcing the minimizers of the regularized minimization problems to be nonnegative and sparse, and then apply the proposed algorithms to a practical problem. The strong convergence of the gradient-type method and the local superlinear convergence of the semismooth Newton method are proven. We then use these algorithms for the phase retrieval problem and illustrate their efficiency in numerical examples, particularly in the practical problem of optical imaging through scattering media, where all the noise from the experiment is present.

  13. Scaling Up Coordinate Descent Algorithms for Large ℓ1 Regularization Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scherrer, Chad; Halappanavar, Mahantesh; Tewari, Ambuj

    2012-07-03

    We present a generic framework for parallel coordinate descent (CD) algorithms that has as special cases the original sequential algorithms of Cyclic CD and Stochastic CD, as well as the recent parallel Shotgun algorithm of Bradley et al. We introduce two novel parallel algorithms that are also special cases---Thread-Greedy CD and Coloring-Based CD---and give performance measurements for an OpenMP implementation of these.
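
    For concreteness, here is a minimal sequential cyclic coordinate descent sketch for the l1-regularized least-squares (lasso) problem that such frameworks parallelize; the Thread-Greedy and Coloring-Based parallel variants from the report are not reproduced, and the function names are illustrative.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def cyclic_cd_lasso(A, b, lam=0.1, n_sweeps=50):
    """Cyclic coordinate descent for 0.5*||Ax - b||^2 + lam*||x||_1."""
    n, d = A.shape
    x = np.zeros(d)
    col_sq = (A ** 2).sum(axis=0)                  # per-column squared norms
    r = b - A @ x                                  # current residual
    for _ in range(n_sweeps):
        for j in range(d):
            if col_sq[j] == 0.0:
                continue
            rho = A[:, j] @ r + col_sq[j] * x[j]   # partial correlation with column j
            x_new = soft_threshold(rho, lam) / col_sq[j]
            r += A[:, j] * (x[j] - x_new)          # incremental residual update
            x[j] = x_new
    return x
```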

  14. Graph regularized nonnegative matrix factorization for temporal link prediction in dynamic networks

    NASA Astrophysics Data System (ADS)

    Ma, Xiaoke; Sun, Penggang; Wang, Yu

    2018-04-01

    Many networks derived from society and nature are temporal and incomplete. The temporal link prediction problem in networks is to predict links at time T + 1 based on a given temporal network from time 1 to T, which is essential to important applications. Current algorithms either predict the temporal links by collapsing the dynamic networks or by collapsing features derived from each network, and are criticized for ignoring the connections among slices. To overcome this issue, we propose a novel graph regularized nonnegative matrix factorization algorithm (GrNMF) for the temporal link prediction problem without collapsing the dynamic networks. To obtain the features for each network from 1 to t, GrNMF factorizes the matrix associated with each network by setting the rest of the networks as regularization, which provides a better way to characterize the topological information of temporal links. Then, the GrNMF algorithm collapses the feature matrices to predict temporal links. Compared with state-of-the-art methods, the proposed algorithm exhibits significantly improved accuracy by avoiding the collapse of temporal networks. Experimental results on a number of artificial and real temporal networks illustrate that the proposed method is not only more accurate but also more robust than state-of-the-art approaches.

  15. Design of 4D x-ray tomography experiments for reconstruction using regularized iterative algorithms

    NASA Astrophysics Data System (ADS)

    Mohan, K. Aditya

    2017-10-01

    4D X-ray computed tomography (4D-XCT) is widely used to perform non-destructive characterization of time varying physical processes in various materials. The conventional approach to improving temporal resolution in 4D-XCT involves the development of expensive and complex instrumentation that acquire data faster with reduced noise. It is customary to acquire data with many tomographic views at a high signal to noise ratio. Instead, temporal resolution can be improved using regularized iterative algorithms that are less sensitive to noise and limited views. These algorithms benefit from optimization of other parameters such as the view sampling strategy while improving temporal resolution by reducing the total number of views or the detector exposure time. This paper presents the design principles of 4D-XCT experiments when using regularized iterative algorithms derived using the framework of model-based reconstruction. A strategy for performing 4D-XCT experiments is presented that allows for improving the temporal resolution by progressively reducing the number of views or the detector exposure time. Theoretical analysis of the effect of the data acquisition parameters on the detector signal to noise ratio, spatial reconstruction resolution, and temporal reconstruction resolution is also presented in this paper.

  16. Low dose CT reconstruction via L1 norm dictionary learning using alternating minimization algorithm and balancing principle.

    PubMed

    Wu, Junfeng; Dai, Fang; Hu, Gang; Mou, Xuanqin

    2018-04-18

    Excessive radiation exposure in computed tomography (CT) scans increases the chance of developing cancer and has become a major clinical concern. Recently, statistical iterative reconstruction (SIR) with l0-norm dictionary learning regularization has been developed to reconstruct CT images from the low dose and few-view dataset in order to reduce radiation dose. Nonetheless, the sparse regularization term adopted in this approach is l0-norm, which cannot guarantee the global convergence of the proposed algorithm. To address this problem, in this study we introduced the l1-norm dictionary learning penalty into SIR framework for low dose CT image reconstruction, and developed an alternating minimization algorithm to minimize the associated objective function, which transforms CT image reconstruction problem into a sparse coding subproblem and an image updating subproblem. During the image updating process, an efficient model function approach based on balancing principle is applied to choose the regularization parameters. The proposed alternating minimization algorithm was evaluated first using real projection data of a sheep lung CT perfusion and then using numerical simulation based on sheep lung CT image and chest image. Both visual assessment and quantitative comparison using terms of root mean square error (RMSE) and structural similarity (SSIM) index demonstrated that the new image reconstruction algorithm yielded similar performance with l0-norm dictionary learning penalty and outperformed the conventional filtered backprojection (FBP) and total variation (TV) minimization algorithms.

  17. 1. Historic American Buildings Survey, Arthur C. Haskell, Photographer. 1935. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. Historic American Buildings Survey, Arthur C. Haskell, Photographer. 1935. From snapshot made by a Survey employee. (a) Ext- General front view from southeast. - Lucy Gray House, Indian Hill Road, North Tisbury, Dukes County, MA

  18. 3. Historic American Buildings Survey, Arthur C. Haskell, Photographer. 1935. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. Historic American Buildings Survey, Arthur C. Haskell, Photographer. 1935. From snapshot made by a Survey employee. (c) Ext-Detail entrance on south. - Lucy Gray House, Indian Hill Road, North Tisbury, Dukes County, MA

  19. Seagrass mitigation site modeling and assessment.

    DOT National Transportation Integrated Search

    2013-05-01

    Spatiotemporal analysis of Lake Surprise and SL-15 (15th spoil island in St. Lucie County) has allowed for a robust assessment of successful Florida Department of Transportation (FDOT) activities. Project results showed that bridge construction in La...

  20. Graph Laplacian Regularization for Image Denoising: Analysis in the Continuous Domain.

    PubMed

    Pang, Jiahao; Cheung, Gene

    2017-04-01

    Inverse imaging problems are inherently underdetermined, and hence, it is important to employ appropriate image priors for regularization. One recent popular prior, the graph Laplacian regularizer, assumes that the target pixel patch is smooth with respect to an appropriately chosen graph. However, the mechanisms and implications of imposing the graph Laplacian regularizer on the original inverse problem are not well understood. To address this problem, in this paper, we interpret neighborhood graphs of pixel patches as discrete counterparts of Riemannian manifolds and perform analysis in the continuous domain, providing insights into several fundamental aspects of graph Laplacian regularization for image denoising. Specifically, we first show the convergence of the graph Laplacian regularizer to a continuous-domain functional, integrating a norm measured in a locally adaptive metric space. Focusing on image denoising, we derive an optimal metric space assuming non-local self-similarity of pixel patches, leading to an optimal graph Laplacian regularizer for denoising in the discrete domain. We then interpret graph Laplacian regularization as an anisotropic diffusion scheme to explain its behavior during iterations, e.g., its tendency to promote piecewise smooth signals under certain settings. To verify our analysis, an iterative image denoising algorithm is developed. Experimental results show that our algorithm performs competitively with state-of-the-art denoising methods, such as BM3D for natural images, and outperforms them significantly for piecewise smooth images.
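
    A minimal discrete-domain illustration of the prior being analyzed: denoising a patch by solving (I + lambda*L)x = y on a fixed 4-connected grid graph. The adaptive metric space and graph construction of the paper are not modeled, and the helper names are assumptions.

```python
import numpy as np

def grid_laplacian(h, w):
    """Combinatorial Laplacian of a 4-connected h-by-w pixel grid."""
    n = h * w
    L = np.zeros((n, n))
    idx = lambda i, j: i * w + j
    for i in range(h):
        for j in range(w):
            for di, dj in ((0, 1), (1, 0)):      # right and down neighbours
                ni, nj = i + di, j + dj
                if ni < h and nj < w:
                    a, b = idx(i, j), idx(ni, nj)
                    L[a, a] += 1; L[b, b] += 1
                    L[a, b] -= 1; L[b, a] -= 1
    return L

def laplacian_denoise(patch, lam=2.0):
    """Minimize ||x - y||^2 + lam * x^T L x, i.e. solve (I + lam*L) x = y."""
    h, w = patch.shape
    L = grid_laplacian(h, w)
    y = patch.reshape(-1)
    x = np.linalg.solve(np.eye(h * w) + lam * L, y)
    return x.reshape(h, w)
```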

  1. Multiple graph regularized protein domain ranking.

    PubMed

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2012-11-19

    Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.

  2. Multiple graph regularized protein domain ranking

    PubMed Central

    2012-01-01

    Background Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. PMID:23157331

  3. 3D tumor localization through real-time volumetric x-ray imaging for lung cancer radiotherapy.

    PubMed

    Li, Ruijiang; Lewis, John H; Jia, Xun; Gu, Xuejun; Folkerts, Michael; Men, Chunhua; Song, William Y; Jiang, Steve B

    2011-05-01

    To evaluate an algorithm for real-time 3D tumor localization from a single x-ray projection image for lung cancer radiotherapy. Recently, we have developed an algorithm for reconstructing volumetric images and extracting 3D tumor motion information from a single x-ray projection [Li et al., Med. Phys. 37, 2822-2826 (2010)]. We have demonstrated its feasibility using a digital respiratory phantom with regular breathing patterns. In this work, we present a detailed description and a comprehensive evaluation of the improved algorithm. The algorithm was improved by incorporating respiratory motion prediction. The accuracy and efficiency of using this algorithm for 3D tumor localization were then evaluated on (1) a digital respiratory phantom, (2) a physical respiratory phantom, and (3) five lung cancer patients. These evaluation cases include both regular and irregular breathing patterns that are different from the training dataset. For the digital respiratory phantom with regular and irregular breathing, the average 3D tumor localization error is less than 1 mm which does not seem to be affected by amplitude change, period change, or baseline shift. On an NVIDIA Tesla C1060 graphic processing unit (GPU) card, the average computation time for 3D tumor localization from each projection ranges between 0.19 and 0.26 s, for both regular and irregular breathing, which is about a 10% improvement over previously reported results. For the physical respiratory phantom, an average tumor localization error below 1 mm was achieved with an average computation time of 0.13 and 0.16 s on the same graphic processing unit (GPU) card, for regular and irregular breathing, respectively. For the five lung cancer patients, the average tumor localization error is below 2 mm in both the axial and tangential directions. The average computation time on the same GPU card ranges between 0.26 and 0.34 s. Through a comprehensive evaluation of our algorithm, we have established its accuracy in 3D tumor localization to be on the order of 1 mm on average and 2 mm at 95 percentile for both digital and physical phantoms, and within 2 mm on average and 4 mm at 95 percentile for lung cancer patients. The results also indicate that the accuracy is not affected by the breathing pattern, be it regular or irregular. High computational efficiency can be achieved on GPU, requiring 0.1-0.3 s for each x-ray projection.

  4. ICT applications as e-health solutions in rural healthcare in the Eastern Cape Province of South Africa.

    PubMed

    Ruxwana, Nkqubela L; Herselman, Marlien E; Conradie, D Pieter

    Information and Communication Technology (ICT) solutions (e.g. e-health, telemedicine, e-education) are often viewed as vehicles to bridge the digital divide between rural and urban healthcare centres and to resolve shortcomings in the rural health sector. This study focused on factors perceived to influence the uptake and use of ICTs as e-health solutions in selected rural Eastern Cape healthcare centres, and on structural variables relating to these facilities and processes. Attention was also given to two psychological variables that may underlie an individual's acceptance and use of ICTs: usefulness and ease of use. Recommendations are made with regard to how ICTs can be used more effectively to improve health systems at five rural healthcare centres where questionnaire and interview data were collected: St. Lucy's Hospital, Nessie Knight Hospital, the Tsilitwa Clinic, the Madzikane Ka-Zulu Memorial Hospital and the Nelson Mandela General Hospital.

  5. Environmental Assessment for Wildland Fire Prevention Activities at Joint-Base Elmendorf-Richardson (JBER), Alaska

    DTIC Science & Technology

    2015-04-01

    adjacent to water sources used for human water consumption should be avoided to protect fish habitat and water quality. If feasible in these areas, the...fire. The Lead PIO1 conferred with ADFG (Regional Supervisor, Sport Fish Division) and had someone calculate the effects of suppression related...threaten the northern containment line. 4. Protect anadromous fish habitat in the Richardson Clearwater and Delta River drainages from the effects of fire

  6. Interpolation Method Needed for Numerical Uncertainty

    NASA Technical Reports Server (NTRS)

    Groves, Curtis E.; Ilie, Marcel; Schallhorn, Paul A.

    2014-01-01

    Using Computational Fluid Dynamics (CFD) to predict a flow field is an approximation to the exact problem, and uncertainties exist. The errors in CFD can be approximated via Richardson's extrapolation, a method based on progressive grid refinement. To estimate the errors, the analyst must interpolate between at least three grids. This paper describes a study to find an appropriate interpolation scheme that can be used in Richardson's extrapolation or another uncertainty method to approximate errors.
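
    A small worked sketch of the standard Richardson-extrapolation estimate from three systematically refined grids, shown only to make the procedure concrete; the refinement ratio, safety factor and the made-up drag values are assumptions, not data from the report.

```python
import numpy as np

def richardson_extrapolate(f_fine, f_medium, f_coarse, r=2.0):
    """Estimate the observed order of accuracy p and an extrapolated,
    grid-independent value from three solutions obtained with a constant
    refinement ratio r (coarse -> medium -> fine)."""
    p = np.log((f_coarse - f_medium) / (f_medium - f_fine)) / np.log(r)
    f_exact = f_fine + (f_fine - f_medium) / (r ** p - 1.0)
    # grid convergence index on the fine grid, safety factor 1.25
    gci_fine = 1.25 * abs((f_fine - f_medium) / f_fine) / (r ** p - 1.0)
    return p, f_exact, gci_fine

# e.g. a drag coefficient computed on coarse/medium/fine grids (made-up numbers)
p, f_ext, gci = richardson_extrapolate(0.3521, 0.3542, 0.3628)
```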

  7. DLA TEST Defense Infrastructure: Improvement Needed in Energy Reporting and Security Funding at Installations with Limited Connectivity

    DTIC Science & Technology

    2016-01-01

    2013 Annual Energy Management Report 20 Figure 7: 1.4 Megawatt Generators at Landfill Gas Plant at Joint Base Elmendorf-Richardson, Alaska 24...has significant redundancy through its onsite landfill gas electrical generation plant which, in combination with back-up generators, can provide...DOD Energy Management Figure 7: 1.4 Megawatt Generators at Landfill Gas Plant at Joint Base Elmendorf- Richardson, Alaska We also found that the

  8. Temperature dependent current-voltage characteristics of Au/n-Si Schottky barrier diodes and the effect of transition metal oxides as an interface layer

    NASA Astrophysics Data System (ADS)

    Mahato, Somnath; Puigdollers, Joaquim

    2018-02-01

    Temperature dependent current-voltage (I‒V) characteristics of Au/n-type silicon (n-Si) Schottky barrier diodes have been investigated. Three transition metal oxides (TMOs) are used as an interface layer between gold and silicon. The basic Schottky diode parameters, namely the ideality factor (n), barrier height (ϕb0) and series resistance (Rs), are calculated and successfully explained by thermionic emission (TE) theory. It is found that the ideality factor decreases and the barrier height increases with increasing temperature. The conventional Richardson plot of ln(I0/T²) vs. 1000/T is used to determine the activation energy (Ea) and the Richardson constant (A*), and the extracted value of A* is much smaller than the known theoretical value for n-type Si. The temperature dependent I‒V characteristics yield the mean barrier height (ϕb0) and its standard deviation (σs) from the linear plot of ϕap vs. 1000/T. The modified Richardson plot of ln(I0/T²) − (qσs)²/2(kT)² vs. 1000/T gives the Richardson constant and the homogeneous barrier height of the Schottky diodes. The main observation of the present work is that the barrier height and ideality factor show a considerable change, while the series resistance exhibits a negligible change, when a TMO is used as the interface layer.
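
    A small numpy sketch of how a conventional Richardson plot is fitted from the thermionic emission relation I0 = A·A*·T²·exp(−qϕb/kT): plot ln(I0/T²) against 1000/T, read ϕb from the slope and A* from the intercept. The diode area and the synthetic saturation currents below are assumptions, not values from the paper.

```python
import numpy as np

q = 1.602176634e-19      # elementary charge, C
k_B = 1.380649e-23       # Boltzmann constant, J/K
area = 7.85e-3           # assumed diode area, cm^2

def richardson_fit(T, I0):
    """Fit ln(I0/T^2) vs 1000/T; return barrier height (eV) and A* (A cm^-2 K^-2)."""
    x = 1000.0 / T
    y = np.log(I0 / T ** 2)
    slope, intercept = np.polyfit(x, y, 1)
    phi_b = -slope * 1000.0 * k_B / q        # effective barrier height in eV
    A_star = np.exp(intercept) / area        # effective Richardson constant
    return phi_b, A_star

# synthetic saturation currents for a 0.75 eV barrier and A* = 112 A cm^-2 K^-2
T = np.arange(200.0, 401.0, 20.0)
I0 = area * 112.0 * T ** 2 * np.exp(-q * 0.75 / (k_B * T))
print(richardson_fit(T, I0))                 # recovers ~0.75 eV and ~112
```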

  9. Manifold regularized multitask learning for semi-supervised multilabel image classification.

    PubMed

    Luo, Yong; Tao, Dacheng; Geng, Bo; Xu, Chao; Maybank, Stephen J

    2013-02-01

    It is a significant challenge to classify images with multiple labels by using only a small number of labeled samples. One option is to learn a binary classifier for each label and use manifold regularization to improve the classification performance by exploring the underlying geometric structure of the data distribution. However, such an approach does not perform well in practice when images from multiple concepts are represented by high-dimensional visual features. Thus, manifold regularization is insufficient to control the model complexity. In this paper, we propose a manifold regularized multitask learning (MRMTL) algorithm. MRMTL learns a discriminative subspace shared by multiple classification tasks by exploiting the common structure of these tasks. It effectively controls the model complexity because different tasks limit one another's search volume, and the manifold regularization ensures that the functions in the shared hypothesis space are smooth along the data manifold. We conduct extensive experiments, on the PASCAL VOC'07 dataset with 20 classes and the MIR dataset with 38 classes, by comparing MRMTL with popular image classification algorithms. The results suggest that MRMTL is effective for image classification.

  10. Full-Field Measurements of Self-Excited Oscillations in Momentum-Dominated Helium Jets

    NASA Technical Reports Server (NTRS)

    Yildirim, B. S.; Agrawal, A. K.

    2005-01-01

    Flow structure of momentum-dominated helium jets discharged vertically into ambient air was investigated using a high-speed rainbow schlieren deflectometry (RSD) apparatus operated at up to 2000 Hz. The operating parameters, i.e., Reynolds number and Richardson number were varied independently to examine the self-excited, flow oscillatory behavior over a range of experimental conditions. Measurements revealed highly periodic oscillations in the laminar region at a unique frequency as well as high regularity in the flow transition and initial turbulent regions. The buoyancy was shown to affect the oscillation frequency and the distance from the jet exit to the flow transition plane. Instantaneous helium concentration contours across the field of view revealed changes in the jet flow structure and the evolution of the vortical structures during an oscillation cycle. A cross-correlation technique was applied to track the vortices and to find their convection velocity. Time-traces of helium concentration at different axial locations provided detailed information about the oscillating flow.

  11. Ground-based characterization of Leucus and Polymele, two fly-by targets of the Lucy Discovery mission

    NASA Astrophysics Data System (ADS)

    Buie, Marc W.; Zangari, Amanda Marie; Marchi, Simone; Mottola, Stefano; Levison, Harold F.

    2016-10-01

    Lucy is a proposed NASA Discovery mission designed to perform close fly-bys of six Jupiter Trojan asteroids. The mission, currently in Phase A development, is planned to launch in 2021 and arrive at the L4 Trojan cloud in 2027. We report on the results of an observational campaign of (11351) Leucus and (15094) Polymele conducted this year. The goal is to characterize their shape, spin state and photometric properties to aid in mission planning and to complement the mission data. Leucus was previously observed by French et al. (2013), who reported a 514 hour rotation period with a lightcurve amplitude as high as 1 magnitude. Our data confirm a long-period and high-amplitude lightcurve, but with a period closer to 440 hours. The lightcurve has a symmetric double-peaked shape with a ~0.7 mag peak-to-peak amplitude. Initial results for Polymele indicate a low-amplitude lightcurve at or below 0.1 mag with a period near 4 hours. Thus, the Lucy target sample includes bodies with among the slowest and fastest rotation rates. Additional observations will be required to further refine the period and pole orientation for both bodies. This year's data are especially challenging due to observing at low galactic latitude. We will report on final results of this year's campaign along with our methods for removing field confusion using optimal image subtraction and photometric calibration based on the new APASS catalog (Henden et al, 2012).

  12. A novel construction scheme of QC-LDPC codes based on the RU algorithm for optical transmission systems

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Liang, Meng-qi; Wang, Yong; Lin, Jin-zhao; Pang, Yu

    2016-03-01

    A novel lower-complexity construction scheme of quasi-cyclic low-density parity-check (QC-LDPC) codes for optical transmission systems is proposed based on the structure of the parity-check matrix for the Richardson-Urbanke (RU) algorithm. Furthermore, a novel irregular QC-LDPC(4 288, 4 020) code with a high code-rate of 0.937 is constructed by this novel construction scheme. The simulation analyses show that the net coding gain (NCG) of the novel irregular QC-LDPC(4 288, 4 020) code is respectively 2.08 dB, 1.25 dB and 0.29 dB more than those of the classic RS(255, 239) code, the LDPC(32 640, 30 592) code and the irregular QC-LDPC(3 843, 3 603) code at a bit error rate (BER) of 10^-6. The irregular QC-LDPC(4 288, 4 020) code has lower encoding/decoding complexity compared with the LDPC(32 640, 30 592) code and the irregular QC-LDPC(3 843, 3 603) code. The proposed novel QC-LDPC(4 288, 4 020) code can be more suitable for the increasing development requirements of high-speed optical transmission systems.

  13. High quality 4D cone-beam CT reconstruction using motion-compensated total variation regularization

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Ma, Jianhua; Bian, Zhaoying; Zeng, Dong; Feng, Qianjin; Chen, Wufan

    2017-04-01

    Four dimensional cone-beam computed tomography (4D-CBCT) has great potential clinical value because of its ability to describe tumor and organ motion. But the challenge in 4D-CBCT reconstruction is the limited number of projections at each phase, which results in reconstructions full of noise and streak artifacts with conventional analytical algorithms. To address this problem, in this paper, we propose a motion compensated total variation regularization approach which tries to fully explore the temporal coherence of the spatial structures among the 4D-CBCT phases. In this work, we additionally conduct motion estimation/motion compensation (ME/MC) on the 4D-CBCT volume by using inter-phase deformation vector fields (DVFs). The motion compensated 4D-CBCT volume is then viewed as a pseudo-static sequence, on which the regularization function is imposed. The regularization used in this work is 3D spatial total variation minimization combined with 1D temporal total variation minimization. We subsequently construct a cost function for a reconstruction pass, and minimize this cost function using a variable splitting algorithm. Simulation and real patient data were used to evaluate the proposed algorithm. Results show that the introduction of additional temporal correlation along the phase direction can improve the 4D-CBCT image quality.
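
    A short sketch of the combined 3D spatial plus 1D temporal total-variation penalty described above, evaluated on a phase-sorted 4D volume; the motion compensation, cost function and variable-splitting solver of the paper are not reproduced, and the weights are assumptions.

```python
import numpy as np

def spatial_temporal_tv(volume4d, lam_s=1.0, lam_t=1.0, eps=1e-8):
    """Penalty for a phase-sorted volume of shape (T, Z, Y, X):
    isotropic 3D spatial TV within each phase plus 1D TV along the phase axis."""
    dz = np.diff(volume4d, axis=1)
    dy = np.diff(volume4d, axis=2)
    dx = np.diff(volume4d, axis=3)
    # crop the three finite differences to a common shape before combining
    spatial = np.sqrt(dz[:, :, :-1, :-1] ** 2 +
                      dy[:, :-1, :, :-1] ** 2 +
                      dx[:, :-1, :-1, :] ** 2 + eps).sum()
    temporal = np.abs(np.diff(volume4d, axis=0)).sum()
    return lam_s * spatial + lam_t * temporal
```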

  14. Retrieving cloudy atmosphere parameters from RPG-HATPRO radiometer data

    NASA Astrophysics Data System (ADS)

    Kostsov, V. S.

    2015-03-01

    An algorithm for simultaneously determining both tropospheric temperature and humidity profiles and cloud liquid water content from ground-based measurements of microwave radiation is presented. A special feature of this algorithm is that it combines different types of measurements and different a priori information on the sought parameters. The features of its use in processing RPG-HATPRO radiometer data obtained in the course of atmospheric remote sensing experiments carried out by specialists from the Faculty of Physics of St. Petersburg State University are discussed. The results of a comparison of both temperature and humidity profiles obtained using a ground-based microwave remote sensing method with those obtained from radiosonde data are analyzed. It is shown that this combined algorithm is comparable (in accuracy) to the classical method of statistical regularization in determining temperature profiles; however, this algorithm demonstrates better accuracy (when compared to the method of statistical regularization) in determining humidity profiles.

  15. Sparse Reconstruction of Regional Gravity Signal Based on Stabilized Orthogonal Matching Pursuit (SOMP)

    NASA Astrophysics Data System (ADS)

    Saadat, S. A.; Safari, A.; Needell, D.

    2016-06-01

    The main role of gravity field recovery is the study of dynamic processes in the interior of the Earth especially in exploration geophysics. In this paper, the Stabilized Orthogonal Matching Pursuit (SOMP) algorithm is introduced for sparse reconstruction of regional gravity signals of the Earth. In practical applications, ill-posed problems may be encountered regarding unknown parameters that are sensitive to the data perturbations. Therefore, an appropriate regularization method needs to be applied to find a stabilized solution. The SOMP algorithm aims to regularize the norm of the solution vector, while also minimizing the norm of the corresponding residual vector. In this procedure, a convergence point of the algorithm that specifies optimal sparsity-level of the problem is determined. The results show that the SOMP algorithm finds the stabilized solution for the ill-posed problem at the optimal sparsity-level, improving upon existing sparsity based approaches.
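
    For reference, a plain orthogonal matching pursuit sketch of the greedy selection / least-squares refit cycle that SOMP builds on; the stabilization step and the convergence-point stopping rule specific to SOMP are not reproduced here.

```python
import numpy as np

def omp(A, b, sparsity):
    """Greedy OMP: pick the column most correlated with the residual,
    then re-fit by least squares on the selected support."""
    residual = b.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        correlations = np.abs(A.T @ residual)
        correlations[support] = 0.0               # do not reselect chosen atoms
        j = int(np.argmax(correlations))
        support.append(j)
        coeffs, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coeffs
    x[support] = coeffs
    return x
```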

  16. Regularized finite element modeling of progressive failure in soils within nonlocal softening plasticity

    NASA Astrophysics Data System (ADS)

    Huang, Maosong; Qu, Xie; Lü, Xilin

    2017-11-01

    By solving a nonlinear complementarity problem for the consistency condition, an improved implicit stress return iterative algorithm for a generalized over-nonlocal strain-softening plasticity was proposed, and the consistent tangent matrix was obtained. The proposed algorithm was embedded into existing finite element codes, and it enables the nonlocal regularization of ill-posed boundary value problems caused by pressure-independent and pressure-dependent strain-softening plasticity. The algorithm was verified by the numerical modeling of strain localization in a plane strain compression test. The results showed that fast convergence can be achieved and that the mesh dependency caused by strain softening can be effectively eliminated. The influences of the hardening modulus and the material characteristic length on the simulation were obtained. The proposed algorithm was further used in simulations of the bearing capacity of a strip footing; the results are mesh-independent, and the progressive failure process of the soil was well captured.

  17. Blur kernel estimation with algebraic tomography technique and intensity profiles of object boundaries

    NASA Astrophysics Data System (ADS)

    Ingacheva, Anastasia; Chukalina, Marina; Khanipov, Timur; Nikolaev, Dmitry

    2018-04-01

    Motion blur caused by camera vibration is a common source of degradation in photographs. In this paper we study the problem of finding the point spread function (PSF) of a blurred image using the tomography technique. The PSF reconstruction result strongly depends on the particular tomography technique used. We present a tomography algorithm with regularization adapted specifically for this task. We use the algebraic reconstruction technique (ART algorithm) as the starting algorithm and introduce regularization. We use the conjugate gradient method for numerical implementation of the proposed approach. The algorithm is tested using a dataset from the Adobe corporation which contains 9 kernels extracted from real photographs, for which the point spread function is known. We also investigate the influence of noise on the quality of image reconstruction and how the number of projections influences the magnitude of the reconstruction error.
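
    A basic relaxed ART (Kaczmarz) sweep, shown only to make the starting algorithm concrete; the specific regularization term and the conjugate-gradient implementation described in the paper are not reproduced, and the relaxation value is an assumption.

```python
import numpy as np

def art_reconstruct(A, b, n_sweeps=20, relax=0.5, nonneg=True):
    """Relaxed ART/Kaczmarz: project the estimate onto each measurement
    hyperplane in turn; a relaxation factor below 1 damps noise amplification."""
    m, n = A.shape
    x = np.zeros(n)
    row_sq = (A ** 2).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(m):
            if row_sq[i] == 0.0:
                continue
            x += relax * (b[i] - A[i] @ x) / row_sq[i] * A[i]
            if nonneg:
                np.clip(x, 0.0, None, out=x)   # keep the PSF nonnegative
    return x
```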

  18. 2. Historic American Buildings Survey, Arthur C. Haskell, Photographer. 1935. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. Historic American Buildings Survey, Arthur C. Haskell, Photographer. 1935. From snapshot made by a Survey employee. (b) Ext- General view rear, looking from north. - Lucy Gray House, Indian Hill Road, North Tisbury, Dukes County, MA

  19. Administrative Data Algorithms Can Describe Ambulatory Physician Utilization

    PubMed Central

    Shah, Baiju R; Hux, Janet E; Laupacis, Andreas; Zinman, Bernard; Cauch-Dudek, Karen; Booth, Gillian L

    2007-01-01

    Objective To validate algorithms using administrative data that characterize ambulatory physician care for patients with a chronic disease. Data Sources Seven-hundred and eighty-one people with diabetes were recruited mostly from community pharmacies to complete a written questionnaire about their physician utilization in 2002. These data were linked with administrative databases detailing health service utilization. Study Design An administrative data algorithm was defined that identified whether or not patients received specialist care, and it was tested for agreement with self-report. Other algorithms, which assigned each patient to a primary care and specialist physician, were tested for concordance with self-reported regular providers of care. Principal Findings The algorithm to identify whether participants received specialist care had 80.4 percent agreement with questionnaire responses (κ = 0.59). Compared with self-report, administrative data had a sensitivity of 68.9 percent and specificity 88.3 percent for identifying specialist care. The best administrative data algorithm to assign each participant's regular primary care and specialist providers was concordant with self-report in 82.6 and 78.2 percent of cases, respectively. Conclusions Administrative data algorithms can accurately match self-reported ambulatory physician utilization. PMID:17610448
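
    A short sketch of the agreement statistics reported above (sensitivity, specificity, Cohen's kappa) computed from paired self-report and algorithm labels; the input arrays and the function name are placeholders, not the study's data.

```python
import numpy as np

def agreement_stats(self_report, algorithm):
    """Sensitivity, specificity and Cohen's kappa for two binary label vectors,
    treating self-report as the reference standard."""
    sr = np.asarray(self_report, dtype=bool)
    al = np.asarray(algorithm, dtype=bool)
    tp = np.sum(sr & al); tn = np.sum(~sr & ~al)
    fp = np.sum(~sr & al); fn = np.sum(sr & ~al)
    n = tp + tn + fp + fn
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    po = (tp + tn) / n                                             # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2  # chance agreement
    kappa = (po - pe) / (1 - pe)
    return sensitivity, specificity, kappa
```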

  20. Transition Metal Switchable Mirror

    ScienceCinema

    None

    2017-12-09

    The switchable-mirrors technology was developed by Tom Richardson and Jonathan Slack of Berkeley Lab's Environmental Energy Technologies Division. By using transition metals rather than the rare earth metals used in the first metal-hydride switchable mirrors, Richardson and Slack were able to lower the cost and simplify the manufacturing process. Energy performance is improved as well, because the new windows can reflect or transmit both visible and infrared light. Besides windows for offices and homes, possible applications include automobile sunroofs, signs and displays, aircraft windows, and spacecraft.

  1. Transition Metal Switchable Mirror

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2009-08-21

    The switchable-mirrors technology was developed by Tom Richardson and Jonathan Slack of Berkeley Lab's Environmental Energy Technologies Division. By using transition metals rather than the rare earth metals used in the first metal-hydride switchable mirrors, Richardson and Slack were able to lower the cost and simplify the manufacturing process. Energy performance is improved as well, because the new windows can reflect or transmit both visible and infrared light. Besides windows for offices and homes, possible applications include automobile sunroofs, signs and displays, aircraft windows, and spacecraft.

  2. MaRIE 1.0: A briefing to Katherine Richardson-McDaniel, Staff Member for U. S. Senator Martin Heinrich (D-NM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnes, Cris William

    At the request of Katherine Richardson-McDaniel, Staff Member to U.S. Senator Martin Heinrich (D-NM), a high-level briefing was requested about MaRIE 1.0, the Matter-Radiation Interactions in Extremes effort at Los Alamos National Laboratory. What it would be, the mission need motivation, the scientific challenge, and the current favorable impact on both programs and people are shown in viewgraph form.

  3. Imaging Total Stations - Modular and Integrated Concepts

    NASA Astrophysics Data System (ADS)

    Hauth, Stefan; Schlüter, Martin

    2010-05-01

    Keywords: 3D-Metrology, Engineering Geodesy, Digital Image Processing. Starting in 2009, the Institute for Spatial Information and Surveying Technology i3mainz, Mainz University of Applied Sciences, has pursued research towards modular concepts for imaging total stations. On the one hand, this research is driven by the recent successful setup of high-precision imaging motor theodolites; on the other hand, it is pushed by the introduction of integrated imaging total stations to the positioning market by the manufacturers Topcon and Trimble. Modular concepts for imaging total stations are largely manufacturer independent and consist of a particular combination of accessory hardware, software and algorithmic procedures. The hardware part consists mainly of an interchangeable eyepiece adapter offering opportunities for digital imaging and motorized focus control. Easy assembly and disassembly in the field allow the user to switch between the classical and the imaging use of a robotic total station. The software part primarily has to ensure hardware control, but several levels of algorithmic support can be added and have to be distinguished. Algorithmic procedures allow several levels of calibration to be reached concerning the geometry of the external digital camera and the total station. We give insight into our recent developments and quality characteristics. Both the modular and the integrated approach seem to have individual strengths and weaknesses; therefore we expect that the two approaches might point at different target applications. Our aim is a better understanding of appropriate applications for robotic imaging total stations. First results are presented.

  4. Spectral turning bands for efficient Gaussian random fields generation on GPUs and accelerators

    NASA Astrophysics Data System (ADS)

    Hunger, L.; Cosenza, B.; Kimeswenger, S.; Fahringer, T.

    2015-11-01

    A random field (RF) is a set of correlated random variables associated with different spatial locations. RF generation algorithms are of crucial importance for many scientific areas, such as astrophysics, geostatistics, computer graphics, and many others. Current approaches commonly make use of 3D fast Fourier transform (FFT), which does not scale well for RF bigger than the available memory; they are also limited to regular rectilinear meshes. We introduce random field generation with the turning band method (RAFT), an RF generation algorithm based on the turning band method that is optimized for massively parallel hardware such as GPUs and accelerators. Our algorithm replaces the 3D FFT with a lower-order, one-dimensional FFT followed by a projection step and is further optimized with loop unrolling and blocking. RAFT can easily generate RF on non-regular (non-uniform) meshes and efficiently produce fields with mesh sizes bigger than the available device memory by using a streaming, out-of-core approach. Our algorithm generates RF with the correct statistical behavior and is tested on a variety of modern hardware, such as NVIDIA Tesla, AMD FirePro and Intel Phi. RAFT is faster than the traditional methods on regular meshes and has been successfully applied to two real case scenarios: planetary nebulae and cosmological simulations.

  5. Photorefraction Screens Millions for Vision Disorders

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Who would have thought that stargazing in the 1980s would lead to hundreds of thousands of schoolchildren seeing more clearly today? Collaborating with research ophthalmologists and optometrists, Marshall Space Flight Center scientists Joe Kerr and the late John Richardson adapted optics technology for eye screening methods using a process called photorefraction. Photorefraction consists of delivering a light beam into the eyes where it bends in the ocular media, hits the retina, and then reflects as an image back to a camera. A series of refinements and formal clinical studies followed their highly successful initial tests in the 1980s. Evaluating over 5,000 subjects in field tests, Kerr and Richardson used a camera system prototype with a specifically angled telephoto lens and flash to photograph a subject s eye. They then analyzed the image, the cornea and pupil in particular, for irregular reflective patterns. Early tests of the system with 1,657 Alabama children revealed that, while only 111 failed the traditional chart test, Kerr and Richardson s screening system found 507 abnormalities.

  6. Patch-based image reconstruction for PET using prior-image derived dictionaries

    NASA Astrophysics Data System (ADS)

    Tahaei, Marzieh S.; Reader, Andrew J.

    2016-09-01

    In PET image reconstruction, regularization is often needed to reduce the noise in the resulting images. Patch-based image processing techniques have recently been successfully used for regularization in medical image reconstruction through a penalized likelihood framework. Re-parameterization within reconstruction is another powerful regularization technique in which the object in the scanner is re-parameterized using coefficients for spatially-extensive basis vectors. In this work, a method for extracting patch-based basis vectors from the subject’s MR image is proposed. The coefficients for these basis vectors are then estimated using the conventional MLEM algorithm. Furthermore, using the alternating direction method of multipliers, an algorithm for optimizing the Poisson log-likelihood while imposing sparsity on the parameters is also proposed. This novel method is then utilized to find sparse coefficients for the patch-based basis vectors extracted from the MR image. The results indicate the superiority of the proposed methods to patch-based regularization using the penalized likelihood framework.
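
    A hedged sketch of one ingredient described above: building patch-based basis vectors from a prior MR image. The patch size, number of patches, and normalization are assumptions, and the MLEM-style coefficient update is only indicated schematically in the comments (the PET system matrix A and data y are not constructed here).

      import numpy as np
      from sklearn.feature_extraction.image import extract_patches_2d

      def patch_dictionary(mr_image, patch_size=(8, 8), n_patches=512, seed=0):
          """Extract patches from the prior MR image and stack them as columns of a
          dictionary matrix B (each column one basis vector, L2-normalized)."""
          patches = extract_patches_2d(mr_image, patch_size,
                                       max_patches=n_patches, random_state=seed)
          B = patches.reshape(n_patches, -1).T.astype(float)
          norms = np.linalg.norm(B, axis=0)
          norms[norms == 0] = 1.0
          return B / norms

      # The PET image is then parameterized as x = B @ c with c >= 0, and c can be
      # updated with an MLEM-style multiplicative rule, schematically:
      #   c <- c * (B.T @ A.T @ (y / (A @ B @ c))) / (B.T @ A.T @ ones)
      # where A is the PET system matrix and y the measured data (both assumed here).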

  7. Application of Fourier-wavelet regularized deconvolution for improving image quality of free space propagation x-ray phase contrast imaging.

    PubMed

    Zhou, Zhongxing; Gao, Feng; Zhao, Huijuan; Zhang, Lixin

    2012-11-21

    New x-ray phase contrast imaging techniques without using synchrotron radiation confront a common problem from the negative effects of finite source size and limited spatial resolution. These negative effects swamp the fine phase contrast fringes and make them almost undetectable. In order to alleviate this problem, deconvolution procedures should be applied to the blurred x-ray phase contrast images. In this study, three different deconvolution techniques, including Wiener filtering, Tikhonov regularization and Fourier-wavelet regularized deconvolution (ForWaRD), were applied to the simulated and experimental free space propagation x-ray phase contrast images of simple geometric phantoms. These algorithms were evaluated in terms of phase contrast improvement and signal-to-noise ratio. The results demonstrate that the ForWaRD algorithm is most appropriate for phase contrast image restoration among the above-mentioned methods; it can effectively restore the lost information of phase contrast fringes while reducing the noise amplified during Fourier regularization.
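
    A simplified two-stage sketch in the spirit of Fourier-wavelet regularized deconvolution: a Tikhonov-regularized inverse filter in the Fourier domain followed by wavelet-domain soft thresholding of the result. The regularization weight, wavelet, level, and threshold are illustrative assumptions; the actual ForWaRD shrinkage rules are more elaborate.

      import numpy as np
      import pywt

      def fourier_wavelet_deconvolve(blurred, psf, alpha=1e-2, wavelet="db4",
                                     level=3, thr=0.02):
          """Stage 1: regularized inverse filtering; stage 2: wavelet soft thresholding.
          `psf` is assumed to be centered and the same shape as `blurred`."""
          H = np.fft.fft2(np.fft.ifftshift(psf))
          X1 = np.conj(H) * np.fft.fft2(blurred) / (np.abs(H) ** 2 + alpha)
          x1 = np.real(np.fft.ifft2(X1))
          coeffs = pywt.wavedec2(x1, wavelet, level=level)
          shrunk = [coeffs[0]] + [
              tuple(pywt.threshold(c, thr, mode="soft") for c in detail)
              for detail in coeffs[1:]
          ]
          x2 = pywt.waverec2(shrunk, wavelet)
          return x2[: blurred.shape[0], : blurred.shape[1]]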

  8. General phase regularized reconstruction using phase cycling.

    PubMed

    Ong, Frank; Cheng, Joseph Y; Lustig, Michael

    2018-07-01

    To develop a general phase regularized image reconstruction method, with applications to partial Fourier imaging, water-fat imaging and flow imaging. The problem of enforcing phase constraints in reconstruction was studied under a regularized inverse problem framework. A general phase regularized reconstruction algorithm was proposed to enable various joint reconstruction of partial Fourier imaging, water-fat imaging and flow imaging, along with parallel imaging (PI) and compressed sensing (CS). Since phase regularized reconstruction is inherently non-convex and sensitive to phase wraps in the initial solution, a reconstruction technique, named phase cycling, was proposed to render the overall algorithm invariant to phase wraps. The proposed method was applied to retrospectively under-sampled in vivo datasets and compared with state-of-the-art reconstruction methods. Phase cycling reconstructions showed reduction of artifacts compared to reconstructions without phase cycling and achieved similar performance to state-of-the-art results in partial Fourier, water-fat and divergence-free regularized flow reconstruction. Joint reconstruction of partial Fourier + water-fat imaging + PI + CS, and partial Fourier + divergence-free regularized flow imaging + PI + CS were demonstrated. The proposed phase cycling reconstruction provides an alternative way to perform phase regularized reconstruction, without the need to perform phase unwrapping. It is robust to the choice of initial solutions and encourages the joint reconstruction of phase imaging applications. Magn Reson Med 80:112-125, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  9. RES: Regularized Stochastic BFGS Algorithm

    NASA Astrophysics Data System (ADS)

    Mokhtari, Aryan; Ribeiro, Alejandro

    2014-12-01

    RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high dimensional problems. Application of second order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients both for the determination of descent directions and for the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
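
    A heavily simplified sketch of the kind of update described above: stochastic gradients drive both the descent direction and the curvature estimate, while a small multiple of the identity keeps the curvature matrix well conditioned. The step size, the regularization constant delta, and the skip rule for the curvature pair are assumptions; the published RES update differs in its details.

      import numpy as np

      def res_like_step(w, B, grad_fn, batch, lr=0.05, delta=1e-3):
          """One regularized stochastic quasi-Newton step.
          w: parameters, B: Hessian approximation, grad_fn(w, batch): stochastic gradient."""
          g = grad_fn(w, batch)
          # Descent direction from the regularized curvature estimate.
          d = np.linalg.solve(B + delta * np.eye(len(w)), g)
          w_new = w - lr * d
          g_new = grad_fn(w_new, batch)            # same sample -> consistent curvature pair
          s, y = w_new - w, g_new - g
          sy = s @ y
          if sy > 1e-10:                           # standard BFGS update, only if curvature is positive
              Bs = B @ s
              B = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / sy
          return w_new, B

      # Typical use: start with B = np.eye(dim), then iterate over mini-batches:
      #   w, B = res_like_step(w, B, grad_fn, batch)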

  10. Variability and Maintenance of Turbulence in the Very Stable Boundary Layer

    NASA Astrophysics Data System (ADS)

    Mahrt, Larry

    2010-04-01

    The relationship of turbulence quantities to mean flow quantities, such as the Richardson number, degenerates substantially for strong stability, at least in those studies that do not place restrictions on minimum turbulence or non-stationarity. This study examines the large variability of the turbulence for very stable conditions by analyzing four months of turbulence data from a site with short grass. Brief comparisons are made with three additional sites, one over short grass on flat terrain and two with tall vegetation in complex terrain. For very stable conditions, any dependence of the turbulence quantities on the mean wind speed or bulk Richardson number becomes masked by large scatter, as found in some previous studies. The large variability of the turbulence quantities is due to random variations and other physical influences not represented by the bulk Richardson number. There is no critical Richardson number above which the turbulence vanishes. For very stable conditions, the record-averaged vertical velocity variance and the drag coefficient increase with the strength of the submeso motions (wave motions, solitary waves, horizontal modes and numerous more complex signatures). The submeso motions are on time scales of minutes and not normally considered part of the mean flow. The generation of turbulence by such unpredictable motions appears to preclude universal similarity theory for predicting the surface stress for very stable conditions. Large variation of the stress direction with respect to the wind direction for the very stable regime is also examined. Needed additional work is noted.

  11. Numerical Differentiation of Noisy, Nonsmooth Data

    DOE PAGES

    Chartrand, Rick

    2011-01-01

    We consider the problem of differentiating a function specified by noisy data. Regularizing the differentiation process avoids the noise amplification of finite-difference methods. We use total-variation regularization, which allows for discontinuous solutions. The resulting simple algorithm accurately differentiates noisy functions, including those which have a discontinuous derivative.
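
    A minimal sketch of the idea, assuming a uniform grid: the derivative u is obtained by minimizing a data-fit term between the numerical antiderivative of u and the data, plus a (smoothed) total-variation penalty that permits jumps in u. The penalty weight, the smoothing parameter eps, and the use of a general-purpose optimizer are illustrative choices.

      import numpy as np
      from scipy.optimize import minimize

      def tv_derivative(f, dx, alpha=0.1, eps=1e-8):
          """Differentiate noisy samples f (uniform spacing dx) by minimizing a
          data-fit term plus a smoothed total-variation penalty on the derivative."""
          target = f - f[0]                                   # antiderivative data
          integrate = lambda u: np.cumsum(u) * dx             # simple rectangle rule

          def objective(u):
              r = integrate(u) - target
              tv = np.sum(np.sqrt(np.diff(u) ** 2 + eps))     # smoothed |u'| for differentiability
              return 0.5 * np.sum(r ** 2) + alpha * tv

          u0 = np.gradient(f, dx)                             # finite-difference initial guess
          return minimize(objective, u0, method="L-BFGS-B").x

      # Example: a noisy absolute-value signal whose derivative is a step function.
      x = np.linspace(-1.0, 1.0, 201)
      f = np.abs(x) + 0.01 * np.random.default_rng(0).normal(size=x.size)
      du = tv_derivative(f, dx=x[1] - x[0], alpha=0.05)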

  12. Using Poisson-regularized inversion of Bremsstrahlung emission to extract full electron energy distribution functions from x-ray pulse-height detector data

    NASA Astrophysics Data System (ADS)

    Swanson, C.; Jandovitz, P.; Cohen, S. A.

    2018-02-01

    We measured Electron Energy Distribution Functions (EEDFs) from below 200 eV to over 8 keV and spanning five orders-of-magnitude in intensity, produced in a low-power, RF-heated, tandem mirror discharge in the PFRC-II apparatus. The EEDF was obtained from the x-ray energy distribution function (XEDF) using a novel Poisson-regularized spectrum inversion algorithm applied to pulse-height spectra that included both Bremsstrahlung and line emissions. The XEDF was measured using a specially calibrated Amptek Silicon Drift Detector (SDD) pulse-height system with 125 eV FWHM at 5.9 keV. The algorithm is found to out-perform current leading x-ray inversion algorithms when the error due to counting statistics is high.

  13. A Regularized Neural Net Approach for Retrieval of Atmospheric and Surface Temperatures with the IASI Instrument

    NASA Technical Reports Server (NTRS)

    Aires, F.; Chedin, A.; Scott, N. A.; Rossow, W. B.; Hansen, James E. (Technical Monitor)

    2001-01-01

    In this paper, a fast atmospheric and surface temperature retrieval algorithm is developed for the high resolution Infrared Atmospheric Sounding Interferometer (IASI) space-borne instrument. This algorithm is constructed on the basis of a neural network technique that has been regularized by introduction of a priori information. The performance of the resulting fast and accurate inverse radiative transfer model is presented for a large diversified dataset of radiosonde atmospheres including rare events. Two configurations are considered: a tropical-airmass specialized scheme and an all-air-masses scheme.

  14. 33 CFR 80.727 - Cape Canaveral, FL to Miami Beach, FL.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 80°09.7′ W.) drawn across St. Lucie Inlet. (e) A line drawn from the seaward extremity of Jupiter Inlet North Jetty to the northeast extremity of the concrete apron on the south side of Jupiter Inlet...

  15. 33 CFR 80.727 - Cape Canaveral, FL to Miami Beach, FL.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 80°09.7′ W.) drawn across St. Lucie Inlet. (e) A line drawn from the seaward extremity of Jupiter Inlet North Jetty to the northeast extremity of the concrete apron on the south side of Jupiter Inlet...

  16. 33 CFR 80.727 - Cape Canaveral, FL to Miami Beach, FL.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 80°09.7′ W.) drawn across St. Lucie Inlet. (e) A line drawn from the seaward extremity of Jupiter Inlet North Jetty to the northeast extremity of the concrete apron on the south side of Jupiter Inlet...

  17. 33 CFR 80.727 - Cape Canaveral, FL to Miami Beach, FL.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 80°09.7′ W.) drawn across St. Lucie Inlet. (e) A line drawn from the seaward extremity of Jupiter Inlet North Jetty to the northeast extremity of the concrete apron on the south side of Jupiter Inlet...

  18. Troubling Practices: Short Responses

    ERIC Educational Resources Information Center

    Anderson, Gary; Simic, Lena; Haley, David; Svendsen, Zoe; Neal, Lucy; Samba, Emelda Ngufor

    2012-01-01

    In this "RiDE" themed edition on environmentalism, some short pieces are chosen where practitioners describe their own specific environmental practices. Zoe Svendsen and Lucy Neal point to the positives in two commissioned works ("The Trashcatchers' Carnival" and "3rd Ring Out"), underlining the importance of…

  19. 33 CFR 80.727 - Cape Canaveral, FL to Miami Beach, FL.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 80°09.7′ W.) drawn across St. Lucie Inlet. (e) A line drawn from the seaward extremity of Jupiter Inlet North Jetty to the northeast extremity of the concrete apron on the south side of Jupiter Inlet...

  20. A Novel Multiobjective Evolutionary Algorithm Based on Regression Analysis

    PubMed Central

    Song, Zhiming; Wang, Maocai; Dai, Guangming; Vasile, Massimiliano

    2015-01-01

    As is known, the Pareto set of a continuous multiobjective optimization problem with m objective functions is a piecewise continuous (m − 1)-dimensional manifold in the decision space under some mild conditions. How to utilize this regularity in the design of multiobjective optimization algorithms has become a research focus. In this paper, based on this regularity, a model-based multiobjective evolutionary algorithm with regression analysis (MMEA-RA) is put forward to solve continuous multiobjective optimization problems with variable linkages. In the algorithm, the optimization problem is modelled as a promising area in the decision space by a probability distribution, and the centroid of the probability distribution is an (m − 1)-dimensional piecewise continuous manifold. The least squares method is used to construct such a model. A selection strategy based on nondominated sorting is used to choose the individuals for the next generation. The new algorithm is tested and compared with NSGA-II and RM-MEDA. The results show that MMEA-RA outperforms RM-MEDA and NSGA-II on the test instances with variable linkages. At the same time, MMEA-RA has higher efficiency than the other two algorithms. A few shortcomings of MMEA-RA have also been identified and discussed in this paper. PMID:25874246

  1. Atlas of Absorption Lines from 0 to 17900 cm-1

    DTIC Science & Technology

    1987-09-01

    Hampton, Virginia. H. M. Pickett: Jet Propulsion Laboratory, Pasadena, California. D. J. Richardson and J. S. Namkung: ST Systems Corporation (STX) ... NH3, HNO3, OH, HF, HCl, HBr, HI, ClO, OCS, H2CO, HOCl, N2, HCN, CH3Cl, H2O2, C2H2, C2H6, PH3, O3 (JPL), O(3P) (JPL), HO2 (JPL), solar CO ... Hanscom AFB, Massachusetts. H. M. Pickett: Jet Propulsion Laboratory, Pasadena, California. D. J. Richardson and J. S. Namkung: ST Systems Corporation.

  2. Transition Metal Switchable Mirror

    ScienceCinema

    None

    2017-12-29

    The switchable-mirrors technology was developed by Tom Richardson and Jonathan Slack of Berkeley Lab's Environmental Energy Technologies Division. By using transition metals rather than the rare earth metals used in the first metal-hydride switchable mirrors, Richardson and Slack were able to lower the cost and simplify the manufacturing process. Energy performance is improved as well, because the new windows can reflect or transmit both visible and infrared light. Besides windows for offices and homes, possible applications include automobile sunroofs, signs and displays, aircraft windows, and spacecraft. More information at: http://windows.lbl.gov/materials/chromogenics/default.htm

  3. Noisy Spins and the Richardson-Gaudin Model

    NASA Astrophysics Data System (ADS)

    Rowlands, Daniel A.; Lamacraft, Austen

    2018-03-01

    We study a system of spins (qubits) coupled to a common noisy environment, each precessing at its own frequency. The correlated noise experienced by the spins implies long-lived correlations that relax only due to the differing frequencies. We use a mapping to a non-Hermitian integrable Richardson-Gaudin model to find the exact spectrum of the quantum master equation in the high-temperature limit and, hence, determine the decay rate. Our solution can be used to evaluate the effect of inhomogeneous splittings on a system of qubits coupled to a common bath.

  4. Interpolation Method Needed for Numerical Uncertainty Analysis of Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Groves, Curtis; Ilie, Marcel; Schallhorn, Paul

    2014-01-01

    Using Computational Fluid Dynamics (CFD) to predict a flow field is an approximation to the exact problem and uncertainties exist. There is a method to approximate the errors in CFD via Richardson's extrapolation. This method is based on progressive grid refinement. To estimate the errors in an unstructured grid, the analyst must interpolate between at least three grids. This paper describes a study to find an appropriate interpolation scheme that can be used in Richardson's extrapolation or other uncertainty methods to approximate errors.
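
    A short sketch of the Richardson extrapolation step the study builds on, assuming scalar solutions from three grids refined by a constant ratio r: the observed order p is estimated first, then the grid-converged value. Variable names are illustrative.

      import math

      def richardson_extrapolate(f_fine, f_med, f_coarse, r=2.0):
          """Estimate the observed order p and the grid-converged value from solutions
          on three grids refined by a constant ratio r (coarse -> medium -> fine)."""
          p = math.log(abs(f_coarse - f_med) / abs(f_med - f_fine)) / math.log(r)
          f_exact = f_fine + (f_fine - f_med) / (r ** p - 1.0)
          return f_exact, p

      # Example with solutions from a second-order scheme:
      f_exact, p = richardson_extrapolate(1.0005, 1.0020, 1.0080)
      # p is approximately 2, f_exact is approximately 1.0000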

  5. Transition Metal Switchable Mirror

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2009-08-21

    The switchable-mirrors technology was developed by Tom Richardson and Jonathan Slack of Berkeley Lab's Environmental Energy Technologies Division. By using transition metals rather than the rare earth metals used in the first metal-hydride switchable mirrors, Richardson and Slack were able to lower the cost and simplify the manufacturing process. Energy performance is improved as well, because the new windows can reflect or transmit both visible and infrared light. Besides windows for offices and homes, possible applications include automobile sunroofs, signs and displays, aircraft windows, and spacecraft. More information at: http://windows.lbl.gov/materials/chromogenics/default.htm

  6. SU-E-T-299: Small Fields Profiles Correction Through Detectors Spatial Response Functions and Field Size Dependence Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Filipuzzi, M; Garrigo, E; Venencia, C

    2014-06-01

    Purpose: To calculate the spatial response function of various radiation detectors, to evaluate its dependence on the field size and to analyze small-field profile corrections by deconvolution techniques. Methods: Crossline profiles were measured on a Novalis Tx 6MV beam with an HDMLC. The configuration setup was SSD=100cm and depth=5cm. Five fields were studied (200×200 mm2, 100×100 mm2, 20×20 mm2, 10×10 mm2 and 5×5 mm2) and measurements were made with passive detectors (EBT3 radiochromic films and TLD700 thermoluminescent detectors), ionization chambers (PTW30013, PTW31003, CC04 and PTW31016) and diodes (PTW60012 and IBA SFD). The results of the passive detectors were adopted as the actual beam profile. To calculate the detector kernels, modeled by Gaussian functions, an iterative process based on a least-squares criterion was used. The deconvolutions of the measured profiles were calculated with the Richardson-Lucy method. Results: The profiles of the passive detectors corresponded with a difference in the penumbra of less than 0.1 mm. Both diodes resolve the profiles with an overestimation of the penumbra smaller than 0.2 mm. For the other detectors, response functions were calculated and resulted in Gaussian functions with a standard deviation approximately equal to the radius of the detector under study (with a variation of less than 3%). The corrected profiles resolve the penumbra with less than 1% error. Major discrepancies were observed for cases in extreme conditions (PTW31003 and 5×5 mm2 field size). Conclusion: This work concludes that the response function of a radiation detector is independent of the field size, even for small radiation beams. The profile correction, using deconvolution techniques and response functions with standard deviation equal to the radius of the detector, gives penumbra values with less than 1% difference from the real profile. The implementation of this technique allows estimating the real profile, free from the effects of the detector used for the acquisition.
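
    A minimal 1D sketch of the Richardson-Lucy correction described above: the measured profile is iteratively deconvolved with a Gaussian kernel whose standard deviation plays the role of the detector radius. Grid spacing, kernel width, iteration count, and the synthetic test profile are illustrative assumptions.

      import numpy as np

      def richardson_lucy_1d(measured, kernel, n_iter=50, eps=1e-12):
          """Richardson-Lucy iteration: u <- u * correlate(measured / convolve(u, k), k)."""
          kernel = kernel / kernel.sum()
          kernel_mirror = kernel[::-1]
          u = np.full_like(measured, measured.mean())
          for _ in range(n_iter):
              blurred = np.convolve(u, kernel, mode="same")
              ratio = measured / np.maximum(blurred, eps)
              u = u * np.convolve(ratio, kernel_mirror, mode="same")
          return u

      # Synthetic example: a 20 mm field measured with a detector response of sigma = 2 mm.
      x = np.arange(-30.0, 30.5, 0.5)                                   # position (mm)
      kernel = np.exp(-0.5 * (np.arange(-10.0, 10.5, 0.5) / 2.0) ** 2)  # Gaussian response
      true_profile = ((x > -10) & (x < 10)).astype(float)
      measured = np.convolve(true_profile, kernel / kernel.sum(), mode="same")
      restored = richardson_lucy_1d(measured, kernel, n_iter=100)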

  7. Deconvolution for three-dimensional acoustic source identification based on spherical harmonics beamforming

    NASA Astrophysics Data System (ADS)

    Chu, Zhigang; Yang, Yang; He, Yansong

    2015-05-01

    Spherical Harmonics Beamforming (SHB) with solid spherical arrays has become a particularly attractive tool for identifying acoustic sources in cabin environments. However, it presents some intrinsic limitations, specifically poor spatial resolution and severe sidelobe contaminations. This paper focuses on overcoming these limitations effectively by deconvolution. First and foremost, a new formulation is proposed, which expresses SHB's output as a convolution of the true source strength distribution and the point spread function (PSF) defined as SHB's response to a unit-strength point source. Additionally, the typical deconvolution methods initially suggested for planar arrays, deconvolution approach for the mapping of acoustic sources (DAMAS), nonnegative least-squares (NNLS), Richardson-Lucy (RL) and CLEAN, are successfully adapted to SHB and are capable of producing highly resolved and deblurred maps. Finally, the merits of the deconvolution methods are validated and the relationships of source strength and pressure contribution reconstructed by the deconvolution methods vs. focus distance are explored both with computer simulations and experimentally. Several interesting results have emerged from this study: (1) compared with SHB, DAMAS, NNLS, RL and CLEAN all can not only improve the spatial resolution dramatically but also reduce or even eliminate the sidelobes effectively, allowing clear and unambiguous identification of a single source or incoherent sources. (2) The availability of RL for coherent sources is highest, followed by DAMAS and NNLS, and that of CLEAN is lowest due to its failure in suppressing sidelobes. (3) Whether or not the real distance from the source to the array center equals the assumed one that is referred to as focus distance, the previous two results hold. (4) The true source strength can be recovered by dividing the reconstructed one by a coefficient that is the square of the focus distance divided by the real distance from the source to the array center. (5) The reconstructed pressure contribution is almost unaffected by the focus distance, always approximating the true one. This study will be of great significance to the accurate localization and quantification of acoustic sources in cabin environments.

  8. DECONVOLUTION OF IMAGES FROM BLAST 2005: INSIGHT INTO THE K3-50 AND IC 5146 STAR-FORMING REGIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roy, Arabindo; Netterfield, Calvin B.; Ade, Peter A. R.

    2011-04-01

    We present an implementation of the iterative flux-conserving Lucy-Richardson (L-R) deconvolution method of image restoration for maps produced by the Balloon-borne Large Aperture Submillimeter Telescope (BLAST). Compared to the direct Fourier transform method of deconvolution, the L-R operation restores images with better-controlled background noise and increases source detectability. Intermediate iterated images are useful for studying extended diffuse structures, while the later iterations truly enhance point sources to near the designed diffraction limit of the telescope. The L-R method of deconvolution is efficient in resolving compact sources in crowded regions while simultaneously conserving their respective flux densities. We have analyzed its performance and convergence extensively through simulations and cross-correlations of the deconvolved images with available high-resolution maps. We present new science results from two BLAST surveys, in the Galactic regions K3-50 and IC 5146, further demonstrating the benefits of performing this deconvolution. We have resolved three clumps within a radius of 4.5 arcmin inside the star-forming molecular cloud containing K3-50. Combining the well-resolved dust emission map with available multi-wavelength data, we have constrained the spectral energy distributions (SEDs) of five clumps to obtain masses (M), bolometric luminosities (L), and dust temperatures (T). The L-M diagram has been used as a diagnostic tool to estimate the evolutionary stages of the clumps. There are close relationships between dust continuum emission and both 21 cm radio continuum and 12CO molecular line emission. The restored extended large-scale structures in the Northern Streamer of IC 5146 have a strong spatial correlation with both SCUBA and high-resolution extinction images. A dust temperature of 12 K has been obtained for the central filament. We report physical properties of ten compact sources, including six associated protostars, by fitting SEDs to multi-wavelength data. All of these compact sources are still quite cold (typical temperature below ~16 K) and are above the critical Bonnor-Ebert mass. They have associated low-power young stellar objects. Further evidence for starless clumps has also been found in the IC 5146 region.

  9. Deconvolution of Images from BLAST 2005: Insight into the K3-50 and IC 5146 Star-forming Regions

    NASA Astrophysics Data System (ADS)

    Roy, Arabindo; Ade, Peter A. R.; Bock, James J.; Brunt, Christopher M.; Chapin, Edward L.; Devlin, Mark J.; Dicker, Simon R.; France, Kevin; Gibb, Andrew G.; Griffin, Matthew; Gundersen, Joshua O.; Halpern, Mark; Hargrave, Peter C.; Hughes, David H.; Klein, Jeff; Marsden, Gaelen; Martin, Peter G.; Mauskopf, Philip; Netterfield, Calvin B.; Olmi, Luca; Patanchon, Guillaume; Rex, Marie; Scott, Douglas; Semisch, Christopher; Truch, Matthew D. P.; Tucker, Carole; Tucker, Gregory S.; Viero, Marco P.; Wiebe, Donald V.

    2011-04-01

    We present an implementation of the iterative flux-conserving Lucy-Richardson (L-R) deconvolution method of image restoration for maps produced by the Balloon-borne Large Aperture Submillimeter Telescope (BLAST). Compared to the direct Fourier transform method of deconvolution, the L-R operation restores images with better-controlled background noise and increases source detectability. Intermediate iterated images are useful for studying extended diffuse structures, while the later iterations truly enhance point sources to near the designed diffraction limit of the telescope. The L-R method of deconvolution is efficient in resolving compact sources in crowded regions while simultaneously conserving their respective flux densities. We have analyzed its performance and convergence extensively through simulations and cross-correlations of the deconvolved images with available high-resolution maps. We present new science results from two BLAST surveys, in the Galactic regions K3-50 and IC 5146, further demonstrating the benefits of performing this deconvolution. We have resolved three clumps within a radius of 4.5 arcmin inside the star-forming molecular cloud containing K3-50. Combining the well-resolved dust emission map with available multi-wavelength data, we have constrained the spectral energy distributions (SEDs) of five clumps to obtain masses (M), bolometric luminosities (L), and dust temperatures (T). The L-M diagram has been used as a diagnostic tool to estimate the evolutionary stages of the clumps. There are close relationships between dust continuum emission and both 21 cm radio continuum and 12CO molecular line emission. The restored extended large-scale structures in the Northern Streamer of IC 5146 have a strong spatial correlation with both SCUBA and high-resolution extinction images. A dust temperature of 12 K has been obtained for the central filament. We report physical properties of ten compact sources, including six associated protostars, by fitting SEDs to multi-wavelength data. All of these compact sources are still quite cold (typical temperature below ~16 K) and are above the critical Bonnor-Ebert mass. They have associated low-power young stellar objects. Further evidence for starless clumps has also been found in the IC 5146 region.

  10. A randomized controlled trial of a diagnostic algorithm for symptoms of uncomplicated cystitis at an out-of-hours service

    PubMed Central

    Grude, Nils; Lindbaek, Morten

    2015-01-01

    Objective. To compare the clinical outcome of patients presenting with symptoms of uncomplicated cystitis who were seen by a doctor, with patients who were given treatment following a diagnostic algorithm. Design. Randomized controlled trial. Setting. Out-of-hours service, Oslo, Norway. Intervention. Women with typical symptoms of uncomplicated cystitis were included in the trial in the time period September 2010–November 2011. They were randomized into two groups. One group received standard treatment according to the diagnostic algorithm, the other group received treatment after a regular consultation by a doctor. Subjects. Women (n = 441) aged 16–55 years. Mean age in both groups 27 years. Main outcome measures. Number of days until symptomatic resolution. Results. No significant differences were found between the groups in the basic patient demographics, severity of symptoms, or percentage of urine samples with single culture growth. A median of three days until symptomatic resolution was found in both groups. By day four 79% in the algorithm group and 72% in the regular consultation group were free of symptoms (p = 0.09). The number of patients who contacted a doctor again in the follow-up period and received alternative antibiotic treatment was insignificantly higher (p = 0.08) after regular consultation than after treatment according to the diagnostic algorithm. There were no cases of severe pyelonephritis or hospital admissions during the follow-up period. Conclusion. Using a diagnostic algorithm is a safe and efficient method for treating women with symptoms of uncomplicated cystitis at an out-of-hours service. This simplification of treatment strategy can lead to a more rational use of consultation time and a stricter adherence to National Antibiotic Guidelines for a common disorder. PMID:25961367

  11. A randomized controlled trial of a diagnostic algorithm for symptoms of uncomplicated cystitis at an out-of-hours service.

    PubMed

    Bollestad, Marianne; Grude, Nils; Lindbaek, Morten

    2015-06-01

    To compare the clinical outcome of patients presenting with symptoms of uncomplicated cystitis who were seen by a doctor, with patients who were given treatment following a diagnostic algorithm. Randomized controlled trial. Out-of-hours service, Oslo, Norway. Women with typical symptoms of uncomplicated cystitis were included in the trial in the time period September 2010-November 2011. They were randomized into two groups. One group received standard treatment according to the diagnostic algorithm, the other group received treatment after a regular consultation by a doctor. Women (n = 441) aged 16-55 years. Mean age in both groups 27 years. Number of days until symptomatic resolution. No significant differences were found between the groups in the basic patient demographics, severity of symptoms, or percentage of urine samples with single culture growth. A median of three days until symptomatic resolution was found in both groups. By day four 79% in the algorithm group and 72% in the regular consultation group were free of symptoms (p = 0.09). The number of patients who contacted a doctor again in the follow-up period and received alternative antibiotic treatment was insignificantly higher (p = 0.08) after regular consultation than after treatment according to the diagnostic algorithm. There were no cases of severe pyelonephritis or hospital admissions during the follow-up period. Using a diagnostic algorithm is a safe and efficient method for treating women with symptoms of uncomplicated cystitis at an out-of-hours service. This simplification of treatment strategy can lead to a more rational use of consultation time and a stricter adherence to National Antibiotic Guidelines for a common disorder.

  12. Ill-posed problem and regularization in reconstruction of radiobiological parameters from serial tumor imaging data

    NASA Astrophysics Data System (ADS)

    Chvetsov, Alevei V.; Sandison, George A.; Schwartz, Jeffrey L.; Rengan, Ramesh

    2015-11-01

    The main objective of this article is to improve the stability of reconstruction algorithms for estimation of radiobiological parameters using serial tumor imaging data acquired during radiation therapy. Serial images of tumor response to radiation therapy represent a complex summation of several exponential processes, such as treatment-induced cell inactivation, tumor growth rates, and the rate of cell loss. Accurate assessment of treatment response would require separation of these processes because they define radiobiological determinants of treatment response and, correspondingly, tumor control probability. However, the estimation of radiobiological parameters using imaging data can be considered an inverse ill-posed problem because a sum of several exponentials produces a Fredholm integral equation of the first kind, which is ill posed. Therefore, the stability of reconstruction of radiobiological parameters presents a problem even for the simplest models of tumor response. To study stability of the parameter reconstruction problem, we used a set of serial CT imaging data for head and neck cancer and the simplest case of a two-level cell population model of tumor response. Inverse reconstruction was performed using a simulated annealing algorithm to minimize a least-squares objective function. Results show that the reconstructed values of cell surviving fractions and cell doubling time exhibit significant nonphysical fluctuations if no stabilization algorithms are applied. However, after applying a stabilization algorithm based on variational regularization, the reconstruction produces statistical distributions for survival fractions and doubling time that are comparable to published in vitro data. This algorithm is an advance over our previous work where only cell surviving fractions were reconstructed. We conclude that variational regularization allows for an increase in the number of free parameters in our model, which enables development of more-advanced parameter reconstruction algorithms.

  13. 77 FR 6980 - Final Flood Elevation Determinations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-10

    [Flood elevation table excerpt: columns list the flooding source, communities affected, and elevation in feet above ground or in meters (MSL), rounded to the nearest 0.1 meter; affected areas include the Unincorporated Areas of Nowata County and St. Lucie County.]

  14. The Australopithecus Afarensis (Lucy) of Higher Education.

    ERIC Educational Resources Information Center

    Gamble, John King

    1999-01-01

    Uses a fictitious character and story to express doubts about the use of business and marketing principles in American higher education. Asserts that higher education is profoundly different from other institutions, and that colleges and universities should be shielded from the vagaries of the market. (CAK)

  15. Life cycle assessment needs predictive spatial modelling for biodiversity and ecosystem services

    PubMed Central

    Chaplin-Kramer, Rebecca; Sim, Sarah; Hamel, Perrine; Bryant, Benjamin; Noe, Ryan; Mueller, Carina; Rigarlsford, Giles; Kulak, Michal; Kowal, Virginia; Sharp, Richard; Clavreul, Julie; Price, Edward; Polasky, Stephen; Ruckelshaus, Mary; Daily, Gretchen

    2017-01-01

    International corporations in an increasingly globalized economy exert a major influence on the planet's land use and resources through their product design and material sourcing decisions. Many companies use life cycle assessment (LCA) to evaluate their sustainability, yet commonly-used LCA methodologies lack the spatial resolution and predictive ecological information to reveal key impacts on climate, water and biodiversity. We present advances for LCA that integrate spatially explicit modelling of land change and ecosystem services in a Land-Use Change Improved (LUCI)-LCA. Comparing increased demand for bioplastics derived from two alternative feedstock-location scenarios for maize and sugarcane, we find that the LUCI-LCA approach yields results opposite to those of standard LCA for greenhouse gas emissions and water consumption, and of different magnitudes for soil erosion and biodiversity. This approach highlights the importance of including information about where and how land-use change and related impacts will occur in supply chain and innovation decisions. PMID:28429710

  16. Mudslide and/or animal attack are more plausible causes and circumstances of death for AL 288 ('Lucy'): A forensic anthropology analysis.

    PubMed

    Charlier, Phillippe; Coppens, Yves; Augias, Anaïs; Deo, Saudamini; Froesch, Philippe; Huynh-Charlier, Isabelle

    2018-01-01

    Following a global morphological and micro-CT scan examination of the original and cast of the skeleton of Australopithecus afarensis AL 288 ('Lucy'), Kappelman et al. have recently proposed a diagnosis of a fall from a significant height (a tree) as a cause of her death. According to topographical data from the discovery site, complete re-examination of a high-quality resin cast of the whole skeleton and forensic experience, we propose that the physical process of a vertical deceleration cannot be the only cause for her observed injuries. Two different factors were involved: rolling and multiple impacts in the context of a mudslide and an animal attack with bite marks, multi-focal fractures and violent movement of the body. It is important to consider a differential diagnosis of the observed fossil lesions because environmental factors should not be excluded in this ancient archaeological context as with any modern forensic anthropological case.

  17. Life cycle assessment needs predictive spatial modelling for biodiversity and ecosystem services

    NASA Astrophysics Data System (ADS)

    Chaplin-Kramer, Rebecca; Sim, Sarah; Hamel, Perrine; Bryant, Benjamin; Noe, Ryan; Mueller, Carina; Rigarlsford, Giles; Kulak, Michal; Kowal, Virginia; Sharp, Richard; Clavreul, Julie; Price, Edward; Polasky, Stephen; Ruckelshaus, Mary; Daily, Gretchen

    2017-04-01

    International corporations in an increasingly globalized economy exert a major influence on the planet's land use and resources through their product design and material sourcing decisions. Many companies use life cycle assessment (LCA) to evaluate their sustainability, yet commonly-used LCA methodologies lack the spatial resolution and predictive ecological information to reveal key impacts on climate, water and biodiversity. We present advances for LCA that integrate spatially explicit modelling of land change and ecosystem services in a Land-Use Change Improved (LUCI)-LCA. Comparing increased demand for bioplastics derived from two alternative feedstock-location scenarios for maize and sugarcane, we find that the LUCI-LCA approach yields results opposite to those of standard LCA for greenhouse gas emissions and water consumption, and of different magnitudes for soil erosion and biodiversity. This approach highlights the importance of including information about where and how land-use change and related impacts will occur in supply chain and innovation decisions.

  18. Fourier analysis algorithm for the posterior corneal keratometric data: clinical usefulness in keratoconus.

    PubMed

    Sideroudi, Haris; Labiris, Georgios; Georgantzoglou, Kimon; Ntonti, Panagiota; Siganos, Charalambos; Kozobolis, Vassilios

    2017-07-01

    To develop an algorithm for the Fourier analysis of posterior corneal videokeratographic data and to evaluate the derived parameters in the diagnosis of Subclinical Keratoconus (SKC) and Keratoconus (KC). This was a cross-sectional, observational study that took place in the Eye Institute of Thrace, Democritus University, Greece. Eighty eyes formed the KC group, 55 eyes formed the SKC group, while 50 normal eyes populated the control group. A self-developed algorithm in Visual Basic for Microsoft Excel performed a Fourier series harmonic analysis for the posterior corneal sagittal curvature data. The algorithm decomposed the obtained curvatures into a spherical component, regular astigmatism, asymmetry and higher order irregularities for the averaged central 4 mm and for each individual ring separately (1, 2, 3 and 4 mm). The obtained values were evaluated for their diagnostic capacity using receiver operating characteristic (ROC) curves. Logistic regression was attempted for the identification of a combined diagnostic model. Significant differences were detected in regular astigmatism, asymmetry and higher order irregularities among groups. For the SKC group, the parameters with high diagnostic ability (AUC > 90%) were the higher order irregularities, the asymmetry and the regular astigmatism, mainly in the corneal periphery. Higher predictive accuracy was identified using diagnostic models that combined the asymmetry, regular astigmatism and higher order irregularities in the averaged 3 and 4 mm areas (AUC: 98.4%, Sensitivity: 91.7% and Specificity: 100%). Fourier decomposition of posterior keratometric data provides parameters with high accuracy in differentiating SKC from normal corneas and should be included in the prompt diagnosis of KC. © 2017 The Authors Ophthalmic & Physiological Optics © 2017 The College of Optometrists.
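
    A hedged sketch of the decomposition described above for a single ring of curvature samples taken at equally spaced meridians: the mean gives the spherical component, the first harmonic the asymmetry, the second harmonic the regular astigmatism, and the remaining harmonics the higher-order irregularities. The input format and amplitude conventions are assumptions, not the authors' Excel implementation.

      import numpy as np

      def ring_fourier_components(curvatures):
          """Decompose curvature samples taken at equally spaced meridians of one ring.
          Returns: spherical component, asymmetry, regular astigmatism, and the
          root-sum-square amplitude of all higher-order harmonics."""
          k = np.asarray(curvatures, dtype=float)
          c = np.fft.rfft(k) / k.size
          spherical = c[0].real                   # mean curvature of the ring
          asymmetry = 2 * np.abs(c[1])            # 1st harmonic (decentration/tilt-like)
          astigmatism = 2 * np.abs(c[2])          # 2nd harmonic (regular astigmatism)
          higher = np.sqrt(np.sum((2 * np.abs(c[3:])) ** 2))
          return spherical, asymmetry, astigmatism, higher

      # Example: 256 samples around one ring (diopters) with a cos(2*theta) component.
      theta = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
      k = 43.0 + 1.0 * np.cos(2 * theta) + 0.1 * np.cos(3 * theta)
      print(ring_fourier_components(k))           # approximately (43.0, 0.0, 1.0, 0.1)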

  19. Regularization Paths for Conditional Logistic Regression: The clogitL1 Package.

    PubMed

    Reid, Stephen; Tibshirani, Rob

    2014-07-01

    We apply the cyclic coordinate descent algorithm of Friedman, Hastie, and Tibshirani (2010) to the fitting of a conditional logistic regression model with lasso (ℓ1) and elastic net penalties. The sequential strong rules of Tibshirani, Bien, Hastie, Friedman, Taylor, Simon, and Tibshirani (2012) are also used in the algorithm and it is shown that these offer a considerable speed up over the standard coordinate descent algorithm with warm starts. Once implemented, the algorithm is used in simulation studies to compare the variable selection and prediction performance of the conditional logistic regression model against that of its unconditional (standard) counterpart. We find that the conditional model performs admirably on datasets drawn from a suitable conditional distribution, outperforming its unconditional counterpart at variable selection. The conditional model is also fit to a small real world dataset, demonstrating how we obtain regularization paths for the parameters of the model and how we apply cross validation for this method where natural unconditional prediction rules are hard to come by.

  20. Regularization Paths for Conditional Logistic Regression: The clogitL1 Package

    PubMed Central

    Reid, Stephen; Tibshirani, Rob

    2014-01-01

    We apply the cyclic coordinate descent algorithm of Friedman, Hastie, and Tibshirani (2010) to the fitting of a conditional logistic regression model with lasso (ℓ1) and elastic net penalties. The sequential strong rules of Tibshirani, Bien, Hastie, Friedman, Taylor, Simon, and Tibshirani (2012) are also used in the algorithm and it is shown that these offer a considerable speed up over the standard coordinate descent algorithm with warm starts. Once implemented, the algorithm is used in simulation studies to compare the variable selection and prediction performance of the conditional logistic regression model against that of its unconditional (standard) counterpart. We find that the conditional model performs admirably on datasets drawn from a suitable conditional distribution, outperforming its unconditional counterpart at variable selection. The conditional model is also fit to a small real world dataset, demonstrating how we obtain regularization paths for the parameters of the model and how we apply cross validation for this method where natural unconditional prediction rules are hard to come by. PMID:26257587

  1. Efficient Delaunay Tessellation through K-D Tree Decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morozov, Dmitriy; Peterka, Tom

    Delaunay tessellations are fundamental data structures in computational geometry. They are important in data analysis, where they can represent the geometry of a point set or approximate its density. The algorithms for computing these tessellations at scale perform poorly when the input data is unbalanced. We investigate the use of k-d trees to evenly distribute points among processes and compare two strategies for picking split points between domain regions. Because resulting point distributions no longer satisfy the assumptions of existing parallel Delaunay algorithms, we develop a new parallel algorithm that adapts to its input and prove its correctness. We evaluate the new algorithm using two late-stage cosmology datasets. The new running times are up to 50 times faster using the k-d tree compared with regular grid decomposition. Moreover, in the unbalanced data sets, decomposing the domain into a k-d tree is up to five times faster than decomposing it into a regular grid.
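
    A brief, self-contained sketch of the load-balancing idea: points are split at the median of one coordinate axis, cycling through the axes, so every block ends up with nearly the same number of points even when the input is strongly clustered. The recursion depth and in-memory representation are illustrative; the paper's distributed, out-of-core algorithm is considerably more involved.

      import numpy as np

      def kd_partition(points, depth=0, max_depth=3):
          """Recursively split a point set at the median of one coordinate axis,
          cycling through axes; returns a list of roughly equal-sized leaf blocks."""
          if depth == max_depth or len(points) <= 1:
              return [points]
          axis = depth % points.shape[1]
          order = np.argsort(points[:, axis])
          mid = len(points) // 2
          left, right = points[order[:mid]], points[order[mid:]]
          return kd_partition(left, depth + 1, max_depth) + \
                 kd_partition(right, depth + 1, max_depth)

      # Eight leaves from a clustered (unbalanced) 3D point set:
      pts = np.random.default_rng(0).lognormal(size=(100000, 3))
      blocks = kd_partition(pts, max_depth=3)
      print([len(b) for b in blocks])   # each block holds about 12500 points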

  2. A deconvolution extraction method for 2D multi-object fibre spectroscopy based on the regularized least-squares QR-factorization algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Jian; Yin, Qian; Guo, Ping; Luo, A.-li

    2014-09-01

    This paper presents an efficient method for the extraction of astronomical spectra from two-dimensional (2D) multifibre spectrographs based on the regularized least-squares QR-factorization (LSQR) algorithm. We address two issues: we propose a modified Gaussian point spread function (PSF) for modelling the 2D PSF from multi-emission-line gas-discharge lamp images (arc images), and we develop an efficient deconvolution method to extract spectra in real circumstances. The proposed modified 2D Gaussian PSF model can fit various types of 2D PSFs, including different radial distortion angles and ellipticities. We adopt the regularized LSQR algorithm to solve the sparse linear equations constructed from the sparse convolution matrix, which we designate the deconvolution spectrum extraction method. Furthermore, we implement a parallelized LSQR algorithm based on graphics processing unit programming in the Compute Unified Device Architecture to accelerate the computational processing. Experimental results illustrate that the proposed extraction method can greatly reduce the computational cost and memory use of the deconvolution method and, consequently, increase its efficiency and practicability. In addition, the proposed extraction method has a stronger noise tolerance than other methods, such as the boxcar (aperture) extraction and profile extraction methods. Finally, we present an analysis of the sensitivity of the extraction results to the radius and full width at half-maximum of the 2D PSF.
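
    A small 1D sketch of the regularized-LSQR idea: the blurring is written as a sparse convolution matrix and the underlying signal is recovered with SciPy's LSQR solver, whose damp parameter adds a Tikhonov-type penalty. The toy PSF, signal, and damping value are assumptions; the paper works with a 2D modified-Gaussian PSF and a GPU-parallel LSQR.

      import numpy as np
      from scipy.sparse import lil_matrix, csr_matrix
      from scipy.sparse.linalg import lsqr

      def build_convolution_matrix(psf, n):
          """Sparse matrix whose rows apply a short 1D PSF to an unknown signal of length n."""
          half = len(psf) // 2
          A = lil_matrix((n, n))
          for i in range(n):
              for j, w in enumerate(psf):
                  col = i + j - half
                  if 0 <= col < n:
                      A[i, col] = w
          return csr_matrix(A)

      n = 500
      psf = np.exp(-0.5 * (np.arange(-6, 7) / 2.0) ** 2)
      psf /= psf.sum()
      A = build_convolution_matrix(psf, n)

      truth = np.zeros(n); truth[100] = 5.0; truth[250:260] = 2.0
      observed = A @ truth + 0.01 * np.random.default_rng(0).normal(size=n)

      # damp > 0 solves min ||A x - b||^2 + damp^2 ||x||^2 (regularized LSQR).
      x_rec = lsqr(A, observed, damp=0.05)[0]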

  3. Some effects of horizontal discretization on linear baroclinic and symmetric instabilities

    NASA Astrophysics Data System (ADS)

    Barham, William; Bachman, Scott; Grooms, Ian

    2018-05-01

    The effects of horizontal discretization on linear baroclinic and symmetric instabilities are investigated by analyzing the behavior of the hydrostatic Eady problem in ocean models on the B and C grids. On the C grid a spurious baroclinic instability appears at small wavelengths. This instability does not disappear as the grid scale decreases; instead, it simply moves to smaller horizontal scales. The peak growth rate of the spurious instability is independent of the grid scale as the latter decreases. It is equal to cf/√Ri, where Ri is the balanced Richardson number, f is the Coriolis parameter, and c is a nondimensional constant that depends on the Richardson number. As the Richardson number increases, c increases towards an upper bound of approximately 1/2; for large Richardson numbers the spurious instability is faster than the Eady instability. To suppress the spurious instability it is recommended to use fourth-order centered tracer advection along with biharmonic viscosity and diffusion with coefficients (Δx)^4 f/(32√Ri) or larger, where Δx is the grid scale. On the B grid, the growth rates of baroclinic and symmetric instabilities are too small, and converge upwards towards the correct values as the grid scale decreases; no spurious instabilities are observed. In B grid models at eddy-permitting resolution, the reduced growth rate of baroclinic instability may contribute to partially-resolved eddies being too weak. On the C grid the growth rate of symmetric instability is better (larger) than on the B grid, and converges upwards towards the correct value as the grid scale decreases.
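
    A tiny arithmetic sketch of the recommended lower bound on the biharmonic coefficient quoted above, (Δx)^4 f/(32√Ri), evaluated for illustrative mid-latitude values.

      import math

      def biharmonic_coefficient(dx, f=1.0e-4, Ri=4.0):
          """Minimum recommended biharmonic viscosity/diffusivity (m^4/s) to damp the
          spurious C-grid baroclinic instability: (dx^4 * f) / (32 * sqrt(Ri))."""
          return dx ** 4 * f / (32.0 * math.sqrt(Ri))

      print(biharmonic_coefficient(dx=1.0e3))   # about 1.6e6 m^4/s for a 1 km grid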

  4. The Cluster AgeS Experiment (CASE). Detecting Aperiodic Photometric Variability with the Friends of Friends Algorithm

    NASA Astrophysics Data System (ADS)

    Rozyczka, M.; Narloch, W.; Pietrukowicz, P.; Thompson, I. B.; Pych, W.; Poleski, R.

    2018-03-01

    We adapt the friends of friends algorithm to the analysis of light curves, and show that it can be successfully applied to searches for transient phenomena in large photometric databases. As a test case we search OGLE-III light curves for known dwarf novae. A single combination of control parameters allows us to narrow the search to 1% of the data while reaching a ≈90% detection efficiency. A search involving ≈2% of the data and three combinations of control parameters can be significantly more effective - in our case a 100% efficiency is reached. The method can also quite efficiently detect semi-regular variability. In particular, 28 new semi-regular variables have been found in the field of the globular cluster M22, which was examined earlier with the help of periodicity-searching algorithms.

  5. Using Poisson-regularized inversion of Bremsstrahlung emission to extract full electron energy distribution functions from x-ray pulse-height detector data

    DOE PAGES

    Swanson, C.; Jandovitz, P.; Cohen, S. A.

    2018-02-27

    We measured Electron Energy Distribution Functions (EEDFs) from below 200 eV to over 8 keV and spanning five orders-of-magnitude in intensity, produced in a low-power, RF-heated, tandem mirror discharge in the PFRC-II apparatus. The EEDF was obtained from the x-ray energy distribution function (XEDF) using a novel Poisson-regularized spectrum inversion algorithm applied to pulse-height spectra that included both Bremsstrahlung and line emissions. The XEDF was measured using a specially calibrated Amptek Silicon Drift Detector (SDD) pulse-height system with 125 eV FWHM at 5.9 keV. Finally, the algorithm is found to out-perform current leading x-ray inversion algorithms when the error due to counting statistics is high.

  6. Using Poisson-regularized inversion of Bremsstrahlung emission to extract full electron energy distribution functions from x-ray pulse-height detector data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swanson, C.; Jandovitz, P.; Cohen, S. A.

    We measured Electron Energy Distribution Functions (EEDFs) from below 200 eV to over 8 keV and spanning five orders-of-magnitude in intensity, produced in a low-power, RF-heated, tandem mirror discharge in the PFRC-II apparatus. The EEDF was obtained from the x-ray energy distribution function (XEDF) using a novel Poisson-regularized spectrum inversion algorithm applied to pulse-height spectra that included both Bremsstrahlung and line emissions. The XEDF was measured using a specially calibrated Amptek Silicon Drift Detector (SDD) pulse-height system with 125 eV FWHM at 5.9 keV. Finally, the algorithm is found to out-perform current leading x-ray inversion algorithms when the error due to counting statistics is high.

  7. A MAP blind image deconvolution algorithm with bandwidth over-constrained

    NASA Astrophysics Data System (ADS)

    Ren, Zhilei; Liu, Jin; Liang, Yonghui; He, Yulong

    2018-03-01

    We demonstrate a maximum a posteriori (MAP) blind image deconvolution algorithm with a bandwidth over-constraint and total variation (TV) regularization to recover a clear image from AO-corrected images. The point spread functions (PSFs) are estimated with their bandwidth constrained to be less than the cutoff frequency of the optical system. The algorithm performs well in avoiding noise magnification. Its performance is demonstrated on simulated data.

  8. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Qiaofeng; Sawatzky, Alex; Anastasio, Mark A., E-mail: anastasio@wustl.edu

    Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets.

  9. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction.

    PubMed

    Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A

    2016-04-01

    The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets.
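
    A compact sketch of a standard FISTA for a generic sparsity-regularized least-squares problem, showing the proximal (soft-thresholding) step and the momentum that yields the accelerated rate. The dense forward operator, step size, and penalty weight are illustrative; the variants above replace the gradient step with an OS-SART subproblem and run on GPUs.

      import numpy as np

      def fista_lasso(A, b, lam=0.1, n_iter=200):
          """FISTA for min_x 0.5*||A x - b||^2 + lam*||x||_1."""
          L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of the gradient
          x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
          soft = lambda v, thr: np.sign(v) * np.maximum(np.abs(v) - thr, 0.0)
          for _ in range(n_iter):
              grad = A.T @ (A @ z - b)
              x_new = soft(z - grad / L, lam / L)         # proximal (shrinkage) step
              t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
              z = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum/extrapolation
              x, t = x_new, t_new
          return x

      rng = np.random.default_rng(0)
      A = rng.normal(size=(200, 500))
      x_true = np.zeros(500); x_true[rng.choice(500, 10, replace=False)] = rng.normal(size=10)
      b = A @ x_true + 0.01 * rng.normal(size=200)
      x_hat = fista_lasso(A, b, lam=0.5)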

  10. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    PubMed Central

    Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A.

    2016-01-01

    Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets. PMID:27036582
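
    To make the shrinkage-thresholding machinery behind these records concrete, here is a minimal FISTA sketch for an l1-regularized least-squares problem. The dense matrix `A`, the penalty weight `lam`, and the iteration count are illustrative assumptions; the sketch shows the gradient step, the soft-thresholding proximal step, and the momentum update, but not the OS-SART subproblem or the GPU implementation described in the abstract.

```python
import numpy as np

def soft_threshold(x, tau):
    """Elementwise soft-thresholding, the proximal operator of tau*||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def fista(A, b, lam, n_iter=100):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 with the standard FISTA iteration.

    A is a dense matrix here for simplicity; in CBCT it would be the (matrix-free)
    projection operator, and the gradient step below is what the cited work replaces
    with an OS-SART subproblem.
    """
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth term's gradient
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)           # gradient of the data-fit term at the extrapolated point
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t ** 2)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum / extrapolation step
        x, t = x_new, t_new
    return x
```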

  11. Ideal regularization for learning kernels from labels.

    PubMed

    Pan, Binbin; Lai, Jianhuang; Shen, Lixin

    2014-08-01

    In this paper, we propose a new form of regularization that is able to utilize the label information of a data set for learning kernels. The proposed regularization, referred to as ideal regularization, is a linear function of the kernel matrix to be learned. The ideal regularization allows us to develop efficient algorithms to exploit labels. Three applications of the ideal regularization are considered. First, we use the ideal regularization to incorporate the labels into a standard kernel, making the resulting kernel more appropriate for learning tasks. Next, we employ the ideal regularization to learn a data-dependent kernel matrix from an initial kernel matrix (which contains prior similarity information, geometric structures, and labels of the data). Finally, we incorporate the ideal regularization into some state-of-the-art kernel learning problems. With this regularization, these learning problems can be formulated as simpler ones which permit more efficient solvers. Empirical results show that the ideal regularization exploits the labels effectively and efficiently. Copyright © 2014 Elsevier Ltd. All rights reserved.
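
    As a rough illustration of a regularizer that is linear in the kernel matrix, the sketch below builds a label-derived "ideal" kernel and nudges a base kernel toward it. The function names, the mixing weight `gamma`, and the final positive-semidefinite projection are illustrative assumptions, not the exact formulation of the cited paper.

```python
import numpy as np

def ideal_kernel(labels):
    """Label-derived 'ideal' kernel: entry (i, j) is 1 when samples i and j share a class,
    and 0 otherwise."""
    labels = np.asarray(labels)
    return (labels[:, None] == labels[None, :]).astype(float)

def ideal_regularized_kernel(K, labels, gamma=0.5):
    """Illustrative linear-in-K correction: bias a base kernel K toward the ideal kernel.
    Only a sketch of the idea that the regularizer is linear in the kernel matrix."""
    K_new = K + gamma * ideal_kernel(labels)
    K_new = (K_new + K_new.T) / 2.0                 # symmetrize
    w, V = np.linalg.eigh(K_new)                    # clip negative eigenvalues to keep K_new PSD
    return (V * np.maximum(w, 0.0)) @ V.T
```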

  12. Total variation iterative constraint algorithm for limited-angle tomographic reconstruction of non-piecewise-constant structures

    NASA Astrophysics Data System (ADS)

    Krauze, W.; Makowski, P.; Kujawińska, M.

    2015-06-01

    Standard tomographic algorithms applied to optical limited-angle tomography produce reconstructions with highly anisotropic resolution, so special algorithms are developed. State-of-the-art approaches utilize the Total Variation (TV) minimization technique. These methods give very good results but are applicable to piecewise-constant structures only. In this paper, we propose a novel algorithm for 3D limited-angle tomography, the Total Variation Iterative Constraint method (TVIC), which extends the applicability of TV regularization to non-piecewise-constant samples, such as biological cells. The approach consists of two parts. First, TV minimization is used as a strong regularizer to create a sharp-edged image that is converted to a 3D binary mask, which is then applied iteratively in the tomographic reconstruction as a constraint in the object domain (see the sketch below). In the present work, we test the method on a synthetic object designed to mimic the basic structures of a living cell. For simplicity, the test reconstructions were performed within the straight-line propagation model (SIRT3D solver from the ASTRA Tomography Toolbox), but the strategy is general enough to supplement any tomographic reconstruction algorithm that supports arbitrary geometries of plane-wave projection acquisition, including optical diffraction tomography solvers. The obtained reconstructions exhibit the resolution uniformity and overall shape accuracy expected from TV-regularized solvers, while at the same time preserving the smooth internal structures of the object. A comparison between three different object illumination arrangements shows a very small impact of the projection acquisition geometry on the image quality.
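
    A schematic of the two-stage idea (strong TV regularization to obtain a binary support mask, then mask-constrained algebraic iterations) might look as follows. The callables `forward`/`backproject`, the step size, and the threshold are placeholders; this is not the ASTRA/SIRT3D implementation used in the paper.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def tvic_reconstruct(forward, backproject, sinogram, shape, n_outer=5, n_inner=20,
                     tv_weight=0.2, mask_threshold=0.05):
    """Schematic of the Total Variation Iterative Constraint (TVIC) idea.

    `forward` and `backproject` are user-supplied callables (e.g. wrappers around a
    tomographic projector); only the mask-as-constraint loop is illustrated here.
    """
    x = np.zeros(shape)
    for _ in range(n_outer):
        # Stage 1: strongly TV-regularized image -> binary support mask.
        x_tv = denoise_tv_chambolle(x, weight=tv_weight)
        mask = (x_tv > mask_threshold * x_tv.max()).astype(float) if x_tv.max() > 0 else np.ones(shape)
        # Stage 2: simple SIRT-like updates with the mask as an object-domain constraint.
        for _ in range(n_inner):
            residual = sinogram - forward(x)
            x = x + 0.1 * backproject(residual)
            x *= mask                     # keep the reconstruction inside the estimated support
            x = np.clip(x, 0, None)       # non-negativity
    return x
```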

  13. Controlled wavelet domain sparsity for x-ray tomography

    NASA Astrophysics Data System (ADS)

    Purisha, Zenith; Rimpeläinen, Juho; Bubba, Tatiana; Siltanen, Samuli

    2018-01-01

    Tomographic reconstruction is an ill-posed inverse problem that calls for regularization. One possibility is to require sparsity of the unknown in an orthonormal wavelet basis. This, in turn, can be achieved by variational regularization, where the penalty term is the sum of the absolute values of the wavelet coefficients. With a primal-dual fixed point algorithm, the minimizer of the variational regularization functional can be computed iteratively using a soft-thresholding operation. Choosing the soft-thresholding parameter …
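
    The soft-thresholding step that such wavelet-sparsity solvers apply once per iteration can be sketched with PyWavelets as below. The wavelet family, decomposition level, and threshold `tau` are illustrative choices, not the controlled parameter-selection rule the abstract refers to.

```python
import pywt

def wavelet_soft_threshold(image, wavelet="db4", level=3, tau=0.1):
    """Soft-threshold the detail coefficients of an orthonormal wavelet decomposition.
    This is the proximal step an iterative shrinkage scheme for wavelet-sparsity
    regularization would apply; tau plays the role of the soft-thresholding parameter."""
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=level)
    thresholded = [coeffs[0]]                       # keep the approximation band untouched
    for detail_level in coeffs[1:]:
        thresholded.append(tuple(pywt.threshold(d, tau, mode="soft") for d in detail_level))
    return pywt.waverec2(thresholded, wavelet=wavelet)
```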

  14. On Richardson extrapolation for low-dissipation low-dispersion diagonally implicit Runge-Kutta schemes

    NASA Astrophysics Data System (ADS)

    Havasi, Ágnes; Kazemi, Ehsan

    2018-04-01

    In the modeling of wave propagation phenomena it is necessary to use time integration methods which are not only sufficiently accurate, but also properly describe the amplitude and phase of the propagating waves. It is not clear whether amending the developed schemes with extrapolation methods to obtain a high order of accuracy preserves their qualitative properties from the perspective of dissipation, dispersion and stability analysis. It is illustrated that the combination of various optimized schemes with Richardson extrapolation is not optimal for minimal dissipation and dispersion errors. Optimized third-order and fourth-order methods are obtained, and it is shown that the proposed methods combined with Richardson extrapolation result in fourth and fifth orders of accuracy, respectively, while preserving optimality and stability. The numerical applications include the linear wave equation, a stiff system of reaction-diffusion equations and the nonlinear Euler equations with oscillatory initial conditions. It is demonstrated that the extrapolated third-order scheme outperforms the recently developed fourth-order diagonally implicit Runge-Kutta scheme in terms of accuracy and stability.
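
    For readers unfamiliar with the underlying mechanism, classical Richardson extrapolation combines two solutions computed with steps h and h/2 by a method of order p to cancel the leading error term. The sketch below demonstrates this with explicit Euler on du/dt = -u, a deliberately simple stand-in for the diagonally implicit Runge-Kutta schemes of the paper; all names and values are illustrative.

```python
import numpy as np

def richardson_extrapolate(u_h, u_h2, order):
    """Combine solutions with step h (u_h) and h/2 (u_h2) from a method of the given
    order to cancel the leading error term, yielding order (order + 1) accuracy."""
    factor = 2.0 ** order
    return (factor * u_h2 - u_h) / (factor - 1.0)

def euler(f, u0, t_end, n_steps):
    """Explicit Euler integration of du/dt = f(u) from 0 to t_end."""
    u, dt = u0, t_end / n_steps
    for _ in range(n_steps):
        u = u + dt * f(u)
    return u

f = lambda u: -u
u_h  = euler(f, 1.0, 1.0, 50)       # step h
u_h2 = euler(f, 1.0, 1.0, 100)      # step h/2
exact = np.exp(-1.0)
print(abs(u_h - exact), abs(richardson_extrapolate(u_h, u_h2, order=1) - exact))
```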

  15. Characterization of WB/SiC Schottky Barrier Diodes Using I-V-T Method

    NASA Astrophysics Data System (ADS)

    Aldridge, James; Oder, Tom

    2009-04-01

    The importance of silicon carbide (SiC) semiconductors for high-temperature and high-power microelectronic device applications has long been established. We have fabricated SiC Schottky barrier diodes using tungsten boride (WB) as the Schottky contact. The diodes were characterized using the current-voltage-temperature (I-V-T) method. The sample was mounted on a heated stage and the temperature was varied from about 25 °C to 300 °C in steps of 25 °C. From the Richardson plot, we obtained an energy barrier height of 0.96 eV and a Richardson constant of 71.2 A K⁻² cm⁻². Using the modified Richardson plot, we obtained a barrier height of 1.01 eV. From the variation of the ideality factor with temperature, we determined a characteristic energy of 0.02 eV to 0.04 eV across the measurement temperature range. This implies that thermionic emission is dominant in the low measurement temperature range. Our results confirm the excellent thermal stability of WB/SiC Schottky barrier diodes.
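
    The barrier-height extraction follows from the thermionic-emission relation I_s = A·A*·T²·exp(-φ_B/(k_B T)): plotting ln(I_s/T²) against 1/(k_B T) gives a line whose slope is -φ_B (in eV) and whose intercept yields the Richardson constant. A minimal fitting sketch is shown below; the diode area and the variable names are assumptions, not values from the cited measurement.

```python
import numpy as np

K_B = 8.617e-5      # Boltzmann constant in eV/K
AREA = 1.0e-3       # diode area in cm^2 (illustrative value)

def richardson_fit(T, I_sat):
    """Fit a Richardson plot, ln(I_s / T^2) vs 1/(k_B T), to saturation currents from
    I-V-T data. The slope gives the barrier height (eV); the intercept gives ln(A*A**),
    from which the Richardson constant in A K^-2 cm^-2 follows."""
    x = 1.0 / (K_B * np.asarray(T, dtype=float))
    y = np.log(np.asarray(I_sat, dtype=float) / np.asarray(T, dtype=float) ** 2)
    slope, intercept = np.polyfit(x, y, 1)
    barrier_height_eV = -slope
    richardson_constant = np.exp(intercept) / AREA
    return barrier_height_eV, richardson_constant
```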

  16. Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction

    PubMed Central

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-01-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators arising from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization of the solution via the proximity operators, which define two projection operators, naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce into the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We theoretically prove convergence of the preconditioned alternating projection algorithm. In numerical experiments, the performance of our algorithm, with an appropriately selected preconditioning matrix, is compared with that of the conventional MAP expectation-maximization (MAP-EM) algorithm with a TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms EM-TV in all aspects, including convergence speed, noise in the reconstructed images, and image quality. It also outperforms the nested EM-TV in convergence speed while providing comparable image quality. PMID:23271835
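
    The proximity operators the abstract refers to generalize projections: prox_g(v) = argmin_x g(x) + 0.5||x - v||². Two simple instances are sketched below purely for orientation; they are not the paper's EM-preconditioned iteration, and the TV prox used there is more involved than either.

```python
import numpy as np

def prox_l1(v, tau):
    """Proximity operator of tau*||.||_1: elementwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_nonneg(v):
    """Proximity operator of the indicator of the nonnegative orthant, i.e. the
    projection onto {x >= 0}; the kind of constraint-handling prox the abstract
    alternates with the regularizer's prox."""
    return np.maximum(v, 0.0)
```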

  17. Mixture of Segmenters with Discriminative Spatial Regularization and Sparse Weight Selection*

    PubMed Central

    Chen, Ting; Rangarajan, Anand; Eisenschenk, Stephan J.

    2011-01-01

    This paper presents a novel segmentation algorithm which automatically learns the combination of weak segmenters and builds a strong one, based on the assumption that the locally weighted combination varies with respect to both the weak segmenters and the training images. We learn the weighted combination during the training stage using a discriminative spatial regularization which depends on the training set labels. A closed-form solution to the cost function is derived for this approach. In the testing stage, a sparse regularization scheme is imposed to avoid overfitting. To the best of our knowledge, such a segmentation technique has never been reported in the literature, and we empirically show that it significantly improves the performance of the weak segmenters. After showcasing the performance of the algorithm in the context of atlas-based segmentation, we present comparisons to existing weak-segmenter combination strategies on a hippocampal data set. PMID:22003748
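
    The locally weighted combination itself can be written compactly. The sketch below fuses per-pixel probability maps from several weak segmenters using given per-pixel weights; how the weights are learned, and the discriminative spatial regularization, are outside this illustration, and all names are assumptions.

```python
import numpy as np

def combine_segmenters(prob_maps, weight_maps):
    """Locally weighted fusion of weak segmenters.

    prob_maps:   (n_segmenters, H, W) per-pixel foreground probabilities
    weight_maps: (n_segmenters, H, W) per-pixel (possibly sparse) weights, assumed given
    Returns a binary segmentation from the weighted vote.
    """
    weight_maps = np.clip(weight_maps, 0.0, None)
    norm = weight_maps.sum(axis=0) + 1e-12          # avoid division by zero
    fused = (weight_maps * prob_maps).sum(axis=0) / norm
    return (fused > 0.5).astype(np.uint8)
```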

  18. Structure-Based Low-Rank Model With Graph Nuclear Norm Regularization for Noise Removal.

    PubMed

    Ge, Qi; Jing, Xiao-Yuan; Wu, Fei; Wei, Zhi-Hui; Xiao, Liang; Shao, Wen-Ze; Yue, Dong; Li, Hai-Bo

    2017-07-01

    Nonlocal image representation methods, including group-based sparse coding and block-matching 3-D filtering, have shown strong performance in low-level vision tasks. The nonlocal prior is extracted from each group consisting of patches with similar intensities. Grouping patches based on intensity similarity, however, gives rise to disturbance and inaccuracy in the estimation of the true image. To address this problem, we propose a structure-based low-rank model with graph nuclear norm regularization. We exploit the local manifold structure inside a patch and group the patches by the distance metric of manifold structure. With the manifold structure information, a graph nuclear norm regularization is established and incorporated into a low-rank approximation model. We then prove that the graph-based regularization is equivalent to a weighted nuclear norm and that the proposed model can be solved by a weighted singular-value thresholding algorithm. Extensive experiments on additive white Gaussian noise removal and mixed noise removal demonstrate that the proposed method achieves better performance than several state-of-the-art algorithms.
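
    The weighted singular-value thresholding step that solves such weighted-nuclear-norm problems can be sketched in a few lines. How the per-singular-value weights are derived from the graph regularization is model-specific and is simply treated as given here.

```python
import numpy as np

def weighted_svt(M, weights):
    """Weighted singular-value thresholding of a patch-group matrix M: shrink each
    singular value by its own nonnegative threshold in `weights` and reconstruct."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - np.asarray(weights, dtype=float), 0.0)
    return (U * s_shrunk) @ Vt
```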

  19. Categories and Music Transmission

    ERIC Educational Resources Information Center

    Gatien, Greg

    2009-01-01

    Lucy Green's (2008) "Music, Informal Learning, and the School: A New Classroom Pedagogy" gives rise to an interesting corollary. Does the manner of music's transmission inform one's understanding of a musical category? While categories of music can be difficult to define according to strict musical characteristics, a better understanding of…

  20. Writing (for) Survival: Continuity and Change in Four Contemporary Native American Women's Autobiographies.

    ERIC Educational Resources Information Center

    de Hernandez, J. Browdy

    1994-01-01

    Reviews four autobiographical texts by Native American women: "Talking Indian: Reflections on Survival and Writing" (Anna Lee Walters), "Storyteller" (Leslie Marmon Silko), "The Ways of My Grandmothers" (Beverly Hungry Wolf), and "Saanii Dahataal/The Women Are Singing" (Lucy Tapahonso). All rework the…
