Physics considerations in MV-CBCT multi-layer imager design.
Hu, Yue-Houng; Fueglistaller, Rony; Myronakis, Marios E; Rottmann, Joerg; Wang, Adam; Shedlock, Daniel; Morf, Daniel; Baturin, Paul; Huber, Pascal; Star-Lack, Josh M; Berbeco, Ross I
2018-05-30
Megavoltage (MV) cone-beam computed tomography (CBCT) using an electronic portal imaging device (EPID) offers advantageous features, including 3D mapping, treatment beam registration, high-Z artifact suppression, and direct radiation dose calculation. Adoption has been slowed by image quality limitations and concerns about imaging dose. Developments in imager design, including pixelated scintillators, structured phosphors, inexpensive scintillation materials, and multi-layer imager (MLI) architecture, have been explored to improve EPID image quality and reduce imaging dose. The present study employs a hybrid Monte Carlo and linear systems model to determine the effect of detector design elements, such as multi-layer architecture and scintillation materials. We follow metrics of image quality, including the modulation transfer function (MTF) and noise power spectrum (NPS), from projection images to 3D reconstructions to in-plane slices, and apply a task-based figure-of-merit, the ideal observer signal-to-noise ratio (d'), to determine the effect of detector design on object detectability. Generally, detectability was limited by detector noise performance. Deploying an MLI imager with a single scintillation material for all layers yields improvement in noise performance and d' linear with the number of layers. In general, improving x-ray absorption using thicker scintillators results in improved DQE(0). However, if light yield is low, performance will be affected by electronic noise at relatively high doses, resulting in rapid image quality degradation. Maximizing image quality in a heterogeneous MLI detector (i.e. multiple different scintillation materials) is most affected by limiting imager noise. However, while a second-order effect, maximizing the total spatial resolution of the MLI detector is a balance of the intensity contribution of each layer against its individual MTF. 
So, while a thinner scintillator may yield a maximal individual-layer MTF, its quantum efficiency will be relatively low in comparison to a thicker scintillator, and thus its intensity contribution may be insufficient to noticeably improve the total detector MTF. © 2018 Institute of Physics and Engineering in Medicine.
Dual-Particle Imaging System with Neutron Spectroscopy for Safeguard Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamel, Michael C.; Weber, Thomas M.
2017-11-01
A dual-particle imager (DPI) has been designed that is capable of detecting gamma-ray and neutron signatures from shielded SNM. The system combines liquid organic and NaI(Tl) scintillators to form a combined Compton and neutron scatter camera. Effective image reconstruction of detected particles is a crucial component for maximizing the performance of the system; however, a key deficiency exists in the widely used iterative list-mode maximum-likelihood expectation-maximization (MLEM) image reconstruction technique: MLEM requires a stopping condition to yield a good-quality solution, but such conditions fail to achieve maximum image quality. Stochastic origin ensembles (SOE) imaging is a good candidate to address this problem, as it uses Markov chain Monte Carlo to reach a stochastic steady-state solution. The application of SOE to the DPI is presented in this work.
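The stopping-condition issue the abstract raises is easiest to see in code. Below is a minimal, generic MLEM sketch (binned rather than list-mode, with a random toy system matrix; none of the geometry or counts come from the DPI itself): each iteration multiplies the current image by the backprojected ratio of measured to predicted counts, and running too many iterations eventually amplifies noise, which is why a stopping rule, or an SOE-style stochastic steady state, is needed.

```python
import numpy as np

def mlem(A, y, n_iter=200, eps=1e-12):
    """Basic MLEM for a Poisson model y ~ Poisson(A @ x).

    A[i, j]: probability that an emission in image bin j is recorded
    in detector bin i (the system matrix)."""
    x = np.ones(A.shape[1])                # flat initial image
    sens = A.sum(axis=0)                   # per-bin sensitivity (normalization)
    for _ in range(n_iter):
        proj = np.maximum(A @ x, eps)      # forward projection
        x *= (A.T @ (y / proj)) / np.maximum(sens, eps)
    return x

# Toy problem: 8 image bins, 40 detector bins, two hot spots
rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(40, 8))
x_true = np.zeros(8)
x_true[2], x_true[5] = 50.0, 20.0
y = rng.poisson(A @ x_true).astype(float)
x_hat = mlem(A, y)
```

The multiplicative update preserves non-negativity automatically, which is one reason MLEM is popular for emission imaging despite the stopping-rule problem.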
Maximizing Total QoS-Provisioning of Image Streams with Limited Energy Budget
NASA Astrophysics Data System (ADS)
Lee, Wan Yeon; Kim, Kyong Hoon; Ko, Young Woong
To fully utilize the limited battery energy of mobile electronic devices, we propose an adaptive method for adjusting the processing quality of multiple image stream tasks running with widely varying execution times. The method completes the worst-case executions of the tasks within a given energy budget and maximizes the total reward value of processing quality obtained during their execution by exploiting the probability distribution of task execution times. The proposed method derives the maximum reward value for tasks executable with arbitrary processing quality, and a near-maximum value for tasks executable with a finite number of processing qualities. Our evaluation on a prototype system shows that the proposed method achieves larger reward values, by up to 57%, than the previous method.
Information theoretical assessment of visual communication with wavelet coding
NASA Astrophysics Data System (ADS)
Rahman, Zia-ur
1995-06-01
A visual communication channel can be characterized by the efficiency with which it conveys information and the quality of the images restored from the transmitted data. Efficient data representation requires the use of the constraints of the visual communication channel. Our information-theoretic analysis combines the design of the wavelet compression algorithm with the design of the visual communication channel. Shannon's communication theory, Wiener's restoration filter, and the critical design factors of image gathering and display are combined to provide metrics for measuring the efficiency of data transmission and for quantitatively assessing the visual quality of the restored image. These metrics are: (a) the mutual information η between the radiance field and the restored image, and (b) the efficiency of the channel, which can be roughly measured by the ratio η/H, where H is the average number of bits used to transmit the data. Huck, et al. (Journal of Visual Communication and Image Representation, Vol. 4, No. 2, 1993) have shown that channels designed to maximize η also maximize this efficiency. Our assessment provides a framework for designing channels which provide the highest possible visual quality for a given amount of data under the critical design limitations of the image gathering and display devices. Results show that a trade-off exists between the maximum realizable information of the channel and its efficiency: an increase in one leads to a decrease in the other. The final selection of which of these quantities to maximize is, of course, application dependent.
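The η/H ratio can be sketched numerically. The toy below estimates the mutual information between a stand-in "radiance field" and a noisy "restored image" from a joint histogram, and uses the marginal entropy of the restored data as a rough proxy for H (the paper's H is the actual transmitted bit rate; that substitution, the Gaussian scene, and the noise level are all assumptions made for illustration).

```python
import numpy as np

def entropy_bits(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def mi_and_marginals(a, b, bins=32):
    """Mutual information (bits) and marginal entropies from a joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pj = joint / joint.sum()
    hx = entropy_bits(pj.sum(axis=1))
    hy = entropy_bits(pj.sum(axis=0))
    hxy = entropy_bits(pj.ravel())
    return hx + hy - hxy, hx, hy

rng = np.random.default_rng(1)
scene = rng.normal(size=(64, 64))                       # stand-in radiance field
restored = scene + 0.5 * rng.normal(size=scene.shape)   # noisy restored image
eta, _, h_bits = mi_and_marginals(scene, restored)
efficiency = eta / h_bits                               # the eta/H ratio
```

Since mutual information can never exceed the marginal entropy, this proxy efficiency is bounded by 1, matching the intuition that a channel cannot convey more scene information than the bits it spends.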
Assessment of visual communication by information theory
NASA Astrophysics Data System (ADS)
Huck, Friedrich O.; Fales, Carl L.
1994-01-01
This assessment of visual communication integrates the optical design of the image-gathering device with the digital processing for image coding and restoration. Results show that informationally optimized image gathering ordinarily can be relied upon to maximize the information efficiency of decorrelated data and the visual quality of optimally restored images.
On the assessment of visual communication by information theory
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.; Fales, Carl L.
1993-01-01
This assessment of visual communication integrates the optical design of the image-gathering device with the digital processing for image coding and restoration. Results show that informationally optimized image gathering ordinarily can be relied upon to maximize the information efficiency of decorrelated data and the visual quality of optimally restored images.
Xu, Renfeng; Bradley, Arthur; Thibos, Larry N.
2013-01-01
Purpose: We tested the hypothesis that pupil apodization is the basis for central pupil bias of spherical refractions in eyes with spherical aberration. Methods: We employed Fourier computational optics in which we varied spherical aberration levels, pupil size, and pupil apodization (Stiles-Crawford effect) within the pupil function, from which point spread functions and optical transfer functions were computed. Through-focus analysis determined the refractive correction that optimized retinal image quality. Results: For a large pupil (7 mm), as spherical aberration levels increase, refractions that optimize the visual Strehl ratio mirror refractions that maximize high spatial frequency modulation in the image, and both focus a near-paraxial region of the pupil. These refractions are not affected by Stiles-Crawford effect apodization. Refractions that optimize low spatial frequency modulation come close to minimizing wavefront RMS, and vary with the level of spherical aberration and the Stiles-Crawford effect. In the presence of significant levels of spherical aberration (e.g. C40 = 0.4 µm, 7 mm pupil), low spatial frequency refractions can induce a −0.7 D myopic shift compared to the high spatial frequency refraction, and refractions that maximize image contrast of a 3 cycles per degree square-wave grating can cause a −0.75 D myopic drift relative to refractions that maximize image sharpness. Discussion: Because of the small depth of focus associated with high spatial frequency stimuli, the large change in dioptric power across the pupil caused by spherical aberration limits the effective aperture contributing to the image of high spatial frequencies. Thus, when imaging high spatial frequencies, spherical aberration effectively induces an annular aperture defining the portion of the pupil contributing to a well-focused image. As spherical focus is manipulated during the refraction procedure, the dimensions of the annular aperture change. 
Image quality is maximized when the inner radius of the induced annulus falls to zero, thus defining a circular near paraxial region of the pupil that determines refraction outcome. PMID:23683093
McCafferty, Sean J; Schwiegerling, Jim T
2015-04-01
We present an analysis methodology for developing and evaluating accommodating intraocular lenses incorporating a deformable interface. The next-generation design of an extruded-gel-interface intraocular lens is presented. A prototype, based on a similar previously in vivo proven design, was tested with measurements of actuation force, lens power, interface contour, optical transfer function, and visual Strehl ratio. Prototype-verified mathematical models were used to optimize optical and mechanical design parameters to maximize image quality and minimize the force required to accommodate. The prototype lens produced adequate image quality with the available physiologic accommodating force. The iterative mathematical modeling based upon the prototype yielded maximized optical and mechanical performance through the maximum allowable gel thickness to extrusion diameter ratio, the maximum feasible refractive index change at the interface, and minimum gel material properties in Poisson's ratio and Young's modulus. The design prototype performed well. It operated within the physiologic constraints of the human eye, including the force available for full accommodative amplitude using the eye's natural focusing feedback, while maintaining image quality in the space available. The parameters that optimized optical and mechanical performance were delineated as those that minimize both asphericity and actuation pressure. The design parameters outlined herein can be used as a template to maximize the performance of a deformable-interface intraocular lens. The article combines a multidisciplinary basic science approach from biomechanics, optical science, and ophthalmology to optimize an intraocular lens design suitable for preliminary animal trials.
Multi-pass encoding of hyperspectral imagery with spectral quality control
NASA Astrophysics Data System (ADS)
Wasson, Steven; Walker, William
2015-05-01
Multi-pass encoding is a technique employed in the field of video compression that maximizes the quality of an encoded video sequence within the constraints of a specified bit rate. This paper presents research where multi-pass encoding is extended to the field of hyperspectral image compression. Unlike video, which is primarily intended to be viewed by a human observer, hyperspectral imagery is processed by computational algorithms that generally attempt to classify the pixel spectra within the imagery. As such, these algorithms are more sensitive to distortion in the spectral dimension of the image than they are to perceptual distortion in the spatial dimension. The compression algorithm developed for this research, which uses the Karhunen-Loève transform for spectral decorrelation followed by a modified H.264/Advanced Video Coding (AVC) encoder, maintains a user-specified spectral quality level while maximizing the compression ratio throughout the encoding process. The compression performance may be considered near-lossless in certain scenarios. For qualitative purposes, this paper presents the performance of the compression algorithm for several Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Hyperion datasets using spectral angle as the spectral quality assessment function. Specifically, the compression performance is illustrated in the form of rate-distortion curves that plot spectral angle versus bits per pixel per band (bpppb).
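The spectral quality assessment function used above, the spectral angle, is simple enough to sketch. In a multi-pass scheme, an encoder would tighten or loosen quantization between passes until the per-pixel angle between the original and decoded spectra stays below a user threshold; the spectra and error below are invented for illustration.

```python
import numpy as np

def spectral_angle_deg(a, b):
    """Spectral angle (degrees) between two pixel spectra. Invariant to
    per-pixel scaling (e.g. illumination), so it isolates spectral-shape
    distortion from radiometric distortion."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

original = np.array([0.12, 0.35, 0.60, 0.41, 0.22])   # toy reflectance spectrum
decoded = original + np.array([0.01, -0.01, 0.02, 0.0, -0.01])  # mild coding error
angle = spectral_angle_deg(original, decoded)
```

A classifier built on spectral shape (e.g. a spectral angle mapper) tolerates a pure gain change, which is exactly what the zero-angle scaling property captures.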
Improvement of image quality of coherently illuminated objects in a turbulent atmosphere
NASA Astrophysics Data System (ADS)
Banakh, Viktor A.; Chen, Ben-Nam
1994-06-01
It is shown that the phenomenon of correlation of opposing waves may lead to improved image quality for coherently illuminated objects in a turbulent atmosphere in the case of strong intensity fluctuations. The extent of this improvement depends on the relation between the sizes of the output and receiving apertures. The improvement in visibility in a turbulent atmosphere is maximal when the two sizes are close, and vanishes if the sizes of the illuminating and receiving apertures differ significantly.
Locally Enhanced Image Quality with Tunable Hybrid Metasurfaces
NASA Astrophysics Data System (ADS)
Shchelokova, Alena V.; Slobozhanyuk, Alexey P.; Melchakova, Irina V.; Glybovski, Stanislav B.; Webb, Andrew G.; Kivshar, Yuri S.; Belov, Pavel A.
2018-01-01
Metasurfaces represent a new paradigm in artificial subwavelength structures due to their potential to overcome many challenges typically associated with bulk metamaterials. The ability to make very thin structures and change their properties dynamically makes metasurfaces an exceptional meta-optics platform for engineering advanced electromagnetic and photonic metadevices. Here, we suggest and demonstrate experimentally a tunable metasurface capable of significantly enhancing the local image quality in magnetic resonance imaging (MRI). We present a design of a hybrid metasurface based on electromagnetically coupled dielectric and metallic elements. We demonstrate how to tailor the spectral characteristics of the metasurface eigenmodes by dynamically changing the effective permittivity of the structure. By maximizing the coupling between the metasurface eigenmodes and the transmitted and received fields in the MRI system, we enhance the device sensitivity, resulting in a substantial improvement of the image quality.
Task-based statistical image reconstruction for high-quality cone-beam CT
NASA Astrophysics Data System (ADS)
Dang, Hao; Webster Stayman, J.; Xu, Jennifer; Zbijewski, Wojciech; Sisniega, Alejandro; Mow, Michael; Wang, Xiaohui; Foos, David H.; Aygun, Nafi; Koliatsos, Vassilis E.; Siewerdsen, Jeffrey H.
2017-11-01
Task-based analysis of medical imaging performance underlies many ongoing efforts in the development of new imaging systems. In statistical image reconstruction, regularization is often formulated with terms that encourage smoothness and/or sharpness (e.g. a linear, quadratic, or Huber penalty) but without explicit formulation of the task. We propose an alternative regularization approach in which a spatially varying penalty is determined that maximizes task-based imaging performance at every location in a 3D image. We apply the method to model-based image reconstruction (MBIR; here, penalized weighted least-squares, PWLS) in cone-beam CT (CBCT) of the head, focusing on the task of detecting a small, low-contrast intracranial hemorrhage (ICH), and we test the performance of the algorithm in the context of a recently developed CBCT prototype for point-of-care imaging of brain injury. Theoretical predictions of local spatial resolution and noise are computed via an optimization in which the regularization (specifically, the quadratic penalty strength) is allowed to vary throughout the image to maximize the local task-based detectability index (d′). Simulation studies and test-bench experiments were performed using an anthropomorphic head phantom. Three PWLS implementations were tested: a conventional (constant) penalty; a certainty-based penalty derived to enforce a constant point-spread function (PSF); and the task-based penalty derived to maximize local detectability at each location. Conventional (constant) regularization exhibited a fairly strong degree of spatial variation in d′, and the certainty-based method achieved a uniform PSF, but each exhibited a reduction in detectability compared to the task-based method, which improved detectability by up to ~15%. The improvement was strongest in areas of high attenuation (skull base), where the conventional and certainty-based methods tended to over-smooth the data. 
The task-driven reconstruction method presents a promising regularization method in MBIR by explicitly incorporating task-based imaging performance as the objective. The results demonstrate improved ICH conspicuity and support the development of high-quality CBCT systems.
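The detectability index being maximized can be sketched as a frequency-domain integral of the task contrast passed through the system MTF and divided by the NPS, one common prewhitening-observer form of d′. Every curve in the sketch below (MTF shape, NPS shape, lesion task function) is an assumed stand-in, not data from the study; the point is only to show how sharper MTF or lower NPS raises d′, which is the trade the spatially varying penalty negotiates at each voxel.

```python
import numpy as np

# 1-D radial model: d'^2 = integral of |W_task * MTF|^2 / NPS over frequency
f = np.linspace(0.01, 2.0, 400)                  # spatial frequency (cycles/mm)
df = f[1] - f[0]
mtf = np.exp(-1.5 * f)                           # assumed system MTF
nps = 1e-6 * (0.3 + np.exp(-1.0 * f))            # assumed noise-power spectrum
contrast, radius_mm = 0.02, 1.0                  # small low-contrast lesion task
# Gaussian stand-in for the Fourier transform of a soft-edged disk
task = contrast * np.exp(-(np.pi * radius_mm * f) ** 2)

# 2*pi*f weighting converts the radial profile to a 2-D frequency integral
integrand = 2.0 * np.pi * f * (task * mtf) ** 2 / nps
d_prime = float(np.sqrt(np.sum(integrand) * df))
```

Because d′ is linear in task contrast, halving lesion contrast halves detectability under this model, which is why low-contrast ICH at the heavily attenuated skull base is the hardest case.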
Is there a preference for linearity when viewing natural images?
NASA Astrophysics Data System (ADS)
Kane, David; Bertalmío, Marcelo
2015-01-01
The system gamma of the imaging pipeline, defined as the product of the encoding and decoding gammas, is typically greater than one and is stronger for images viewed with a dark background (e.g. cinema) than those viewed in lighter conditions (e.g. office displays) [1-3]. However, for high dynamic range (HDR) images reproduced on a low dynamic range (LDR) monitor, subjects often prefer a system gamma of less than one [4], presumably reflecting the greater need for histogram equalization in HDR images. In this study we ask subjects to rate the perceived quality of images presented on an LDR monitor using various levels of system gamma. We reveal that the optimal system gamma is below one for images with an HDR and approaches or exceeds one for images with an LDR. Additionally, the highest quality scores occur for images where a system gamma of one is optimal, suggesting a preference for linearity (where possible). We find that subjective image quality scores can be predicted by computing the degree of histogram equalization of the lightness distribution. Accordingly, an optimal, image-dependent system gamma can be computed that maximizes perceived image quality.
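The predictor described, the "degree of histogram equalization" of the lightness distribution, can be sketched as a normalized histogram entropy: 1.0 for a perfectly flat lightness histogram, lower when tones pile up in shadows or highlights. This is a plausible stand-in for the paper's measure, not its exact formula, and the two synthetic lightness distributions are invented for illustration.

```python
import numpy as np

def equalization_degree(lightness, bins=64):
    """Shannon entropy of the lightness histogram, normalized by the
    flat-histogram maximum log2(bins). A sketch of an equalization measure,
    not the paper's published formula."""
    hist, _ = np.histogram(np.clip(lightness, 0.0, 1.0), bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum() / np.log2(bins))

rng = np.random.default_rng(2)
flat = rng.uniform(0.0, 1.0, 20000)    # well-equalized lightness values
skewed = rng.beta(8.0, 2.0, 20000)     # tone scale crushed toward highlights
```

Under this reading, choosing a system gamma amounts to picking the tone curve that pushes the displayed lightness histogram toward flatness, which is why HDR scenes (very peaked histograms) favor gammas below one.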
Backhausen, Lea L.; Herting, Megan M.; Buse, Judith; Roessner, Veit; Smolka, Michael N.; Vetter, Nora C.
2016-01-01
In structural magnetic resonance imaging, motion artifacts are common, especially when not scanning healthy young adults. It has been shown that motion affects the analysis with automated image-processing techniques (e.g., FreeSurfer), which can bias results. Several developmental and adult studies have found reduced volume and thickness of gray matter due to motion artifacts. Thus, quality control is necessary in order to ensure an acceptable level of quality and to define exclusion criteria for images (i.e., to determine the participants with the most severe artifacts). However, information about the quality control workflow and image exclusion procedure is largely lacking in the current literature, and the existing rating systems differ. Here, we propose a stringent workflow of quality control steps during and after acquisition of T1-weighted images, which enables researchers dealing with populations that are typically affected by motion artifacts to enhance data quality and maximize sample sizes. As an underlying aim, we established a thorough quality control rating system for T1-weighted images and applied it to the analysis of developmental clinical data using the automated processing pipeline FreeSurfer. This hands-on workflow and quality control rating system will aid researchers in minimizing motion artifacts in the final data set, and therefore enhance the quality of structural magnetic resonance imaging studies. PMID:27999528
DOE Office of Scientific and Technical Information (OSTI.GOV)
McMillan, Kyle; Marleau, Peter; Brubaker, Erik
In coded aperture imaging, one of the most important factors determining the quality of reconstructed images is the choice of mask/aperture pattern. In many applications, uniformly redundant arrays (URAs) are widely accepted as the optimal mask pattern. Under ideal conditions (thin, highly opaque masks), URA patterns are mathematically constructed to provide artifact-free reconstruction; however, the number of URAs for a chosen number of mask elements is limited, and when highly penetrating particles such as fast neutrons and high-energy gamma-rays are being imaged, the optimum is seldom achieved. In this case more robust mask patterns that provide better reconstructed image quality may exist. Through the use of heuristic optimization methods and maximum likelihood expectation maximization (MLEM) image reconstruction, we show that for both point and extended neutron sources a random mask pattern can be optimized to provide better image quality than that of a URA.
NASA Astrophysics Data System (ADS)
Banakh, Viktor A.; Sazanovich, Valentina M.; Tsvik, Ruvim S.
1997-09-01
The influence of diffraction at an object, coherently illuminated and viewed through a random medium from the same point, on the image-quality improvement caused by counter-wave correlation is studied experimentally. The measurements were carried out using a setup modeling artificial convective turbulence. It is shown that, for a spatially limited reflector with the Fresnel number of the reflector surface radius r ranging from 3 to 12, the contribution of the counter-wave correlation to the image intensity distribution is maximal as compared with point objects (r
Robust Multimodal Dictionary Learning
Cao, Tian; Jojic, Vladimir; Modla, Shannon; Powell, Debbie; Czymmek, Kirk; Niethammer, Marc
2014-01-01
We propose a robust multimodal dictionary learning method for multimodal images. Joint dictionary learning for both modalities may be impaired by a lack of correspondence between image modalities in the training data, for example due to areas of low quality in one of the modalities. Dictionaries learned with such non-corresponding data will induce uncertainty about the image representation. In this paper, we propose a probabilistic model that accounts for image areas that are poorly corresponding between the image modalities. We cast the problem of learning a dictionary in the presence of problematic image patches as a likelihood maximization problem and solve it with a variant of the EM algorithm. Our algorithm iterates identification of poorly corresponding patches and refinements of the dictionary. We tested our method on synthetic and real data. We show improvements in image prediction quality and alignment accuracy when using the method for multimodal image registration. PMID:24505674
Spread spectrum image watermarking based on perceptual quality metric.
Zhang, Fan; Liu, Wenyu; Lin, Weisi; Ngan, King Ngi
2011-11-01
Efficient image watermarking calls for full exploitation of the perceptual distortion constraint. Second-order statistics of visual stimuli are regarded as critical features for perception. This paper proposes a second-order statistics (SOS)-based image quality metric, which considers the texture masking effect and the contrast sensitivity in the Karhunen-Loève transform domain. Compared with state-of-the-art metrics, the quality prediction by SOS better correlates with several subjectively rated image databases, in which the images are impaired by typical coding and watermarking artifacts. With the explicit metric definition, spread spectrum watermarking is posed as an optimization problem: we search for a watermark that minimizes the distortion of the watermarked image and maximizes the correlation between the watermark pattern and the spread spectrum carrier. The simple metric guarantees the optimal watermark a closed-form solution and a fast implementation. The experiments show that the proposed watermarking scheme can take full advantage of the distortion constraint and improve the robustness in return.
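The closed-form structure of such a problem can be sketched abstractly: maximize the carrier correlation c·w subject to a quadratic perceptual-distortion budget w·Qw ≤ D, where Q is a symmetric positive-definite weighting. The Lagrangian solution is w* = sqrt(D / (c·Q⁻¹c)) · Q⁻¹c. The matrix Q below is a random toy stand-in for the paper's SOS metric weighting, and the dimension is kept tiny for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 16
carrier = rng.normal(size=n)                 # spread-spectrum carrier c
M = rng.normal(size=(n, n))
Q = M @ M.T + n * np.eye(n)                  # SPD perceptual-weighting stand-in
D = 1.0                                      # allowed perceptual distortion

# Closed-form optimum: scale Q^{-1} c to sit exactly on the distortion budget
Qc = np.linalg.solve(Q, carrier)             # Q^{-1} c
w = np.sqrt(D / (carrier @ Qc)) * Qc
```

The shape of the solution explains the paper's claim: a simple (quadratic) metric keeps the optimizer a single linear solve, so embedding stays fast while still spending the full distortion budget.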
NASA Astrophysics Data System (ADS)
Malezan, A.; Tomal, A.; Antoniassi, M.; Watanabe, P. C. A.; Albino, L. D.; Poletti, M. E.
2015-11-01
In this work, a spectral reconstruction methodology for diagnostic X-rays, using the inverse Laplace transform of the attenuation curve, was successfully applied to dental X-ray equipment. The attenuation curves of 8 commercially available dental X-ray units, from 3 different manufacturers (Siemens, Gnatus, and Dabi Atlante), were obtained using an ionization chamber and high-purity aluminium filters, while the kVp was obtained with a dedicated meter. A computational routine was implemented to fit a model function, whose inverse Laplace transform is analytically known, to the attenuation curve. The methodology was validated by comparing the reconstructed and the measured (using a cadmium telluride semiconductor detector) spectra of a given dental X-ray unit. The spectral reconstruction showed that the Dabi Atlante units generate spectra of similar shape. This is a desirable feature from a clinical standpoint because it produces similar levels of image quality and dose. We observed that units from Siemens and Gnatus generate significantly different spectra, suggesting that, for a given operating protocol, these units will present different levels of image quality and dose. This fact points to the need for individualized operating protocols that optimize image quality and dose. The proposed methodology is suitable for performing a spectral reconstruction of dental X-ray equipment from simple measurements of the attenuation curve and kVp. The simplified experimental apparatus and the low level of technical difficulty make this methodology accessible to a broad range of users. Knowledge of the spectral distribution can help in the development of operating protocols that optimize image quality and dose.
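The Laplace-transform relationship can be sketched directly: the transmission curve is T(t) = Σₖ wₖ exp(−μₖ t), i.e. a Laplace transform of the spectrum with absorber thickness t as the transform variable, so inverting it on a fixed grid of attenuation coefficients reduces to a linear fit. All numbers below are synthetic, and the plain least-squares solve stands in for the paper's analytic model-function fit; a real implementation would need non-negativity constraints or regularization, since this inversion is notoriously ill-conditioned.

```python
import numpy as np

# Synthetic "measured" transmission vs added filter thickness t (cm)
t = np.linspace(0.0, 1.0, 30)
mu_true = np.array([1.5, 4.0])              # 1/cm, two spectral components
w_true = np.array([0.7, 0.3])               # relative fluences
T = np.exp(-np.outer(t, mu_true)) @ w_true

# Discrete inverse Laplace transform: linear fit on a fixed mu grid
mu_grid = np.linspace(0.5, 6.0, 12)         # candidate attenuation coefficients
E = np.exp(-np.outer(t, mu_grid))           # discrete Laplace kernel
w_fit, *_ = np.linalg.lstsq(E, T, rcond=None)
T_recon = E @ w_fit
```

The recovered weights on the μ grid play the role of the spectrum; in practice the attenuation curve is measured with the ionization chamber and aluminium filters described in the abstract.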
Optimization of oncological ¹⁸F-FDG PET/CT imaging based on a multiparameter analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menezes, Vinicius O., E-mail: vinicius@radtec.com.br; Machado, Marcos A. D.; Queiroz, Cleiton C.
2016-02-15
Purpose: This paper describes a method to achieve consistent clinical image quality in ¹⁸F-FDG scans accounting for patient habitus, dose regimen, image acquisition, and processing techniques. Methods: Oncological PET/CT scan data for 58 subjects were evaluated retrospectively to derive analytical curves that predict image quality. Patient noise equivalent count rate and coefficient of variation (CV) were used as metrics in their analysis. Optimized acquisition protocols were identified and prospectively applied to 179 subjects. Results: The adoption of different schemes for three body mass ranges (<60 kg, 60–90 kg, >90 kg) allows improved image quality with both point spread function and ordered-subsets expectation maximization-3D reconstruction methods. The application of this methodology showed that CV improved significantly (p < 0.0001) in clinical practice. Conclusions: Consistent oncological PET/CT image quality on a high-performance scanner was achieved from an analysis of the relations existing between dose regimen, patient habitus, acquisition, and processing techniques. The proposed methodology may be used by PET/CT centers to develop protocols to standardize PET/CT imaging procedures and achieve better patient management and cost-effective operations.
Kim, Hyun Gi; Lee, Young Han; Choi, Jin-Young; Park, Mi-Suk; Kim, Myeong-Jin; Kim, Ki Whang
2015-01-01
Purpose: To investigate the optimal blending percentage of adaptive statistical iterative reconstruction (ASIR) for a reduced radiation dose while preserving a degree of image quality and texture similar to that of standard-dose computed tomography (CT). Materials and Methods: The CT performance phantom was scanned with standard and dose-reduction protocols, including reduced mAs or kVp. Image quality parameters, including noise, spatial resolution, and low-contrast resolution, as well as image texture, were quantitatively evaluated after applying various blending percentages of ASIR. The optimal blending percentage of ASIR that preserved image quality and texture compared to standard-dose CT was investigated for each radiation dose reduction protocol. Results: As the percentage of ASIR increased, noise and spatial resolution decreased, whereas low-contrast resolution increased. In the texture analysis, an increasing percentage of ASIR resulted in an increase of angular second moment, inverse difference moment, and correlation, and in a decrease of contrast and entropy. The 20% and 40% dose reduction protocols with 20% and 40% ASIR blending, respectively, resulted in an optimal quality of images with preservation of the image texture. Conclusion: Blending 40% ASIR into the 40% reduced tube current-time product can maximize radiation dose reduction and preserve adequate image quality and texture. PMID:25510772
Image gathering and digital restoration for fidelity and visual quality
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.; Alter-Gartenberg, Rachel; Rahman, Zia-Ur
1991-01-01
The fidelity and resolution of the traditional Wiener restorations given in the prevalent digital processing literature can be significantly improved when the transformations between the continuous and discrete representations in image gathering and display are accounted for. However, the visual quality of these improved restorations is also more sensitive to the defects caused by aliasing artifacts, colored noise, and ringing near sharp edges. In this paper, these visual defects are characterized, and methods for suppressing them are presented. It is demonstrated how the visual quality of fidelity-maximized images can be improved when (1) the image-gathering system is specifically designed to enhance the performance of the image-restoration algorithm, and (2) the Wiener filter is combined with interactive Gaussian smoothing, synthetic-high edge enhancement, and nonlinear tone-scale transformation. The nonlinear transformation is used primarily to enhance the spatial details that are often obscured when the normally wide dynamic range of natural radiance fields is compressed into the relatively narrow dynamic range of film and other displays.
NASA Astrophysics Data System (ADS)
Sampat, Nitin; Grim, John F.; O'Hara, James E.
1998-04-01
The digital camera market is growing at an explosive rate. At the same time, the quality of photographs printed on ink-jet printers continues to improve. Most consumer cameras are designed with the monitor, not the printer, as the target output device. When a user is printing images from a camera, he/she needs to optimize the camera and printer combination in order to maximize image quality. We describe the details of one such method for improving image quality using an AGFA digital camera and an ink-jet printer combination. Using Adobe Photoshop, we generated optimum red, green, and blue transfer curves that match the scene content to the printer's output capabilities. Application of these curves to the original digital image resulted in a print with more shadow detail, no loss of highlight detail, a smoother tone scale, and more saturated colors. The image also exhibited an improved tonal scale and was visually more pleasing than those captured and printed without any 'correction'. While we report the results for one camera-printer combination, we tested this technique on numerous digital camera and printer combinations and in each case produced a better-looking image. We also discuss the problems we encountered in implementing this technique.
Global quality imaging: emerging issues.
Lau, Lawrence S; Pérez, Maria R; Applegate, Kimberly E; Rehani, Madan M; Ringertz, Hans G; George, Robert
2011-07-01
Quality imaging may be described as "a timely access to and delivery of integrated and appropriate procedures, in a safe and responsive practice, and a prompt delivery of an accurately interpreted report by capable personnel in an efficient, effective, and sustainable manner." For this article, radiation safety is considered as one of the key quality elements. The stakeholders are the drivers of quality imaging. These include those that directly provide or use imaging procedures and others indirectly supporting the system. Imaging is indispensable in health care, and its use has greatly expanded worldwide. Globalization, consumer sophistication, communication and technological advances, corporatization, rationalization, service outsourcing, teleradiology, workflow modularization, and commoditization are reshaping practice. This article defines the emerging issues; an earlier article in the May 2011 issue described possible improvement actions. The issues that could threaten the quality use of imaging for all countries include workforce shortage; increased utilization, population radiation exposure, and cost; practice changes; and efficiency drive and budget constraints. In response to these issues, a range of quality improvement measures, strategies, and actions are used to maximize the benefits and minimize the risks. The 3 measures are procedure justification, optimization of image quality and radiation protection, and error prevention. The development and successful implementation of such improvement actions require leadership, collaboration, and the active participation of all stakeholders to achieve the best outcomes that we all advocate. Copyright © 2011 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Studies of a Next-Generation Silicon-Photomultiplier-Based Time-of-Flight PET/CT System.
Hsu, David F C; Ilan, Ezgi; Peterson, William T; Uribe, Jorge; Lubberink, Mark; Levin, Craig S
2017-09-01
This article presents system performance studies for the Discovery MI PET/CT system, a new time-of-flight system based on silicon photomultipliers. System performance and clinical imaging were compared between this next-generation system and other commercially available PET/CT and PET/MR systems, as well as between different reconstruction algorithms. Methods: Spatial resolution, sensitivity, noise-equivalent counting rate, scatter fraction, counting rate accuracy, and image quality were characterized with the National Electrical Manufacturers Association NU-2 2012 standards. Energy resolution and coincidence time resolution were measured. Tests were conducted independently on two Discovery MI scanners installed at Stanford University and Uppsala University, and the results were averaged. Back-to-back patient scans were also performed between the Discovery MI, Discovery 690 PET/CT, and SIGNA PET/MR systems. Clinical images were reconstructed using both ordered-subset expectation maximization and Q.Clear (block-sequential regularized expectation maximization with point-spread function modeling) and were examined qualitatively. Results: The averaged full widths at half maximum (FWHMs) of the radial/tangential/axial spatial resolution reconstructed with filtered backprojection at 1, 10, and 20 cm from the system center were, respectively, 4.10/4.19/4.48 mm, 5.47/4.49/6.01 mm, and 7.53/4.90/6.10 mm. The averaged sensitivity was 13.7 cps/kBq at the center of the field of view. The averaged peak noise-equivalent counting rate was 193.4 kcps at 21.9 kBq/mL, with a scatter fraction of 40.6%. The averaged contrast recovery coefficients for the image-quality phantom were 53.7, 64.0, 73.1, 82.7, 86.8, and 90.7 for the 10-, 13-, 17-, 22-, 28-, and 37-mm-diameter spheres, respectively. The average photopeak energy resolution was 9.40% FWHM, and the average coincidence time resolution was 375.4 ps FWHM. 
Clinical image comparisons between the PET/CT systems demonstrated the high quality of the Discovery MI. Comparisons between the Discovery MI and SIGNA showed a similar spatial resolution and overall imaging performance. Lastly, the results indicated significantly enhanced image quality and contrast-to-noise performance for Q.Clear, compared with ordered-subset expectation maximization. Conclusion: Excellent performance was achieved with the Discovery MI, including 375 ps FWHM coincidence time resolution and sensitivity of 14 cps/kBq. Comparisons between reconstruction algorithms and other multimodal silicon photomultiplier and non-silicon photomultiplier PET detector system designs indicated that performance can be substantially enhanced with this next-generation system. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
Geometric correction method for 3d in-line X-ray phase contrast image reconstruction
2014-01-01
Background: A mechanical system with imperfect alignment of the X-ray phase contrast imaging (XPCI) components misplaces the projection data, and thus the reconstructed computed tomography (CT) slice images are blurred or exhibit edge artifacts. The features of the biological microstructures under investigation are therefore destroyed unexpectedly, and the spatial resolution of the XPCI image is decreased. This makes data correction an essential pre-processing step for CT reconstruction in XPCI. Methods: To remove unexpected blurs and edge artifacts, a mathematical model for in-line XPCI is built in this paper by considering the primary geometric parameters, which include a rotation angle and a shift variant. Optimal geometric parameters are obtained by solving a maximization problem, using an iterative two-step scheme that performs a composite geometric transformation and then follows with a linear regression process. After applying the geometric transformation with optimal parameters to the projection data, the standard filtered back-projection algorithm is used to reconstruct CT slice images. Results: Numerical experiments were carried out on both synthetic and real in-line XPCI datasets. Experimental results demonstrate that the proposed method improves CT image quality by removing both blurring and edge artifacts at the same time, compared to existing correction methods. Conclusions: The method proposed in this paper provides an effective projection data correction scheme and significantly improves image quality by removing both blurring and edge artifacts at the same time for in-line XPCI. It is easy to implement and can also be extended to other XPCI techniques. PMID:25069768
NASA Astrophysics Data System (ADS)
Kim, Christopher Y.
1999-05-01
Endoscopic images play an important role in describing many gastrointestinal (GI) disorders. The field of radiology has been on the leading edge of creating, archiving and transmitting digital images. With the advent of digital videoendoscopy, endoscopists now have the ability to generate images for storage and transmission. X-rays can be compressed 30-40X without appreciable decline in quality. We reported results of a pilot study using JPEG compression of 24-bit color endoscopic images. For that study, the results indicated that adequate compression ratios vary according to the lesion and that images could be compressed to between 31- and 99-fold smaller than the original size without an appreciable decline in quality. The purpose of this study was to expand upon the methodology of the previous study with an eye towards application for the WWW, a medium which would expand both the clinical and educational uses of color medical images. The results indicate that endoscopists are able to tolerate very significant compression of endoscopic images without loss of clinical image quality. This finding suggests that even 1 MB color images can be compressed to well under 30 KB, which is considered a maximal tolerable image size for downloading on the WWW.
Retinal Image Quality During Accommodation
López-Gil, N.; Martin, J.; Liu, T.; Bradley, A.; Díaz-Muñoz, D.; Thibos, L.
2013-01-01
Purpose We asked if retinal image quality is maximum during accommodation, or sub-optimal due to accommodative error, when subjects perform an acuity task. Methods Subjects viewed a monochromatic (552nm), high-contrast letter target placed at various viewing distances. Wavefront aberrations of the accommodating eye were measured near the endpoint of an acuity staircase paradigm. Refractive state, defined as the optimum target vergence for maximising retinal image quality, was computed by through-focus wavefront analysis to find the power of the virtual correcting lens that maximizes visual Strehl ratio. Results Despite changes in ocular aberrations and pupil size during binocular viewing, retinal image quality and visual acuity typically remain high for all target vergences. When accommodative errors lead to sub-optimal retinal image quality, acuity and measured image quality both decline. However, the effect of accommodative errors on visual acuity is mitigated by the pupillary constriction associated with accommodation and binocular convergence, and by binocular summation of dissimilar retinal image blur. Under monocular viewing conditions some subjects displayed significant accommodative lag that reduced visual performance, an effect that was exacerbated by pharmacological dilation of the pupil. Conclusions Spurious measurement of accommodative error can be avoided when the image quality metric used to determine refractive state is compatible with the focusing criteria used by the visual system to control accommodation. Real focusing errors of the accommodating eye do not necessarily produce a reliably measurable loss of image quality or clinically significant loss of visual performance, probably because of increased depth-of-focus due to pupil constriction. When retinal image quality is close to maximum achievable (given the eye’s higher-order aberrations), acuity is also near maximum. 
A combination of accommodative lag, reduced image quality, and reduced visual function may be a useful sign for diagnosing functionally-significant accommodative errors indicating the need for therapeutic intervention. PMID:23786386
Influence of reconstruction algorithms on image quality in SPECT myocardial perfusion imaging.
Davidsson, Anette; Olsson, Eva; Engvall, Jan; Gustafsson, Agnetha
2017-11-01
We investigated if image- and diagnostic quality in SPECT MPI could be maintained despite a reduced acquisition time adding Depth Dependent Resolution Recovery (DDRR) for image reconstruction. Images were compared with filtered back projection (FBP) and iterative reconstruction using Ordered Subsets Expectation Maximization with (IRAC) and without (IRNC) attenuation correction (AC). Stress- and rest imaging for 15 min was performed on 21 subjects with a dual head gamma camera (Infinia Hawkeye; GE Healthcare), ECG-gating with 8 frames/cardiac cycle and a low-dose CT-scan. A 9 min acquisition was generated using five instead of eight gated frames and was reconstructed with DDRR, with (IRACRR) and without AC (IRNCRR) as well as with FBP. Three experienced nuclear medicine specialists visually assessed anonymized images according to eight criteria on a four point scale, three related to image quality and five to diagnostic confidence. Statistical analysis was performed using Visual Grading Regression (VGR). Observer confidence in statements on image quality was highest for the images that were reconstructed using DDRR (P<0·01 compared to FBP). Iterative reconstruction without DDRR was not superior to FBP. Interobserver variability was significant for statements on image quality (P<0·05) but lower in the diagnostic statements on ischemia and scar. The confidence in assessing ischemia and scar was not different between the reconstruction techniques (P = n.s.). SPECT MPI collected in 9 min, reconstructed with DDRR and AC, produced better image quality than the standard procedure. The observers expressed the highest diagnostic confidence in the DDRR reconstruction. © 2016 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.
Choi, Yu-Na; Lee, Seungwan; Kim, Hee-Joung
2016-01-21
K-edge imaging with photon counting x-ray detectors (PCXDs) can improve image quality compared with conventional energy integrating detectors. However, low-energy x-ray photons below the K-edge absorption energy of a target material do not contribute to image formation in K-edge imaging and are likely to be completely absorbed by the object. In this study, we applied x-ray filters to K-edge imaging with a PCXD based on cadmium zinc telluride to reduce the radiation dose induced by low-energy x-ray photons. We used aluminum (Al) filters with different thicknesses as the low-energy x-ray filters and implemented iodine K-edge imaging with an energy bin of 34-48 keV at tube voltages of 50, 70 and 90 kVp. The effects of the low-energy x-ray filters on the K-edge imaging were investigated with respect to signal-difference-to-noise ratio (SDNR), entrance surface air kerma (ESAK) and figure of merit (FOM). The highest value of SDNR was observed in the K-edge imaging with a 2 mm Al filter, and the SDNR decreased with increasing filter thickness. Compared to the K-edge imaging with a 2 mm Al filter, the ESAK was reduced by 66%, 48% and 39% in the K-edge imaging with a 12 mm Al filter for 50 kVp, 70 kVp and 90 kVp, respectively. The FOM values, which took into account the ESAK and SDNR, were maximized for 8, 6 to 8 and 4 mm Al filters at 50 kVp, 70 kVp and 90 kVp, respectively. We concluded that the use of an optimal low-energy filter thickness, which was determined by maximizing the FOM, could significantly reduce radiation dose while maintaining image quality in K-edge imaging with the PCXD.
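The abstract does not spell out the FOM definition; a common dose-efficiency choice in x-ray imaging is FOM = SDNR²/ESAK, which the sketch below uses to pick a filter thickness. The numbers are illustrative stand-ins, not the paper's measured data, though they are chosen to reproduce the stated trend (SDNR highest at 2 mm, FOM peaking near 8 mm at 50 kVp).

```python
import numpy as np

# Hypothetical measurements for Al filter thicknesses (mm) at one kVp.
thickness = np.array([2, 4, 6, 8, 10, 12])                  # mm Al
sdnr      = np.array([9.0, 8.4, 7.9, 7.5, 6.9, 6.3])        # signal-difference-to-noise
esak      = np.array([1.00, 0.80, 0.66, 0.57, 0.52, 0.49])  # relative dose

fom = sdnr ** 2 / esak          # dose-efficiency figure of merit
best = thickness[np.argmax(fom)]
```

Squaring the SDNR before dividing by dose makes the FOM independent of how the available dose is split across acquisitions, which is why this form is the conventional one.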
The formation of quantum images and their transformation and super-resolution reading
NASA Astrophysics Data System (ADS)
Balakin, D. A.; Belinsky, A. V.
2016-05-01
Images formed by light with suppressed photon fluctuations are interesting objects for studies with the aim of increasing their limiting information capacity and quality. This light in the sub-Poisson state can be prepared in a resonator filled with a medium with Kerr nonlinearity, in which self-phase modulation takes place. Spatially and temporally multimode light beams are studied and the production of spatial frequency spectra of suppressed photon fluctuations is described. The efficient operation regimes of the system are found. A particular schematic solution is described, which makes it possible to realize to a maximum degree the potential inherent in the formation of squeezed states of light during self-phase modulation in a resonator, for maximal suppression of amplitude quantum noise in two-dimensional imaging. The efficiency of using light with suppressed quantum fluctuations for computer image processing is studied. An algorithm is described for interpreting measurements to increase the resolution beyond the geometrical resolution. A mathematical model that characterizes the measurement scheme is constructed and the problem of image reconstruction is solved. The algorithm for the interpretation of images is verified. Conditions are found for the efficient application of sub-Poisson light for super-resolution imaging. It is found that the image should have a low contrast and be maximally transparent.
Optimization of contrast-enhanced spectral mammography depending on clinical indication
Dromain, Clarisse; Canale, Sandra; Saab-Puong, Sylvie; Carton, Ann-Katherine; Muller, Serge; Fallenberg, Eva Maria
2014-01-01
The objective is to optimize low-energy (LE) and high-energy (HE) exposure parameters of contrast-enhanced spectral mammography (CESM) examinations in four different clinical applications for which different levels of average glandular dose (AGD) and ratios between LE and total doses are required. The optimization was performed on a Senographe DS with a SenoBright® upgrade. Simulations were performed to find the optima by maximizing the contrast-to-noise ratio (CNR) on the recombined CESM image using different targeted doses and LE image quality. The linearity between iodine concentration and CNR as well as the minimal detectable iodine concentration was assessed. The image quality of the LE image was assessed on the CDMAM contrast-detail phantom. Experiments confirmed the optima found on simulation. The CNR was higher for each clinical indication than for SenoBright®, including the screening indication for which the total AGD was 22% lower. Minimal iodine concentrations detectable in the case of a 3-mm-diameter round tumor were 12.5% lower than those obtained for the same dose in the clinical routine. LE image quality satisfied EUREF acceptable limits for threshold contrast. This newly optimized set of acquisition parameters allows increased contrast detectability compared to parameters currently used without a significant loss in LE image quality. PMID:26158058
Learning the manifold of quality ultrasound acquisition.
El-Zehiry, Noha; Yan, Michelle; Good, Sara; Fang, Tong; Zhou, S Kevin; Grady, Leo
2013-01-01
Ultrasound acquisition is a challenging task that requires simultaneous adjustment of several acquisition parameters (the depth, the focus, the frequency and its operation mode). If the acquisition parameters are not properly chosen, the resulting image will have a poor quality and will degrade the patient diagnosis and treatment workflow. Several hardware-based systems for autotuning the acquisition parameters have been previously proposed, but these solutions were largely abandoned because they failed to properly account for tissue inhomogeneity and other patient-specific characteristics. Consequently, in routine practice the clinician either uses population-based parameter presets or manually adjusts the acquisition parameters for each patient during the scan. In this paper, we revisit the problem of autotuning the acquisition parameters by taking a completely novel approach and producing a solution based on image analytics. Our solution is inspired by the autofocus capability of conventional digital cameras, but is significantly more challenging because the number of acquisition parameters is large and the determination of "good quality" images is more difficult to assess. Surprisingly, we show that the set of acquisition parameters which produce images that are favored by clinicians comprise a 1D manifold, allowing for a real-time optimization to maximize image quality. We demonstrate our method for acquisition parameter autotuning on several live patients, showing that our system can start with a poor initial set of parameters and automatically optimize the parameters to produce high quality images.
NASA Astrophysics Data System (ADS)
Ahn, Sangtae; Ross, Steven G.; Asma, Evren; Miao, Jun; Jin, Xiao; Cheng, Lishui; Wollenweber, Scott D.; Manjeshwar, Ravindra M.
2015-08-01
Ordered subset expectation maximization (OSEM) is the most widely used algorithm for clinical PET image reconstruction. OSEM is usually stopped early and post-filtered to control image noise and does not necessarily achieve optimal quantitation accuracy. As an alternative to OSEM, we have recently implemented a penalized likelihood (PL) image reconstruction algorithm for clinical PET using the relative difference penalty with the aim of improving quantitation accuracy without compromising visual image quality. Preliminary clinical studies have demonstrated that visual image quality, including lesion conspicuity, in images reconstructed by the PL algorithm is better than or at least as good as that in OSEM images. In this paper we evaluate lesion quantitation accuracy of the PL algorithm with the relative difference penalty compared to OSEM by using various data sets including phantom data acquired with an anthropomorphic torso phantom, an extended oval phantom and the NEMA image quality phantom; clinical data; and hybrid clinical data generated by adding simulated lesion data to clinical data. We focus on mean standardized uptake values and compare them for PL and OSEM using both time-of-flight (TOF) and non-TOF data. The results demonstrate improvements of PL in lesion quantitation accuracy compared to OSEM with a particular improvement in cold background regions such as lungs.
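OSEM is an accelerated, subset-based form of the basic MLEM update. The sketch below shows the MLEM multiplicative iteration on a toy noiseless system with numpy; the system matrix, activity vector, and iteration count are illustrative assumptions, and this is not the vendor's PL algorithm, just the baseline update it builds on.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.uniform(0.1, 1.0, size=(12, 4))   # toy system matrix (detector bin x voxel)
x_true = np.array([2.0, 0.5, 1.5, 3.0])   # toy activity distribution
y = A @ x_true                            # noiseless projection data

x = np.ones(4)                            # nonnegative initial estimate
sens = A.T @ np.ones(len(y))              # sensitivity image (backprojection of ones)
for _ in range(2000):
    # Multiplicative MLEM update: backproject the ratio of measured to
    # estimated projections, normalized by the sensitivity image.
    x *= (A.T @ (y / (A @ x))) / sens
```

The update is multiplicative, so a nonnegative start stays nonnegative; OSEM applies the same update using only a subset of the projection data per sub-iteration, which is why early stopping and post-filtering are needed to control its noise.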
Muon tomography imaging improvement using optimized limited angle data
NASA Astrophysics Data System (ADS)
Bai, Chuanyong; Simon, Sean; Kindem, Joel; Luo, Weidong; Sossong, Michael J.; Steiger, Matthew
2014-05-01
Image resolution of muon tomography is limited by the range of zenith angles of cosmic ray muons and the flux rate at sea level. Low flux rate limits the use of advanced data rebinning and processing techniques to improve image quality. By optimizing the limited angle data, however, image resolution can be improved. To demonstrate the idea, physical data of tungsten blocks were acquired on a muon tomography system. The angular distribution and energy spectrum of muons measured on the system were also used to generate simulation data of tungsten blocks in different arrangements (geometries). The data were grouped into subsets using the zenith angle and volume images were reconstructed from the data subsets using two algorithms. One was a distributed PoCA (point of closest approach) algorithm and the other was an accelerated iterative maximum likelihood expectation maximization (MLEM) algorithm. Image resolution was compared for different subsets. Results showed that image resolution was better in the vertical direction for subsets with greater zenith angles and better in the horizontal plane for subsets with smaller zenith angles. The overall image resolution appeared to be a compromise among those of the different subsets. This work suggests that the acquired data can be grouped into different limited angle data subsets for optimized image resolution in desired directions. Use of multiple images with resolution optimized in different directions can improve overall imaging fidelity and benefit the intended applications.
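The PoCA step itself is standard line-line geometry: given the incoming and outgoing muon tracks (a point and a direction each), find the closest points on the two lines and take their midpoint as the scattering-vertex estimate. The sketch below is that basic construction, not the distributed variant the authors describe.

```python
import numpy as np

def poca(p1, d1, p2, d2):
    """Point of closest approach between lines p1 + t*d1 and p2 + s*d2."""
    d1, d2 = np.asarray(d1, float), np.asarray(d2, float)
    w0 = np.asarray(p1, float) - np.asarray(p2, float)
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b                 # zero only for parallel tracks
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    c1 = p1 + t * d1                      # closest point on the incoming track
    c2 = p2 + s * d2                      # closest point on the outgoing track
    return 0.5 * (c1 + c2)                # midpoint = scattering vertex estimate

# Incoming and outgoing tracks that cross at (1, 2, 3):
vertex = poca(np.array([0.0, 0.0, 0.0]), np.array([1.0, 2.0, 3.0]),
              np.array([1.0, 2.0, 0.0]), np.array([0.0, 0.0, 1.0]))
```

For tracks that truly intersect, the midpoint coincides with the intersection; for skew tracks (the usual case with measurement noise) it splits the gap between them.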
On the release of cppxfel for processing X-ray free-electron laser images.
Ginn, Helen Mary; Evans, Gwyndaf; Sauter, Nicholas K; Stuart, David Ian
2016-06-01
As serial femtosecond crystallography expands towards a variety of delivery methods, including chip-based methods, and smaller collected data sets, the requirement to optimize the data analysis to produce maximum structure quality is becoming increasingly pressing. Here cppxfel , a software package primarily written in C++, which showcases several data analysis techniques, is released. This software package presently indexes images using DIALS (diffraction integration for advanced light sources) and performs an initial orientation matrix refinement, followed by post-refinement of individual images against a reference data set. Cppxfel is released with the hope that the unique and useful elements of this package can be repurposed for existing software packages. However, as released, it produces high-quality crystal structures and is therefore likely to be also useful to experienced users of X-ray free-electron laser (XFEL) software who wish to maximize the information extracted from a limited number of XFEL images.
Optimized multiple linear mappings for single image super-resolution
NASA Astrophysics Data System (ADS)
Zhang, Kaibing; Li, Jie; Xiong, Zenggang; Liu, Xiuping; Gao, Xinbo
2017-12-01
Learning piecewise linear regression has been recognized as an effective way for example learning-based single image super-resolution (SR) in the literature. In this paper, we employ an expectation-maximization (EM) algorithm to further improve the SR performance of our previous multiple linear mappings (MLM) based SR method. In the training stage, the proposed method starts with a set of linear regressors obtained by the MLM-based method, and then jointly optimizes the clustering results and the low- and high-resolution subdictionary pairs for the regression functions by using the metric of the reconstruction errors. In the test stage, we select the optimal regressor for SR reconstruction by accumulating the reconstruction errors of the m-nearest neighbors in the training set. Thorough experiments carried out on six publicly available datasets demonstrate that the proposed SR method can yield high-quality images with finer details and sharper edges in terms of both quantitative and perceptual image quality assessments.
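The multiple-linear-mappings idea — partition the feature space, fit one linear regressor per partition, and route each test sample to a regressor — can be shown in miniature. The sketch below is a hypothetical toy with a fixed, hand-coded partition; the paper instead learns the clusters jointly with the regressors via EM, and its features are image patches rather than 2D points.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "LR -> HR" pairs generated by two different linear maps,
# one per region of feature space (stand-ins for patch clusters).
X = rng.uniform(-1, 1, size=(400, 2))
masks = X[:, 0] < 0
W_true = {0: np.array([[2.0, 0.0], [0.0, 1.0]]),
          1: np.array([[1.0, 1.0], [-1.0, 2.0]])}
Y = np.where(masks[:, None], X @ W_true[0].T, X @ W_true[1].T)

# Hand-coded "clustering": the sign of the first feature.
def cluster(x):
    return 0 if x[0] < 0 else 1

# Fit one least-squares linear mapping per cluster.
W_hat = {}
for k in (0, 1):
    idx = np.array([cluster(x) == k for x in X])
    W_hat[k], *_ = np.linalg.lstsq(X[idx], Y[idx], rcond=None)

def predict(x):
    """Route the sample to its cluster's regressor."""
    return x @ W_hat[cluster(x)]

x_test = np.array([0.5, -0.25])
y_pred = predict(x_test)
```

Because the toy data are noiseless and the partition matches the generator, each fitted mapping recovers its cluster's true linear map exactly; with real patches, the reconstruction-error metric drives both the cluster assignments and the regressor updates.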
Wavefront sensorless adaptive optics temporal focusing-based multiphoton microscopy
Chang, Chia-Yuan; Cheng, Li-Chung; Su, Hung-Wei; Hu, Yvonne Yuling; Cho, Keng-Chi; Yen, Wei-Chung; Xu, Chris; Dong, Chen Yuan; Chen, Shean-Jen
2014-01-01
Temporal profile distortions reduce excitation efficiency and image quality in temporal focusing-based multiphoton microscopy. In order to compensate the distortions, a wavefront sensorless adaptive optics system (AOS) was integrated into the microscope. The feedback control signal of the AOS was acquired from local image intensity maximization via a hill-climbing algorithm. The control signal was then utilized to drive a deformable mirror in such a way as to eliminate the distortions. With the AOS correction, not only is the axial excitation symmetrically refocused, but the axial resolution with full two-photon excited fluorescence (TPEF) intensity is also maintained. Hence, the contrast of the TPEF image of a R6G-doped PMMA thin film is enhanced along with a 3.7-fold increase in intensity. Furthermore, the TPEF image quality of 1μm fluorescent beads sealed in agarose gel at different depths is improved. PMID:24940539
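The sensorless-AO feedback loop — perturb each deformable-mirror actuator and keep changes that raise the image-intensity metric — reduces to coordinate-wise hill climbing. The sketch below runs it on a synthetic metric; the quadratic surrogate, actuator count, and step size are made-up stand-ins for the measured two-photon intensity and real mirror.

```python
import numpy as np

def metric(v):
    """Stand-in for the measured image intensity: peaks at actuator
    voltages v_opt, which the controller does not know."""
    v_opt = np.array([0.3, -0.2, 0.5])
    return 1.0 - np.sum((v - v_opt) ** 2)

def hill_climb(v, step=0.05, sweeps=200):
    best = metric(v)
    for _ in range(sweeps):
        for i in range(len(v)):               # perturb one actuator at a time
            for delta in (+step, -step):
                trial = v.copy()
                trial[i] += delta
                m = metric(trial)
                if m > best:                  # keep changes that brighten the image
                    v, best = trial, m
    return v, best

v_final, m_final = hill_climb(np.zeros(3))
```

A fixed step converges to within one step of the optimum; real systems typically shrink the step once no perturbation improves the metric, since the measured intensity is noisy.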
Morsbach, Fabian; Gordic, Sonja; Desbiolles, Lotus; Husarik, Daniela; Frauenfelder, Thomas; Schmidt, Bernhard; Allmendinger, Thomas; Wildermuth, Simon; Alkadhi, Hatem; Leschka, Sebastian
2014-08-01
To evaluate image quality, maximal heart rate allowing for diagnostic imaging, and radiation dose of turbo high-pitch dual-source coronary computed tomographic angiography (CCTA). First, a cardiac motion phantom simulating heart rates (HRs) from 60-90 bpm in 5-bpm steps was examined on a third-generation dual-source 192-slice CT (prospective ECG-triggering, pitch 3.2; rotation time, 250 ms). Subjective image quality regarding the presence of motion artefacts was interpreted by two readers on a four-point scale (1, excellent; 4, non-diagnostic). Objective image quality was assessed by calculating distortion vectors. Thereafter, 20 consecutive patients (median, 50 years) undergoing clinically indicated CCTA were included. In the phantom study, image quality was rated diagnostic up to an HR of 75 bpm, with object distortion being 1 mm or less. Distortion increased above 1 mm at HRs of 80-90 bpm. Patients had a mean HR of 66 bpm (47-78 bpm). Coronary segments were of diagnostic image quality for all patients with HR up to 73 bpm. Average effective radiation dose in patients was 0.6 ± 0.3 mSv. Our combined phantom and patient study indicates that CCTA with turbo high-pitch third-generation dual-source 192-slice CT can be performed at HRs up to 75 bpm while maintaining diagnostic image quality, being associated with an average radiation dose of 0.6 mSv. • CCTA is feasible with the turbo high-pitch mode. • Turbo high-pitch CCTA provides diagnostic image quality up to 73 bpm. • The radiation dose of high-pitch CCTA is 0.6 mSv on average.
Optimization of dose and image quality in adult and pediatric computed tomography scans
NASA Astrophysics Data System (ADS)
Chang, Kwo-Ping; Hsu, Tzu-Kun; Lin, Wei-Ting; Hsu, Wen-Lin
2017-11-01
An exploration to maximize CT image quality while reducing radiation dose was conducted, controlling for multiple factors. The kVp, mAs, and iterative reconstruction (IR) settings affect both CT image quality and the radiation dose absorbed. The optimal protocols (kVp, mAs, IR) were derived via a figure of merit (FOM) based on CT image quality (CNR) and the CT dose index (CTDIvol). CT image quality metrics such as CT number accuracy, SNR, CNR of low-contrast materials, and line-pair resolution were also analyzed as auxiliary assessments. CT protocols were carried out with an ACR accreditation phantom and a five-year-old pediatric head phantom. The threshold values of the adult CT scan parameters, 100 kVp and 150 mAs, were determined from the CT number test and line pairs in ACR phantom modules 1 and 4, respectively. The findings of this study suggest that the optimal scanning parameters for adults be set at 100 kVp and 150-250 mAs. However, for improved low-contrast resolution, 120 kVp and 150-250 mAs are optimal. Optimal settings for pediatric head CT were 80 kVp/50 mAs for the maxillary sinus and brain stem, and 80 kVp/300 mAs for the temporal bone. SNR is reliable neither as an independent image-quality parameter nor as the metric for determining optimal CT scan parameters. The iterative reconstruction (IR) approach is strongly recommended for both adult and pediatric CT scanning, as it markedly improves image quality without affecting radiation dose.
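The abstract does not state the exact FOM; a common dose-efficiency choice in CT protocol optimization is FOM = CNR²/CTDIvol, sketched below. The protocol values are invented for illustration, not the study's measurements.

```python
def figure_of_merit(cnr, ctdi_vol):
    """Dose-efficiency FOM = CNR^2 / CTDIvol (a common convention;
    the paper's exact definition may differ)."""
    return cnr ** 2 / ctdi_vol

# (kVp, mAs) -> (CNR, CTDIvol in mGy); illustrative numbers only.
protocols = {
    (100, 150): (4.2, 10.1),
    (120, 150): (5.0, 14.9),
    (100, 250): (5.4, 16.8),
}
best = max(protocols, key=lambda p: figure_of_merit(*protocols[p]))
```

With these toy numbers the 100 kVp/150 mAs protocol wins, mirroring the structure (though not the data) of the paper's conclusion.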
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balakin, D. A.; Belinsky, A. V., E-mail: belinsky@inbox.ru
Images formed by light with suppressed photon fluctuations are interesting objects for studies aimed at increasing their limiting information capacity and quality. Light in this sub-Poissonian state can be prepared in a resonator filled with a medium with Kerr nonlinearity, in which self-phase modulation takes place. Spatially and temporally multimode light beams are studied, and the production of spatial-frequency spectra of suppressed photon fluctuations is described. Efficient operating regimes of the system are found. A particular schematic solution is described that allows one to realize, to the maximum degree, the potential of squeezed-state formation during self-phase modulation in a resonator for maximal suppression of amplitude quantum noise in two-dimensional imaging. The efficiency of using light with suppressed quantum fluctuations for computer image processing is studied. An algorithm is described for interpreting measurements to increase the resolution beyond the geometrical resolution. A mathematical model characterizing the measurement scheme is constructed, and the problem of image reconstruction is solved. The algorithm for the interpretation of images is verified. Conditions are found for the efficient application of sub-Poissonian light to super-resolution imaging. It is found that the image should have low contrast and be maximally transparent.
The Advantages and Disadvantages of Using a Centralized In-House Marketing Office.
ERIC Educational Resources Information Center
Miller, Ronald H.
A centralized marketing and promotion office may or may not be a panacea for a continuing education program. Five major advantages to centralization of the marketing and promotion function are minimization of costs, a school-wide marketing strategy, maximization of the school image, enhanced quality control, and building of technical expertise of…
CUSTOM OPTIMIZATION OF INTRAOCULAR LENS ASPHERICITY
Koch, Douglas D.; Wang, Li
2007-01-01
Purpose: To investigate the optimal amount of ocular spherical aberration (SA) in an intraocular lens (IOL) to maximize optical quality. Methods: In 154 eyes of 94 patients aged 40 to 80 years, implantation of aspheric IOLs was simulated with different amounts of SA to produce residual ocular SA from −0.30 μm to +0.30 μm. Using the VOL-CT program (Sarver & Associates, Carbondale, Illinois), corneal wavefront aberrations up to 6th order were computed from corneal topographic elevation data (Humphrey Atlas, Carl Zeiss Meditec, Inc, Dublin, California). Using the ZernikeTool program (Advanced Medical Optics, Inc, Santa Ana, California), the polychromatic point spread function with Stiles-Crawford effect was calculated for the residual ocular higher-order aberrations (HOAs, 3rd to 6th order, 6-mm pupil), assuming fully corrected 2nd-order aberrations. Five parameters were used to quantify optical image quality, and we determined the residual ocular SA at which the maximal image quality was achieved for each eye. Stepwise multiple regression analysis was performed to assess the predictors for optimal SA of each eye. Results: The optimal SA varied widely among eyes. Most eyes had best image quality with low amounts of negative SA. For modulation transfer function volume up to 15 cycles/degree, the amount of optimal SA could be predicted based on other HOAs of the cornea with a coefficient of multiple determination (R2) of 79%. Eight Zernike terms significantly contributed to the optimal SA in this model; in order of importance from most to least, they were: Z60, Z62, Z42, Z53, Z64, Z3−1, Z33, and Z31. For the other 4 measures of visual quality, the coefficients of determination varied from 32% to 63%. Conclusion: The amount of ocular SA producing best image quality varied widely among subjects and could be predicted based on corneal HOAs. Selection of an aspheric IOL should be customized according to the full spectrum of corneal HOAs and not 4th-order SA alone.
PMID:18427592
Influence of X-ray scatter radiation on image quality in Digital Breast Tomosynthesis (DBT)
NASA Astrophysics Data System (ADS)
Rodrigues, M. J.; Di Maria, S.; Baptista, M.; Belchior, A.; Afonso, J.; Venâncio, J.; Vaz, P.
2017-11-01
Digital breast tomosynthesis (DBT) is a quasi-three-dimensional imaging technique that was developed to address the principal limitation of mammography, namely the overlapping-tissue effect. This issue in standard mammography (SM) leads to two main problems: low sensitivity (difficulty in detecting lesions) and low specificity (a non-negligible percentage of false positives). Although DBT is now being introduced into clinical practice, the features of this technique have not yet been fully and accurately assessed. Consequently, optimization studies aimed at choosing the parameters that maximize image quality within the known limits of breast dosimetry are currently being performed. In DBT, scatter radiation can lead to a loss of contrast and an increase in image noise by reducing the signal-difference-to-noise ratio (SDNR) of a lesion. Moreover, the use of an anti-scatter grid is a concern because of the low photon flux available per projection. For this reason, the main aim of this study was to analyze the influence of scatter radiation on image quality and on the dose delivered to the breast. In particular, a detailed analysis of the effect of scatter radiation on the optimal energy that maximizes the SDNR was performed for different monochromatic energies and voltages. To reach this objective, the PenEasy Monte Carlo (MC) simulation tool, embedded in the general-purpose main program PENELOPE, was used. After a successful validation of the MC model against measurements, 2D projection images of primary, coherent, and incoherent photons were obtained. For this, homogeneous breast phantoms (2, 4, 6, and 8 cm thick) with 25%, 50%, and 75% glandular compositions were used, each including a 5 mm thick tumor. The images were generated for monochromatic X-ray energies in the range from 16 keV to 32 keV.
For each of the 25 angular projections (covering an arc of 50°), the scatter-to-primary ratio (SPR), the mean glandular dose (MGD), and the signal-difference-to-noise ratio (SDNR) were calculated to determine the conditions (i.e., energy, angular projection, breast thickness) under which scatter radiation affects image quality. Results for these quantities are reported.
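The two per-projection figures quoted above have simple definitions; a minimal sketch follows, with invented numbers rather than the study's results.

```python
def scatter_to_primary_ratio(scatter, primary):
    """SPR: scattered signal relative to the primary (unscattered) signal."""
    return scatter / primary

def sdnr(signal_bg, signal_lesion, noise):
    """Signal-difference-to-noise ratio of a lesion against background."""
    return abs(signal_bg - signal_lesion) / noise

spr = scatter_to_primary_ratio(0.3, 1.0)    # 30% scatter fraction (illustrative)
contrast = sdnr(1.0, 0.8, noise=0.05)       # lesion SDNR (illustrative)
```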
Image enhancement using the hypothesis selection filter: theory and application to JPEG decoding.
Wong, Tak-Shing; Bouman, Charles A; Pollak, Ilya
2013-03-01
We introduce the hypothesis selection filter (HSF) as a new approach for image quality enhancement. We assume that a set of filters has been selected a priori to improve the quality of a distorted image containing regions with different characteristics. At each pixel, HSF uses a locally computed feature vector to predict the relative performance of the filters in estimating the corresponding pixel intensity in the original undistorted image. The prediction result then determines the proportion of each filter used to obtain the final processed output. In this way, the HSF serves as a framework for combining the outputs of a number of different user selected filters, each best suited for a different region of an image. We formulate our scheme in a probabilistic framework where the HSF output is obtained as the Bayesian minimum mean square error estimate of the original image. Maximum likelihood estimates of the model parameters are determined from an offline fully unsupervised training procedure that is derived from the expectation-maximization algorithm. To illustrate how to apply the HSF and to demonstrate its potential, we apply our scheme as a post-processing step to improve the decoding quality of JPEG-encoded document images. The scheme consistently improves the quality of the decoded image over a variety of image content with different characteristics. We show that our scheme results in quantitative improvements over several other state-of-the-art JPEG decoding methods.
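At a single pixel, the HSF output described above is a convex combination of the candidate filter outputs weighted by predicted posterior probabilities. The sketch below shows only that blending step, with made-up posteriors; the paper's feature extraction and EM-trained classifier are omitted.

```python
def hsf_pixel(filter_outputs, posteriors):
    """Bayesian MMSE estimate at one pixel (sketch): blend candidate
    filter outputs in proportion to their posterior probabilities."""
    assert abs(sum(posteriors) - 1.0) < 1e-9   # posteriors must sum to 1
    return sum(w * y for w, y in zip(posteriors, filter_outputs))

# Two candidate filters (e.g. a smoother and a sharpener) at one pixel:
estimate = hsf_pixel([120.0, 140.0], [0.25, 0.75])
```

In the full method the posteriors vary per pixel, so each image region is dominated by the filter best suited to it.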
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolly, S; Mutic, S; Anastasio, M
Purpose: Traditionally, image quality in radiation therapy is assessed subjectively or by utilizing physically-based metrics. Some model observers exist for task-based medical image quality assessment, but almost exclusively for diagnostic imaging tasks. As opposed to disease diagnosis, the task for image observers in radiation therapy is to utilize the available images to design and deliver a radiation dose which maximizes patient disease control while minimizing normal tissue damage. The purpose of this study was to design and implement a new computer simulation model observer to enable task-based image quality assessment in radiation therapy. Methods: A modular computer simulation framework was developed to model the radiotherapy observer by simulating an end-to-end radiation therapy treatment. Given images and the ground-truth organ boundaries from a numerical phantom as inputs, the framework simulates an external beam radiation therapy treatment and quantifies patient treatment outcomes using the previously defined therapeutic operating characteristic (TOC) curve. As a preliminary demonstration, TOC curves were calculated for various CT acquisition and reconstruction parameters, with the goal of assessing and optimizing simulation CT image quality for radiation therapy. Sources of randomness and bias within the system were analyzed. Results: The relationship between CT imaging dose and patient treatment outcome was objectively quantified in terms of a single value, the area under the TOC (AUTOC) curve. The AUTOC decreases more rapidly for low-dose imaging protocols. AUTOC variation introduced by the dose optimization algorithm was approximately 0.02% at the 95% confidence interval. Conclusion: A model observer has been developed and implemented to assess image quality based on radiation therapy treatment efficacy. It enables objective determination of appropriate imaging parameter values (e.g. imaging dose).
Framework flexibility allows for incorporation of additional modules to include any aspect of the treatment process, and therefore has great potential for both assessment and optimization within radiation therapy.
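The AUTOC summarizing the TOC curve above is simply an area under a sampled curve; a minimal trapezoid-rule sketch follows. The TOC sample points are invented for illustration, not taken from the study.

```python
def area_under_curve(xs, ys):
    """Trapezoidal area under a sampled curve, e.g. the AUTOC
    (area under the therapeutic operating characteristic)."""
    return sum((x1 - x0) * (y0 + y1) / 2.0
               for x0, x1, y0, y1 in zip(xs, xs[1:], ys, ys[1:]))

# Illustrative TOC samples: tumor control (y) vs normal-tissue complication (x).
auc = area_under_curve([0.0, 0.5, 1.0], [0.0, 0.8, 1.0])
```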
Adaptive online self-gating (ADIOS) for free-breathing noncontrast renal MR angiography.
Xie, Yibin; Fan, Zhaoyang; Saouaf, Rola; Natsuaki, Yutaka; Laub, Gerhard; Li, Debiao
2015-01-01
To develop a respiratory self-gating method, adaptive online self-gating (ADIOS), for noncontrast MR angiography (NC MRA) of renal arteries to overcome some limitations of current free-breathing methods. A NC MRA pulse sequence for online respiratory self-gating was developed based on three-dimensional balanced steady-state free precession (bSSFP) and slab-selective inversion-recovery. Motion information was derived directly from the slab being imaged for online gating. Scan efficiency was maintained by an automatic adaptive online algorithm. Qualitative and quantitative assessments of image quality were performed and results were compared with conventional diaphragm navigator (NAV). NC MRA imaging was successfully completed in all subjects (n = 15). Similarly good image quality was observed in the proximal-middle renal arteries with ADIOS compared with NAV. Superior image quality was observed in the middle-distal renal arteries in the right kidneys with no NAV-induced artifacts. Maximal visible artery length was significantly longer with ADIOS versus NAV in the right kidneys. NAV setup was completely eliminated and scan time was significantly shorter with ADIOS on average compared with NAV. The proposed ADIOS technique for noncontrast MRA provides high-quality visualization of renal arteries with no diaphragm navigator-induced artifacts, simplified setup, and shorter scan time. © 2014 Wiley Periodicals, Inc.
Sharon, Jeffrey D; Northcutt, Benjamin G; Aygun, Nafi; Francis, Howard W
2016-10-01
To study the quality and usability of magnetic resonance imaging (MRI) obtained with a cochlear implant magnet in situ. Retrospective chart review. Tertiary care center. All patients who underwent brain MRI with a cochlear implant magnet in situ from 2007 to 2016. None. Grade of view of the ipsilateral internal auditory canal (IAC) and cerebellopontine angle (CPA). Inclusion criteria were met by 765 image sequences in 57 MRI brain scans. For the ipsilateral IAC, significant predictors of a grade 1 (normal) view included: absence of fat saturation algorithm (p = 0.001), nonaxial plane of imaging (p = 0.01), and contrast administration (p = 0.001). For the ipsilateral CPA, significant predictors of a grade 1 view included: absence of fat saturation algorithm (p = 0.001), high-resolution images (p = 0.001), and nonaxial plane of imaging (p = 0.001). Overall, coronal T1 high-resolution images produced the highest percentage of grade 1 views (89%). Fat saturation also caused a secondary ring-shaped distortion artifact, which impaired the view of the contralateral CPA 52.7% of the time, and the contralateral IAC 42.8% of the time. MRI scans without any usable (grade 1) sequences had fewer overall sequences (N = 4.3) than scans with at least one usable sequence (N = 7.1, p = 0.001). MRI image quality with a cochlear implant magnet in situ depends on several factors, which can be modified to maximize image quality in this unique patient population.
Samei, Ehsan; Saunders, Robert S.
2014-01-01
Dual-energy contrast-enhanced breast tomosynthesis is a promising technique for obtaining three-dimensional functional information from the breast with high resolution and speed. To optimize this new method, this study searched for the beam quality that maximized image quality in terms of mass detection performance. A digital tomosynthesis system was modeled using a fast ray-tracing algorithm, which created simulated projection images by tracking photons through a voxelized anatomical breast phantom containing iodinated lesions. The single-energy images were combined into dual-energy images through a weighted log subtraction process. The weighting factor was optimized to minimize anatomical noise, while the dose distribution was chosen to minimize quantum noise. The dual-energy images were analyzed for the signal difference to noise ratio (SdNR) of iodinated masses. The fast ray-tracing explored 523,776 dual-energy combinations to identify which yielded the optimal mass SdNR. The ray-tracing results were verified using a Monte Carlo model of a breast tomosynthesis system with a selenium-based flat-panel detector. The projection images from our voxelized breast phantom were obtained at a constant total glandular dose. The projections were combined using weighted log subtraction and reconstructed using commercial reconstruction software. The lesion SdNR was measured in the central reconstructed slice. The SdNR performance varied markedly across the kVp and filtration space. Ray-tracing results indicated that the mass SdNR was maximized with a high-energy tungsten beam at 49 kVp with 92.5 μm of copper filtration and a low-energy tungsten beam at 49 kVp with 95 μm of tin filtration. This result was consistent with the Monte Carlo findings. This technique led to a mass SdNR of 0.92 ± 0.03 in the projections and 3.68 ± 0.19 in the reconstructed slices. These values were markedly higher than those for non-optimized techniques.
Our findings indicate that dual-energy breast tomosynthesis can be performed optimally at 49 kVp, with copper filtration for the high-energy beam and tin filtration for the low-energy beam, with reconstruction following weighted subtraction. The optimal technique provides the best visibility of iodine against the structured breast background in dual-energy contrast-enhanced breast tomosynthesis. PMID:21908902
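The weighted log subtraction used above can be written in one line: the dual-energy signal is ln(I_H) − w·ln(I_L), with w chosen so the structured background cancels. The transmission values below are illustrative, not the study's data.

```python
import math

def weighted_log_subtraction(i_high, i_low, w):
    """Dual-energy signal via weighted log subtraction; w is tuned
    to cancel the structured (anatomical) background."""
    return math.log(i_high) - w * math.log(i_low)

def background_cancelling_w(bg_high, bg_low):
    """Choose w so the background term vanishes: ln(bg_H) = w * ln(bg_L)."""
    return math.log(bg_high) / math.log(bg_low)

w = background_cancelling_w(0.5, 0.25)          # illustrative transmissions
signal = weighted_log_subtraction(0.5, 0.25, w)  # background-only pixel -> 0
```

With this w, a pixel containing only background maps to zero, so iodine shows up against a flattened background (at the price of amplified quantum noise, which the dose split then manages).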
NASA Astrophysics Data System (ADS)
Yan, Hao; Cervino, Laura; Jia, Xun; Jiang, Steve B.
2012-04-01
While compressed sensing (CS)-based algorithms have been developed for low-dose cone-beam CT (CBCT) reconstruction, a clear understanding of the relationship between image quality and imaging dose at low-dose levels is needed. In this paper, we investigate this subject in a comprehensive manner with extensive experimental and simulation studies. The basic idea is to plot both the image quality and imaging dose together as functions of the number of projections and mAs per projection over the whole clinically relevant range. On this basis, a clear understanding of the tradeoff between image quality and imaging dose can be achieved, and optimal low-dose CBCT scan protocols can be developed to maximize dose reduction while minimizing image-quality loss for various imaging tasks in image-guided radiation therapy (IGRT). Main findings of this work include: (1) under the CS-based reconstruction framework, image quality degrades little over a large range of dose variation. Degradation becomes evident when the imaging dose (approximated by the x-ray tube load) is decreased below 100 total mAs. An imaging dose lower than 40 total mAs leads to dramatic image degradation and thus should be used cautiously. Optimal low-dose CBCT scan protocols likely fall in the range of 40-100 total mAs, depending on the specific IGRT application. (2) Among different scan protocols at a constant low-dose level, super-sparse-view reconstruction with fewer than 50 projections is the most challenging case, even with strong regularization. Better image quality can be acquired with low-mAs protocols. (3) The optimal scan protocol is the combination of a medium number of projections and a medium level of mAs per view. This is more evident when the dose is around 72.8 total mAs or below and when the ROI is a low-contrast or high-resolution object. Based on our results, the optimal number of projections is around 90 to 120.
(4) The clinically acceptable lowest imaging dose level is task dependent. In our study, 72.8 mAs is a safe dose level for visualizing low-contrast objects, while 12.2 total mAs is sufficient for detecting high-contrast objects of diameter greater than 3 mm.
Leblond, Frederic; Tichauer, Kenneth M.; Pogue, Brian W.
2010-01-01
The spatial resolution and recovered contrast of images reconstructed from diffuse fluorescence tomography data are limited by the high scattering properties of light propagation in biological tissue. As a result, the image reconstruction process can be exceedingly vulnerable to inaccurate prior knowledge of tissue optical properties and stochastic noise. In light of these limitations, the optimal source-detector geometry for a fluorescence tomography system is non-trivial, requiring analytical methods to guide design. Analysis of the singular value decomposition of the matrix to be inverted for image reconstruction is one potential approach, providing key quantitative metrics, such as singular image mode spatial resolution and singular data mode frequency as a function of singular mode. In the present study, these metrics are used to analyze the effects of different sources of noise and model errors as related to image quality in the form of spatial resolution and contrast recovery. The image quality is demonstrated to be inherently noise-limited even when detection geometries were increased in complexity to allow maximal tissue sampling, suggesting that detection noise characteristics outweigh detection geometry for achieving optimal reconstructions. PMID:21258566
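The SVD analysis described above boils down to inspecting the singular spectrum of the forward (sensitivity) matrix: modes whose relative singular value falls below the measurement noise floor carry no recoverable information. The toy matrix and threshold below are illustrative assumptions, not the paper's system model.

```python
import numpy as np

def useful_modes(J, noise_floor):
    """Count singular modes of the forward matrix J whose relative
    singular value exceeds the noise floor -- a proxy for achievable
    resolution in diffuse fluorescence tomography."""
    s = np.linalg.svd(J, compute_uv=False)   # sorted descending
    return int(np.sum(s / s[0] > noise_floor))

rng = np.random.default_rng(0)
# Ill-conditioned toy matrix: columns scaled by a decaying factor,
# mimicking the rapidly falling singular spectrum of diffuse problems.
J = rng.standard_normal((20, 20)) * (0.5 ** np.arange(20))
n_modes = useful_modes(J, noise_floor=1e-3)
```

Raising the noise floor (noisier detectors) shrinks the usable mode count regardless of how many source-detector pairs are added, which is the noise-limited behavior the abstract reports.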
Zhang, Weisheng; Lin, Jiang; Wang, Shaowu; Lv, Peng; Wang, Lili; Liu, Hao; Chen, Caizhong; Zeng, Mengsu
2014-01-01
This study aimed to evaluate the accuracy of True Fast Imaging with Steady-State Precession (TrueFISP) MR angiography (MRA) for the diagnosis of renal arterial stenosis (RAS) in hypertensive patients. Twenty-two patients underwent both TrueFISP MRA and contrast-enhanced MRA (CE-MRA) on a 1.5-T MR imager. Volume of the main renal arteries, length of the maximal visible renal arteries, number of visualized branches, stenotic grade, and subjective quality were compared. A paired 2-tailed Student t test and the Wilcoxon signed rank test were applied to evaluate the significance of these variables. Volume of the main renal arteries, length of the maximal visible renal arteries, and number of branches showed no significant difference between the 2 techniques (P > 0.05). The stenotic degree of 10 RAS lesions was greater on CE-MRA than on TrueFISP MRA. Qualitative scores for TrueFISP MRA were higher than those for CE-MRA (P < 0.05). TrueFISP MRA is a reliable and accurate method for evaluating RAS.
A Stochastic Imaging Technique for Spatio-Spectral Characterization of Special Nuclear Material
NASA Astrophysics Data System (ADS)
Hamel, Michael C.
Radiation imaging is advantageous for detecting, locating, and characterizing special nuclear material (SNM) in complex environments. A dual-particle imager (DPI) has been designed that is capable of detecting gamma-ray and neutron signatures from shielded SNM. The system combines liquid organic and NaI(Tl) scintillators to form a combined Compton and neutron scatter camera. Effective image reconstruction of detected particles is a crucial component for maximizing the performance of the system; however, a key deficiency exists in the widely used list-mode maximum-likelihood expectation-maximization (MLEM) image reconstruction technique. The steady-state solution produced by this iterative method has poor quality compared with solutions produced after fewer iterations. A stopping condition is required to achieve a better solution, but such conditions fail to achieve maximum image quality. Stochastic origin ensembles (SOE) imaging is a good candidate to address this problem, as it uses Markov chain Monte Carlo to reach a stochastic steady-state solution whose image quality is comparable to the best MLEM solution. The application of SOE to the DPI is presented in this work. SOE was originally applied in medical imaging applications with no mechanism to isolate spectral information based on location. This capability is critical for non-proliferation applications, as complex radiation environments with multiple sources are often encountered. This dissertation extends the SOE algorithm to produce spatially dependent spectra and presents experimental results showing that the technique was effective for isolating a 4.1-kg mass of weapons-grade plutonium (WGPu) when other neutron and gamma-ray sources were present. This work also demonstrates the DPI as an effective tool for localizing and characterizing highly enriched uranium (HEU).
A series of experiments were performed with the DPI using a deuterium-deuterium (DD) and deuterium-tritium (DT) neutron generator, as well as AmLi, to interrogate a 13.7-kg sphere of HEU. In all cases, the neutrons and gamma rays produced from induced fission were successfully discriminated from the interrogating particles to localize the HEU. For characterization, the fast neutron and gamma-ray spectra were recorded from multiple HEU configurations with low-Z and high-Z moderation. Further characterization of the configurations used the measured neutron lifetime to show that the DPI can be used to infer multiplication.
Bits and bytes: the future of radiology lies in informatics and information technology.
Brink, James A; Arenson, Ronald L; Grist, Thomas M; Lewin, Jonathan S; Enzmann, Dieter
2017-09-01
Advances in informatics and information technology are sure to alter the practice of medical imaging and image-guided therapies substantially over the next decade. Each element of the imaging continuum will be affected by substantial increases in computing capacity coincident with the seamless integration of digital technology into our society at large. This article focuses primarily on areas where this IT transformation is likely to have a profound effect on the practice of radiology. • Clinical decision support ensures consistent and appropriate resource utilization. • Big data enables correlation of health information across multiple domains. • Data mining advances the quality of medical decision-making. • Business analytics allow radiologists to maximize the benefits of imaging resources.
PSF reconstruction for Compton-based prompt gamma imaging
NASA Astrophysics Data System (ADS)
Jan, Meei-Ling; Lee, Ming-Wei; Huang, Hsuan-Ming
2018-02-01
Compton-based prompt gamma (PG) imaging has been proposed for in vivo range verification in proton therapy. However, several factors degrade the image quality of PG images, some of which are due to inherent properties of a Compton camera such as spatial resolution and energy resolution. Moreover, Compton-based PG imaging has a spatially variant resolution loss. In this study, we investigate the performance of the list-mode ordered subset expectation maximization algorithm with a shift-variant point spread function (LM-OSEM-SV-PSF) model. We also evaluate how well the PG images reconstructed using an SV-PSF model reproduce the distal falloff of the proton beam. The SV-PSF parameters were estimated from simulation data of point sources at various positions. Simulated PGs were produced in a water phantom irradiated with a proton beam. Compared to the LM-OSEM algorithm, the LM-OSEM-SV-PSF algorithm improved the quality of the reconstructed PG images and the estimation of PG falloff positions. In addition, the 4.44 and 5.25 MeV PG emissions can be accurately reconstructed using the LM-OSEM-SV-PSF algorithm. However, for the 2.31 and 6.13 MeV PG emissions, the LM-OSEM-SV-PSF reconstruction provides limited improvement. We also found that the LM-OSEM algorithm followed by a shift-variant Richardson-Lucy deconvolution could reconstruct images with quality visually similar to the LM-OSEM-SV-PSF-reconstructed images, while requiring shorter computation time.
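The shift-variant Richardson-Lucy step mentioned in the last sentence reduces, in the shift-invariant case, to the classic multiplicative update below. This 1D sketch uses a fixed kernel; the paper's variant replaces both convolutions with position-dependent PSFs.

```python
import numpy as np

def richardson_lucy_1d(observed, psf, iters=50):
    """Shift-invariant Richardson-Lucy deconvolution (1D sketch).
    Each iteration: blur the estimate, compare with the data, and
    apply the back-projected ratio as a multiplicative correction."""
    psf = psf / psf.sum()
    est = np.full_like(observed, observed.mean())   # flat initial guess
    for _ in range(iters):
        blurred = np.convolve(est, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        est *= np.convolve(ratio, psf[::-1], mode="same")  # correlate
    return est

psf = np.array([0.25, 0.5, 0.25])
truth = np.zeros(21)
truth[10] = 1.0                                  # point source
observed = np.convolve(truth, psf, mode="same")  # blurred measurement
recovered = richardson_lucy_1d(observed, psf)
```

On this noiseless toy problem the iteration re-concentrates the blurred point source at its true location; with noisy data the iteration count acts as a regularizer.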
Accurate and robust brain image alignment using boundary-based registration.
Greve, Douglas N; Fischl, Bruce
2009-10-15
The fine spatial scales of the structures in the human brain represent an enormous challenge to the successful integration of information from different images for both within- and between-subject analysis. While many algorithms exist to register image pairs from the same subject, visual inspection shows their accuracy and robustness to be suspect, particularly when there are strong intensity gradients and/or only part of the brain is imaged. This paper introduces a new algorithm called Boundary-Based Registration, or BBR. The novelty of BBR is that it treats the two images very differently. The reference image must be of sufficient resolution and quality to extract surfaces that separate tissue types. The input image is then aligned to the reference by maximizing the intensity gradient across tissue boundaries. Several lower-quality images can be aligned through their alignment with the reference. Visual inspection and fMRI results show that BBR is more accurate than correlation ratio or normalized mutual information and is considerably more robust to even strong intensity inhomogeneities. BBR also excels at aligning partial-brain images to whole-brain images, a domain in which existing registration algorithms frequently fail. Even in the limit of registering a single slice, we show the BBR results to be robust and accurate.
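The core of the objective described above can be sketched by sampling the input image just inside and just outside each boundary vertex along its normal and rewarding the contrast. This is a simplified signed-difference cost; the published BBR objective additionally passes the relative contrast through a sigmoid.

```python
def bbr_cost(image_at, boundary_points, normals, d=1.0):
    """BBR objective (sketch): average signed intensity contrast
    across tissue-boundary samples, measured along each normal."""
    total = 0.0
    for (x, y), (nx, ny) in zip(boundary_points, normals):
        inside = image_at(x - d * nx, y - d * ny)    # sample inside surface
        outside = image_at(x + d * nx, y + d * ny)   # sample outside surface
        total += outside - inside
    return total / len(boundary_points)

# Toy image: a vertical tissue edge at x = 0 (dark left, bright right).
image_at = lambda x, y: 1.0 if x > 0 else 0.0
pts = [(0.0, float(y)) for y in range(5)]      # boundary samples on the edge
normals = [(1.0, 0.0)] * 5                     # outward normals
cost = bbr_cost(image_at, pts, normals)
```

A registration search would move the boundary points rigidly and keep the pose that maximizes this cost; here the surface already sits on the edge, so the contrast is maximal.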
Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei
2015-01-01
Image enhancement is an important step in image processing and analysis. This paper presents a new technique, using a modified quality measure and a blend of cuckoo search and particle swarm optimization (CS-PSO), to adaptively enhance low-contrast images. Contrast enhancement is obtained by a global transformation of the input intensities; it employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality that considers three factors: threshold, entropy value, and the gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with existing techniques such as linear contrast stretching, histogram equalization, and evolutionary-computing-based image enhancement methods, including the backtracking search algorithm, differential search algorithm, genetic algorithm, and particle swarm optimization, in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered. PMID:25784928
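The incomplete-Beta gray-level transform named above maps a normalized intensity u through the regularized incomplete Beta function I_u(a, b); the optimizer's job is to pick (a, b). A self-contained sketch using plain numerical integration (the paper's parameter search and quality criterion are omitted):

```python
def beta_transform(u, a, b, n=1000):
    """Regularized incomplete Beta transform I_u(a, b) of a normalized
    gray level u in [0, 1], computed by midpoint-rule integration of
    t^(a-1) * (1-t)^(b-1)."""
    if u <= 0.0:
        return 0.0
    if u >= 1.0:
        return 1.0
    f = lambda t: t ** (a - 1) * (1 - t) ** (b - 1)
    num = sum(f((i + 0.5) * u / n) for i in range(n)) * u / n  # integral over [0, u]
    den = sum(f((i + 0.5) / n) for i in range(n)) / n          # integral over [0, 1]
    return num / den

# a = b = 2 gives an S-shaped curve that stretches mid-gray contrast:
mid = beta_transform(0.5, 2.0, 2.0)    # fixed point at 0.5
low = beta_transform(0.25, 2.0, 2.0)   # dark values pushed darker
```

For a = b = 2 the transform is the polynomial 3u² − 2u³, so dark pixels are compressed and mid-gray contrast is expanded, which is the adaptive-stretch behavior the optimizer exploits.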
Task-driven imaging in cone-beam computed tomography.
Gang, G J; Stayman, J W; Ouadah, S; Ehtiati, T; Siewerdsen, J H
Conventional workflow in interventional imaging often ignores a wealth of prior information about the patient anatomy and the imaging task. This work introduces a task-driven imaging framework that utilizes such information to prospectively design acquisition and reconstruction techniques for cone-beam CT (CBCT) in a manner that maximizes task-based performance in subsequent imaging procedures. The framework is employed in jointly optimizing tube current modulation, orbital tilt, and reconstruction parameters in filtered backprojection reconstruction for interventional imaging. Theoretical predictors of noise and resolution relate acquisition and reconstruction parameters to task-based detectability. Given a patient-specific prior image and a specification of the imaging task, an optimization algorithm prospectively identifies the combination of imaging parameters that maximizes task-based detectability. Initial investigations were performed for a variety of imaging tasks in an elliptical phantom and an anthropomorphic head phantom. Optimization of tube current modulation and view-dependent reconstruction kernel was shown to have the greatest benefit for a directional task (e.g., identification of device or tissue orientation). The task-driven approach yielded techniques in which the dose and sharp kernels were concentrated in the views contributing most to the signal power associated with the imaging task. For example, detectability of a line-pair detection task was improved at least threefold compared with conventional approaches. For radially symmetric tasks, the task-driven strategy yielded results similar to a minimum-variance strategy in the absence of kernel modulation. Optimization of the orbital tilt successfully avoided highly attenuating structures that can confound the imaging task by introducing noise correlations masquerading as spatial frequencies of interest.
This work demonstrated the potential of a task-driven imaging framework to improve image quality and reduce dose beyond that achievable with conventional imaging approaches.
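The task-based detectability that drives this framework can be illustrated with a toy calculation. The sketch below assumes the common prewhitening ideal-observer form, d'² = Σ_f |W(f)·MTF(f)|² / NPS(f) · Δf, over discrete one-dimensional frequency samples; the function name and arguments are illustrative, not from the paper.

```python
def detectability(task, mtf, nps, df=1.0):
    """Prewhitening ideal-observer detectability index d':
    d'^2 = sum_f |W(f) * MTF(f)|^2 / NPS(f) * df,
    where task (W), mtf, and nps are equal-length lists of samples
    on a discrete frequency grid with spacing df."""
    d_squared = 0.0
    for w, m, n in zip(task, mtf, nps):
        if n > 0:  # skip frequencies with no recorded noise power
            d_squared += (w * m) ** 2 / n * df
    return d_squared ** 0.5
```

In this form, lowering the NPS at the frequencies where the task function W is large raises d' fastest, which is the intuition behind concentrating dose and sharp kernels in the views that carry the task's signal power.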
Wein, Lawrence M.; Baveja, Manas
2005-01-01
Motivated by the difficulty of biometric systems to correctly match fingerprints with poor image quality, we formulate and solve a game-theoretic formulation of the identification problem in two settings: U.S. visa applicants are checked against a list of visa holders to detect visa fraud, and visitors entering the U.S. are checked against a watchlist of criminals and suspected terrorists. For three types of biometric strategies, we solve the game in which the U.S. Government chooses the strategy's optimal parameter values to maximize the detection probability subject to a constraint on the mean biometric processing time per legal visitor, and then the terrorist chooses the image quality to minimize the detection probability. At current inspector staffing levels at ports of entry, our model predicts that a quality-dependent two-finger strategy achieves a detection probability of 0.733, compared to 0.526 under the quality-independent two-finger strategy that is currently implemented at the U.S. border. Increasing the staffing level of inspectors offers only minor increases in the detection probability for these two strategies. Using more than two fingers to match visitors with poor image quality allows a detection probability of 0.949 under current staffing levels, but may require major changes to the current U.S. biometric program. The detection probabilities during visa application are ≈11–22% smaller than at ports of entry for all three strategies, but the same qualitative conclusions hold. PMID:15894628
SU-C-207B-02: Maximal Noise Reduction Filter with Anatomical Structures Preservation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maitree, R; Guzman, G; Chundury, A
Purpose: All medical images contain noise, which can result in an undesirable appearance and can reduce the visibility of anatomical details. A variety of techniques are used to reduce noise, such as increasing the image acquisition time or applying post-processing noise reduction algorithms. However, these techniques either increase imaging time and cost or reduce tissue contrast and effective spatial resolution, which carry useful diagnostic information. The three main goals of this study are: 1) to develop a novel approach that can adaptively and maximally reduce noise while preserving valuable details of anatomical structures, 2) to evaluate the effectiveness of available noise reduction algorithms in comparison to the proposed algorithm, and 3) to demonstrate that the proposed noise reduction approach can be used clinically. Methods: To achieve maximal noise reduction without destroying anatomical details, the proposed approach automatically estimated the local image noise strength and detected anatomical structures, i.e., tissue boundaries. This information was used to adaptively adjust the strength of the noise reduction filter. The proposed algorithm was tested on 34 repeated swine head datasets and 54 patient MRI and CT images. Performance was quantitatively evaluated by image quality metrics and manually validated for clinical usage by two radiation oncologists and one radiologist. Results: Qualitative measurements on the repeated swine head images demonstrated that the proposed algorithm efficiently removed noise while preserving structures and tissue boundaries. In comparisons, the proposed algorithm obtained competitive noise reduction performance and outperformed the other filters in preserving anatomical structures. Assessments from the manual validation indicate that the proposed noise reduction algorithm is adequate for some clinical usages.
Conclusion: According to both the clinical evaluation (human expert ranking) and the qualitative assessment, the proposed approach has superior noise reduction and anatomical structure preservation capabilities compared with existing noise removal methods. Senior author Dr. Deshan Yang received research funding from ViewRay and Varian.
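The core idea above, scaling filter strength by local noise level and structure, can be sketched in one dimension. This is a hypothetical toy, not the authors' algorithm: the blend weight toward a 3-tap local mean shrinks wherever the local gradient is large relative to the estimated noise, so edges survive while flat regions are smoothed.

```python
def adaptive_smooth(signal, noise_sigma, edge_scale=4.0):
    """Adaptive 3-tap smoothing: blend each sample toward its local mean
    with a weight that falls off where the local gradient is large
    relative to the noise level (edge preservation)."""
    out = list(signal)
    for i in range(1, len(signal) - 1):
        local_mean = (signal[i - 1] + signal[i] + signal[i + 1]) / 3.0
        grad = abs(signal[i + 1] - signal[i - 1]) / 2.0
        # weight -> 1 in flat (noise-only) regions, -> 0 at strong edges
        w = 1.0 / (1.0 + (grad / (edge_scale * noise_sigma)) ** 2)
        out[i] = (1 - w) * signal[i] + w * local_mean
    return out
```

On a flat-plus-step test signal, the isolated noise spike is pulled toward the local mean while the 0-to-100 step stays sharp; the `edge_scale` knob plays the role of the automatic noise-strength estimate described in the abstract.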
Huang, Zhiwei; Teh, Seng Khoon; Zheng, Wei; Mo, Jianhua; Lin, Kan; Shao, Xiaozhuo; Ho, Khek Yu; Teh, Ming; Yeoh, Khay Guan
2009-03-15
We report an integrated Raman spectroscopy and trimodal (white-light reflectance, autofluorescence, and narrow-band) imaging techniques for real-time in vivo tissue Raman measurements at endoscopy. A special 1.8 mm endoscopic Raman probe with filtering modules is developed, permitting effective elimination of interference of fluorescence background and silica Raman in fibers while maximizing tissue Raman collections. We demonstrate that high-quality in vivo Raman spectra of upper gastrointestinal tract can be acquired within 1 s or subseconds under the guidance of wide-field endoscopic imaging modalities, greatly facilitating the adoption of Raman spectroscopy into clinical research and practice during routine endoscopic inspections.
Legemate, Jaap D; Kamphuis, Guido M; Freund, Jan Erik; Baard, Joyce; Zanetti, Stefano P; Catellani, Michele; Oussoren, Harry W; de la Rosette, Jean J
2018-03-10
Flexible ureteroscopy is an established treatment modality for evaluating and treating abnormalities in the upper urinary tract. Reusable ureteroscope (USC) durability is a significant concern. To evaluate the durability of the latest generation of digital and fiber optic reusable flexible USCs and the factors affecting it. Six new flexible USCs from Olympus and Karl Storz were included. The primary endpoint for each USC was its first repair. Data on patient and treatment characteristics, accessory device use, ureteroscopy time, image quality, USC handling, disinfection cycles, type of damage, and deflection loss were collected prospectively. Ureteroscopy. USC durability was measured as the total number of uses and ureteroscopy time before repair. USC handling and image quality were scored. After every procedure, maximal ventral and dorsal USC deflection were documented on digital images. A total of 198 procedures were performed. The median number of procedures was 27 (IQR 16-48; 14h) for the six USCs overall, 27 (IQR 20-56; 14h) for the digital USCs, and 24 (range 10-37; 14h) for the fiber optic USCs. Image quality remained high throughout the study for all six USCs. USC handling and the range of deflection remained good under incremental use. Damage to the distal part of the shaft and shaft coating was the most frequent reason for repair, and was related to intraoperative manual forcing. A limitation of this study is its single-center design. The durability of the latest reusable flexible USCs in the current study was limited to 27 uses (14h). Damage to the flexible shaft was the most important limitation to the durability of the USCs evaluated. Prevention of intraoperative manual forcing of flexible USCs maximizes their overall durability. Current flexible ureteroscopes proved to be durable. Shaft vulnerability was the most important limiting factor affecting durability. Copyright © 2018 European Association of Urology. Published by Elsevier B.V. 
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Chuan, E-mail: chuan.huang@stonybrookmedicine.edu; Department of Radiology, Harvard Medical School, Boston, Massachusetts 02115; Departments of Radiology, Psychiatry, Stony Brook Medicine, Stony Brook, New York 11794
2015-02-15
Purpose: Degradation of image quality caused by cardiac and respiratory motions hampers the diagnostic quality of cardiac PET. It has been shown that improved diagnostic accuracy of myocardial defect can be achieved by tagged MR (tMR) based PET motion correction using simultaneous PET-MR. However, one major hurdle for the adoption of tMR-based PET motion correction in the PET-MR routine is the long acquisition time needed for the collection of fully sampled tMR data. In this work, the authors propose an accelerated tMR acquisition strategy using parallel imaging and/or compressed sensing and assess the impact on the tMR-based motion corrected PET using phantom and patient data. Methods: Fully sampled tMR data were acquired simultaneously with PET list-mode data on two simultaneous PET-MR scanners for a cardiac phantom and a patient. Parallel imaging and compressed sensing were retrospectively performed by GRAPPA and kt-FOCUSS algorithms with various acceleration factors. Motion fields were estimated using nonrigid B-spline image registration from both the accelerated and fully sampled tMR images. The motion fields were incorporated into a motion corrected ordered subset expectation maximization reconstruction algorithm with motion-dependent attenuation correction. Results: Although tMR acceleration introduced image artifacts into the tMR images for both phantom and patient data, motion corrected PET images yielded similar image quality as those obtained using the fully sampled tMR images for low to moderate acceleration factors (<4). Quantitative analysis of myocardial defect contrast over ten independent noise realizations showed similar results. It was further observed that although the image quality of the motion corrected PET images deteriorates for high acceleration factors, the images were still superior to the images reconstructed without motion correction.
Conclusions: Accelerated tMR images obtained with more than 4 times acceleration can still provide relatively accurate motion fields and yield tMR-based motion corrected PET images with similar image quality as those reconstructed using fully sampled tMR data. The reduction of tMR acquisition time makes it more compatible with routine clinical cardiac PET-MR studies.
Iterative Stable Alignment and Clustering of 2D Transmission Electron Microscope Images
Yang, Zhengfan; Fang, Jia; Chittuluru, Johnathan; Asturias, Francisco J.; Penczek, Pawel A.
2012-01-01
Identification of homogeneous subsets of images in a macromolecular electron microscopy (EM) image data set is a critical step in single-particle analysis. The task is handled by iterative algorithms, whose performance is compromised by the compounded limitations of image alignment and K-means clustering. Here we describe an approach, iterative stable alignment and clustering (ISAC) that, relying on a new clustering method and on the concepts of stability and reproducibility, can extract validated, homogeneous subsets of images. ISAC requires only a small number of simple parameters and, with minimal human intervention, can eliminate bias from two-dimensional image clustering and maximize the quality of group averages that can be used for ab initio three-dimensional structural determination and analysis of macromolecular conformational variability. Repeated testing of the stability and reproducibility of a solution within ISAC eliminates heterogeneous or incorrect classes and introduces critical validation to the process of EM image clustering. PMID:22325773
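The stability-and-reproducibility test at the heart of ISAC can be illustrated with a toy one-dimensional version. This sketch is not the published algorithm: it reruns a simple k-means from several random seeds and keeps only items whose co-membership with every other item is identical across runs, discarding anything clustered irreproducibly.

```python
import random

def kmeans_1d(data, k, seed, iters=20):
    """Plain k-means on scalar data, seeded so runs are repeatable."""
    rng = random.Random(seed)
    centers = rng.sample(list(data), k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in data:
            groups[min(range(k), key=lambda j: abs(x - centers[j]))].append(x)
        centers = [sum(g) / len(g) if g else centers[j]
                   for j, g in enumerate(groups)]
    return [min(range(k), key=lambda j: abs(x - centers[j])) for x in data]

def stable_members(data, k=2, runs=5):
    """Toy ISAC-style stability test: an item is 'stable' only if its
    co-membership with every other item agrees across all seeded runs
    (label permutations between runs do not matter)."""
    labelings = [kmeans_1d(data, k, seed) for seed in range(runs)]
    stable = []
    for i in range(len(data)):
        ok = True
        for j in range(len(data)):
            same = [lab[i] == lab[j] for lab in labelings]
            if any(same) != all(same):  # inconsistent across runs
                ok = False
                break
        if ok:
            stable.append(i)
    return stable
```

Comparing co-membership rather than raw labels makes the test invariant to the arbitrary cluster numbering of each run, which is the same reason ISAC validates classes rather than label assignments.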
Whole surface image reconstruction for machine vision inspection of fruit
NASA Astrophysics Data System (ADS)
Reese, D. Y.; Lefcourt, A. M.; Kim, M. S.; Lo, Y. M.
2007-09-01
Automated imaging systems offer the potential to inspect the quality and safety of fruits and vegetables consumed by the public. Current automated inspection systems allow fruit such as apples to be sorted for quality issues including color and size by looking at a portion of the surface of each fruit. However, to inspect for defects and contamination, the whole surface of each fruit must be imaged. The goal of this project was to develop an effective and economical method for whole surface imaging of apples using mirrors and a single camera. Challenges include mapping the concave stem and calyx regions. To allow the entire surface of an apple to be imaged, apples were suspended or rolled above the mirrors using two parallel music wires. A camera above the apples captured 90 images per sec (640 by 480 pixels). Single or multiple flat or concave mirrors were mounted around the apple in various configurations to maximize surface imaging. Data suggest that the use of two flat mirrors provides inadequate coverage of a fruit but using two parabolic concave mirrors allows the entire surface to be mapped. Parabolic concave mirrors magnify images, which results in greater pixel resolution and reduced distortion. This result suggests that a single camera with two parabolic concave mirrors can be a cost-effective method for whole surface imaging.
Aldridge, Matthew D; Waddington, Wendy W; Dickson, John C; Prakash, Vineet; Ell, Peter J; Bomanji, Jamshed B
2013-11-01
A three-dimensional model-based resolution recovery (RR) reconstruction algorithm that compensates for collimator-detector response, resulting in an improvement in reconstructed spatial resolution and signal-to-noise ratio of single-photon emission computed tomography (SPECT) images, was tested. The software is said to retain image quality even with reduced acquisition time. Clinically, any improvement in patient throughput without loss of quality is to be welcomed. Furthermore, future restrictions in radiotracer supplies may add value to this type of data analysis. The aims of this study were to assess improvement in image quality using the software and to evaluate the potential of performing reduced time acquisitions for bone and parathyroid SPECT applications. Data acquisition was performed using the local standard SPECT/CT protocols for 99mTc-hydroxymethylene diphosphonate bone and 99mTc-methoxyisobutylisonitrile parathyroid SPECT imaging. The principal modification applied was the acquisition of an eight-frame gated data set acquired using an ECG simulator with a fixed signal as the trigger. This had the effect of partitioning the data such that the effect of reduced time acquisitions could be assessed without conferring additional scanning time on the patient. The set of summed data sets was then independently reconstructed using the RR software to permit a blinded assessment of the effect of acquired counts upon reconstructed image quality as adjudged by three experienced observers. Data sets reconstructed with the RR software were compared with the local standard processing protocols; filtered back-projection and ordered-subset expectation-maximization. Thirty SPECT studies were assessed (20 bone and 10 parathyroid). The images reconstructed with the RR algorithm showed improved image quality for both full-time and half-time acquisitions over local current processing protocols (P<0.05). 
The RR algorithm improved image quality compared with local processing protocols and has been introduced into routine clinical use. SPECT acquisitions are now acquired at half of the time previously required. The method of binning the data can be applied to any other camera system to evaluate the reduction in acquisition time for similar processes. The potential for dose reduction is also inherent with this approach.
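The gated-partitioning trick above, binning one full acquisition into eight frames and summing subsets to emulate shorter scans, can be sketched under a Poisson-counting assumption (relative SNR ∝ √counts). The numbers are illustrative, not from the study.

```python
import math

def emulated_snr(frame_counts, frames_used):
    """Sum the first n gated frames to emulate a shorter acquisition;
    for Poisson counting statistics, relative SNR scales as the square
    root of the summed counts."""
    return math.sqrt(sum(frame_counts[:frames_used]))

# Eight equal frames from the fixed-signal 'ECG' trigger:
frames = [1000] * 8
full_time = emulated_snr(frames, 8)   # all 8 frames = full acquisition
half_time = emulated_snr(frames, 4)   # 4 of 8 frames = half-time scan
ratio = half_time / full_time         # ~ 1 / sqrt(2)
```

The appeal of the method is exactly this: every intermediate acquisition time from 1/8 to 8/8 can be evaluated from a single patient scan, with no extra imaging time.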
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, E. R., E-mail: ewhite@physics.ucla.edu; Kerelsky, Alexander; Hubbard, William A.
2015-11-30
Heterostructure devices with specific and extraordinary properties can be fabricated by stacking two-dimensional crystals. Cleanliness at the inter-crystal interfaces within a heterostructure is crucial for maximizing device performance. However, because these interfaces are buried, characterizing their impact on device function is challenging. Here, we show that electron-beam induced current (EBIC) mapping can be used to image interfacial contamination and to characterize the quality of buried heterostructure interfaces with nanometer-scale spatial resolution. We applied EBIC and photocurrent imaging to map photo-sensitive graphene-MoS2 heterostructures. The EBIC maps, together with concurrently acquired scanning transmission electron microscopy images, reveal how a device's photocurrent collection efficiency is adversely affected by nanoscale debris invisible to optical-resolution photocurrent mapping.
The Dark Energy Survey Image Processing Pipeline
NASA Astrophysics Data System (ADS)
Morganson, E.; Gruendl, R. A.; Menanteau, F.; Carrasco Kind, M.; Chen, Y.-C.; Daues, G.; Drlica-Wagner, A.; Friedel, D. N.; Gower, M.; Johnson, M. W. G.; Johnson, M. D.; Kessler, R.; Paz-Chinchón, F.; Petravick, D.; Pond, C.; Yanny, B.; Allam, S.; Armstrong, R.; Barkhouse, W.; Bechtol, K.; Benoit-Lévy, A.; Bernstein, G. M.; Bertin, E.; Buckley-Geer, E.; Covarrubias, R.; Desai, S.; Diehl, H. T.; Goldstein, D. A.; Gruen, D.; Li, T. S.; Lin, H.; Marriner, J.; Mohr, J. J.; Neilsen, E.; Ngeow, C.-C.; Paech, K.; Rykoff, E. S.; Sako, M.; Sevilla-Noarbe, I.; Sheldon, E.; Sobreira, F.; Tucker, D. L.; Wester, W.; DES Collaboration
2018-07-01
The Dark Energy Survey (DES) is a five-year optical imaging campaign with the goal of understanding the origin of cosmic acceleration. DES performs a ∼5000 deg² survey of the southern sky in five optical bands (g, r, i, z, Y) to a depth of ∼24th magnitude. Contemporaneously, DES performs a deep, time-domain survey in four optical bands (g, r, i, z) over ∼27 deg². DES exposures are processed nightly with an evolving data reduction pipeline and evaluated for image quality to determine if they need to be retaken. Difference imaging and transient source detection are also performed in the time domain component nightly. On a bi-annual basis, DES exposures are reprocessed with a refined pipeline and coadded to maximize imaging depth. Here we describe the DES image processing pipeline in support of DES science, as a reference for users of archival DES data, and as a guide for future astronomical surveys.
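The depth gain from coaddition can be illustrated with a standard inverse-variance combine. This is a generic sketch, not the DES pipeline: for N equal-depth exposures the noise shrinks as √N, corresponding to roughly 1.25·log10(N) magnitudes of extra depth.

```python
import math

def coadd(fluxes, sigmas):
    """Inverse-variance weighted combination of repeated exposures of one
    source: returns the combined flux estimate and its 1-sigma uncertainty."""
    weights = [1.0 / s ** 2 for s in sigmas]
    wsum = sum(weights)
    flux = sum(w * f for w, f in zip(weights, fluxes)) / wsum
    return flux, math.sqrt(1.0 / wsum)

# Four equal-depth exposures: noise shrinks by sqrt(4) = 2,
# i.e. about 2.5 * log10(2) ~ 0.75 magnitudes of extra depth.
flux, sigma = coadd([10.0, 10.0, 10.0, 10.0], [1.0, 1.0, 1.0, 1.0])
```

Inverse-variance weighting also handles the realistic case of exposures with unequal seeing or sky brightness: poorer exposures contribute less rather than being discarded.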
Yang, Junhai; Caprioli, Richard M.
2011-01-01
We have employed matrix deposition by sublimation for protein image analysis on tissue sections using a hydration/recrystallization process that produces high quality MALDI mass spectra and high spatial resolution ion images. We systematically investigated different washing protocols, the effect of tissue section thickness, the amount of sublimated matrix per unit area and different recrystallization conditions. The results show that an organic solvent rinse followed by ethanol/water rinses substantially increased sensitivity for the detection of proteins. Both the thickness of tissue section and amount of sinapinic acid sublimated per unit area have optimal ranges for maximal protein signal intensity. Ion images of mouse and rat brain sections at 50, 20 and 10 µm spatial resolution are presented and are correlated with H&E stained optical images. For targeted analysis, histology directed imaging can be performed using this protocol where MS analysis and H&E staining are performed on the same section. PMID:21639088
Spectral Imaging for Intracranial Stents and Stent Lumen.
Weng, Chi-Lun; Tseng, Ying-Chi; Chen, David Yen-Ting; Chen, Chi-Jen; Hsu, Hui-Ling
2016-01-01
Application of computed tomography for monitoring intracranial stents is limited because of stent-related artifacts. Our purpose was to evaluate the effect of gemstone spectral imaging on the intracranial stent and stent lumen. In vitro, we scanned an Enterprise stent phantom and a stent-cheese complex using the gemstone spectral imaging protocol. Follow-up gemstone spectral images of 15 consecutive patients with placement of Enterprise stents from January 2013 to September 2014 were also retrospectively reviewed. We used 70-keV, 140-keV, iodine (water), iodine (calcium), and iodine (hydroxyapatite) images to evaluate their effect on the intracranial stent and stent lumen. Two regions of interest were individually placed in the stent lumen and adjacent brain tissue. Contrast-to-noise ratio was measured to determine image quality. The maximal diameter of the stent markers was also measured to evaluate stent-related artifact. Two radiologists independently graded the visibility of the lumen at the marker location using a 4-point scale. The mean grading score, contrast-to-noise ratio, and maximal diameter of the stent markers were compared among all modes. All results were analyzed with SPSS version 20. In vitro, iodine (water) images decreased the metallic artifact of the stent markers to the greatest degree. The largest area of cheese was observed on iodine (water) images. In vivo, iodine (water) images had the smallest average diameter of stent markers (0.33 ± 0.17 mm; P < .05) and showed the highest mean grading score (2.94 ± 0.94; P < .05) and contrast-to-noise ratio of the in-stent lumen (160.03 ± 37.79; P < .05) among all modes. Iodine (water) images can help reduce stent-related artifacts of the Enterprise stent and enhance contrast of the in-stent lumen. Spectral imaging may be considered a noninvasive modality for following up patients with in-stent stenosis.
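The contrast-to-noise figure used above can be sketched from two regions of interest. One common convention is assumed here (absolute mean difference over the background standard deviation); the study's exact definition may differ.

```python
import math

def roi_stats(values):
    """Mean and (population) standard deviation of a region of interest."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return mean, math.sqrt(var)

def cnr(lumen_roi, tissue_roi):
    """Contrast-to-noise ratio between a stent-lumen ROI and adjacent
    tissue: |mean difference| over the tissue standard deviation."""
    lumen_mean, _ = roi_stats(lumen_roi)
    tissue_mean, tissue_sd = roi_stats(tissue_roi)
    return abs(lumen_mean - tissue_mean) / tissue_sd
```

Because the denominator is a noise estimate, CNR rewards reconstructions that suppress artifact-induced variance near the stent, which is why it separates the iodine (water) images from the other modes.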
Histogram Matching Extends Acceptable Signal Strength Range on Optical Coherence Tomography Images
Chen, Chieh-Li; Ishikawa, Hiroshi; Wollstein, Gadi; Bilonick, Richard A.; Sigal, Ian A.; Kagemann, Larry; Schuman, Joel S.
2015-01-01
Purpose. We minimized the influence of image quality variability, as measured by signal strength (SS), on optical coherence tomography (OCT) thickness measurements using the histogram matching (HM) method. Methods. We scanned 12 eyes from 12 healthy subjects with the Cirrus HD-OCT device to obtain a series of OCT images with a wide range of SS (maximal range, 1–10) at the same visit. For each eye, the histogram of an image with the highest SS (best image quality) was set as the reference. We applied HM to the images with lower SS by shaping the input histogram into the reference histogram. Retinal nerve fiber layer (RNFL) thickness was automatically measured before and after HM processing (defined as original and HM measurements), and compared to the device output (device measurements). Nonlinear mixed effects models were used to analyze the relationship between RNFL thickness and SS. In addition, the lowest tolerable SSs, which gave the RNFL thickness within the variability margin of manufacturer recommended SS range (6–10), were determined for device, original, and HM measurements. Results. The HM measurements showed less variability across a wide range of image quality than the original and device measurements (slope = 1.17 vs. 4.89 and 1.72 μm/SS, respectively). The lowest tolerable SS was successfully reduced to 4.5 after HM processing. Conclusions. The HM method successfully extended the acceptable SS range on OCT images. This would qualify more OCT images with low SS for clinical assessment, broadening the OCT application to a wider range of subjects. PMID:26066749
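Histogram matching itself is a short algorithm: map each source level to the reference level whose cumulative distribution value first reaches the source's. A minimal sketch for integer-valued images follows; the function names are illustrative, not from the paper.

```python
def histogram_match(source, reference, levels=256):
    """Match the histogram of 'source' (a flat list of integer pixel
    values) to that of 'reference' via a CDF lookup table."""
    def cdf(pixels):
        hist = [0] * levels
        for p in pixels:
            hist[p] += 1
        c, acc = [], 0
        for h in hist:
            acc += h
            c.append(acc / len(pixels))
        return c

    cs, cr = cdf(source), cdf(reference)
    lut = []
    for level in range(levels):
        # smallest reference level whose CDF reaches the source CDF
        target = next((r for r in range(levels) if cr[r] >= cs[level]),
                      levels - 1)
        lut.append(target)
    return [lut[p] for p in source]
```

Applied to a low-signal-strength OCT B-scan with the highest-quality scan of the same eye as the reference, this remapping is what lets segmentation behave consistently across the signal-strength range.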
New opportunities for quality enhancing of images captured by passive THz camera
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Trofimov, Vladislav V.
2014-10-01
As is well known, a passive THz camera makes it possible to see concealed objects without contact with a person, and the camera is not dangerous to the person. Obviously, the efficiency of a passive THz camera depends on its temperature resolution. This characteristic determines the detection capabilities for concealed objects: the minimal size of the object, the maximal detection distance, and the image quality. Computer processing of THz images can improve image quality many times over without any additional engineering effort, so developing modern computer codes for application to THz images is an urgent problem. Using appropriate new methods, one may expect a temperature resolution that allows a banknote in a person's pocket to be seen without any physical contact. Modern algorithms for computer processing of THz images also make it possible to see objects inside the human body via a temperature trace on the human skin. This substantially broadens the opportunities for applying passive THz cameras to counterterrorism problems. We demonstrate the detection capabilities achieved to date, both for concealed objects and for clothing components, through computer processing of images captured by passive THz cameras manufactured by various companies. Another important result discussed in the paper is the observation of THz radiation emitted by an incandescent lamp and of an image reflected from a ceramic floor plate. We consider images produced by passive THz cameras manufactured by Microsemi Corp., ThruVision Corp., and Capital Normal University (Beijing, China). All algorithms for the computer processing of the THz images considered in this paper were developed by the Russian members of the author list. Keywords: THz wave, passive imaging camera, computer processing, security screening, concealed and forbidden objects, reflected image, hand seeing, banknote seeing, ceramic floorplate, incandescent lamp.
NASA Astrophysics Data System (ADS)
Wong, Wai-Hoi; Li, Hongdi; Zhang, Yuxuan; Ramirez, Rocio; An, Shaohui; Wang, Chao; Liu, Shitao; Dong, Yun; Baghaei, Hossain
2015-10-01
We developed a high-resolution Photomultiplier-Quadrant-Sharing (PQS) PET system for human imaging. This system is made up of 24 detector panels. Each panel (bank) consists of 3 × 7 detector blocks, and each block has 16 × 16 LYSO crystals of 2.35 × 2.35 × 15.2 mm³. We used a novel detector-grinding scheme that is compatible with the PQS detector-pixel-decoding requirements to make a gapless cylindrical detector ring for maximizing detection efficiency while delivering an ultrahigh spatial resolution for a whole-body PET camera with a ring diameter of 87 cm and an axial field of view of 27.6 cm. This grinding scheme enables two adjacent gapless panels to share one row of PMTs to extend the PQS configuration beyond one panel and thus maximize the economic benefit (in PMT usage) of the PQS design. The entire detector ring has 129,024 crystals, all of which are clearly decoded using only 576 PMTs (38-mm diameter). Thus, each PMT on average decodes 224 crystals to achieve a high crystal-pitch resolution of 2.44 mm × 2.44 mm. The detector blocks were mass-produced with our slab-sandwich-slice technique using a set of optimized mirror-film patterns (between crystals) to maximize light output and achieve high spatial and timing resolution. This detection system with time-of-flight capability was placed in a human PET/CT gantry. The reconstructed image resolution of the system was about 2.87 mm using 2D filtered back-projection. The time-of-flight resolution was 473 ps. The preliminary images of phantoms and clinical studies presented in this work demonstrate the capability of this new PET/CT system to produce high-quality images.
Quantitative Approach to Failure Mode and Effect Analysis for Linear Accelerator Quality Assurance
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Daniel, Jennifer C., E-mail: jennifer.odaniel@duke.edu; Yin, Fang-Fang
Purpose: To determine clinic-specific linear accelerator quality assurance (QA) TG-142 test frequencies, to maximize physicist time efficiency and patient treatment quality. Methods and Materials: A novel quantitative approach to failure mode and effect analysis is proposed. Nine linear accelerator-years of QA records provided data on failure occurrence rates. The severity of test failure was modeled by introducing corresponding errors into head and neck intensity modulated radiation therapy treatment plans. The relative risk of daily linear accelerator QA was calculated as a function of frequency of test performance. Results: Although the failure severity was greatest for daily imaging QA (imaging vs treatment isocenter and imaging positioning/repositioning), the failure occurrence rate was greatest for output and laser testing. The composite ranking results suggest that performing output and laser tests daily, imaging versus treatment isocenter and imaging positioning/repositioning tests weekly, and optical distance indicator and jaws versus light field tests biweekly would be acceptable for non-stereotactic radiosurgery/stereotactic body radiation therapy linear accelerators. Conclusions: Failure mode and effect analysis is a useful tool to determine the relative importance of QA tests from TG-142. Because there are practical time limitations on how many QA tests can be performed, this analysis highlights which tests are the most important and suggests the frequency of testing based on each test's risk priority number.
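The ranking logic, combining a measured failure occurrence rate with a modeled severity into a composite risk score, can be sketched as follows. The test names and numbers here are illustrative placeholders, not values from the study or from TG-142.

```python
def risk_priority(tests):
    """Rank QA tests by a simple risk score: failure occurrence rate
    times modeled dosimetric severity (higher score = test more often).
    Each test is a dict with 'name', 'occurrence', and 'severity'."""
    return sorted(tests,
                  key=lambda t: t["occurrence"] * t["severity"],
                  reverse=True)

# Hypothetical inputs for illustration only:
tests = [
    {"name": "output",            "occurrence": 0.30, "severity": 2.0},
    {"name": "lasers",            "occurrence": 0.25, "severity": 2.0},
    {"name": "imaging isocenter", "occurrence": 0.05, "severity": 5.0},
    {"name": "ODI",               "occurrence": 0.04, "severity": 1.5},
]
ranked = risk_priority(tests)
```

Note how a test with high severity but low occurrence (imaging isocenter) can rank below frequently failing, moderate-severity tests, mirroring the study's conclusion that output and laser checks merit daily frequency while imaging isocenter checks can be weekly.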
Novel Developments in Instrumentation for PET Imaging
NASA Astrophysics Data System (ADS)
Karp, Joel
2013-04-01
Advances in medical imaging, in particular positron emission tomography (PET), have been based on technical developments in physics and instrumentation that have common foundations with detection systems used in other fields of physics. New detector materials are used in PET systems that maximize efficiency, timing characteristics and robustness, and which lead to improved image quality and quantitative accuracy for clinical imaging. Time of flight (TOF) techniques are now routinely used in commercial PET scanners that combine physiological imaging with anatomical imaging provided by x-ray computed tomography. Using new solid-state photo-sensors instead of traditional photo-multiplier tubes makes it possible to combine PET with magnetic resonance imaging which is a significant technical challenge, but one that is creating new opportunities for both research and clinical applications. An overview of recent advances in instrumentation, such as TOF and PET/MR will be presented, along with examples of imaging studies to demonstrate the impact on patient care and basic research of diseases.
Kishimoto, Junichi; Ohta, Yasutoshi; Kitao, Shinichiro; Watanabe, Tomomi; Ogawa, Toshihide
2018-04-01
Single-source dual-energy CT (ssDECT) allows the reconstruction of iodine density images (IDIs) from projection based computing. We hypothesized that adding adaptive statistical iterative reconstruction (ASiR) could improve image quality. The aim of our study was to evaluate the effect and determine the optimal blend percentages of ASiR for IDI of myocardial late iodine enhancement (LIE) in the evaluation of chronic myocardial infarction using ssDECT. A total of 28 patients underwent cardiac LIE using a ssDECT scanner. IDIs between 0 and 100% of ASiR contributions in 10% increments were reconstructed. The signal-to-noise ratio (SNR) of the remote myocardium and the contrast-to-noise ratio (CNR) of the infarcted myocardium were measured. Transmural extent of infarction was graded using a 5-point scale. The SNR, CNR, and transmural extent were assessed for each ASiR contribution ratio. The transmural extents were compared with MRI as a reference standard. Compared to 0% ASiR, the use of 20-100% ASiR resulted in a reduction of image noise (p < 0.01) without significant differences in the signal. Compared with 0% ASiR images, reconstruction with 100% ASiR showed the highest improvement in SNR (229%; p < 0.001) and CNR (199%; p < 0.001). ASiR above 80% showed the highest ratio (73.7%) of accurate transmural extent classification. In conclusion, ASiR intensity of 80-100% in IDIs can improve image quality without changes in signal and maximizes the accuracy of transmural extent in infarcted myocardium.
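The SNR and CNR figures of merit used above can be computed from region-of-interest statistics. A minimal sketch with synthetic ROI samples (the means, noise level, and sample sizes are assumptions, not patient data):

```python
import numpy as np

# Synthetic ROI samples standing in for remote and infarcted myocardium.
rng = np.random.default_rng(0)
remote  = rng.normal(100.0, 10.0, 500)   # remote-myocardium ROI values
infarct = rng.normal(160.0, 10.0, 500)   # infarcted-myocardium ROI values

snr = remote.mean() / remote.std()                     # signal-to-noise
cnr = (infarct.mean() - remote.mean()) / remote.std()  # contrast-to-noise
print(f"SNR = {snr:.1f}, CNR = {cnr:.1f}")
```

Iterative reconstruction mainly lowers the noise in the denominator, which is how a higher ASiR blend can raise SNR and CNR without changing the signal itself.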
Kajimura, Junko; Ito, Reiko; Manley, Nancy R; Hale, Laura P
2016-02-01
Performance of immunofluorescence staining on archival formalin-fixed paraffin-embedded human tissues is generally not considered to be feasible, primarily due to problems with tissue quality and autofluorescence. We report the development and application of procedures that allowed for the study of a unique archive of thymus tissues derived from autopsies of individuals exposed to atomic bomb radiation in Hiroshima, Japan in 1945. Multiple independent treatments were used to minimize autofluorescence and maximize fluorescent antibody signals. Treatments with NH3/EtOH and Sudan Black B were particularly useful in decreasing autofluorescent moieties present in the tissue. Deconvolution microscopy was used to further enhance the signal-to-noise ratios. Together, these techniques provide high-quality single- and dual-color fluorescent images with low background and high contrast from paraffin blocks of thymus tissue that were prepared up to 60 years ago. The resulting high-quality images allow the application of a variety of image analyses to thymus tissues that previously were not accessible. Whereas the procedures presented remain to be tested for other tissue types and archival conditions, the approach described may facilitate greater utilization of older paraffin block archives for modern immunofluorescence studies. © 2016 The Histochemical Society.
Troy, Karen L; Edwards, W Brent
2018-05-01
Quantitative CT (QCT) analysis involves the calculation of specific parameters such as bone volume and density from CT image data, and can be a powerful tool for understanding bone quality and quantity. However, without careful attention to detail during all steps of the acquisition and analysis process, data can be of poor to unusable quality. Good quality QCT for research requires meticulous attention to detail and standardization of all aspects of data collection and analysis to a degree that is uncommon in a clinical setting. Here, we review the literature to summarize practical and technical considerations for obtaining high quality QCT data, and provide examples of how each recommendation affects calculated variables. We also provide an overview of the QCT analysis technique to illustrate additional opportunities to improve data reproducibility and reliability. Key recommendations include: standardizing the scanner and data acquisition settings, minimizing image artifacts, selecting an appropriate reconstruction algorithm, and maximizing repeatability and objectivity during QCT analysis. The goal of the recommendations is to reduce potential sources of error throughout the analysis, from scan acquisition to the interpretation of results. Copyright © 2018 Elsevier Inc. All rights reserved.
Side information in coded aperture compressive spectral imaging
NASA Astrophysics Data System (ADS)
Galvis, Laura; Arguello, Henry; Lau, Daniel; Arce, Gonzalo R.
2017-02-01
Coded aperture compressive spectral imagers sense a three-dimensional cube by using two-dimensional projections of the coded and spectrally dispersed source. These imaging systems often rely on FPA detectors, SLMs, digital micromirror devices (DMDs), and dispersive elements. The use of DMDs to implement the coded apertures facilitates the capture of multiple projections, each admitting a different coded aperture pattern. The DMD makes it possible not only to collect a sufficient number of measurements for spectrally rich or spatially detailed scenes, but also to design the spatial structure of the coded apertures so as to maximize the information content of the compressive measurements. Although sparsity is the only signal characteristic usually assumed for reconstruction in compressive sensing, other forms of prior information, such as side information, have been included as a way to improve the quality of the reconstructions. This paper presents the coded aperture design in a compressive spectral imager with side information in the form of RGB images of the scene. The use of RGB images as side information in the compressive sensing architecture has two main advantages: the RGB image is used not only to improve the reconstruction quality but also to optimally design the coded apertures for the sensing process. The coded aperture design is based on the RGB scene, and thus the coded aperture structure exploits key features such as scene edges. Real reconstructions of noisy compressed measurements demonstrate the benefit of the designed coded apertures in addition to the improvement in the reconstruction quality obtained by the use of side information.
Design of a concise Féry-prism hyperspectral imaging system based on multi-configuration
NASA Astrophysics Data System (ADS)
Dong, Wei; Nie, Yun-feng; Zhou, Jin-song
2013-08-01
In order to meet the needs of spaceborne and airborne hyperspectral imaging systems for light weight, simplicity, and high spatial resolution, a novel design of a Féry-prism hyperspectral imaging system based on the Zemax multi-configuration method is presented. The structure is well arranged by analyzing optical monochromatic aberrations theoretically, and the optical layout of this design is concise. The design is founded on the Offner relay configuration, with the secondary mirror replaced by a Féry prism with curved surfaces and a reflective front face. By reflection, the light beam passes through the Féry prism twice, which improves spectral resolution and enhances image quality at the same time. The result shows that the system can achieve light weight and simplicity compared to other hyperspectral imaging systems. Composed of merely two spherical mirrors and one achromatized Féry prism that performs both dispersion and imaging functions, the structure is concise and compact. The average spectral resolution is 6.2 nm; the MTFs over the 0.45-1.00 μm spectral range are greater than 0.75 and the RMS spot radii are less than 2.4 μm; the maximal smile is less than 10% of a pixel, while the keystone is less than 2.8% of a pixel; image quality approaches the diffraction limit. The design result shows that a hyperspectral imaging system with one modified Féry prism substituting for the secondary mirror of an Offner relay configuration is feasible from the perspective of both theory and practice, and possesses the merits of a simple structure, convenient optical alignment, good image quality, high spatial and spectral resolution, and adjustable dispersive nonlinearity. The system satisfies the requirements of airborne or spaceborne hyperspectral imaging systems.
NASA Astrophysics Data System (ADS)
Sabol, John M.; Avinash, Gopal B.; Nicolas, Francois; Claus, Bernhard E. H.; Zhao, Jianguo; Dobbins, James T., III
2001-06-01
Dual-energy subtraction imaging increases the sensitivity and specificity of pulmonary nodule detection in chest radiography by reducing the contrast of overlying bone structures. Recent development of a fast, high-efficiency detector enables dual-energy imaging to be integrated into the traditional workflow. We have modified a GE RevolutionTM XQ/i chest imaging system to construct a dual-energy imaging prototype system. Here we describe the operating characteristics of this prototype and evaluate image quality. Empirical results show that the dual-energy CNR is maximized if the dose is approximately equal for both high and low energy exposures. Given the high detector DQE, and allocation of dose between the two views, we can acquire dual-energy PA and conventional lateral images with total dose equivalent to a conventional two-view film chest exam. Calculations have shown that the dual-exposure technique has superior CNR and tissue cancellation compared with single-exposure CR systems. Clinical images obtained on a prototype dual-energy imaging system show excellent tissue contrast cancellation, low noise, and modest motion artefacts. In summary, a prototype dual-energy system has been constructed which enables rapid, dual-exposure imaging of the chest using a commercially available high-efficiency, flat-panel x-ray detector. The quality of the clinical images generated with this prototype exceeds that of CR techniques and demonstrates the potential for improved detection and characterization of lung disease through dual-energy imaging.
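The bone cancellation underlying dual-energy subtraction can be illustrated with a weighted log subtraction. The sketch below uses hypothetical attenuation coefficients in a two-material model; the weight w is chosen so the bone term cancels exactly:

```python
import math

# Two-material model with hypothetical linear attenuation coefficients (1/cm).
MU_BONE = {"low": 0.60, "high": 0.30}
MU_SOFT = {"low": 0.25, "high": 0.20}

def intensity(energy, t_bone, t_soft):
    """Transmitted intensity (normalized) for given thicknesses in cm."""
    return math.exp(-(MU_BONE[energy] * t_bone + MU_SOFT[energy] * t_soft))

# Weight chosen so the bone term cancels in the log subtraction.
W = MU_BONE["high"] / MU_BONE["low"]

def soft_tissue_signal(t_bone, t_soft):
    """Log-subtracted 'soft tissue' image value for one ray."""
    return -(math.log(intensity("high", t_bone, t_soft))
             - W * math.log(intensity("low", t_bone, t_soft)))

# Changing bone thickness leaves the subtracted signal unchanged:
print(soft_tissue_signal(2.0, 10.0), soft_tissue_signal(5.0, 10.0))
```

Because the bone term cancels, the subtracted signal depends only on soft-tissue thickness; in practice the weight is tuned empirically, and noise from both exposures propagates into the subtracted image.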
Improving image quality in laboratory x-ray phase-contrast imaging
NASA Astrophysics Data System (ADS)
De Marco, F.; Marschner, M.; Birnbacher, L.; Viermetz, M.; Noël, P.; Herzen, J.; Pfeiffer, F.
2017-03-01
Grating-based X-ray phase-contrast (gbPC) imaging is known to provide significant benefits for biomedical imaging. To investigate these benefits, a high-sensitivity gbPC micro-CT setup for small (≈5 cm) biological samples has been constructed. Unfortunately, high differential-phase sensitivity leads to an increased magnitude of data-processing artifacts, limiting the quality of tomographic reconstructions. Most importantly, processing of phase-stepping data with incorrect stepping positions can introduce artifacts resembling Moiré fringes into the projections. Additionally, the focal spot size of the X-ray source limits the resolution of tomograms. Here we present a set of algorithms to minimize artifacts, increase resolution, and improve the visual impression of projections and tomograms from the examined setup. We assessed two algorithms for artifact reduction: firstly, a correction algorithm exploiting correlations between the artifacts and the differential-phase data was developed and tested; artifacts were reliably removed without compromising image data. Secondly, we implemented a new algorithm for flat-field selection, which was shown to exclude flat-fields with strong artifacts. Both procedures successfully improved the image quality of projections and tomograms. Deconvolution of all projections of a CT scan can minimize blurring introduced by the finite size of the X-ray source focal spot. Application of the Richardson-Lucy deconvolution algorithm to gbPC-CT projections resulted in an improved resolution of phase-contrast tomograms. Additionally, we found that nearest-neighbor interpolation of projections can improve the visual impression of very small features in phase-contrast tomograms. In conclusion, we achieved an increase in image resolution and quality for the investigated setup, which may lead to improved detection of very small sample features, thereby maximizing the setup's utility.
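Richardson-Lucy deconvolution, mentioned above for reducing focal-spot blur, can be sketched in one dimension. The projection and PSF below are synthetic, not data from the described setup:

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=100):
    """Multiplicative Richardson-Lucy updates in 1D."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1]
    estimate = np.full_like(blurred, blurred.mean())
    for _ in range(n_iter):
        reblurred = np.convolve(estimate, psf, mode="same")
        ratio = blurred / reblurred.clip(1e-12)
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

truth = np.zeros(64)
truth[30:34] = 1.0                       # a small, sharp feature
psf = np.array([0.25, 0.5, 0.25])        # finite focal spot as a blur kernel
blurred = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf)
```

On noisy data the iteration count has to be limited (or the estimate regularized), since Richardson-Lucy amplifies noise as it sharpens.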
Can endurance training improve physical capacity and quality of life in young Fontan patients?
Hedlund, Eva R; Lundell, Bo; Söderström, Liselott; Sjöberg, Gunnar
2018-03-01
Children after Fontan palliation have reduced exercise capacity and quality of life. Our aim was to study whether endurance training could improve physical capacity and quality of life in Fontan patients. Fontan patients (n=30) and healthy age- and gender-matched control subjects (n=25) performed a 6-minute walk test at submaximal capacity and a maximal cycle ergometer test. Quality of life was assessed with Pediatric Quality of Life Inventory Version 4.0 questionnaires for children and parents. All tests were repeated after a 12-week endurance training programme and after 1 year. Patients had decreased submaximal and maximal exercise capacity (maximal oxygen uptake 35.0±5.1 ml/min per kg versus 43.7±8.4 ml/min per kg, p<0.001) and reported a lower quality of life score (70.9±9.9 versus 85.7±8.0, p<0.001) than controls. After training, patients improved their submaximal exercise capacity in a 6-minute walk test (from 590.7±65.5 m to 611.8±70.9 m, p<0.05) and reported a higher quality of life (p<0.01), but did not improve maximal exercise capacity. At follow-up, submaximal exercise capacity had increased further and the improved quality of life was sustained. The controls improved their maximal exercise capacity (p<0.05), but not submaximal exercise capacity or quality of life after training. At follow-up, improvement of maximal exercise capacity was sustained. We believe that an individualised endurance training programme improves submaximal exercise capacity and quality of life in Fontan patients, and the effect on quality of life appears to be long-lasting.
The Dark Energy Survey Image Processing Pipeline
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morganson, E.; et al.
The Dark Energy Survey (DES) is a five-year optical imaging campaign with the goal of understanding the origin of cosmic acceleration. DES performs a 5000 square degree survey of the southern sky in five optical bands (g,r,i,z,Y) to a depth of ~24th magnitude. Contemporaneously, DES performs a deep, time-domain survey in four optical bands (g,r,i,z) over 27 square degrees. DES exposures are processed nightly with an evolving data reduction pipeline and evaluated for image quality to determine if they need to be retaken. Difference imaging and transient source detection are also performed in the time domain component nightly. On a bi-annual basis, DES exposures are reprocessed with a refined pipeline and coadded to maximize imaging depth. Here we describe the DES image processing pipeline in support of DES science, as a reference for users of archival DES data, and as a guide for future astronomical surveys.
Ocean wavenumber estimation from wave-resolving time series imagery
Plant, N.G.; Holland, K.T.; Haller, M.C.
2008-01-01
We review several approaches that have been used to estimate ocean surface gravity wavenumbers from wave-resolving remotely sensed image sequences. Two fundamentally different approaches that utilize these data exist. A power spectral density approach identifies wavenumbers where image intensity variance is maximized. Alternatively, a cross-spectral correlation approach identifies wavenumbers where intensity coherence is maximized. We develop a solution to the latter approach based on a tomographic analysis that utilizes a nonlinear inverse method. The solution is tolerant to noise and other forms of sampling deficiency and can be applied to arbitrary sampling patterns, as well as to full-frame imagery. The solution includes error predictions that can be used for data retrieval quality control and for evaluating sample designs. A quantitative analysis of the intrinsic resolution of the method indicates that the cross-spectral correlation fitting improves resolution by a factor of about ten times as compared to the power spectral density fitting approach. The resolution analysis also provides a rule of thumb for nearshore bathymetry retrievals: short-scale cross-shore patterns may be resolved if they are about ten times longer than the average water depth over the pattern. This guidance can be applied to sample design to constrain both the sensor array (image resolution) and the analysis array (tomographic resolution). © 2008 IEEE.
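The power-spectral-density approach described above picks the wavenumber at which image-intensity variance is maximized. A minimal 1D sketch on a synthetic transect (grid spacing, wavelength, and noise level are illustrative):

```python
import numpy as np

# Synthetic 1D intensity transect with a known 50 m wavelength plus noise.
dx = 1.0                                  # metres per sample (assumed)
x = np.arange(512) * dx
k_true = 2 * np.pi / 50.0
rng = np.random.default_rng(1)
signal = np.cos(k_true * x) + 0.3 * rng.normal(size=x.size)

# Wavenumber at which the spectral (variance) density is maximized.
power = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(x.size, d=dx)     # cycles per metre
peak = power[1:].argmax() + 1             # skip the DC bin
k_est = 2 * np.pi * freqs[peak]
```

The estimate is quantized to the FFT bin spacing, one reason the cross-spectral correlation approach with a nonlinear inverse can resolve finer wavenumber structure.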
An automated method for tracking clouds in planetary atmospheres
NASA Astrophysics Data System (ADS)
Luz, D.; Berry, D. L.; Roos-Serote, M.
2008-05-01
We present an automated method for cloud tracking which can be applied to planetary images. The method is based on a digital correlator which compares two or more consecutive images and identifies patterns by maximizing correlations between image blocks. This approach bypasses the problem of feature detection. Four variations of the algorithm are tested on real cloud images of Jupiter's white ovals from the Galileo mission, previously analyzed in Vasavada et al. [Vasavada, A.R., Ingersoll, A.P., Banfield, D., Bell, M., Gierasch, P.J., Belton, M.J.S., Orton, G.S., Klaasen, K.P., Dejong, E., Breneman, H.H., Jones, T.J., Kaufman, J.M., Magee, K.P., Senske, D.A. 1998. Galileo imaging of Jupiter's atmosphere: the great red spot, equatorial region, and white ovals. Icarus, 135, 265, doi:10.1006/icar.1998.5984]. Direct correlation, using the sum of squared differences between image radiances as a distance estimator (baseline case), yields displacement vectors very similar to this previous analysis. Combining this distance estimator with the method of order ranks results in a technique which is more robust in the presence of outliers and noise and of better quality. Finally, we introduce a distance metric which, combined with order ranks, provides results of similar quality to the baseline case and is faster. The new approach can be applied to data from a number of space-based imaging instruments with a non-negligible gain in computing time.
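The baseline correlator can be sketched as sum-of-squared-differences block matching; the frames below are synthetic, with a known shift standing in for cloud motion:

```python
import numpy as np

# Two synthetic "cloud" frames related by a known integer shift.
rng = np.random.default_rng(2)
frame1 = rng.random((40, 40))
true_shift = (3, 5)                               # (dy, dx)
frame2 = np.roll(frame1, true_shift, axis=(0, 1))

# SSD block matching: minimize the sum of squared differences.
y0, y1, x0, x1 = 10, 20, 10, 20                   # block in frame1
block = frame1[y0:y1, x0:x1]
best, best_ssd = None, np.inf
for dy in range(-6, 7):
    for dx in range(-6, 7):
        candidate = frame2[y0 + dy:y1 + dy, x0 + dx:x1 + dx]
        ssd = ((block - candidate) ** 2).sum()
        if ssd < best_ssd:
            best, best_ssd = (dy, dx), ssd
print(best)
```

Replacing the raw radiances with their order ranks before the comparison gives the outlier-robust variant the authors found to be of better quality.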
Model-based sensor-less wavefront aberration correction in optical coherence tomography.
Verstraete, Hans R G W; Wahls, Sander; Kalkman, Jeroen; Verhaegen, Michel
2015-12-15
Several sensor-less wavefront aberration correction methods that correct nonlinear wavefront aberrations by maximizing the optical coherence tomography (OCT) signal are tested on an OCT setup. A conventional coordinate search method is compared to two model-based optimization methods. The first model-based method builds on the well-known NEWUOA optimization algorithm and utilizes a quadratic model. The second model-based method (DONE) is new and utilizes a random multidimensional Fourier-basis expansion. The model-based algorithms achieve lower wavefront errors with up to ten times fewer measurements. Furthermore, the newly proposed DONE method significantly outperforms the NEWUOA method. The DONE algorithm is tested on OCT images and shows a significantly improved image quality.
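A conventional coordinate search of the kind used as the baseline here optimizes one aberration coefficient at a time while monitoring an image-quality metric. In this sketch a simple quadratic stands in for the real OCT signal metric, and the mode count and step size are arbitrary assumptions:

```python
# A quadratic stands in for the OCT image-quality metric; its optimum
# (unknown to the search) is at the coefficients below.
OPTIMUM = (0.3, -0.2, 0.5)

def metric(coeffs):
    return -sum((c - o) ** 2 for c, o in zip(coeffs, OPTIMUM))

def coordinate_search(n_modes=3, step=0.1, n_sweeps=20):
    """Adjust one aberration coefficient at a time, keeping improvements."""
    coeffs = [0.0] * n_modes
    for _ in range(n_sweeps):
        for i in range(n_modes):
            for delta in (-step, step):
                trial = list(coeffs)
                trial[i] += delta
                if metric(trial) > metric(coeffs):
                    coeffs = trial
    return coeffs

print(coordinate_search())
```

Each trial costs one measurement, which is why model-based methods that fit a surrogate to past evaluations can reach the same wavefront error with far fewer measurements.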
A quantitative reconstruction software suite for SPECT imaging
NASA Astrophysics Data System (ADS)
Namías, Mauro; Jeraj, Robert
2017-11-01
Quantitative single photon emission computed tomography (SPECT) imaging allows for measurement of activity concentrations of a given radiotracer in vivo. Although SPECT has usually been perceived as non-quantitative by the medical community, the introduction of accurate CT based attenuation correction and scatter correction from hybrid SPECT/CT scanners has enabled SPECT systems to be as quantitative as Positron Emission Tomography (PET) systems. We implemented a software suite to reconstruct quantitative SPECT images from hybrid or dedicated SPECT systems with a separate CT scanner. Attenuation, scatter and collimator response corrections were included in an Ordered Subset Expectation Maximization (OSEM) algorithm. A novel scatter fraction estimation technique was introduced. The SPECT/CT system was calibrated with a cylindrical phantom and quantitative accuracy was assessed with an anthropomorphic phantom and a NEMA/IEC image quality phantom. Accurate activity measurements were achieved at an organ level. This software suite helps increase the quantitative accuracy of SPECT scanners.
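At the core of OSEM is the multiplicative MLEM update, applied to ordered subsets of the projections. A toy sketch with a 3 × 3 system matrix and a single subset containing all projections; the matrix and counts are illustrative, and in a real implementation the corrections named above (attenuation, scatter, collimator response) enter through the system model:

```python
import numpy as np

# Toy system: 3 detector bins viewing 3 voxels (values are invented).
A = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.5],
              [0.5, 0.0, 1.0]])      # system (projection) matrix
x_true = np.array([2.0, 1.0, 3.0])  # true activity
y = A @ x_true                      # noiseless measured projections

x = np.ones(3)                      # uniform initial estimate
sensitivity = A.sum(axis=0)         # back-projection of ones
for _ in range(500):
    x *= (A.T @ (y / (A @ x))) / sensitivity
```

The update forward-projects the current estimate, compares it to the measured counts, and back-projects the ratio; splitting the projections into subsets (true OSEM) accelerates convergence by updating after each subset.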
Shi, Ximin; Li, Nan; Ding, Haiyan; Dang, Yonghong; Hu, Guilan; Liu, Shuai; Cui, Jie; Zhang, Yue; Li, Fang; Zhang, Hui; Huo, Li
2018-01-01
Kinetic modeling of dynamic 11C-acetate PET imaging provides quantitative information for myocardium assessment. The quality and quantitation of PET images are known to be dependent on PET reconstruction methods. This study aims to investigate the impacts of reconstruction algorithms on the quantitative analysis of dynamic 11C-acetate cardiac PET imaging. Suspected alcoholic cardiomyopathy patients (N = 24) underwent 11C-acetate dynamic PET imaging after a low-dose CT scan. PET images were reconstructed using four algorithms: filtered backprojection (FBP), ordered subsets expectation maximization (OSEM), OSEM with time-of-flight (TOF), and OSEM with both time-of-flight and point-spread-function (TPSF). Standardized uptake values (SUVs) at different time points were compared among images reconstructed using the four algorithms. Time-activity curves (TACs) in the myocardium and ventricular blood pools were generated from the dynamic image series. Kinetic parameters K1 and k2 were derived using a 1-tissue-compartment model for kinetic modeling of cardiac flow from 11C-acetate PET images. Significant image quality improvement was found in the images reconstructed using the iterative OSEM-type algorithms (OSEM, TOF, and TPSF) compared with FBP. However, no statistical differences in SUVs were observed among the four reconstruction methods at the selected time points. Kinetic parameters K1 and k2 also exhibited no statistical difference among the four reconstruction algorithms in terms of mean value and standard deviation. However, in the correlation analysis, OSEM reconstruction presented a relatively higher residual in correlation with FBP reconstruction compared with TOF and TPSF reconstruction, and TOF and TPSF reconstruction were highly correlated with each other. All the tested reconstruction algorithms performed similarly for quantitative analysis of 11C-acetate cardiac PET imaging.
TOF and TPSF yielded highly consistent kinetic parameter results with superior image quality compared with FBP. OSEM was relatively less reliable. Both TOF and TPSF were recommended for cardiac 11 C-acetate kinetic analysis.
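The 1-tissue-compartment model used above relates the tissue time-activity curve to the plasma input by Ct(t) = K1 · exp(-k2·t) ⊛ Cp(t). A minimal sketch that simulates a TAC and recovers K1 and k2 by a coarse grid search (the input function and true parameters are invented, not 11C-acetate data):

```python
import numpy as np

# Simulated 1-tissue-compartment kinetics: Ct = K1 * [exp(-k2 t) conv Cp].
dt = 0.1
t = np.arange(0.0, 30.0, dt)            # minutes
cp = 10.0 * t * np.exp(-t)              # synthetic plasma input function

def tissue_curve(K1, k2):
    return K1 * np.convolve(cp, np.exp(-k2 * t))[:t.size] * dt

ct = tissue_curve(0.6, 0.15)            # "measured" tissue TAC

# Coarse grid search for the parameters minimizing the squared misfit.
grid = [(K1, k2)
        for K1 in np.arange(0.20, 1.01, 0.05)
        for k2 in np.arange(0.05, 0.31, 0.05)]
K1_hat, k2_hat = min(grid, key=lambda p: ((tissue_curve(*p) - ct) ** 2).sum())
print(K1_hat, k2_hat)
```

In practice a nonlinear least-squares fit replaces the grid search, and the measured blood-pool TAC (with spill-over corrections) serves as the input function.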
Application and assessment of a robust elastic motion correction algorithm to dynamic MRI.
Herrmann, K-H; Wurdinger, S; Fischer, D R; Krumbein, I; Schmitt, M; Hermosillo, G; Chaudhuri, K; Krishnan, A; Salganicoff, M; Kaiser, W A; Reichenbach, J R
2007-01-01
The purpose of this study was to assess the performance of a new motion correction algorithm. Twenty-five dynamic MR mammography (MRM) data sets and 25 contrast-enhanced three-dimensional peripheral MR angiographic (MRA) data sets which were affected by patient motion of varying severity were selected retrospectively from routine examinations. Anonymized data were registered by a new experimental elastic motion correction algorithm. The algorithm works by computing a similarity measure for the two volumes that takes into account expected signal changes due to the presence of a contrast agent while penalizing other signal changes caused by patient motion. A conjugate gradient method is used to find the best possible set of motion parameters that maximizes the similarity measures across the entire volume. Images before and after correction were visually evaluated and scored by experienced radiologists with respect to reduction of motion, improvement of image quality, disappearance of existing lesions or creation of artifactual lesions. It was found that the correction improves image quality (76% for MRM and 96% for MRA) and diagnosability (60% for MRM and 96% for MRA).
Monochromatic-beam-based dynamic X-ray microtomography based on OSEM-TV algorithm.
Xu, Liang; Chen, Rongchang; Yang, Yiming; Deng, Biao; Du, Guohao; Xie, Honglan; Xiao, Tiqiao
2017-01-01
Monochromatic-beam-based dynamic X-ray computed microtomography (CT) was developed to observe the evolution of microstructure inside samples. However, the low flux density results in low efficiency in data collection. To increase efficiency, reducing the number of projections is a practical solution; however, this degrades image reconstruction quality when the traditional filtered back projection (FBP) algorithm is used. In this study, an iterative reconstruction method using an ordered subset expectation maximization-total variation (OSEM-TV) algorithm was employed to solve this problem. The simulated results demonstrated that the normalized mean square error of the image slices reconstructed by the OSEM-TV algorithm was about 1/4 of that by FBP. Experimental results also demonstrated that the density resolution of OSEM-TV was high enough to resolve different materials with fewer than 100 projections. As a result, with the introduction of OSEM-TV, monochromatic-beam-based dynamic X-ray microtomography becomes practicable for quantitative and non-destructive analysis of the evolution of microstructure, with acceptable efficiency in data collection and reconstructed image quality.
NASA Astrophysics Data System (ADS)
Cheng, Xiaoyin; Bayer, Christine; Maftei, Constantin-Alin; Astner, Sabrina T.; Vaupel, Peter; Ziegler, Sibylle I.; Shi, Kuangyu
2014-01-01
Compared to indirect methods, direct parametric image reconstruction (PIR) has the advantage of high quality and low statistical errors. However, it is not yet clear if this improvement in quality is beneficial for physiological quantification. This study aimed to evaluate direct PIR for the quantification of tumor hypoxia using the hypoxic fraction (HF) assessed from immunohistological data as a physiological reference. Sixteen mice with xenografted human squamous cell carcinomas were scanned with dynamic [18F]FMISO PET. Afterward, tumors were sliced and stained with H&E and the hypoxia marker pimonidazole. The hypoxic signal was segmented using k-means clustering and HF was specified as the ratio of the hypoxic area over the viable tumor area. The parametric Patlak slope images were obtained by indirect voxel-wise modeling on reconstructed images using filtered back projection and ordered-subset expectation maximization (OSEM) and by direct PIR (e.g., parametric-OSEM, POSEM). The mean and maximum Patlak slopes of the tumor area were investigated and compared with HF. POSEM resulted in generally higher correlations between slope and HF among the investigated methods. A strategy for the delineation of the hypoxic tumor volume based on thresholding parametric images at half maximum of the slope is recommended based on the results of this study.
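The hypoxic fraction above was obtained by k-means clustering of the stain signal. A minimal two-cluster sketch on synthetic intensity samples (the class means, spreads, and pixel counts are assumptions, not pimonidazole data):

```python
import numpy as np

# Synthetic stain intensities: a normoxic and a hypoxic population.
rng = np.random.default_rng(3)
viable  = rng.normal(50.0, 5.0, 700)    # viable (normoxic) pixels
hypoxic = rng.normal(120.0, 5.0, 300)   # pimonidazole-positive pixels
pixels = np.concatenate([viable, hypoxic])

# Plain two-centroid k-means on the 1D intensities.
centroids = np.array([pixels.min(), pixels.max()])
for _ in range(20):
    labels = np.abs(pixels[:, None] - centroids).argmin(axis=1)
    centroids = np.array([pixels[labels == k].mean() for k in (0, 1)])

# Hypoxic fraction: hypoxic area over viable tumor area.
hf = (labels == 1).sum() / (labels == 0).sum()
print(round(hf, 3))
```

With well-separated populations the cluster boundary lands between the two modes; on real stained sections, necrotic regions must first be excluded so that only viable tumor enters the denominator.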
Novel multimodality segmentation using level sets and Jensen-Rényi divergence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Markel, Daniel, E-mail: daniel.markel@mail.mcgill.ca; Zaidi, Habib; Geneva Neuroscience Center, Geneva University, CH-1205 Geneva
2013-12-15
Purpose: Positron emission tomography (PET) is playing an increasing role in radiotherapy treatment planning. However, despite progress, robust algorithms for PET and multimodal image segmentation are still lacking, especially if the algorithm were extended to image-guided and adaptive radiotherapy (IGART). This work presents a novel multimodality segmentation algorithm using the Jensen-Rényi divergence (JRD) to evolve the geometric level set contour. The algorithm offers improved noise tolerance which is particularly applicable to segmentation of regions found in PET and cone-beam computed tomography. Methods: A steepest gradient ascent optimization method is used in conjunction with the JRD and a level set active contour to iteratively evolve a contour to partition an image based on statistical divergence of the intensity histograms. The algorithm is evaluated using PET scans of pharyngolaryngeal squamous cell carcinoma with the corresponding histological reference. The multimodality extension of the algorithm is evaluated using 22 PET/CT scans of patients with lung carcinoma and a physical phantom scanned under varying image quality conditions. Results: The average concordance index (CI) of the JRD segmentation of the PET images was 0.56 with an average classification error of 65%. The segmentation of the lung carcinoma images had a maximum diameter relative error of 63%, 19.5%, and 14.8% when using CT, PET, and combined PET/CT images, respectively. The estimated maximal diameters of the gross tumor volume (GTV) showed a high correlation with the macroscopically determined maximal diameters, with an R² value of 0.85 and 0.88 using the PET and PET/CT images, respectively. Results from the physical phantom show that the JRD is more robust to image noise compared to mutual information and region growing. Conclusions: The JRD has shown improved noise tolerance compared to mutual information for the purpose of PET image segmentation. 
Presented is a flexible framework for multimodal image segmentation that can incorporate a large number of inputs efficiently for IGART.
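The Jensen-Rényi divergence between the intensity histograms inside and outside the evolving contour can be sketched directly from its definition: the Rényi entropy of the mixture minus the weighted entropies. The order α, weights, and histograms below are illustrative; the sketch uses α < 1, where the Rényi entropy is concave and the divergence is guaranteed nonnegative:

```python
import numpy as np

def renyi_entropy(p, alpha=0.5):
    """Rényi entropy of a (normalized) histogram."""
    p = p / p.sum()
    return np.log((p ** alpha).sum()) / (1.0 - alpha)

def jensen_renyi(p, q, w=0.5, alpha=0.5):
    """JRD = H_alpha of the mixture minus the weighted entropies."""
    p, q = p / p.sum(), q / q.sum()
    mixture = w * p + (1.0 - w) * q
    return renyi_entropy(mixture, alpha) - (
        w * renyi_entropy(p, alpha) + (1.0 - w) * renyi_entropy(q, alpha))

inside  = np.array([1.0, 2.0, 30.0, 40.0, 2.0])   # histogram inside contour
outside = np.array([35.0, 30.0, 3.0, 1.0, 1.0])   # histogram outside

print(jensen_renyi(inside, outside))
```

The divergence is zero when the two histograms coincide and grows as they separate, so gradient ascent on it drives the level set toward a partition with maximally distinct intensity statistics.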
Novel multimodality segmentation using level sets and Jensen-Rényi divergence.
Markel, Daniel; Zaidi, Habib; El Naqa, Issam
2013-12-01
Positron emission tomography (PET) is playing an increasing role in radiotherapy treatment planning. However, despite progress, robust algorithms for PET and multimodal image segmentation are still lacking, especially if the algorithm were extended to image-guided and adaptive radiotherapy (IGART). This work presents a novel multimodality segmentation algorithm using the Jensen-Rényi divergence (JRD) to evolve the geometric level set contour. The algorithm offers improved noise tolerance which is particularly applicable to segmentation of regions found in PET and cone-beam computed tomography. A steepest gradient ascent optimization method is used in conjunction with the JRD and a level set active contour to iteratively evolve a contour to partition an image based on statistical divergence of the intensity histograms. The algorithm is evaluated using PET scans of pharyngolaryngeal squamous cell carcinoma with the corresponding histological reference. The multimodality extension of the algorithm is evaluated using 22 PET/CT scans of patients with lung carcinoma and a physical phantom scanned under varying image quality conditions. The average concordance index (CI) of the JRD segmentation of the PET images was 0.56 with an average classification error of 65%. The segmentation of the lung carcinoma images had a maximum diameter relative error of 63%, 19.5%, and 14.8% when using CT, PET, and combined PET/CT images, respectively. The estimated maximal diameters of the gross tumor volume (GTV) showed a high correlation with the macroscopically determined maximal diameters, with a R(2) value of 0.85 and 0.88 using the PET and PET/CT images, respectively. Results from the physical phantom show that the JRD is more robust to image noise compared to mutual information and region growing. The JRD has shown improved noise tolerance compared to mutual information for the purpose of PET image segmentation. 
Presented is a flexible framework for multimodal image segmentation that can incorporate a large number of inputs efficiently for IGART.
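The histogram-divergence objective at the core of the method can be written down directly. Below is a minimal Python sketch of the Jensen-Rényi divergence between two normalized intensity histograms, assuming equal mixture weights and a single α parameter; the level-set evolution that the paper couples to this quantity is omitted:

```python
import math

def renyi_entropy(p, alpha):
    # Rényi entropy H_a(p) = log(sum_i p_i^a) / (1 - a), for a != 1
    return math.log(sum(pi ** alpha for pi in p if pi > 0)) / (1.0 - alpha)

def jensen_renyi(p, q, alpha=0.5):
    # JRD: entropy of the equal-weight mixture minus the mean of the entropies
    m = [(pi + qi) / 2.0 for pi, qi in zip(p, q)]
    return renyi_entropy(m, alpha) - 0.5 * (renyi_entropy(p, alpha)
                                            + renyi_entropy(q, alpha))
```

Identical histograms give zero divergence and disjoint histograms a strictly positive value, which is what the steepest gradient ascent drives the contour to maximize.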
Güldner, C; Ningo, A; Voigt, J; Diogo, I; Heinrichs, J; Weber, R; Wilhelm, T; Fiebich, M
2013-03-01
More than 10 years ago, cone-beam computed tomography (CBCT) was introduced in ENT radiology. Until now, the focus of research has been to evaluate the clinical limits of this technique. The aim of this work is the evaluation of specific doses and the identification of potential optimizations in the performance of CBCT of the paranasal sinuses. Based on different tube parameters (tube current, tube voltage, and rotation angles), images of the nose and the paranasal sinuses were taken on a phantom head with the Accu-I-tomo F17 (Morita, Kyoto, Japan). The doses applied to the lens and parotid gland were measured with OSL dosimetry. The imaging quality was evaluated by independent observers. All datasets were reviewed according to a checklist of surgically important anatomic structures. Even for the lowest radiation exposure (4 mA, 76 kV, 180°, computed tomography dose index (CTDI) = 1.8 mGy), the imaging quality was sufficient. However, a significant reduction of imaging quality could be seen, so a reliable setting was chosen: 4 mA, 84 kV, and a 180° rotation angle (CTDI = 2.4 mGy). With this combination, a reduction of 92% in lens dose and of 77% in parotid gland dose was observed in comparison to the maximal possible settings (8 mA, 90 kV, 360°, CTDI = 10.9 mGy). There is potential for optimization in CBCT. Changing the rotation angle (180° instead of 360°) leads to a dose reduction of 50%. Furthermore, from a clinical point of view, a relevant dose reduction is possible in cases of chronic rhinosinusitis. Therefore, it is necessary to intensify the interdisciplinary discussion about the disease-specific required imaging quality.
FIR filters for hardware-based real-time multi-band image blending
NASA Astrophysics Data System (ADS)
Popovic, Vladan; Leblebici, Yusuf
2015-02-01
Creating panoramic images has become a popular feature in modern smart phones, tablets, and digital cameras. A user can create a 360 degree field-of-view photograph from only several images. Quality of the resulting image is related to the number of source images, their brightness, and the algorithm used for their stitching and blending. One of the algorithms that provides excellent results in terms of background color uniformity and reduction of ghosting artifacts is multi-band blending. The algorithm relies on decomposition of the image into multiple frequency bands using a dyadic filter bank; hence, the results are also highly dependent on the filter bank used. In this paper, we analyze the performance of FIR filters used for multi-band blending. We present a set of five filters that showed the best results in both the literature and our experiments. The set includes a Gaussian filter, biorthogonal wavelets, and custom-designed maximally flat and equiripple FIR filters. The presented filter comparison is based on several no-reference metrics for image quality. We conclude that the 5/3 biorthogonal wavelet produces the best results on average, especially considering its short length. Furthermore, we propose a real-time FPGA implementation of the blending algorithm, using a 2D non-separable systolic filtering scheme. Its pipeline architecture requires no hardware multipliers and achieves very high operating frequencies. The implemented system is able to process 91 fps at 1080p (1920×1080) image resolution.
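The dyadic decomposition and per-band mixing can be sketched in 1-D. The snippet below uses a simple [1, 2, 1]/4 low-pass as a stand-in for the filter banks compared in the paper (Gaussian, 5/3 biorthogonal, etc.); the kernel choice and the level count are assumptions, and real implementations work in 2-D with downsampling:

```python
def smooth(x):
    # dyadic low-pass with replicated edges: [1, 2, 1]/4 kernel
    pad = [x[0]] + list(x) + [x[-1]]
    return [(pad[i - 1] + 2.0 * pad[i] + pad[i + 1]) / 4.0
            for i in range(1, len(x) + 1)]

def blend(a, b, mask, levels=3):
    # multi-band blending: each detail band of the two signals is mixed with
    # the current (progressively smoothed) mask, so low frequencies transition
    # gradually while high frequencies switch sharply, reducing ghosting
    out = [0.0] * len(a)
    m = list(mask)
    for _ in range(levels):
        la, lb, ms = smooth(a), smooth(b), smooth(m)
        da = [ai - li for ai, li in zip(a, la)]   # detail band of a
        db = [bi - li for bi, li in zip(b, lb)]   # detail band of b
        out = [o + mi * d1 + (1.0 - mi) * d2
               for o, mi, d1, d2 in zip(out, m, da, db)]
        a, b, m = la, lb, ms
    # add the final low-pass residual, mixed with the smoothest mask
    return [o + mi * ai + (1.0 - mi) * bi
            for o, mi, ai, bi in zip(out, m, a, b)]
```

Blending a signal with itself reconstructs it exactly, which is the filter-bank perfect-reconstruction property the paper's metrics probe.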
Bayesian image reconstruction for improving detection performance of muon tomography.
Wang, Guobao; Schultz, Larry J; Qi, Jinyi
2009-05-01
Muon tomography is a novel technology that is being developed for detecting high-Z materials in vehicles or cargo containers. Maximum likelihood methods have been developed for reconstructing the scattering density image from muon measurements. However, the instability of maximum likelihood estimation often results in noisy images and low detectability of high-Z targets. In this paper, we propose using regularization to improve the image quality of muon tomography. We formulate the muon reconstruction problem in a Bayesian framework by introducing a prior distribution on scattering density images. An iterative shrinkage algorithm is derived to maximize the log posterior distribution. At each iteration, the algorithm obtains the maximum a posteriori update by shrinking an unregularized maximum likelihood update. Inverse quadratic shrinkage functions are derived for generalized Laplacian priors and inverse cubic shrinkage functions are derived for generalized Gaussian priors. Receiver operating characteristic studies using simulated data demonstrate that the Bayesian reconstruction can greatly improve the detection performance of muon tomography.
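The shrink-then-update structure is easy to illustrate. The stand-in below uses plain soft shrinkage, which corresponds to a simple Laplacian prior exp(-beta*x) under a quadratic likelihood surrogate; this is an assumed simplification, since the paper derives inverse quadratic and inverse cubic shrinkage functions for its generalized Laplacian and generalized Gaussian priors:

```python
def map_shrink(u, beta, sigma2):
    # One voxel of the iterative shrinkage step: take the unregularized ML
    # update u, shrink it toward zero by beta * sigma2 (prior strength times
    # the likelihood-surrogate variance), and clip at zero so the scattering
    # density stays nonnegative.
    return max(u - beta * sigma2, 0.0)
```

At each iteration the algorithm computes the ML update for every voxel and passes it through such a shrinkage function, which is what suppresses the noise that plain maximum likelihood leaves in the image.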
Application of the EM algorithm to radiographic images.
Brailean, J C; Little, D; Giger, M L; Chen, C T; Sullivan, B J
1992-01-01
The expectation maximization (EM) algorithm has received considerable attention in the area of positron emitted tomography (PET) as a restoration and reconstruction technique. In this paper, the restoration capabilities of the EM algorithm when applied to radiographic images is investigated. This application does not involve reconstruction. The performance of the EM algorithm is quantitatively evaluated using a "perceived" signal-to-noise ratio (SNR) as the image quality metric. This perceived SNR is based on statistical decision theory and includes both the observer's visual response function and a noise component internal to the eye-brain system. For a variety of processing parameters, the relative SNR (ratio of the processed SNR to the original SNR) is calculated and used as a metric to compare quantitatively the effects of the EM algorithm with two other image enhancement techniques: global contrast enhancement (windowing) and unsharp mask filtering. The results suggest that the EM algorithm's performance is superior when compared to unsharp mask filtering and global contrast enhancement for radiographic images which contain objects smaller than 4 mm.
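For a known PSF, the EM (Richardson-Lucy) restoration iteration has a compact multiplicative form. Below is a minimal 1-D sketch with an assumed 3-tap symmetric PSF and replicated edges; the paper works on 2-D radiographs and evaluates a perceived SNR, which is not modeled here:

```python
def em_restore(blurred, psf_center_weight=0.6, iters=20):
    # 1-D Richardson-Lucy (EM) deconvolution: x <- x * H^T(y / Hx).
    # The update is multiplicative, so positivity is preserved automatically.
    w = psf_center_weight
    side = (1.0 - w) / 2.0
    def conv(v):
        pad = [v[0]] + list(v) + [v[-1]]          # replicate edges
        return [side * pad[i - 1] + w * pad[i] + side * pad[i + 1]
                for i in range(1, len(v) + 1)]
    x = [max(v, 1e-9) for v in blurred]           # strictly positive start
    for _ in range(iters):
        est = conv(x)                              # forward projection Hx
        ratio = [yi / max(ei, 1e-12) for yi, ei in zip(blurred, est)]
        corr = conv(ratio)                         # symmetric PSF: H^T == H
        x = [xi * ci for xi, ci in zip(x, corr)]
    return x
```

Running this on a blurred spike sharpens it: mass flows back from the tails toward the center with each iteration.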
Sasaya, Tenta; Sunaguchi, Naoki; Thet-Lwin, Thet-; Hyodo, Kazuyuki; Zeniya, Tsutomu; Takeda, Tohoru; Yuasa, Tetsuya
2017-01-01
We propose a pinhole-based fluorescent x-ray computed tomography (p-FXCT) system with a 2-D detector and volumetric beam that can suppress the quality deterioration caused by scatter components. In the corresponding p-FXCT technique, projections are acquired at individual incident energies just above and below the K-edge of the imaged trace element; then, reconstruction is performed based on the two sets of projections using a maximum likelihood expectation maximization algorithm that incorporates the scatter components. We constructed a p-FXCT imaging system and performed a preliminary experiment using a physical phantom and an I imaging agent. The proposed dual-energy p-FXCT improved the contrast-to-noise ratio by a factor of more than 2.5 compared to that attainable using mono-energetic p-FXCT for a 0.3 mg/ml I solution. We also imaged an excised rat’s liver infused with a Ba contrast agent to demonstrate the feasibility of imaging a biological sample. PMID:28272496
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu Weigang; Graff, Pierre; Boettger, Thomas
2011-04-15
Purpose: To develop a spatially encoded dose difference maximal intensity projection (DD-MIP) as an online patient dose evaluation tool for visualizing the dose differences between the planning dose and the dose on the treatment day. Methods: Megavoltage cone-beam CT (MVCBCT) images acquired on the treatment day are used for generating the dose difference index. Each index is represented by a different color for underdose, acceptable, and overdose regions. A maximal intensity projection (MIP) algorithm is developed to compress all the information of an arbitrary 3D dose difference index into a 2D DD-MIP image. In this algorithm, a distance transformation is generated based on the planning CT. Then, two new volumes representing the overdose and underdose regions of the dose difference index are encoded with the distance transformation map. The distance-encoded indices of each volume are normalized using the skin distance obtained on the planning CT. After that, two MIPs are generated based on the underdose and overdose volumes with green-to-blue and green-to-red lookup tables, respectively. Finally, the two MIPs are merged with an appropriate transparency level and rendered in the planning CT images. Results: The spatially encoded DD-MIP was implemented in a dose-guided radiotherapy prototype and tested on 33 MVCBCT images from six patients. The user can easily establish the threshold for the overdose and underdose. A 3% difference between the treatment and planning dose was used as the threshold in the study; hence, the DD-MIP shows red or blue for dose differences above or below the threshold, respectively. With such a method, the overdose and underdose regions can be visualized and distinguished without being overshadowed by superficial dose differences. Conclusions: A DD-MIP algorithm was developed that compresses information from 3D into a single projection or two orthogonal projections while hinting to the user whether the dose difference is on the skin surface or deeper.
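The distance-encoded projection can be sketched in a few lines. The hypothetical 2-D version below encodes over- and underdose voxels (relative differences beyond a threshold) with normalized depth before a maximum-intensity projection along one axis; array shapes, the projection direction, and the depth normalization are assumptions, and the lookup-table rendering is omitted:

```python
def dd_mip(plan_dose, treat_dose, depth, skin_depth, thresh=0.03):
    # plan_dose, treat_dose, depth: 2-D grids (lists of rows).
    # Returns two 1-D projections: over[j] and under[j] carry the deepest
    # (distance-encoded) over/underdose seen along column j, so superficial
    # differences cannot mask deeper ones of the same sign.
    rows, cols = len(plan_dose), len(plan_dose[0])
    over = [0.0] * cols
    under = [0.0] * cols
    for j in range(cols):
        for i in range(rows):
            rel = (treat_dose[i][j] - plan_dose[i][j]) / max(plan_dose[i][j], 1e-9)
            code = depth[i][j] / skin_depth       # distance-transform encoding
            if rel > thresh:
                over[j] = max(over[j], code)      # maximum-intensity projection
            elif rel < -thresh:
                under[j] = max(under[j], code)
    return over, under
```

In the prototype the two projections would then be mapped through the green-to-red and green-to-blue lookup tables and blended over the planning CT.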
Security screening via computational imaging using frequency-diverse metasurface apertures
NASA Astrophysics Data System (ADS)
Smith, David R.; Reynolds, Matthew S.; Gollub, Jonah N.; Marks, Daniel L.; Imani, Mohammadreza F.; Yurduseven, Okan; Arnitz, Daniel; Pedross-Engel, Andreas; Sleasman, Timothy; Trofatter, Parker; Boyarsky, Michael; Rose, Alec; Odabasi, Hayrettin; Lipworth, Guy
2017-05-01
Computational imaging is a proven strategy for obtaining high-quality images with fast acquisition rates and simpler hardware. Metasurfaces provide exquisite control over electromagnetic fields, enabling the radiated field to be molded into unique patterns. The fusion of these two concepts can bring about revolutionary advances in the design of imaging systems for security screening. In the context of computational imaging, each field pattern serves as a single measurement of a scene; imaging a scene can then be interpreted as estimating the reflectivity distribution of a target from a set of measurements. As with any computational imaging system, the key challenge is to arrive at a minimal set of measurements from which a diffraction-limited image can be resolved. Here, we show that the information content of a frequency-diverse metasurface aperture can be maximized by design, and used to construct a complete millimeter-wave imaging system spanning a 2 m by 2 m area, consisting of 96 metasurfaces, capable of producing diffraction-limited images of human-scale targets. The metasurface-based frequency-diverse system presented in this work represents an inexpensive, but tremendously flexible alternative to traditional hardware paradigms, offering the possibility of low-cost, real-time, and ubiquitous screening platforms.
USDA-ARS?s Scientific Manuscript database
To maximize profitability, cotton (Gossypium hirsutum L.) producers must attempt to control the quality of the crop while maximizing yield. The objective of this research was to measure the intrinsic variability present in cotton fiber yield and quality. The 0.5-ha experimental site was located in a...
Expectation maximization for hard X-ray count modulation profiles
NASA Astrophysics Data System (ADS)
Benvenuto, F.; Schwartz, R.; Piana, M.; Massone, A. M.
2013-07-01
Context. This paper is concerned with the image reconstruction problem when the measured data are solar hard X-ray modulation profiles obtained from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) instrument. Aims: Our goal is to demonstrate that a statistical iterative method classically applied to the image deconvolution problem is very effective when utilized to analyze count modulation profiles in solar hard X-ray imaging based on rotating modulation collimators. Methods: The algorithm described in this paper solves the maximum likelihood problem iteratively and encodes a positivity constraint into the iterative optimization scheme. The result is therefore a classical expectation maximization method, this time applied not to an image deconvolution problem but to image reconstruction from count modulation profiles. The technical reason that makes our implementation particularly effective in this application is the use of a very reliable stopping rule, which regularizes the solution while providing a very satisfactory Cash statistic (C-statistic). Results: The method is applied both to reproduce synthetic flaring configurations and to reconstruct images from experimental data corresponding to three real events. In the second case, expectation maximization shows accuracy comparable to Pixon image reconstruction with a notably reduced computational burden, and better fidelity to the measurements than CLEAN with comparable computational effectiveness. Conclusions: If optimally stopped, expectation maximization represents a very reliable method for image reconstruction in the RHESSI context when count modulation profiles are used as input data.
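The positivity-preserving EM iteration with an early-stopping rule can be sketched on a toy system. Here the stopping test is a reduced Cash statistic compared against a fixed target, a simple stand-in for the paper's more refined rule; the system matrix and target value are assumptions:

```python
import math

def mlem(H, y, n, iters=200, c_target=1.0):
    # EM (MLEM) for Poisson-like data y ~ H x.  The multiplicative update
    # keeps x nonnegative; iteration stops early once the reduced Cash
    # statistic C = (2/m) * sum(e - y + y*log(y/e)) reaches c_target.
    m = len(y)
    x = [1.0] * n                                  # flat positive start
    sens = [sum(H[i][j] for i in range(m)) for j in range(n)]
    for _ in range(iters):
        e = [sum(H[i][j] * x[j] for j in range(n)) for i in range(m)]
        c = 2.0 / m * sum(ei - yi + (yi * math.log(yi / ei) if yi > 0 else 0.0)
                          for yi, ei in zip(y, e))
        if c <= c_target:                          # stopping rule regularizes
            break
        for j in range(n):
            back = sum(H[i][j] * y[i] / max(e[i], 1e-12) for i in range(m))
            x[j] *= back / max(sens[j], 1e-12)
    return x
```

Stopping when C is statistically acceptable (rather than iterating to convergence) is what keeps noise from being amplified into the reconstruction.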
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jaskowiak, J; Ahmad, S; Ali, I
Purpose: To investigate the correlation of displacement vector fields (DVF) calculated by deformable image registration algorithms with motion parameters in helical, axial, and cone-beam CT images with motion artifacts. Methods: A mobile thorax phantom with targets of different sizes, made from water-equivalent material and inserted in foam to simulate lung lesions, was imaged with helical, axial, and cone-beam CT. The phantom was moved with a cyclic motion of varying amplitude and frequency along the superior-inferior direction. Different deformable image registration algorithms, including demons, fast demons, Horn-Schunck, and iterative optical flow from the DIRART software, were used to deform the CT images of the mobile phantom to the CT images of the stationary phantom. Results: The displacement vectors calculated by the deformable image registration algorithms correlated strongly with motion amplitude: large displacement vectors were calculated for CT images with large motion amplitudes. For example, the maximal displacement vectors were nearly equal to the motion amplitudes (5 mm, 10 mm, or 20 mm) at interfaces between the mobile targets and lung tissue, while the minimal displacement vectors were nearly equal to the negative of the motion amplitudes. The maximal and minimal displacement vectors matched the edges of the blurred targets along the Z-axis (motion direction), while DVFs were small in the other directions. This indicates that the edges blurred by phantom motion were shifted largely to match the actual target edges, by shifts nearly equal to the motion amplitude. However, as motion amplitude increased, image artifacts increased significantly, which limited image quality and degraded the correlation between motion amplitude and DVF. Conclusions: The DVFs from deformable image registration algorithms correlated well with the motion amplitude of well-defined mobile targets. This can be used to extract motion parameters such as amplitude.
Applications and challenges of digital pathology and whole slide imaging.
Higgins, C
2015-07-01
Virtual microscopy is a method for digitizing images of tissue on glass slides and using a computer to view, navigate, change magnification, focus and mark areas of interest. Virtual microscope systems (also called digital pathology or whole slide imaging systems) offer several advantages for biological scientists who use slides as part of their general, pharmaceutical, biotechnology or clinical research. The systems usually are based on one of two methodologies: area scanning or line scanning. Virtual microscope systems enable automatic sample detection, virtual-Z acquisition and creation of focal maps. Virtual slides are layered with multiple resolutions at each location, including the highest resolution needed to allow more detailed review of specific regions of interest. Scans may be acquired at 2, 10, 20, 40, 60 and 100 × or a combination of magnifications to highlight important detail. Digital microscopy starts when a slide collection is put into an automated or manual scanning system. The original slides are archived, then a server allows users to review multilayer digital images of the captured slides either by a closed network or by the internet. One challenge for adopting the technology is the lack of a universally accepted file format for virtual slides. Additional challenges include maintaining focus in an uneven sample, detecting specimens accurately, maximizing color fidelity with optimal brightness and contrast, optimizing resolution and keeping the images artifact-free. There are several manufacturers in the field and each has not only its own approach to these issues, but also its own image analysis software, which provides many options for users to enhance the speed, quality and accuracy of their process through virtual microscopy. Virtual microscope systems are widely used and are trusted to provide high quality solutions for teleconsultation, education, quality control, archiving, veterinary medicine, research and other fields.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naseri, M; Rajabi, H; Wang, J
Purpose: Respiration causes lesion smearing, image blurring, and quality degradation, affecting lesion contrast and the ability to define correct lesion size. The spatial resolution of current multi-pinhole SPECT (MPHS) scanners is sub-millimeter; therefore, the effect of motion is more noticeable than in conventional SPECT scanners. Gated imaging aims to reduce motion artifacts, but a major issue in gating is the lack of statistics: individual reconstructed frames are noisy, and the increased noise in each frame deteriorates the quantitative accuracy of the MPHS images. The objective of this work is to enhance image quality in 4D-MPHS imaging by 4D image reconstruction. Methods: The new algorithm requires deformation vector fields (DVFs) that are calculated by non-rigid demons registration. The algorithm is based on a motion-incorporated version of the ordered subset expectation maximization (OSEM) algorithm. This iterative algorithm can make full use of all projections to reconstruct each individual frame. To evaluate the performance of the proposed algorithm, a simulation study was conducted. A fast ray-tracing method was used to generate MPHS projections of a 4D digital mouse phantom with a small tumor in the liver in eight different respiratory phases. To evaluate the potential of the 4D-OSEM algorithm, the tumor-to-liver activity ratio was compared with that of other image reconstruction methods, including 3D-MPHS and post-reconstruction registration with demons-derived DVFs. Results: Image quality of 4D-MPHS is greatly improved by the 4D-OSEM algorithm. When all projections are used to reconstruct a 3D-MPHS image, motion blurring artifacts are present, leading to overestimation of the tumor size and 24% tumor contrast underestimation. This error was reduced to 16% and 10% for the post-reconstruction registration method and 4D-OSEM, respectively. Conclusion: The 4D-OSEM method can be used for motion correction in 4D-MPHS. The statistics and quantification are improved since all projection data are combined to update the image.
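The ordered-subset acceleration underlying 4D-OSEM can be sketched without the motion model: each sub-iteration uses only an interleaved subset of the projections, so one pass over the data applies several multiplicative updates instead of one. The interleaving scheme and toy dimensions below are assumptions, and the DVF-based warping between respiratory phases is omitted:

```python
def osem(H, y, n, n_subsets=2, n_iter=5):
    # ordered-subset EM: like MLEM, but the backprojection and sensitivity
    # in each update are restricted to one subset of projections, giving
    # the usual ~n_subsets-fold speedup per pass over the data
    m = len(y)
    subsets = [list(range(s, m, n_subsets)) for s in range(n_subsets)]
    x = [1.0] * n
    for _ in range(n_iter):
        for sub in subsets:
            e = {i: sum(H[i][j] * x[j] for j in range(n)) for i in sub}
            for j in range(n):
                sens = sum(H[i][j] for i in sub)   # subset sensitivity
                if sens > 0:
                    x[j] *= sum(H[i][j] * y[i] / max(e[i], 1e-12)
                                for i in sub) / sens
    return x
```

In the 4D version, each gated frame would additionally be warped by its demons-derived DVF so that every projection, regardless of phase, contributes to every frame.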
Discriminative Multi-View Interactive Image Re-Ranking.
Li, Jun; Xu, Chang; Yang, Wankou; Sun, Changyin; Tao, Dacheng
2017-07-01
Given unreliable visual patterns and insufficient query information, content-based image retrieval is often suboptimal and requires image re-ranking using auxiliary information. In this paper, we propose discriminative multi-view interactive image re-ranking (DMINTIR), which integrates user relevance feedback capturing users' intentions and multiple features that sufficiently describe the images. In DMINTIR, heterogeneous property features are incorporated in the multi-view learning scheme to exploit their complementarities. In addition, a discriminatively learned weight vector is obtained to reassign updated scores and target images for re-ranking. Compared with other multi-view learning techniques, our scheme not only generates a compact representation in the latent space from the redundant multi-view features but also maximally preserves the discriminative information in feature encoding by the large-margin principle. Furthermore, the generalization error bound of the proposed algorithm is theoretically analyzed and shown to be improved by the interactions between the latent space and discriminant function learning. Experimental results on two benchmark data sets demonstrate that our approach boosts baseline retrieval quality and is competitive with the other state-of-the-art re-ranking strategies.
Reconstruction of Sky Illumination Domes from Ground-Based Panoramas
NASA Astrophysics Data System (ADS)
Coubard, F.; Lelégard, L.; Brédif, M.; Paparoditis, N.; Briottet, X.
2012-07-01
The knowledge of the sky illumination is important for radiometric corrections and for computer graphics applications such as relighting or augmented reality. We propose an approach to compute environment maps, representing the sky radiance, from a set of ground-based images acquired by a panoramic acquisition system, for instance a mobile-mapping system. These images can be affected by important radiometric artifacts, such as bloom or overexposure. A Perez radiance model is estimated with the blue sky pixels of the images, and used to compute additive corrections in order to reduce these radiometric artifacts. The sky pixels are then aggregated in an environment map, which still suffers from discontinuities on stitching edges. The influence of the quality of estimated sky radiance on the simulated light signal is measured quantitatively on a simple synthetic urban scene; in our case, the maximal error for the total sensor radiance is about 10%.
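The Perez all-weather sky model used for the estimation has a closed form. The sketch below evaluates the relative radiance of a sky element at zenith angle θ and angular distance γ from the sun; the five coefficients shown are illustrative clear-sky-like values (assumptions), whereas the paper fits them to the blue-sky pixels of the panoramas:

```python
import math

def perez_relative_radiance(theta, gamma,
                            a=-1.0, b=-0.32, c=10.0, d=-3.0, e=0.45):
    # Perez all-weather model: relative sky radiance as the product of a
    # gradation term (horizon-to-zenith gradient, coefficients a and b) and
    # an indicatrix term (circumsolar brightening, coefficients c, d, e).
    gradation = 1.0 + a * math.exp(b / max(math.cos(theta), 1e-6))
    indicatrix = 1.0 + c * math.exp(d * gamma) + e * math.cos(gamma) ** 2
    return gradation * indicatrix
```

With the fitted coefficients, evaluating this function over the hemisphere yields the environment map; subtracting it from the observed pixels gives the additive corrections that suppress bloom and overexposure artifacts.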
Optimized Two-Party Video Chat with Restored Eye Contact Using Graphics Hardware
NASA Astrophysics Data System (ADS)
Dumont, Maarten; Rogmans, Sammy; Maesen, Steven; Bekaert, Philippe
We present a practical system prototype to convincingly restore eye contact between two video chat participants, with a minimal amount of constraints. The proposed six-fold camera setup is easily integrated into the monitor frame, and is used to interpolate an image as if its virtual camera captured the image through a transparent screen. The peer user has a large freedom of movement, resulting in system specifications that enable genuine practical usage. Our software framework thereby harnesses the powerful computational resources inside graphics hardware, and maximizes arithmetic intensity to achieve over real-time performance of up to 42 frames per second for 800×600 resolution images. Furthermore, an optimal set of fine-tuned parameters is presented that optimizes the end-to-end performance of the application to achieve high subjective visual quality, and still allows for further algorithmic advancement without losing its real-time capabilities.
Shirzadi, Zahra; Crane, David E; Robertson, Andrew D; Maralani, Pejman J; Aviv, Richard I; Chappell, Michael A; Goldstein, Benjamin I; Black, Sandra E; MacIntosh, Bradley J
2015-11-01
To evaluate the impact of rejecting intermediate cerebral blood flow (CBF) images that are adversely affected by head motion during an arterial spin labeling (ASL) acquisition. Eighty participants were recruited, representing a wide age range (14-90 years) and heterogeneous cerebrovascular health conditions including bipolar disorder, chronic stroke, and moderate to severe white matter hyperintensities of presumed vascular origin. Pseudocontinuous ASL and T1-weighted anatomical images were acquired on a 3T scanner. ASL intermediate CBF images were included based on their contribution to the mean estimate, with the goal of maximizing CBF detectability in gray matter (GM). Simulations were conducted to evaluate the performance of the proposed optimization procedure relative to other ASL postprocessing approaches. Clinical CBF images were also assessed visually by two experienced neuroradiologists. Optimized CBF images (CBFopt) had significantly greater agreement with a synthetic ground truth CBF image and greater CBF detectability relative to the other ASL analysis methods (P < 0.05). Moreover, empirical CBFopt images showed a significantly improved signal-to-noise ratio relative to CBF images obtained from other postprocessing approaches (mean: 12.6%; range 1% to 56%; P < 0.001), and this improvement was age-dependent (P = 0.03). Differences between CBF images from different analysis procedures were not perceptible by visual inspection, and there was moderate agreement between the ratings (κ = 0.44, P < 0.001). This study developed an automated, head motion threshold-free procedure to improve the detection of CBF in GM. The improvement in CBF image quality was larger for older participants. © 2015 Wiley Periodicals, Inc.
SU-E-I-11: A New Cone-Beam CT System for Bedside Head Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, H; Zeng, W; Xu, P
Purpose: To design and develop a new mobile cone-beam CT (CBCT) system for head imaging with good soft-tissue visibility, to be used bedside in the ICU and neurosurgery department to monitor treatment and operation outcomes in brain patients. Methods: The imaging chain consists of a 30cm×25cm amorphous silicon flat panel detector and a pulsed, stationary-anode monoblock x-ray source of 100kVp at a maximal tube current of 10mA. The detector and source are supported on motorized mechanisms that provide detector lateral shift and source angular tilt, enabling a centered digital radiographic imaging mode and half-fan CBCT, while maximizing the use of the x-ray field and keeping the source-to-detector distance short. A focused linear anti-scatter grid is mounted on the detector, and commercial software with scatter and other corrective algorithms is used for data processing and image reconstruction. The gantry rotates around a horizontal axis, and is able to adjust its height for different patient table positions. Cables are routed through a custom protective sleeve over a large bore with an in-plane twister band, facilitating a single 360-degree rotation without a slip ring at speeds up to 5 seconds per rotation. A UPS provides about 10 minutes of operation off the battery when unplugged. The gantry is on locked casters, whose brakes are controlled by two push handles on both sides for easy repositioning. The entire system is designed to have a light weight and a compact size for excellent maneuverability. Results: System design is complete and the main imaging components are tested. Initial results will be presented and discussed in the presentation. Conclusion: A new mobile CBCT system for head imaging is being developed. With its compact size, large bore, and quality design, it is expected to be a useful imaging tool for bedside use. The work is supported by a grant from the Chinese Academy of Sciences.
Wang, Qi; Wang, Huaxiang; Cui, Ziqiang; Yang, Chengyi
2012-11-01
Electrical impedance tomography (EIT) calculates the internal conductivity distribution within a body using electrical contact measurements. The image reconstruction for EIT is an inverse problem, which is both non-linear and ill-posed. The traditional regularization method cannot avoid introducing negative values into the solution. The negativity of the solution produces artifacts in reconstructed images in the presence of noise. A statistical method, namely the expectation maximization (EM) method, is used to solve the inverse problem for EIT in this paper. The mathematical model of EIT is transformed into a non-negatively constrained likelihood minimization problem. The solution is obtained by the gradient projection-reduced Newton (GPRN) iteration method. This paper also discusses strategies for choosing parameters. Simulation and experimental results indicate that reconstructed images of higher quality can be obtained by the EM method, compared with the traditional Tikhonov and conjugate gradient (CG) methods, even with non-negative processing. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
Dong, J; Hayakawa, Y; Kober, C
2014-01-01
When metallic prosthetic appliances and dental fillings exist in the oral cavity, the appearance of metal-induced streak artefacts is unavoidable in CT images. The aim of this study was to develop a method for artefact reduction using statistical reconstruction on multidetector row CT images. Adjacent CT images often depict similar anatomical structures. Therefore, reconstruction of images with weak artefacts was attempted using the projection data of an artefact-free image in a neighbouring thin slice. Images with moderate and strong artefacts were then processed in sequence by successive iterative restoration, where the projection data were generated from the adjacent reconstructed slice. First, the basic maximum likelihood expectation maximization algorithm was applied. Next, the ordered subset expectation maximization algorithm was examined. Alternatively, a small region-of-interest setting was designated. Finally, a general-purpose graphics processing unit was applied in both situations. The algorithms reduced the metal-induced streak artefacts on multidetector row CT images when the sequential processing method was applied. The ordered subset expectation maximization and the small region of interest reduced the processing duration without apparent detriment. A general-purpose graphics processing unit delivered high performance. In summary, a statistical reconstruction method was applied for streak artefact reduction, and the alternative algorithms applied were effective. Both software and hardware tools, such as ordered subset expectation maximization, a small region of interest, and a general-purpose graphics processing unit, achieved fast artefact correction.
High Resolution Doppler Imager
NASA Technical Reports Server (NTRS)
Hays, Paul B.
1999-01-01
This report summarizes the accomplishments of the High Resolution Doppler Imager (HRDI) on the UARS spacecraft during the period 4/1/96 - 3/31/99. During this period, HRDI operation, data processing, and data analysis continued, and there was a high level of vitality in the HRDI project. The HRDI has been collecting data from the stratosphere, mesosphere, and lower thermosphere since instrument activation on October 1, 1991. The HRDI team has stressed three areas since operations commenced: 1) operation of the instrument in a manner which maximizes the quality and versatility of the collected data; 2) algorithm development and validation to produce a high-quality data product; and 3) scientific studies, primarily of the dynamics of the middle atmosphere. There has been no significant degradation in the HRDI instrument since operations began nearly 8 years ago. HRDI operations are fairly routine, although we have continued to look for ways to improve the quality of the scientific product, either by improving existing modes, or by designing new ones. The HRDI instrument has been programmed to collect data for new scientific studies, such as measurements of fluorescence from plants, measuring cloud top heights, and lower atmosphere H2O.
Application and evaluation of ISVR method in QuickBird image fusion
NASA Astrophysics Data System (ADS)
Cheng, Bo; Song, Xiaolu
2014-05-01
QuickBird satellite images are widely used in many fields, and applications have put forward high requirements for the integration of the spatial and spectral information of the imagery. A fusion method for high resolution remote sensing images based on ISVR is presented in this study. The core principle of ISVR is to take advantage of radiometric calibration to remove the effects of the different gains and errors of the satellites' sensors. After transformation from digital numbers (DN) to radiance, the multispectral image's energy is used to simulate the panchromatic band. Linear regression analysis is carried out in the simulation process to find a new synthetic panchromatic image that is highly linearly correlated with the original panchromatic image. To evaluate, test and compare the algorithm results, this paper used ISVR and two other fusion methods in a comparative study of spatial and spectral information, taking the average gradient and the correlation coefficient as indicators. Experiments showed that this method can significantly improve the quality of the fused image, especially in preserving spectral information, maximizing the spectral information of the original multispectral images while maintaining abundant spatial information.
NASA Astrophysics Data System (ADS)
Lisson, Jerold B.; Mounts, Darryl I.; Fehniger, Michael J.
1992-08-01
Localized wavefront performance analysis (LWPA) is a system that allows full utilization of the system optical transfer function (OTF) for the specification and acceptance of hybrid imaging systems. We show that LWPA dictates the correction of the wavefront errors with the greatest impact on critical imaging spatial frequencies. This is accomplished by generating an imaging performance map, analogous to a map of the optic pupil error, using a local OTF. The resulting performance map, a function of transfer-function spatial frequency, is directly relatable to the primary viewing condition of the end user. In addition to optimizing quality for the viewer, the system has the potential for improved matching of the optical and electronic bandpasses of the imager and for the development of more realistic acceptance specifications. The LWPA system generates a local optical quality factor (LOQF) in the form of a map analogous to that used for the presentation and evaluation of wavefront errors. In conjunction with the local phase transfer function (LPTF), it can be used for maximally efficient specification and correction of imaging-system pupil errors. The LOQF and LPTF are respectively equivalent to the global modulation transfer function (MTF) and phase transfer function (PTF) parts of the OTF. The LPTF is related to the difference of the averages of the errors in separated regions of the pupil.
Avnet, Hagai; Mazaaki, Eyal; Shen, Ori; Cohen, Sarah; Yagel, Simcha
2016-01-01
We aimed to evaluate the use of spatiotemporal image correlation (STIC) as a tool for training nonexpert examiners to perform screening examinations of the fetal heart by acquiring and examining STIC volumes according to a standardized questionnaire based on the 5 transverse planes of the fetal heart. We conducted a prospective study at 2 tertiary care centers. Two sonographers without formal training in fetal echocardiography received theoretical instruction on the 5 fetal echocardiographic transverse planes, as well as STIC technology. Only women with conditions allowing 4-dimensional STIC volume acquisitions (grayscale and Doppler) were included in the study. Acquired volumes were evaluated offline according to a standardized protocol that required the trainee to mark 30 specified structures on 5 required axial planes. Volumes were then reviewed by an expert examiner for quality of acquisition and correct identification of specified structures. Ninety-six of 112 pregnant women examined entered the study. Patients had singleton pregnancies between 20 and 32 weeks' gestation. After an initial learning curve of 20 examinations, trainees succeeded in identifying 97% to 98% of structures, with a highly significant degree of agreement with the expert's analysis (P < .001). A median of 2 STIC volumes for each examination was necessary for maximal structure identification. Acquisition quality scores were high (8.6-8.7 of a maximal score of 10) and were found to correlate with identification rates (P = .017). After an initial learning curve and under expert guidance, STIC is an excellent tool for trainees to master extended screening examinations of the fetal heart.
NASA Astrophysics Data System (ADS)
Dang, H.; Wang, A. S.; Sussman, Marc S.; Siewerdsen, J. H.; Stayman, J. W.
2014-09-01
Sequential imaging studies are conducted in many clinical scenarios. Prior images from previous studies contain a great deal of patient-specific anatomical information and can be used in conjunction with subsequent imaging acquisitions to maintain image quality while enabling radiation dose reduction (e.g., through sparse angular sampling, reduction in fluence, etc.). However, patient motion between images in such sequences results in misregistration between the prior image and the current anatomy. Existing prior-image-based approaches often include only a simple rigid registration step that can be insufficient for capturing complex anatomical motion, introducing detrimental effects in subsequent image reconstruction. In this work, we propose a joint framework that estimates the 3D deformation between an unregistered prior image and the current anatomy (based on a subsequent data acquisition) and reconstructs the current anatomical image using a model-based reconstruction approach that includes regularization based on the deformed prior image. This framework is referred to as deformable prior image registration, penalized-likelihood estimation (dPIRPLE). Central to this framework is the inclusion of a 3D B-spline-based free-form-deformation model in the joint registration-reconstruction objective function. The proposed framework is solved using a maximization strategy whereby alternating updates to the registration parameters and image estimates are applied, allowing for improvements in both the registration and the reconstruction throughout the optimization process. Cadaver experiments were conducted on a cone-beam CT testbench emulating a lung nodule surveillance scenario.
Superior reconstruction accuracy and image quality were demonstrated using the dPIRPLE algorithm as compared to more traditional reconstruction methods including filtered backprojection, penalized-likelihood estimation (PLE), prior image penalized-likelihood estimation (PIPLE) without registration, and prior image penalized-likelihood estimation with rigid registration of a prior image (PIRPLE) over a wide range of sampling sparsity and exposure levels.
NASA Astrophysics Data System (ADS)
Karamat, Muhammad I.; Farncombe, Troy H.
2015-10-01
Simultaneous multi-isotope single photon emission computed tomography (SPECT) imaging has a number of applications in cardiac, brain, and cancer imaging. The major concern, however, is the significant crosstalk contamination due to photon scatter between the different isotopes. The current study focuses on a method of crosstalk compensation between two isotopes in simultaneous dual-isotope SPECT acquisition applied to cancer imaging using 99mTc and 111In. We have developed an iterative image reconstruction technique that simulates the photon down-scatter from one isotope into the acquisition window of a second isotope. Our approach uses an accelerated Monte Carlo (MC) technique for the forward projection step in an iterative reconstruction algorithm. The MC-estimated scatter contamination of a radionuclide contained in a given projection view is then used to compensate for the photon contamination in the acquisition window of the other nuclide. We use a modified ordered subset-expectation maximization (OS-EM) algorithm, named simultaneous ordered subset-expectation maximization (Sim-OSEM), to perform this step. We have undertaken a number of simulation tests and phantom studies to verify this approach. The proposed reconstruction technique was also evaluated by reconstruction of experimentally acquired phantom data. Reconstruction using Sim-OSEM showed very promising results in terms of contrast recovery and uniformity of the object background compared to alternative reconstruction methods implementing alternative scatter correction schemes (i.e., triple energy window or separately acquired projection data). In this study the evaluation was based on the quality of reconstructed images and activity estimates obtained using Sim-OSEM. To quantify the possible improvement in spatial resolution and signal-to-noise ratio (SNR) observed in this study, further simulation and experimental studies are required.
Anizan, Nadège; Carlier, Thomas; Hindorf, Cecilia; Barbet, Jacques; Bardiès, Manuel
2012-02-13
Noninvasive multimodality imaging is essential for preclinical evaluation of the biodistribution and pharmacokinetics of radionuclide therapy and for monitoring tumor response. Imaging with nonstandard positron-emission tomography (PET) isotopes such as 124I is promising in that context but requires accurate activity quantification. The decay scheme of 124I implies an optimization of both acquisition settings and correction processing. The PET scanner investigated in this study was the Inveon PET/CT system dedicated to small-animal imaging. The noise equivalent count rate (NECR), the scatter fraction (SF), and the gamma-prompt fraction (GF) were used to determine the best acquisition parameters for mouse- and rat-sized phantoms filled with 124I. An image-quality phantom as specified by the National Electrical Manufacturers Association NU 4-2008 protocol was acquired and reconstructed with two-dimensional filtered back projection, 2D ordered-subset expectation maximization (2DOSEM), and 3DOSEM with maximum a posteriori (3DOSEM/MAP) algorithms, with and without attenuation correction, scatter correction, and gamma-prompt correction (weighted uniform distribution subtraction). Optimal energy windows were established for the rat phantom (390 to 550 keV) and the mouse phantom (400 to 590 keV) by combining the NECR, SF, and GF results. The coincidence time window had no significant impact on the NECR curve variation. The activity concentration of 124I measured in the uniform region of the image-quality phantom was underestimated by 9.9% for the 3DOSEM/MAP algorithm with attenuation and scatter corrections, and by 23% with the gamma-prompt correction. Attenuation, scatter, and gamma-prompt corrections decreased the residual signal in the cold insert. The optimal energy windows were chosen with the NECR, SF, and GF evaluation.
Nevertheless, image quality and activity quantification assessments were required to establish the most suitable reconstruction algorithm and corrections for 124I small-animal imaging.
Bois, John P; Geske, Jeffrey B; Foley, Thomas A; Ommen, Steve R; Pellikka, Patricia A
2017-02-15
Left ventricular (LV) wall thickness is a prognostic marker in hypertrophic cardiomyopathy (HC). LV wall thickness ≥30 mm (massive hypertrophy) is independently associated with sudden cardiac death. Presence of massive hypertrophy is used to guide decision making for cardiac defibrillator implantation. We sought to determine whether measurements of maximal LV wall thickness differ between cardiac magnetic resonance imaging (MRI) and transthoracic echocardiography (TTE). We studied consecutive patients who had HC without previous septal ablation or myectomy and underwent both cardiac MRI and TTE at a single tertiary referral center. Reported maximal LV wall thickness was compared between the imaging techniques. Patients in whom ≥1 technique reported massive hypertrophy underwent subset analysis. In total, 618 patients were evaluated from January 1, 2003, to December 21, 2012 (mean [SD] age, 53 [15] years; 381 men [62%]). In 75 patients (12%), reported maximal LV wall thickness was identical between MRI and TTE. The median difference in reported maximal LV wall thickness between the techniques was 3 mm (maximum difference, 17 mm). Of the 63 patients with ≥1 technique measuring maximal LV wall thickness ≥30 mm, 44 patients (70%) had discrepant classification regarding massive hypertrophy. MRI identified 52 patients (83%) with massive hypertrophy; TTE, 30 patients (48%). Although guidelines recommend MRI or TTE imaging to assess cardiac anatomy in HC, this study shows discrepancy between the techniques in maximal reported LV wall thickness assessment. In conclusion, because this measure clinically affects prognosis and therapeutic decision making, efforts to resolve these discrepancies are critical. Copyright © 2016 Elsevier Inc. All rights reserved.
Multiresolution texture models for brain tumor segmentation in MRI.
Iftekharuddin, Khan M; Ahmed, Shaheen; Hossen, Jakir
2011-01-01
In this study, we discuss different types of texture features, such as fractal dimension (FD) and multifractional Brownian motion (mBm), for estimating the random structure and varying appearance of brain tissues and tumors in magnetic resonance images (MRI). We use different selection techniques, including Kullback-Leibler divergence (KLD), for ranking different texture and intensity features. We then exploit graph cut, self-organizing map (SOM) and expectation maximization (EM) techniques to fuse the selected features for brain tumor segmentation in multimodality T1, T2, and FLAIR MRI. We use different similarity metrics to evaluate the quality and robustness of these selected features for tumor segmentation in MRI of real pediatric patients. We also demonstrate a non-patient-specific automated tumor prediction scheme using improved AdaBoost classification based on these image features.
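Feature ranking with KLD can be illustrated by comparing a feature's histogram inside tumor regions with its histogram in normal tissue; the larger the divergence, the more discriminative the feature. A small sketch under that assumption (the histograms and feature names below are hypothetical, not data from the study):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) between two histograms."""
    p = p / p.sum()                   # normalize counts to probabilities
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def rank_features(tumor_hists, normal_hists):
    """Rank features by tumor-vs-normal divergence, most discriminative first."""
    scores = {name: kl_divergence(tumor_hists[name], normal_hists[name])
              for name in tumor_hists}
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical feature histograms: the texture feature separates the two
# classes strongly, while raw intensity barely does.
tumor = {"fractal_dim": np.array([1.0, 2.0, 10.0, 30.0]),
         "intensity": np.array([5.0, 6.0, 7.0, 5.0])}
normal = {"fractal_dim": np.array([30.0, 10.0, 2.0, 1.0]),
          "intensity": np.array([6.0, 5.0, 6.0, 6.0])}
ranking = rank_features(tumor, normal)    # highest-KLD feature first
```

The top-ranked features would then be passed to the fusion stage (graph cut, SOM, or EM) for segmentation.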
Objects Grouping for Segmentation of Roads Network in High Resolution Images of Urban Areas
NASA Astrophysics Data System (ADS)
Maboudi, M.; Amini, J.; Hahn, M.
2016-06-01
Updated road databases are required for many purposes such as urban planning, disaster management, car navigation, route planning, traffic management and emergency handling. In the last decade, the improvement in the spatial resolution of VHR civilian satellite sensors - the main source for large-scale mapping applications - was so considerable that the ground sample distance (GSD) has become finer than the size of common urban objects of interest such as buildings, trees and road parts. This technological advancement pushed the development of "Object-based Image Analysis (OBIA)" as an alternative to pixel-based image analysis methods. Segmentation, as one of the main stages of OBIA, provides the image objects on which most of the following processes are applied. Therefore, the success of an OBIA approach is strongly affected by the segmentation quality. In this paper, we propose a purpose-dependent refinement strategy to group road segments in urban areas using maximal similarity based region merging. For investigations with the proposed method, we use high resolution images of several urban sites. The promising results suggest that the proposed approach is applicable to grouping of road segments in urban areas.
Mittag, U.; Kriechbaumer, A.; Rittweger, J.
2017-01-01
The authors propose a new 3D interpolation algorithm for the generation of digital geometric 3D models of bones from existing image stacks obtained by peripheral quantitative computed tomography (pQCT) or magnetic resonance imaging (MRI). The technique is based on the interpolation of radial gray-value profiles of the pQCT cross sections. The method was validated using an ex-vivo human tibia and comparing interpolated pQCT images with images from scans taken at the same position. A diversity index of <0.4 (1 meaning maximal diversity), even for the structurally complex region of the epiphysis, along with the good agreement of the mineral-density-weighted cross-sectional moment of inertia (CSMI), demonstrates the high quality of our interpolation approach. The authors thus demonstrate that this interpolation scheme can substantially improve the generation of 3D models from sparse scan sets, not only with respect to the outer shape but also with respect to the internal gray-value-derived material property distribution. PMID:28574415
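The core idea, interpolating radial gray-value profiles rather than raw voxels, can be sketched as follows (a deliberately simplified version with nearest-neighbour sampling and linear blending between two cross-sections; the actual method operates on full pQCT stacks along the bone axis):

```python
import numpy as np

def radial_profile(image, center, angle, n_samples=50):
    """Sample gray values along a ray from the center at the given angle."""
    radii = np.linspace(0, min(image.shape) // 2 - 1, n_samples)
    ys = center[0] + radii * np.sin(angle)
    xs = center[1] + radii * np.cos(angle)
    return image[ys.astype(int), xs.astype(int)]  # nearest-neighbour lookup

def interpolate_slice(img_a, img_b, center, t, n_angles=180):
    """Blend two cross-sections by linearly interpolating their radial
    gray-value profiles (t=0 gives img_a, t=1 gives img_b)."""
    profiles = []
    for angle in np.linspace(0, 2 * np.pi, n_angles, endpoint=False):
        pa = radial_profile(img_a, center, angle)
        pb = radial_profile(img_b, center, angle)
        profiles.append((1 - t) * pa + t * pb)
    return np.array(profiles)      # polar representation of the new slice

# Synthetic example: the midpoint slice between two uniform cross-sections.
a = np.full((32, 32), 100.0)
b = np.full((32, 32), 200.0)
mid = interpolate_slice(a, b, center=(16, 16), t=0.5)
```

Interpolating in this polar representation preserves radially organized structure (cortical shell, trabecular interior) better than blending Cartesian pixels directly.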
Varying-energy CT imaging method based on EM-TV
NASA Astrophysics Data System (ADS)
Chen, Ping; Han, Yan
2016-11-01
For complicated structural components with wide x-ray attenuation ranges, conventional fixed-energy computed tomography (CT) imaging cannot obtain all the structural information. This limitation results in a shortage of CT information because the effective thickness of the components along the direction of x-ray penetration exceeds the limit of the dynamic range of the x-ray imaging system. To address this problem, a varying-energy x-ray CT imaging method is proposed. In this new method, the tube voltage is adjusted several times in fixed small steps. Next, grey-consistency fusion and logarithmic demodulation are applied to obtain a complete, lower-noise projection with a high dynamic range (HDR). In addition, to address the noise suppression problem of the analytical method, EM-TV (expectation maximization-total variation) iterative reconstruction is used. In the process of iteration, the reconstruction result obtained at one x-ray energy is used as the initial condition of the next iteration. An accompanying experiment demonstrates that this EM-TV reconstruction can also extend the dynamic range of x-ray imaging systems and provide a higher reconstruction quality relative to the fusion reconstruction method.
Joint Prior Learning for Visual Sensor Network Noisy Image Super-Resolution
Yue, Bo; Wang, Shuang; Liang, Xuefeng; Jiao, Licheng; Xu, Caijin
2016-01-01
The visual sensor network (VSN), a new type of wireless sensor network composed of low-cost wireless camera nodes, is being applied to numerous complex visual analyses in wild environments, such as visual surveillance, object recognition, etc. However, the captured images/videos are often of low resolution and noisy. Such visual data cannot be directly delivered to advanced visual analysis. In this paper, we propose a joint-prior image super-resolution (JPISR) method using the expectation maximization (EM) algorithm to improve VSN image quality. Unlike conventional methods that only focus on upscaling images, JPISR alternately solves upscaling mapping and denoising in the E-step and M-step. To meet the requirement of the M-step, we introduce a novel non-local group-sparsity image filtering method to learn the explicit prior and induce the geometric duality between images to learn the implicit prior. The EM algorithm inherently combines the explicit prior and implicit prior by joint learning. Moreover, JPISR does not rely on large external datasets for training, which is much more practical in a VSN. Extensive experiments show that JPISR outperforms five state-of-the-art methods in terms of PSNR, SSIM and visual perception. PMID:26927114
Digital Camera Control for Faster Inspection
NASA Technical Reports Server (NTRS)
Brown, Katharine; Siekierski, James D.; Mangieri, Mark L.; Dekome, Kent; Cobarruvias, John; Piplani, Perry J.; Busa, Joel
2009-01-01
Digital Camera Control Software (DCCS) is a computer program for controlling a boom and a boom-mounted camera used to inspect the external surface of a space shuttle in orbit around the Earth. Running in a laptop computer in the space-shuttle crew cabin, DCCS commands integrated displays and controls. By means of a simple one-button command, a crewmember can view low-resolution images to quickly spot problem areas and can then cause a rapid transition to high-resolution images. The crewmember can command that camera settings apply to a specific small area of interest within the field of view of the camera so as to maximize image quality within that area. DCCS also provides critical high-resolution images to a ground screening team, which analyzes the images to assess damage (if any); in so doing, DCCS enables the team to clear initially suspect areas more quickly than would otherwise be possible and further saves time by minimizing the probability of re-imaging of areas already inspected. On the basis of experience with a previous version (2.0) of the software, the present version (3.0) incorporates a number of advanced imaging features that optimize crewmember capability and efficiency.
Dedicated Cone-Beam CT System for Extremity Imaging
Al Muhit, Abdullah; Zbijewski, Wojciech; Thawait, Gaurav K.; Stayman, J. Webster; Packard, Nathan; Senn, Robert; Yang, Dong; Foos, David H.; Yorkston, John; Siewerdsen, Jeffrey H.
2014-01-01
Purpose To provide an initial assessment of image quality and dose for a cone-beam computed tomographic (CT) scanner dedicated to extremity imaging. Materials and Methods A prototype cone-beam CT scanner has been developed for imaging the extremities, including the weight-bearing lower extremities. Initial technical assessment included evaluation of radiation dose measured as a function of kilovolt peak and tube output (in milliampere seconds), contrast resolution assessed in terms of the signal difference-to-noise ratio (SDNR), spatial resolution semiquantitatively assessed by using a line-pair module from a phantom, and qualitative evaluation of cadaver images for potential diagnostic value and image artifacts by an expert CT observer (musculoskeletal radiologist). Results The dose for a nominal scan protocol (80 kVp, 108 mAs) was 9 mGy (absolute dose measured at the center of a CT dose index phantom). SDNR was maximized with the 80-kVp scan technique, and contrast resolution was sufficient for visualization of muscle, fat, ligaments and/or tendons, cartilage joint space, and bone. Spatial resolution in the axial plane exceeded 15 line pairs per centimeter. Streaks associated with x-ray scatter (in thicker regions of the patient, e.g., the knee), beam hardening (about cortical bone, e.g., the femoral shaft), and cone-beam artifacts (at joint-space surfaces oriented along the scanning plane, e.g., the interphalangeal joints) presented a slight impediment to visualization. Cadaver images (elbow, hand, knee, and foot) demonstrated excellent visibility of bone detail and good soft-tissue visibility suitable for a broad spectrum of musculoskeletal indications. Conclusion A dedicated extremity cone-beam CT scanner capable of imaging upper and lower extremities (including weight-bearing examinations) provides sufficient image quality and favorable dose characteristics to warrant further evaluation for clinical use.
© RSNA, 2013 Online supplemental material is available for this article. PMID:24475803
Performance of 3DOSEM and MAP algorithms for reconstructing low count SPECT acquisitions.
Grootjans, Willem; Meeuwis, Antoi P W; Slump, Cornelis H; de Geus-Oei, Lioe-Fee; Gotthardt, Martin; Visser, Eric P
2016-12-01
Low count single photon emission computed tomography (SPECT) is becoming more important in view of whole-body SPECT and reduction of radiation dose. In this study, we investigated the performance of several 3D ordered subset expectation maximization (3DOSEM) and maximum a posteriori (MAP) algorithms for reconstructing low count SPECT images. Phantom experiments were conducted using the National Electrical Manufacturers Association (NEMA) NU2 image quality (IQ) phantom. The background compartment of the phantom was filled with varying concentrations of pertechnetate and indium chloride, simulating various clinical imaging conditions. Images were acquired using a hybrid SPECT/CT scanner and reconstructed with 3DOSEM and MAP reconstruction algorithms implemented in Siemens Syngo MI.SPECT (Flash3D) and Hermes Hybrid Recon Oncology (Hybrid Recon 3DOSEM and MAP). Image analysis was performed by calculating the contrast recovery coefficient (CRC), percentage background variability (N%), and contrast-to-noise ratio (CNR), defined as the ratio between CRC and N%. Furthermore, image distortion was characterized by calculating the aspect ratio (AR) of ellipses fitted to the hot spheres. Additionally, the performance of these algorithms in reconstructing clinical images was investigated. Images reconstructed with 3DOSEM algorithms demonstrated superior image quality in terms of contrast and resolution recovery when compared to images reconstructed with filtered back-projection (FBP), OSEM, and 2DOSEM. However, the occurrence of correlated noise patterns and image distortions significantly deteriorated the quality of 3DOSEM reconstructed images. The mean AR for the 37, 28, 22, and 17 mm spheres was 1.3, 1.3, 1.6, and 1.7, respectively. The mean N% increased in high and low count Flash3D and Hybrid Recon 3DOSEM from 5.9% and 4.0% to 11.1% and 9.0%, respectively.
Similarly, the mean CNR decreased in high and low count Flash3D and Hybrid Recon 3DOSEM from 8.7 and 8.8 to 3.6 and 4.2, respectively. Regularization with smoothing priors could suppress these noise patterns at the cost of reduced image contrast. The mean N% was 6.4% and 6.8% for low count QSP and MRP MAP reconstructed images. Alternatively, regularization with an anatomical Bowsher prior resulted in sharp images with high contrast, limited image distortion, and a low N% of 8.3% in low count images, although some image artifacts did occur. Analysis of clinical images suggested that the same effects occur in clinical imaging. Image quality of low count SPECT acquisitions reconstructed with modern 3DOSEM algorithms is deteriorated by the occurrence of correlated noise patterns and image distortions. The artifacts observed in the phantom experiments can also occur in clinical imaging. Copyright © 2015. Published by Elsevier GmbH.
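The CNR figure used throughout this abstract is simply the ratio of contrast recovery to background variability, which makes the reported degradation easy to interpret: if N% roughly doubles while CRC stays fixed, CNR halves. A tiny illustration (the CRC value of 50% is hypothetical; the N% values follow the high- and low-count Flash3D numbers quoted in the text):

```python
def cnr(crc_percent, n_percent):
    """Contrast-to-noise ratio as defined in the study: CRC divided by N%."""
    return crc_percent / n_percent

# Hypothetical CRC of 50%; N% rises from 5.9% to 11.1% as in the text.
high_count_cnr = cnr(50.0, 5.9)
low_count_cnr = cnr(50.0, 11.1)   # same contrast, roughly double the noise
```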
Unsupervised color image segmentation using a lattice algebra clustering technique
NASA Astrophysics Data System (ADS)
Urcid, Gonzalo; Ritter, Gerhard X.
2011-08-01
In this paper we introduce a lattice algebra clustering technique for segmenting digital images in the Red-Green-Blue (RGB) color space. The proposed technique is a two-step procedure. Given an input color image, the first step determines the finite set of its extreme pixel vectors within the color cube by means of the scaled min-W and max-M lattice auto-associative memory matrices, including the minimum and maximum vector bounds. In the second step, maximal rectangular boxes enclosing each extreme color pixel are found using the Chebyshev distance between color pixels; afterwards, clustering is performed by assigning each image pixel to its corresponding maximal box. The two steps in our proposed method are completely unsupervised or autonomous. Illustrative examples are provided to demonstrate the color segmentation results, including a brief numerical comparison with two other non-maximal variations of the same clustering technique.
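The second, box-assignment step can be sketched with the Chebyshev (L-infinity) distance, under which the set of pixels within a given distance of an extreme color is exactly an axis-aligned box in RGB space. A minimal illustration (the extreme vectors below are hypothetical; the memory-matrix step that finds them is omitted):

```python
import numpy as np

def chebyshev(a, b):
    """Chebyshev (L-infinity) distance between two RGB vectors."""
    return np.max(np.abs(a - b))

def assign_to_extremes(pixels, extremes):
    """Assign each pixel to the extreme color with the smallest Chebyshev
    distance, i.e. to the axis-aligned box it falls into first."""
    labels = []
    for p in pixels:
        d = [chebyshev(p, e) for e in extremes]
        labels.append(int(np.argmin(d)))
    return labels

# Hypothetical extreme pixel vectors near the red and blue cube corners.
extremes = [np.array([250, 10, 10]), np.array([10, 10, 250])]
pixels = [np.array([240, 30, 20]), np.array([20, 5, 230])]
labels = assign_to_extremes(pixels, extremes)
```

Because the Chebyshev ball is a cube, this nearest-extreme rule and the maximal-box assignment described in the abstract coincide.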
Optimization of single shot 3D breath-hold non-enhanced MR angiography of the renal arteries.
Tan, Huan; Koktzoglou, Ioannis; Glielmi, Christopher; Galizia, Mauricio; Edelman, Robert R
2012-05-19
Cardiac and navigator-gated, inversion-prepared non-enhanced magnetic resonance angiography techniques can accurately depict the renal arteries without the need for contrast administration. However, the scan time and effectiveness of navigator-gated techniques depend on the subject's respiratory pattern, which at times results in excessively prolonged scan times or suboptimal image quality. A single-shot 3D magnetization-prepared steady-state free precession technique was implemented to allow the full extent of the renal arteries to be depicted within a single breath-hold. Technical optimization of the breath-hold technique was performed with fourteen healthy volunteers. An alternative magnetization preparation scheme was tested to maximize inflow signal. Quantitative and qualitative comparisons were made between the breath-hold technique and the clinically accepted navigator-gated technique in both volunteers and patients on a 1.5 T scanner. The breath-hold technique provided an average seven-fold reduction in imaging time, without significant loss of image quality. Comparable signal-to-noise and contrast-to-noise ratios of intra- and extra-renal arteries were found between the breath-hold and the navigator-gated techniques in volunteers. Furthermore, the breath-hold technique demonstrated good image quality for diagnostic purposes in a small number of patients in a pilot study. The single-shot, breath-hold technique offers an alternative to navigator-gated methods for non-enhanced renal magnetic resonance angiography. The initial results suggest a potential supplementary clinical role for the breath-hold technique in the evaluation of suspected renal artery diseases.
Comparison of TOF-PET and Bremsstrahlung SPECT Images of Yttrium-90: A Monte Carlo Simulation Study.
Takahashi, Akihiko; Himuro, Kazuhiko; Baba, Shingo; Yamashita, Yasuo; Sasaki, Masayuki
2018-01-01
Yttrium-90 (90Y) is a beta-particle nuclide used in targeted radionuclide therapy that can be imaged with both single-photon emission computed tomography (SPECT) and time-of-flight (TOF) positron emission tomography (PET). The purpose of this study was to assess the image quality of PET and Bremsstrahlung SPECT by simulating PET and SPECT images of 90Y using Monte Carlo simulation codes under the same conditions and comparing them. In-house Monte Carlo codes, MCEP-PET and MCEP-SPECT, were employed to simulate images. The phantom was a torso-shaped phantom containing six hot spheres of various sizes. The background concentrations of 90Y were set to 50, 100, 150, and 200 kBq/mL, and the concentrations of the hot spheres were 10, 20, and 40 times those of the background. The acquisition time was set to 30 min, and the simulated sinogram data were reconstructed using the ordered subset expectation maximization method. The contrast recovery coefficient (CRC) and contrast-to-noise ratio (CNR) were employed to evaluate image quality. The CRC values of SPECT images were less than 40%, while those of PET images were more than 40% when the hot sphere was larger than 20 mm in diameter. The CNR values of PET images of hot spheres smaller than 20 mm in diameter were larger than those of SPECT images. The CNR values mostly exceeded 4, which is a criterion for evaluating the discernibility of hot areas. In the case of SPECT, hot spheres smaller than 20 mm in diameter were not discernible. In contrast, the CNR values of PET images decreased to the level of SPECT in the case of low concentration. In almost all the cases examined in this investigation, the quantitative indexes of TOF-PET 90Y images were better than those of Bremsstrahlung SPECT images. However, the superiority of the PET images diminished at low activity concentrations.
Model-based tomographic reconstruction of objects containing known components.
Stayman, J Webster; Otake, Yoshito; Prince, Jerry L; Khanna, A Jay; Siewerdsen, Jeffrey H
2012-10-01
The likelihood of finding manufactured components (surgical tools, implants, etc.) within a tomographic field of view has been steadily increasing. One reason is the aging population and the proliferation of prosthetic devices, such that more people undergoing diagnostic imaging have existing implants, particularly hip and knee implants. Another reason is that the use of intraoperative imaging (e.g., cone-beam CT) for surgical guidance is increasing, wherein surgical tools and devices such as screws and plates are placed within or near the target anatomy. When these components contain metal, the reconstructed volumes are likely to contain severe artifacts that adversely affect the image quality in tissues both near and far from the component. Because physical models of such components exist, there is a unique opportunity to integrate this knowledge into the reconstruction algorithm to reduce these artifacts. We present a model-based penalized-likelihood estimation approach that explicitly incorporates known information about component geometry and composition. The approach uses an alternating maximization method that jointly estimates the anatomy and the position and pose of each of the known components. We demonstrate that the proposed method can produce nearly artifact-free images even near the boundary of a metal implant in simulated vertebral pedicle screw reconstructions and even under conditions of substantial photon starvation. The simultaneous estimation of device pose also provides quantitative information on device placement that could be valuable for quality assurance and verification of treatment delivery.
Bunck, Alexander C; Jüttner, Alena; Kröger, Jan Robert; Burg, Matthias C; Kugel, Harald; Niederstadt, Thomas; Tiemann, Klaus; Schnackenburg, Bernhard; Crelier, Gerard R; Heindel, Walter; Maintz, David
2012-09-01
4D phase contrast flow imaging is increasingly used to study the hemodynamics in various vascular territories and pathologies. The aim of this study was to assess the feasibility and validity of MRI based 4D phase contrast flow imaging for the evaluation of in-stent blood flow in 17 commonly used peripheral stents. 17 different peripheral stents were implanted into a MR compatible flow phantom. In-stent visibility, maximal velocity and flow visualization were assessed and estimates of in-stent patency obtained from 4D phase contrast flow data sets were compared to a conventional 3D contrast-enhanced magnetic resonance angiography (CE-MRA) as well as 2D PC flow measurements. In all but 3 of the tested stents time-resolved 3D particle traces could be visualized inside the stent lumen. Quality of 4D flow visualization and CE-MRA images depended on stent type and stent orientation relative to the magnetic field. Compared to the visible lumen area determined by 3D CE-MRA, estimates of lumen patency derived from 4D flow measurements were significantly higher and less dependent on stent type. A higher number of stents could be assessed for in-stent patency by 4D phase contrast flow imaging (n=14) than by 2D phase contrast flow imaging (n=10). 4D phase contrast flow imaging in peripheral vascular stents is feasible and appears advantageous over conventional 3D contrast-enhanced MR angiography and 2D phase contrast flow imaging. It allows for in-stent flow visualization and flow quantification with varying quality depending on stent type.
Robust 2DPCA with non-greedy l1-norm maximization for image analysis.
Wang, Rong; Nie, Feiping; Yang, Xiaojun; Gao, Feifei; Yao, Minli
2015-05-01
2-D principal component analysis based on the l1-norm (2DPCA-L1) is a recently developed approach for robust dimensionality reduction and feature extraction in the image domain. Normally, a greedy strategy is applied because of the difficulty of directly solving the l1-norm maximization problem; this strategy, however, is prone to getting stuck in local solutions. In this paper, we propose a robust 2DPCA with non-greedy l1-norm maximization in which all projection directions are optimized simultaneously. Experimental results on face and other datasets confirm the effectiveness of the proposed approach.
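As a sketch of the non-greedy idea, one common formulation updates all k projection directions at once via the polar factor (from an SVD) of a sign-weighted scatter matrix, which keeps the projection matrix orthonormal at every step. This is our illustration under that assumption, not necessarily the paper's exact update; the function names and toy data are ours:

```python
import numpy as np

def l1_2dpca_nongreedy(images, k, n_iter=50, seed=0):
    """Jointly optimize all k directions: maximize sum_i ||A_i W||_1 s.t. W^T W = I."""
    rng = np.random.default_rng(seed)
    d = images[0].shape[1]
    W, _ = np.linalg.qr(rng.standard_normal((d, k)))  # random orthonormal start
    for _ in range(n_iter):
        # sign-weighted scatter: gradient-like matrix of the l1 objective
        M = sum(A.T @ np.sign(A @ W) for A in images)
        U, _, Vt = np.linalg.svd(M, full_matrices=False)
        W = U @ Vt  # polar factor: closest orthonormal matrix to M
    return W

rng = np.random.default_rng(1)
imgs = [rng.standard_normal((8, 5)) for _ in range(20)]
W = l1_2dpca_nongreedy(imgs, k=2)
obj = sum(np.abs(A @ W).sum() for A in imgs)  # the l1 objective being maximized
```

Each iteration cannot decrease the l1 objective, which is the appeal of the non-greedy scheme over direction-by-direction greedy deflation.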
Hudson, H M; Ma, J; Green, P
1994-01-01
Many algorithms for medical image reconstruction adopt versions of the expectation-maximization (EM) algorithm. In this approach, parameter estimates are obtained which maximize a complete data likelihood or penalized likelihood, in each iteration. Implicitly (and sometimes explicitly) penalized algorithms require smoothing of the current reconstruction in the image domain as part of their iteration scheme. In this paper, we discuss alternatives to EM which adapt Fisher's method of scoring (FS) and other methods for direct maximization of the incomplete data likelihood. Jacobi and Gauss-Seidel methods for non-linear optimization provide efficient algorithms applying FS in tomography. One approach uses smoothed projection data in its iterations. We investigate the convergence of Jacobi and Gauss-Seidel algorithms with clinical tomographic projection data.
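For context, the EM baseline that this paper proposes alternatives to can be sketched as the classic ML-EM multiplicative update for emission tomography (a minimal toy version; the paper's Fisher-scoring, Jacobi, and Gauss-Seidel variants are not reproduced here):

```python
import numpy as np

def mlem(A, y, n_iter=20):
    """Classic ML-EM: A is the system matrix (detectors x voxels), y the counts."""
    x = np.ones(A.shape[1])      # flat initial image
    sens = A.sum(axis=0)         # sensitivity image (column sums)
    for _ in range(n_iter):
        proj = A @ x             # forward projection
        ratio = y / np.maximum(proj, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)  # multiplicative update
    return x

# tiny toy system: 3 detector bins, 2 voxels, noiseless consistent data
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
y = A @ x_true
x_hat = mlem(A, y, n_iter=200)
```

For consistent noiseless data the iterates converge to the exact solution; with Poisson noise the update maximizes the incomplete-data likelihood, which is the quantity the paper's scoring methods target directly.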
Li, Jing; Mahmoodi, Alireza; Joseph, Dileepan
2015-10-16
An important class of complementary metal-oxide-semiconductor (CMOS) image sensors is that in which pixel responses are monotonic nonlinear functions of light stimuli. This class includes various logarithmic architectures, which are easily capable of wide-dynamic-range imaging at video rates, but which are vulnerable to image quality issues. To minimize fixed pattern noise (FPN) and maximize photometric accuracy, pixel responses must be calibrated and corrected for mismatch and process variation during fabrication. Unlike literature approaches, which employ circuit-based models of varying complexity, this paper introduces a novel approach based on low-degree polynomials. Although each pixel may have a highly nonlinear response, an approximately linear FPN calibration is possible by exploiting the monotonic nature of imaging. Moreover, FPN correction requires only arithmetic, and an optimal fixed-point implementation is readily derived, subject to a user-specified number of bits per pixel. Using a monotonic spline involving cubic polynomials, photometric calibration is also possible without a circuit-based model, and fixed-point photometric correction requires only a look-up table. The approach is experimentally validated with a logarithmic CMOS image sensor and is compared to a leading approach from the literature. The novel approach proves effective and efficient.
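The approximately-linear FPN calibration idea can be sketched as a per-pixel degree-1 polynomial fit mapping each pixel's response onto a common reference response. This is a toy model of logarithmic pixels with synthetic gain/offset mismatch; all names and data are ours, not the paper's:

```python
import numpy as np

def calibrate_fpn(responses, reference):
    """Fit per-pixel y_ref ~ gain * y_pix + offset over calibration exposures (axis 0)."""
    n_exp, n_pix = responses.shape
    gains = np.empty(n_pix)
    offsets = np.empty(n_pix)
    for p in range(n_pix):
        gains[p], offsets[p] = np.polyfit(responses[:, p], reference, 1)
    return gains, offsets

def correct_fpn(frame, gains, offsets):
    """FPN correction is only arithmetic: one multiply and one add per pixel."""
    return gains * frame + offsets

rng = np.random.default_rng(0)
stimuli = np.linspace(1.0, 100.0, 12)
ref = np.log(stimuli)                          # ideal monotonic (logarithmic) response
gain_true = rng.uniform(0.8, 1.2, size=5)      # per-pixel fabrication mismatch
off_true = rng.uniform(-0.1, 0.1, size=5)
resp = (ref[:, None] - off_true) / gain_true   # each pixel distorts the ideal response
g, b = calibrate_fpn(resp, ref)
corrected = correct_fpn(resp, g, b)
```

Because the synthetic mismatch here is exactly linear, the fit recovers the true per-pixel gains and offsets and the corrected columns collapse onto the reference curve; real sensors add noise and residual nonlinearity on top of this.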
Evaluation of slice accelerations using multiband echo planar imaging at 3 Tesla
Xu, Junqian; Moeller, Steen; Auerbach, Edward J.; Strupp, John; Smith, Stephen M.; Feinberg, David A.; Yacoub, Essa; Uğurbil, Kâmil
2013-01-01
We evaluate residual aliasing among simultaneously excited and acquired slices in slice accelerated multiband (MB) echo planar imaging (EPI). No in-plane accelerations were used in order to maximize and evaluate achievable slice acceleration factors at 3 Tesla. We propose a novel leakage (L-) factor to quantify the effects of signal leakage between simultaneously acquired slices. With a standard 32-channel receiver coil at 3 Tesla, we demonstrate that slice acceleration factors of up to eight (MB = 8) with blipped controlled aliasing in parallel imaging (CAIPI), in the absence of in-plane accelerations, can be used routinely with acceptable image quality and integrity for whole brain imaging. Spectral analyses of single-shot fMRI time series demonstrate that temporal fluctuations due to both neuronal and physiological sources were distinguishable and comparable up to slice-acceleration factors of nine (MB = 9). The increased temporal efficiency could be employed to achieve, within a given acquisition period, higher spatial resolution, increased fMRI statistical power, multiple TEs, faster sampling of temporal events in a resting state fMRI time series, increased sampling of q-space in diffusion imaging, or more quiet time during a scan. PMID:23899722
A fully automated non-external marker 4D-CT sorting algorithm using a serial cine scanning protocol.
Carnes, Greg; Gaede, Stewart; Yu, Edward; Van Dyk, Jake; Battista, Jerry; Lee, Ting-Yim
2009-04-07
Current 4D-CT methods require external marker data to retrospectively sort image data and generate CT volumes. In this work, we develop an automated 4D-CT sorting algorithm that performs without the aid of data collected from an external respiratory surrogate. The sorting algorithm requires an overlapping cine scan protocol, which provides a spatial link between couch positions. Beginning with a starting scan position, images from the adjacent scan position (which spatially match the starting scan position) are selected by maximizing the normalized cross correlation (NCC) of the images at the overlapping slice position. The process is continued by 'daisy chaining' all couch positions using the selected images until an entire 3D volume is produced. The algorithm produced 16 phase volumes to complete a 4D-CT dataset. Additional 4D-CT datasets were also produced using external marker amplitude and phase angle sorting methods. The image quality of the volumes produced by the different methods was quantified by calculating the mean difference of the sorted overlapping slices from adjacent couch positions. The NCC-sorted images showed a significant decrease in the mean difference (p < 0.01) for the five patients.
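The NCC-based selection step can be sketched as follows, with synthetic slices standing in for the cine images; the 'daisy chaining' would simply repeat this selection from couch position to couch position. Function names and data are illustrative:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation of two equally sized slices."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def pick_matching_image(ref_slice, candidates):
    """Among cine images from the adjacent couch position, pick the one whose
    overlapping slice maximizes NCC with the reference slice."""
    scores = [ncc(ref_slice, c) for c in candidates]
    return int(np.argmax(scores)), scores

rng = np.random.default_rng(0)
ref = rng.standard_normal((16, 16))
cands = [rng.standard_normal((16, 16)) for _ in range(5)]
cands[3] = ref + 0.05 * rng.standard_normal((16, 16))  # same respiratory phase, slight noise
best, scores = pick_matching_image(ref, cands)
```

The candidate acquired at the same respiratory phase scores near 1 while unrelated candidates score near 0, which is what lets anatomy itself replace the external surrogate.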
Investigations of image fusion
NASA Astrophysics Data System (ADS)
Zhang, Zhong
1999-12-01
The objective of image fusion is to combine information from multiple images of the same scene. The result of image fusion is a single image that is more suitable for human visual perception or further image processing tasks. In this thesis, a region-based fusion algorithm using the wavelet transform is proposed. The identification of important features in each image, such as edges and regions of interest, is used to guide the fusion process. The idea of multiscale grouping is also introduced, and a generic image fusion framework based on multiscale decomposition is studied. The framework includes all of the existing multiscale-decomposition-based fusion approaches we found in the literature that did not assume a statistical model for the source images. Comparisons indicate that our framework includes some new approaches which outperform the existing approaches for the cases we consider. Registration must precede our fusion algorithms, so we propose a hybrid scheme which uses both feature-based and intensity-based methods. The idea of robust estimation of optical flow from time-varying images is employed with a coarse-to-fine multi-resolution approach and feature-based registration to overcome some of the limitations of intensity-based schemes. Experiments show that this approach is robust and efficient. Assessing image fusion performance in a real application is a complicated issue. In this dissertation, a mixture probability density function model is used in conjunction with the expectation-maximization algorithm to model histograms of edge intensity. Some new techniques are proposed for estimating the quality of a noisy image of a natural scene. Such quality measures can be used to guide the fusion. Finally, we study fusion of images obtained from several copies of a new type of camera developed for video surveillance. Our techniques increase the capability and reliability of the surveillance system and provide an easy way to obtain 3-D information about objects in the space monitored by the system.
Uniform competency-based local feature extraction for remote sensing images
NASA Astrophysics Data System (ADS)
Sedaghat, Amin; Mohammadi, Nazila
2018-01-01
Local feature detectors are widely used in many photogrammetry and remote sensing applications. The quantity and distribution of the local features play a critical role in the quality of the image matching process, particularly for multi-sensor high resolution remote sensing image registration. However, conventional local feature detectors cannot extract desirable matched features either in terms of the number of correct matches or the spatial and scale distribution in multi-sensor remote sensing images. To address this problem, this paper proposes a novel method for uniform and robust local feature extraction for remote sensing images, which is based on a novel competency criterion and scale and location distribution constraints. The proposed method, called uniform competency (UC) local feature extraction, can be easily applied to any local feature detector for various kinds of applications. The proposed competency criterion is based on a weighted ranking process using three quality measures, including robustness, spatial saliency and scale parameters, which is performed in a multi-layer gridding schema. For evaluation, five state-of-the-art local feature detector approaches, namely, scale-invariant feature transform (SIFT), speeded up robust features (SURF), scale-invariant feature operator (SFOP), maximally stable extremal region (MSER) and hessian-affine, are used. The proposed UC-based feature extraction algorithms were successfully applied to match various synthetic and real satellite image pairs, and the results demonstrate its capability to increase matching performance and to improve the spatial distribution. The code to carry out the UC feature extraction is available from https://www.researchgate.net/publication/317956777_UC-Feature_Extraction.
Mirro, Amy E.; Brady, Samuel L.; Kaufman, Robert A.
2016-01-01
Purpose: To implement the maximum level of statistical iterative reconstruction that can be used to establish dose-reduced head CT protocols in a primarily pediatric population. Methods: Select head examinations (brain, orbits, sinus, maxilla, and temporal bones) were investigated. Dose-reduced head protocols using adaptive statistical iterative reconstruction (ASiR) were compared for image quality with the original filtered back projection (FBP) reconstructed protocols in a phantom using the following metrics: image noise frequency (change in perceived appearance of noise texture), image noise magnitude, contrast-to-noise ratio (CNR), and spatial resolution. Dose reduction estimates were based on computed tomography dose index (CTDIvol) values. Patient CTDIvol and image noise magnitude were assessed in 737 pre- and post-dose-reduction examinations. Results: Image noise texture was acceptable up to 60% ASiR for the Soft reconstruction kernel (at both 100 and 120 kVp), and up to 40% ASiR for the Standard reconstruction kernel. Implementation of 40% and 60% ASiR led to an average reduction in CTDIvol of 43% for brain, 41% for orbits, 30% for maxilla, 43% for sinus, and 42% for temporal bone protocols for patients between 1 month and 26 years, while maintaining an average noise magnitude difference of 0.1% (range: −3% to 5%), improving the CNR of low-contrast soft tissue targets, and improving the spatial resolution of high-contrast bony anatomy, as compared to FBP. Conclusion: This study demonstrates a methodology for maximizing patient dose reduction while maintaining image quality using statistical iterative reconstruction for a primarily pediatric population undergoing head CT examination. PMID:27056425
Motorized photoacoustic tomography probe for label-free improvement in image quality
NASA Astrophysics Data System (ADS)
Sangha, Gurneet S.; Hale, Nick H.; Goergen, Craig J.
2018-02-01
One of the challenges in high-resolution in vivo lipid-based photoacoustic tomography (PAT) is improving penetration depth and signal-to-noise ratio (SNR) past subcutaneous fat absorbers. A potential solution is to create optical manipulation techniques to maximize the photon density within a region of interest. Here, we present a motorized PAT probe that is capable of tuning the depth at which light is focused, as well as substantially reducing probe-skin artifacts that can obscure image interpretation. Our PAT system consists of a Nd:YAG laser (Surelite EX, Continuum) coupled with a 40 MHz central frequency ultrasound transducer (Vevo2100, FUJIFILM VisualSonics). This system allows us to deliver 10 Hz, 5 ns light pulses with a fluence of 40 mJ/cm² to the tissue of interest and reconstruct PAT and ultrasound images with axial resolutions of 125 µm and 40 µm, respectively. The motorized PAT holder was validated by imaging a polyethylene-50 tubing embedded polyvinyl alcohol phantom and periaortic fat in apolipoprotein-E deficient mice. We used 1210 nm light for this study, as this wavelength generates PAT signal for both lipids and polyethylene-50 tubes. Ex vivo results showed a 2 mm improvement in penetration depth, and in vivo experiments showed an increase in lipid SNR of at least 62%. Our PAT probe also utilizes a 7 μm aluminum filter to prevent in vivo probe-skin reflection artifacts that have previously been resolved using image post-processing techniques. Using this optimized PAT probe, we can direct light to various depths within tissue to improve image quality and prevent reflection artifacts.
Optimal sampling and quantization of synthetic aperture radar signals
NASA Technical Reports Server (NTRS)
Wu, C.
1978-01-01
Some theoretical and experimental results on optimal sampling and quantization of synthetic aperture radar (SAR) signals are presented, including a derived theoretical relationship between the pixel signal-to-noise ratio of processed SAR images and the number of quantization bits per sampled signal, assuming homogeneous extended targets. With this relationship known, the problem of optimally allocating a fixed data bit-volume (for a specified surface area and resolution criterion) between the number of samples and the number of bits per sample may be solved. The results indicate that to achieve the best possible image quality for a fixed bit rate and a given resolution criterion, one should quantize individual samples coarsely and thereby maximize the number of multiple looks. The theoretical results are then compared with simulation results obtained by processing aircraft SAR data.
SubspaceEM: A Fast Maximum-a-posteriori Algorithm for Cryo-EM Single Particle Reconstruction
Dvornek, Nicha C.; Sigworth, Fred J.; Tagare, Hemant D.
2015-01-01
Single particle reconstruction methods based on the maximum-likelihood principle and the expectation-maximization (E–M) algorithm are popular because of their ability to produce high resolution structures. However, these algorithms are computationally very expensive, requiring a network of computational servers. To overcome this computational bottleneck, we propose a new mathematical framework for accelerating maximum-likelihood reconstructions. The speedup is by orders of magnitude and the proposed algorithm produces similar quality reconstructions compared to the standard maximum-likelihood formulation. Our approach uses subspace approximations of the cryo-electron microscopy (cryo-EM) data and projection images, greatly reducing the number of image transformations and comparisons that are computed. Experiments using simulated and actual cryo-EM data show that speedup in overall execution time compared to traditional maximum-likelihood reconstruction reaches factors of over 300. PMID:25839831
Energy Minimization of Molecular Features Observed on the (110) Face of Lysozyme Crystals
NASA Technical Reports Server (NTRS)
Perozzo, Mary A.; Konnert, John H.; Li, Huayu; Nadarajah, Arunan; Pusey, Marc
1999-01-01
Molecular dynamics and energy minimization have been carried out using the program XPLOR to check the plausibility of a model lysozyme crystal surface. The molecular features of the (110) face of lysozyme were observed using atomic force microscopy (AFM). A model of the crystal surface was constructed using the PDB file 193L and was used to simulate an AFM image. Molecule translations, van der Waals radii, and the assumed AFM tip shape were adjusted to maximize the correlation coefficient between the experimental and simulated images. The highest correlation (0.92) was obtained with the molecules displaced over 6 Å from their positions within the bulk of the crystal. The quality of this starting model, the extent of energy minimization, and the correlation coefficient between the final model and the experimental data will be discussed.
Maximal likelihood correspondence estimation for face recognition across pose.
Li, Shaoxin; Liu, Xin; Chai, Xiujuan; Zhang, Haihong; Lao, Shihong; Shan, Shiguang
2014-10-01
Due to the misalignment of image features, the performance of many conventional face recognition methods degrades considerably in the across-pose scenario. To address this problem, many image matching-based methods have been proposed to estimate the semantic correspondence between faces in different poses. In this paper, we aim to solve two critical problems of previous image matching-based correspondence learning methods: 1) they fail to fully exploit face-specific structure information in correspondence estimation, and 2) they fail to learn a personalized correspondence for each probe image. To this end, we first build a model, termed morphable displacement field (MDF), to encode face-specific structure information of semantic correspondence from a set of real samples of correspondences calculated from 3D face models. Then, we propose a maximal likelihood correspondence estimation (MLCE) method to learn personalized correspondence based on a maximal-likelihood frontal face assumption. After obtaining the semantic correspondence encoded in the learned displacement, we can synthesize virtual frontal images of the profile faces for subsequent recognition. Using the linear discriminant analysis method with pixel-intensity features, state-of-the-art performance is achieved on three multipose benchmarks, i.e., the CMU-PIE, FERET, and MultiPIE databases. Owing to the rational MDF regularization and the use of the novel maximal likelihood objective, the proposed MLCE method can reliably learn correspondence between faces in different poses even in a complex wild environment, i.e., the Labeled Faces in the Wild database.
Multi-ray-based system matrix generation for 3D PET reconstruction
NASA Astrophysics Data System (ADS)
Moehrs, Sascha; Defrise, Michel; Belcari, Nicola; DelGuerra, Alberto; Bartoli, Antonietta; Fabbri, Serena; Zanetti, Gianluigi
2008-12-01
Iterative image reconstruction algorithms for positron emission tomography (PET) require a sophisticated system matrix (model) of the scanner. Our aim is to set up such a model offline for the YAP-(S)PET II small animal imaging tomograph in order to use it subsequently with standard ML-EM (maximum-likelihood expectation maximization) and OSEM (ordered subset expectation maximization) for fully three-dimensional image reconstruction. In general, the system model can be obtained analytically, via measurements or via Monte Carlo simulations. In this paper, we present the multi-ray method, which can be considered as a hybrid method to set up the system model offline. It incorporates accurate analytical (geometric) considerations as well as crystal depth and crystal scatter effects. At the same time, it has the potential to model seamlessly other physical aspects such as the positron range. The proposed method is based on multiple rays which are traced from/to the detector crystals through the image volume. Such a ray-tracing approach itself is not new; however, we derive a novel mathematical formulation of the approach and investigate the positioning of the integration (ray-end) points. First, we study single system matrix entries and show that the positioning and weighting of the ray-end points according to Gaussian integration give better results compared to equally spaced integration points (trapezoidal integration), especially if only a small number of integration points (rays) are used. Additionally, we show that, for a given variance of the single matrix entries, the number of rays (events) required to calculate the whole matrix is a factor of 20 larger when using a pure Monte-Carlo-based method. Finally, we analyse the quality of the model by reconstructing phantom data from the YAP-(S)PET II scanner.
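The paper's finding that Gauss-positioned and Gauss-weighted ray-end points beat equally spaced (trapezoidal) points for a small number of rays can be illustrated with a generic 1-D quadrature comparison. The integrand below is an arbitrary smooth stand-in, not the paper's actual crystal line integrals:

```python
import numpy as np

def trapezoid_nodes(n):
    """Equally spaced points with trapezoidal weights on [-1, 1]."""
    x = np.linspace(-1.0, 1.0, n)
    w = np.full(n, 2.0 / (n - 1))
    w[0] *= 0.5
    w[-1] *= 0.5
    return x, w

def gauss_nodes(n):
    """Gauss-Legendre points and weights on [-1, 1]."""
    return np.polynomial.legendre.leggauss(n)

f = lambda x: 1.0 / (1.3 - x)      # smooth, attenuation-like profile
exact = np.log(2.3 / 0.3)          # analytic integral of f over [-1, 1]

xg, wg = gauss_nodes(4)
xt, wt = trapezoid_nodes(4)
err_gauss = abs((wg * f(xg)).sum() - exact)
err_trap = abs((wt * f(xt)).sum() - exact)
```

With only four points per ray, the Gauss rule is already more than an order of magnitude more accurate here, mirroring the paper's observation that node placement matters most when few rays are affordable.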
Kinetic DTI of the cervical spine: diffusivity changes in healthy subjects.
Kuhn, Félix P; Feydy, Antoine; Launay, Nathalie; Lefevre-Colau, Marie-Martine; Poiraudeau, Serge; Laporte, Sébastien; Maier, Marc A; Lindberg, Pavel
2016-09-01
This study aimed to assess the influence of neck extension on water diffusivity within the cervical spinal cord. The IRB-approved study included 22 healthy volunteers. All subjects underwent anatomical MR and diffusion tensor imaging (DTI) at 1.5 T. The cervical cord was imaged in the neutral (standard) position and in extension. Segmental vertebral rotations were analyzed on sagittal T2-weighted images using the SpineView® software. Spinal cord diffusivity was measured in cross-sectional regions of interest at multiple levels (C1-C5). Because the coil geometry was not adapted to spinal extension, 10 subjects had to be excluded. Image quality in the remaining 12 subjects was good, without any deteriorating artifacts. Quantitative measurements of vertebral rotation angles and diffusion parameters showed good intra-rater reliability (ICC = 0.84-0.99). DTI during neck extension revealed significantly decreased fractional anisotropy (FA) and increased radial diffusivity (RD) at the C3 level and increased apparent diffusion coefficients (ADC) at the C3 and C4 levels (p < 0.01, Bonferroni corrected). The C3/C4 level corresponded to the maximal absolute change in segmental vertebral rotation between the two positions. The increase in RD correlated positively with the degree of global extension, i.e., the summed vertebral rotation angle between C1 and C5 (R = 0.77, p = 0.006). Our preliminary results suggest that DTI can quantify changes in water diffusivity during cervical spine extension. The maximal differences in segmental vertebral rotation corresponded to the levels with significant changes in diffusivity (C3/C4). Consequently, kinetic DTI measurements may open new perspectives in the assessment of neural tissue under biomechanical constraints.
Automated brain tumor segmentation using spatial accuracy-weighted hidden Markov Random Field.
Nie, Jingxin; Xue, Zhong; Liu, Tianming; Young, Geoffrey S; Setayesh, Kian; Guo, Lei; Wong, Stephen T C
2009-09-01
A variety of algorithms have been proposed for brain tumor segmentation from multi-channel sequences; however, most of them require isotropic or pseudo-isotropic resolution of the MR images. Although co-registration and interpolation of low-resolution sequences, such as T2-weighted images, onto the space of the high-resolution image, such as the T1-weighted image, can be performed prior to the segmentation, the results are usually limited by partial volume effects due to interpolation of low-resolution images. To improve the quality of tumor segmentation in clinical applications, where low-resolution sequences are commonly used together with high-resolution images, we propose an algorithm based on the Spatial accuracy-weighted Hidden Markov random field and Expectation maximization (SHE) approach for both automated tumor and enhanced-tumor segmentation. SHE incorporates the spatial interpolation accuracy of low-resolution images into the optimization procedure of the Hidden Markov Random Field (HMRF) to segment tumors using multi-channel MR images with different resolutions, e.g., high-resolution T1-weighted and low-resolution T2-weighted images. In experiments, we evaluated this algorithm using a set of simulated multi-channel brain MR images with known ground-truth tissue segmentation and also applied it to a dataset of MR images obtained during clinical trials of brain tumor chemotherapy. The results show that more accurate tumor segmentation can be obtained in comparison with conventional multi-channel segmentation algorithms.
MRIQC: Advancing the automatic prediction of image quality in MRI from unseen sites
2017-01-01
Quality control of MRI is essential for excluding problematic acquisitions and avoiding bias in subsequent image processing and analysis. Visual inspection is subjective and impractical for large-scale datasets. Although automated quality assessments have been demonstrated on single-site datasets, it is unclear whether such solutions can generalize to unseen data acquired at new sites. Here, we introduce the MRI Quality Control tool (MRIQC), a tool for extracting quality measures and fitting a binary (accept/exclude) classifier. Our tool can be run both locally and as a free online service via the OpenNeuro.org portal. The classifier is trained on a publicly available, multi-site dataset (17 sites, N = 1102). We perform model selection evaluating different normalization and feature exclusion approaches aimed at maximizing across-site generalization and estimate an accuracy of 76%±13% on new sites, using leave-one-site-out cross-validation. We confirm that result on a held-out dataset (2 sites, N = 265), also obtaining 76% accuracy. Even though the performance of the trained classifier is statistically above chance, we show that it is susceptible to site effects and unable to account for artifacts specific to new sites. MRIQC performs with high accuracy in intra-site prediction, but performance on unseen sites leaves room for improvement, which might require more labeled data and new approaches to the between-site variability. Overcoming these limitations is crucial for a more objective quality assessment of neuroimaging data and to enable the analysis of extremely large and multi-site samples. PMID:28945803
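The leave-one-site-out evaluation used above can be sketched as follows. The classifier here is a toy nearest-centroid stand-in, not MRIQC's actual model, and the data and names are synthetic; the point is the site-wise hold-out loop:

```python
import numpy as np

def leave_one_site_out(X, y, sites, fit, predict):
    """Hold out each site in turn, train on the rest, report per-site accuracy."""
    accs = {}
    for s in np.unique(sites):
        test = sites == s
        model = fit(X[~test], y[~test])
        accs[s] = float((predict(model, X[test]) == y[test]).mean())
    return accs

def fit_centroid(X, y):
    """Toy classifier: one centroid per class (0 = accept, 1 = exclude)."""
    return {c: X[y == c].mean(axis=0) for c in (0, 1)}

def predict_centroid(model, X):
    d0 = np.linalg.norm(X - model[0], axis=1)
    d1 = np.linalg.norm(X - model[1], axis=1)
    return (d1 < d0).astype(int)

rng = np.random.default_rng(0)
n = 300
sites = rng.integers(0, 3, size=n)                  # 3 synthetic acquisition sites
y = rng.integers(0, 2, size=n)                      # accept/exclude labels
X = rng.standard_normal((n, 4)) + 3.0 * y[:, None]  # well-separated quality features
accs = leave_one_site_out(X, y, sites, fit_centroid, predict_centroid)
```

In this synthetic case the classes separate cleanly, so per-site accuracy is high; the paper's point is precisely that real site effects make the held-out-site accuracy drop well below the intra-site figure.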
Probabilistic segmentation and intensity estimation for microarray images.
Gottardo, Raphael; Besag, Julian; Stephens, Matthew; Murua, Alejandro
2006-01-01
We describe a probabilistic approach to simultaneous image segmentation and intensity estimation for complementary DNA microarray experiments. The approach overcomes several limitations of existing methods. In particular, it (a) uses a flexible Markov random field approach to segmentation that allows for a wider range of spot shapes than existing methods, including relatively common 'doughnut-shaped' spots; (b) models the image directly as background plus hybridization intensity, and estimates the two quantities simultaneously, avoiding the common logical error that estimates of foreground may be less than those of the corresponding background if the two are estimated separately; and (c) uses a probabilistic modeling approach to simultaneously perform segmentation and intensity estimation, and to compute spot quality measures. We describe two approaches to parameter estimation: a fast algorithm, based on the expectation-maximization and the iterated conditional modes algorithms, and a fully Bayesian framework. These approaches produce comparable results, and both appear to offer some advantages over other methods. We use an HIV experiment to compare our approach to two commercial software products: Spot and Arrayvision.
NASA Astrophysics Data System (ADS)
Dong, Kyung-Rae; Goo, Eun-Hoe; Lee, Jae-Seung; Chung, Woon-Kwan
2013-01-01
A consecutive series of 50 patients (28 males and 22 females) who underwent hepatic magnetic resonance imaging (MRI) from August to December 2011 were enrolled in this study. The appropriate parameters for abdominal MRI scans were determined by comparing images (TE = 90 and 128 msec) produced using the half-Fourier acquisition single-shot turbo spin-echo (HASTE) technique at different signal acquisition times. The patients consisted of 15 normal patients, 25 patients with a hepatoma, and 10 patients with a hemangioma. The TE in a single patient was set to either 90 msec or 128 msec. Signal intensities were then measured in the liver, spleen, pancreas, gallbladder, fat, muscle, and hemangioma, together with renderings of the biliary tract system based on the maximal-signal-intensity technique, and the signal-to-noise and contrast-to-noise ratios were obtained. Image quality was assessed subjectively, and the results were compared. The signal-to-noise and contrast-to-noise ratios for the liver, spleen, pancreas, gallbladder, fat and muscle, hepatocellular carcinomas, and hemangiomas, as well as for rendering of the hepatobiliary tract system based on the maximum-signal-intensity technique, were significantly higher at TE = 128 msec than at TE = 90 msec (p < 0.05). In addition, the presence of artifacts, the image clarity, and the overall image quality were rated better at TE = 128 msec (p < 0.05). In abdominal MRI, the breath-hold HASTE technique was found to be effective in illustrating the abdominal organs at TE = 128 msec. The image quality at TE = 128 msec was better than that at TE = 90 msec due to the improved signal-to-noise (SNR) and contrast-to-noise (CNR) ratios, and overall, the HASTE technique for abdominal MRI at high magnetic field (3.0 T) with a TE of 128 msec can provide useful data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, B; Fujita, A; Buch, K
Purpose: To investigate the correlation between texture analysis-based model observer and human observer in the task of diagnosis of ischemic infarct in non-contrast head CT of adults. Methods: Non-contrast head CTs of five patients (2 M, 3 F; 58–83 y) with ischemic infarcts were retro-reconstructed using FBP and Adaptive Statistical Iterative Reconstruction (ASIR) of various levels (10–100%). Six neuro -radiologists reviewed each image and scored image quality for diagnosing acute infarcts by a 9-point Likert scale in a blinded test. These scores were averaged across the observers to produce the average human observer responses. The chief neuro-radiologist placed multiple ROIsmore » over the infarcts. These ROIs were entered into a texture analysis software package. Forty-two features per image, including 11 GLRL, 5 GLCM, 4 GLGM, 9 Laws, and 13 2-D features, were computed and averaged over the images per dataset. The Fisher-coefficient (ratio of between-class variance to in-class variance) was calculated for each feature to identify the most discriminating features from each matrix that separate the different confidence scores most efficiently. The 15 features with the highest Fisher -coefficient were entered into linear multivariate regression for iterative modeling. Results: Multivariate regression analysis resulted in the best prediction model of the confidence scores after three iterations (df=11, F=11.7, p-value<0.0001). The model predicted scores and human observers were highly correlated (R=0.88, R-sq=0.77). The root-mean-square and maximal residual were 0.21 and 0.44, respectively. The residual scatter plot appeared random, symmetric, and unbiased. Conclusion: For diagnosis of ischemic infarct in non-contrast head CT in adults, the predicted image quality scores from texture analysis-based model observer was highly correlated with that of human observers for various noise levels. 
Texture-based model observers can characterize the image quality of low-contrast, subtle texture changes in addition to what human observers can assess.
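The Fisher coefficient used above for feature selection (the ratio of between-class variance to within-class variance) is simple to compute; a minimal sketch, with invented per-class feature samples rather than data from the study:

```python
import numpy as np

def fisher_coefficient(groups):
    """Ratio of between-class variance to pooled within-class variance
    for one feature, given a list of per-class sample arrays."""
    means = np.array([np.mean(g) for g in groups])
    grand_mean = np.mean(np.concatenate(groups))
    between = np.mean((means - grand_mean) ** 2)
    within = np.mean([np.var(g) for g in groups])
    return between / within

# A discriminating feature (well-separated classes) versus an
# overlapping one; values are purely illustrative.
separated = fisher_coefficient([[1.0, 1.1, 0.9], [5.0, 5.1, 4.9]])
overlapping = fisher_coefficient([[1.0, 2.0, 3.0], [1.5, 2.5, 3.5]])
```

Ranking features by this coefficient, as done in the study, keeps those whose class means are far apart relative to their spread.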
NASA Astrophysics Data System (ADS)
Kolstein, M.; De Lorenzo, G.; Mikhaylova, E.; Chmeissani, M.; Ariño, G.; Calderón, Y.; Ozsahin, I.; Uzun, D.
2013-04-01
The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For PET scanners, conventional algorithms like Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM) are straightforward to use and give good results. However, FBP presents difficulties for detectors with limited angular coverage like PEM and Compton gamma cameras, whereas OSEM has an impractically large time and memory consumption for a Compton gamma camera with a large number of channels. In this article, the Origin Ensemble (OE) algorithm is evaluated as an alternative algorithm for image reconstruction. Monte Carlo simulations of the PET design are used to compare the performance of OE, FBP and OSEM in terms of the bias, variance and average mean squared error (MSE) image quality metrics. For the PEM and Compton camera designs, results obtained with OE are presented.
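OSEM is a subset-accelerated form of the MLEM update for emission data; a minimal MLEM sketch on an invented, noiseless toy system (the system matrix and activities are illustrative) shows the iteration being compared above:

```python
import numpy as np

def mlem(A, y, n_iters=200):
    """Maximum-likelihood EM for emission tomography:
    x <- x / (A^T 1) * A^T (y / (A x))."""
    x = np.ones(A.shape[1])            # flat initial estimate
    sens = A.T @ np.ones(A.shape[0])   # sensitivity image
    for _ in range(n_iters):
        proj = A @ x
        ratio = np.where(proj > 0, y / proj, 0.0)
        x *= (A.T @ ratio) / sens
    return x

# Toy system: 3 detector bins, 2 voxels, noiseless data.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
x_hat = mlem(A, A @ x_true)
```

For noiseless, consistent data the iterates converge to the true activities; OSEM applies the same update using only a subset of projections per sub-iteration to accelerate convergence.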
Sleep quality and duration are associated with performance in maximal incremental test.
Antunes, B M; Campos, E Z; Parmezzani, S S; Santos, R V; Franchini, E; Lira, F S
2017-08-01
Inadequate sleep patterns may be considered a trigger for the development of several metabolic diseases. Additionally, sleep deprivation and poor sleep quality can negatively impact performance in exercise training. However, the impact of sleep duration and sleep quality on performance during an incremental maximal test performed by healthy men is unclear. Therefore, the purpose of the study was to analyze the association between sleep pattern (duration and quality) and performance during a maximal incremental test in healthy male individuals. A total of 28 healthy males volunteered to take part in the study. Sleep quality, sleep duration and physical activity were subjectively assessed by questionnaires. Sleep pattern was classified by sleep duration (>7 h or <7 h of sleep per night) and sleep quality according to the sum of measured points and/or scores on the Pittsburgh Sleep Quality Index (PSQI). The incremental exercise test was performed at 35 watts for untrained subjects, 70 watts for physically active subjects and 105 watts for well-trained subjects. HRmax was correlated with sleep quality (r=0.411, p=0.030) and sleep duration (r=-0.430, p=0.022). Participants reporting good sleep quality presented higher values of Wmax and VO2max and lower values of HRmax when compared to participants with altered sleep. Regarding sleep duration, only Wmax was influenced by the amount of sleeping hours per night, and this association remained significant even after adjustment by VO2max. Sleep duration and quality are associated, at least in part, with performance during a maximal incremental test among healthy men, with losses in Wmax and HRmax. In addition, our results suggest that the relationship between sleep patterns and performance, mainly in Wmax, is independent of fitness condition. Copyright © 2017 Elsevier Inc. All rights reserved.
Simultaneous fluoroscopic and nuclear imaging: impact of collimator choice on nuclear image quality.
van der Velden, Sandra; Beijst, Casper; Viergever, Max A; de Jong, Hugo W A M
2017-01-01
X-ray-guided oncological interventions could benefit from the availability of simultaneously acquired nuclear images during the procedure. To this end, a real-time, hybrid fluoroscopic and nuclear imaging device, consisting of an X-ray c-arm combined with gamma imaging capability, is currently being developed (Beijst C, Elschot M, Viergever MA, de Jong HW. Radiol. 2015;278:232-238). The setup comprises four gamma cameras placed adjacent to the X-ray tube. The four camera views are used to reconstruct an intermediate three-dimensional image, which is subsequently converted to a virtual nuclear projection image that overlaps with the X-ray image. The purpose of the present simulation study is to evaluate the impact of gamma camera collimator choice (parallel hole versus pinhole) on the quality of the virtual nuclear image. Simulation studies were performed with a digital image quality phantom including realistic noise and resolution effects, with a dynamic frame acquisition time of 1 s and a total activity of 150 MBq. Projections were simulated for 3, 5, and 7 mm pinholes and for three parallel hole collimators (low-energy all-purpose (LEAP), low-energy high-resolution (LEHR) and low-energy ultra-high-resolution (LEUHR)). Intermediate reconstruction was performed with maximum likelihood expectation-maximization (MLEM) with point spread function (PSF) modeling. In the virtual projection derived therefrom, contrast, noise level, and detectability were determined and compared with the ideal projection, that is, as if a gamma camera were located at the position of the X-ray detector. Furthermore, image deformations and spatial resolution were quantified. Additionally, simultaneous fluoroscopic and nuclear images of a sphere phantom were acquired with a physical prototype system and compared with the simulations. For small hot spots, contrast is comparable for all simulated collimators. 
Noise levels are, however, 3 to 8 times higher in pinhole geometries than in parallel hole geometries. This results in higher contrast-to-noise ratios for parallel hole geometries. Smaller spheres can thus be detected with parallel hole collimators than with pinhole collimators (17 mm vs 28 mm). Pinhole geometries show larger image deformations than parallel hole geometries. Spatial resolution varied between 1.25 cm for the 3 mm pinhole and 4 cm for the LEAP collimator. The simulation method was successfully validated by the experiments with the physical prototype. A real-time hybrid fluoroscopic and nuclear imaging device is currently being developed. Image quality of nuclear images obtained with different collimators was compared in terms of contrast, noise, and detectability. Parallel hole collimators showed lower noise and better detectability than pinhole collimators. © 2016 American Association of Physicists in Medicine.
TU-H-206-01: An Automated Approach for Identifying Geometric Distortions in Gamma Cameras
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mann, S; Nelson, J; Samei, E
2016-06-15
Purpose: To develop a clinically-deployable, automated process for detecting artifacts in routine nuclear medicine (NM) quality assurance (QA) bar phantom images. Methods: An artifact detection algorithm was created to analyze bar phantom images as part of an ongoing QA program. A low-noise, high-resolution reference image was acquired from an x-ray of the bar phantom with a Philips Digital Diagnost system utilizing image stitching. NM bar images, acquired for 5 million counts over a 512×512 matrix, were registered to the template image by maximizing mutual information (MI). The MI index was used as an initial test for artifacts; low values indicate an overall presence of distortions regardless of their spatial location. Images with low MI scores were further analyzed for bar linearity, periodicity, alignment, and compression to locate differences with respect to the template. Findings from each test were spatially correlated, and locations failing multiple tests were flagged as potential artifacts requiring additional visual analysis. The algorithm was initially deployed for GE Discovery 670 and Infinia Hawkeye gamma cameras. Results: The algorithm successfully identified clinically relevant artifacts from both systems previously unnoticed by technologists performing the QA. Average MI indices for artifact-free images are 0.55. Images with MI indices < 0.50 have shown 100% sensitivity and specificity for artifact detection when compared with a thorough visual analysis. Correlation of geometric tests confirms the ability to spatially locate the most likely image regions containing an artifact regardless of initial phantom orientation. Conclusion: The algorithm shows the potential to detect gamma camera artifacts that may be missed by routine technologist inspections. Detection and subsequent correction of artifacts ensures maximum image quality and may help to identify failing hardware before it impacts clinical workflow.
Going forward, the algorithm is being deployed to monitor data from all gamma cameras within our health system.
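The MI index used as the first-pass artifact test can be estimated from the joint histogram of the acquired image and the template; a sketch with synthetic images (the 0.50 clinical threshold is the paper's, everything else here is illustrative):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two same-shape images, in nats,
    estimated from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
template = rng.random((64, 64))
aligned = template + 0.05 * rng.random((64, 64))   # well-registered image
unrelated = rng.random((64, 64))                   # no shared structure
mi_good = mutual_information(template, aligned)
mi_bad = mutual_information(template, unrelated)
```

A registered, artifact-free image shares structure with the template and scores high; a distorted or misregistered one scores low, triggering the follow-up geometric tests.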
Lee, Yuna S H; Stone, Patricia W; Pogorzelska-Maziarz, Monika; Nembhard, Ingrid M
Central line-associated bloodstream infections (CLABSIs) are a common and costly quality problem, and their prevention is a national priority. A decade ago, researchers identified an evidence-based bundle of practices that reduce CLABSIs. Compliance with this bundle remains low in many hospitals. The aim of this study was to assess whether differences in core aspects of work environments (workload, quality of relationships, and prioritization of quality) are associated with variation in maximal CLABSI bundle compliance, that is, compliance 95%-100% of the time, in intensive care units (ICUs). A cross-sectional study of hospital medical-surgical ICUs in the United States was done. Data on work environment and bundle compliance were obtained from the Prevention of Nosocomial Infections and Cost-Effectiveness Refined Survey completed in 2011 by infection prevention directors, and data on ICU and hospital characteristics were obtained from the National Healthcare Safety Network. Factor and multilevel regression analyses were conducted. Reasonable workload and prioritization of quality were positively associated with maximal CLABSI bundle compliance. High-quality relationships, although a significant predictor when evaluated apart from workload and prioritization of quality, had no significant effect after accounting for these two factors. Aspects of the staff work environment are associated with maximal CLABSI bundle compliance in ICUs. Our results suggest that hospitals can foster improvement in ensuring maximal CLABSI bundle compliance (a crucial precursor to reducing CLABSI infection rates) by establishing reasonable workloads and prioritizing quality.
Formation Control of the MAXIM L2 Libration Orbit Mission
NASA Technical Reports Server (NTRS)
Folta, David; Hartman, Kate; Howell, Kathleen; Marchand, Belinda
2004-01-01
The Micro-Arcsecond Imaging Mission (MAXIM), a proposed concept for the Structure and Evolution of the Universe (SEU) Black Hole Imaging mission, is designed to make a ten million-fold improvement in X-ray image clarity of celestial objects by providing better than 0.1 microarcsecond imaging. To achieve mission requirements, MAXIM will have to improve on pointing by orders of magnitude. This pointing requirement impacts the control and design of the formation. Currently the architecture is comprised of 25 spacecraft, which will form the sparse apertures of a grazing incidence X-ray interferometer covering the 0.3-10 keV bandpass. This configuration will deploy 24 spacecraft as optics modules and one as the detector. The formation must allow for long duration continuous science observations and also for reconfiguration that permits re-pointing of the formation. In this paper, we provide analysis and trades of several control efforts that are dependent upon the pointing requirements and the configuration and dimensions of the MAXIM formation. We emphasize the utilization of natural motions in the Lagrangian regions that minimize the control efforts and we address both continuous and discrete control via LQR and feedback linearization. Results provide control cost, configuration options, and capabilities as guidelines for the development of this complex mission.
[Design and analysis of a novel light visible spectrum imaging spectrograph optical system].
Shen, Man-de; Li, Fei; Zhou, Li-bing; Li, Cheng; Ren, Huan-huan; Jiang, Qing-xiu
2015-02-01
A novel visible spectrum imaging spectrograph optical system was proposed based on the negative dispersion and arbitrary phase modulation characteristics of diffractive optical elements and the aberration correction characteristics of freeform optical elements. The cemented doublet lens was substituted by a hybrid refractive/diffractive lens based on the negative dispersion of the diffractive optical element. Two freeform optical elements were used to correct aberrations. An example and a detailed design process were presented. With uniform design parameters, compared with the traditional system, the novel visible spectrum imaging spectrograph optical system's weight was reduced by 22.9%, the total length was reduced by 26.6%, the maximal diameter was reduced by 30.6%, and the modulation transfer function (MTF) at 1.0 field-of-view was improved by 0.35, with the field-of-view maximally enlarged. The maximal distortion was reduced by 1.6%, the maximal longitudinal aberration was reduced by 56.4%, and the lateral color aberration was reduced by 59.3%. These data show that the performance of the novel system is markedly improved, offering a new approach for modern visible spectrum imaging spectrograph optical system design.
Blind deconvolution with principal components analysis for wide-field and small-aperture telescopes
NASA Astrophysics Data System (ADS)
Jia, Peng; Sun, Rongyu; Wang, Weinan; Cai, Dongmei; Liu, Huigen
2017-09-01
Telescopes with a wide field of view (greater than 1°) and small apertures (less than 2 m) are workhorses for observations such as sky surveys and fast-moving object detection, and play an important role in time-domain astronomy. However, images captured by these telescopes are contaminated by optical system aberrations, atmospheric turbulence, tracking errors and wind shear. To increase the quality of images and maximize their scientific output, we propose a new blind deconvolution algorithm based on statistical properties of the point spread functions (PSFs) of these telescopes. In this new algorithm, we first construct the PSF feature space through principal component analysis, and then classify PSFs from a different position and time using a self-organizing map. According to the classification results, we divide images of the same PSF types and select these PSFs to construct a prior PSF. The prior PSF is then used to restore these images. To investigate the improvement that this algorithm provides for data reduction, we process images of space debris captured by our small-aperture wide-field telescopes. Comparing the reduced results of the original images and the images processed with the standard Richardson-Lucy method, our method shows a promising improvement in astrometry accuracy.
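The first step of the proposed pipeline, building a PSF feature space by principal component analysis, can be sketched with a synthetic stack of Gaussian PSFs (the grid size and widths are invented, not the paper's data):

```python
import numpy as np

def psf_principal_components(psfs, n_components=3):
    """PCA of a stack of PSF images via SVD of the centered data matrix.
    Each PSF is approximately mean + coefficients @ components."""
    X = psfs.reshape(len(psfs), -1).astype(float)
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    components = Vt[:n_components]          # orthonormal basis images
    coeffs = (X - mean) @ components.T      # per-PSF feature vectors
    return components, coeffs

# Simulated stack: Gaussian PSFs of varying width on a 17x17 grid,
# mimicking PSF variation across field position and time.
yy, xx = np.mgrid[-8:9, -8:9]
widths = np.linspace(1.5, 3.5, 20)
psfs = np.stack([np.exp(-(xx**2 + yy**2) / (2 * w**2)) for w in widths])
components, coeffs = psf_principal_components(psfs)
```

The low-dimensional `coeffs` vectors are what a self-organizing map would then cluster to group PSFs of the same type before building the prior PSF.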
Allenby, Mark C; Misener, Ruth; Panoskaltsis, Nicki; Mantalaris, Athanasios
2017-02-01
Three-dimensional (3D) imaging techniques provide spatial insight into environmental and cellular interactions and are implemented in various fields, including tissue engineering, but have been restricted by limited quantification tools that misrepresent or underutilize the cellular phenomena captured. This study develops image postprocessing algorithms pairing complex Euclidean metrics with Monte Carlo simulations to quantitatively assess cell and microenvironment spatial distributions while utilizing, for the first time, the entire 3D image captured. Although current methods only analyze a central fraction of presented confocal microscopy images, the proposed algorithms can utilize 210% more cells to calculate 3D spatial distributions that can span a 23-fold longer distance. These algorithms seek to leverage the high sample cost of 3D tissue imaging techniques by extracting maximal quantitative data throughout the captured image.
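A common Monte Carlo approach for such spatial statistics (a generic sketch, not necessarily the authors' exact algorithm) compares an observed nearest-neighbour distance against a null distribution from uniform-random placement:

```python
import numpy as np

def mean_nn_distance(points):
    """Mean nearest-neighbour distance in a 3D point set."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1).mean()

def clustering_p_value(points, box=1.0, n_trials=500, seed=0):
    """Monte Carlo test: fraction of uniform-random placements whose
    mean NN distance is as small as observed (small p => clustered)."""
    rng = np.random.default_rng(seed)
    observed = mean_nn_distance(points)
    null = [mean_nn_distance(rng.uniform(0, box, points.shape))
            for _ in range(n_trials)]
    return float(np.mean([m <= observed for m in null]))

# Tightly clustered "cells" in a unit volume look non-random under the null.
rng = np.random.default_rng(1)
clustered = 0.5 + 0.01 * rng.standard_normal((30, 3))
p = clustering_p_value(clustered)
```

The same machinery works for cell-to-microenvironment distances by replacing the nearest-neighbour statistic with the Euclidean metric of interest.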
NASA Astrophysics Data System (ADS)
Sramek, Benjamin Koerner
The ability to deliver conformal dose distributions in radiation therapy through intensity modulation and the potential for tumor dose escalation to improve treatment outcome has necessitated an increase in localization accuracy of inter- and intra-fractional patient geometry. Megavoltage cone-beam CT imaging using the treatment beam and onboard electronic portal imaging device is one option currently being studied for implementation in image-guided radiation therapy. However, routine clinical use is predicated upon continued improvements in image quality and patient dose delivered during acquisition. The formal statement of hypothesis for this investigation was that the conformity of planned to delivered dose distributions in image-guided radiation therapy could be further enhanced through the application of kilovoltage scatter correction and intermediate view estimation techniques to megavoltage cone-beam CT imaging, and that normalized dose measurements could be acquired and inter-compared between multiple imaging geometries. 
The specific aims of this investigation were to: (1) incorporate the Feldkamp, Davis and Kress filtered backprojection algorithm into a program to reconstruct a voxelized linear attenuation coefficient dataset from a set of acquired megavoltage cone-beam CT projections, (2) characterize the effects on megavoltage cone-beam CT image quality resulting from the application of Intermediate View Interpolation and Intermediate View Reprojection techniques to limited-projection datasets, (3) incorporate the Scatter and Primary Estimation from Collimator Shadows (SPECS) algorithm into megavoltage cone-beam CT image reconstruction and determine the set of SPECS parameters which maximize image quality and quantitative accuracy, and (4) evaluate the normalized axial dose distributions received during megavoltage cone-beam CT image acquisition using radiochromic film and thermoluminescent dosimeter measurements in anthropomorphic pelvic and head and neck phantoms. The conclusions of this investigation were: (1) the implementation of intermediate view estimation techniques to megavoltage cone-beam CT produced improvements in image quality, with the largest impact occurring for smaller numbers of initially-acquired projections, (2) the SPECS scatter correction algorithm could be successfully incorporated into projection data acquired using an electronic portal imaging device during megavoltage cone-beam CT image reconstruction, (3) a large range of SPECS parameters were shown to reduce cupping artifacts as well as improve reconstruction accuracy, with application to anthropomorphic phantom geometries improving the percent difference in reconstructed electron density for soft tissue from -13.6% to -2.0%, and for cortical bone from -9.7% to 1.4%, (4) dose measurements in the anthropomorphic phantoms showed consistent agreement between planar measurements using radiochromic film and point measurements using thermoluminescent dosimeters, and (5) a comparison of normalized dose 
measurements acquired with radiochromic film to those calculated using multiple treatment planning systems, accelerator-detector combinations, patient geometries and accelerator outputs produced a relatively good agreement.
Development of a fusion approach selection tool
NASA Astrophysics Data System (ADS)
Pohl, C.; Zeng, Y.
2015-06-01
During the last decades, the number and quality of available remote sensing satellite sensors for Earth observation has grown significantly. The amount of available multi-sensor images, along with their increased spatial and spectral resolution, provides new challenges to Earth scientists. With a Fusion Approach Selection Tool (FAST), the remote sensing community would obtain access to an optimized and improved image processing technology. Remote sensing image fusion is a means to produce images containing information that is not inherent in any single image alone. The user today has access to sophisticated commercial image fusion techniques, plus the option to tune the parameters of each individual technique to match the anticipated application. This leaves the operator with an uncountable number of options to combine remote sensing images, not to mention the selection of the appropriate images, resolution and bands. Image fusion can be a machine- and time-consuming endeavour. In addition, it requires knowledge about remote sensing, image fusion, digital image processing and the application. FAST shall provide the user with a quick overview of processing flows to choose from to reach the target. FAST will ask for the available images, application parameters and desired information, and process this input to produce a workflow that quickly obtains the best results. It will optimize data and image fusion techniques. It provides an overview of the possible results from which the user can choose the best. FAST will enable even inexperienced users to use advanced processing methods to maximize the benefit of multi-sensor image exploitation.
NASA Technical Reports Server (NTRS)
Huffman, George J.; Adler, Robert F.; Bolvin, David T.; Curtis, Scott; Einaudi, Franco (Technical Monitor)
2001-01-01
Multi-purpose remote-sensing products from various satellites have proved crucial in developing global estimates of precipitation. Examples of these products include low-earth-orbit and geosynchronous-orbit infrared (leo- and geo-IR), Outgoing Longwave Radiation (OLR), Television Infrared Operational Satellite (TIROS) Operational Vertical Sounder (TOVS) data, and passive microwave data such as that from the Special Sensor Microwave/ Imager (SSM/I). Each of these datasets has served as the basis for at least one useful quasi-global precipitation estimation algorithm; however, the quality of estimates varies tremendously among the algorithms for the different climatic regions around the globe.
Kim, Joowhan; Min, Sung-Wook; Lee, Byoungho
2007-10-01
Integral floating display is a recently proposed three-dimensional (3D) display method which provides a dynamic 3D image in the vicinity of an observer. It has a viewing window; only through this window can correct 3D images be observed. However, the positional difference between the viewing window and the floating image causes a limited viewing zone in the integral floating system. In this paper, we provide the principle and experimental results of the location adjustment of the viewing window of the integral floating display system by modifying the elemental image region for integral imaging. We explain the characteristics of the viewing window and propose how to move the viewing window to maximize the viewing zone.
NASA Astrophysics Data System (ADS)
Lojacono, Xavier; Richard, Marie-Hélène; Ley, Jean-Luc; Testa, Etienne; Ray, Cédric; Freud, Nicolas; Létang, Jean Michel; Dauvergne, Denis; Maxim, Voichiţa; Prost, Rémy
2013-10-01
The Compton camera is a relevant imaging device for the detection of prompt photons produced by nuclear fragmentation in hadrontherapy. It may allow an improvement in detection efficiency compared to a standard gamma-camera but requires more sophisticated image reconstruction techniques. In this work, we simulate low statistics acquisitions from a point source having a broad energy spectrum compatible with hadrontherapy. We then reconstruct the image of the source with a recently developed filtered backprojection algorithm, a line-cone approach and an iterative List Mode Maximum Likelihood Expectation Maximization algorithm. Simulated data come from a Compton camera prototype designed for hadrontherapy online monitoring. Results indicate that the achievable resolution in directions parallel to the detector, that may include the beam direction, is compatible with the quality control requirements. With the prototype under study, the reconstructed image is elongated in the direction orthogonal to the detector. However this direction is of less interest in hadrontherapy where the first requirement is to determine the penetration depth of the beam in the patient. Additionally, the resolution may be recovered using a second camera.
Abbaspour, Samira; Tanha, Kaveh; Mahmoudian, Babak; Assadi, Majid; Pirayesh Islamian, Jalil
2018-04-22
Collimator geometry has an important influence on image quality in SPECT imaging. The purpose of this study was to investigate the effect of parallel-hole collimator hole size on the functional parameters (spatial resolution and sensitivity) and the image quality of a HiReSPECT imaging system using the SIMIND Monte Carlo program. To find a proper trade-off between sensitivity and spatial resolution, collimators with hole diameters of 0.3-1.5 mm (in steps of 0.3 mm) were used with fixed septal thickness and hole length (0.2 mm and 34 mm, respectively). Lead, Gold, and Tungsten as the LEHR collimator material were also investigated. The results of a 99mTc point source scan with the experimental and simulated systems were matched to validate the simulated imaging system. The simulation results showed that decreasing the collimator hole size, especially for the Gold collimator, improved the spatial resolution by 18% and 3.2% compared to Lead and Tungsten, respectively. Meanwhile, the Lead collimator provided a sensitivity about 7% and 8% better than that of Tungsten and Gold, respectively. Overall, the spatial resolution and sensitivity showed small differences among the three collimator materials assayed within the defined energy window. With increasing hole size, the Gold collimator produced lower scatter and penetration fractions than the Tungsten and Lead collimators. The minimum detectable sizes of hot rods in the micro-Jaszczak phantom on the iterative maximum-likelihood expectation maximization (MLEM) reconstructed images were in the sectors of 1.6, 1.8, 2.0, 2.4 and 2.6 mm for scanning with collimator hole sizes of 0.3, 0.6, 0.9, 1.2 and 1.5 mm at a 5 cm distance from the phantom. The Gold collimator with a hole size of 0.3 mm provided the best image quality with the HiReSPECT system. Copyright © 2018 Elsevier Ltd. All rights reserved.
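The resolution-sensitivity trade-off being explored follows the standard textbook relations for a parallel-hole collimator; a sketch using the geometric approximations (septal penetration neglected, so this is not the SIMIND model), with the study's fixed septal thickness and hole length:

```python
def collimator_tradeoff(d, t, l, z):
    """Geometric resolution and relative efficiency of a parallel-hole
    collimator (textbook approximations).
    d: hole diameter, t: septal thickness, l: hole length,
    z: source-to-collimator distance (all in the same units)."""
    resolution = d * (l + z) / l                 # FWHM grows with distance
    efficiency = (d**2 / (l * (d + t)))**2       # relative geometric efficiency
    return resolution, efficiency

# Halving the hole diameter (0.6 -> 0.3 mm) at t = 0.2 mm, l = 34 mm,
# for a source 50 mm from the collimator face:
r_small, e_small = collimator_tradeoff(0.3, 0.2, 34.0, 50.0)
r_large, e_large = collimator_tradeoff(0.6, 0.2, 34.0, 50.0)
```

Smaller holes sharpen the resolution but cost efficiency roughly as the fourth power of the diameter, which is why the study searches for a trade-off rather than simply minimizing hole size.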
Sparse-view proton computed tomography using modulated proton beams.
Lee, Jiseoc; Kim, Changhwan; Min, Byungjun; Kwak, Jungwon; Park, Seyjoon; Lee, Se Byeong; Park, Sungyong; Cho, Seungryong
2015-02-01
Proton imaging that uses a modulated proton beam and an intensity detector allows relatively fast image acquisition compared to imaging approaches based on a trajectory tracking detector. In addition, it requires a relatively simple implementation in conventional proton therapy equipment. The model of a geometric straight ray assumed in conventional computed tomography (CT) image reconstruction is, however, challenged by multiple Coulomb scattering and energy straggling in proton imaging. Radiation dose to the patient is another important issue that has to be taken care of for practical applications. In this work, the authors have investigated iterative image reconstructions after a deconvolution of the sparsely view-sampled data to address these issues in proton CT. Proton projection images were acquired using the modulated proton beams and EBT2 film as an intensity detector. Four electron-density cylinders representing normal soft tissues and bone were used as the imaged object and scanned at 40 views equally spaced over 360°. Digitized film images were converted to water-equivalent thickness by use of an empirically derived conversion curve. To improve image quality, a deconvolution-based image deblurring with an empirically acquired point spread function was employed. The authors implemented iterative image reconstruction algorithms such as adaptive steepest descent-projection onto convex sets (ASD-POCS), superiorization method-projection onto convex sets (SM-POCS), superiorization method-expectation maximization (SM-EM), and expectation maximization-total variation minimization (EM-TV). Performance of the four image reconstruction algorithms was analyzed and compared quantitatively via contrast-to-noise ratio (CNR) and root-mean-square error (RMSE). Objects of higher electron density were reconstructed more accurately than those of lower density. The bone, for example, was reconstructed within 1% error.
EM-based algorithms produced an increased image noise and RMSE as the iteration reaches about 20, while the POCS-based algorithms showed a monotonic convergence with iterations. The ASD-POCS algorithm outperformed the others in terms of CNR, RMSE, and the accuracy of the reconstructed relative stopping power in the region of lung and soft tissues. The four iterative algorithms, i.e., ASD-POCS, SM-POCS, SM-EM, and EM-TV, have been developed and applied for proton CT image reconstruction. Although it still seems that the images need to be improved for practical applications to the treatment planning, proton CT imaging by use of the modulated beams in sparse-view sampling has demonstrated its feasibility.
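The two figures of merit used for the comparison can be written directly; the images below are synthetic stand-ins, not the film data:

```python
import numpy as np

def cnr(img, roi, bg):
    """Contrast-to-noise ratio between an object ROI and a background ROI
    (boolean masks), with background standard deviation as the noise."""
    return abs(img[roi].mean() - img[bg].mean()) / img[bg].std()

def rmse(img, truth):
    """Root-mean-square error against a ground-truth image."""
    return float(np.sqrt(np.mean((img - truth) ** 2)))

# Synthetic "reconstruction": a bone-like insert in a uniform background
# plus Gaussian noise standing in for reconstruction noise.
rng = np.random.default_rng(0)
truth = np.zeros((32, 32))
truth[12:20, 12:20] = 1.0
recon = truth + 0.1 * rng.standard_normal(truth.shape)
roi = truth == 1.0
bg = truth == 0.0
```

An algorithm whose iterates grow noisier (as reported for the EM-based methods around 20 iterations) shows a falling CNR and a rising RMSE under exactly these definitions.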
Ouadah, S.; Stayman, J. W.; Gang, G.; Uneri, A.; Ehtiati, T.; Siewerdsen, J. H.
2015-01-01
Purpose: Robotic C-arm systems are capable of general noncircular orbits whose trajectories can be driven by the particular imaging task. However, obtaining accurate calibrations for reconstruction in such geometries can be a challenging problem. This work proposes a method to perform a unique geometric calibration of an arbitrary C-arm orbit by registering 2D projections to a previously acquired 3D image to determine the transformation parameters representing the system geometry. Methods: Experiments involved a cone-beam CT (CBCT) bench system, a robotic C-arm, and three phantoms. A robust 3D-2D registration process was used to compute the 9-degree-of-freedom (DOF) transformation between each projection and an existing 3D image by maximizing normalized gradient information with a digitally reconstructed radiograph (DRR) of the 3D volume. The quality of the resulting “self-calibration” was evaluated in terms of the agreement with an established calibration method using a BB phantom as well as image quality in the resulting CBCT reconstruction. Results: The self-calibration yielded CBCT images without significant difference in spatial resolution from the standard (“true”) calibration methods (p-value >0.05 for all three phantoms), and the differences between CBCT images reconstructed using the “self” and “true” calibration methods were on the order of 10^-3 mm^-1. Maximum error in magnification was 3.2%, and back-projection ray placement was within 0.5 mm. Conclusion: The proposed geometric “self” calibration provides a means for 3D imaging on general noncircular orbits in CBCT systems for which a geometric calibration is either not available or not reproducible. The method forms the basis of advanced “task-based” 3D imaging methods now in development for robotic C-arms. PMID:26388661
Olsson, Anna; Arlig, Asa; Carlsson, Gudrun Alm; Gustafsson, Agnetha
2007-09-01
The image quality of single photon emission computed tomography (SPECT) depends on the reconstruction algorithm used. The purpose of the present study was to evaluate parameters in ordered subset expectation maximization (OSEM) and to compare it systematically with filtered back-projection (FBP) for reconstruction of regional cerebral blood flow (rCBF) SPECT, incorporating attenuation and scatter correction. The evaluation was based on the trade-off between contrast recovery and statistical noise using different sizes of subsets, numbers of iterations and filter parameters. Monte Carlo simulated SPECT studies of a digital human brain phantom were used. The contrast recovery was calculated as measured contrast divided by true contrast. Statistical noise in the reconstructed images was calculated as the coefficient of variation in pixel values. A constant contrast level was reached above 195 equivalent maximum likelihood expectation maximization iterations. The choice of subset size was not crucial as long as there were ≥2 projections per subset. The OSEM reconstruction was found to give 5-14% higher contrast recovery than FBP at all clinically relevant noise levels in rCBF SPECT. The Butterworth filter, power 6, achieved the highest stable contrast recovery level at all clinically relevant noise levels. The cut-off frequency should be chosen according to the noise level accepted in the image. Trade-off plots are shown to be a practical way of deciding the number of iterations and subset size for the OSEM reconstruction and can be used for other examination types in nuclear medicine.
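The study's two trade-off metrics, contrast recovery (measured contrast over true contrast) and statistical noise (coefficient of variation of pixel values), can be sketched on a synthetic phantom slice; the lesion and noise values are invented for illustration:

```python
import numpy as np

def contrast(lesion_mean, background_mean):
    """Relative contrast of a region against background."""
    return (lesion_mean - background_mean) / background_mean

def contrast_recovery(measured, true, lesion, bg):
    """Measured contrast divided by true contrast (1.0 = fully recovered)."""
    c_meas = contrast(measured[lesion].mean(), measured[bg].mean())
    c_true = contrast(true[lesion].mean(), true[bg].mean())
    return c_meas / c_true

def noise_cov(img, bg):
    """Coefficient of variation of pixel values in a uniform region."""
    return img[bg].std() / img[bg].mean()

# Synthetic phantom: lesion at 2.0 in a background of 1.0; the "reconstruction"
# blurs the lesion down to 1.8 and adds noise.
rng = np.random.default_rng(0)
true_img = np.ones((32, 32))
lesion = np.zeros((32, 32), bool)
lesion[14:18, 14:18] = True
bg = ~lesion
true_img[lesion] = 2.0
recon = np.where(lesion, 1.8, 1.0) + 0.05 * rng.standard_normal((32, 32))
cr = contrast_recovery(recon, true_img, lesion, bg)
cov = noise_cov(recon, bg)
```

Plotting `cr` against `cov` over iterations and subset sizes gives exactly the trade-off curves the study uses to pick OSEM parameters.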
Can Monkeys Make Investments Based on Maximized Pay-off?
Steelandt, Sophie; Dufour, Valérie; Broihanne, Marie-Hélène; Thierry, Bernard
2011-01-01
Animals can maximize benefits but it is not known if they adjust their investment according to expected pay-offs. We investigated whether monkeys can use different investment strategies in an exchange task. We tested eight capuchin monkeys (Cebus apella) and thirteen macaques (Macaca fascicularis, Macaca tonkeana) in an experiment where they could adapt their investment to the food amounts proposed by two different experimenters. One, the doubling partner, returned a reward that was twice the amount given by the subject, whereas the other, the fixed partner, always returned a constant amount regardless of the amount given. To maximize pay-offs, subjects should invest a maximal amount with the first partner and a minimal amount with the second. When tested with the fixed partner only, one third of the monkeys learned to remove a maximal amount of food for immediate consumption before investing a minimal one. With both partners, most subjects failed to maximize pay-offs, as they did not adapt their decision rules to each partner's quality. A single Tonkean macaque succeeded in investing a maximal amount with one experimenter and a minimal amount with the other. The fact that only one of the 21 subjects learned to maximize benefits by adapting investment according to the experimenters' quality indicates that such a task is difficult for monkeys, albeit not impossible. PMID:21423777
NASA Technical Reports Server (NTRS)
Gendreau, Keith; Cash, Webster; Gorenstein, Paul; Windt, David; Kaaret, Phil; Reynolds, Chris
2004-01-01
The Beyond Einstein Program in NASA's Office of Space Science Structure and Evolution of the Universe theme spells out the top level scientific requirements for a Black Hole Imager in its strategic plan. The MAXIM mission will provide better than one tenth of a microarcsecond imaging in the X-ray band in order to satisfy these requirements. We will overview the driving requirements to achieve these goals and ultimately resolve the event horizon of a supermassive black hole. We will present the current status of this effort that includes a study of a baseline design as well as two alternative approaches.
NOTE: Acceleration of Monte Carlo-based scatter compensation for cardiac SPECT
NASA Astrophysics Data System (ADS)
Sohlberg, A.; Watabe, H.; Iida, H.
2008-07-01
Single photon emission computed tomography (SPECT) images are degraded by photon scatter, making scatter compensation essential for accurate reconstruction. Reconstruction-based scatter compensation with Monte Carlo (MC) modelling of scatter shows promise for accurate scatter correction, but it is normally hampered by long computation times. The aim of this work was to accelerate the MC-based scatter compensation using coarse grid and intermittent scatter modelling. The acceleration methods were compared to un-accelerated implementation using MC-simulated projection data of the mathematical cardiac torso (MCAT) phantom modelling 99mTc uptake and clinical myocardial perfusion studies. The results showed that, when combined, the acceleration methods reduced the reconstruction time for 10 ordered subset expectation maximization (OS-EM) iterations from 56 to 11 min without a significant reduction in image quality, indicating that coarse grid and intermittent scatter modelling are suitable for MC-based scatter compensation in cardiac SPECT.
Visualizing deep neural network by alternately image blurring and deblurring.
Wang, Feng; Liu, Haijun; Cheng, Jian
2018-01-01
Visualization from trained deep neural networks has drawn massive public attention in recent years. One visualization approach is to optimize images that maximize the activation of specific neurons. However, directly maximizing the activation would lead to unrecognizable images, which cannot provide any meaningful information. In this paper, we introduce a simple but effective technique to constrain the optimization route of the visualization. By adding two totally inverse transformations, image blurring and deblurring, to the optimization procedure, recognizable images can be created. Our algorithm is good at extracting the details in the images, which are usually filtered out by previous visualization methods. Extensive experiments on AlexNet, VGGNet and GoogLeNet illustrate that we can better understand the neural networks utilizing the knowledge obtained by the visualization. Copyright © 2017 Elsevier Ltd. All rights reserved.
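The alternating blur/deblur regularization can be illustrated with a toy activation function (a quadratic match to a smooth template, standing in for a real neuron's response); the Gaussian blur and the unsharp-mask "deblur" below are assumptions for the sketch, not the paper's exact operators:

```python
import numpy as np

def gauss_blur(img, sigma=1.0):
    """Separable Gaussian blur using 'same' 1-D convolutions."""
    r = max(1, int(3 * sigma))
    t = np.arange(-r, r + 1)
    k = np.exp(-t ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 1, out)

def deblur(img, alpha=0.8, sigma=1.0):
    """Crude inverse step: unsharp masking boosts what the blur removed."""
    return img + alpha * (img - gauss_blur(img, sigma))

# Toy stand-in for a neuron's activation: peaks when x matches a template.
rng = np.random.default_rng(0)
template = gauss_blur(rng.random((32, 32)), 2.0)

def activation(x):
    return -np.sum((x - template) ** 2)

x = np.zeros_like(template)
for _ in range(200):
    x = x + 0.05 * (-2 * (x - template))   # gradient ascent on the activation
    x = deblur(gauss_blur(x))              # alternate blurring and deblurring
```

The blur suppresses the high-frequency noise that unconstrained ascent accumulates, and the deblur step restores recognizable detail; their composition is non-expansive here, so the iteration stays stable.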
Non-common path aberration correction in an adaptive optics scanning ophthalmoscope.
Sulai, Yusufu N; Dubra, Alfredo
2014-09-01
The correction of non-common path aberrations (NCPAs) between the imaging and wavefront sensing channel in a confocal scanning adaptive optics ophthalmoscope is demonstrated. NCPA correction is achieved by maximizing an image sharpness metric while the confocal detection aperture is temporarily removed, effectively minimizing the monochromatic aberrations in the illumination path of the imaging channel. Comparison of NCPA estimated using zonal and modal orthogonal wavefront corrector bases provided wavefronts that differ by ~λ/20 in root-mean-squared (~λ/30 standard deviation). Sequential insertion of a cylindrical lens in the illumination and light collection paths of the imaging channel was used to compare image resolution after changing the wavefront correction to maximize image sharpness and intensity metrics. Finally, the NCPA correction was incorporated into the closed-loop adaptive optics control by biasing the wavefront sensor signals without reducing its bandwidth.
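The sharpness-maximization idea behind the NCPA estimate can be sketched with a toy forward model in which the residual aberration sets the blur width of the captured image; the metric, search grid, and bias units below are illustrative assumptions, not the instrument's actual wavefront-corrector interface:

```python
import numpy as np

def gauss_blur(img, sigma):
    """Separable Gaussian blur using 'same' 1-D convolutions."""
    r = max(1, int(3 * sigma))
    t = np.arange(-r, r + 1)
    k = np.exp(-t ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 1, out)

def sharpness(img):
    """Total squared gradient, one common image-sharpness metric."""
    gy, gx = np.gradient(img)
    return float(np.sum(gx ** 2 + gy ** 2))

# Toy forward model: residual aberration blurs the image, with the blur
# smallest when the corrector bias matches the (unknown) NCPA.
rng = np.random.default_rng(3)
scene = rng.random((48, 48))
true_bias = 0.6                                  # hypothetical NCPA, arb. units

def captured(bias):
    return gauss_blur(scene, 0.3 + abs(bias - true_bias))

grid = np.round(np.arange(-1.0, 1.01, 0.1), 2)
best_bias = max(grid, key=lambda b: sharpness(captured(b)))
```

In the real instrument the search is over wavefront-corrector modes (zonal or modal) with the confocal aperture removed, rather than a 1-D scalar grid.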
Colonoscopy video quality assessment using hidden Markov random fields
NASA Astrophysics Data System (ADS)
Park, Sun Young; Sargent, Dusty; Spofford, Inbar; Vosburgh, Kirby
2011-03-01
With colonoscopy becoming a common procedure for individuals aged 50 or more who are at risk of developing colorectal cancer (CRC), colon video data is being accumulated at an ever increasing rate. However, the clinically valuable information contained in these videos is not being maximally exploited to improve patient care and accelerate the development of new screening methods. One of the well-known difficulties in colonoscopy video analysis is the abundance of frames with no diagnostic information. Approximately 40% - 50% of the frames in a colonoscopy video are contaminated by noise, acquisition errors, glare, blur, and uneven illumination. Therefore, filtering out low quality frames containing no diagnostic information can significantly improve the efficiency of colonoscopy video analysis. To address this challenge, we present a quality assessment algorithm to detect and remove low quality, uninformative frames. The goal of our algorithm is to discard low quality frames while retaining all diagnostically relevant information. Our algorithm is based on a hidden Markov model (HMM) in combination with two measures of data quality to filter out uninformative frames. Furthermore, we present a two-level framework based on an embedded hidden Markov model (EHHM) to incorporate the proposed quality assessment algorithm into a complete, automated diagnostic image analysis system for colonoscopy video.
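A two-state HMM of the kind described can label frames via Viterbi decoding; the transition and emission probabilities below are illustrative assumptions, not the paper's fitted values, and the binary observation is a stand-in for its two quality measures:

```python
import numpy as np

# Two hidden states: 0 = informative frame, 1 = uninformative (blur, glare...).
# One binary observation per frame: 0 = high quality score, 1 = low score.
log_pi = np.log([0.5, 0.5])
log_A = np.log([[0.8, 0.2],        # informative frames tend to come in runs
                [0.2, 0.8]])
log_B = np.log([[0.9, 0.1],        # P(observation | state)
                [0.1, 0.9]])

def viterbi(obs):
    """Most likely state sequence for the 2-state frame-quality HMM."""
    T = len(obs)
    d = np.zeros((T, 2))
    back = np.zeros((T, 2), dtype=int)
    d[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        for s in range(2):
            trans = d[t - 1] + log_A[:, s]
            back[t, s] = int(np.argmax(trans))
            d[t, s] = trans.max() + log_B[s, obs[t]]
    path = [int(np.argmax(d[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

quality_obs = [0, 0, 1, 1, 1, 0, 0, 0]      # per-frame quality measurements
labels = viterbi(quality_obs)               # frames labelled 1 are discarded
```

The sticky transition matrix is what makes this preferable to thresholding each frame independently: isolated noisy scores do not flip the label of an otherwise good run of frames.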
Formation Control for the Maxim Mission.
NASA Technical Reports Server (NTRS)
Luquette, Richard J.; Leitner, Jesse; Gendreau, Keith; Sanner, Robert M.
2004-01-01
Over the next twenty years, a wave of change is occurring in the space-based scientific remote sensing community. While, based on today's technology, the fundamental limits in the spatial and angular resolution achievable by spacecraft have been reached, an expansive new technology base has appeared over the past decade in the area of Distributed Space Systems (DSS). A key subset of the DSS technology area is that which covers precision formation flying of space vehicles. Through precision formation flying, the baselines, previously defined by the largest monolithic structure which could fit in the largest launch vehicle fairing, are now virtually unlimited. Several missions including the Micro-Arcsecond X-ray Imaging Mission (MAXIM), and the Stellar Imager will drive the formation flying challenges to achieve unprecedented baselines for high resolution, extended-scene, interferometry in the ultraviolet and X-ray regimes. This paper focuses on establishing the feasibility for the formation control of the MAXIM mission. The Stellar Imager mission requirements are on the same order as those for MAXIM. This paper specifically addresses: (1) high-level science requirements for these missions and how they evolve into engineering requirements; (2) the formation control architecture devised for such missions; (3) the design of the formation control laws to maintain very high precision relative positions; and (4) the levels of fuel usage required in the duration of these missions. Specific preliminary results are presented for two spacecraft within the MAXIM mission.
NASA Astrophysics Data System (ADS)
Faber, T. L.; Raghunath, N.; Tudorascu, D.; Votaw, J. R.
2009-02-01
Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. Existing correction methods that use known patient motion obtained from tracking devices either require multi-frame acquisitions, detailed knowledge of the scanner, or specialized reconstruction algorithms. A deconvolution algorithm has been developed that alleviates these drawbacks by using the reconstructed image to estimate the original non-blurred image using maximum likelihood expectation maximization (MLEM) techniques. A high-resolution digital phantom was created by shape-based interpolation of the digital Hoffman brain phantom. Three different sets of 20 movements were applied to the phantom. For each frame of the motion, sinograms with attenuation and three levels of noise were simulated and then reconstructed using filtered backprojection. The average of the 20 frames was considered the motion-blurred image, which was restored with the deconvolution algorithm. After correction, contrast increased from a mean of 2.0, 1.8 and 1.4 in the motion-blurred images, for the three increasing amounts of movement, to a mean of 2.5, 2.4 and 2.2. Mean error was reduced by an average of 55% with motion correction. In conclusion, deconvolution can be used for correction of motion blur when subject motion is known.
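The MLEM-style restoration of a motion-averaged image can be sketched as Richardson-Lucy iteration with the known motion PSF; circular boundaries (via the FFT) and a random test image keep the sketch minimal, and the 4-tap PSF is an assumed stand-in for a measured motion trace:

```python
import numpy as np

def rl_deconvolve(blurred, psf, n_iter=200, eps=1e-12):
    """Richardson-Lucy (MLEM-type) deconvolution with a known motion PSF.
    Circular boundaries via the FFT keep the sketch short."""
    shape = blurred.shape
    Fpsf = np.fft.rfft2(psf, s=shape)
    def conv(x, F):
        return np.fft.irfft2(np.fft.rfft2(x) * F, s=shape)
    est = np.full(shape, blurred.mean())
    for _ in range(n_iter):
        ratio = blurred / (conv(est, Fpsf) + eps)
        est = est * conv(ratio, np.conj(Fpsf))   # conj = flipped (adjoint) PSF
    return est

# Toy demo: a 4-position horizontal motion blur on a random "frame average"
rng = np.random.default_rng(2)
truth = rng.random((32, 32)) + 0.1
psf = np.zeros((32, 32))
psf[0, :4] = 0.25                                # motion over 4 pixel positions
blurred = np.fft.irfft2(np.fft.rfft2(truth) * np.fft.rfft2(psf), s=truth.shape)
restored = rl_deconvolve(blurred, psf)

mse = lambda a, b: float(np.mean((a - b) ** 2))
```

The multiplicative update preserves non-negativity, which is why this EM form suits emission data; frequencies nulled by the PSF remain unrecoverable, so restoration is improved rather than perfect.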
Li, Jing; Mahmoodi, Alireza; Joseph, Dileepan
2015-01-01
An important class of complementary metal-oxide-semiconductor (CMOS) image sensors are those where pixel responses are monotonic nonlinear functions of light stimuli. This class includes various logarithmic architectures, which are easily capable of wide dynamic range imaging, at video rates, but which are vulnerable to image quality issues. To minimize fixed pattern noise (FPN) and maximize photometric accuracy, pixel responses must be calibrated and corrected due to mismatch and process variation during fabrication. Unlike literature approaches, which employ circuit-based models of varying complexity, this paper introduces a novel approach based on low-degree polynomials. Although each pixel may have a highly nonlinear response, an approximately-linear FPN calibration is possible by exploiting the monotonic nature of imaging. Moreover, FPN correction requires only arithmetic, and an optimal fixed-point implementation is readily derived, subject to a user-specified number of bits per pixel. Using a monotonic spline, involving cubic polynomials, photometric calibration is also possible without a circuit-based model, and fixed-point photometric correction requires only a look-up table. The approach is experimentally validated with a logarithmic CMOS image sensor and is compared to a leading approach from the literature. The novel approach proves effective and efficient. PMID:26501287
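The approximately-linear FPN calibration can be sketched as a per-pixel degree-1 polynomial fit onto a reference response; the logarithmic gain/offset pixel model below is a toy assumption, not the paper's circuit-free model itself:

```python
import numpy as np

# Toy logarithmic-pixel model: every pixel shares one monotonic response but
# has its own gain and offset (the fixed pattern noise to calibrate out).
rng = np.random.default_rng(4)
n_pix, n_levels = 100, 8
stimulus = np.linspace(1.0, 100.0, n_levels)      # calibration light levels
ideal = np.log(stimulus)                          # monotonic nonlinear response

gain = 1.0 + 0.1 * rng.standard_normal(n_pix)
offset = 0.2 * rng.standard_normal(n_pix)
raw = offset[:, None] + gain[:, None] * ideal[None, :]

# Approximately-linear FPN calibration: a degree-1 polynomial per pixel maps
# the raw response onto the array-mean (reference) response.
reference = raw.mean(axis=0)
coeffs = np.array([np.polyfit(raw[i], reference, 1) for i in range(n_pix)])
corrected = coeffs[:, :1] * raw + coeffs[:, 1:]

fpn_before = float(raw.std(axis=0).mean())        # pixel-to-pixel spread
fpn_after = float(corrected.std(axis=0).mean())
```

Because the correction is raw-to-reference rather than raw-to-stimulus, it needs no circuit model and, as the abstract notes, reduces at correction time to arithmetic that maps naturally onto a fixed-point implementation.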
Preboske, Gregory M; Gunter, Jeff L; Ward, Chadwick P; Jack, Clifford R
2006-05-01
Measuring rates of brain atrophy from serial magnetic resonance imaging (MRI) studies is an attractive way to assess disease progression in neurodegenerative disorders, particularly Alzheimer's disease (AD). A widely recognized approach is the boundary shift integral (BSI). The objective of this study was to evaluate how several common scan non-idealities affect the output of the BSI algorithm. We created three types of image non-idealities between the image volumes in a serial pair used to measure between-scan change: inconsistent image contrast between serial scans, head motion, and poor signal-to-noise ratio (SNR). In theory, the BSI volume difference measured between each pair of images should be zero, and any deviation from zero should represent corruption of the BSI measurement by some non-ideality intentionally introduced into the second scan in the pair. Two different BSI measures were evaluated, whole brain and ventricle. As the severity of motion, noise, and non-congruent image contrast increased in the second scan, the calculated BSI values deviated progressively more from the expected value of zero. This study illustrates the magnitude of the error in measures of change in brain and ventricle volume across serial MRI scans that can result from commonly encountered deviations from ideal image quality. The magnitudes of some of the measurement errors seen in this study exceed the disease effect in AD shown in various publications, which range from 1% to 2.78% per year for whole brain atrophy and 5.4% to 13.8% per year for ventricle expansion (Table 1). For example, measurement error may exceed 100% if image contrast properties dramatically differ between the two scans in a measurement pair. Methods to maximize consistency of image quality over time are an essential component of any quantitative serial MRI study.
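The BSI itself can be sketched as a clamped-intensity difference integral normalized by the intensity window, in the spirit of the original formulation; the 1-D tissue profile and window values below are toy assumptions:

```python
import numpy as np

def boundary_shift_integral(base, repeat, i1=0.25, i2=0.75, dv=1.0):
    """Simplified BSI: integrate the clamped intensity change and normalize
    by the intensity window (images assumed normalized to [0, 1])."""
    shift = np.clip(base, i1, i2) - np.clip(repeat, i1, i2)
    return float(dv * shift.sum() / (i2 - i1))

# Toy 1-D 'brain': bright tissue whose boundary moves inward by 0.1 units,
# so the true volume (length) loss is 2 * 0.1 = 0.2.
x = np.linspace(-4.0, 4.0, 801)
profile = lambda r: 1.0 / (1.0 + np.exp((np.abs(x) - r) / 0.1))
baseline, followup = profile(2.0), profile(1.9)
bsi = boundary_shift_integral(baseline, followup, dv=x[1] - x[0])
```

The clamp restricts the integral to the tissue boundary transition, which is exactly why the non-idealities studied here (contrast shifts, noise, motion) corrupt the measure: they change intensities inside the clamping window without any true boundary shift.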
Masaki, Mitsuhiro; Ikezoe, Tome; Fukumoto, Yoshihiro; Minami, Seigo; Aoyama, Junichi; Ibuki, Satoko; Kimura, Misaka; Ichihashi, Noriaki
2016-06-01
Age-related change of spinal alignment in the standing position is known to be associated with decreases in walking speed, and alteration in muscle quantity (i.e., muscle mass) and muscle quality (i.e., increases in the amount of intramuscular non-contractile tissue) of lumbar back muscles. Additionally, the lumbar lordosis angle in the standing position is associated with walking speed, independent of lower-extremity muscle strength, in elderly individuals. However, it is unclear whether spinal alignment in the standing position is associated with walking speed in the elderly, independent of trunk muscle quantity and quality. The present study investigated the association of usual and maximum walking speed with age, sagittal spinal alignment in the standing position, muscle quantity measured as thickness, and quality measured as echo intensity of lumbar muscles in 35 middle-aged and elderly women. Sagittal spinal alignment in the standing position (thoracic kyphosis, lumbar lordosis, and sacral anterior inclination angle) using a spinal mouse, and muscle thickness and echo intensity of the lumbar muscles (erector spinae, psoas major, and lumbar multifidus) using an ultrasound imaging device were also measured. Stepwise regression analysis showed that only age was a significant determinant of usual walking speed. The thickness of the lumbar erector spinae muscle was a significant, independent determinant of maximal walking speed. The results of this study suggest that a decrease in maximal walking speed is associated with the decrease in lumbar erector spinae muscles thickness rather than spinal alignment in the standing position in middle-aged and elderly women.
Maximizing fluorescence collection efficiency in multiphoton microscopy
Zinter, Joseph P.; Levene, Michael J.
2011-01-01
Understanding fluorescence propagation through a multiphoton microscope is of critical importance in designing high performance systems capable of deep tissue imaging. Optical models of a scattering tissue sample and the Olympus 20X 0.95NA microscope objective were used to simulate fluorescence propagation as a function of imaging depth for physiologically relevant scattering parameters. The spatio-angular distribution of fluorescence at the objective back aperture derived from these simulations was used to design a simple, maximally efficient post-objective fluorescence collection system. Monte Carlo simulations corroborated by data from experimental tissue phantoms demonstrate collection efficiency improvements of 50% – 90% over conventional, non-optimized fluorescence collection geometries at large imaging depths. Imaging performance was verified by imaging layer V neurons in mouse cortex to a depth of 850 μm. PMID:21934897
Cascaded systems analysis of noise and detectability in dual-energy cone-beam CT
Gang, Grace J.; Zbijewski, Wojciech; Webster Stayman, J.; Siewerdsen, Jeffrey H.
2012-01-01
Purpose: Dual-energy computed tomography and dual-energy cone-beam computed tomography (DE-CBCT) are promising modalities for applications ranging from vascular to breast, renal, hepatic, and musculoskeletal imaging. Accordingly, the optimization of imaging techniques for such applications would benefit significantly from a general theoretical description of image quality that properly incorporates factors of acquisition, reconstruction, and tissue decomposition in DE tomography. This work reports a cascaded systems analysis model that includes the Poisson statistics of x rays (quantum noise), detector model (flat-panel detectors), anatomical background, image reconstruction (filtered backprojection), DE decomposition (weighted subtraction), and simple observer models to yield a task-based framework for DE technique optimization. Methods: The theoretical framework extends previous modeling of DE projection radiography and CBCT. Signal and noise transfer characteristics are propagated through physical and mathematical stages of image formation and reconstruction. Dual-energy decomposition was modeled according to weighted subtraction of low- and high-energy images to yield the 3D DE noise-power spectrum (NPS) and noise-equivalent quanta (NEQ), which, in combination with observer models and the imaging task, yields the dual-energy detectability index (d′). Model calculations were validated with NPS and NEQ measurements from an experimental imaging bench simulating the geometry of a dedicated musculoskeletal extremities scanner. Imaging techniques, including kVp pair and dose allocation, were optimized using d′ as an objective function for three example imaging tasks: (1) kidney stone discrimination; (2) iodine vs bone in a uniform, soft-tissue background; and (3) soft tissue tumor detection on power-law anatomical background. 
Results: Theoretical calculations of DE NPS and NEQ demonstrated good agreement with experimental measurements over a broad range of imaging conditions. Optimization results suggest a lower fraction of total dose imparted by the low-energy acquisition, a finding consistent with previous literature. The selection of optimal kVp pair reveals the combined effect of both quantum noise and contrast in the kidney stone discrimination and soft-tissue tumor detection tasks, whereas the K-edge effect of iodine was the dominant factor in determining kVp pairs in the iodine vs bone task. The soft-tissue tumor task illustrated the benefit of dual-energy imaging in eliminating anatomical background noise and improving detectability beyond that achievable by single-energy scans. Conclusions: This work established a task-based theoretical framework that is predictive of DE image quality. The model can be utilized in optimizing a broad range of parameters in image acquisition, reconstruction, and decomposition, providing a useful tool for maximizing DE-CBCT image quality and reducing dose. PMID:22894440
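The task-based figure of merit at the end of this cascade can be sketched directly from the transfer quantities: a prewhitening-observer detectability d'² = ∫ |W_task(f)|² NEQ(f) df with NEQ ∝ MTF²/NPS. The curve shapes below are illustrative assumptions, not the paper's measured curves:

```python
import numpy as np

f = np.linspace(0.0, 2.0, 400)                 # spatial frequency (mm^-1)
df = f[1] - f[0]
mtf = 1.0 / (1.0 + (f / 0.8) ** 2)             # assumed system MTF
nps = 0.5 + f                                   # assumed noise-power spectrum
task = np.exp(-(f / 0.3) ** 2)                 # task function: detect a blob

def dprime_sq(mtf, nps, task, df):
    """Prewhitening-observer detectability index squared (1-D radial sketch)."""
    neq = mtf ** 2 / nps                        # noise-equivalent quanta shape
    return float(np.sum(task ** 2 * neq) * df)

base = dprime_sq(mtf, nps, task, df)
halved_noise = dprime_sq(mtf, nps / 2, task, df)   # halving NPS doubles d'^2
```

Using d' as the objective function, as the paper does, means acquisition parameters (kVp pair, dose allocation) are ranked by their effect on this integral rather than on MTF or NPS in isolation.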
Formation Control of the MAXIM L2 Libration Orbit Mission
NASA Technical Reports Server (NTRS)
Folta, David; Hartman, Kate; Howell, Kathleen; Marchand, Belinda
2004-01-01
The Micro-Arcsecond X-ray Imaging Mission (MAXIM), a proposed concept for the Structure and Evolution of the Universe (SEU) Black Hole Imager mission, is designed to make a ten million-fold improvement in X-ray image clarity of celestial objects by providing better than 0.1 micro-arcsecond imaging. Currently the mission architecture comprises 25 spacecraft, 24 as optics modules and one as the detector, which will form sparse sub-apertures of a grazing incidence X-ray interferometer covering the 0.3-10 keV bandpass. This formation must allow for long duration continuous science observations and also for reconfiguration that permits re-pointing of the formation. To achieve these mission goals, the formation is required to cooperatively point at desired targets. Once pointed, the individual elements of the MAXIM formation must remain stable, maintaining their relative positions and attitudes below a critical threshold. These pointing and formation stability requirements impact the control and design of the formation. In this paper, we provide analysis of control efforts that are dependent upon the stability and the configuration and dimensions of the MAXIM formation. We emphasize the utilization of natural motions in the Lagrangian regions to minimize the control efforts and we address continuous control via input feedback linearization (IFL). Results provide control cost, configuration options, and capabilities as guidelines for the development of this complex mission.
Quality Assessment of Landsat Surface Reflectance Products Using MODIS Data
NASA Technical Reports Server (NTRS)
Feng, Min; Huang, Chengquan; Channan, Saurabh; Vermote, Eric; Masek, Jeffrey G.; Townshend, John R.
2012-01-01
Surface reflectance adjusted for atmospheric effects is a primary input for land cover change detection and for developing many higher level surface geophysical parameters. With the development of automated atmospheric correction algorithms, it is now feasible to produce large quantities of surface reflectance products using Landsat images. Validation of these products requires in situ measurements, which either do not exist or are difficult to obtain for most Landsat images. The surface reflectance products derived using data acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS), however, have been validated more comprehensively. Because the MODIS on the Terra platform and the Landsat 7 are only half an hour apart following the same orbit, and each of the 6 Landsat spectral bands overlaps with a MODIS band, good agreements between MODIS and Landsat surface reflectance values can be considered indicators of the reliability of the Landsat products, while disagreements may suggest potential quality problems that need to be further investigated. Here we develop a system called Landsat-MODIS Consistency Checking System (LMCCS). This system automatically matches Landsat data with MODIS observations acquired on the same date over the same locations and uses them to calculate a set of agreement metrics. To maximize its portability, Java and open-source libraries were used in developing this system, and object-oriented programming (OOP) principles were followed to make it more flexible for future expansion. As a highly automated system designed to run as a stand-alone package or as a component of other Landsat data processing systems, this system can be used to assess the quality of essentially every Landsat surface reflectance image where spatially and temporally matching MODIS data are available. 
The effectiveness of this system was demonstrated using it to assess preliminary surface reflectance products derived using the Global Land Survey (GLS) Landsat images for the 2000 epoch. As surface reflectance likely will be a standard product for future Landsat missions, the approach developed in this study can be adapted as an operational quality assessment system for those missions.
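The agreement metrics such a checking system computes can be sketched on matched reflectance samples; the metric set and the toy red-band numbers below are illustrative assumptions, not the LMCCS implementation:

```python
import numpy as np

def agreement_metrics(landsat, modis):
    """Agreement metrics for spatially and temporally matched Landsat/MODIS
    surface-reflectance samples (an LMCCS-flavoured sketch)."""
    diff = landsat - modis
    return {
        "mean_diff": float(diff.mean()),                   # systematic bias
        "rmsd": float(np.sqrt((diff ** 2).mean())),        # overall disagreement
        "r": float(np.corrcoef(landsat, modis)[0, 1]),     # linear agreement
    }

# Hypothetical matched red-band samples with a small random disagreement
rng = np.random.default_rng(5)
modis = rng.uniform(0.0, 0.4, 1000)
landsat = modis + rng.normal(0.0, 0.01, 1000)
metrics = agreement_metrics(landsat, modis)
```

A large bias or RMSD for a given scene would flag it for further investigation, per the paper's use of disagreement as an indicator of potential quality problems rather than as proof of error in either product.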
NEMA NU-4 performance evaluation of PETbox4, a high sensitivity dedicated PET preclinical tomograph
NASA Astrophysics Data System (ADS)
Gu, Z.; Taschereau, R.; Vu, N. T.; Wang, H.; Prout, D. L.; Silverman, R. W.; Bai, B.; Stout, D. B.; Phelps, M. E.; Chatziioannou, A. F.
2013-06-01
PETbox4 is a new, fully tomographic bench top PET scanner dedicated to high sensitivity and high resolution imaging of mice. This manuscript characterizes the performance of the prototype system using the National Electrical Manufacturers Association NU 4-2008 standards, including studies of sensitivity, spatial resolution, energy resolution, scatter fraction, count-rate performance and image quality. The PETbox4 performance is also compared with the performance of PETbox, a previous generation limited angle tomography system. PETbox4 consists of four opposing flat-panel type detectors arranged in a box-like geometry. Each panel is made by a 24 × 50 pixelated array of 1.82 × 1.82 × 7 mm bismuth germanate scintillation crystals with a crystal pitch of 1.90 mm. Each of these scintillation arrays is coupled to two Hamamatsu H8500 photomultiplier tubes via a glass light guide. Volumetric images for a 45 × 45 × 95 mm field of view (FOV) are reconstructed with a maximum likelihood expectation maximization algorithm incorporating a system model based on a parameterized detector response. With an energy window of 150-650 keV, the peak absolute sensitivity is approximately 18% at the center of FOV. The measured crystal energy resolution ranges from 13.5% to 48.3% full width at half maximum (FWHM), with a mean of 18.0%. The intrinsic detector spatial resolution is 1.5 mm FWHM in both transverse and axial directions. The reconstructed image spatial resolution for different locations in the FOV ranges from 1.32 to 1.93 mm, with an average of 1.46 mm. The peak noise equivalent count rate for the mouse-sized phantom is 35 kcps for a total activity of 1.5 MBq (40 µCi) and the scatter fraction is 28%. The standard deviation in the uniform region of the image quality phantom is 5.7%. The recovery coefficients range from 0.10 to 0.93. In comparison to the first generation two panel PETbox system, PETbox4 achieves substantial improvements on sensitivity and spatial resolution. 
The overall performance demonstrates that the PETbox4 scanner is suitable for producing high quality images for molecular imaging based biomedical research.
Evaluation of mathematical algorithms for automatic patient alignment in radiosurgery.
Williams, Kenneth M; Schulte, Reinhard W; Schubert, Keith E; Wroe, Andrew J
2015-06-01
Image registration techniques based on anatomical features can serve to automate patient alignment for intracranial radiosurgery procedures in an effort to improve the accuracy and efficiency of the alignment process as well as potentially eliminate the need for implanted fiducial markers. To explore this option, four two-dimensional (2D) image registration algorithms were analyzed: the phase correlation technique, mutual information (MI) maximization, enhanced correlation coefficient (ECC) maximization, and the iterative closest point (ICP) algorithm. Digitally reconstructed radiographs from the treatment planning computed tomography scan of a human skull were used as the reference images, while orthogonal digital x-ray images taken in the treatment room were used as the captured images to be aligned. The accuracy of aligning the skull with each algorithm was compared to the alignment of the currently practiced procedure, which is based on a manual process of selecting common landmarks, including implanted fiducials and anatomical skull features. Of the four algorithms, three (phase correlation, MI maximization, and ECC maximization) demonstrated clinically adequate (ie, comparable to the standard alignment technique) translational accuracy and improvements in speed compared to the interactive, user-guided technique; however, the ICP algorithm failed to give clinically acceptable results. The results of this work suggest that a combination of different algorithms may provide the best registration results. This research serves as the initial groundwork for the translation of automated, anatomy-based 2D algorithms into a real-world system for 2D-to-2D image registration and alignment for intracranial radiosurgery. This may obviate the need for invasive implantation of fiducial markers into the skull and may improve treatment room efficiency and accuracy. © The Author(s) 2014.
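Of the four algorithms compared, phase correlation is the simplest to sketch: the normalized cross-power spectrum of the two images yields an impulse at their relative shift. The random "DRR" and shifted "radiograph" below are toy stand-ins:

```python
import numpy as np

def phase_correlation(ref, moved):
    """Estimate the circular (dy, dx) shift taking ref to moved."""
    cross = np.fft.fft2(moved) * np.conj(np.fft.fft2(ref))
    cross /= np.abs(cross) + 1e-12               # keep phase only
    corr = np.fft.ifft2(cross).real              # impulse at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return int(dy), int(dx)

# Toy demo: a random "DRR" and a shifted "treatment-room radiograph"
rng = np.random.default_rng(6)
drr = rng.random((64, 64))
xray = np.roll(drr, (5, 12), axis=(0, 1))
dy, dx = phase_correlation(drr, xray)
```

Because only the phase is retained, the peak location is insensitive to global intensity differences between the DRR and the x-ray image, one reason this class of method achieved clinically adequate translational accuracy in the study.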
de Medeiros, Ana Irene Carlos; Fuzari, Helen Kerlen Bastos; Rattesa, Catarina; Brandão, Daniella Cunha; de Melo Marinho, Patrícia Érika
2017-04-01
Does inspiratory muscle training improve respiratory muscle strength, functional capacity, lung function and quality of life of patients with chronic kidney disease? Does inspiratory muscle training improve these outcomes more than breathing exercises? Systematic review and meta-analysis of randomised trials. People with chronic kidney disease undergoing dialysis treatment. The primary outcomes were: maximal inspiratory pressure, maximal expiratory pressure, and distance covered on the 6-minute walk test. The secondary outcomes were: forced vital capacity, forced expiratory volume in the first second (FEV1), and quality of life. The search identified four eligible studies. The sample consisted of 110 participants. The inspiratory muscle training used a Threshold® or PowerBreathe® device, with a load ranging from 30 to 60% of the maximal inspiratory pressure and lasting from 6 weeks to 6 months. The studies showed moderate to high risk of bias, and the quality of the evidence was rated low or very low, due to the studies' methodological limitations. The meta-analysis showed that inspiratory muscle training significantly improved maximal inspiratory pressure (MD 23 cmH2O, 95% CI 16 to 29) and the 6-minute walk test distance (MD 80 m, 95% CI 41 to 119) when compared with controls. Significant benefits in lung function and quality of life were also identified. When compared to breathing exercises, significant benefits were identified in maximal expiratory pressure (MD 6 cmH2O, 95% CI 2 to 10) and FEV1 (MD 0.24 litres, 95% CI 0.14 to 0.34), but not maximal inspiratory pressure or forced vital capacity. In patients with chronic renal failure on dialysis, inspiratory muscle training with a fixed load significantly improves respiratory muscle strength, functional capacity, lung function and quality of life. The evidence for these benefits may be influenced by some sources of bias. PROSPERO (CRD42015029986).
[de Medeiros AIC, Fuzari HKB, Rattesa C, Brandão DC, de Melo Marinho PÉ (2017) Inspiratory muscle training improves respiratory muscle strength, functional capacity and quality of life in patients with chronic kidney disease: a systematic review. Journal of Physiotherapy 63: 76-83]. Copyright © 2017 Australian Physiotherapy Association. Published by Elsevier B.V. All rights reserved.
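The inverse-variance pooling behind mean differences and CIs like those reported above can be sketched as follows; the three trials' numbers here are hypothetical, not the review's actual study data:

```python
import numpy as np

def fixed_effect_meta(md, ci_low, ci_high, z=1.96):
    """Inverse-variance pooled mean difference from per-study MDs and 95% CIs.
    SEs are back-calculated from the CI width (CI ≈ MD ± z·SE)."""
    md = np.asarray(md, float)
    se = (np.asarray(ci_high, float) - np.asarray(ci_low, float)) / (2 * z)
    w = 1.0 / se ** 2
    pooled = float(np.sum(w * md) / np.sum(w))
    pooled_se = float(1.0 / np.sqrt(np.sum(w)))
    return pooled, (pooled - z * pooled_se, pooled + z * pooled_se)

# Three hypothetical trials reporting MD in maximal inspiratory pressure
pooled_md, pooled_ci = fixed_effect_meta(md=[20.0, 25.0, 24.0],
                                         ci_low=[10.0, 15.0, 12.0],
                                         ci_high=[30.0, 35.0, 36.0])
```

Weighting by inverse variance gives more precise trials more influence, and the pooled CI is narrower than any single study's, which is how a meta-analysis of four small trials can yield the relatively tight intervals quoted in the abstract.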
Meissner, Karin; Schweizer-Arau, Annemarie; Limmer, Anna; Preibisch, Christine; Popovici, Roxana M; Lange, Isabel; de Oriol, Barbara; Beissner, Florian
2016-11-01
To evaluate whether psychotherapy with somatosensory stimulation is effective for the treatment of pain and quality of life in patients with endometriosis-related pain. Patients with a history of endometriosis and chronic pelvic pain were randomized to either psychotherapy with somatosensory stimulation (ie, different techniques of acupuncture point stimulation) or wait-list control for 3 months, after which all patients were treated. The primary outcome was brain connectivity assessed by functional magnetic resonance imaging. Prespecified secondary outcomes included pain on 11-point numeric rating scales (maximal and average global pain, pelvic pain, dyschezia, and dyspareunia) and physical and mental quality of life. A sample size of 30 per group was planned to compare outcomes in the treatment group and the wait-list control group. From March 2010 through March 2012, 67 women (mean age 35.6 years) were randomly allocated to intervention (n=35) or wait-list control (n=32). In comparison with wait-list controls, treated patients showed improvements after 3 months in maximal global pain (mean group difference -2.1, 95% confidence interval [CI] -3.4 to -0.8; P=.002), average global pain (-2.5, 95% CI -3.5 to -1.4; P<.001), pelvic pain (-1.4, 95% CI -2.7 to -0.1; P=.036), dyschezia (-3.5, 95% CI -5.8 to -1.3; P=.003), physical quality of life (3.8, 95% CI 0.5-7.1, P=.026), and mental quality of life (5.9, 95% CI 0.6-11.3; P=.031); dyspareunia improved nonsignificantly (-1.8, 95% CI -4.4 to 0.7; P=.150). Improvements in the intervention group remained stable at 6 and 24 months, and control patients showed comparable symptom relief after delayed intervention. Psychotherapy with somatosensory stimulation reduced global pain, pelvic pain, and dyschezia and improved quality of life in patients with endometriosis. After 6 and 24 months, when all patients were treated, both groups showed stable improvements. ClinicalTrials.gov, https://clinicaltrials.gov, NCT01321840.
Non-common path aberration correction in an adaptive optics scanning ophthalmoscope
Sulai, Yusufu N.; Dubra, Alfredo
2014-01-01
The correction of non-common path aberrations (NCPAs) between the imaging and wavefront sensing channels in a confocal scanning adaptive optics ophthalmoscope is demonstrated. NCPA correction is achieved by maximizing an image sharpness metric while the confocal detection aperture is temporarily removed, effectively minimizing the monochromatic aberrations in the illumination path of the imaging channel. Comparison of NCPA estimated using zonal and modal orthogonal wavefront corrector bases provided wavefronts that differ by ~λ/20 root-mean-square (~λ/30 standard deviation). Sequential insertion of a cylindrical lens in the illumination and light collection paths of the imaging channel was used to compare image resolution after changing the wavefront correction to maximize image sharpness and intensity metrics. Finally, the NCPA correction was incorporated into the closed-loop adaptive optics control by biasing the wavefront sensor signals without reducing its bandwidth. PMID:25401020
A robotic C-arm cone beam CT system for image-guided proton therapy: design and performance.
Hua, Chiaho; Yao, Weiguang; Kidani, Takao; Tomida, Kazuo; Ozawa, Saori; Nishimura, Takenori; Fujisawa, Tatsuya; Shinagawa, Ryousuke; Merchant, Thomas E
2017-11-01
A ceiling-mounted robotic C-arm cone beam CT (CBCT) system was developed for use with a 190° proton gantry system and a 6-degree-of-freedom robotic patient positioner. We report on the mechanical design, system accuracy, image quality, image guidance accuracy, imaging dose, workflow, safety, and collision avoidance. The robotic CBCT system couples a rotating C-ring to the C-arm concentrically, with a kV X-ray tube and a flat-panel imager mounted to the C-ring. CBCT images are acquired with flex correction and up to 360° of rotation for a 53 cm field of view. The system was designed for clinical use with three imaging locations. Anthropomorphic phantoms were imaged to evaluate the image guidance accuracy. The position accuracy and repeatability of the robotic C-arm were high (<0.5 mm), as measured with a high-accuracy laser tracker. The isocentric accuracy of the C-ring rotation was within 0.7 mm. The coincidence of the CBCT imaging and radiation isocentres was better than 1 mm. The average image guidance accuracy was within 1 mm and 1° for the anthropomorphic phantoms tested. Daily volumetric imaging for proton patient positioning was specified for routine clinical practice. Our novel gantry-independent robotic CBCT system provides high-accuracy volumetric image guidance for proton therapy. Advances in knowledge: Ceiling-mounted robotic CBCT provides a viable alternative to CT on-rails for partial-gantry and fixed-beam proton systems, with the added advantage of acquiring images at the treatment isocentre.
NASA Technical Reports Server (NTRS)
Herzig, Howard; Fleetwood, Charles M., Jr.; Toft, Albert R.
1992-01-01
Sample window materials tested during the development of a domed magnesium fluoride detector window for the Hubble Space Telescope's Imaging Spectrograph are noted to exhibit wide variability in VUV transmittance; a test program was accordingly instituted to maximize a prototype domed window's transmittance. It is found that VUV transmittance can be maximized if the boule from which the window is fashioned is sufficiently large to allow such a component to be cut from the purest available portion of the boule.
Mechanism of Chronic Pain in Rodent Brain Imaging
NASA Astrophysics Data System (ADS)
Chang, Pei-Ching
Chronic pain is a significant health problem that greatly impacts the quality of life of individuals and imparts high costs to society. Despite intense research effort toward understanding the mechanism of pain, chronic pain remains a clinical problem with few effective therapies. The advent of human brain imaging research in recent years has changed the way chronic pain is viewed. To further extend the use of human brain imaging techniques toward better therapies, adapting imaging techniques to animal pain models is essential, so that underlying brain mechanisms can be systematically studied using various combinations of imaging and invasive techniques. The general goal of this thesis is to address how the brain develops and maintains chronic pain in an animal model using fMRI. We demonstrate that the nucleus accumbens, the central component of mesolimbic circuitry, is essential in the development of chronic pain. To advance our imaging technique, we develop an innovative methodology to carry out fMRI in awake, conscious rats. Using this cutting-edge technique, we show that allodynia is associated with a shift in brain response toward neural circuits, associated with the nucleus accumbens and prefrontal cortex, that regulate the affective and cognitive components of pain. Taken together, this thesis provides a deeper understanding of how the brain mediates pain. It builds on the existing body of knowledge by maximizing the depth of insight into brain imaging of chronic pain.
Wang, Heyan; Lu, Zhengang; Liu, Yeshu; Tan, Jiubin; Ma, Limin; Lin, Shen
2017-04-15
We report a nested multi-ring array metallic mesh (NMA-MM) that shows a highly uniform diffraction pattern theoretically and experimentally. Then a high-performance transparent electromagnetic interference (EMI) shielding structure is constituted by the double-layer interlaced NMA-MMs separated by transparent quartz-glass substrate. Experimental results show that double-layer interlaced NMA-MM structure exhibits a shielding effectiveness (SE) of over 27 dB in the Ku-band, with a maximal SE of 37 dB at 12 GHz, normalized optical transmittance of 90%, and minimal image quality degradation due to the interlaced arrangement. It thus shows great potential for practical applications in transparent EMI shielding devices.
Hughes integrated synthetic aperture radar: High performance at low cost
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bayma, R.W.
1996-11-01
This paper describes the background and development of the low-cost, high-performance Hughes Integrated Synthetic Aperture Radar (HISAR™), which has a full range of capabilities for real-time reconnaissance, surveillance, and earth resource mapping. HISAR uses advanced Synthetic Aperture Radar (SAR) technology to make operationally effective images of near photo quality, day or night and in all weather conditions. This is achieved at low cost by maximizing the use of commercially available radar and signal-processing equipment in the fabrication. Furthermore, HISAR is designed to fit into an executive-class aircraft, making it available for a wide range of users. 4 refs., 8 figs.
Automated prescription of oblique brain 3D magnetic resonance spectroscopic imaging.
Ozhinsky, Eugene; Vigneron, Daniel B; Chang, Susan M; Nelson, Sarah J
2013-04-01
Two major difficulties encountered in implementing Magnetic Resonance Spectroscopic Imaging (MRSI) in a clinical setting are limited coverage and difficulty in prescription. The goal of this project was to completely automate the process of 3D PRESS MRSI prescription, including placement of the selection box, saturation bands, and shim volume, while maximizing the coverage of the brain. The automated prescription technique included acquisition of an anatomical MRI image, optimization of the oblique selection box parameters, optimization of the placement of outer-volume suppression saturation bands, and loading of the calculated parameters into a customized 3D MRSI pulse sequence. To validate the technique and compare its performance with existing protocols, 3D MRSI data were acquired from six exams of three healthy volunteers. To assess the performance of the automated 3D MRSI prescription for patients with brain tumors, data were collected from 16 exams of 8 subjects with gliomas. This technique demonstrated robust coverage of the tumor, high consistency of prescription, and very good data quality within the T2 lesion. Copyright © 2012 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Kuo, Hung-Fei; Kao, Guan-Hsuan; Zhu, Liang-Xiu; Hung, Kuo-Shu; Lin, Yu-Hsin
2018-02-01
This study used a digital micromirror device (DMD) to produce point-array patterns and employed a self-developed optical system to define line-and-space patterns on nonplanar substrates. First, field tracing was employed to analyze the aerial images of the lithographic system, which comprised an optical system and the DMD. Multiobjective particle swarm optimization was then applied to determine the spot overlapping rate used. The objective functions were set to minimize linewidth and maximize image log slope, through which the dose of the exposure agent could be effectively controlled and the quality of the nonplanar lithography could be enhanced. Laser beams with 405-nm wavelength were employed as the light source. Silicon substrates coated with photoresist were placed on a nonplanar translation stage. The DMD was used to produce lithographic patterns, during which the parameters were analyzed and optimized. The optimal delay time-sequence combinations were used to scan images of the patterns. Finally, an exposure linewidth of less than 10 μm was successfully achieved using the nonplanar lithographic process.
IP Subsurface Imaging in the Presence of Buried Steel Infrastructure
NASA Astrophysics Data System (ADS)
Smart, N. H.; Everett, M. E.
2017-12-01
The purpose of this research is to explore the use of induced polarization to image closely-spaced steel columns at a controlled test site. Texas A&M University's Riverside Campus (RELLIS) was used as a control test site to examine the difference between actual and remotely-sensed observed depths. Known borehole depths and soil composition made this site ideal. The subsurface metal structures were assessed using a combination of ER (electrical resistivity) and IP (induced polarization), and the data were later processed using inversion. Surveying was set up in reference to known locations and depths of steel structures in order to maximize control data quality. Comparing the known and remotely-sensed foundation depths raises a series of questions regarding how the percent error between imaged and actual depths can be lowered. Because RELLIS offers a controlled setting for this research, ideal survey geometry and inversion parameters can be identified to achieve optimal results and resolution.
Akbaş, Halil; Bilgen, Bilge; Turhan, Aykut Melih
2015-11-01
This study proposes an integrated prediction and optimization model using multi-layer perceptron neural network and particle swarm optimization techniques. Three different objective functions are formulated. The first is the maximization of methane percentage with a single output. The second is the maximization of biogas production with a single output. The last is the maximization of biogas quality and biogas production with two outputs. The percentages of methane, carbon dioxide, and other contents are used as the biogas quality criteria. Based on the formulated models and data from a wastewater treatment facility, optimal values of the input variables and their corresponding maximum output values are determined for each model. It is expected that application of the integrated prediction and optimization models will increase biogas production and biogas quality, and contribute to the quantity of electricity produced at the wastewater treatment facility. Copyright © 2015 Elsevier Ltd. All rights reserved.
Optimized energy of spectral CT for infarct imaging: Experimental validation with human validation.
Sandfort, Veit; Palanisamy, Srikanth; Symons, Rolf; Pourmorteza, Amir; Ahlman, Mark A; Rice, Kelly; Thomas, Tom; Davies-Venn, Cynthia; Krauss, Bernhard; Kwan, Alan; Pandey, Ankur; Zimmerman, Stefan L; Bluemke, David A
Late contrast enhancement visualizes myocardial infarction, but the contrast-to-noise ratio (CNR) is low using conventional CT. The aim of this study was to determine whether spectral CT can improve imaging of myocardial infarction. A canine model of myocardial infarction was produced in 8 animals (90-min occlusion, reperfusion). Later, imaging was performed after contrast injection using dual-energy CT at 90 kVp/150 kVp with tin (Sn) filtration. The following reconstructions were evaluated: single-energy 90 kVp, mixed, iodine map, and multiple conventional virtual monoenergetic and noise-optimized virtual monoenergetic (VMo) reconstructions. Regions of interest were measured in infarct and remote regions to calculate the contrast-to-noise ratio (CNR) and the Bhattacharyya distance (a metric of the differentiation between regions). Blinded assessment of image quality was performed. The same reconstruction methods were applied to CT scans of four patients with known infarcts. For the animal studies, the highest CNR for infarct vs. myocardium was achieved in the lowest-keV (40 keV) VMo images (CNR 4.42, IQR 3.64-5.53), which was superior to 90 kVp, mixed, and iodine map (p = 0.008, p = 0.002, p < 0.001, respectively). Compared to 90 kVp and iodine map, the 40 keV VMo reconstructions showed significantly higher histogram separation (p = 0.042 and p < 0.0001, respectively). The VMo reconstructions showed the highest rate of excellent quality scores. A similar pattern was seen in the human studies, with CNR for infarct maximized at the lowest-keV optimized reconstruction (CNR 4.44, IQR 2.86-5.94). Dual energy in conjunction with noise-optimized monoenergetic post-processing improves the CNR of myocardial infarct delineation by approximately 20-25%. Published by Elsevier Inc.
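The two separation metrics used above, CNR and the Bhattacharyya distance, are standard quantities; a minimal sketch follows, assuming a Gaussian approximation of each region of interest. Function names are illustrative, and this CNR convention (mean difference over a pooled noise standard deviation) is one common definition, not necessarily the paper's exact formula.

```python
import numpy as np

def cnr(roi_a, roi_b):
    """Contrast-to-noise ratio: absolute mean difference over pooled noise SD."""
    noise = np.sqrt(0.5 * (roi_a.var() + roi_b.var()))
    return abs(roi_a.mean() - roi_b.mean()) / noise

def bhattacharyya_gauss(roi_a, roi_b):
    """Bhattacharyya distance between Gaussian fits of two ROIs."""
    m1, m2 = roi_a.mean(), roi_b.mean()
    v1, v2 = roi_a.var(), roi_b.var()
    return (0.25 * np.log(0.25 * (v1 / v2 + v2 / v1 + 2.0))
            + 0.25 * (m1 - m2) ** 2 / (v1 + v2))
```

Higher values of either metric indicate better separation between the infarct and remote-myocardium intensity distributions.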
Low Dose PET Image Reconstruction with Total Variation Using Alternating Direction Method.
Yu, Xingjian; Wang, Chenye; Hu, Hongjie; Liu, Huafeng
2016-01-01
In this paper, a total variation (TV) minimization strategy is proposed to overcome the problem of sparse spatial resolution and large amounts of noise in low dose positron emission tomography (PET) imaging reconstruction. Two types of objective function were established based on two statistical models of measured PET data, least-square (LS) TV for the Gaussian distribution and Poisson-TV for the Poisson distribution. To efficiently obtain high quality reconstructed images, the alternating direction method (ADM) is used to solve these objective functions. As compared with the iterative shrinkage/thresholding (IST) based algorithms, the proposed ADM can make full use of the TV constraint and its convergence rate is faster. The performance of the proposed approach is validated through comparisons with the expectation-maximization (EM) method using synthetic and experimental biological data. In the comparisons, the results of both LS-TV and Poisson-TV are taken into consideration to find which models are more suitable for PET imaging, in particular low-dose PET. To evaluate the results quantitatively, we computed bias, variance, and the contrast recovery coefficient (CRC) and drew profiles of the reconstructed images produced by the different methods. The results show that both Poisson-TV and LS-TV can provide a high visual quality at a low dose level. The bias and variance of the proposed LS-TV and Poisson-TV methods are 20% to 74% less at all counting levels than those of the EM method. Poisson-TV gives the best performance in terms of high-accuracy reconstruction with the lowest bias and variance as compared to the ground truth (14.3% less bias and 21.9% less variance). In contrast, LS-TV gives the best performance in terms of the high contrast of the reconstruction with the highest CRC.
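The LS-TV objective described above can be written as min_x ½‖x − y‖² + λ‖Dx‖₁ with D a finite-difference operator, and the alternating direction method splits off z = Dx with a soft-threshold update. A minimal 1D sketch, with parameter values chosen for illustration rather than taken from the paper:

```python
import numpy as np

def tv_denoise_admm(y, lam=1.0, rho=2.0, n_iter=200):
    """Solve min_x 0.5||x - y||^2 + lam*||D x||_1 via ADMM (1D total variation)."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)            # first-difference operator, (n-1) x n
    x, z, u = y.copy(), D @ y, np.zeros(n - 1)
    M = np.eye(n) + rho * D.T @ D             # x-update system matrix (fixed per run)
    for _ in range(n_iter):
        x = np.linalg.solve(M, y + rho * D.T @ (z - u))            # x-update
        w = D @ x + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)    # soft-threshold
        u += D @ x - z                                             # dual update
    return x
```

On a noisy piecewise-constant signal this drives the total variation down toward that of the underlying steps, which is the behavior the TV prior exploits for low-count PET.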
Evaluation of two methods for using MR information in PET reconstruction
NASA Astrophysics Data System (ADS)
Caldeira, L.; Scheins, J.; Almeida, P.; Herzog, H.
2013-02-01
Using magnetic resonance (MR) information in maximum a posteriori (MAP) algorithms for positron emission tomography (PET) image reconstruction has been investigated in recent years. Recently, three methods to introduce this information were evaluated, and the Bowsher prior was considered the best. Its main advantage is that it does not require image segmentation. Another method that has been widely used for incorporating MR information is using boundaries obtained by segmentation. This method has also shown improvements in image quality. In this paper, two methods for incorporating MR information in PET reconstruction are compared. After a Bayes parameter optimization, the reconstructed images were compared using the mean squared error (MSE) and the coefficient of variation (CV). MSE values are 3% lower with the Bowsher prior than using boundaries. CV values are 10% lower with the Bowsher prior than using boundaries. Both methods performed better, in terms of MSE and CV, than using no prior, that is, maximum likelihood expectation maximization (MLEM) or MAP without anatomic information. In conclusion, incorporating MR information using the Bowsher prior gives better results in terms of MSE and CV than using boundaries. MAP algorithms were again shown to be effective in noise reduction and convergence, especially when MR information is incorporated. The robustness of the priors with respect to noise and inhomogeneities in the MR image, however, remains to be evaluated.
Fast dictionary-based reconstruction for diffusion spectrum imaging.
Bilgic, Berkin; Chatnuntawech, Itthi; Setsompop, Kawin; Cauley, Stephen F; Yendiki, Anastasia; Wald, Lawrence L; Adalsteinsson, Elfar
2013-11-01
Diffusion spectrum imaging reveals detailed local diffusion properties at the expense of substantially long imaging times. It is possible to accelerate acquisition by undersampling in q-space, followed by image reconstruction that exploits prior knowledge on the diffusion probability density functions (pdfs). Previously proposed methods impose this prior in the form of sparsity under wavelet and total variation transforms, or under adaptive dictionaries that are trained on example datasets to maximize the sparsity of the representation. These compressed sensing (CS) methods require full-brain processing times on the order of hours using MATLAB running on a workstation. This work presents two dictionary-based reconstruction techniques that use analytical solutions, and are two orders of magnitude faster than the previously proposed dictionary-based CS approach. The first method generates a dictionary from the training data using principal component analysis (PCA), and performs the reconstruction in the PCA space. The second proposed method applies reconstruction using pseudoinverse with Tikhonov regularization with respect to a dictionary. This dictionary can either be obtained using the K-SVD algorithm, or it can simply be the training dataset of pdfs without any training. All of the proposed methods achieve reconstruction times on the order of seconds per imaging slice, and have reconstruction quality comparable to that of dictionary-based CS algorithm.
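The second method above, pseudoinverse reconstruction with Tikhonov regularization with respect to a dictionary, has a closed form: with A the dictionary rows at the sampled q-space locations, the coefficients are c = (AᵀA + λI)⁻¹Aᵀy and the recovered pdf is Dc. A minimal sketch on synthetic data, with the function name, dimensions, and λ chosen for illustration:

```python
import numpy as np

def tikhonov_dict_recon(y, D, sample_idx, lam=1e-3):
    """Recover a full signal D @ c from undersampled measurements y = (D @ c)[sample_idx]."""
    A = D[sample_idx]                                  # dictionary rows at sampled locations
    gram = A.T @ A + lam * np.eye(D.shape[1])          # Tikhonov-regularized normal matrix
    c = np.linalg.solve(gram, A.T @ y)                 # closed-form coefficient estimate
    return D @ c                                       # reconstruction in signal space
```

Because the reconstruction is a single regularized linear solve per pdf (and the normal matrix can be factored once for all voxels), this is what makes the approach orders of magnitude faster than iterative compressed-sensing solvers.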
Spatio-spectral color filter array design for optimal image recovery.
Hirakawa, Keigo; Wolfe, Patrick J
2008-10-01
In digital imaging applications, data are typically obtained via a spatial subsampling procedure implemented as a color filter array-a physical construction whereby only a single color value is measured at each pixel location. Owing to the growing ubiquity of color imaging and display devices, much recent work has focused on the implications of such arrays for subsequent digital processing, including in particular the canonical demosaicking task of reconstructing a full color image from spatially subsampled and incomplete color data acquired under a particular choice of array pattern. In contrast to the majority of the demosaicking literature, we consider here the problem of color filter array design and its implications for spatial reconstruction quality. We pose this problem formally as one of simultaneously maximizing the spectral radii of luminance and chrominance channels subject to perfect reconstruction, and-after proving sub-optimality of a wide class of existing array patterns-provide a constructive method for its solution that yields robust, new panchromatic designs implementable as subtractive colors. Empirical evaluations on multiple color image test sets support our theoretical results, and indicate the potential of these patterns to increase spatial resolution for fixed sensor size, and to contribute to improved reconstruction fidelity as well as significantly reduced hardware complexity.
PET Image Reconstruction Incorporating 3D Mean-Median Sinogram Filtering
NASA Astrophysics Data System (ADS)
Mokri, S. S.; Saripan, M. I.; Rahni, A. A. Abd; Nordin, A. J.; Hashim, S.; Marhaban, M. H.
2016-02-01
Positron Emission Tomography (PET) projection data, or sinograms, contain poor statistics and randomness that produce noisy PET images. In order to improve the PET image, we propose an implementation of pre-reconstruction sinogram filtering based on a 3D mean-median filter. The proposed filter is designed with three aims: to minimise angular blurring artifacts, to smooth flat regions, and to preserve the edges in the reconstructed PET image. The performance of the pre-reconstruction sinogram filter prior to three established reconstruction methods, namely filtered backprojection (FBP), maximum likelihood expectation maximization with ordered subsets (OSEM), and OSEM with median root prior (OSEM-MRP), is investigated using a simulated NCAT phantom PET sinogram generated by the PET Analytical Simulator (ASIM). The improvement in the quality of the reconstructed images with and without sinogram filtering is assessed by visual as well as quantitative evaluation based on global signal-to-noise ratio (SNR), local SNR, contrast-to-noise ratio (CNR), and edge preservation capability. Further analysis of the achieved improvement is also carried out specific to the iterative OSEM and OSEM-MRP reconstruction methods with and without pre-reconstruction filtering, in terms of contrast recovery curve (CRC) versus noise trade-off, normalised mean square error versus iteration, local CNR versus iteration, and lesion detectability. Overall, satisfactory results are obtained from both visual and quantitative evaluations.
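The exact mean-median combination is not specified in this abstract; one plausible reading, sketched here in 2D under that assumption, is a median stage (impulse rejection with edge preservation) followed by a mean stage (smoothing of flat regions). The function name and window size are illustrative, not the paper's design.

```python
import numpy as np

def median_then_mean(img, size=3):
    """Median filter followed by mean filter over size x size windows (edge-padded)."""
    pad = size // 2
    h, w = img.shape
    p = np.pad(img, pad, mode='edge')
    med = np.empty((h, w))
    for i in range(h):                      # median stage: rejects impulses, keeps edges
        for j in range(w):
            med[i, j] = np.median(p[i:i + size, j:j + size])
    p2 = np.pad(med, pad, mode='edge')
    out = np.empty((h, w))
    for i in range(h):                      # mean stage: smooths flat regions
        for j in range(w):
            out[i, j] = p2[i:i + size, j:j + size].mean()
    return out
```

An isolated hot pixel in a sinogram is removed entirely by the median stage before the mean stage can spread it, which is the motivation for ordering the stages this way.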
Non-rigid registration for fusion of carotid vascular ultrasound and MRI volumetric datasets
NASA Astrophysics Data System (ADS)
Chan, R. C.; Sokka, S.; Hinton, D.; Houser, S.; Manzke, R.; Hanekamp, A.; Reddy, V. Y.; Kaazempur-Mofrad, M. R.; Rasche, V.
2006-03-01
In carotid plaque imaging, MRI provides exquisite soft-tissue characterization, but lacks the temporal resolution for tissue strain imaging that real-time 3D ultrasound (3DUS) can provide. On the other hand, real-time 3DUS currently lacks the spatial resolution of carotid MRI. Non-rigid alignment of ultrasound and MRI data is essential for integrating complementary morphology and biomechanical information for carotid vascular assessment. We assessed non-rigid registration for fusion of 3DUS and MRI carotid data based on deformable models which are warped to maximize voxel similarity. We performed validation in vitro using isolated carotid artery imaging. These samples were subjected to soft-tissue deformations during 3DUS and were imaged in a static configuration with standard MR carotid pulse sequences. Registration of the source ultrasound sequences to the target MR volume was performed and the mean absolute distance between fiducials within the ultrasound and MR datasets was measured to determine inter-modality alignment quality. Our results indicate that registration errors on the order of 1mm are possible in vitro despite the low-resolution of current generation 3DUS transducers. Registration performance should be further improved with the use of higher frequency 3DUS prototypes and efforts are underway to test those probes for in vivo 3DUS carotid imaging.
Mehranian, Abolfazl; Kotasidis, Fotis; Zaidi, Habib
2016-02-07
Time-of-flight (TOF) positron emission tomography (PET) technology has recently regained popularity in clinical PET studies for improving image quality and lesion detectability. Using TOF information, the spatial location of annihilation events is confined to a number of image voxels along each line of response, thereby reducing the cross-dependencies of image voxels, which in turn results in improved signal-to-noise ratio and convergence rate. In this work, we propose a novel approach to further improve the convergence of the expectation maximization (EM)-based TOF PET image reconstruction algorithm through subsetization of emission data over TOF bins as well as azimuthal bins. Given the prevalence of TOF PET, we elaborated a practical and efficient implementation of TOF PET image reconstruction through the pre-computation of TOF weighting coefficients, exploiting the same in-plane and axial symmetries used in pre-computation of the geometric system matrix. In the proposed subsetization approach, TOF PET data were partitioned into a number of interleaved TOF subsets, with the aim of reducing the spatial coupling of TOF bins and thereby improving the convergence of the standard maximum likelihood expectation maximization (MLEM) and ordered subsets EM (OSEM) algorithms. The comparison of on-the-fly and pre-computed TOF projections showed that the pre-computation of the TOF weighting coefficients can considerably reduce the computation time of TOF PET image reconstruction. The convergence rate and bias-variance performance of the proposed TOF subsetization scheme were evaluated using simulated, experimental phantom, and clinical studies. Simulations demonstrated that as the number of TOF subsets is increased, the convergence rate of the MLEM and OSEM algorithms improves. It was also found that, for the same computation time, the proposed subsetization yields further convergence. The bias-variance analysis of the experimental NEMA phantom and a clinical FDG-PET study also revealed that, for the same noise level, a higher contrast recovery can be obtained by increasing the number of TOF subsets. It can be concluded that the proposed TOF weighting matrix pre-computation and subsetization approaches make it possible to further accelerate and improve the convergence properties of the OSEM and MLEM algorithms, opening new avenues for accelerated TOF PET image reconstruction.
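The MLEM/OSEM update underlying the subsetization can be sketched generically (without the TOF weighting that the paper adds): each subset update multiplies the current estimate by the back-projected ratio of measured to estimated projections, normalized by the subset sensitivity. Subset partitioning, matrix sizes, and iteration counts below are illustrative.

```python
import numpy as np

def osem(A, y, n_subsets=4, n_iter=500):
    """Ordered-subsets EM: x <- x * A_s^T(y_s / (A_s x)) / (A_s^T 1), cycling over row subsets."""
    x = np.ones(A.shape[1])                              # nonnegative initial estimate
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iter):
        for s in subsets:
            As, ys = A[s], y[s]
            ratio = ys / np.maximum(As @ x, 1e-12)       # measured / estimated projections
            x *= (As.T @ ratio) / np.maximum(As.sum(axis=0), 1e-12)  # sensitivity-normalized
    return x
```

With n_subsets = 1 this reduces to plain MLEM; more subsets trade per-update cost for faster early convergence, the same effect the interleaved TOF-bin subsets exploit.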
NASA Astrophysics Data System (ADS)
Mehranian, Abolfazl; Kotasidis, Fotis; Zaidi, Habib
2016-02-01
Time-of-flight (TOF) positron emission tomography (PET) technology has recently regained popularity in clinical PET studies for improving image quality and lesion detectability. Using TOF information, the spatial location of annihilation events is confined to a number of image voxels along each line of response, thereby the cross-dependencies of image voxels are reduced, which in turns results in improved signal-to-noise ratio and convergence rate. In this work, we propose a novel approach to further improve the convergence of the expectation maximization (EM)-based TOF PET image reconstruction algorithm through subsetization of emission data over TOF bins as well as azimuthal bins. Given the prevalence of TOF PET, we elaborated the practical and efficient implementation of TOF PET image reconstruction through the pre-computation of TOF weighting coefficients while exploiting the same in-plane and axial symmetries used in pre-computation of geometric system matrix. In the proposed subsetization approach, TOF PET data were partitioned into a number of interleaved TOF subsets, with the aim of reducing the spatial coupling of TOF bins and therefore to improve the convergence of the standard maximum likelihood expectation maximization (MLEM) and ordered subsets EM (OSEM) algorithms. The comparison of on-the-fly and pre-computed TOF projections showed that the pre-computation of the TOF weighting coefficients can considerably reduce the computation time of TOF PET image reconstruction. The convergence rate and bias-variance performance of the proposed TOF subsetization scheme were evaluated using simulated, experimental phantom and clinical studies. Simulations demonstrated that as the number of TOF subsets is increased, the convergence rate of MLEM and OSEM algorithms is improved. It was also found that for the same computation time, the proposed subsetization gives rise to further convergence. 
The bias-variance analysis of the experimental NEMA phantom and a clinical FDG-PET study also revealed that for the same noise level, a higher contrast recovery can be obtained by increasing the number of TOF subsets. It can be concluded that the proposed TOF weighting matrix pre-computation and subsetization approaches make it possible to further accelerate and improve the convergence properties of the OSEM and MLEM algorithms, thus opening new avenues for accelerated TOF PET image reconstruction.
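The interleaved subsetization described above can be sketched in a few lines. The dense toy system matrix and the helper names below (`interleaved_tof_subsets`, `osem`) are illustrative assumptions, not the authors' implementation, which relies on pre-computed TOF weighting coefficients and scanner symmetries:

```python
import numpy as np

def interleaved_tof_subsets(n_tof_bins, n_subsets):
    # Bin i goes to subset i % n_subsets, so every subset samples
    # the full TOF range rather than a contiguous block of bins.
    return [np.arange(s, n_tof_bins, n_subsets) for s in range(n_subsets)]

def osem(A, y, subsets, n_iter=100):
    # Toy OSEM on a dense system matrix A (n_bins x n_voxels) and counts y.
    # Each sub-iteration applies the EM update using only the rows
    # (here standing in for TOF bins) belonging to one subset.
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        for rows in subsets:
            As, ys = A[rows], y[rows]
            ratio = ys / np.maximum(As @ x, 1e-12)
            x = x * (As.T @ ratio) / np.maximum(As.sum(axis=0), 1e-12)
    return x
```

With noise-free, consistent data the iterate approaches an exact fit; more subsets trade per-iteration cost against faster early convergence, mirroring the trend reported above.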
Steganalysis feature improvement using expectation maximization
NASA Astrophysics Data System (ADS)
Rodriguez, Benjamin M.; Peterson, Gilbert L.; Agaian, Sos S.
2007-04-01
Images and data files provide an excellent opportunity for concealing illegal or clandestine material. Currently, there are over 250 different tools which embed data into an image without causing noticeable changes to the image. From a forensics perspective, when a system is confiscated or an image of a system is generated, the investigator needs a tool that can scan and accurately identify files suspected of containing malicious information. The identification process is termed the steganalysis problem, which focuses on both blind identification, in which only normal images are available for training, and multi-class identification, in which both the clean and stego images at several embedding rates are available for training. In this paper, a clustering and classification technique (expectation maximization with mixture models) is investigated to determine whether a digital image contains hidden information. The steganalysis problem is addressed as both anomaly detection and multi-class detection. The various clusters represent clean images and stego images with between 1% and 10% embedding percentage. Based on the results it is concluded that the EM classification technique is highly suitable for both blind detection and the multi-class problem.
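A minimal 1-D version of the EM mixture-model fit underlying this kind of clustering might look as follows. The scalar feature and the quantile-based initialization are assumptions for illustration, not details taken from the paper:

```python
import numpy as np

def gmm_em_1d(x, k=2, n_iter=100):
    # Plain EM for a 1-D Gaussian mixture: alternate between the E-step
    # (component responsibilities) and the M-step (weighted re-estimates).
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)  # spread initial means over the data
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: posterior probability of each component for each sample.
        p = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: update weights, means, and variances from the responsibilities.
        n = r.sum(axis=0)
        pi = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n + 1e-9
    return mu, var, pi
```

In a steganalysis setting the fitted components would model clean-image features versus stego-image features at various embedding rates, with responsibilities serving as soft class assignments.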
NASA Astrophysics Data System (ADS)
Bruynooghe, Michel M.
1998-04-01
In this paper, we present a robust method for automatic object detection and delineation in noisy complex images. The proposed procedure is a three-stage process that integrates image segmentation by multidimensional pixel clustering and geometrically constrained optimization of deformable contours. The first step is to enhance the original image by nonlinear unsharp masking. The second step is to segment the enhanced image by multidimensional pixel clustering, using our reducible neighborhoods clustering algorithm, which has an attractive theoretical worst-case complexity. Then, candidate objects are extracted and initially delineated by an optimized region merging algorithm that is based on ascendant hierarchical clustering with contiguity constraints and on the maximization of average contour gradients. The third step is to optimize the delineation of the previously extracted and initially delineated objects. Deformable object contours have been modeled by cubic splines. An affine invariant has been used to control the undesired formation of cusps and loops. Nonlinear constrained optimization has been used to maximize the external energy. This avoids the difficult and non-reproducible choice of regularization parameters required by classical snake models. The proposed method has been applied successfully to the detection of fine and subtle microcalcifications in X-ray mammographic images, to defect detection by moire image analysis, and to the analysis of microrugosities of thin metallic films. A future implementation of the proposed method on a digital signal processor coupled with a vector coprocessor would allow the design of a real-time object detection and delineation system for applications in medical imaging and in industrial computer vision.
Quality competition and uncertainty in a horizontally differentiated hospital market.
Montefiori, Marcello
2014-01-01
The chapter studies hospital competition in a spatially differentiated market in which patient demand reflects the quality/distance mix that maximizes patient utility. Treatment is free at the point of use and patients freely choose the provider which best fits their expectations. Hospitals might have asymmetric objectives and costs; however, they are reimbursed using a uniform prospective payment. The chapter provides different equilibrium outcomes under perfect and asymmetric information. The results show that asymmetric costs, in the case where hospitals are profit maximizers, allow for a social welfare and quality improvement. On the other hand, the presence of a publicly managed hospital which pursues the objective of quality maximization is able to ensure a higher level of quality, patient surplus and welfare. However, the extent of this outcome might be considerably reduced when high levels of public hospital inefficiency are detectable. Finally, the negative consequences caused by the presence of asymmetric information are highlighted in the different scenarios of ownership/objectives and costs. The setting adopted in the model aims at describing the upcoming European market for secondary health care, focusing on hospital behavior, and is intended to help the policy-maker understand real-world dynamics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai Jing; Read, Paul W.; Baisden, Joseph M.
Purpose: To evaluate the error in four-dimensional computed tomography (4D-CT) maximal intensity projection (MIP)-based lung tumor internal target volume determination using a simulation method based on dynamic magnetic resonance imaging (dMRI). Methods and Materials: Eight healthy volunteers and six lung tumor patients underwent a 5-min MRI scan in the sagittal plane to acquire dynamic images of lung motion. A MATLAB program was written to generate re-sorted dMRI using 4D-CT acquisition methods (RedCAM) by segmenting and rebinning the MRI scans. The maximal intensity projection images were generated from RedCAM and dMRI, and the errors in the MIP-based internal target area (ITA) from RedCAM (ε), compared with those from dMRI, were determined and correlated with the subjects' respiratory variability (ν). Results: Maximal intensity projection-based ITAs from RedCAM were comparatively smaller than those from dMRI in both phantom studies (ε = -21.64% ± 8.23%) and lung tumor patient studies (ε = -20.31% ± 11.36%). The errors in MIP-based ITA from RedCAM correlated linearly (ε = -5.13ν - 6.71, r² = 0.76) with the subjects' respiratory variability. Conclusions: Because of the low temporal resolution and retrospective re-sorting, 4D-CT might not accurately depict the excursion of a moving tumor. Using a 4D-CT MIP image to define the internal target volume might therefore cause underdosing and an increased risk of subsequent treatment failure. Patient-specific respiratory variability might also be a useful predictor of the 4D-CT-induced error in MIP-based internal target volume determination.
Cai, Jing; Read, Paul W; Baisden, Joseph M; Larner, James M; Benedict, Stanley H; Sheng, Ke
2007-11-01
To evaluate the error in four-dimensional computed tomography (4D-CT) maximal intensity projection (MIP)-based lung tumor internal target volume determination using a simulation method based on dynamic magnetic resonance imaging (dMRI). Eight healthy volunteers and six lung tumor patients underwent a 5-min MRI scan in the sagittal plane to acquire dynamic images of lung motion. A MATLAB program was written to generate re-sorted dMRI using 4D-CT acquisition methods (RedCAM) by segmenting and rebinning the MRI scans. The maximal intensity projection images were generated from RedCAM and dMRI, and the errors in the MIP-based internal target area (ITA) from RedCAM (epsilon), compared with those from dMRI, were determined and correlated with the subjects' respiratory variability (nu). Maximal intensity projection-based ITAs from RedCAM were comparatively smaller than those from dMRI in both phantom studies (epsilon = -21.64% +/- 8.23%) and lung tumor patient studies (epsilon = -20.31% +/- 11.36%). The errors in MIP-based ITA from RedCAM correlated linearly (epsilon = -5.13nu - 6.71, r(2) = 0.76) with the subjects' respiratory variability. Because of the low temporal resolution and retrospective re-sorting, 4D-CT might not accurately depict the excursion of a moving tumor. Using a 4D-CT MIP image to define the internal target volume might therefore cause underdosing and an increased risk of subsequent treatment failure. Patient-specific respiratory variability might also be a useful predictor of the 4D-CT-induced error in MIP-based internal target volume determination.
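The reported linear correlation (epsilon = -5.13nu - 6.71, r² = 0.76) is an ordinary least-squares fit of error against respiratory variability. A sketch of how such a fit and its coefficient of determination could be computed (the function name is ours):

```python
import numpy as np

def fit_line_r2(nu, eps):
    # Least-squares line eps ≈ a * nu + b, plus the coefficient of
    # determination r^2 of that fit.
    a, b = np.polyfit(nu, eps, 1)
    pred = a * nu + b
    ss_res = ((eps - pred) ** 2).sum()
    ss_tot = ((eps - eps.mean()) ** 2).sum()
    return a, b, 1.0 - ss_res / ss_tot
```

An r² of 0.76, as in the study, would indicate that respiratory variability explains roughly three quarters of the variance in the MIP-based ITA error.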
Flexible mini gamma camera reconstructions of extended sources using step and shoot and list mode.
Gardiazabal, José; Matthies, Philipp; Vogel, Jakob; Frisch, Benjamin; Navab, Nassir; Ziegler, Sibylle; Lasser, Tobias
2016-12-01
Hand- and robot-guided mini gamma cameras have been introduced for the acquisition of single-photon emission computed tomography (SPECT) images. Less cumbersome than whole-body scanners, they allow for a fast acquisition of the radioactivity distribution, for example, to differentiate cancerous from hormonally hyperactive lesions inside the thyroid. This work compares acquisition protocols and reconstruction algorithms in an attempt to identify the most suitable approach for fast acquisition and efficient image reconstruction, suitable for localization of extended sources, such as lesions inside the thyroid. Our setup consists of a mini gamma camera with precise tracking information provided by a robotic arm, which also provides reproducible positioning for our experiments. Based on a realistic phantom of the thyroid including hot and cold nodules as well as background radioactivity, the authors compare "step and shoot" (SAS) and continuous data (CD) acquisition protocols in combination with two different statistical reconstruction methods: maximum-likelihood expectation-maximization (ML-EM) for time-integrated count values and list-mode expectation-maximization (LM-EM) for individually detected gamma rays. In addition, the authors simulate lower uptake values by statistically subsampling the experimental data in order to study the behavior of their approach without changing other aspects of the acquired data. All compared methods yield suitable results, resolving the hot nodules and the cold nodule from the background. However, the CD acquisition is twice as fast as the SAS acquisition, while yielding better coverage of the thyroid phantom, resulting in qualitatively more accurate reconstructions of the isthmus between the lobes. For CD acquisitions, the LM-EM reconstruction method is preferable, as it yields comparable image quality to ML-EM at significantly higher speeds, on average by an order of magnitude. 
This work identifies CD acquisition protocols combined with LM-EM reconstruction as a prime candidate for the wider introduction of SPECT imaging with flexible mini gamma cameras in clinical practice.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, O; Yuan, J; Law, M
Purpose: Signal-to-noise ratio (SNR) of MR abdominal imaging in diagnostic radiology is maximized by minimizing the coil-to-patient distance. However, for radiotherapy applications, a customized vacuum-bag is needed for abdominal immobilization at the cost of an increased distance to the posterior spine coil. This sub-optimal coil setting for RT applications may compromise image quality, such as SNR and homogeneity, and thus potentially affect tissue delineation. In this study, we quantitatively evaluate the effect of the vertical position change on SNR and image quality using an ACR MR phantom. Methods: An ACR MR phantom was placed on the flat couch top. Images were acquired using an 18-channel body array coil and spine coil on a dedicated 1.5T MR-simulator. The scan was repeated three times with the ACR phantom elevated up to 7.5 cm from the couch top, with a step size of 2.5 cm. All images were acquired using the standard ACR test sequence protocol of 2D spin-echo T1-weighted (TR/TE = 500/200 ms) and T2-weighted (TR/TE1/TE2 = 2000/20/80 ms) sequences. For all scans, pre-scan normalization was turned on, and the distance between the phantom and the anterior 18-channel body array coil was kept constant. SNR was calculated using the slice with a large water-only region of the phantom. Percent intensity uniformity (PIU) and low contrast object detectability (LCD) were assessed by following ACR test guidelines. Results: A decrease in image SNR (from 335.8 to 169.3) and LCD (T1: from 31 to 19 spokes, T2: 26 to 16 spokes) was observed with increasing vertical distance. After elevating the phantom by 2.5 cm (approximately the thickness of a standard vacuum-bag), changes in SNR (from 335.8 to 275.5) and LCD (T1: 31 to 26 spokes, T2: 26 to 21 spokes) were noted. However, similar PIU was obtained for all choices of vertical distance (T1: 94.5%–95.0%, T2: 94.4%–96.8%).
Conclusion: After elevating the scan object, a reduction in SNR and contrast detectability but no change in image homogeneity was observed.
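The SNR and PIU figures quoted above can be approximated as sketched below. Note that the ACR procedure derives the PIU extremes from small averaged sub-ROIs within the uniform slice, so this single-pixel min/max version is a simplification:

```python
import numpy as np

def snr(signal_roi, noise_roi):
    # SNR from a uniform-signal ROI and a background (noise-only) ROI.
    return signal_roi.mean() / noise_roi.std()

def percent_intensity_uniformity(uniform_roi):
    # ACR-style PIU: 100 * (1 - (high - low) / (high + low)), here using the
    # raw min/max of the ROI instead of averaged sub-ROIs.
    hi, lo = uniform_roi.max(), uniform_roi.min()
    return 100.0 * (1.0 - (hi - lo) / (hi + lo))
```

Under this definition a perfectly uniform ROI gives a PIU of 100%, and the ~94-97% values reported above indicate homogeneity was preserved even as SNR dropped.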
Maximizing Science Return from Future Mars Missions with Onboard Image Analyses
NASA Technical Reports Server (NTRS)
Gulick, V. C.; Morris, R. L.; Bandari, E. B.; Roush, T. L.
2000-01-01
We have developed two new techniques to enhance science return and to decrease returned data volume for near-term Mars missions: 1) multi-spectral image compression and 2) autonomous identification and fusion of in-focus regions in an image series.
76 FR 8753 - Final Information Quality Guidelines Policy
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-15
... DEPARTMENT OF HOMELAND SECURITY Final Information Quality Guidelines Policy AGENCY: Department of Homeland Security. ACTION: Notice and request for public comment on Final Information Quality Guidelines. SUMMARY: These guidelines should be used to ensure and maximize the quality of disseminated information...
75 FR 37819 - Proposed Information Quality Guidelines Policy
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-30
... DEPARTMENT OF HOMELAND SECURITY Proposed Information Quality Guidelines Policy ACTION: Notice and request for public comment on Proposed Information Quality Guidelines. SUMMARY: These guidelines should be used to ensure and maximize the quality of disseminated information. The Department's guidelines are...
Image quality comparison between single energy and dual energy CT protocols for hepatic imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yao, Yuan, E-mail: yuanyao@stanford.edu; Pelc, Nor
Purpose: Multi-detector computed tomography (MDCT) enables volumetric scans in a single breath hold and is clinically useful for hepatic imaging. For simple tasks, conventional single energy (SE) computed tomography (CT) images acquired at the optimal tube potential are known to have better quality than dual energy (DE) blended images. However, liver imaging is complex and often requires imaging of both structures containing iodinated contrast media, where atomic number differences are the primary contrast mechanism, and other structures, where density differences are the primary contrast mechanism. Hence it is conceivable that the broad spectrum used in a dual energy acquisition may be an advantage. In this work we are interested in comparing these two imaging strategies at equal dose and in more complex settings. Methods: We developed numerical anthropomorphic phantoms to mimic realistic clinical CT scans for medium size and large size patients. MDCT images based on the defined phantoms were simulated using various SE and DE protocols at pre- and post-contrast stages. For SE CT, images from 60 to 140 kVp in 10 kVp steps were considered; for DE CT, both 80/140 and 100/140 kVp scans were simulated and linearly blended at the optimal weights. To make a fair comparison, the mAs of each scan was adjusted to match the reference radiation dose (120 kVp, 200 mAs for medium size patients and 140 kVp, 400 mAs for large size patients). Contrast-to-noise ratio (CNR) of liver against other soft tissues was used to evaluate and compare the SE and DE protocols, and multiple pre- and post-contrast liver-tissue pairs were used to define a composite CNR. To help validate the simulation results, we conducted a small clinical study. Eighty-five 120 kVp images and 81 blended 80/140 kVp images were collected and compared through both quantitative image quality analysis and an observer study.
Results: In the simulation study, we found that the CNR of the pre-contrast SE image mostly increased with increasing kVp, while for post-contrast imaging 90 kVp or lower yielded higher CNR images, depending on the differential iodine concentration of each tissue. Similar trends were seen in DE blended CNR and those from SE protocols. In the presence of differential iodine concentration (i.e., post-contrast), the CNR curves maximize at lower kVps (80–120), with the peak shifted rightward for larger patients. The combined pre- and post-contrast composite CNR study demonstrated that an optimal SE protocol has better performance than blended DE images, and the optimal tube potential for an SE scan is around 90 kVp for a medium size patient and between 90 and 120 kVp for large size patients (although low kVp imaging requires high x-ray tube power to avoid photon starvation). Also, a tin filter added to the high kVp beam is not only beneficial for material decomposition but also improves the CNR of the DE blended images. The dose-adjusted CNR of the clinical images also showed the same trend, and radiologists favored the SE scans over blended DE images. Conclusions: Our simulation showed that an optimized SE protocol produces up to 5% higher CNR for a range of clinical tasks. The clinical study also suggested 120 kVp SE scans have better image quality than blended DE images. Hence, blended DE images do not have a fundamental CNR advantage over optimized SE images.
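A CNR of one tissue ROI against another, and one plausible way to pool several tissue pairs into a composite figure, can be sketched as below. The quadrature-mean pooling is our assumption; the paper's exact composite definition is not reproduced here:

```python
import numpy as np

def cnr(roi_a, roi_b, noise_roi):
    # Contrast-to-noise ratio between two tissue ROIs
    # (e.g. liver vs. another soft tissue), normalized by image noise.
    return abs(roi_a.mean() - roi_b.mean()) / noise_roi.std()

def composite_cnr(cnr_values):
    # Quadrature mean of several tissue-pair CNRs: one simple way to build
    # a single figure of merit from multiple pre- and post-contrast pairs.
    c = np.asarray(cnr_values, float)
    return np.sqrt((c ** 2).mean())
```

Comparing such a composite across protocols at matched dose is what allows an "optimal SE vs. blended DE" ranking like the one reported above.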
NASA Astrophysics Data System (ADS)
King, Sharon V.; Yuan, Shuai; Preza, Chrysanthe
2018-03-01
Effectiveness of extended depth of field microscopy (EDFM) implementation with wavefront encoding methods is reduced by depth-induced spherical aberration (SA) due to reliance of this approach on a defined point spread function (PSF). Evaluation of the engineered PSF's robustness to SA, when a specific phase mask design is used, is presented in terms of the final restored image quality. Synthetic intermediate images were generated using selected generalized cubic and cubic phase mask designs. Experimental intermediate images were acquired using the same phase mask designs projected from a liquid crystal spatial light modulator. Intermediate images were restored using the penalized space-invariant expectation maximization and the regularized linear least squares algorithms. In the presence of depth-induced SA, systems characterized by radially symmetric PSFs, coupled with model-based computational methods, achieve microscope imaging performance with fewer deviations in structural fidelity (e.g., artifacts) in simulation and experiment and 50% more accurate positioning of 1-μm beads at 10-μm depth in simulation than those with radially asymmetric PSFs. Despite a drop in the signal-to-noise ratio after processing, EDFM is shown to achieve the conventional resolution limit when a model-based reconstruction algorithm with appropriate regularization is used. These trends are also found in images of fixed fluorescently labeled brine shrimp, not adjacent to the coverslip, and fluorescently labeled mitochondria in live cells.
NASA Regional Planetary Image Facility
NASA Technical Reports Server (NTRS)
Arvidson, Raymond E.
2001-01-01
The Regional Planetary Image Facility (RPIF) provided access to data from NASA planetary missions and expert assistance about the data sets and how to order subsets of the collections. This ensured that the benefit/cost of acquiring the data was maximized by widespread dissemination and use of the observations and resultant collections. The RPIF provided education and outreach functions that ranged from providing data and information to teachers and involving small groups of highly motivated students in its activities, to public lectures and tours. These activities maximized dissemination of results and data to the educational and public communities.
Lu, Wenting; Yan, Hao; Gu, Xuejun; Tian, Zhen; Luo, Ouyang; Yang, Liu; Zhou, Linghong; Cervino, Laura; Wang, Jing; Jiang, Steve; Jia, Xun
2014-10-21
With the aim of maximally reducing imaging dose while meeting requirements for adaptive radiation therapy (ART), we propose in this paper a new cone beam CT (CBCT) acquisition and reconstruction method that delivers images with a low noise level inside a region of interest (ROI) and a relatively high noise level outside the ROI. The acquired projection images include two groups: densely sampled projections at a low exposure with a large field of view (FOV) and sparsely sampled projections at a high exposure with a small FOV corresponding to the ROI. A new algorithm combining the conventional filtered back-projection algorithm and the tight-frame iterative reconstruction algorithm is also designed to reconstruct the CBCT based on these projection data. We have validated our method on a simulated head-and-neck (HN) patient case, a semi-real experiment conducted on a HN cancer patient under a full-fan scan mode, as well as a Catphan phantom under a half-fan scan mode. Relative root-mean-square errors (RRMSEs) of less than 3% for the entire image and ~1% within the ROI compared to the ground truth have been observed. These numbers demonstrate the ability of our proposed method to reconstruct high-quality images inside the ROI. As for the part outside the ROI, although the images are relatively noisy, they can still provide sufficient information for radiation dose calculations in ART. Dose distributions calculated on our CBCT image and on a standard CBCT image are in agreement, with a mean relative difference of 0.082% inside the ROI and 0.038% outside the ROI. Compared with the standard clinical CBCT scheme, an imaging dose reduction of approximately 3-6 times inside the ROI was achieved, along with a reduction of approximately 8 times outside the ROI. Regarding computational efficiency, it takes 1-3 min to reconstruct a CBCT image depending on the number of projections used. These results indicate that the proposed method has the potential for application in ART.
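The RRMSE figures quoted above can be computed as sketched below. Normalizing by the RMS of the ground-truth image, and the optional ROI mask, are our assumptions about the exact convention used:

```python
import numpy as np

def rrmse(img, ref, mask=None):
    # Relative root-mean-square error of img against a ground-truth image ref,
    # optionally restricted to an ROI given as a boolean mask.
    if mask is not None:
        img, ref = img[mask], ref[mask]
    return np.sqrt(np.mean((img - ref) ** 2)) / np.sqrt(np.mean(ref ** 2))
```

Evaluating this once over the full image and once over the ROI mask reproduces the kind of "<3% overall, ~1% inside the ROI" comparison reported in the abstract.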
Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia
2013-02-01
The objective of this study was to reduce metal-induced streak artifacts on oral and maxillofacial x-ray computed tomography (CT) images by developing a fast statistical image reconstruction system using iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices. So, first, images were reconstructed using the same projection data as an artifact-free image. Second, images were processed by the successive iterative restoration method, where projection data were generated from the reconstructed image in sequence. Besides the maximum likelihood expectation maximization algorithm, the ordered subset expectation maximization algorithm (OS-EM) was examined. Also, small region of interest (ROI) setting and reverse processing were applied to improve performance. Both algorithms reduced artifacts while only slightly decreasing gray levels. The OS-EM algorithm and small ROI setting reduced the processing duration without apparent detriment. Sequential and reverse processing did not show apparent effects. Two alternatives in iterative reconstruction methods were effective for artifact reduction. The OS-EM algorithm and small ROI setting improved the performance. Copyright © 2012 Elsevier Inc. All rights reserved.
Benda, Nathalie M M; Seeger, Joost P H; Stevens, Guus G C F; Hijmans-Kersten, Bregina T P; van Dijk, Arie P J; Bellersen, Louise; Lamfers, Evert J P; Hopman, Maria T E; Thijssen, Dick H J
2015-01-01
Physical fitness is an important prognostic factor in heart failure (HF). To improve fitness, different types of exercise have been explored, with recent focus on high-intensity interval training (HIT). We comprehensively compared the effects of HIT versus continuous training (CT) in HF patients NYHA II-III on physical fitness, cardiovascular function and structure, and quality of life, and hypothesized that HIT leads to superior improvements compared to CT. Twenty HF patients (male:female 19:1, 64±8 yrs, ejection fraction 38±6%) were allocated to 12 weeks of HIT (10 × 1-minute intervals at 90% maximal workload, alternated with 2.5 minutes at 30% maximal workload) or CT (30 minutes at 60-75% of maximal workload). Before and after the intervention, we examined physical fitness (incremental cycling test), cardiac function and structure (echocardiography), vascular function and structure (ultrasound) and quality of life (SF-36, Minnesota living with HF questionnaire (MLHFQ)). Training improved maximal workload, peak oxygen uptake (VO2peak) relative to the predicted VO2peak, oxygen uptake at the anaerobic threshold, and maximal oxygen pulse (all P<0.05), whilst no differences were present between HIT and CT (N.S.). We found no major changes in resting cardiovascular function and structure. SF-36 physical function score improved after training (P<0.05), whilst SF-36 total score and MLHFQ did not change after training (N.S.). Training induced significant improvements in parameters of physical fitness, although no evidence for superiority of HIT over CT was demonstrated. No major effect of training was found on cardiovascular structure and function or quality of life in HF patients NYHA II-III. Nederlands Trial Register NTR3671.
Burgess, Malcolm I; Jenkins, Carly; Chan, Jonathan; Marwick, Thomas H
2007-01-01
Background Real‐time three‐dimensional echocardiography (RT3DE) is an alternative modality to tissue Doppler imaging (TDI) for assessment of intraventricular dyssynchrony but its role is yet to be defined. Objectives To (1) compare RT3DE and TDI for assessment of intraventricular dyssynchrony; (2) determine whether the two techniques agreed regarding the magnitude of dyssynchrony and identification of the site of maximal mechanical delay; and (3) investigate the reason for disagreement. Patients 100 patients with ischaemic cardiomyopathy. Setting Tertiary referral cardiac unit. Main outcome measures Dispersion in time interval from QRS onset to peak sustained systolic tissue velocity by TDI (SD‐TTV) and to minimal systolic volume by RT3DE (SD‐T3D) between 12 ventricular segments. Results RT3DE image quality was adequate for measurement of SD‐T3D in 77 (77%) patients. In the whole population, SD‐TTV was 40 (20) ms and SD‐T3D was 8.3% (3.4%). RT3DE identified a smaller proportion of patients as having significant dyssynchrony than TDI (49 (64%) patients vs 32 (42%) patients; p<0.01). The correlation between SD‐TTV and SD‐T3D was poor (r = 0.11, p = NS). There was concordance between TDI and RT3DE in identifying the site of maximal mechanical delay in 12 (16%) patients. Validating the two techniques with anatomical M‐mode (AMM) as a parameter of radial timing revealed better agreement with RT3DE than with TDI (χ2 = 11.8, p = 0.001). Conclusion In patients with ischaemic cardiomyopathy, TDI and RT3DE show poor agreement for evaluating the magnitude of intraventricular dyssynchrony and the site of maximal mechanical delay. This may partly relate to their respective assessment of longitudinal versus radial timing. PMID:17344326
Image segmentation using hidden Markov Gauss mixture models.
Pyun, Kyungsuk; Lim, Johan; Won, Chee Sun; Gray, Robert M
2007-07-01
Image segmentation is an important tool in image processing and can serve as an efficient front end to sophisticated algorithms and thereby simplify subsequent processing. We develop a multiclass image segmentation method using hidden Markov Gauss mixture models (HMGMMs) and provide examples of segmentation of aerial images and textures. HMGMMs incorporate supervised learning, fitting the observation probability distribution given each class by a Gauss mixture estimated using vector quantization with a minimum discrimination information (MDI) distortion. We formulate the image segmentation problem using a maximum a posteriori criteria and find the hidden states that maximize the posterior density given the observation. We estimate both the hidden Markov parameter and hidden states using a stochastic expectation-maximization algorithm. Our results demonstrate that HMGMM provides better classification in terms of Bayes risk and spatial homogeneity of the classified objects than do several popular methods, including classification and regression trees, learning vector quantization, causal hidden Markov models (HMMs), and multiresolution HMMs. The computational load of HMGMM is similar to that of the causal HMM.
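Stripped of the hidden Markov spatial coupling and the Gauss-mixture class likelihoods described above, the MAP classification step reduces to a per-pixel posterior argmax. The sketch below makes those simplifications explicit (single Gaussian per class, no spatial term):

```python
import numpy as np

def map_classify(pixels, means, variances, priors):
    # Assign each pixel to the class maximizing the (log) posterior density
    # under a single-Gaussian likelihood per class and given class priors.
    # Pixel-independent simplification: no hidden Markov spatial coupling.
    pixels = np.asarray(pixels, float)[:, None]
    log_post = (-0.5 * np.log(2 * np.pi * variances)
                - (pixels - means) ** 2 / (2 * variances)
                + np.log(priors))
    return log_post.argmax(axis=1)
```

In the full HMGMM, each class likelihood is a Gauss mixture fit by vector quantization, and the hidden Markov prior couples neighboring labels, which is what yields the spatial homogeneity advantage reported above.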
Kather, Jakob Nikolas; Weis, Cleo-Aron; Marx, Alexander; Schuster, Alexander K.; Schad, Lothar R.; Zöllner, Frank Gerrit
2015-01-01
Background Accurate evaluation of immunostained histological images is required for reproducible research in many different areas and forms the basis of many clinical decisions. The quality and efficiency of histopathological evaluation is limited by the information content of a histological image, which is primarily encoded as perceivable contrast differences between objects in the image. However, the colors of chromogen and counterstain used for histological samples are not always optimally distinguishable, even under optimal conditions. Methods and Results In this study, we present a method to extract the bivariate color map inherent in a given histological image and to retrospectively optimize this color map. We use a novel, unsupervised approach based on color deconvolution and principal component analysis to show that the commonly used blue and brown color hues in Hematoxylin-3,3'-Diaminobenzidine (DAB) images are poorly suited for human observers. We then demonstrate that it is possible to construct improved color maps according to objective criteria and that these color maps can be used to digitally re-stain histological images. Validation To validate whether this procedure improves distinguishability of objects and background in histological images, we re-stain phantom images and N = 596 large histological images of immunostained samples of human solid tumors. We show that perceptual contrast is improved by a factor of 2.56 in phantom images and up to a factor of 2.17 in sets of histological tumor images. Context Thus, we provide an objective and reliable approach to measure object distinguishability in a given histological image and to maximize visual information available to a human observer. This method could easily be incorporated in digital pathology image viewing systems to improve accuracy and efficiency in research and diagnostics. PMID:26717571
Kather, Jakob Nikolas; Weis, Cleo-Aron; Marx, Alexander; Schuster, Alexander K; Schad, Lothar R; Zöllner, Frank Gerrit
2015-01-01
Accurate evaluation of immunostained histological images is required for reproducible research in many different areas and forms the basis of many clinical decisions. The quality and efficiency of histopathological evaluation is limited by the information content of a histological image, which is primarily encoded as perceivable contrast differences between objects in the image. However, the colors of chromogen and counterstain used for histological samples are not always optimally distinguishable, even under optimal conditions. In this study, we present a method to extract the bivariate color map inherent in a given histological image and to retrospectively optimize this color map. We use a novel, unsupervised approach based on color deconvolution and principal component analysis to show that the commonly used blue and brown color hues in Hematoxylin-3,3'-Diaminobenzidine (DAB) images are poorly suited for human observers. We then demonstrate that it is possible to construct improved color maps according to objective criteria and that these color maps can be used to digitally re-stain histological images. To validate whether this procedure improves distinguishability of objects and background in histological images, we re-stain phantom images and N = 596 large histological images of immunostained samples of human solid tumors. We show that perceptual contrast is improved by a factor of 2.56 in phantom images and up to a factor of 2.17 in sets of histological tumor images. Thus, we provide an objective and reliable approach to measure object distinguishability in a given histological image and to maximize visual information available to a human observer. This method could easily be incorporated in digital pathology image viewing systems to improve accuracy and efficiency in research and diagnostics.
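The PCA step of such a color-map extraction can be sketched as follows. Treating pixels directly as points in a 3-D color space, rather than in the optical-density domain normally used for color deconvolution, is a simplification for illustration:

```python
import numpy as np

def pca_color_axes(pixels):
    # Principal axes of the pixel cloud in a 3-D color space. The two leading
    # axes span a bivariate color map (chromogen vs. counterstain direction)
    # of the kind the method extracts and then optimizes.
    X = pixels - pixels.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(X.T @ X / len(X))
    order = np.argsort(eigvals)[::-1]          # sort by descending variance
    return eigvecs[:, order], eigvals[order]
```

Re-staining then amounts to mapping pixel coordinates along these axes onto a new, more distinguishable pair of display hues.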
Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-01-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with the total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove theoretically convergence of the preconditioned alternating projection algorithm. In numerical experiments, performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner outperforms significantly the EM-TV in all aspects including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in the convergence speed while providing comparable image quality. PMID:23271835
Preconditioned alternating projection algorithms for maximum a posteriori ECT reconstruction
NASA Astrophysics Data System (ADS)
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-11-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with the total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove theoretically convergence of the PAPA. In numerical experiments, performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner outperforms significantly the EM-TV in all aspects including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in the convergence speed while providing comparable image quality.
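The EM-preconditioner in PAPA is built from the same sensitivity-normalized factor that drives the classical MLEM update for emission tomography. For orientation, a minimal MLEM sketch (a toy dense system matrix stands in for the real projector — an assumption, and this is the plain EM baseline, not PAPA itself):

```python
import numpy as np

def mlem(A, y, n_iter=2000):
    """Classical MLEM update: x <- (x / A^T 1) * A^T (y / A x).

    A: system matrix (detector bins x voxels), y: measured counts.
    The x / A^T 1 factor is the quantity the EM-preconditioner is based on.
    """
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])               # sensitivity image A^T 1
    for _ in range(n_iter):
        ratio = y / np.clip(A @ x, 1e-12, None)    # measured / forward-projected
        x *= (A.T @ ratio) / sens                  # multiplicative EM update
    return x
```

On noiseless, consistent data the iteration drives the forward projection A x toward the measurements y while keeping x nonnegative, which is the behavior the preconditioned algorithm accelerates.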
Gravity packaging final waste recovery based on gravity separation and chemical imaging control.
Bonifazi, Giuseppe; Serranti, Silvia; Potenza, Fabio; Luciani, Valentina; Di Maio, Francesco
2017-02-01
Plastic polymers are characterized by a high calorific value. Post-consumer plastic waste can thus be considered, in many cases, a typical secondary solid fuel according to the European Commission directive on End of Waste (EoW). In Europe, incineration is considered one of the solutions for waste disposal, enabling energy recovery and, as a consequence, reducing the waste sent to landfill. A full characterization of these products is the first step to utilizing them profitably and correctly. Several techniques were investigated in this paper to separate and characterize post-consumer plastic packaging waste toward these goals: gravity separation (i.e., a Reflux Classifier), FT-IR spectroscopy, NIR HyperSpectral Imaging (HSI) based techniques, and calorimetric testing. The study demonstrated that the proposed separation technique and the NIR HSI approach can separate and recognize the different polymers (i.e., PolyVinyl Chloride (PVC), PolyStyrene (PS), PolyEthylene (PE), PolyEthylene Terephthalate (PET), PolyPropylene (PP)), maximizing the removal of the PVC fraction from plastic waste and enabling full quality control of the resulting products. These techniques can be profitably used to set up analytical/control strategies aimed at a low PVC content in the final Solid Recovered Fuel (SRF), thus enhancing SRF quality, increasing its value, and reducing the "final waste". Copyright © 2016 Elsevier Ltd. All rights reserved.
A Note on Maximized Posttest Contrasts.
ERIC Educational Resources Information Center
Williams, John D.
1979-01-01
Hollingsworth recently showed a posttest contrast for analysis of variance situations that, for equal sample sizes, had several favorable qualities. However, for unequal sample sizes, the contrast fails to achieve status as a maximized contrast; thus, separate testing of the contrast is required. (Author/GSK)
Maximizing plant density affects broccoli yield and quality
USDA-ARS?s Scientific Manuscript database
Increased demand for fresh market bunch broccoli (Brassica oleracea L. var. italica) has led to increased production along the United States east coast. Maximizing broccoli yields is a primary concern for quickly expanding southeastern commercial markets. This broccoli plant density study was carr...
Generalized expectation-maximization segmentation of brain MR images
NASA Astrophysics Data System (ADS)
Devalkeneer, Arnaud A.; Robe, Pierre A.; Verly, Jacques G.; Phillips, Christophe L. M.
2006-03-01
Manual segmentation of medical images is impractical because it is time-consuming, not reproducible, and prone to human error. It is also very difficult to take into account the 3D nature of the images. Thus, semi- or fully-automatic methods are of great interest. Current segmentation algorithms based on an Expectation-Maximization (EM) procedure present some limitations. The algorithm by Ashburner et al., 2005, does not allow multichannel inputs, e.g. two MR images of different contrast, and does not use spatial constraints between adjacent voxels, e.g. Markov random field (MRF) constraints. The solution of Van Leemput et al., 1999, employs a simplified model (mixture coefficients are not estimated and only one Gaussian is used per tissue class, with three for the image background). We have thus implemented an algorithm that combines the features of these two approaches: multichannel inputs, intensity bias correction, multi-Gaussian histogram model, and Markov random field (MRF) constraints. Our proposed method classifies tissues in three iterative main stages by way of a Generalized-EM (GEM) algorithm: (1) estimation of the Gaussian parameters modeling the histogram of the images, (2) correction of image intensity non-uniformity, and (3) modification of prior classification knowledge by MRF techniques. The goal of the GEM algorithm is to maximize the log-likelihood across the classes and voxels. Our segmentation algorithm was validated on synthetic data (with the Dice metric criterion) and real data (by a neurosurgeon) and compared to the original algorithms by Ashburner et al. and Van Leemput et al. Our combined approach leads to more robust and accurate segmentation.
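Stage (1) of such pipelines is the standard Gaussian-mixture EM fit to the intensity histogram. A minimal one-dimensional sketch (no MRF prior, no bias-field correction, one Gaussian per class — deliberate simplifications relative to the paper):

```python
import numpy as np

def gmm_em(x, k, n_iter=100):
    """Fit a 1-D Gaussian mixture to intensities x by EM.

    E-step: class responsibilities per voxel; M-step: weights, means, variances.
    Means are initialized at data quantiles (a simple deterministic choice).
    """
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: posterior probability of each class for each voxel
        r = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture parameters from responsibilities
        n = r.sum(axis=0)
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
    return w, mu, var
```

A GEM variant would interleave these updates with bias-field and MRF steps, each of which only needs to increase (not maximize) the same log-likelihood.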
NASA Astrophysics Data System (ADS)
Visser, Eric P.; Disselhorst, Jonathan A.; van Lier, Monique G. J. T. B.; Laverman, Peter; de Jong, Gabie M.; Oyen, Wim J. G.; Boerman, Otto C.
2011-02-01
The image reconstruction algorithms provided with the Siemens Inveon small-animal PET scanner are filtered backprojection (FBP), 3-dimensional reprojection (3DRP), ordered subset expectation maximization in 2 or 3 dimensions (OSEM2D/3D) and maximum a posteriori (MAP) reconstruction. This study aimed at optimizing the reconstruction parameter settings with regard to image quality (IQ) as defined by the NEMA NU 4-2008 standards. The NEMA NU 4-2008 image quality phantom was used to determine image noise, expressed as percentage standard deviation in the uniform phantom region (%STD_unif), activity recovery coefficients for the FDG-filled rods (RC_rod), and spill-over ratios for the non-radioactive water- and air-filled phantom compartments (SOR_wat and SOR_air). Although not required by NEMA NU 4, we also determined a contrast-to-noise ratio for each rod (CNR_rod), expressing the trade-off between activity recovery and image noise. For FBP and 3DRP the cut-off frequency of the applied filters, and for OSEM2D and OSEM3D the number of iterations, was varied. For MAP, the "smoothing parameter" β and the type of uniformity constraint (variance or resolution) were varied. Results of these analyses were demonstrated in images of an FDG-injected rat showing tumours in the liver, and of a mouse injected with an 18F-labeled peptide, showing a small subcutaneous tumour and the cortex structure of the kidneys. Optimum IQ in terms of CNR_rod for the small-diameter rods was obtained using MAP with uniform variance and β = 0.4. This setting led to RC_rod,1mm = 0.21, RC_rod,2mm = 0.57, %STD_unif = 1.38, SOR_wat = 0.0011, and SOR_air = 0.00086. However, the highest activity recovery for the smallest rods with still very small %STD_unif was obtained using β = 0.075, for which these IQ parameters were 0.31, 0.74, 2.67, 0.0041, and 0.0030, respectively.
The different settings of reconstruction parameters were clearly reflected in the rat and mouse images as the trade-off between the recovery of small structures (blood vessels, small tumours, kidney cortex structure) and image noise in homogeneous body parts (healthy liver background). Highest IQ for the Inveon PET scanner was obtained using MAP reconstruction with uniform variance. The setting of β depended on the specific imaging goals.
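Figures of merit like %STD_unif, RC, SOR, and CNR reduce to simple ROI statistics once the regions are drawn. A schematic sketch (NEMA NU 4-2008 prescribes specific VOI shapes and per-slice rod-profile maxima; here plain voxel arrays stand in for them, and the CNR form is one plausible definition, not necessarily the paper's):

```python
import numpy as np

def nema_nu4_metrics(uniform_roi, rod_roi, cold_roi):
    """Schematic NEMA NU 4-2008 style image-quality figures from ROI voxel values."""
    mean_unif = uniform_roi.mean()
    pct_std = 100.0 * uniform_roi.std() / mean_unif   # %STD_unif: image noise
    rc = rod_roi.max() / mean_unif                     # rod recovery coefficient
    sor = cold_roi.mean() / mean_unif                  # spill-over ratio (cold insert)
    # One plausible contrast-to-noise definition (assumption):
    cnr = (rod_roi.max() - cold_roi.mean()) / uniform_roi.std()
    return pct_std, rc, sor, cnr
```

Sweeping a reconstruction parameter such as β and re-evaluating these numbers reproduces the recovery-versus-noise trade-off described above.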
Automated Prescription of Oblique Brain 3D MRSI
Ozhinsky, Eugene; Vigneron, Daniel B.; Chang, Susan M.; Nelson, Sarah J.
2012-01-01
Two major difficulties encountered in implementing Magnetic Resonance Spectroscopic Imaging (MRSI) in a clinical setting are limited coverage and difficulty in prescription. The goal of this project was to completely automate the process of 3D PRESS MRSI prescription, including placement of the selection box, saturation bands and shim volume, while maximizing the coverage of the brain. The automated prescription technique included acquisition of an anatomical MRI image, optimization of the oblique selection box parameters, optimization of the placement of OVS saturation bands, and loading of the calculated parameters into a customized 3D MRSI pulse sequence. To validate the technique and compare its performance with existing protocols, 3D MRSI data were acquired from 6 exams from 3 healthy volunteers. To assess the performance of the automated 3D MRSI prescription for patients with brain tumors, the data were collected from 16 exams from 8 subjects with gliomas. This technique demonstrated robust coverage of the tumor, high consistency of prescription and very good data quality within the T2 lesion. PMID:22692829
NASA Astrophysics Data System (ADS)
Haag, Justin M.; Van Gorp, Byron E.; Mouroulis, Pantazis; Thompson, David R.
2017-09-01
The airborne Portable Remote Imaging Spectrometer (PRISM) instrument is based on a fast (F/1.8) Dyson spectrometer operating at 350-1050 nm and a two-mirror telescope combined with a Teledyne HyViSI 6604A detector array. Raw PRISM data contain electronic and optical artifacts that must be removed prior to radiometric calibration. We provide an overview of the process transforming raw digital numbers to calibrated radiance values. Electronic panel artifacts are first corrected using empirical relationships developed from laboratory data. The instrument spectral response functions (SRF) are reconstructed using a measurement-based optimization technique. Removal of SRF effects from the data improves retrieval of true spectra, particularly in the typically low-signal near-ultraviolet and near-infrared regions. As a final step, radiometric calibration is performed using corrected measurements of an object of known radiance. Implementation of the complete calibration procedure maximizes data quality in preparation for subsequent processing steps, such as atmospheric removal and spectral signature classification.
Thermal Model Development for an X-Ray Mirror Assembly
NASA Technical Reports Server (NTRS)
Bonafede, Joseph A.
2015-01-01
Space-based x-ray optics require stringent thermal environmental control to achieve the desired image quality. Future x-ray telescopes will employ hundreds of nearly cylindrical, thin mirror shells to maximize effective area, with each shell built from small azimuthal segment pairs for manufacturability. Thermal issues with these thin optics are inevitable because the mirrors must have a near unobstructed view of space while maintaining near uniform 20 C temperature to avoid thermal deformations. NASA Goddard has been investigating the thermal characteristics of a future x-ray telescope with an image requirement of 5 arc-seconds and only 1 arc-second focusing error allocated for thermal distortion. The telescope employs 135 effective mirror shells formed from 7320 individual mirror segments mounted in three rings of 18, 30, and 36 modules each. Thermal requirements demand a complex thermal control system and detailed thermal modeling to verify performance. This presentation introduces innovative modeling efforts used for the conceptual design of the mirror assembly and presents results demonstrating potential feasibility of the thermal requirements.
Khalique, Omar K; Pulerwitz, Todd C; Halliburton, Sandra S; Kodali, Susheel K; Hahn, Rebecca T; Nazif, Tamim M; Vahl, Torsten P; George, Isaac; Leon, Martin B; D'Souza, Belinda; Einstein, Andrew J
2016-01-01
Transcatheter aortic valve replacement (TAVR) is performed frequently in patients with severe, symptomatic aortic stenosis who are at high risk or inoperable for open surgical aortic valve replacement. Computed tomography angiography (CTA) has become the gold standard imaging modality for pre-TAVR cardiac anatomic and vascular access assessment. Traditionally, cardiac CTA has been most frequently used for assessment of coronary artery stenosis, and scanning protocols have generally been tailored for this purpose. Pre-TAVR CTA has different goals than coronary CTA and the high prevalence of chronic kidney disease in the TAVR patient population creates a particular need to optimize protocols for a reduction in iodinated contrast volume. This document reviews details which allow the physician to tailor CTA examinations to maximize image quality and minimize harm, while factoring in multiple patient and scanner variables which must be considered in customizing a pre-TAVR protocol. Copyright © 2016 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.
Optical surface properties and their RF limitations of European XFEL cavities
NASA Astrophysics Data System (ADS)
Wenskat, Marc
2017-10-01
The inner surface of superconducting cavities plays a crucial role in achieving the highest accelerating fields and low losses. The industrial fabrication of cavities for the European X-ray Free Electron Laser and the International Linear Collider HiGrade Research Project allowed an investigation of this interplay. For the serial inspection of the inner surface, the optical inspection robot OBACHT ("optical bench for automated cavity inspection with high resolution on short timescales") was constructed. To analyze the large amount of data represented in the images of the inner surface, an image processing and analysis code was developed, and new variables describing the cavity surface were obtained. This quantitative analysis identified vendor-specific surface properties which allow quality control and assurance during production. In addition, a strong negative correlation of ρ = -0.93, with a significance of 6σ, was found between the integrated grain boundary area ΣA and the maximal achievable accelerating field E_acc,max.
Condition-dependent mate choice: A stochastic dynamic programming approach.
Frame, Alicia M; Mills, Alex F
2014-09-01
We study how changing female condition during the mating season and condition-dependent search costs impact female mate choice, and what strategies a female could employ in choosing mates to maximize her own fitness. We address this problem via a stochastic dynamic programming model of mate choice. In the model, a female encounters males sequentially and must choose whether to mate or continue searching. As the female searches, her own condition changes stochastically, and she incurs condition-dependent search costs. The female attempts to maximize the quality of the offspring, which is a function of the female's condition at mating and the quality of the male with whom she mates. The mating strategy that maximizes the female's net expected reward is a quality threshold. We compare the optimal policy with other well-known mate choice strategies, and we use simulations to examine how well the optimal policy fares under imperfect information. Copyright © 2014 Elsevier Inc. All rights reserved.
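The threshold structure of the optimal policy falls out of backward induction on the value function. A stripped-down sketch (offspring reward equals male quality and the female's stochastic condition dynamics are omitted — assumptions that simplify the paper's model to its sequential-search core):

```python
import numpy as np

def mate_choice_values(qualities, probs, horizon, search_cost):
    """Backward induction for sequential mate choice.

    qualities/probs: discrete male-quality distribution.
    Returns V where V[t] is the expected reward with t encounters remaining.
    The optimal policy at each encounter is a threshold: accept any male with
    quality >= V[t-1] - search_cost (the value of rejecting and searching on).
    """
    V = np.zeros(horizon + 1)
    for t in range(1, horizon + 1):
        cont = V[t - 1] - search_cost                  # continuation value
        V[t] = np.sum(probs * np.maximum(qualities, cont))
    return V
```

With zero search cost and qualities {0, 1}, the value climbs toward the best quality as the horizon grows, and the acceptance threshold rises with it, which is the qualitative behavior the paper analyzes under richer condition dynamics.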
Landheer, Karl; Johns, Paul C
2012-09-01
Traditional projection x-ray imaging utilizes only the information from the primary photons. Low-angle coherent scatter images can be acquired simultaneously with the primary images and provide additional information. In medical applications, scatter imaging can improve x-ray contrast or reduce dose by using information that is currently discarded in radiological images to augment the transmitted radiation information. Other applications include non-destructive testing and security. A system at the Canadian Light Source synchrotron was configured that utilizes multiple pencil beams (up to five) to create both primary and coherent scatter projection images simultaneously. The sample was scanned through the beams using an automated step-and-shoot setup. Pixels were acquired in a hexagonal lattice to maximize packing efficiency. The typical pitch was between 1.0 and 1.6 mm. A Maximum Likelihood-Expectation Maximization-based iterative method was used to disentangle the overlapping information from the flat panel digital x-ray detector. The pixel value of the coherent scatter image was generated by integrating the radial profile (scatter intensity versus scattering angle) over an angular range. Different angular ranges maximize the contrast between different materials of interest. A five-beam primary and scatter image set (which had a pixel beam time of 990 ms and total scan time of 56 min) of a porcine phantom is included. For comparison, a single-beam coherent scatter image of the same phantom is included. The muscle-fat contrast was 0.10 ± 0.01 and 1.16 ± 0.03 for the five-beam primary and scatter images, respectively. The air kerma was measured free in air using aluminum oxide optically stimulated luminescent dosimeters. The total area-averaged air kerma for the scan was measured to be 7.2 ± 0.4 cGy, although due to difficulties in small-beam dosimetry this number could be inaccurate.
Alignment by Maximization of Mutual Information
1995-06-01
Davi Geiger, David Chapman, Jose Robles, Tao Alter, Misha Bolotski, Jonathan Connel, Karen Sarachik, Maja Mataric, Ian Horswill, Colin Angle
...the same pose. These images are very different and are in fact anti-correlated: bright pixels in the left image correspond to dark pixels in the right image; dark pixels in the left image correspond to bright pixels in the right image. No variant of correlation could match these images together
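The fragment's point is that anti-correlated images defeat correlation-based matching but not mutual information, which only measures statistical dependence. A minimal MI estimator from a joint histogram (the bin count is an arbitrary choice, not from the report):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information I(A;B) estimated from the joint intensity histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p = joint / joint.sum()                     # joint probability p(a, b)
    px = p.sum(axis=1, keepdims=True)           # marginal p(a)
    py = p.sum(axis=0, keepdims=True)           # marginal p(b)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))
```

An image and its photometric negative have correlation -1 yet maximal mutual information, so MI-based alignment succeeds exactly where correlation fails.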
Formation Control for the MAXIM Mission
NASA Technical Reports Server (NTRS)
Luquette, Richard J.; Leitner, Jesse; Gendreau, Keith; Sanner, Robert M.
2004-01-01
Over the next twenty years, a wave of change is occurring in the space-based scientific remote sensing community. While the fundamental limits in the spatial and angular resolution achievable in spacecraft have been reached, based on today's technology, an expansive new technology base has appeared over the past decade in the area of Distributed Space Systems (DSS). A key subset of the DSS technology area is that which covers precision formation flying of space vehicles. Through precision formation flying, the baselines, previously defined by the largest monolithic structure which could fit in the largest launch vehicle fairing, are now virtually unlimited. Several missions including the Micro-Arcsecond X-ray Imaging Mission (MAXIM), and the Stellar Imager will drive the formation flying challenges to achieve unprecedented baselines for high resolution, extended-scene, interferometry in the ultraviolet and X-ray regimes. This paper focuses on establishing the feasibility for the formation control of the MAXIM mission. MAXIM formation flying requirements are on the order of microns, while Stellar Imager mission requirements are on the order of nanometers. This paper specifically addresses: (1) high-level science requirements for these missions and how they evolve into engineering requirements; and (2) the development of linearized equations of relative motion for a formation operating in an n-body gravitational field. Linearized equations of motion provide the ground work for linear formation control designs.
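For orientation on what "linearized equations of relative motion" look like, a sketch of the familiar two-body, circular-reference-orbit case: the in-plane Hill-Clohessy-Wiltshire (CW) state transition matrix. This is a simpler stand-in, not the n-body formulation the paper develops:

```python
import numpy as np

def cw_state_transition(n, t):
    """In-plane Hill-Clohessy-Wiltshire state transition matrix.

    Two-body, circular reference orbit (an assumption; the paper linearizes
    in an n-body field). State: [x, y, xdot, ydot] with x radial, y
    along-track; n is the reference orbit's mean motion (rad/s).
    """
    s, c = np.sin(n * t), np.cos(n * t)
    return np.array([
        [4 - 3 * c,       0.0, s / n,           2 * (1 - c) / n],
        [6 * (s - n * t), 1.0, 2 * (c - 1) / n, (4 * s - 3 * n * t) / n],
        [3 * n * s,       0.0, c,               2 * s],
        [6 * n * (c - 1), 0.0, -2 * s,          4 * c - 3],
    ])
```

The bounded-motion condition ydot0 = -2 n x0 yields a closed relative orbit: propagating such a state over one full period returns it to its initial value, which is the kind of structure linear formation controllers exploit.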
NASA Astrophysics Data System (ADS)
Lestari Widaningrum, Dyah
2014-03-01
This research aims to investigate the importance of take-out food packaging attributes, using conjoint analysis and a QFD approach, among consumers of take-out food products in Jakarta, Indonesia. The conjoint results indicate that perception of packaging material (such as paper, plastic, and polystyrene foam) plays the most important role overall in consumer perception. The clustering results show strong segmentation in which take-out food packaging attributes consumers consider most important. Some consumers are mostly oriented toward the colour of packaging, while another segment of customers focuses on packaging shape and packaging information. Segmentation variables based on packaging response can provide very useful information to maximize the image of products through the package's impact. The development of the House of Quality showed that Conjoint Analysis - QFD is a useful combination of the two methodologies for product development, market segmentation, and trading off customers' requirements in the early stages of the HOQ process.
MO-F-CAMPUS-J-04: One-Year Analysis of Elekta CBCT Image Quality Using NPS and MTF
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakahara, S; Tachibana, M; Watanabe, Y
2015-06-15
Purpose: To compare quantitative image quality (IQ) evaluation methods using Noise Power Spectrum (NPS) and Modulation Transfer Function (MTF) with standard IQ analyses, for minimizing the observer subjectivity of the standard methods and maximizing the information content. Methods: For our routine IQ tests of the Elekta XVI Cone-Beam CT, image noise was quantified by the standard deviation of CT number (CT#) (Sigma) over a small area in an IQ test phantom (CatPhan), and the high spatial resolution (HSR) was evaluated by the number of line-pairs (LP#) visually recognizable on the image. We also measured the image uniformity, the low contrast resolution ratio, and the distances of two points for geometrical accuracy. For this study, we did additional evaluation of the XVI data for 12 monthly IQ tests by using NPS for noise, MTF for HSR, and the CT#-to-density relationship. NPS was obtained by applying Fourier analysis in a small area on the uniformity test section of CatPhan. The MTF analysis was performed by applying the Droege-Morin (D-M) method to the line pairs on the phantom. The CT#-to-density was obtained for inserts in the low-contrast test section of the phantom. Results: All the quantities showed a noticeable change over the one-year period. Especially the noise level changed significantly after a repair of the imager. NPS was more sensitive to the IQ change than Sigma. MTF could provide more quantitative and objective evaluation of the HSR. The CT# was very different from the expected CT#, but the CT#-to-density curves were constant within 5% except for two months. Conclusion: Since the D-M method is easy to implement, we recommend using MTF instead of the LP# even for routine periodic QA. The month-to-month variation of IQ was not negligible; hence a routine IQ test must be performed, particularly after any modification of hardware including detector calibration.
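The NPS estimate described above (Fourier analysis of small uniform-region ROIs) can be sketched as follows; the normalization convention and ROI handling are common textbook choices, not necessarily the exact ones used in this work:

```python
import numpy as np

def noise_power_spectrum(rois, pixel_pitch):
    """2-D NPS from repeated uniform-region ROIs.

    NPS(f) = (dx*dy / (Nx*Ny)) * < |DFT(ROI - mean)|^2 >, averaged over ROIs.
    With this convention, summing NPS over all frequency bins times the bin
    area recovers the pixel variance (Parseval).
    """
    rois = np.asarray(rois, dtype=float)
    ny, nx = rois.shape[1:]
    spectra = [np.abs(np.fft.fft2(r - r.mean())) ** 2 for r in rois]
    return pixel_pitch ** 2 / (nx * ny) * np.mean(spectra, axis=0)
```

Unlike a single Sigma value, the resulting 2-D spectrum shows at which spatial frequencies the noise lives, which is why it is more sensitive to changes such as a detector repair.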
DOT National Transportation Integrated Search
2016-04-01
Videolog and pavement imaging data is a valuable asset that has supported the Georgia Department of : Transportation (GDOT) and enable it to fulfill the requirements of its Highway Performance Monitoring : System (HPMS). To maximize the return on inv...
Multispectral imaging of aircraft exhaust
NASA Astrophysics Data System (ADS)
Berkson, Emily E.; Messinger, David W.
2016-05-01
Aircraft pollutants emitted during the landing-takeoff (LTO) cycle have significant effects on the local air quality surrounding airports. There are currently no inexpensive, portable, and unobtrusive sensors to quantify the amount of pollutants emitted from aircraft engines throughout the LTO cycle or to monitor the spatial-temporal extent of the exhaust plume. We seek to thoroughly characterize the unburned hydrocarbon (UHC) emissions from jet engine plumes and to design a portable imaging system to remotely quantify the emitted UHCs and temporally track the distribution of the plume. This paper shows results from the radiometric modeling of a jet engine exhaust plume and describes a prototype long-wave infrared imaging system capable of meeting the above requirements. The plume was modeled with vegetation and sky backgrounds, and filters were selected to maximize the detectivity of the plume. Initial calculations yield a look-up chart, which relates the minimum amount of emitted UHCs required to detect the presence of a plume to the noise-equivalent radiance of a system. Future work will aim to deploy the prototype imaging system at the Greater Rochester International Airport to assess the applicability of the system on a national scale. This project will help monitor the local pollution surrounding airports and allow better-informed decision-making regarding emission caps and pollution bylaws.
Effect of filters and reconstruction algorithms on I-124 PET in Siemens Inveon PET scanner
NASA Astrophysics Data System (ADS)
Ram Yu, A.; Kim, Jin Su
2015-10-01
Purpose: To assess the effects of filtering and reconstruction on Siemens I-124 PET data. Methods: A Siemens Inveon PET was used. Spatial resolution of I-124 was measured up to a transverse offset of 50 mm from the center. FBP, 2D ordered subset expectation maximization (OSEM2D), a 3D re-projection algorithm (3DRP), and maximum a posteriori (MAP) methods were tested. Non-uniformity (NU), recovery coefficient (RC), and spillover ratio (SOR) parameterized image quality. Mini deluxe phantom data of I-124 were also assessed. Results: Volumetric resolution was 7.3 mm³ at the transverse FOV center when the FBP reconstruction algorithm with a ramp filter was used. MAP yielded minimal NU with β = 1.5. OSEM2D yielded maximal RC. SOR was below 4% for FBP with ramp, Hamming, Hanning, or Shepp-Logan filters. Based on the mini deluxe phantom results, an FBP with Hanning or Parzen filters, or a 3DRP with a Hanning filter, yielded feasible I-124 PET data. Conclusions: Reconstruction algorithms and filters were compared. FBP with Hanning or Parzen filters, or 3DRP with a Hanning filter, yielded feasible data for quantifying I-124 PET.
Elbes, Delphine; Magat, Julie; Govari, Assaf; Ephrath, Yaron; Vieillot, Delphine; Beeckler, Christopher; Weerasooriya, Rukshen; Jais, Pierre; Quesson, Bruno
2017-03-01
Interventional cardiac catheter mapping is routinely guided by X-ray fluoroscopy, although radiation exposure remains a significant concern. Feasibility of catheter ablation for common flutter has recently been demonstrated under magnetic resonance imaging (MRI) guidance. The benefit of catheter ablation under MRI could be significant for complex arrhythmias such as atrial fibrillation (AF), but MRI-compatible multi-electrode catheters such as Lasso have not yet been developed. This study aimed at demonstrating the feasibility and safety of using a multi-electrode catheter [magnetic resonance (MR)-compatible Lasso] during MRI for cardiac mapping. We also aimed at measuring the level of interference between MR and electrophysiological (EP) systems. Experiments were performed in vivo in sheep (N = 5) using a multi-electrode, circular, steerable, MR-compatible diagnostic catheter. The most common MRI sequences (1.5T) relevant for cardiac examination were run with the catheter positioned in the right atrium. High-quality electrograms were recorded while imaging with a maximal signal-to-noise ratio (peak-to-peak signal amplitude/peak-to-peak noise amplitude) ranging from 5.8 to 165. Importantly, MRI image quality was unchanged. Artefacts induced by MRI sequences during mapping were demonstrated to be compatible with clinical use. Phantom data demonstrated that this 10-pole circular catheter can be used safely with a maximum of 4°C increase in temperature. This new MR-compatible 10-pole catheter appears to be safe and effective. Combining MR and multipolar EP in a single session offers the possibility to correlate substrate information (scar, fibrosis) and EP mapping as well as online monitoring of lesion formation and electrical endpoint. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2016. For permissions please email: journals.permissions@oup.com.
Digital PET compliance to EARL accreditation specifications.
Koopman, Daniëlle; Groot Koerkamp, Maureen; Jager, Pieter L; Arkies, Hester; Knollema, Siert; Slump, Cornelis H; Sanches, Pedro G; van Dalen, Jorn A
2017-12-01
Our aim was to evaluate if a recently introduced TOF PET system with digital photon counting technology (Philips Healthcare), potentially providing an improved image quality over analogue systems, can fulfil EANM Research Ltd (EARL) accreditation specifications for tumour imaging with FDG-PET/CT. We have performed a phantom study on a digital TOF PET system using a NEMA NU2-2001 image quality phantom with six fillable spheres. Phantom preparation and PET/CT acquisition were performed according to the European Association of Nuclear Medicine (EANM) guidelines. We made list-mode ordered-subsets expectation maximization (OSEM) TOF PET reconstructions, with default settings, three voxel sizes (4 × 4 × 4 mm³, 2 × 2 × 2 mm³ and 1 × 1 × 1 mm³) and with/without point spread function (PSF) modelling. On each PET dataset, mean and maximum activity concentration recovery coefficients (RC_mean and RC_max) were calculated for all phantom spheres and compared to EARL accreditation specifications. The RCs of the 4 × 4 × 4 mm³ voxel dataset without PSF modelling proved closest to EARL specifications. Next, we added a Gaussian post-smoothing filter with varying kernel widths of 1-7 mm. EARL specifications were fulfilled when using kernel widths of 2 to 4 mm. TOF PET using digital photon counting technology fulfils EARL accreditation specifications for FDG-PET/CT tumour imaging when using an OSEM reconstruction with 4 × 4 × 4 mm³ voxels, no PSF modelling and including a Gaussian post-smoothing filter of 2 to 4 mm.
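The role of the Gaussian post-smoothing filter in pulling RC_max into the EARL band can be illustrated in one dimension. A sketch with a hypothetical top-hat "sphere" profile (the real evaluation uses 3-D VOIs on the NEMA phantom; kernel width is specified as FWHM):

```python
import numpy as np

def gaussian_postfilter_1d(profile, fwhm_mm, pixel_mm):
    """Apply a Gaussian post-smoothing filter, kernel width given as FWHM in mm."""
    sigma = fwhm_mm / (2.355 * pixel_mm)            # FWHM -> sigma in pixels
    radius = int(np.ceil(4 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    k /= k.sum()                                    # unit-area kernel
    return np.convolve(profile, k, mode="same")

def rc_max(profile, true_activity, fwhm_mm, pixel_mm):
    """Maximum recovery coefficient of a smoothed activity profile."""
    return gaussian_postfilter_1d(profile, fwhm_mm, pixel_mm).max() / true_activity
```

Wider kernels lower RC_max (more partial-volume-like signal loss) while also suppressing noise, which is exactly the dial used to match an analogue system's accredited recovery range.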
Monte Carlo simulation of PET and SPECT imaging of ⁹⁰Y
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takahashi, Akihiko, E-mail: takahsr@hs.med.kyushu-u.ac.jp; Sasaki, Masayuki; Himuro, Kazuhiko
2015-04-15
Purpose: Yttrium-90 (⁹⁰Y) is traditionally thought of as a pure beta emitter, and is used in targeted radionuclide therapy, with imaging performed using bremsstrahlung single-photon emission computed tomography (SPECT). However, because ⁹⁰Y also emits positrons through internal pair production with a very small branching ratio, positron emission tomography (PET) imaging is also available. Because of the insufficient image quality of ⁹⁰Y bremsstrahlung SPECT, PET imaging has been suggested as an alternative. In this paper, the authors present a Monte Carlo-based simulation-reconstruction framework for ⁹⁰Y to comprehensively analyze the PET and SPECT imaging techniques and to quantitatively consider the disadvantages associated with them. Methods: Our PET and SPECT simulation modules were developed using Monte Carlo simulation of Electrons and Photons (MCEP), developed by Dr. S. Uehara. The PET code (MCEP-PET) generates a sinogram, and reconstructs the tomographic image using a time-of-flight ordered subset expectation maximization (TOF-OSEM) algorithm with attenuation compensation. To evaluate MCEP-PET, simulated results of ¹⁸F PET imaging were compared with experimental results, which confirmed that MCEP-PET reproduces the experimental results very well. The SPECT code (MCEP-SPECT) models the collimator and NaI detector system, and generates the projection images and projection data. To save computational time, the authors adopt prerecorded ⁹⁰Y bremsstrahlung photon data calculated by MCEP. The projection data are also reconstructed using the OSEM algorithm. The authors simulated PET and SPECT images of a water phantom containing six hot spheres filled with different concentrations of ⁹⁰Y without background activity. The total activity was 163 MBq, with an acquisition time of 40 min. Results: The simulated ⁹⁰Y-PET image accurately reproduced the experimental results.
The PET image is visually superior to the SPECT image because of the low background noise. The simulation reveals that the number of photons detected in SPECT is comparable to that in PET, but a large fraction (approximately 75%) of scattered and penetrating photons contaminates the SPECT image. The lower limit of ⁹⁰Y detection in the SPECT image was approximately 200 kBq/ml, while that in the PET image was approximately 100 kBq/ml. Conclusions: By comparing the background noise level and the image concentration profile of both techniques, PET image quality was determined to be superior to that of bremsstrahlung SPECT. The developed simulation codes will be very useful in future investigations of PET and bremsstrahlung SPECT imaging of ⁹⁰Y.
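The OSEM reconstructions used in both MCEP-PET and MCEP-SPECT are subset-accelerated variants of the classic MLEM update. A minimal numpy sketch of the MLEM iteration for a toy system matrix follows; the matrix and counts are illustrative, not the paper's simulated geometry (OSEM applies the same update cyclically to ordered subsets of the measurement rows).

```python
import numpy as np

def mlem(A, y, n_iter=2000):
    """Maximum-likelihood EM for emission tomography, y ~ Poisson(A @ x).
    Multiplicative update: x <- x * A^T(y / Ax) / A^T(1)."""
    x = np.ones(A.shape[1])                     # strictly positive start
    sens = A.T @ np.ones(A.shape[0])            # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = np.maximum(A @ x, 1e-12)         # forward projection, guarded
        x *= (A.T @ (y / proj)) / sens
    return x
```

For noiseless, consistent data with a full-column-rank system, the iterates approach the exact non-negative solution, which makes a small worked example easy to check.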
NASA Technical Reports Server (NTRS)
Bauer, Frank (Technical Monitor); Luquette, Richard J.; Sanner, Robert M.
2003-01-01
Precision Formation Flying is an enabling technology for a variety of proposed space-based observatories, including the Micro-Arcsecond X-ray Imaging Mission (MAXIM), the associated MAXIM Pathfinder mission, and the Stellar Imager. An essential element of the technology is the control algorithm. This paper discusses the development of a nonlinear, six-degree-of-freedom (6DOF) control algorithm for maintaining the relative position and attitude of a spacecraft within a formation. The translation dynamics are based on the equations of motion for the restricted three-body problem. The control law guarantees that the tracking error converges to zero, based on a Lyapunov analysis. The simulation, modelled after the MAXIM Pathfinder mission, maintains the relative position and attitude of a Follower spacecraft with respect to a Leader spacecraft stationed near the L2 libration point in the Sun-Earth system.
Optimized image acquisition for breast tomosynthesis in projection and reconstruction space.
Chawla, Amarpreet S; Lo, Joseph Y; Baker, Jay A; Samei, Ehsan
2009-11-01
Breast tomosynthesis has been an exciting new development in the field of breast imaging. While the diagnostic improvement via tomosynthesis is notable, the full potential of tomosynthesis has not yet been realized. This may be attributed to the dependency of the diagnostic quality of tomosynthesis on multiple variables, each of which needs to be optimized. Those include dose, number of angular projections, and the total angular span of those projections. In this study, the authors investigated the effects of these acquisition parameters on the overall diagnostic image quality of breast tomosynthesis in both the projection and reconstruction space. Five mastectomy specimens were imaged using a prototype tomosynthesis system. 25 angular projections of each specimen were acquired at 6.2 times the typical single-view clinical dose level. Images at lower dose levels were then simulated using a noise modification routine. Each projection image was supplemented with 84 simulated 3 mm 3D lesions embedded at the centers of 84 nonoverlapping ROIs. The projection images were then reconstructed using a filtered backprojection algorithm at different combinations of acquisition parameters to investigate which of the many possible combinations maximizes performance. Performance was evaluated in terms of a Laguerre-Gauss channelized Hotelling observer model-based measure of lesion detectability. The analysis was also performed without reconstruction by combining the model results from projection images using a Bayesian decision fusion algorithm. The effects of the acquisition parameters on projection images and reconstructed slices were then compared to derive an optimization rule for tomosynthesis. The results indicated that projection images yield comparable but higher performance than reconstructed images. Both modes, however, offered similar trends: Performance improved with an increase in the total acquisition dose level and the angular span.
Using a constant dose level and angular span, the performance rolled off beyond a certain number of projections, indicating that simply increasing the number of projections in tomosynthesis may not necessarily improve its performance. The best performance for both projection images and tomosynthesis slices was obtained with 15-17 projections spanning an angular arc of approximately 45 degrees (the maximum tested in our study) and an acquisition dose equal to that of single-view mammography. The optimization framework developed in this study is applicable to other reconstruction techniques and other multiprojection systems.
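The Laguerre-Gauss channelized Hotelling observer used as the figure of merit above reduces each image to a handful of channel outputs and computes detectability d' from their class means and pooled covariance. A minimal numpy sketch follows; the channel width `a`, channel count, and the synthetic Gaussian blob signal are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from numpy.polynomial.laguerre import lagval

def lg_channels(n_channels, size, a):
    """Rotationally symmetric Laguerre-Gauss channels as matrix columns."""
    yy, xx = np.mgrid[:size, :size] - (size - 1) / 2.0
    g = 2.0 * np.pi * (xx**2 + yy**2) / a**2
    cols = []
    for j in range(n_channels):
        c = np.zeros(j + 1)
        c[j] = 1.0                               # select Laguerre polynomial L_j
        cols.append((np.sqrt(2.0) / a) * np.exp(-g / 2.0) * lagval(g, c))
    return np.stack([u.ravel() for u in cols], axis=1)   # (pixels, channels)

def cho_dprime(present, absent, U):
    """Channelized Hotelling SNR from signal-present/absent image stacks
    (rows are vectorized images)."""
    vp, va = present @ U, absent @ U             # channel outputs
    dv = vp.mean(axis=0) - va.mean(axis=0)       # mean channel difference
    S = 0.5 * (np.cov(vp, rowvar=False) + np.cov(va, rowvar=False))
    return float(np.sqrt(dv @ np.linalg.solve(S, dv)))
```

Working in channel space keeps the covariance matrix small and invertible even when only a few hundred sample images are available, which is what makes the model observer practical for acquisition-parameter sweeps like the one in this study.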
Texture Classification by Texton: Statistical versus Binary
Guo, Zhenhua; Zhang, Zhongcheng; Li, Xiu; Li, Qin; You, Jane
2014-01-01
Using statistical textons for texture classification has shown great success recently. The maximal response 8 (Statistical_MR8), image patch (Statistical_Joint) and locally invariant fractal (Statistical_Fractal) are typical statistical texton algorithms and state-of-the-art texture classification methods. However, these methods have two limitations. First, they need a training stage to build a texton library, so recognition accuracy is highly dependent on the training samples; second, during feature extraction, each local feature is assigned to a texton by searching for the nearest texton in the whole library, which is time consuming when the library is large and the feature dimension is high. To address these two issues, this paper proposes three binary texton counterpart methods: Binary_MR8, Binary_Joint, and Binary_Fractal. These methods do not require any training step but encode local features directly into a binary representation. Experimental results on the CUReT, UIUC and KTH-TIPS databases show that binary textons achieve sound results with fast feature extraction, especially when the images are not large and their quality is not poor. PMID:24520346
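The binary-encoding idea can be sketched directly: threshold each pixel's filter-bank responses, pack the sign bits into an n-bit code, and use the normalized code histogram as the texture descriptor, with no texton library or nearest-neighbour search. A minimal numpy sketch, where a two-filter gradient bank stands in for the paper's MR8 bank:

```python
import numpy as np

def binary_texton_hist(responses):
    """responses: (n_filters, H, W) filter-bank outputs.
    Each pixel becomes an n_filters-bit code (bit i set when response
    i > 0); the descriptor is the normalized histogram of codes."""
    n = responses.shape[0]
    bits = (responses > 0).astype(int)
    codes = np.zeros(responses.shape[1:], dtype=int)
    for i in range(n):
        codes |= bits[i] << i                   # pack sign bits into a code
    hist = np.bincount(codes.ravel(), minlength=2**n).astype(float)
    return hist / hist.sum()
```

Encoding is a fixed, per-pixel bit operation, so feature extraction cost is independent of any library size, which is the speed advantage the abstract describes.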
Han, Miaomiao; Guo, Zhirong; Liu, Haifeng; Li, Qinghua
2018-05-01
Tomographic Gamma Scanning (TGS) is a method used for the nondestructive assay of radioactive wastes. In the traditional treatment method for TGS, the actual irregular edge voxels are regarded as regular cubic voxels. In this study, in order to improve the performance of TGS, a novel edge treatment method is proposed that considers the actual shapes of these voxels. The two edge voxel treatment methods were compared by computing the pixel-level relative errors and normalized mean square errors (NMSEs) between the reconstructed transmission images and the ideal images. Both methods were coupled with two different iterative algorithms: the Algebraic Reconstruction Technique (ART) with a non-negativity constraint and Maximum Likelihood Expectation Maximization (MLEM). The results demonstrated that the traditional edge voxel treatment can introduce significant error and that the real irregular edge voxel treatment improves the performance of TGS by producing better transmission reconstruction images. With the real irregular edge voxel treatment, the MLEM and ART algorithms are comparable when assaying homogeneous matrices, but MLEM is superior to ART when assaying heterogeneous matrices. Copyright © 2018 Elsevier Ltd. All rights reserved.
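For reference, the row-action ART update with the non-negativity constraint mentioned above can be sketched as a Kaczmarz-style sweep: project the estimate onto each measurement hyperplane in turn, then clip negative voxels. This is a generic sketch, not the authors' TGS code; the relaxation factor and sweep count are illustrative.

```python
import numpy as np

def art_nonneg(A, y, n_sweeps=500, relax=1.0):
    """Kaczmarz-style ART: cycle over measurement rows, project onto each
    row's hyperplane, then enforce x >= 0 after every update."""
    x = np.zeros(A.shape[1])
    row_norm2 = (A * A).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norm2[i] > 0:
                x += relax * (y[i] - A[i] @ x) / row_norm2[i] * A[i]
                np.maximum(x, 0.0, out=x)       # non-negativity constraint
    return x
```

For consistent data with a non-negative solution, the sweeps converge to that solution; with inconsistent (noisy) data, the relaxation factor controls the limit cycle, which is one reason MLEM can behave better on heterogeneous matrices.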
Variance-reduction normalization technique for a compton camera system
NASA Astrophysics Data System (ADS)
Kim, S. M.; Lee, J. S.; Kim, J. H.; Seo, H.; Kim, C. H.; Lee, C. S.; Lee, S. J.; Lee, M. C.; Lee, D. S.
2011-01-01
For an artifact-free dataset, pre-processing (known as normalization) is needed to correct the inherent non-uniformity of the detection properties of the Compton camera, which consists of scattering and absorbing detectors. The detection efficiency depends on the non-uniform detection efficiencies of the scattering and absorbing detectors, the different incidence angles onto the detector surfaces, and the geometry of the two detectors. The correction factor for each detected position pair, referred to as the normalization coefficient, is expressed as a product of factors representing the various variations. A variance-reduction technique (VRT), a normalization method, was studied for the Compton camera. For the VRT, Compton list-mode data of a planar uniform source of 140 keV was generated with the GATE simulation tool. The projection data of a cylindrical software phantom were normalized with normalization coefficients determined from the non-uniformity map, and then reconstructed by an ordered subset expectation maximization algorithm. The coefficients of variation and percent errors of the 3-D reconstructed images showed that the VRT applied to the Compton camera provides enhanced image quality and an increased recovery rate of uniformity in the reconstructed image.
Fertilizer placement to maximize nitrogen use by fescue
USDA-ARS?s Scientific Manuscript database
The method of fertilizer nitrogen (N) application can affect N uptake in tall fescue and therefore its yield and quality. Subsurface-banding (knife) of fertilizer maximizes fescue N uptake in the poorly-drained claypan soils of southeastern Kansas. This study was conducted to determine if knifed N r...
Mikhaylova, E; Kolstein, M; De Lorenzo, G; Chmeissani, M
2014-07-01
A novel positron emission tomography (PET) scanner design based on a room-temperature pixelated CdTe solid-state detector is being developed within the framework of the Voxel Imaging PET (VIP) Pathfinder project [1]. The simulation results show the great potential of the VIP to produce high-resolution images even in extremely challenging conditions such as the screening of a human head [2]. With an unprecedentedly high channel density (450 channels/cm³), image reconstruction is a challenge, so optimization is needed to find the best algorithm in order to correctly exploit the promising detector potential. The following reconstruction algorithms are evaluated: 2-D Filtered Backprojection (FBP), Ordered Subset Expectation Maximization (OSEM), List-Mode OSEM (LM-OSEM), and the Origin Ensemble (OE) algorithm. The evaluation is based on the comparison of a true image phantom with a set of reconstructed images obtained by each algorithm. This is achieved by calculating image quality merit parameters such as the bias, the variance and the mean square error (MSE). A systematic optimization of each algorithm is performed by varying the reconstruction parameters, such as the cutoff frequency of the noise filters and the number of iterations. A region of interest (ROI) analysis of the reconstructed phantom is also performed for each algorithm and the results are compared. Additionally, the performance of the image reconstruction methods is compared by calculating the modulation transfer function (MTF). The reconstruction time is also taken into account to choose the optimal algorithm. The analysis is based on GAMOS [3] simulations including the expected CdTe and electronics specifics.
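The bias, variance and MSE merit parameters mentioned above can be computed per voxel from an ensemble of reconstructions of the same phantom, and the per-voxel identity MSE = bias² + variance is a handy consistency check. A minimal numpy sketch (the ensemble here is synthetic, not a VIP reconstruction):

```python
import numpy as np

def merit_figures(recons, truth):
    """recons: (n_realizations, ...) stack of reconstructions of one phantom.
    Returns per-voxel bias, variance and mean square error vs. the truth."""
    mean_img = recons.mean(axis=0)
    bias = mean_img - truth
    var = recons.var(axis=0)                    # population variance (ddof=0)
    mse = ((recons - truth) ** 2).mean(axis=0)  # equals bias**2 + var
    return bias, var, mse
```

Separating bias from variance is what lets a comparison like this distinguish an algorithm that is systematically wrong (e.g. over-smoothed FBP) from one that is merely noisy at a given iteration count.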
3D image reconstruction algorithms for cryo-electron-microscopy images of virus particles
NASA Astrophysics Data System (ADS)
Doerschuk, Peter C.; Johnson, John E.
2000-11-01
A statistical model for the object and the complete image formation process in cryo electron microscopy of viruses is presented. Using this model, maximum likelihood reconstructions of the 3D structure of viruses are computed using the expectation maximization algorithm and an example based on Cowpea mosaic virus is provided.
Dynamic Image Forces Near a Metal Surface and the Point-Charge Motion
ERIC Educational Resources Information Center
Gabovich, A. M.; Voitenko, A. I.
2012-01-01
The problem of charge motion governed by image force attraction near a plane metal surface is considered and solved self-consistently. The temporal dispersion of metal dielectric permittivity makes the image forces dynamic and, hence, finite, contrary to the results of the conventional approach. Therefore, the maximal attainable velocity turns out…
NASA Astrophysics Data System (ADS)
Zhu, Junjie
2017-02-01
Localized surface plasmon resonances arising from the free carriers in copper-deficient copper chalcogenide nanocrystals (Cu2-xE, E = S, Se) endow them with high extinction coefficients in the near-infrared range, which is advantageous for photothermal applications. Although Cu2-xE nanocrystals with different compositions (0 < x ≤ 1) all possess NIR absorption, their extinction coefficients differ significantly owing to their distinct valence-band free carrier concentrations. Herein, by optimizing the synthetic conditions, we were able to obtain pure covellite-phase CuS nanoparticles with a maximized free carrier concentration (x = 1), which provides an extremely high mass extinction coefficient (up to 60 L g⁻¹ cm⁻¹ at 980 nm and 32.4 L g⁻¹ cm⁻¹ at 800 nm). To the best of our knowledge, these values are the highest among all inorganic nanomaterials. High-quality Cu2-xSe can also be obtained with a similar approach. In order to use CuS nanocrystals for biomedical applications, we further transferred these nanocrystals into aqueous solution with an amphiphilic polymer and covalently linked them with beta-cyclodextrin. Using host-guest interactions, adamantane-modified RGD peptide can be further anchored on the nanoparticles for the recognition of integrin-positive cancer cells. Together with the high extinction coefficient and outstanding photothermal conversion efficiency (determined to be higher than 40%), these CuS nanocrystals were applied for photothermal therapy of cancer cells and photoacoustic imaging. In addition, the anticancer drug doxorubicin can also be loaded onto the nanoparticles through either hydrophobic or electrostatic interactions for chemotherapy.
Optimal sampling with prior information of the image geometry in microfluidic MRI.
Han, S H; Cho, H; Paulsen, J L
2015-03-01
Recent advances in MRI acquisition for microscopic flows enable unprecedented sensitivity and speed in a portable NMR/MRI microfluidic analysis platform. However, the application of MRI to microfluidics usually suffers from prolonged acquisition times owing to the combination of the high resolution and wide field of view necessary to resolve details within microfluidic channels. When prior knowledge of the image geometry is available as a binarized image, such as for microfluidic MRI, it is possible to reduce sampling requirements by incorporating this information into the reconstruction algorithm. The current approach to the design of partial weighted random sampling schemes is to bias toward the high signal energy portions of the binarized image geometry after Fourier transformation (i.e. in its k-space representation). Although this sampling prescription is frequently effective, it can be far from optimal in certain limiting cases, such as for a 1D channel, and more generally yields inefficient sampling schemes at low degrees of sub-sampling. This work explores the tradeoff between signal acquisition and incoherent sampling on image reconstruction quality given prior knowledge of the image geometry for weighted random sampling schemes, finding that the optimal distribution is not robustly determined by maximizing the acquired signal but by interpreting its marginal change with respect to the sub-sampling rate. We develop a corresponding sampling design methodology that deterministically yields a near-optimal sampling distribution for image reconstructions incorporating knowledge of the image geometry. The technique robustly identifies optimal weighted random sampling schemes and provides improved reconstruction fidelity for multiple 1D and 2D images, when compared to prior techniques for sampling optimization given knowledge of the image geometry. Copyright © 2015 Elsevier Inc. All rights reserved.
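The baseline scheme that this work improves upon, weighting k-space sampling probabilities by the spectral energy of the binarized geometry prior, can be sketched in a few lines. This shows only the energy-weighted prescription criticized in the abstract, not the paper's refined marginal-change criterion.

```python
import numpy as np

def weighted_kspace_mask(geometry, n_samples, rng=None):
    """Draw a k-space sampling mask with point probabilities proportional
    to the spectral energy |F(geometry)| of the binarized geometry prior."""
    rng = rng or np.random.default_rng(0)
    weights = np.abs(np.fft.fft2(geometry)).ravel()
    p = weights / weights.sum()                 # sampling probabilities
    idx = rng.choice(p.size, size=n_samples, replace=False, p=p)
    mask = np.zeros(p.size, dtype=bool)
    mask[idx] = True
    return mask.reshape(geometry.shape)
```

For a strictly 1D channel, nearly all spectral energy collapses onto a single k-space line, so this prescription over-samples redundant points; that degenerate case is exactly the motivation for the marginal-change criterion developed in the paper.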
NASA Astrophysics Data System (ADS)
Bharti, P. K.; Khan, M. I.; Singh, Harbinder
2010-10-01
Off-line quality control is considered to be an effective approach to improving product quality at a relatively low cost, and the Taguchi method is one of the conventional approaches for this purpose. Through this approach, engineers can determine a feasible combination of design parameters such that the variability of a product's response is reduced and the mean is close to the desired target. The traditional Taguchi method focused on ensuring good performance at the parameter design stage with one quality characteristic, but most products and processes have multiple quality characteristics. The optimal parameter design minimizes the total quality loss over multiple quality characteristics. Several studies have presented approaches addressing multiple quality characteristics, most of them concerned with finding the parameter combination that maximizes signal-to-noise (SN) ratios. The advantages of this approach are that the optimal parameter design coincides with the traditional Taguchi method for a single quality characteristic, and that it maximizes the reduction of total quality loss across multiple quality characteristics. This paper presents a literature review on solving multi-response problems in the Taguchi method and its successful implementation in various industries.
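The SN ratios maximized in the Taguchi method have standard closed forms for the three classical cases (smaller-the-better, larger-the-better, nominal-the-best). A minimal numpy sketch of these textbook formulas:

```python
import numpy as np

def sn_smaller_is_better(y):
    """SN = -10 log10(mean(y^2)); penalizes any deviation above zero."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

def sn_larger_is_better(y):
    """SN = -10 log10(mean(1/y^2)); rewards uniformly large responses."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

def sn_nominal_is_best(y):
    """SN = 10 log10(mean^2 / variance); rewards low relative spread."""
    y = np.asarray(y, dtype=float)
    return 10.0 * np.log10(y.mean()**2 / y.var(ddof=1))
```

In a parameter-design study, each factor-level combination of the orthogonal array yields a set of replicate responses y, and the level combination with the highest SN ratio is selected; the multi-response extensions surveyed in this review aggregate these per-characteristic ratios into a single criterion.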
Why is Improving Water Quality in the Gulf of Mexico so Critical?
The EPA regional offices and the Gulf of Mexico Program work with Gulf States to continue to maximize the efficiency and utility of water quality monitoring efforts for local managers by coordinating and standardizing state and federal water quality data.
A maximally stable extremal region based scene text localization method
NASA Astrophysics Data System (ADS)
Xiao, Chengqiu; Ji, Lixin; Gao, Chao; Li, Shaomei
2015-07-01
Text localization in natural scene images is an important prerequisite for many content-based image analysis tasks. This paper proposes a novel text localization algorithm. First, a fast pruning algorithm is designed to extract Maximally Stable Extremal Regions (MSERs) as basic character candidates. Second, these candidates are filtered using the properties of fitted ellipses and the distribution properties of characters to exclude most non-characters. Finally, a new extremal-region projection merging algorithm is designed to group character candidates into words. Experimental results show that the proposed method has an advantage in speed and achieves higher precision and recall rates than the latest published algorithms.
Code of Federal Regulations, 2012 CFR
2012-01-01
... or market value characteristics and the credit quality of transferred financial assets (together with... with maximizing the net present value of the financial asset. Servicers shall have the authority to modify assets to address reasonably foreseeable default, and to take other action to maximize the value...
The Hood River Farmers Irrigation District used $36.2 million in CWSRF loans for a multiple-year endeavor to convert the open canal system to a piped, pressurized irrigation system to maximize water conservation and restore reliable water delivery to crops
NASA Astrophysics Data System (ADS)
N'Diaye, M.; Martinache, F.; Jovanovic, N.; Lozi, J.; Guyon, O.; Norris, B.; Ceau, A.; Mary, D.
2018-02-01
Context. Island effect (IE) aberrations are induced by differential pistons, tips, and tilts between neighboring pupil segments on ground-based telescopes, which severely limit the observations of circumstellar environments on the recently deployed exoplanet imagers (e.g., VLT/SPHERE, Gemini/GPI, Subaru/SCExAO) during the best observing conditions. Caused by air temperature gradients at the level of the telescope spiders, these aberrations were recently diagnosed with success on VLT/SPHERE, but so far no complete calibration has been performed to overcome this issue. Aims: We propose closed-loop focal plane wavefront control based on the asymmetric Fourier pupil wavefront sensor (APF-WFS) to calibrate these aberrations and improve the image quality of exoplanet high-contrast instruments in the presence of the IE. Methods: Assuming the archetypal four-quadrant aperture geometry in 8 m class telescopes, we describe these aberrations as a sum of the independent modes of piston, tip, and tilt that are distributed in each quadrant of the telescope pupil. We calibrate these modes with the APF-WFS before introducing our wavefront control for closed-loop operation. We perform numerical simulations and then experimental tests on a real system using Subaru/SCExAO to validate our control loop in the laboratory and on-sky. Results: Closed-loop operation with the APF-WFS enables the compensation for the IE in simulations and in the laboratory for the small aberration regime. Based on a calibration in the near infrared, we observe an improvement of the image quality in the visible range on the SCExAO/VAMPIRES module with a relative increase in the image Strehl ratio of 37%. Conclusions: Our first IE calibration paves the way for maximizing the science operations of the current exoplanet imagers. Such an approach and its results prove also very promising in light of the Extremely Large Telescopes (ELTs) and the presence of similar artifacts with their complex aperture geometry.
The optimal imaging strategy for patients with stable chest pain: a cost-effectiveness analysis.
Genders, Tessa S S; Petersen, Steffen E; Pugliese, Francesca; Dastidar, Amardeep G; Fleischmann, Kirsten E; Nieman, Koen; Hunink, M G Myriam
2015-04-07
The optimal imaging strategy for patients with stable chest pain is uncertain. To determine the cost-effectiveness of different imaging strategies for patients with stable chest pain. Microsimulation state-transition model. Published literature. 60-year-old patients with a low to intermediate probability of coronary artery disease (CAD). Lifetime. The United States, the United Kingdom, and the Netherlands. Coronary computed tomography (CT) angiography, cardiac stress magnetic resonance imaging, stress single-photon emission CT, and stress echocardiography. Lifetime costs, quality-adjusted life-years (QALYs), and incremental cost-effectiveness ratios. The strategy that maximized QALYs and was cost-effective in the United States and the Netherlands began with coronary CT angiography, continued with cardiac stress imaging if angiography found at least 50% stenosis in at least 1 coronary artery, and ended with catheter-based coronary angiography if stress imaging induced ischemia of any severity. For U.K. men, the preferred strategy was optimal medical therapy without catheter-based coronary angiography if coronary CT angiography found only moderate CAD or stress imaging induced only mild ischemia. In these strategies, stress echocardiography was consistently more effective and less expensive than other stress imaging tests. For U.K. women, the optimal strategy was stress echocardiography followed by catheter-based coronary angiography if echocardiography induced mild or moderate ischemia. Results were sensitive to changes in the probability of CAD and assumptions about false-positive results. All cardiac stress imaging tests were assumed to be available. Exercise electrocardiography was included only in a sensitivity analysis. Differences in QALYs among strategies were small. Coronary CT angiography is a cost-effective triage test for 60-year-old patients who have nonacute chest pain and a low to intermediate probability of CAD. Erasmus University Medical Center.
Littlejohn, George R.; Mansfield, Jessica C.; Christmas, Jacqueline T.; Witterick, Eleanor; Fricker, Mark D.; Grant, Murray R.; Smirnoff, Nicholas; Everson, Richard M.; Moger, Julian; Love, John
2014-01-01
Plant leaves are optically complex, which makes them difficult to image by light microscopy. Careful sample preparation is therefore required to enable researchers to maximize the information gained from advances in fluorescent protein labeling, cell dyes and innovations in microscope technologies and techniques. We have previously shown that mounting leaves in the non-toxic, non-fluorescent perfluorocarbon (PFC) perfluorodecalin (PFD) enhances the optical properties of the leaf with minimal impact on physiology. Here, we assess the use of the PFCs PFD and perfluoroperhydrophenanthrene (PP11) for in vivo plant leaf imaging using four advanced modes of microscopy: laser scanning confocal microscopy (LSCM), two-photon fluorescence microscopy, second harmonic generation microscopy, and stimulated Raman scattering (SRS) microscopy. For every mode of imaging tested, we observed an improved signal when leaves were mounted in PFD or in PP11, compared to mounting the samples in water. Using an image analysis technique based on autocorrelation to quantitatively assess LSCM image deterioration with depth, we show that PP11 outperformed PFD as a mounting medium by enabling the acquisition of clearer images deeper into the tissue. In addition, we show that SRS microscopy can be used to image PFCs directly in the mesophyll and thereby easily delimit the “negative space” within a leaf, which may have important implications for studies of leaf development. Direct comparison of on- and off-resonance SRS micrographs shows that PFCs do not form intracellular aggregates in live plants. We conclude that the application of PFCs as mounting media substantially increases advanced microscopy image quality of living mesophyll and leaf vascular bundle cells. PMID:24795734
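An autocorrelation-based quality measure of the kind mentioned above exploits the fact that blur widens the autocorrelation peak, so the drop in correlation after a one-pixel shift shrinks as the image degrades with depth. The sketch below is a generic illustration of that principle in numpy, not the authors' exact metric.

```python
import numpy as np

def autocorr_sharpness(img):
    """Autocorrelation-peak sharpness: blurred images have a wider
    autocorrelation peak, so the drop after a one-pixel shift shrinks."""
    f = img - img.mean()
    ac = np.fft.ifft2(np.abs(np.fft.fft2(f)) ** 2).real  # circular autocorr
    ac /= ac[0, 0]                                       # normalize the peak
    return 1.0 - 0.5 * (ac[0, 1] + ac[1, 0])             # one-pixel drop
```

Evaluating such a metric slice by slice down a confocal z-stack gives a depth profile of image deterioration, which is how the two mounting media can be compared quantitatively.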
Deformable 3D-2D registration for CT and its application to low dose tomographic fluoroscopy
NASA Astrophysics Data System (ADS)
Flach, Barbara; Brehm, Marcus; Sawall, Stefan; Kachelrieß, Marc
2014-12-01
Many applications in medical imaging include image registration for matching of images from the same or different modalities. In the case of full data sampling, the respective reconstructed images are usually of such a good image quality that standard deformable volume-to-volume (3D-3D) registration approaches can be applied. But research in temporal-correlated image reconstruction and dose reductions increases the number of cases where rawdata are available from only few projection angles. Here, deteriorated image quality leads to non-acceptable deformable volume-to-volume registration results. Therefore a registration approach is required that is robust against a decreasing number of projections defining the target position. We propose a deformable volume-to-rawdata (3D-2D) registration method that aims at finding a displacement vector field maximizing the alignment of a CT volume and the acquired rawdata based on the sum of squared differences in rawdata domain. The registration is constrained by a regularization term in accordance with a fluid-based diffusion. Both cost function components, the rawdata fidelity and the regularization term, are optimized in an alternating manner. The matching criterion is optimized by a conjugate gradient descent for nonlinear functions, while the regularization is realized by convolution of the vector fields with Gaussian kernels. We validate the proposed method and compare it to the demons algorithm, a well-known 3D-3D registration method. The comparison is done for a range of 4-60 target projections using datasets from low dose tomographic fluoroscopy as an application example. The results show a high correlation to the ground truth target position without introducing artifacts even in the case of very few projections. In particular the matching in the rawdata domain is improved compared to the 3D-3D registration for the investigated range. 
The proposed volume-to-rawdata registration increases the robustness regarding sparse rawdata and provides more stable results than volume-to-volume approaches. By applying the proposed registration approach to low dose tomographic fluoroscopy it is possible to improve the temporal resolution and thus to increase the robustness of low dose tomographic fluoroscopy.
DOT report for implementing OMB's information dissemination quality guidelines
DOT National Transportation Integrated Search
2002-08-01
Consistent with the Office of Management and Budget's (OMB) Guidelines (for Ensuring and Maximizing the Quality, Objectivity, Utility, and Integrity of Information Disseminated by Federal Agencies) implementing Section 515 of the Treasury and...
Supercompensation Kinetics of Physical Qualities During a Taper in Team-Sport Athletes.
Marrier, Bruno; Robineau, Julien; Piscione, Julien; Lacome, Mathieu; Peeters, Alexis; Hausswirth, Christophe; Morin, Jean-Benoît; Le Meur, Yann
2017-10-01
Peaking for major competition is considered critical for maximizing team-sport performance. However, there is little scientific information available to guide coaches in prescribing efficient tapering strategies for team-sport players. To monitor the changes in physical performance in elite team-sport players during a 3-wk taper after a preseason training camp. Ten male international rugby sevens players were tested before (Pre) and after (Post) a 4-wk preseason training camp focusing on high-intensity training and strength training with moderate loads and once each week during a subsequent 3-wk taper. During each testing session, midthigh-pull maximal strength, sprint-acceleration mechanical outputs, and performance, as well as repeated-sprint ability (RSA), were assessed. At Post, no single peak performance was observed for maximal lower-limb force output and sprint performance, while RSA peaked for only 1 athlete. During the taper, 30-m-sprint time decreased almost certainly (-3.1% ± 0.9%, large), while maximal lower-limb strength and RSA, respectively, improved very likely (+7.7% ± 5.3%, small) and almost certainly (+9.0% ± 2.6%, moderate). Of the peak performances, 70%, 80%, and 80% occurred within the first 2 wk of taper for RSA, maximal force output, and sprint performance, respectively. These results show the sensitivity of physical qualities to tapering in rugby sevens players and suggest that an ~1- to 2-wk tapering time frame appears optimal to maximize the overall physical-performance response.
Dragonfly: an implementation of the expand-maximize-compress algorithm for single-particle imaging.
Ayyer, Kartik; Lan, Ti-Yen; Elser, Veit; Loh, N Duane
2016-08-01
Single-particle imaging (SPI) with X-ray free-electron lasers has the potential to change fundamentally how biomacromolecules are imaged. The structure would be derived from millions of diffraction patterns, each from a different copy of the macromolecule before it is torn apart by radiation damage. The challenges posed by the resultant data stream are staggering: millions of incomplete, noisy and un-oriented patterns have to be computationally assembled into a three-dimensional intensity map and then phase reconstructed. In this paper, the Dragonfly software package is described, based on a parallel implementation of the expand-maximize-compress reconstruction algorithm that is well suited for this task. Auxiliary modules to simulate SPI data streams are also included to assess the feasibility of proposed SPI experiments at the Linac Coherent Light Source, Stanford, California, USA.
Onboard Image Processing System for Hyperspectral Sensor
Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun
2015-01-01
Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large-volume and high-speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed and implemented in the onboard circuitry that corrects the sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors, in order to maximize the compression ratio. The employed image compression method is based on the Fast, Efficient, Lossless Image compression System (FELICS), a hierarchical predictive coding method with resolution scaling. To improve FELICS's performance in image decorrelation and entropy coding, we apply two-dimensional interpolation prediction and adaptive Golomb-Rice coding. It supports progressive decompression using resolution scaling while maintaining superior performance in speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which allows onboard hardware to be reduced by multiplexing sensor signals into a smaller number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding to its size, weight, power consumption, or fabrication cost. PMID:26404281
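The adaptive Golomb-Rice coding named in this record can be illustrated with a minimal sketch. This is not the flight implementation; in practice the Rice parameter k is adapted per context, and the function names here are illustrative:

```python
def golomb_rice_encode(n: int, k: int) -> str:
    """Encode non-negative n with Rice parameter k: the quotient n >> k
    in unary (q ones, then a terminating zero), followed by the k
    low-order remainder bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

def golomb_rice_decode(bits: str, k: int) -> int:
    q = bits.index("0")                          # length of the unary run
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r
```

Small prediction residuals yield short codewords, which is why the method pairs well with a good interpolation predictor.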
Effects of dynamic hyperinflation on exercise capacity and quality of life in stable COPD patients.
Zhao, Li; Peng, Liyue; Wu, Baomei; Bu, Xiaoning; Wang, Chen
2016-09-01
Dynamic hyperinflation (DH) is an important pathophysiological characteristic of chronic obstructive pulmonary disease (COPD). There is increasing evidence that DH has negative effects on exercise performance and quality of life. The objective of this study was to explore the effects of DH on exercise capacity and quality of life in stable COPD patients. Fifty-eight COPD patients and 20 matched healthy individuals underwent a pulmonary function test, 6-min walk test and symptom-limited cardiopulmonary exercise test (CPET). The end-expiratory lung volume/total lung capacity ratio (EELVmax/TLC) at peak exercise of CPET was evaluated, and EELVmax/TLC ≥ 75% was defined as 'severe dynamic hyperinflation (SDH)'. Of the 58 patients studied, 29 (50.0%) presented with SDH (SDH+ group, EELVmax/TLC 79.60 ± 3.60%), having worse maximal exercise capacity, reflected by lower peak load, maximal oxygen uptake (VO2 max), maximal carbon dioxide output (VCO2 max) and maximal minute ventilation (VEmax), than did those without SDH (SDH- group, EELVmax/TLC 67.44 ± 6.53%). The EELVmax/TLC ratio at peak exercise had no association with variables of pulmonary function and 6-min walk distance (6MWD), but correlated inversely with peak load, VO2 max, VCO2 max and VEmax (r = -0.300 to -0.351, P < 0.05). Although no significant differences were observed, patients with EELVmax/TLC ≥ 75% tended to have a higher COPD assessment test score (15.07 ± 6.55 vs 13.28 ± 6.59, P = 0.303). DH develops variably during exercise and has a greater impact on maximal exercise capacity than on 6MWD, even in those with the same extent of pulmonary function impairment at rest. © 2015 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
He, Xingyu; Tong, Ningning; Hu, Xiaowei
2018-01-01
Compressive sensing has been successfully applied to inverse synthetic aperture radar (ISAR) imaging of moving targets. By exploiting the block sparse structure of the target image, sparse solutions for multiple measurement vectors (MMV) can be applied in ISAR imaging and a substantial performance improvement can be achieved. As an effective sparse recovery method, sparse Bayesian learning (SBL) for MMV involves a matrix inverse at each iteration, and its associated computational complexity grows significantly with the problem size. To address this problem, we develop a fast inverse-free (IF) SBL method for MMV. A relaxed evidence lower bound (ELBO), which is computationally more amenable than the traditional ELBO used by SBL, is obtained by invoking a fundamental property of smooth functions. A variational expectation-maximization scheme is then employed to maximize the relaxed ELBO, and a computationally efficient IF-MSBL algorithm is proposed. Numerical results based on simulated and real data show that the proposed method can reconstruct row-sparse signals accurately and obtain clear superresolution ISAR images. Moreover, the running time and computational complexity are reduced to a great extent compared with traditional SBL methods.
Matching Pupils and Teachers to Maximize Expected Outcomes.
ERIC Educational Resources Information Center
Ward, Joe H., Jr.; And Others
To achieve a good teacher-pupil match, it is necessary (1) to predict the learning outcomes that will result when each student is instructed by each teacher, (2) to use the predicted performance to compute an Optimality Index for each teacher-pupil combination to indicate the quality of each combination toward maximizing learning for all students,…
ERIC Educational Resources Information Center
Remley, Dan; Goard, Linnette Mizer; Taylor, Christopher A.; Ralston, Robin A.
2015-01-01
Although many consumers perceive locally produced, fresh fruits and vegetables to be healthier, they might not have the knowledge and skills to retain optimal nutritional quality following harvest or purchase. We surveyed Ohio farmers market consumers' and managers' knowledge and interests related to maximizing nutritional value of produce.…
Concept Study Report: Extreme-Ultraviolet Imaging Spectrometer Solar-B
NASA Technical Reports Server (NTRS)
Doschek, George A.; Brown, Charles M.; Davila, Joseph M.; Dere, Kenneth P.; Korendyke, Clarence M.; Mariska, John T.; Seely, John F.
1999-01-01
We propose a next generation Extreme-ultraviolet Imaging Spectrometer (EIS) that for the first time combines high spectral, spatial, and temporal resolution in a single solar spectroscopic instrument. The instrument consists of a multilayer-coated off-axis telescope mirror and a multilayer-coated grating spectrometer. The telescope mirror forms solar images on the spectrometer entrance slit assembly. The spectrometer forms stigmatic spectra of the solar region located at the slit. This region is selected by the articulated telescope mirror. Monochromatic images are obtained either by rastering the solar region across a narrow entrance slit, or by using a very wide slit (called a slot) in place of the slit. Monochromatic images of the region centered on the slot are obtained in a single exposure. Half of each optic is coated to maximize reflectance at 195 Angstroms; the other half to maximize reflectance at 270 Angstroms. The two Extreme Ultraviolet (EUV) wavelength bands have been selected to maximize spectral, dynamical, and plasma diagnostic capabilities. Spectral lines are observed that are formed over a temperature range from about 0.1 MK to about 20 MK. The main EIS instrument characteristics are: wavelength bands - 180 to 204 Angstroms and 250 to 290 Angstroms; spectral resolution - 0.0223 Angstroms/pixel (34.3 km/s at 195 Angstroms and 23.6 km/s at 284 Angstroms); slit dimensions - 4 slits, two currently specified dimensions being 1" x 1024" and 50" x 1024" (the slot); largest spatial field of view in a single exposure - 50" x 1024"; highest time resolution for active region velocity studies - 4.4 s.
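The quoted velocity resolutions follow directly from the dispersion via dv = c * (dlambda / lambda); a quick sanity check in plain arithmetic (not project code):

```python
C_KM_S = 299_792.458  # speed of light, km/s

def velocity_per_pixel(dispersion_angstrom: float, wavelength_angstrom: float) -> float:
    """Velocity sampling per pixel: dv = c * (dlambda / lambda)."""
    return C_KM_S * dispersion_angstrom / wavelength_angstrom

v195 = velocity_per_pixel(0.0223, 195.0)   # ~34.3 km/s
v284 = velocity_per_pixel(0.0223, 284.0)   # ~23.5 km/s (the proposal rounds to 23.6)
```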
A Method for Evaluating Tuning Functions of Single Neurons based on Mutual Information Maximization
NASA Astrophysics Data System (ADS)
Brostek, Lukas; Eggert, Thomas; Ono, Seiji; Mustari, Michael J.; Büttner, Ulrich; Glasauer, Stefan
2011-03-01
We introduce a novel approach for the evaluation of neuronal tuning functions, which can be expressed by the conditional probability of observing a spike given any combination of independent variables. This probability can be estimated from experimentally available data. By maximizing the mutual information between the probability distribution of the spike occurrence and that of the variables, the dependence of the spike on the input variables is maximized as well. We used this method to analyze the dependence of neuronal activity in cortical area MSTd on signals related to movement of the eye and retinal image movement.
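The mutual information between spike occurrence and a binned stimulus variable can be estimated from a joint histogram. A minimal sketch (the binning scheme and names are illustrative, not the authors' code):

```python
import numpy as np

def mutual_information(joint_counts: np.ndarray) -> float:
    """MI in bits between the row variable (spike / no spike) and the
    column variable (stimulus bin), estimated from a joint count table."""
    p = joint_counts / joint_counts.sum()
    px = p.sum(axis=0, keepdims=True)   # marginal over stimulus bins
    ps = p.sum(axis=1, keepdims=True)   # marginal over spike occurrence
    nz = p > 0                          # skip empty cells (0 * log 0 = 0)
    return float((p[nz] * np.log2(p[nz] / (ps @ px)[nz])).sum())
```

An independent joint table gives MI of 0; a perfectly predictive one gives the entropy of the spike variable.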
Yoon, Ki-Hyuk; Ju, Heongkyu; Kwon, Hyunkyung; Park, Inkyu; Kim, Sung-Kyu
2016-02-22
We present the optical characteristics of view images provided by a high-density multi-view autostereoscopic 3D display (HD-MVA3D) with a parallax barrier (PB). Diffraction effects, which become of great importance in a display system that uses a PB, are considered in a one-dimensional model of the 3D display, in which light from display panel pixels through PB slits to the viewing zone is numerically simulated. The simulation results are then compared to the corresponding experimental measurements and discussed. We demonstrate that, as a main parameter for view image quality evaluation, the Fresnel number can be used to determine the PB slit aperture for the best performance of the display system. It is revealed that a set of display parameters giving a Fresnel number of ∼ 0.7 offers maximized brightness of the view images, while that corresponding to a Fresnel number of 0.4 ∼ 0.5 offers minimized image crosstalk. The compromise between brightness and crosstalk enables optimization of the relative magnitude of the brightness to the crosstalk and leads to the choice of a display parameter set for the HD-MVA3D with a PB that satisfies the condition where the Fresnel number lies between 0.4 and 0.7.
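Under the common convention, the Fresnel number is N = a^2 / (lambda * L), with a the slit half-aperture and L the propagation distance; the paper's exact convention may differ. A small sketch with purely illustrative geometry (the actual display dimensions are not given in this record):

```python
def fresnel_number(half_aperture_m: float, distance_m: float, wavelength_m: float) -> float:
    """N = a^2 / (lambda * L), with a = aperture half-width (common convention)."""
    return half_aperture_m ** 2 / (wavelength_m * distance_m)

# hypothetical geometry: 50 um half-aperture, 6.5 mm panel-to-barrier gap, 550 nm light
N = fresnel_number(50e-6, 6.5e-3, 550e-9)   # lands near 0.7, inside the reported window
```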
Okawa, S; Endo, Y; Hoshi, Y; Yamada, Y
2012-01-01
A method to reduce noise for time-domain diffuse optical tomography (DOT) is proposed. Poisson noise, which contaminates time-resolved photon counting data, is reduced by use of maximum a posteriori estimation. The noise-free data are modeled as a Markov random process, and the measured time-resolved data are assumed to be Poisson distributed random variables. The posterior probability of the occurrence of the noise-free data is formulated. By maximizing this probability, the noise-free data are estimated, and the Poisson noise is reduced as a result. The performance of the Poisson noise reduction is demonstrated in experiments on image reconstruction for time-domain DOT. In simulations, the proposed method reduced the relative error between the noise-free and noisy data to about one thirtieth, and the reconstructed DOT image was smoothed by the proposed noise reduction. The variance of the reconstructed absorption coefficients decreased by 22% in a phantom experiment. The quality of DOT, which can be applied to breast cancer screening, among other applications, is improved by the proposed noise reduction.
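The MAP idea (model the noise-free data as smooth, treat the counts as Poisson, and maximize the posterior) can be sketched in one dimension. This is a simplified stand-in for the authors' Markov-random-process model; the prior weight beta, step size, and iteration count are illustrative:

```python
import numpy as np

def map_poisson_denoise(y: np.ndarray, beta: float = 0.5,
                        step: float = 0.05, iters: int = 800) -> np.ndarray:
    """MAP estimate of a Poisson rate signal lambda under a quadratic
    smoothness prior, maximizing
        sum_i (y_i * log(l_i) - l_i) - beta * sum_i (l_i - l_{i-1})^2
    by projected gradient ascent."""
    lam = np.maximum(y.astype(float), 1e-3)
    for _ in range(iters):
        grad = y / lam - 1.0                             # Poisson log-likelihood gradient
        grad[1:] -= 2.0 * beta * (lam[1:] - lam[:-1])    # smoothness prior gradient
        grad[:-1] -= 2.0 * beta * (lam[:-1] - lam[1:])
        lam = np.maximum(lam + step * grad, 1e-3)        # keep rates positive
    return lam
```

On a smooth underlying signal, the estimate trades a small bias for a large variance reduction, which is the mechanism behind the reported error reduction.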
NASA Astrophysics Data System (ADS)
Roman, M. O.; Wang, Z.; Kalb, V.; Cole, T.; Oda, T.; Stokes, E.; Molthan, A.
2016-12-01
A new generation of satellite instruments, represented by the Suomi National Polar-Orbiting Partnership (Suomi-NPP) Visible Infrared Imaging Radiometer Suite (VIIRS), offer global measurements of nocturnal visible and near-infrared light suitable for urban science research. While many promising urban-focused applications have been developed using nighttime satellite imagery in the past 25 years, most studies to-date have been limited by the quality of the captured imagery and the retrieval methods used in heritage (DMSP/OLS) products. Instead, science-quality products that are temporally consistent, global in extent, and local in resolution were needed to monitor human settlements worldwide —particularly for studies within dense urban areas. Since the first-light images from the VIIRS were received in January 2012, the NASA Land Science Investigator-led Processing System (Land SIPS) team has worked on maximizing the capabilities of these low-light measurements to generate a wealth of new information useful for understanding urbanization processes, urban functions, and the vulnerability of urban areas to climate hazards. In a recent case study, our team demonstrated that tracking daily dynamic VIIRS nighttime measurements can provide valuable information about the character of the human activities and behaviors that shape energy consumption and vulnerability (Roman and Stokes, 2015). Moving beyond mapping the physical qualities of urban areas (e.g. land cover and impervious area), VIIRS measurements provide insight into the social, economic, and cultural activities that shape energy and infrastructure use. 
Furthermore, as this time series expands and is merged with other sources of optical remote sensing data (e.g., Landsat-8 and Sentinel 2), VIIRS has the potential to increase our understanding of changes in urban form, structure, and infrastructure—factors that may also influence urban resilience—and how the increasing frequency and severity of climate-related hazards can ultimately affect development pathways and urban policies in the long term.
Silkosessak, O; Jacobs, R; Bogaerts, R; Bosmans, H; Panmekiate, S
2014-01-01
Objectives: To determine the optimal kVp setting for a particular cone beam CT (CBCT) device by maximizing technical image quality at a fixed radiation dose. Methods: The 3D Accuitomo 170 (J. Morita Mfg. Corp., Kyoto, Japan) CBCT was used. The radiation dose as a function of kVp was measured in a cylindrical polymethyl methacrylate (PMMA) phantom using a small-volume ion chamber. Contrast-to-noise ratio (CNR) was measured using a PMMA phantom containing four materials (air, aluminium, polytetrafluoroethylene and low-density polyethylene), which was scanned using 180 combinations of kVp/mA, ranging from 60/1 to 90/8. The CNR was measured for each material using PMMA as the background material. The pure effect of kVp and mAs on the CNR values was analysed. Using a polynomial fit for CNR as a function of mA for each kVp value, the optimal kVp was determined at five dose levels. Results: Absorbed doses ranged between 0.034 mGy/mAs (14 × 10 cm, 60 kVp) and 0.108 mGy/mAs (14 × 10 cm, 90 kVp). The relation between kVp and dose was quasilinear (R2 > 0.99). The effect of mA and kVp on CNR could be modelled using a second-degree polynomial. At a fixed dose, there was a tendency towards higher CNR values at increasing kVp values, especially at low dose levels. A dose reduction through mA was more efficient than an equivalent reduction through kVp in terms of image quality deterioration. Conclusions: For the investigated CBCT model, the optimal contrast at a fixed dose was found at the highest available kVp setting. There is great potential for dose reduction through mA with a minimal loss in image quality. PMID:24708447
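The optimization procedure described (a second-degree polynomial fit of CNR versus mA for each kVp, evaluated at a fixed dose) can be sketched with synthetic numbers. The dose-per-mAs values echo the range reported above, but the CNR samples are made up for illustration:

```python
import numpy as np

# dose per mAs (mGy/mAs) and (mA, CNR) calibration samples per kVp -- synthetic
dose_per_mAs = {60: 0.034, 90: 0.108}
cnr_samples = {60: [(1, 1.0), (4, 2.1), (8, 3.0)],
               90: [(1, 2.5), (4, 5.0), (8, 7.0)]}

def cnr_at_fixed_dose(target_dose_mGy: float, exposure_s: float = 1.0) -> dict:
    """Fit CNR(mA) with a 2nd-degree polynomial per kVp, then evaluate
    each fit at the mA that delivers the target dose at that kVp."""
    out = {}
    for kvp, pts in cnr_samples.items():
        mA, cnr = zip(*pts)
        coeffs = np.polyfit(mA, cnr, 2)
        mA_needed = target_dose_mGy / (dose_per_mAs[kvp] * exposure_s)
        out[kvp] = float(np.polyval(coeffs, mA_needed))
    return out

cnr = cnr_at_fixed_dose(0.27)          # compare kVp settings at 0.27 mGy
best_kvp = max(cnr, key=cnr.get)
```

With these made-up samples the higher kVp wins at fixed dose, mirroring the study's conclusion; with real calibration data the same comparison selects the optimal setting.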
Surveillance and monitoring in breast cancer survivors: maximizing benefit and minimizing harm.
Jochelson, Maxine; Hayes, Daniel F; Ganz, Patricia A
2013-01-01
Although the incidence of breast cancer has increased, breast cancer mortality has decreased, likely as a result of both breast cancer screening and improved treatment. There are well over two million breast cancer survivors in the United States for whom appropriate surveillance continues to be a subject of controversy. The guidelines from the American Society of Clinical Oncology (ASCO) and the American College of Physicians are clear: only performance of yearly screening mammography is supported by evidence. Although advanced imaging technologies and sophisticated circulating tumor biomarker studies are exquisitely sensitive for the detection of recurrent breast cancer, there is no proof that earlier detection of metastases will improve outcome. A lack of specificity may lead to more tests and patient anxiety. Many breast cancer survivors are not followed by oncologists, and their doctors may not be familiar with these recommendations. Oncologists also disregard the data. A plethora of both blood tests and nonmammographic imaging tests are frequently performed in asymptomatic women. The blood tests, marker studies, and advanced imaging techniques are expensive and, with limited health care funds, may prevent funding for more appropriate aspects of patient care. Abnormal marker studies lead to additional imaging procedures. Repeated CT scans and radionuclide imaging may induce a second cancer because of the radiation dose, and invasive procedures performed as a result of these examinations also add risk to patients without clear benefits. Improved adherence to the current guidelines can cut costs, reduce risks, and improve patient quality of life without adversely affecting outcome.
Agostini, Denis; Marie, Pierre-Yves; Ben-Haim, Simona; Rouzet, François; Songy, Bernard; Giordano, Alessandro; Gimelli, Alessia; Hyafil, Fabien; Sciagrà, Roberto; Bucerius, Jan; Verberne, Hein J; Slart, Riemer H J A; Lindner, Oliver; Übleis, Christopher; Hacker, Marcus
2016-12-01
The trade-off between resolution and count sensitivity dominates the performance of standard gamma cameras and dictates the need for relatively high doses of radioactivity of the radiopharmaceuticals used in order to limit image acquisition duration. The introduction of cadmium-zinc-telluride (CZT)-based cameras may overcome some of the limitations of conventional gamma cameras. CZT cameras used for the evaluation of myocardial perfusion have been shown to have a higher count sensitivity compared to conventional single photon emission computed tomography (SPECT) techniques. CZT image quality is further improved by the development of a dedicated three-dimensional iterative reconstruction algorithm, based on maximum likelihood expectation maximization (MLEM), which corrects for the loss in spatial resolution due to the line response function of the collimator. All these innovations significantly reduce imaging time and result in a lower radiation exposure for the patient compared with standard SPECT. To guide current and possible future users of the CZT technique for myocardial perfusion imaging, the Cardiovascular Committee of the European Association of Nuclear Medicine, starting from the experience of its members, has decided to examine the current literature regarding procedures and clinical data on CZT cameras. The committee hereby aims 1) to identify the main acquisition protocols; 2) to evaluate the diagnostic and prognostic value of CZT-derived myocardial perfusion; and finally 3) to determine the impact of CZT on radiation exposure.
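The MLEM reconstruction mentioned above uses the classic multiplicative update; a minimal dense-matrix sketch follows. Real CZT reconstructions model the collimator line response inside the system matrix A, which is omitted here:

```python
import numpy as np

def mlem(A: np.ndarray, y: np.ndarray, n_iter: int = 1000) -> np.ndarray:
    """ML-EM for the Poisson model y ~ Poisson(A x):
        x <- x / (A^T 1) * A^T (y / (A x))
    The update preserves non-negativity and monotonically increases
    the Poisson log-likelihood."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])            # sensitivity image, A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)    # measured / predicted projections
        x = x * (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```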
Kalantari, Faraz; Li, Tianfang; Jin, Mingwu; Wang, Jing
2016-01-01
In conventional 4D positron emission tomography (4D-PET), images from different frames are reconstructed individually and aligned by registration methods. Two issues that arise with this approach are as follows: 1) the reconstruction algorithms do not make full use of projection statistics; and 2) the registration between noisy images can result in poor alignment. In this study, we investigated the use of simultaneous motion estimation and image reconstruction (SMEIR) methods for motion estimation/correction in 4D-PET. A modified ordered-subset expectation maximization algorithm coupled with total variation minimization (OSEM-TV) was used to obtain a primary motion-compensated PET (pmc-PET) from all projection data, using Demons derived deformation vector fields (DVFs) as initial motion vectors. A motion model update was performed to obtain an optimal set of DVFs in the pmc-PET and other phases, by matching the forward projection of the deformed pmc-PET with measured projections from other phases. The OSEM-TV image reconstruction was repeated using updated DVFs, and new DVFs were estimated based on updated images. A 4D-XCAT phantom with typical FDG biodistribution was generated to evaluate the performance of the SMEIR algorithm in lung and liver tumors with different contrasts and different diameters (10 to 40 mm). The image quality of the 4D-PET was greatly improved by the SMEIR algorithm. When all projections were used to reconstruct 3D-PET without motion compensation, motion blurring artifacts were present, leading up to 150% tumor size overestimation and significant quantitative errors, including 50% underestimation of tumor contrast and 59% underestimation of tumor uptake. Errors were reduced to less than 10% in most images by using the SMEIR algorithm, showing its potential in motion estimation/correction in 4D-PET. PMID:27385378
P10.05 Establishment of team work awake craniotomy: clinical experience in Taiwan
Chen, P.; Chang, W.; Chao, Y.; Toh, C.; Wei, K.
2017-01-01
Abstract Introduction: Awake craniotomy provides the opportunity to maximize both the extent of resection and the preservation of neurological function. Serial preoperative and postoperative neurobehavioral evaluation, magnetic resonance image examination and intraoperative task investigation require multidisciplinary experts to cooperate. Materials and Methods: From 2013, we gradually established our team for awake craniotomy. Patients who had a brain tumor with symptoms of aphasia or hemiparesis and were willing to cooperate entered the awake craniotomy protocol. Patients received a complete preoperative neurobehavioral examination by psychologists and speech therapists and magnetic resonance imaging including diffusion tensor imaging. During the operation, patients underwent asleep-awake-asleep anesthetic techniques. Direct electrical stimulation was used for both cortical and subcortical mapping. Navigation, including information on the lesion and important fiber tracts, guided the direction of excision. A rehabilitation physician administered the tasks and judged whether a positive response was caused by stimulation or by the excisional procedure. After the operation, postoperative imaging and neurobehavioral examinations were performed within one week and at 3 months, 6 months and one year. Results: We scheduled awake craniotomy on almost every Tuesday. Of the 89 recent patients who received awake craniotomy, twenty-five had recurrent tumors. Seven patients underwent awake craniotomy twice and one patient three times. Two patients had controllable intraoperative seizure attacks. Early termination of the awake status occurred in two patients due to general discomfort. Patients with modest preoperative performance status still benefited from the operation. Neurobehavioral functions improved over time, and some specific features correlated with certain aspects of quality of life.
The tumor grade and the extent of resection considerably influence the recovery of neurobehavioral functions and progression-free survival. Conclusions: Awake craniotomy is a feasible and effective way to improve not only patients' survival but also their quality of life. A team with a neurosurgeon, rehabilitation physician, speech therapist, psychologist, anesthesiologist, nurses and other specialists is important to improve the quality of clinical care for patients who receive awake craniotomy. This study is supported by Chang Gung Memorial Hospital with grant number: CMRPG3D0243
Sensakovic, William F; O'Dell, M Cody; Letter, Haley; Kohler, Nathan; Rop, Baiywo; Cook, Jane; Logsdon, Gregory; Varich, Laura
2016-10-01
Image processing plays an important role in optimizing image quality and radiation dose in projection radiography. Unfortunately, commercial algorithms are black boxes that are often left at or near vendor default settings rather than being optimized. We hypothesize that different commercial image-processing systems, when left at or near default settings, create significant differences in image quality. We further hypothesize that image-quality differences can be exploited to produce images of equivalent quality but lower radiation dose. We used a portable radiography system to acquire images on a neonatal chest phantom and recorded the entrance surface air kerma (ESAK). We applied two image-processing systems (Optima XR220amx, by GE Healthcare, Waukesha, WI; and MUSICA2 by Agfa HealthCare, Mortsel, Belgium) to the images. Seven observers (attending pediatric radiologists and radiology residents) independently assessed image quality using two methods: rating and matching. Image-quality ratings were independently assessed by each observer on a 10-point scale. Matching consisted of each observer matching GE-processed images and Agfa-processed images with equivalent image quality. A total of 210 rating tasks and 42 matching tasks were performed and effective dose was estimated. Median Agfa-processed image-quality ratings were higher than GE-processed ratings. Non-diagnostic ratings were seen over a wider range of doses for GE-processed images than for Agfa-processed images. During matching tasks, observers matched image quality between GE-processed images and Agfa-processed images acquired at a lower effective dose (11 ± 9 μSv; P < 0.0001). Image-processing methods significantly impact perceived image quality. These image-quality differences can be exploited to alter protocols and produce images of equivalent image quality but lower doses.
Those purchasing projection radiography systems or third-party image-processing software should be aware that image processing can significantly impact image quality when settings are left near default values.
Advanced quality systems : probabilistic optimization for profit (Prob.O.Prof) software
DOT National Transportation Integrated Search
2009-04-01
Contractors constantly have to make decisions regarding how to maximize profit and minimize risk on paving projects. With more and more States adopting incentive/disincentive pay adjustment provisions for quality, as measured by various acceptance qu...
Image-Processing Software For A Hypercube Computer
NASA Technical Reports Server (NTRS)
Lee, Meemong; Mazer, Alan S.; Groom, Steven L.; Williams, Winifred I.
1992-01-01
Concurrent Image Processing Executive (CIPE) is software system intended to develop and use image-processing application programs on concurrent computing environment. Designed to shield programmer from complexities of concurrent-system architecture, it provides interactive image-processing environment for end user. CIPE utilizes architectural characteristics of particular concurrent system to maximize efficiency while preserving architectural independence from user and programmer. CIPE runs on Mark-IIIfp 8-node hypercube computer and associated SUN-4 host computer.
High-resolution, high-throughput imaging with a multibeam scanning electron microscope.
Eberle, A L; Mikula, S; Schalek, R; Lichtman, J; Knothe Tate, M L; Zeidler, D
2015-08-01
Electron-electron interactions and detector bandwidth limit the maximal imaging speed of single-beam scanning electron microscopes. We use multiple electron beams in a single column and detect secondary electrons in parallel to increase the imaging speed by close to two orders of magnitude and demonstrate imaging for a variety of samples ranging from biological brain tissue to semiconductor wafers. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
Image authentication using distributed source coding.
Lin, Yao-Chung; Varodayan, David; Girod, Bernd
2012-01-01
We present a novel approach using distributed source coding for image authentication. The key idea is to provide a Slepian-Wolf encoded quantized image projection as authentication data. This version can be correctly decoded with the help of an authentic image as side information. Distributed source coding provides the desired robustness against legitimate variations while detecting illegitimate modification. The decoder incorporating expectation maximization algorithms can authenticate images which have undergone contrast, brightness, and affine warping adjustments. Our authentication system also offers tampering localization by using the sum-product algorithm.
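The core idea (a coarsely quantized pseudo-random projection of the image serves as the authentication data) can be sketched without the Slepian-Wolf machinery. The real system transmits a Slepian-Wolf syndrome of these values rather than the values themselves, and the names, thresholds, and quantization step below are illustrative:

```python
import numpy as np

def auth_data(img: np.ndarray, n_proj: int = 64, step: float = 8.0,
              seed: int = 0) -> np.ndarray:
    """Quantized pseudo-random projection of an 8-bit image."""
    rng = np.random.default_rng(seed)
    P = rng.standard_normal((n_proj, img.size))
    return np.round(P @ (img.ravel() / 255.0) / step)

def verify(img: np.ndarray, data: np.ndarray, tol: float = 1.0) -> bool:
    """Accept if every projection matches to within tol quantization steps,
    tolerating small legitimate adjustments but not localized tampering."""
    return bool(np.abs(auth_data(img) - data).max() <= tol)
```

A global brightness shift perturbs each projection only slightly, while a localized edit moves several projections by many quantization steps, which is what makes the projection a useful authentication signature.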
Unbiased Estimation of Refractive State of Aberrated Eyes
Martin, Jesson; Vasudevan, Balamurali; Himebaugh, Nikole; Bradley, Arthur; Thibos, Larry
2011-01-01
To identify unbiased methods for estimating the target vergence required to maximize visual acuity based on wavefront aberration measurements. Experiments were designed to minimize the impact of confounding factors that have hampered previous research. Objective wavefront refractions and subjective acuity refractions were obtained for the same monochromatic wavelength. Accommodation and pupil fluctuations were eliminated by cycloplegia. Unbiased subjective refractions that maximize visual acuity for high contrast letters were performed with a computer-controlled forced-choice staircase procedure, using 0.125 diopter steps of defocus. All experiments were performed for two pupil diameters (3 mm and 6 mm). As reported in the literature, subjective refractive error does not change appreciably when the pupil dilates. For 3 mm pupils most metrics yielded objective refractions that were about 0.1 D more hyperopic than subjective acuity refractions. When pupil diameter increased to 6 mm, this bias changed in the myopic direction and the variability between metrics also increased. These inaccuracies were small compared to the precision of the measurements, which implies that most metrics provided unbiased estimates of refractive state for medium and large pupils. A variety of image quality metrics may be used to determine ocular refractive state for monochromatic (635 nm) light, thereby achieving accurate results without the need for empirical correction factors. PMID:21777601
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-11
... the Office of the Commissioner on the implementation of the FDA Safety and Innovation Act, Business Impact of Outsourcing, Supplier Management Models that Work, Implementing Quality by Design (QbD... quality and management through the following topics: Beyond our Borders--Maximizing the Impact of FDA's...
Power transformation for enhancing responsiveness of quality of life questionnaire.
Zhou, YanYan Ange
2015-01-01
We investigate the effect of power transformation of raw scores on the responsiveness of a quality-of-life survey. The procedure maximizes the paired t-test value on the power-transformed data to obtain an optimal power range. The parallel with the Box-Cox transformation is also investigated for the quality-of-life data.
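A minimal sketch of this search, assuming positive raw scores and a plain grid over candidate powers (the paper's exact procedure and search range may differ):

```python
import numpy as np

def paired_t(x, y):
    """Paired t-statistic computed directly (no SciPy dependency)."""
    d = np.asarray(y, float) - np.asarray(x, float)
    return d.mean() / (d.std(ddof=1) / np.sqrt(d.size))

def best_power(pre, post, powers=np.arange(0.1, 3.01, 0.1)):
    """Grid-search the power that maximizes |t| on transformed scores.

    Illustrative sketch: transform both pre and post scores by the same
    power p and keep the p giving the largest absolute paired t value.
    """
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    ts = [abs(paired_t(pre ** p, post ** p)) for p in powers]
    i = int(np.argmax(ts))
    return float(powers[i]), float(ts[i])

# Hypothetical pre/post quality-of-life scores for five respondents.
pre = np.array([4.0, 5.0, 6.0, 5.5, 4.5])
post = np.array([5.0, 6.5, 7.0, 7.5, 5.5])
p_opt, t_opt = best_power(pre, post)
```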
Synergies and Balance between Values Education and Quality Teaching
ERIC Educational Resources Information Center
Lovat, Terence J.
2010-01-01
The article will focus on the implicit values dimension that is evident in research findings concerning quality teaching. Furthermore, it sets out to demonstrate that maximizing the effects of quality teaching requires explicit attention to this values dimension and that this can be achieved through a well-crafted values education program.…
DairyBeef: maximizing quality and profits--a consistent food safety message.
Moore, D A; Kirk, J H; Klingborg, D J; Garry, F; Wailes, W; Dalton, J; Busboom, J; Sams, R W; Poe, M; Payne, M; Marchello, J; Looper, M; Falk, D; Wright, T
2004-01-01
To respond to meat safety and quality issues in dairy market cattle, a collaborative project team for 7 western states was established to develop educational resources providing a consistent meat safety and quality message to dairy producers, farm advisors, and veterinarians. The team produced an educational website and CD-ROM course that included videos, narrated slide sets, and on-farm tools. The objectives of this course were: 1) to help producers and their advisors understand market cattle food safety and quality issues, 2) to help maintain markets for these cows, and 3) to help producers identify ways to improve the quality of dairy cattle going to slaughter. DairyBeef: Maximizing Quality & Profits consists of 6 sections, including 4 core segments. Successful completion of quizzes following each core segment is required for participants to receive a certificate of completion. A formative evaluation of the program revealed the necessity for minor content and technological changes to the web-based course. All evaluators considered the materials relevant to dairy producers. After editing, the course became available in February 2003. Between February and May 2003, 21 individuals received certificates of completion.
Kamioka, Minao; Sasaki, Motoki; Yamada, Kazutaka; Endo, Hideki; Oishi, Motoharu; Yuhara, Kazutoshi; Tomikawa, Sohei; Sugimoto, Miki; Oshida, Tatsuo; Kondoh, Daisuke; Kitamura, Nobuo
2017-01-24
The ranges of pronation/supination of the forearms of raccoons, raccoon dogs and red pandas were nondestructively examined. Three carcasses of each species were used for CT analysis, and the left forearms were scanned with a CT scanner in two positions: maximal supination and maximal pronation. Scanning data were reconstructed into three-dimensional images, cross-sectional images were extracted at the position showing the largest area in the distal part of the ulna, and the centroids of each cross section of the radius and ulna were detected. CT images of the two positions were superimposed by overlapping the outlines of each ulna, and the centroids were connected by lines to measure the angle of rotation as an index of range of mobility. The measurements in each animal were analyzed using the Tukey-Kramer method. The average angle of rotation was largest in raccoons and smallest in raccoon dogs, and the difference was significant. In the maximally pronated forearm of all species, the posture was almost equal to the usual grounding position, with palms touching the ground. Therefore, the present results demonstrate that the forearms of raccoons can supinate to a greater degree from the grounding position, as compared with those of raccoon dogs and red pandas.
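The angle-of-rotation measurement from the superimposed centroids can be illustrated geometrically; the helper and coordinates below are hypothetical, not taken from the study:

```python
import numpy as np

def rotation_angle(ulna_c, radius_sup, radius_pro):
    """Angle (degrees) between the ulna-to-radius centroid lines in the
    superimposed supinated and pronated cross-sections.

    A geometric sketch of the measurement described above; real inputs
    would be centroids extracted from the CT cross-sections.
    """
    v1 = np.asarray(radius_sup, float) - np.asarray(ulna_c, float)
    v2 = np.asarray(radius_pro, float) - np.asarray(ulna_c, float)
    cosang = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    # Clip guards against tiny floating-point excursions outside [-1, 1].
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
```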
NASA Astrophysics Data System (ADS)
Cash, W.
With the general acceptance of black holes as real entities the astrophysics community has turned its attention to studying their behavior and properties. Because of the great distance and compact size of the central engine, astronomers are currently limited to spectroscopic analysis. But to take a picture, or better yet a movie, of the black hole in silhouette against its accretion disk would be a triumph of exploration and scientific inquiry. Probing to the event horizon is best accomplished in the x-ray band, where material primarily radiates in the last orbits before its final plunge. Not only will the signal be bright and minimally confused in the x-ray, but the size of the required interferometer drops dramatically. We describe MAXIM, the Micro-Arcsecond X-ray Imaging Mission, which is now being studied and developed by NASA. We will explain the preliminary mission concept which will use currently existing technology to achieve spatial resolution one million times higher than that of the Hubble Space Telescope and capture the image of an event horizon in a nearby Active Galactic Nucleus. We will also describe the Maxim Pathfinder. Designed as a stepping stone at resolution of 100 microarcseconds, it will demonstrate the techniques of x-ray interferometry and perform groundbreaking science like resolving the coronae of the nearby stars.
Rucker, F. J.; Osorio, D.
2009-01-01
Longitudinal chromatic aberration is a well-known imperfection of visual optics, but the consequences in natural conditions, and for the evolution of receptor spectral sensitivities are less well understood. This paper examines how chromatic aberration affects image quality in the middle-wavelength sensitive (M-) cones, viewing broad-band spectra, over a range of spatial frequencies and focal planes. We also model the effects on M-cone contrast of moving the M-cone fundamental relative to the long- and middle-wavelength (L- and M-cone) fundamentals, while the eye is accommodated at different focal planes or at a focal plane that maximizes luminance contrast. When the focal plane shifts towards longer (650 nm) or shorter wavelengths (420 nm) the effects on M-cone contrast are large: longitudinal chromatic aberration causes total loss of M-cone contrast above 10 to 20 c/d. In comparison, the shift of the M-cone fundamental causes smaller effects on M-cone contrast. At 10 c/d a shift in the peak of the M-cone spectrum from 560 nm to 460 nm decreases M-cone contrast by 30%, while a 10 nm blue-shift causes only a minor loss of contrast. However, a noticeable loss of contrast may be seen if the eye is focused at focal planes other than that which maximizes luminance contrast. The presence of separate long- and middle-wavelength sensitive cones therefore has a small, but not insignificant cost to the retinal image via longitudinal chromatic aberration. This aberration may therefore be a factor limiting evolution of visual pigments and trichromatic color vision. PMID:18639571
DOE Office of Scientific and Technical Information (OSTI.GOV)
Magallanes, L., E-mail: lorena.magallanes@med.uni-heidelberg.de; Rinaldi, I., E-mail: ilaria.rinaldi@med.uni-heidelberg.de; Brons, S., E-mail: stephan.brons@med.uni-heidelberg.de
External beam radiotherapy techniques have the common aim to maximize the radiation dose to the target while sparing the surrounding healthy tissues. The inverted and finite depth-dose profile of ion beams (Bragg peak) allows for precise dose delivery and conformal dose distribution. Furthermore, the increased radiobiological effectiveness of ions enhances the capability to battle radioresistant tumors. Ion beam therapy requires a precise determination of the ion range, as the delivered dose distribution is particularly sensitive to range uncertainties. Therefore, novel imaging techniques are currently investigated as a tool to improve the quality of ion beam treatments. Approaches already clinically available or under development are based on the detection of secondary particles emitted as a result of nuclear reactions (e.g., positron-annihilation or prompt gammas, charged particles) or transmitted high energy primary ion beams. Transmission imaging techniques make use of the beams exiting the patient, which have higher initial energy and lower fluence than the therapeutic ones. At the Heidelberg Ion Beam Therapy Center, actively scanned energetic proton and carbon ion beams provide an ideal environment for the investigation of ion-based radiography and tomography. This contribution presents the rationale of ion beam therapy, focusing on the role of ion-based transmission imaging methods towards the reduction of range uncertainties and potential improvement of treatment planning.
NASA Astrophysics Data System (ADS)
Bonifazi, Giuseppe; Capobianco, Giuseppe; Serranti, Silvia
2018-06-01
The aim of this work was to recognize different polymer flakes from mixed plastic waste through an innovative hierarchical classification strategy based on hyperspectral imaging, with particular reference to low-density polyethylene (LDPE) and high-density polyethylene (HDPE). A plastic waste composition assessment, also including LDPE and HDPE identification, may help to define optimal recycling strategies for product quality control. Correct handling of plastic waste is essential for its further "sustainable" recovery, maximizing the sorting performance in particular for plastics with similar characteristics such as LDPE and HDPE. Five different plastic waste samples were chosen for the investigation: polypropylene (PP), LDPE, HDPE, polystyrene (PS) and polyvinyl chloride (PVC). A calibration dataset was built using the corresponding virgin polymers. Hyperspectral imaging in the short-wave infrared range (1000-2500 nm) was then applied to evaluate the spectral attributes of the different plastics with a view to their recognition/classification. After exploring polymer spectral differences by principal component analysis (PCA), a hierarchical partial least squares discriminant analysis (PLS-DA) model was built, allowing the five different polymers to be recognized. The proposed hierarchical classification methodology is powerful and fast, recognizing the five different polymers in a single step.
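The exploratory PCA step can be sketched via SVD on mean-centered spectra; this is a generic implementation, not the chemometrics software used in the study:

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Principal-component scores of row-wise observations (e.g. spectra).

    Mean-center, take the SVD, and project onto the leading right singular
    vectors. SVD returns components in decreasing-variance order, so the
    first score column always carries the most variance.
    """
    X = np.asarray(X, float)
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T
```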
A Semi-Automated Image Analysis Procedure for In Situ Plankton Imaging Systems
Bi, Hongsheng; Guo, Zhenhua; Benfield, Mark C.; Fan, Chunlei; Ford, Michael; Shahrestani, Suzan; Sieracki, Jeffery M.
2015-01-01
Plankton imaging systems are capable of providing fine-scale observations that enhance our understanding of key physical and biological processes. However, processing the large volumes of data collected by imaging systems remains a major obstacle for their employment, and existing approaches are designed either for images acquired under laboratory-controlled conditions or within clear waters. In the present study, we developed a semi-automated approach to analyze plankton taxa from images acquired by the ZOOplankton VISualization (ZOOVIS) system within turbid estuarine waters, in Chesapeake Bay. Compared to images from laboratory-controlled conditions or clear waters, images from highly turbid waters are often of relatively low quality and more variable, due to the large number of objects and nonlinear illumination within each image. We first customized a segmentation procedure to locate objects within each image and extracted them for classification. A maximally stable extremal regions algorithm was applied to segment large gelatinous zooplankton, and an adaptive threshold approach was developed to segment small organisms, such as copepods. Unlike in the existing approaches for images acquired under laboratory-controlled conditions or in clear waters, where the target objects are often the majority class, the classification here can be treated as a multi-class classification problem. We customized a two-level hierarchical classification procedure using support vector machines to classify the target objects (< 5%) and remove the non-target objects (> 95%). First, histograms of oriented gradients feature descriptors were constructed for the segmented objects. In the first step all non-target and target objects were classified into different groups: arrow-like, copepod-like, and gelatinous zooplankton. Each object was then passed to a group-specific classifier to remove most non-target objects. 
After the object was classified, an expert or non-expert then manually removed the non-target objects that could not be removed by the procedure. The procedure was tested on 89,419 images collected in Chesapeake Bay, and results were consistent with visual counts with >80% accuracy for all three groups. PMID:26010260
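The adaptive-threshold step for segmenting small organisms can be sketched with a simple local-mean rule; the block size and offset below are illustrative choices, not the study's values:

```python
import numpy as np

def adaptive_threshold(img, block=3, offset=0.0):
    """Mark pixels brighter than their local neighborhood mean plus offset.

    A plain local-mean sketch of adaptive thresholding, robust to the
    nonlinear illumination described above because the threshold varies
    across the image. Production code would use an integral image or a
    library routine for speed.
    """
    img = np.asarray(img, float)
    pad = block // 2
    padded = np.pad(img, pad, mode='edge')
    h, w = img.shape
    out = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            local_mean = padded[i:i + block, j:j + block].mean()
            out[i, j] = img[i, j] > local_mean + offset
    return out
```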
Correction of patient motion in cone-beam CT using 3D-2D registration
NASA Astrophysics Data System (ADS)
Ouadah, S.; Jacobson, M.; Stayman, J. W.; Ehtiati, T.; Weiss, C.; Siewerdsen, J. H.
2017-12-01
Cone-beam CT (CBCT) is increasingly common in guidance of interventional procedures, but can be subject to artifacts arising from patient motion during fairly long (~5-60 s) scan times. We present a fiducial-free method to mitigate motion artifacts using 3D-2D image registration that simultaneously corrects residual errors in the intrinsic and extrinsic parameters of geometric calibration. The 3D-2D registration process registers each projection to a prior 3D image by maximizing gradient orientation using the covariance matrix adaptation-evolution strategy optimizer. The resulting rigid transforms are applied to the system projection matrices, and a 3D image is reconstructed via model-based iterative reconstruction. Phantom experiments were conducted using a Zeego robotic C-arm to image a head phantom undergoing 5-15 cm translations and 5-15° rotations. To further test the algorithm, clinical images were acquired with a CBCT head scanner in which long scan times were susceptible to significant patient motion. CBCT images were reconstructed using a penalized likelihood objective function. For phantom studies the structural similarity (SSIM) between motion-free and motion-corrected images was >0.995, with significant improvement (p < 0.001) compared to the SSIM values of uncorrected images. Additionally, motion-corrected images exhibited a point-spread function with full-width at half maximum comparable to that of the motion-free reference image. Qualitative comparison of the motion-corrupted and motion-corrected clinical images demonstrated a significant improvement in image quality after motion correction. This indicates that the 3D-2D registration method could provide a useful approach to motion artifact correction under assumptions of local rigidity, as in the head, pelvis, and extremities. The method is highly parallelizable, and the automatic correction of residual geometric calibration errors provides added benefit that could be valuable in routine use.
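The structural similarity comparison used above can be illustrated with the single-window (global) form of SSIM; the published metric is computed over local windows, so this is a simplified sketch:

```python
import numpy as np

def ssim_global(x, y, data_range=255.0):
    """Single-window SSIM between two images of equal shape.

    Combines luminance, contrast, and structure terms over the whole image
    rather than local windows; identical images score exactly 1.0.
    """
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```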
Feasibility of an intracranial EEG-fMRI protocol at 3T: risk assessment and image quality.
Boucousis, Shannon M; Beers, Craig A; Cunningham, Cameron J B; Gaxiola-Valdez, Ismael; Pittman, Daniel J; Goodyear, Bradley G; Federico, Paolo
2012-11-15
Integrating intracranial EEG (iEEG) with functional MRI (iEEG-fMRI) may help elucidate mechanisms underlying the generation of seizures. However, the introduction of iEEG electrodes in the MR environment has inherent risk and data quality implications that require consideration prior to clinical use. Previous studies of subdural and depth electrodes have confirmed low risk under specific circumstances at 1.5T and 3T. However, no studies have assessed risk and image quality related to the feasibility of a full iEEG-fMRI protocol. To this end, commercially available platinum subdural grid/strip electrodes (4×5 grid or 1×8 strip) and 4- or 6-contact depth electrodes were secured to the surface of a custom-made phantom mimicking the conductivity of the human brain. Electrode displacement, temperature increase of electrodes and surrounding phantom material, and voltage fluctuations in electrode contacts were measured in a GE Discovery MR750 3T MR scanner during a variety of imaging sequences typical of an iEEG-fMRI protocol. An electrode grid was also used to quantify the spatial extent of susceptibility artifact for typical imaging parameters that maximize BOLD sensitivity at 3T (TR=1500 ms; TE=30 ms; slice thickness=4 mm; matrix=64×64; field-of-view=24 cm). Under standard conditions, all electrodes exhibited no measurable displacement and no clinically significant temperature increase (<1°C) during scans employed in a typical iEEG-fMRI experiment, including 60 min of continuous fMRI. However, high-SAR sequences, such as fast spin-echo (FSE), produced significant heating in almost all scenarios (>2.0°C) that in some cases exceeded 10°C. Induced voltages in the frequency range that could elicit neuronal stimulation (<10 kHz) were well below the threshold of 100 mV. 
fMRI signal intensity was significantly reduced within 20 mm of the electrodes for the imaging parameters used in this study. Thus, for the conditions tested, a full iEEG-fMRI protocol poses a low risk at 3T; however, fMRI sensitivity may be reduced immediately adjacent to the electrodes, and high-SAR sequences must be avoided.
Doi, Ryoichi; Pitiwut, Supachai
2014-01-01
The concept of crop yield maximization has been widely supported. In practice, however, yield maximization does not necessarily lead to maximum socioeconomic welfare. Optimization is therefore necessary to ensure quality of life of farmers and other stakeholders. In Thailand, a rice farmers' network has adopted a promising agricultural system aimed at the optimization of rice farming. Various feasible techniques were flexibly combined. The new system offers technical strengths and minimizes certain difficulties with which the rice farmers once struggled. It has resulted in fairly good yields of up to 8.75 t ha−1 or yield increases of up to 57% (from 4.38 to 6.88 t ha−1). Under the optimization paradigm, the farmers have established diversified sustainable relationships with the paddy fields in terms of ecosystem management through their own self-motivated scientific observations. The system has resulted in good health conditions for the farmers and villagers, financial security, availability of extra time, and additional opportunities and freedom and hence in the improvement of their overall quality of life. The underlying technical and social mechanisms are discussed herein. PMID:25089294
Heterogeneous sharpness for cross-spectral face recognition
NASA Astrophysics Data System (ADS)
Cao, Zhicheng; Schmid, Natalia A.
2017-05-01
Matching images acquired in different electromagnetic bands remains a challenging problem. An example of this type of comparison is matching active or passive infrared (IR) images against a gallery of visible face images, known as cross-spectral face recognition. Among many unsolved issues is the quality disparity of the heterogeneous images. Images acquired in different spectral bands are of unequal image quality due to distinct imaging mechanisms, standoff distances, and imaging environments. To reduce the effect of quality disparity on recognition performance, one can manipulate images to either improve the quality of poor-quality images or degrade the high-quality images to the level of their heterogeneous counterparts. To estimate the level of discrepancy in quality of two heterogeneous images, a quality metric such as image sharpness is needed; it provides guidance on how much quality improvement or degradation is appropriate. In this work we consider sharpness as a relative measure of heterogeneous image quality. We propose a generalized definition of sharpness by first achieving image quality parity and then building a relationship between the image quality of two heterogeneous images; the new sharpness metric is therefore named heterogeneous sharpness. Image quality parity is achieved by experimentally finding the optimal cross-spectral face recognition performance while the quality of the heterogeneous images is varied using a Gaussian smoothing function with different standard deviations. This relationship is established using two models: one involves a regression model and the other a neural network. To train, test and validate the models, we use composite operators developed in our lab to extract features from heterogeneous face images and use the sharpness metric to evaluate the face image quality within each band. 
Images from three different spectral bands (visible light, near infrared, and short-wave infrared) are considered in this work. Both the error of the regression model and the validation error of the neural network are analyzed.
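A relative sharpness score of the kind discussed above can be sketched as mean gradient magnitude; this is an illustrative stand-in, not the authors' heterogeneous sharpness metric:

```python
import numpy as np

def sharpness(img):
    """Mean gradient magnitude as a simple sharpness score.

    Smoothing an image (e.g. with a Gaussian of growing standard deviation)
    lowers this score, so it can rank the relative quality of two
    heterogeneous face images.
    """
    gy, gx = np.gradient(np.asarray(img, float))
    return float(np.hypot(gx, gy).mean())
```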
MO-FG-207-03: Maximizing the Utility of Integrated PET/MRI in Clinical Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Behr, S.
2015-06-15
The use of integrated PET/MRI systems in clinical applications can best benefit from understanding their technological advances and limitations. The currently available clinical PET/MRI systems each have their own characteristics. Thorough analyses of existing technical data and evaluation of the performance metrics necessary for quality assurance could be conducted to optimize application-specific PET/MRI protocols. This Symposium will focus on technical advances and limitations of clinical PET/MRI systems, and how this exciting imaging modality can be utilized in applications that can benefit from both PET and MRI. Learning Objectives: to understand the technological advances of clinical PET/MRI systems; to correctly identify clinical applications that can benefit from PET/MRI; and to understand ongoing work to further improve the current PET/MRI technology. Floris Jansen is a GE Healthcare employee.
Remote Sensing Image Quality Assessment Experiment with Post-Processing
NASA Astrophysics Data System (ADS)
Jiang, W.; Chen, S.; Wang, X.; Huang, Q.; Shi, H.; Man, Y.
2018-04-01
This paper describes a post-processing influence assessment experiment comprising three steps: physical simulation, image processing, and image quality assessment. The physical simulation models a sampled imaging system in the laboratory; the imaging system parameters are measured, and the digital images serving as image-processing input are produced by this imaging system with those same parameters. The gathered optically sampled images are then processed by three digital image processes: calibration pre-processing, lossy compression at different compression ratios, and image post-processing with different kernels. Image quality is assessed by just-noticeable-difference (JND) subjective assessment based on ISO 20462; through subjective assessment of the gathered and processed images, the influence of the different imaging parameters and of post-processing on image quality can be determined. The six JND subjective assessment datasets validate each other. Main conclusions: image post-processing can improve image quality, even in the presence of lossy compression, although the improvement is smaller at higher compression ratios; and with the proposed post-processing method, image quality is better when the camera MTF lies within a small range.
Quiet echo planar imaging for functional and diffusion MRI
Price, Anthony N.; Cordero‐Grande, Lucilio; Malik, Shaihan; Ferrazzi, Giulio; Gaspar, Andreia; Hughes, Emer J.; Christiaens, Daan; McCabe, Laura; Schneider, Torben; Rutherford, Mary A.; Hajnal, Joseph V.
2017-01-01
Purpose To develop a purpose‐built quiet echo planar imaging capability for fetal functional and diffusion scans, for which acoustic considerations often compromise efficiency and resolution as well as angular/temporal coverage. Methods The gradient waveforms in multiband‐accelerated single‐shot echo planar imaging sequences have been redesigned to minimize spectral content. This includes a sinusoidal read‐out with a single fundamental frequency, a constant phase encoding gradient, overlapping smoothed CAIPIRINHA blips, and a novel strategy to merge the crushers in diffusion MRI. These changes are then tuned in conjunction with the gradient system frequency response function. Results Maintained image quality, SNR, and quantitative diffusion values while reducing acoustic noise up to 12 dB (A) is illustrated in two adult experiments. Fetal experiments in 10 subjects covering a range of parameters depict the adaptability and increased efficiency of quiet echo planar imaging. Conclusion Purpose‐built for highly efficient multiband fetal echo planar imaging studies, the presented framework reduces acoustic noise for all echo planar imaging‐based sequences. Full optimization by tuning to the gradient frequency response functions allows for a maximally time‐efficient scan within safe limits. This allows ambitious in‐utero studies such as functional brain imaging with high spatial/temporal resolution and diffusion scans with high angular/spatial resolution to be run in a highly efficient manner at acceptable sound levels. Magn Reson Med 79:1447–1459, 2018. © 2017 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited. PMID:28653363
Study on the Spatial Resolution of Single and Multiple Coincidences Compton Camera
NASA Astrophysics Data System (ADS)
Andreyev, Andriy; Sitek, Arkadiusz; Celler, Anna
2012-10-01
In this paper we study the image resolution that can be obtained from the Multiple Coincidences Compton Camera (MCCC). The principle of MCCC is based on the simultaneous acquisition of several gamma-rays emitted in cascade from a single nucleus. Contrary to a standard Compton camera, MCCC can theoretically provide the exact location of a radioactive source (based only on the identification of the intersection point of three cones created by a single decay), without complicated tomographic reconstruction. However, practical implementation of the MCCC approach encounters several problems, such as low detection sensitivity, which results in a very low probability of the coincident triple gamma-ray detection necessary for source localization. It is also important to evaluate how detection uncertainties (finite energy and spatial resolution) influence identification of the intersection of the three cones, and thus the resulting image quality. In this study we investigate how the spatial resolution of images reconstructed using the triple-cone reconstruction (TCR) approach compares to that of images reconstructed from the same data using a standard iterative single-cone method. Results show that the FWHM for a point source reconstructed with TCR was 20-30% higher than that obtained from standard iterative reconstruction based on the expectation maximization (EM) algorithm and conventional single-cone Compton imaging. Finite energy and spatial resolutions of the MCCC detectors lead to errors in conical surface definitions ("thick" conical surfaces), which are only amplified in image reconstruction when the intersection of three cones is being sought. Our investigations show that, in spite of being conceptually appealing, the identification of the triple-cone intersection constitutes yet another restriction of the multiple-coincidence approach, limiting the image resolution that can be obtained with MCCC and the TCR algorithm.
NASA Astrophysics Data System (ADS)
Vuori, Tero; Olkkonen, Maria
2006-01-01
The aim of the study is to test both customer image quality rating (subjective image quality) and physical measurement of user behavior (eye movement tracking) to find customer satisfaction differences between imaging technologies. A methodological aim is to find out whether eye movements could be used quantitatively in image quality preference studies. In general, we want to map objective or physically measurable image quality to subjective evaluations and eye movement data. We conducted a series of image quality tests, in which the test subjects evaluated image quality while we recorded their eye movements. Results show that eye movement parameters change consistently according to the instructions given to the user and according to physical image quality, e.g. saccade duration increased with increasing blur. Results indicate that eye movement tracking could be used to differentiate the image quality evaluation strategies that users have. Results also show that eye movements would help in mapping between technological and subjective image quality. Furthermore, these results give some empirical emphasis to top-down perception processes in image quality perception and evaluation by showing differences between perceptual processes in situations where the cognitive task varies.
NASA Astrophysics Data System (ADS)
Takahashi, Noriyuki; Kinoshita, Toshibumi; Ohmura, Tomomi; Matsuyama, Eri; Toyoshima, Hideto
2017-03-01
The early diagnosis of idiopathic normal pressure hydrocephalus (iNPH), considered a treatable dementia, is important. iNPH causes enlargement of the lateral ventricles (LVs). The degree of enlargement of the LVs on CT or MR images is evaluated using a diagnostic imaging criterion, the Evans index. The Evans index is defined as the ratio of the maximal width of the frontal horns (FH) of the LVs to the maximal width of the inner skull (IS), and it is the most commonly used parameter for the evaluation of ventricular enlargement. However, manual measurement of the Evans index is a time-consuming process. In this study, we present an automated method to compute the Evans index on brain CT images. The algorithm consists of five major steps: standardization of the CT data to an atlas, extraction of the FH and IS regions, a search for the outermost points of the bilateral FH regions, determination of the maximal widths of both the FH and the IS, and calculation of the Evans index. The standardization to the atlas was performed using linear affine transformation and non-linear warping techniques. The FH regions were segmented using a three-dimensional region growing technique. This scheme was applied to CT scans from 44 subjects, including 13 iNPH patients. The average difference in the Evans index between the proposed method and manual measurement was 0.01 (1.6%), and the correlation coefficient between these data was 0.98. Therefore, this computerized method may have the potential to accurately compute the Evans index for the diagnosis of iNPH on CT images.
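As an illustrative sketch (not the paper's code), the final two steps — taking the maximal widths and forming their ratio — could look like the following, assuming binary masks of the frontal horns and inner skull are already available on the measurement slice; a ratio above roughly 0.3 is the conventional cutoff suggesting ventricular enlargement:

```python
import numpy as np

def max_width(mask):
    """Maximal left-right extent (in pixels) of a binary region on an
    axial slice: the widest span of columns containing foreground."""
    cols = np.where(mask.any(axis=0))[0]
    if cols.size == 0:
        return 0
    return int(cols.max() - cols.min() + 1)

def evans_index(frontal_horn_mask, inner_skull_mask):
    """Evans index: maximal frontal-horn width over maximal
    inner-skull width, measured on the same axial CT slice."""
    return max_width(frontal_horn_mask) / max_width(inner_skull_mask)
```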
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, H; Chen, Z; Nath, R
Purpose: kV fluoroscopic imaging combined with MV treatment beam imaging has been investigated for intrafractional motion monitoring and correction. It is, however, subject to additional kV imaging dose to normal tissue. To balance tracking accuracy and imaging dose, we previously proposed an adaptive imaging strategy to dynamically decide future imaging type and moments based on motion tracking uncertainty. kV imaging may be used continuously for maximal accuracy, or only when the position uncertainty (probability of exceeding a threshold) is high if a preset imaging dose limit is considered. In this work, we propose more accurate methods to estimate tracking uncertainty by analyzing acquired data in real-time. Methods: We simulated the motion tracking process based on a previously developed imaging framework (MV + initial seconds of kV imaging) using real-time breathing data from 42 patients. Motion tracking errors for each time point were collected together with the time point's corresponding features, such as tumor motion speed and the 2D tracking error of previous time points. We tested three methods for error uncertainty estimation based on the features: conditional probability distribution, logistic regression modeling, and support vector machine (SVM) classification to detect errors exceeding a threshold. Results: For the conditional probability distribution, polynomial regressions on three features (previous tracking error, prediction quality, and cosine of the angle between the trajectory and the treatment beam) showed strong correlation with the variation (uncertainty) of the mean 3D tracking error and its standard deviation: R-square = 0.94 and 0.90, respectively. The logistic regression and SVM classification successfully identified about 95% of tracking errors exceeding a 2.5 mm threshold.
Conclusion: The proposed methods can reliably estimate the motion tracking uncertainty in real-time, which can be used to guide adaptive additional imaging to confirm that the tumor is within the margin, or to initiate motion compensation if it is out of the margin.
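The logistic-regression variant can be sketched as follows — a minimal, hypothetical stand-in (plain gradient descent on a single feature), not the authors' trained model:

```python
import numpy as np

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Plain gradient-descent logistic regression: predicts the
    probability that the tracking error exceeds the threshold."""
    X = np.hstack([np.ones((X.shape[0], 1)), X])  # prepend a bias column
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # sigmoid probabilities
        w -= lr * X.T @ (p - y) / len(y)          # mean log-loss gradient
    return w

def predict_exceedance(w, X):
    """Probability that each sample's error exceeds the threshold."""
    X = np.hstack([np.ones((X.shape[0], 1)), X])
    return 1.0 / (1.0 + np.exp(-X @ w))
```

In practice the feature vector would hold the quantities named in the abstract (previous tracking error, prediction quality, trajectory-beam angle cosine) rather than a single synthetic feature.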
Jiang, Shaowei; Liao, Jun; Bian, Zichao; Guo, Kaikai; Zhang, Yongbing; Zheng, Guoan
2018-04-01
A whole slide imaging (WSI) system has recently been approved for primary diagnostic use in the US. The image quality and system throughput of WSI are largely determined by the autofocusing process. Traditional approaches acquire multiple images along the optical axis and maximize a figure of merit for autofocusing. Here we explore the use of deep convolutional neural networks (CNNs) to predict the focal position of the acquired image without axial scanning. We investigate the autofocusing performance with three illumination settings: incoherent Köhler illumination, partially coherent illumination with two plane waves, and one-plane-wave illumination. We acquire ~130,000 images with different defocus distances as the training data set. Different defocus distances lead to different spatial features of the captured images. However, relying solely on the spatial information leads to relatively poor autofocusing performance. It is better to extract defocus features from transform domains of the acquired image. For incoherent illumination, the Fourier cutoff frequency is directly related to the defocus distance. Similarly, autocorrelation peaks are directly related to the defocus distance for two-plane-wave illumination. In our implementation, we use the spatial image, the Fourier spectrum, the autocorrelation of the spatial image, and combinations thereof as the inputs for the CNNs. We show that the information from the transform domains can improve the performance and robustness of the autofocusing process. The resulting focusing error is ~0.5 µm, which is within the 0.8-µm depth-of-field range. The reported approach requires little hardware modification for conventional WSI systems and the images can be captured on the fly without focus map surveying. It may find applications in WSI and time-lapse microscopy. The transform- and multi-domain approaches may also provide new insights for developing microscopy-related deep-learning networks.
We have made our training and testing data set (~12 GB) open-source for the broad research community.
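The transform-domain inputs mentioned above are straightforward to compute; a minimal sketch (assumed details: mean-subtracted image, log-magnitude spectrum, autocorrelation via the Wiener-Khinchin theorem) might look like:

```python
import numpy as np

def transform_domain_features(img):
    """Build three candidate CNN input channels: the spatial image,
    its centered log-magnitude Fourier spectrum, and its
    autocorrelation (inverse FFT of the power spectrum)."""
    img = img.astype(float)
    img = img - img.mean()                 # remove DC so the spectrum is balanced
    F = np.fft.fft2(img)
    spectrum = np.fft.fftshift(np.log1p(np.abs(F)))
    autocorr = np.fft.fftshift(np.real(np.fft.ifft2(np.abs(F) ** 2)))
    return np.stack([img, spectrum, autocorr])
```

The zero-lag autocorrelation peak lands at the array center after the shift; for two-plane-wave illumination, the defocus-dependent side peaks discussed in the abstract would appear symmetrically around it.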
Harvester-based sensing system for cotton fiber-quality mapping
USDA-ARS?s Scientific Manuscript database
Precision agriculture in cotton production attempts to maximize profitability by exploiting information on field spatial variability to optimize the fiber yield and quality. For precision agriculture to be economically viable, collection of spatial variability data within a field must be automated a...
RFC: EPA's Action Plan for Bisphenol A Pursuant to EPA's Data Quality Guidelines
The American Chemistry Council (ACC) submits this Request for Correction to the U.S. Environmental Protection Agency under the Guidelines for Ensuring and Maximizing the Quality, Objectivity, Utility, and Integrity of Information Disseminated by the Environmental Protection Agency
Automatic retinal interest evaluation system (ARIES).
Yin, Fengshou; Wong, Damon Wing Kee; Yow, Ai Ping; Lee, Beng Hai; Quan, Ying; Zhang, Zhuo; Gopalakrishnan, Kavitha; Li, Ruoying; Liu, Jiang
2014-01-01
In recent years, there has been increasing interest in the use of automatic computer-based systems for the detection of eye diseases such as glaucoma, age-related macular degeneration and diabetic retinopathy. However, in practice, retinal image quality is a big concern, as automatic systems that do not account for degraded image quality will likely generate unreliable results. In this paper, an automatic retinal image quality assessment system (ARIES) is introduced to assess both the quality of the whole image and of focal regions of interest. ARIES achieves 99.54% accuracy in distinguishing fundus images from other types of images through a retinal image identification step in a dataset of 35342 images. The system employs high-level image quality measures (HIQM) to perform image quality assessment, and achieves areas under the curve (AUCs) of 0.958 and 0.987 for the whole image and the optic disk region, respectively, in a testing dataset of 370 images. ARIES acts as a form of automatic quality control that ensures good quality images are used for processing, and can also be used to alert operators to poor quality images at the time of acquisition.
CT enterography: Mannitol versus VoLumen.
Wong, Jessica; Moore, Helen; Roger, Mark; McKee, Chris
2016-10-01
Several different neutral oral contrast agents have been trialled in magnetic resonance and CT enterography (CTE). In the Auckland region, Mannitol 2.5% and VoLumen are both used in CTE. This study compares the performance of these two neutral oral contrast agents in CTE. Computed tomography enterography data were collected from 25 consecutive studies that used either Mannitol or VoLumen in 2014. All images were reviewed by three radiologists blinded to the type of oral contrast. Each quadrant was assessed for maximum distension, proportion of bowel loops distended, presence of inhomogeneous content and bowel wall visibility. Assessment also included whether the contrast agent reached the caecum and an overall subjective quality assessment. Patients were invited to answer a questionnaire regarding tolerability of the preparations. Mannitol achieves better wall visibility in the right upper quadrant, left upper quadrant and left lower quadrant (P < 0.01). The overall difference in study quality favours Mannitol (P < 0.01), with 48% of the Mannitol studies being considered excellent compared with 4% of the VoLumen studies. There was no difference in maximal distension or proportion of loops distended. Mannitol in CTE achieves studies of better quality than VoLumen and is a viable alternative to it. © 2016 The Royal Australian and New Zealand College of Radiologists.
Naturalness and interestingness of test images for visual quality evaluation
NASA Astrophysics Data System (ADS)
Halonen, Raisa; Westman, Stina; Oittinen, Pirkko
2011-01-01
Balanced and representative test images are needed to study perceived visual quality in various application domains. This study investigates naturalness and interestingness as image quality attributes in the context of test images. Taking a top-down approach we aim to find the dimensions which constitute naturalness and interestingness in test images and the relationship between these high-level quality attributes. We compare existing collections of test images (e.g. Sony sRGB images, ISO 12640 images, Kodak images, Nokia images and test images developed within our group) in an experiment combining quality sorting and structured interviews. Based on the data gathered we analyze the viewer-supplied criteria for naturalness and interestingness across image types, quality levels and judges. This study advances our understanding of subjective image quality criteria and enables the validation of current test images, furthering their development.
NASA Astrophysics Data System (ADS)
Nyman, G.; Häkkinen, J.; Koivisto, E.-M.; Leisti, T.; Lindroos, P.; Orenius, O.; Virtanen, T.; Vuori, T.
2010-01-01
Subjective image quality data for 9 image processing pipes and 8 image contents (taken with a mobile phone camera; 72 natural-scene test images altogether) were collected from 14 test subjects. A triplet comparison setup and a hybrid qualitative/quantitative methodology were applied. MOS data and spontaneous, subjective image quality attributes for each test image were recorded. The use of positive and negative image quality attributes by the experimental subjects suggested a significant difference between the subjective spaces of low and high image quality. The robustness of the attribute data was shown by correlating DMOS data of the test images against their corresponding average subjective attribute vector length data. The findings demonstrate the information value of spontaneous, subjective image quality attributes in evaluating image quality at variable quality levels. We discuss the implications of these findings for the development of sensitive performance measures and methods in profiling image processing systems and their components, especially at high image quality levels.
Roadway Marking Optics for Autonomous Vehicle Guidance and Other Machine Vision Applications
NASA Astrophysics Data System (ADS)
Konopka, Anthony T.
This work determines optimal planar geometric light source and optical imager configurations and electromagnetic wavelengths for maximizing the reflected signal intensity when using machine vision technology to image roadway markings with embedded spherical glass beads. It is found through a first set of experiments that roadway marking samples exhibiting little or no bead rolling effects are uniformly reflective with respect to the azimuthal angle of observation when measured for retroreflectivity within industry standard 30-meter geometry. A second set of experiments indicate that white roadway markings exhibit higher reflectivity throughout the visible spectrum than yellow roadway markings. A roadway marking optical model capable of being used to determine optimal geometric light source and optical imager configurations for maximizing the reflected signal intensities of roadway marking targets is constructed and simulated using optical engineering software. It is found through a third set of experiments that high signal intensities can be measured when the polar angles of the light source and optical imager along a plane normal to a roadway marking are equal, with the maximum signal intensity being measured when the polar angles of both the light source and optical imager are 90°.
Multi-Satellite Scheduling Approach for Dynamic Areal Tasks Triggered by Emergent Disasters
NASA Astrophysics Data System (ADS)
Niu, X. N.; Zhai, X. J.; Tang, H.; Wu, L. X.
2016-06-01
The process of satellite mission scheduling, which plays a significant role in rapid response to emergent disasters, e.g. earthquakes, is used to allocate observation resources and execution time to a series of imaging tasks by maximizing one or more objectives while satisfying certain given constraints. In practice, the information obtained about the disaster situation changes dynamically, which accordingly leads to dynamic imaging requirements from users. We propose a satellite scheduling model to address dynamic imaging tasks triggered by emergent disasters. The goal of the proposed model is to meet the emergency response requirements so as to produce an imaging plan that acquires rapid and effective information of the affected area. In the model, the reward of the schedule is maximized. To solve the model, we first present a dynamic segmenting algorithm to partition area targets. Then a dynamic heuristic algorithm embedding a greedy criterion is designed to obtain the optimal solution. To evaluate the model, we conducted experimental simulations based on the scenario of the Wenchuan earthquake. The results show that the simulated imaging plan can schedule satellites to observe a wider scope of the target area. We conclude that our satellite scheduling model can optimize the usage of satellite resources so as to obtain images in disaster response in a more timely and efficient manner.
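A greedy criterion of the kind mentioned can be sketched as follows — a deliberately simplified, hypothetical version that ranks candidate imaging tasks by reward per unit observation time under a single capacity constraint (the real scheduler must also respect orbital visibility windows and other constraints):

```python
def greedy_schedule(tasks, capacity):
    """Greedy heuristic: take imaging tasks in decreasing
    reward-per-unit-time order until observation capacity runs out.
    Each task is a (name, reward, observation_time) tuple."""
    chosen, used, reward = [], 0.0, 0.0
    for name, r, t in sorted(tasks, key=lambda x: x[1] / x[2], reverse=True):
        if used + t <= capacity:
            chosen.append(name)
            used += t
            reward += r
    return chosen, reward
```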
Movement measurement of isolated skeletal muscle using imaging microscopy
NASA Astrophysics Data System (ADS)
Elias, David; Zepeda, Hugo; Leija, Lorenzo S.; Sossa, Humberto; de la Rosa, Jose I.
1997-05-01
An imaging-microscopy methodology to measure contraction movement in chemically stimulated crustacean skeletal muscle, whose movement speed is about 0.02 mm/s, is presented. For this, we use a CCD camera coupled to a microscope and a high-speed digital image acquisition system that allows us to capture 960 images per second. The images are digitally processed in a PC and displayed on a video monitor. A maximal field of 0.198 × 0.198 mm² and a spatial resolution of 3.5 micrometers are obtained.
Automatic classification and detection of clinically relevant images for diabetic retinopathy
NASA Astrophysics Data System (ADS)
Xu, Xinyu; Li, Baoxin
2008-03-01
We propose a novel approach to the automatic classification of Diabetic Retinopathy (DR) images and the retrieval of clinically relevant DR images from a database. Given a query image, our approach first classifies the image into one of three categories: microaneurysm (MA), neovascularization (NV) and normal, and then it retrieves DR images that are clinically relevant to the query image from an archival image database. In the classification stage, the query DR images are classified by the Multi-class Multiple-Instance Learning (McMIL) approach, where images are viewed as bags, each of which contains a number of instances corresponding to non-overlapping blocks, and each block is characterized by low-level features including color, texture, histogram of edge directions, and shape. McMIL first learns a collection of instance prototypes for each class that maximizes the Diverse Density function using the Expectation-Maximization algorithm. A nonlinear mapping is then defined using the instance prototypes, which maps every bag to a point in a new multi-class bag feature space. Finally, a multi-class Support Vector Machine is trained in the multi-class bag feature space. In the retrieval stage, we retrieve images from the archival database that bear the same label as the query image and that are the top K nearest neighbors of the query image in terms of similarity in the multi-class bag feature space. The classification approach achieves high classification accuracy, and the retrieval of clinically relevant images not only facilitates utilization of the vast amount of hidden diagnostic knowledge in the database, but also improves the efficiency and accuracy of DR lesion diagnosis and assessment.
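The bag-to-point mapping can be sketched as below — a simplified, assumed form in which each learned instance prototype contributes the best (closest-instance) Gaussian response; the actual McMIL mapping may differ:

```python
import numpy as np

def bag_to_feature(bag, prototypes, sigma=1.0):
    """Map a bag (instances x features) to a fixed-length vector: for
    each instance prototype, keep the strongest instance response, a
    Gaussian of the squared distance to that prototype."""
    bag = np.asarray(bag, float)
    feats = []
    for p in prototypes:
        d2 = ((bag - np.asarray(p, float)) ** 2).sum(axis=1)
        feats.append(np.exp(-d2.min() / (2 * sigma ** 2)))
    return np.array(feats)
```

Every bag, whatever its instance count, lands in the same fixed-dimension space, which is what makes the subsequent multi-class SVM training possible.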
Officer Professional Military Education: a New Distance Learning Evolution
2015-01-01
diversity, and maximize its most valued asset: human capital. Bibliography Broughton, R. (n.d.). Dr. Deming Point 13 Institute a Vigorous Program of...Education and Retraining. Retrieved 4/23/2015, from Quality Assurance Solutions: www.quality-assurance-solutions.com/Deming-Point-13.html
DOT National Transportation Integrated Search
2010-02-01
This project developed a methodology to couple a new pollutant dispersion model with a traffic : assignment process to contain air pollution while maximizing mobility. The overall objective of the air : quality modeling part of the project is to deve...
CZT sensors for Computed Tomography: from crystal growth to image quality
NASA Astrophysics Data System (ADS)
Iniewski, K.
2016-12-01
Recent advances in Traveling Heater Method (THM) growth and device fabrication that require additional processing steps have enabled dramatic improvements in hole transport properties and reduced polarization effects in Cadmium Zinc Telluride (CZT) material. As a result, high-flux operation of CZT sensors at rates in excess of 200 Mcps/mm² is now possible and has enabled multiple medical imaging companies to start building prototype Computed Tomography (CT) scanners. CZT sensors are also finding new commercial applications in non-destructive testing (NDT) and baggage scanning. In order to prepare for high-volume commercial production we are moving from individual tile processing to whole-wafer processing using silicon methodologies, such as waxless processing and cassette-based, touchless wafer handling. We have been developing parametric-level screening at the wafer stage to ensure high wafer quality before detector fabrication in order to maximize production yields. These process improvements enable us, and other CZT manufacturers who pursue similar developments, to provide high-volume production for photon counting applications in an economically feasible manner. CZT sensors are capable of delivering both high count rates and high-resolution spectroscopic performance, although it is challenging to achieve both of these attributes simultaneously. The paper discusses material challenges, detector design trade-offs and ASIC architectures required to build cost-effective CZT-based detection systems. Photon counting ASICs are an essential part of the integrated module platforms, as the charge-sensitive electronics must deal with charge-sharing and pile-up effects.
An adaptive technique to maximize lossless image data compression of satellite images
NASA Technical Reports Server (NTRS)
Stewart, Robert J.; Lure, Y. M. Fleming; Liou, C. S. Joe
1994-01-01
Data compression will play an increasingly important role in the storage and transmission of image data within NASA science programs as the Earth Observing System comes into operation. It is important that the science data be preserved at the fidelity the instrument and the satellite communication systems were designed to produce. Lossless compression must therefore be applied, at least, to archive the processed instrument data. In this paper, we present an analysis of the performance of lossless compression techniques and develop an adaptive approach that applies image remapping, feature-based image segmentation to determine regions of similar entropy, and high-order arithmetic coding to obtain significant improvements over the use of conventional compression techniques alone. Image remapping is used to transform the original image into a lower-entropy state. Several techniques were tested on satellite images, including differential pulse code modulation, bi-linear interpolation, and block-based linear predictive coding. The results of these experiments are discussed, and trade-offs between computation requirements and entropy reductions are used to identify the optimum approach for a variety of satellite images. Further entropy reduction can be achieved by segmenting the image based on local entropy properties and then applying a coding technique that maximizes compression for the region. Experimental results are presented showing the effect of different coding techniques on regions of different entropy. A rule base is developed through which the technique giving the best compression is selected. The paper concludes that maximum compression can be achieved cost-effectively and at acceptable performance rates with a combination of techniques that are selected based on image contextual information.
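The core selection step — compare candidate remappings by the entropy of their output and keep the lowest — can be sketched as follows (only two illustrative candidates, raw and horizontal DPCM, are shown; the paper evaluates several more):

```python
import numpy as np

def entropy_bits(block):
    """First-order Shannon entropy (bits/pixel) of an integer block."""
    _, counts = np.unique(block, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def best_remapping(img):
    """Pick the remapping whose residual has the lowest entropy --
    here just raw vs. horizontal DPCM (difference of neighbors),
    as a stand-in for the full candidate set."""
    img = img.astype(np.int64)
    dpcm = np.diff(img, axis=1)            # horizontal prediction residual
    candidates = {"raw": img, "dpcm": dpcm}
    return min(candidates, key=lambda k: entropy_bits(candidates[k]))
```

On smooth imagery the residual entropy is far below the raw entropy, which is exactly why remapping before arithmetic coding pays off.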
Noise Estimation and Quality Assessment of Gaussian Noise Corrupted Images
NASA Astrophysics Data System (ADS)
Kamble, V. M.; Bhurchandi, K.
2018-03-01
Evaluating the exact quantity of noise present in an image, and the quality of an image in the absence of a reference image, are challenging tasks. We propose a near-perfect noise estimation method and a no-reference image quality assessment method for images corrupted by Gaussian noise. The proposed methods obtain an initial estimate of the noise standard deviation present in an image using the median of the wavelet transform coefficients, and then obtain a near-exact estimate using curve fitting. The proposed noise estimation method provides the estimate of noise within an average error of ±4%. For quality assessment, this noise estimate is mapped to fit the Differential Mean Opinion Score (DMOS) using a nonlinear function. The proposed methods require minimal training and yield the noise estimate and image quality score. Images from the Laboratory for Image and Video Processing (LIVE) database and the Computational Perception and Image Quality (CSIQ) database are used for validation of the proposed quality assessment method. Experimental results show that the performance of the proposed quality assessment method is on par with existing no-reference image quality assessment metrics for Gaussian noise corrupted images.
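A common form of the median-based initial estimate (assumed here; the paper's exact variant may differ) takes the median absolute value of the diagonal-detail wavelet coefficients and divides by 0.6745, the median absolute deviation of a unit Gaussian:

```python
import numpy as np

def haar_hh(img):
    """Diagonal-detail (HH) subband of a one-level Haar transform."""
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    return (a - b - c + d) / 2.0   # orthonormal normalization: unit gain on noise

def estimate_sigma(img):
    """Robust median estimator of the Gaussian noise standard
    deviation: sigma ~ median(|HH|) / 0.6745."""
    hh = haar_hh(img.astype(float))
    return float(np.median(np.abs(hh)) / 0.6745)
```

The HH subband is used because most natural-image structure lives in the low-frequency subbands, leaving HH dominated by noise; the subsequent curve-fitting stage in the paper then refines this initial value.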
Skedgel, Chris; Wailoo, Allan; Akehurst, Ron
2015-01-01
Economic theory suggests that resources should be allocated in a way that produces the greatest outputs, on the grounds that maximizing output allows for a redistribution that could benefit everyone. In health care, this is known as QALY (quality-adjusted life-year) maximization. This justification for QALY maximization may not hold, though, as it is difficult to reallocate health. Therefore, the allocation of health care should be seen as a matter of distributive justice as well as efficiency. A discrete choice experiment was undertaken to test consistency with the principles of QALY maximization and to quantify the willingness to trade life-year gains for distributive justice. An empirical ethics process was used to identify attributes that appeared relevant and ethically justified: patient age, severity (decomposed into initial quality and life expectancy), final health state, duration of benefit, and distributional concerns. Only 3% of respondents maximized QALYs with every choice, but scenarios with larger aggregate QALY gains were chosen more often and a majority of respondents maximized QALYs in a majority of their choices. However, respondents also appeared willing to prioritize smaller gains to preferred groups over larger gains to less preferred groups. Marginal analyses found a statistically significant preference for younger patients and a wider distribution of gains, as well as an aversion to patients with the shortest life expectancy or a poor final health state. These results support the existence of an equity-efficiency tradeoff and suggest that well-being could be enhanced by giving priority to programs that best satisfy societal preferences. Societal preferences could be incorporated through the use of explicit equity weights, although more research is required before such weights can be used in priority setting. © The Author(s) 2014.
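For concreteness, pure QALY maximization amounts to the following toy computation (entirely illustrative numbers): fund whichever program yields the largest aggregate quality-adjusted gain, regardless of how that gain is distributed:

```python
def qalys(quality_gain, years, patients):
    """Aggregate QALYs: per-patient quality gain x duration x cohort size."""
    return quality_gain * years * patients

def qaly_maximizing_choice(programs):
    """Pure QALY maximization: pick the program with the largest
    aggregate gain, ignoring any distributive concerns.
    Each program is a (name, quality_gain, years, patients) tuple."""
    return max(programs, key=lambda p: qalys(*p[1:]))[0]
```

The discrete choice experiment in the abstract probes exactly when respondents deviate from this rule, e.g. preferring a smaller gain to a more favored or more widely spread group.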
Satellite image collection optimization
NASA Astrophysics Data System (ADS)
Martin, William
2002-09-01
Imaging satellite systems represent a high capital cost. Optimizing the collection of images is critical both for satisfying customer orders and for building a sustainable satellite operations business. We describe the functions of an operational, multivariable, time-dynamic optimization system that maximizes the daily collection of satellite images. A graphical user interface allows the operator to quickly see the results of "what if" adjustments to an image collection plan. Used for both long-range planning and daily collection scheduling of Space Imaging's IKONOS satellite, the satellite control and tasking (SCT) software allows collection commands to be altered up to 10 min before upload to the satellite.
NASA Astrophysics Data System (ADS)
Hutton, Brian F.; Lau, Yiu H.
1998-06-01
Compensation for distance-dependent resolution can be directly incorporated in maximum likelihood reconstruction. Our objective was to examine the effectiveness of this compensation using either the standard expectation maximization (EM) algorithm or an accelerated algorithm based on the use of ordered subsets (OSEM). We also investigated the application of post-reconstruction filtering in combination with resolution compensation. Using the MCAT phantom, projections were simulated for … data, including attenuation and distance-dependent resolution. Projection data were reconstructed using conventional EM and OSEM with subset sizes 2 and 4, with/without 3D compensation for detector response (CDR). Post-reconstruction filtering (PRF) was also performed using a 3D Butterworth filter of order 5 with various cutoff frequencies (0.2–…). Image quality and reconstruction accuracy were improved when CDR was included. Image noise was lower with CDR for a given iteration number. PRF with a cutoff frequency greater than … improved noise with no reduction in the recovery coefficient for the myocardium, but the effect was smaller when CDR was incorporated in the reconstruction. CDR alone provided better results than PRF without CDR. Results suggest that using CDR without PRF, and stopping at a small number of iterations, may provide sufficiently good results for myocardial SPECT. Similar behaviour was demonstrated for OSEM.
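The EM algorithm referred to above has a compact multiplicative form; the sketch below (illustrative only, with detector-response compensation folded into the system matrix A) shows one way it is commonly written:

```python
import numpy as np

def mlem(A, counts, n_iter=50):
    """Maximum-likelihood EM for emission tomography: the multiplicative
    update x <- x * [A^T (counts / (A x))] / (A^T 1). Compensation for
    detector response (CDR) enters through the rows of A."""
    x = np.ones(A.shape[1])                   # uniform nonnegative start
    sens = A.sum(axis=0)                      # A^T 1, the sensitivity image
    for _ in range(n_iter):
        proj = A @ x                          # forward projection
        ratio = np.where(proj > 0, counts / proj, 0.0)
        x = x * (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

OSEM accelerates this by applying the same update over ordered subsets of the projection data, so each pass over the data performs several updates instead of one.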
Celtikci, Emrah; Celtikci, Pinar; Fernandes-Cabral, David Tiago; Ucar, Murat; Fernandez-Miranda, Juan Carlos; Borcek, Alp Ozgun
2017-02-01
Thalamopeduncular tumors (TPTs) of childhood present a challenge for neurosurgeons due to their eloquent location. Preoperative fiber tracking provides total or near-total resection, without additional neurologic deficit. High-definition fiber tractography (HDFT) is an advanced white matter imaging technique derived from magnetic resonance imaging diffusion data, shown to overcome the limitations of diffusion tensor imaging. We aimed to investigate alterations of corticospinal tract (CST) and medial lemniscus (ML) caused by TPTs and to demonstrate the application of HDFT in preoperative planning. Three pediatric patients with TPTs were enrolled. CSTs and MLs were evaluated for displacement, infiltration, and disruption. The relationship of these tracts to tumors was identified and guided surgical planning. Literature was reviewed for publications on pediatric thalamic and TPTs that used diffusion imaging. Two patients had histologic diagnosis of pilocytic astrocytoma. One patient whose imaging suggested a low-grade glioma was managed conservatively. All tracts were displaced (1 CST anteriorly, 2 CSTs, 1 ML anteromedially, 1 ML medially, and 1 ML posteromedially). Literature review revealed 2 publications with 15 pilocytic astrocytoma cases, which investigated CST only. The condition of sensory pathway or anteromedial displacement of the CST in these tumors was not reported previously. Displacement patterns of the perilesional fiber bundles by TPTs are not predictable. Fiber tracking, preferably HDFT, should be part of preoperative planning to achieve maximal extent of resection for longer survival rates in this young group of patients, while preserving white matter tracts and thus quality of life. Copyright © 2016 Elsevier Inc. All rights reserved.
Simulations of a micro-PET system based on liquid xenon
NASA Astrophysics Data System (ADS)
Miceli, A.; Glister, J.; Andreyev, A.; Bryman, D.; Kurchaninov, L.; Lu, P.; Muennich, A.; Retiere, F.; Sossi, V.
2012-03-01
The imaging performance of a high-resolution preclinical micro-positron emission tomography (micro-PET) system employing liquid xenon (LXe) as the gamma-ray detection medium was simulated. The arrangement comprises a ring of detectors consisting of trapezoidal LXe time projection ionization chambers and two arrays of large area avalanche photodiodes for the measurement of ionization charge and scintillation light. A key feature of the LXePET system is the ability to identify individual photon interactions with high energy resolution and high spatial resolution in three dimensions and determine the correct interaction sequence using Compton reconstruction algorithms. The simulated LXePET imaging performance was evaluated by computing the noise equivalent count rate, the sensitivity and point spread function for a point source according to the NEMA-NU4 standard. The image quality was studied with a micro-Derenzo phantom. Results of these simulation studies included noise equivalent count rate peaking at 1326 kcps at 188 MBq (705 kcps at 184 MBq) for an energy window of 450-600 keV and a coincidence window of 1 ns for mouse (rat) phantoms. The absolute sensitivity at the center of the field of view was 12.6%. Radial, tangential and axial resolutions of 22Na point sources reconstructed with a list-mode maximum likelihood expectation maximization algorithm were ⩽0.8 mm (full-width at half-maximum) throughout the field of view. Hot-rod inserts of <0.8 mm diameter were resolvable in the transaxial image of a micro-Derenzo phantom. The simulations show that a LXe system would provide new capabilities for significantly enhancing PET images.
A flexible, small positron emission tomography prototype for resource-limited laboratories
NASA Astrophysics Data System (ADS)
Miranda-Menchaca, A.; Martínez-Dávalos, A.; Murrieta-Rodríguez, T.; Alva-Sánchez, H.; Rodríguez-Villafuerte, M.
2015-05-01
Modern small-animal PET scanners typically consist of a large number of detectors along with complex electronics to provide tomographic images for research in the preclinical sciences that use animal models. These systems can be expensive, especially for resource-limited educational and academic institutions in developing countries. In this work we show that a small-animal PET scanner can be built with a relatively reduced budget while, at the same time, achieving relatively high performance. The prototype consists of four detector modules each composed of LYSO pixelated crystal arrays (individual crystal elements of dimensions 1 × 1 × 10 mm3) coupled to position-sensitive photomultiplier tubes. Tomographic images are obtained by rotating the subject to complete enough projections for image reconstruction. Image quality was evaluated for different reconstruction algorithms including filtered back-projection and iterative reconstruction with maximum likelihood-expectation maximization and maximum a posteriori methods. The system matrix was computed both with geometric considerations and by Monte Carlo simulations. Prior to image reconstruction, Fourier data rebinning was used to increase the number of lines of response used. The system was evaluated for energy resolution at 511 keV (best 18.2%), system sensitivity (0.24%), spatial resolution (best 0.87 mm), scatter fraction (4.8%) and noise equivalent count-rate. The system can be scaled-up to include up to 8 detector modules, increasing detection efficiency, and its price may be reduced as newer solid state detectors become available replacing the traditional photomultiplier tubes. Prototypes like this may prove to be very valuable for educational, training, preclinical and other biological research purposes.
NASA Astrophysics Data System (ADS)
Rudolph, Tobias; Ebert, Lars; Kowal, Jens
2006-03-01
Supporting surgeons in performing minimally invasive surgeries can be considered one of the major goals of computer assisted surgery. Excellent intraoperative visualization is a prerequisite to achieve this aim. The Siremobil Iso-C 3D has become a widely used imaging device, which, in combination with a navigation system, enables the surgeon to directly navigate within the acquired 3D image volume without any extra registration steps. However, the image quality is rather low compared to a CT scan and the volume size (approx. 12 cm³) limits its application. A regularly used alternative in computer assisted orthopedic surgery is the use of a preoperatively acquired CT scan to visualize the operating field. However, the additional registration step necessary in order to use CT stacks for navigation is quite invasive. Therefore the objective of this work is to develop a noninvasive registration technique. In this article a solution is proposed that registers a preoperatively acquired CT scan to the intraoperatively acquired Iso-C 3D image volume, thereby registering the CT to the tracked anatomy. The procedure aligns both image volumes by maximizing the mutual information, an algorithm that has already been applied to similar registration problems and demonstrated good results. Furthermore, the accuracy of the registration method was investigated in a clinical setup, integrating a navigated Iso-C 3D in combination with a tracking system. Initial tests based on cadaveric animal bone resulted in an accuracy ranging from 0.63 mm to 1.55 mm mean error.
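The mutual-information criterion used for this CT-to-Iso-C registration can be illustrated with a small sketch (illustrative only; `mutual_information` and the 8-bin joint histogram are my assumptions, not the authors' implementation). A rigid registration would search over candidate transforms for the pose that maximizes this score.

```python
import math
from collections import Counter

def mutual_information(img_a, img_b, bins=8):
    """Estimate mutual information (in nats) between two equally sized
    grayscale images, given as flat lists of intensities in [0, 256)."""
    assert len(img_a) == len(img_b)
    n = len(img_a)
    width = 256 // bins
    # Joint and marginal intensity histograms over coarse bins.
    joint = Counter((a // width, b // width) for a, b in zip(img_a, img_b))
    pa = Counter(a // width for a in img_a)
    pb = Counter(b // width for b in img_b)
    mi = 0.0
    for (i, j), c in joint.items():
        pij = c / n
        mi += pij * math.log(pij * n * n / (pa[i] * pb[j]))
    return mi
```

Perfectly aligned identical images give the maximal score (the marginal entropy), while statistically independent images score near zero, which is why maximizing MI aligns multimodal volumes without needing identical intensities.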
Roguin, Ariel; Zviman, Menekhem M.; Meininger, Glenn R.; Rodrigues, E. Rene; Dickfeld, Timm M.; Bluemke, David A.; Lardo, Albert; Berger, Ronald D.; Calkins, Hugh; Halperin, Henry R.
2011-01-01
Background MRI has unparalleled soft-tissue imaging capabilities. The presence of devices such as pacemakers and implantable cardioverter/defibrillators (ICDs), however, is historically considered a contraindication to MRI. These devices are now smaller, with less magnetic material and improved electromagnetic interference protection. Our aim was to determine whether these modern systems can be used in an MR environment. Methods and Results We tested in vitro and in vivo lead heating, device function, force acting on the device, and image distortion at 1.5 T. Clinical MR protocols and in vivo measurements yielded temperature changes <0.5°C. Older (manufactured before 2000) ICDs were damaged by the MR scans. Newer ICD systems and most pacemakers, however, were not. The maximal force acting on newer devices was <100 g. Modern (manufactured after 2000) ICD systems were implanted in dogs (n=18), and after 4 weeks, 3- to 4-hour MR scans were performed (n=15). No device dysfunction occurred. The images were of high quality with distortion dependent on the scan sequence and plane. Pacing threshold and intracardiac electrogram amplitude were unchanged over the 8 weeks, except in 1 animal that, after MRI, had a transient (<12 hours) capture failure. Pathological data of the scanned animals revealed very limited necrosis or fibrosis at the tip of the lead area, which was not different from controls (n=3) not subjected to MRI. Conclusions These data suggest that certain modern pacemaker and ICD systems may indeed be MRI safe. This may have major clinical implications for current imaging practices. PMID:15277324
NASA Astrophysics Data System (ADS)
Wen, Gezheng; Markey, Mia K.
2015-03-01
It is resource-intensive to conduct human studies for task-based assessment of medical image quality and system optimization. Thus, numerical model observers have been developed as a surrogate for human observers. The Hotelling observer (HO) is the optimal linear observer for signal-detection tasks, but the high dimensionality of imaging data results in a heavy computational burden. Channelization is often used to approximate the HO through a dimensionality reduction step, but how to produce channelized images without losing significant image information remains a key challenge. Kernel local Fisher discriminant analysis (KLFDA) uses kernel techniques to perform supervised dimensionality reduction, which finds an embedding transformation that maximizes between-class separability and preserves within-class local structure in the low-dimensional manifold. It is powerful for classification tasks, especially when the distribution of a class is multimodal. Such multimodality could be observed in many practical clinical tasks. For example, primary and metastatic lesions may both appear in medical imaging studies, but the distributions of their typical characteristics (e.g., size) may be very different. In this study, we propose to use KLFDA as a novel channelization method. The dimension of the embedded manifold (i.e., the result of KLFDA) is a counterpart to the number of channels in the state-of-the-art linear channelization. We present a simulation study to demonstrate the potential usefulness of KLFDA for building the channelized HOs (CHOs) and generating reliable decision statistics for clinical tasks. We show that the performance of the CHO with KLFDA channels is comparable to that of the benchmark CHOs.
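For intuition, the unchannelized Hotelling detectability that channelization tries to approximate is d'² = Δs᷉ᵀ K⁻¹ Δs, where Δs is the mean signal difference and K the image covariance. A toy two-pixel version can be written out directly (a sketch; the function name and the hand-coded 2×2 inverse are illustrative, and real image covariances are far too large to invert directly, which is exactly the motivation for channelization):

```python
def hotelling_dprime(delta_s, cov):
    """Hotelling observer detectability d' for a 2-pixel image:
    d'^2 = delta_s^T K^{-1} delta_s, with the 2x2 inverse written out."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = ((d / det, -b / det), (-c / det, a / det))
    s0, s1 = delta_s
    t0 = inv[0][0] * s0 + inv[0][1] * s1
    t1 = inv[1][0] * s0 + inv[1][1] * s1
    return (s0 * t0 + s1 * t1) ** 0.5
```

With white noise (identity covariance), d' reduces to the Euclidean norm of the signal difference; correlated noise reweights the pixels through K⁻¹.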
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seibert, J; Imbergamo, P
The expansion and integration of diagnostic imaging technologies such as On Board Imaging (OBI) and Cone Beam Computed Tomography (CBCT) into radiation oncology has required radiation oncology physicists to be responsible for and become familiar with assessing image quality. Unfortunately many radiation oncology physicists have had little or no training or experience in measuring and assessing image quality. Many physicists have turned to automated QA analysis software without having a fundamental understanding of image quality measures. This session will review the basic image quality measures of imaging technologies used in the radiation oncology clinic, such as low contrast resolution, high contrast resolution, uniformity, noise, and contrast scale, and how to measure and assess them in a meaningful way. Additionally a discussion of the implementation of an image quality assurance program in compliance with Task Group recommendations will be presented along with the advantages and disadvantages of automated analysis methods. Learning Objectives: Review and understanding of the fundamentals of image quality. Review and understanding of the basic image quality measures of imaging modalities used in the radiation oncology clinic. Understand how to implement an image quality assurance program and to assess basic image quality measures in a meaningful way.
Using cover crops and cropping systems for nitrogen management
USDA-ARS?s Scientific Manuscript database
The reasons for using cover crops and optimized cropping sequences to manage nitrogen (N) are to maximize economic returns, improve soil quality and productivity, and minimize losses of N that might adversely impact environmental quality. Cover crops and cropping systems’ effects on N management are...
NASA Astrophysics Data System (ADS)
Wallhead, Ian; Ocaña, Roberto
2014-05-01
Laser projection devices should be designed to maximize their luminous efficacy and color gamut. This is for two main reasons. Firstly, being either stand alone devices or embedded in other products, they could be powered by battery, and lifetime is an important factor. Secondly, the increasing use of lasers to project images calls for a consideration of eye safety issues. The brightness of the projected image may be limited by the Class II accessible emission limit. There is reason to believe that current laser beam scanning projection technology is already close to the power ceiling based on eye safety limits. Consequently, it would be desirable to improve luminous efficacy to increase the output luminous flux whilst maintaining or improving color gamut for the same eye-safe optical power limit. Here we present a novel study about the combination of four laser wavelengths in order to maximize both color gamut and efficacy to produce the color white. Firstly, an analytic method to calculate efficacy as function of both four laser wavelengths and four laser powers is derived. Secondly we provide a new way to present the results by providing the diagram efficacy vs color gamut area that summarizes the performance of any wavelength combination for projection purposes. The results indicate that the maximal efficacy for the D65 white is only achievable by using a suitable combination of both laser power ratios and wavelengths.
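The efficacy figure described above follows from weighting each laser line by the photopic luminosity function: K = 683 lm/W × Σᵢ Pᵢ V(λᵢ) / Σᵢ Pᵢ. The sketch below is hedged: the Gaussian stand-in for V(λ) and both function names are my assumptions, and a real design study would use the tabulated CIE data rather than this crude approximation.

```python
import math

def v_lambda(nm):
    # Crude Gaussian stand-in for the CIE photopic luminosity function
    # V(lambda): peak 1.0 at 555 nm. Real designs should use the
    # tabulated CIE values instead of this approximation.
    return math.exp(-((nm - 555.0) / 60.0) ** 2 / 2)

def luminous_efficacy(wavelengths_nm, powers_w):
    """Luminous efficacy (lm/W) of a mixture of laser lines:
    K = 683 * sum(P_i * V(lambda_i)) / sum(P_i)."""
    num = sum(p * v_lambda(w) for w, p in zip(wavelengths_nm, powers_w))
    return 683.0 * num / sum(powers_w)
```

This makes the paper's trade-off concrete: pushing the red and blue primaries outward widens the gamut but lowers V(λ) at those wavelengths, so efficacy drops for the same eye-safe power budget.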
The effect of image quality, repeated study, and assessment method on anatomy learning.
Fenesi, Barbara; Mackinnon, Chelsea; Cheng, Lucia; Kim, Joseph A; Wainman, Bruce C
2017-06-01
Two-dimensional (2D) images are consistently used to prepare anatomy students for handling real specimens. This study examined whether the quality of 2D images is a critical component in anatomy learning. The visual clarity and consistency of 2D anatomical images were systematically manipulated to produce low-quality and high-quality images of the human hand and human eye. On day 0, participants learned about each anatomical specimen from paper booklets using either low-quality or high-quality images, and then completed a comprehension test using either 2D images or three-dimensional (3D) cadaveric specimens. On day 1, participants relearned each booklet, and on day 2 participants completed a final comprehension test using either 2D images or 3D cadaveric specimens. The effect of image quality on learning varied according to anatomical content, with high-quality images having a greater effect on improving learning of hand anatomy than eye anatomy (high-quality vs. low-quality for hand anatomy P = 0.018; high-quality vs. low-quality for eye anatomy P = 0.247). Also, the benefit of high-quality images on hand anatomy learning was restricted to performance on short-answer (SA) questions immediately after learning (high-quality vs. low-quality on SA questions P = 0.018), but did not apply to performance on multiple-choice (MC) questions (high-quality vs. low-quality on MC questions P = 0.109) or after participants had an additional learning opportunity (24 hours later) with anatomy content (high vs. low on SA questions P = 0.643). This study underscores the limited impact of image quality on anatomy learning, and questions whether investment in enhancing the image quality of learning aids significantly promotes knowledge development. Anat Sci Educ 10: 249-261. © 2016 American Association of Anatomists.
Disk Density Tuning of a Maximal Random Packing
Ebeida, Mohamed S.; Rushdi, Ahmad A.; Awad, Muhammad A.; Mahmoud, Ahmed H.; Yan, Dong-Ming; English, Shawn A.; Owens, John D.; Bajaj, Chandrajit L.; Mitchell, Scott A.
2016-01-01
We introduce an algorithmic framework for tuning the spatial density of disks in a maximal random packing, without changing the sizing function or radii of disks. Starting from any maximal random packing such as a Maximal Poisson-disk Sampling (MPS), we iteratively relocate, inject (add), or eject (remove) disks, using a set of three successively more-aggressive local operations. We may achieve a user-defined density, either more dense or more sparse, almost up to the theoretical structured limits. The tuned samples are conflict-free, retain coverage maximality, and, except in the extremes, retain the blue noise randomness properties of the input. We change the density of the packing one disk at a time, maintaining the minimum disk separation distance and the maximum domain coverage distance required of any maximal packing. These properties are local, and we can handle spatially-varying sizing functions. Using fewer points to satisfy a sizing function improves the efficiency of some applications. We apply the framework to improve the quality of meshes, removing non-obtuse angles; and to more accurately model fiber reinforced polymers for elastic and failure simulations. PMID:27563162
Stantzou, Amalia; Ueberschlag-Pitiot, Vanessa; Thomasson, Remi; Furling, Denis; Bonnieu, Anne; Amthor, Helge; Ferry, Arnaud
2017-02-01
The effect of constitutive inactivation of the gene encoding myostatin on the gain in muscle performance during postnatal growth has not been well characterized. We analyzed 2 murine myostatin knockout (KO) models, (i) the Lee model (KO Lee) and (ii) the Grobet model (KO Grobet), and measured the contraction of the tibialis anterior muscle in situ. Absolute maximal isometric force was increased in 6-month-old KO Lee and KO Grobet mice, as compared to wild-type mice. Similarly, absolute maximal power was increased in 6-month-old KO Lee mice. In contrast, specific maximal force (relative maximal force per unit of muscle mass) was decreased in all 6-month-old male and female KO mice, except in 6-month-old female KO Grobet mice, whereas specific maximal power was reduced only in male KO Lee mice. Genetic inactivation of myostatin increases maximal force and power, but in return it reduces muscle quality, particularly in male mice. Muscle Nerve 55: 254-261, 2017. © 2016 Wiley Periodicals, Inc.
Clinical image quality evaluation for panoramic radiography in Korean dental clinics
Choi, Bo-Ram; Choi, Da-Hye; Huh, Kyung-Hoe; Yi, Won-Jin; Heo, Min-Suk; Choi, Soon-Chul; Bae, Kwang-Hak
2012-01-01
Purpose The purpose of this study was to investigate the level of clinical image quality of panoramic radiographs and to analyze the parameters that influence the overall image quality. Materials and Methods Korean dental clinics were asked to provide three randomly selected panoramic radiographs. An oral and maxillofacial radiology specialist evaluated those images using our self-developed Clinical Image Quality Evaluation Chart. Three evaluators classified the overall image quality of the panoramic radiographs and evaluated the causes of imaging errors. Results A total of 297 panoramic radiographs were collected from 99 dental hospitals and clinics. The mean of the scores according to the Clinical Image Quality Evaluation Chart was 79.9. In the classification of the overall image quality, 17 images were deemed 'optimal for obtaining diagnostic information,' 153 were 'adequate for diagnosis,' 109 were 'poor but diagnosable,' and nine were 'unrecognizable and too poor for diagnosis'. The results of the analysis of the causes of the errors in all the images are as follows: 139 errors in the positioning, 135 in the processing, 50 from the radiographic unit, and 13 due to anatomic abnormality. Conclusion Panoramic radiographs taken at local dental clinics generally have a normal or higher-level image quality. Principal factors affecting image quality were positioning of the patient and image density, sharpness, and contrast. Therefore, when images are taken, the patient position should be adjusted with great care. Also, standardizing objective criteria of image density, sharpness, and contrast is required to evaluate image quality effectively. PMID:23071969
Newbury, Dale E; Ritchie, Nicholas W M
2011-01-01
The high throughput of the silicon drift detector energy dispersive X-ray spectrometer (SDD-EDS) enables X-ray spectrum imaging (XSI) in the scanning electron microscope to be performed in frame times of 10-100 s, the typical time needed to record a high-quality backscattered electron (BSE) image. These short-duration XSIs can reveal all elements, except H, He, and Li, present as major constituents, defined as 0.1 mass fraction (10 wt%) or higher, as well as minor constituents in the range 0.01-0.1 mass fraction, depending on the particular composition and possible interferences. Although BSEs have a greater abundance by a factor of 100 compared with characteristic X-rays, the strong compositional contrast in element-specific X-ray maps enables XSI mapping to compete with BSE imaging to reveal compositional features. Differences in the fraction of the interaction volume sampled by the BSE and X-ray signals lead to more delocalization of the X-ray signal at abrupt compositional boundaries, resulting in poorer spatial resolution. Improved resolution in X-ray elemental maps occurs for the case of a small feature composed of intermediate to high atomic number elements embedded in a matrix of lower atomic number elements. XSI imaging strongly complements BSE imaging, and the SDD-EDS technology enables an efficient combined BSE-XSI measurement strategy that maximizes the compositional information. If 10 s or more are available for the measurement of an area of interest, the analyst should always record the combined BSE-XSI information to gain the advantages of both measures of compositional contrast. Copyright © 2011 Wiley Periodicals, Inc.
Chang, Kevin J; Collins, Scott; Li, Baojun; Mayo-Smith, William W
2017-06-01
For assessment of the effect of varying the peak kilovoltage (kVp), the adaptive statistical iterative reconstruction technique (ASiR), and automatic dose modulation on radiation dose and image noise in a human cadaver, a cadaver torso underwent CT scanning at 80, 100, 120 and 140 kVp, each at ASiR settings of 0, 30 and 50%, and noise indices (NIs) of 5.5, 11 and 22. The volume CT dose index (CTDIvol), image noise, and attenuation values of liver and fat were analyzed for 20 data sets. Size-specific dose estimates (SSDEs) and liver-to-fat contrast-to-noise ratios (CNRs) were calculated. Values for different combinations of kVp, ASiR, and NI were compared. The CTDIvol varied by a power of 2 with kVp values between 80 and 140 without ASiR. Increasing ASiR levels allowed a larger decrease in CTDIvol and SSDE at higher kVp than at lower kVp while image noise was held constant. In addition, CTDIvol and SSDE decreased with increasing NI at each kVp, but the decrease was greater at higher kVp than at lower kVp. Image noise increased with decreasing kVp despite a fixed NI; however, this noise could be offset with the use of ASiR. The CT number of the liver remained unchanged whereas that of fat decreased as the kVp decreased. Image noise and dose vary in a complicated manner when the kVp, ASiR, and NI are varied in a human cadaver. Optimization of CT protocols will require balancing the effects of each of these parameters to maximize image quality while minimizing dose.
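Two of the reported quantities are simple to compute once regions of interest are measured. The sketch below is illustrative: the function names are mine, and the quadratic kVp scaling encodes only the approximate "power of 2" behavior this study reports without ASiR, not a general law.

```python
def cnr(mu_roi, mu_bkg, noise_sd):
    """Contrast-to-noise ratio between two regions, e.g. liver vs. fat:
    CNR = |mu_roi - mu_bkg| / sigma (all in HU)."""
    return abs(mu_roi - mu_bkg) / noise_sd

def scaled_ctdivol(ctdi_ref, kvp_ref, kvp):
    """Dose at a new tube potential, assuming the roughly quadratic
    CTDIvol ~ kVp^2 relationship observed for this scanner."""
    return ctdi_ref * (kvp / kvp_ref) ** 2
```

For example, a liver mean of 60 HU against fat at -100 HU with 20 HU of noise gives a CNR of 8, and dropping from 120 to 80 kVp at fixed mAs would cut CTDIvol by roughly (2/3)² under this model.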
Time-of-flight PET image reconstruction using origin ensembles.
Wülker, Christian; Sitek, Arkadiusz; Prevrhal, Sven
2015-03-07
The origin ensemble (OE) algorithm is a novel statistical method for minimum-mean-square-error (MMSE) reconstruction of emission tomography data. This method allows one to perform reconstruction entirely in the image domain, i.e. without the use of forward and backprojection operations. We have investigated the OE algorithm in the context of list-mode (LM) time-of-flight (TOF) PET reconstruction. In this paper, we provide a general introduction to MMSE reconstruction, and a statistically rigorous derivation of the OE algorithm. We show how to efficiently incorporate TOF information into the reconstruction process, and how to correct for random coincidences and scattered events. To examine the feasibility of LM-TOF MMSE reconstruction with the OE algorithm, we applied MMSE-OE and standard maximum-likelihood expectation-maximization (ML-EM) reconstruction to LM-TOF phantom data with a count number typically registered in clinical PET examinations. We analyzed the convergence behavior of the OE algorithm, and compared reconstruction time and image quality to that of the EM algorithm. In summary, during the reconstruction process, MMSE-OE contrast recovery (CRV) remained approximately the same, while background variability (BV) gradually decreased with an increasing number of OE iterations. The final MMSE-OE images exhibited lower BV and a slightly lower CRV than the corresponding ML-EM images. The reconstruction time of the OE algorithm was approximately 1.3 times longer. At the same time, the OE algorithm can inherently provide a comprehensive statistical characterization of the acquired data. This characterization can be utilized for further data processing, e.g. in kinetic analysis and image registration, making the OE algorithm a promising approach in a variety of applications.
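The ML-EM baseline that the OE algorithm is compared against has a compact multiplicative form. Here is a minimal, unoptimized sketch (illustrative; the function name is mine, and list-mode, TOF, randoms, and scatter handling are all omitted):

```python
def ml_em(system, counts, n_iter=50):
    """Minimal ML-EM for emission tomography.
    system[i][j]: probability a decay in voxel j is detected in bin i.
    counts[i]: measured counts in detector bin i."""
    nvox = len(system[0])
    # Sensitivity of each voxel (column sums of the system matrix).
    sens = [sum(row[j] for row in system) for j in range(nvox)]
    x = [1.0] * nvox  # uniform initial estimate
    for _ in range(n_iter):
        # Forward projection of the current estimate.
        fp = [sum(aij * xj for aij, xj in zip(row, x)) for row in system]
        # Multiplicative update: backproject the measured/expected ratio.
        for j in range(nvox):
            back = sum(counts[i] / fp[i] * system[i][j]
                       for i in range(len(system)) if fp[i] > 0)
            x[j] *= back / sens[j]
    return x
```

The contrast with OE is visible in the structure: every ML-EM iteration needs a forward and back projection, whereas OE moves event origins entirely in the image domain.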
The study of surgical image quality evaluation system by subjective quality factor method
NASA Astrophysics Data System (ADS)
Zhang, Jian J.; Xuan, Jason R.; Yang, Xirong; Yu, Honggang; Koullick, Edouard
2016-03-01
The GreenLight™ procedure is an effective and economical treatment for benign prostatic hyperplasia (BPH); almost a million patients worldwide have been treated with GreenLight™. During the surgical procedure, the surgeon or physician relies on the monitoring video system to survey and confirm the surgical progress. Several obstructions can greatly affect the image quality of the monitoring video, such as laser glare from tissue and body fluid, air bubbles and debris generated by tissue evaporation, and bleeding, to name a few. In order to improve the physician's visual experience of a laser surgical procedure, the system performance parameters related to image quality need to be well defined. However, since image quality is the integrated set of perceptions of the overall degree of excellence of an image, or in other words the perceptually weighted combination of significant attributes (contrast, graininess, etc.) of an image considered in its marketplace or application, there is no standard definition of overall image or video quality, especially for the no-reference case (without a standard chart as reference). In this study, Subjective Quality Factor (SQF) and acutance are used for no-reference image quality evaluation. Basic image quality parameters, such as sharpness, color accuracy, size of obstruction, and transmission of obstruction, are used as subparameters to define the rating scale for image quality evaluation or comparison. Sample image groups were evaluated by human observers according to the rating scale. Surveys of physician groups were also conducted with lab-generated sample videos. The study shows that human subjective perception is a trustworthy way of evaluating image quality. A more systematic investigation of the relationship between video quality and the image quality of each frame will be conducted as a future study.
Sadowski, Franklin G.; Covington, Steven J.
1987-01-01
Advanced digital processing techniques were applied to Landsat-5 Thematic Mapper (TM) data and SPOT high-resolution visible (HRV) panchromatic data to maximize the utility of images of a nuclear powerplant emergency at Chernobyl in the Soviet Ukraine. The images demonstrate the unique interpretive capabilities provided by the numerous spectral bands of the Thematic Mapper and the high spatial resolution of the SPOT HRV sensor.
Douglas, Pamela; Iskandrian, Ami E; Krumholz, Harlan M; Gillam, Linda; Hendel, Robert; Jollis, James; Peterson, Eric; Chen, Jersey; Masoudi, Frederick; Mohler, Emile; McNamara, Robert L; Patel, Manesh R; Spertus, John
2006-11-21
Cardiovascular imaging has enjoyed both rapid technological advances and sustained growth, yet less attention has been focused on quality than in other areas of cardiovascular medicine. To address this deficit, representatives from cardiovascular imaging societies, private payers, government agencies, the medical imaging industry, and experts in quality measurement met, and this report provides an overview of the discussions. A consensus definition of quality in imaging and a convergence of opinion on quality measures across imaging modalities was achieved and are intended to be the start of a process culminating in the development, dissemination, and adoption of quality measures for all cardiovascular imaging modalities.
Mission-Related Execution and Planning Through Quality of Service Methods
2010-06-01
which maximizes a mission effectiveness function is the ideal driver of QoS mechanisms. Quality of Service may also exist in other... However, service quality is the originating concept of QoS and is the level of performance which one entity expects from another, including non-IT SoSs... Service quality may also be reflected in the context of a system's purpose or an organization's mission. Putting level of service values and
Quantitative intact specimen magnetic resonance microscopy at 3.0 T.
Bath, Kevin G; Voss, Henning U; Jing, Deqiang; Anderson, Stewart; Hempstead, Barbara; Lee, Francis S; Dyke, Jonathan P; Ballon, Douglas J
2009-06-01
In this report, we discuss the application of a methodology for high-contrast, high-resolution magnetic resonance microscopy (MRM) of murine tissue using a 3.0-T imaging system. We employed a threefold strategy that included customized specimen preparation to maximize image contrast, three-dimensional data acquisition to minimize scan time and custom radiofrequency resonator design to maximize signal sensitivity. Images had a resolution of 100 × 78 × 78 μm³ with a signal-to-noise ratio per voxel greater than 25:1 and excellent contrast-to-noise ratios over a 30-min acquisition. We quantitatively validated the methods through comparisons of neuroanatomy across two lines of genetically engineered mice. Specifically, we were able to detect volumetric differences of as little as 9% between genetically engineered mouse strains in multiple brain regions that were predictive of underlying impairments in brain development. The overall methodology was straightforward to implement and provides ready access to basic MRM at field strengths that are widely available in both the laboratory and the clinic.
Nateghi, Ramin; Danyali, Habibollah; Helfroush, Mohammad Sadegh
2017-08-14
Based on the Nottingham criteria, the number of mitosis cells in histopathological slides is an important factor in the diagnosis and grading of breast cancer. For manual grading of mitosis cells, histopathology slides of the tissue are examined by pathologists at 40× magnification for each patient. This task is difficult and time-consuming even for experts. In this paper, a fully automated method is presented for accurate detection of mitosis cells in histopathology slide images. First, a method based on maximum likelihood is employed for segmentation and extraction of mitosis cells. Then a novel Maximized Inter-class Weighted Mean (MIWM) method is proposed that aims at reducing the number of extracted non-mitosis candidates, thereby reducing the false-positive mitosis detection rate. Finally, segmented candidates are classified into mitosis and non-mitosis classes using a support vector machine (SVM) classifier. Experimental results demonstrate a significant improvement in the accuracy of mitosis cell detection in different grades of breast cancer histopathological images.
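The maximum-likelihood segmentation step can be illustrated in its generic form (a sketch under the assumption of per-class Gaussian intensity models; `ml_label` and its parameters are hypothetical, not the paper's exact formulation, and the MIWM candidate-pruning step is not reproduced here):

```python
import math

def ml_label(x, params):
    """Maximum-likelihood labeling of a pixel intensity x under
    per-class Gaussian models. `params` maps class name -> (mean, sd);
    the class with the highest log-likelihood wins."""
    def loglik(mean, sd):
        return -math.log(sd) - 0.5 * ((x - mean) / sd) ** 2
    return max(params, key=lambda c: loglik(*params[c]))
```

Applying such a rule pixel-by-pixel yields candidate mitosis regions, which a later classifier (the SVM in this paper) then accepts or rejects based on richer features.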
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalantari, F; Wang, J; Li, T
2015-06-15
Purpose: In conventional 4D-PET, images from different frames are reconstructed individually and aligned by registration methods. Two issues with these approaches are: 1) reconstruction algorithms do not make full use of all projection statistics; and 2) image registration between noisy images can result in poor alignment. In this study we investigated the use of the simultaneous motion estimation and image reconstruction (SMEIR) method, originally developed for cone-beam CT, for motion estimation/correction in 4D-PET. Methods: A modified ordered-subset expectation maximization algorithm coupled with total variation minimization (OSEM-TV) is used to obtain a primary motion-compensated PET (pmc-PET) from all projection data, using Demons-derived deformation vector fields (DVFs) as the initialization. A motion model update is then performed to obtain an optimal set of DVFs between the pmc-PET and the other phases by matching the forward projection of the deformed pmc-PET to the measured projections of those phases. Using the updated DVFs, OSEM-TV image reconstruction is repeated and new DVFs are estimated based on the updated images. A 4D XCAT phantom with a typical FDG biodistribution and a 10-mm-diameter tumor was used to evaluate the performance of the SMEIR algorithm. Results: Image quality of 4D-PET is greatly improved by the SMEIR algorithm. When all projections are used to reconstruct a 3D-PET, motion blurring artifacts are present, leading to a more than fivefold overestimation of the tumor size and a 54% underestimation of the tumor-to-lung contrast ratio. This error was reduced to 37% and 20% for post-reconstruction registration methods and SMEIR, respectively. Conclusion: The SMEIR method can be used for motion estimation/correction in 4D-PET. The statistics are greatly improved since all projection data are combined to update the image. The performance of the SMEIR algorithm for 4D-PET is sensitive to the smoothness control parameters in the DVF estimation step.
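The tomographic core that SMEIR repeats between motion-model updates is an EM update. A minimal 1-D MLEM sketch (no ordered subsets, no TV penalty, no motion model; the toy system matrix and all names are assumptions for illustration):

```python
def mlem(a, counts, n_iter=20):
    # Toy 1-D MLEM: a[i][j] is the contribution of image voxel j to detector bin i.
    n_bins, n_vox = len(a), len(a[0])
    x = [1.0] * n_vox  # uniform initial image
    # Sensitivity image: total system response seen by each voxel.
    sens = [sum(a[i][j] for i in range(n_bins)) for j in range(n_vox)]
    for _ in range(n_iter):
        # Forward-project the current image, compare to measured counts,
        # back-project the ratio, and apply the multiplicative EM update.
        proj = [sum(a[i][j] * x[j] for j in range(n_vox)) for i in range(n_bins)]
        ratio = [c / p if p > 0 else 0.0 for c, p in zip(counts, proj)]
        x = [x[j] * sum(a[i][j] * ratio[i] for i in range(n_bins)) / sens[j]
             for j in range(n_vox)]
    return x

recon = mlem([[1.0, 0.0], [0.0, 1.0]], [4.0, 9.0])
```

Looping over ordered subsets of the bins and inserting a total-variation step after each update would bring this skeleton closer to the OSEM-TV used above.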
Blind image quality assessment without training on human opinion scores
NASA Astrophysics Data System (ADS)
Mittal, Anish; Soundararajan, Rajiv; Muralidhar, Gautam S.; Bovik, Alan C.; Ghosh, Joydeep
2013-03-01
We propose a family of image quality assessment (IQA) models based on natural scene statistics (NSS) that can predict the subjective quality of a distorted image without reference to a corresponding distortionless image and without any training on human opinion scores of distorted images. These `completely blind' models compete well with standard non-blind image quality indices in terms of subjective predictive performance when tested on the large, publicly available `LIVE' Image Quality database.
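Models of this `completely blind' family (NIQE is the published example) are typically built on mean-subtracted contrast-normalized (MSCN) coefficients, whose statistics deviate from natural-image behavior under distortion. A 1-D sketch of the normalization only; window size and stabilizing constant are illustrative choices, not the paper's:

```python
def mscn(signal, w=1, c=1.0):
    # Mean-subtracted contrast-normalized coefficients over a sliding window:
    # (x - local_mean) / (local_std + c). c avoids division by zero in flat areas.
    n = len(signal)
    out = []
    for i in range(n):
        win = signal[max(0, i - w): i + w + 1]
        mu = sum(win) / len(win)
        var = sum((v - mu) ** 2 for v in win) / len(win)
        out.append((signal[i] - mu) / (var ** 0.5 + c))
    return out

print(mscn([5.0, 5.0, 5.0]))  # a flat patch normalizes to all zeros
```

A blind model would then fit a statistical model (e.g. a multivariate Gaussian) to such coefficients from pristine images and score distorted images by their distance from it.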
Examining the Quality of Outdoor Play in Chinese Kindergartens
ERIC Educational Resources Information Center
Hu, Bi Ying; Li, Kejian; De Marco, Allison; Chen, Yuewen
2015-01-01
The benefits of outdoor play for children's well-rounded development are maximized when children experience enjoyment and, at the same time, gain physical, motor, cognitive, and social-emotional competence. This study examined the quality of outdoor play in Chinese kindergartens, the dominant form of full-day early childhood education program…
75 FR 28255 - Agency Information Collection Activities: Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-20
... Center). AHRQ is the lead agency charged with supporting research designed to improve the quality of... are designed to help these decision makers use research evidence to maximize the benefits of health... DEPARTMENT OF HEALTH AND HUMAN SERVICES Agency for Healthcare Research and Quality Agency...
75 FR 52347 - Agency Information Collection Activities: Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-25
... Center). AHRQ is the lead agency charged with supporting research designed to improve the quality of... are designed to help these decision makers use research evidence to maximize the benefits of health... DEPARTMENT OF HEALTH AND HUMAN SERVICES Agency for Healthcare Research and Quality Agency...
75 FR 44796 - Agency Information Collection Activities: Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-29
... Center). AHRQ is the lead agency charged with supporting research designed to improve the quality of... are designed to help these decision makers use research evidence to maximize the benefits of health... DEPARTMENT OF HEALTH AND HUMAN SERVICES Agency for Healthcare Research and Quality Agency...
Ginning picker and stripper harvested high plains cotton - update
USDA-ARS?s Scientific Manuscript database
Texas High Plains cotton has improved over the last ten years with regard to yield and High Volume Instrument (HVI) fiber quality. Harvesting and ginning practices are needed which preserve fiber quality and maximize return to the producer. The objective of this work is to investigate the influence ...
JPEG2000 still image coding quality.
Chen, Tzong-Jer; Lin, Sheng-Chieh; Lin, You-Chen; Cheng, Ren-Gui; Lin, Li-Hui; Wu, Wei
2013-10-01
This work compares the image quality delivered by two popular JPEG2000 programs. The two medical image compression implementations are both JPEG2000 coders, but they differ in interface, convenience, speed of computation, and characteristic options influenced by the encoder, quantization, tiling, etc. The differences in image quality and compression ratio are also affected by the modality and by the compression algorithm implementation. Do they provide the same quality? The qualities of compressed medical images from two image compression programs, Apollo and JJ2000, were evaluated extensively using objective metrics. These algorithms were applied to three medical image modalities at compression ratios ranging from 10:1 to 100:1, and the quality of the reconstructed images was then evaluated using five objective metrics. Spearman rank correlation coefficients were measured under every metric for the two programs. We found that JJ2000 and Apollo exhibited indistinguishable image quality for all images evaluated using the above five metrics (r > 0.98, p < 0.001). It can be concluded that the image quality of the JJ2000 and Apollo algorithms is statistically equivalent for medical image compression.
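The evaluation pipeline above reduces to objective metrics plus rank correlation. A minimal sketch with PSNR as one representative metric (the paper's exact metric set is not reproduced here) and a Spearman coefficient that omits tie handling for brevity:

```python
import math

def psnr(ref, test, peak=255.0):
    # Peak signal-to-noise ratio in dB between two equal-length pixel sequences.
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    return float('inf') if mse == 0 else 10 * math.log10(peak ** 2 / mse)

def spearman(x, y):
    # Spearman rank correlation via the classic 1 - 6*sum(d^2)/(n(n^2-1))
    # formula; assumes no ties (tied data needs the averaged-rank variant).
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```

In a study like the one above, `spearman` would be applied to per-image metric scores from the two codecs to test whether they rank images identically.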
Fast, Accurate and Shift-Varying Line Projections for Iterative Reconstruction Using the GPU
Pratx, Guillem; Chinn, Garry; Olcott, Peter D.; Levin, Craig S.
2013-01-01
List-mode processing provides an efficient way to deal with sparse projections in iterative image reconstruction for emission tomography. An issue often reported is the tremendous amount of computation required by such algorithms: each recorded event requires several back- and forward line projections. We investigated the use of the programmable graphics processing unit (GPU) to accelerate the line-projection operations and implement fully-3D list-mode ordered-subsets expectation-maximization for positron emission tomography (PET). We designed a reconstruction approach that incorporates resolution kernels, which model the spatially-varying physical processes associated with photon emission, transport and detection. Our development is particularly suitable for applications where the projection data are sparse, such as high-resolution, dynamic, and time-of-flight PET reconstruction. The GPU approach runs more than 50 times faster than an equivalent CPU implementation while image quality and accuracy are virtually identical. This paper describes in detail how the GPU can be used to accelerate the line-projection operations, even when the lines-of-response have arbitrary endpoint locations and shift-varying resolution kernels are used. A quantitative evaluation is included to validate the correctness of this new approach. PMID:19244015
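Each list-mode event contributes a forward and back line projection along its line-of-response. As a toy CPU stand-in, the sketch below traverses pixels with integer Bresenham stepping; production projectors (Siddon-style tracing, or the paper's GPU kernels with shift-varying resolution models) compute exact intersection lengths instead. All names here are illustrative:

```python
def bresenham(x0, y0, x1, y1):
    # Integer line traversal: the pixels a line-of-response crosses.
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    pts = []
    while True:
        pts.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return pts

def forward_project(img, line):
    # Sum image values along one line-of-response (unit weight per pixel;
    # a real projector would weight by intersection length and a kernel).
    return sum(img[y][x] for x, y in bresenham(*line))
```

The GPU speedup in the paper comes from running many such per-event traversals in parallel, one thread (or thread group) per line-of-response.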
Kim, Taewoo
2017-03-01
This study examines the perceptual basis of diagnostic virtuosity in East Asian medicine, combining Merleau-Ponty's phenomenology and an ethnographic investigation of Korean medicine in South Korea. A novice, being exposed to numerous clinical transactions during apprenticeship, organizes perceptual experience that occurs between him or herself and patients. In the process, the fledgling practitioner's body begins to set up a medically-tinged "intentionality" interconnecting his or her consciousness and medically significant qualities in patients. Diagnostic virtuosity is gained when the practitioner embodies a cultivated medical intentionality. In the process of becoming a practitioner imbued with virtuosity, this study focuses on the East Asian notion of "Image" that maximizes the body's perceptual capacity, and minimizes possible reductions by linguistic re-presentation. "Image" enables the practitioner to somatically conceptualize the core notions of East Asian medicine, such as Yin-Yang, and to use them as an embodied litmus as the practitioner's cultivated body instinctively conjures up medical notions at clinical encounters. In line with anthropological critiques of reductionist frameworks that congeal human existential and perceptual vitality within a "scientific" explanatory model, this article attempts to provide an example of various knowing and caring practices, institutionalized external to the culture of science.
Micro*scope: a new internet resource for microbiology teaching
NASA Astrophysics Data System (ADS)
Patterson, D. J.; Sogin, M. L.
Micro-organisms are major players in all natural ecosystems, have dominated the Earth's biosphere for most of its existence, and have determined the character of the habitable planet. Yet a lack of adequate educational resources hinders the appreciation of microbial diversity and ecology. micro*scope is a new internet initiative which aims to provide resources to students and teachers. The site has five major domains. Classification: a comprehensive hierarchical classification of all prokaryotes and protists to the level of genus; the classification is used to navigate to further information. UbIO software: new software for the management of names and classification schemes, allowing all known names for the same organisms to be mapped against each other to maximize the recovery of information. Images: about 3500 images are available, with high-quality versions available for download. Outward internet links: the web site prompts the user to explore more authoritative or specialist sites to find further information on any species or taxon being visited. Educational resources: simple-to-use Lucid guides are included to help students and scientists identify micro-organisms through the internet. Other resources are also being assembled. The site is still under development.
Kato, W; Wong, M
1975-01-01
A system for improving the quality of cinefilm and for maintaining quality control is described. Objective criteria for contrast, resolution, and grain structure were established to measure the effects of varying X-ray dose, f-stop, development temperature, and selection of film and developer. We found that all variables must be adjusted to maximize the viewing quality and that similar density curves can be achieved independently of the choice of film and developer.
NASA Technical Reports Server (NTRS)
Eliason, E.; Hansen, C. J.; McEwen, A.; Delamere, W. A.; Bridges, N.; Grant, J.; Gulich, V.; Herkenhoff, K.; Keszthelyi, L.; Kirk, R.
2003-01-01
Science return from the Mars Reconnaissance Orbiter (MRO) High Resolution Imaging Science Experiment (HiRISE) will be optimized by maximizing science participation in the experiment. MRO is expected to arrive at Mars in March 2006, and the primary science phase begins near the end of 2006 after aerobraking (6 months) and a transition phase. The primary science phase lasts for almost 2 Earth years, followed by a 2-year relay phase in which science observations by MRO are expected to continue. We expect to acquire approx. 10,000 images with HiRISE over the course of MRO's two earth-year mission. HiRISE can acquire images with a ground sampling dimension of as little as 30 cm (from a typical altitude of 300 km), in up to 3 colors, and many targets will be re-imaged for stereo. With such high spatial resolution, the percent coverage of Mars will be very limited in spite of the relatively high data rate of MRO (approx. 10x greater than MGS or Odyssey). We expect to cover approx. 1% of Mars at approx. 1m/pixel or better, approx. 0.1% at full resolution, and approx. 0.05% in color or in stereo. Therefore, the placement of each HiRISE image must be carefully considered in order to maximize the scientific return from MRO. We believe that every observation should be the result of a mini research project based on pre-existing datasets. During operations, we will need a large database of carefully researched 'suggested' observations to select from. The HiRISE team is dedicated to involving the broad Mars community in creating this database, to the fullest degree that is both practical and legal. The philosophy of the team and the design of the ground data system are geared to enabling community involvement. A key aspect of this is that image data will be made available to the planetary community for science analysis as quickly as possible to encourage feedback and new ideas for targets.
Park, Kay-Hyun; Chung, Suryeun; Kim, Dong Jung; Kim, Jun Sung; Lim, Cheong
2017-05-01
For a moderately dilated ascending aorta (diameter 35-54 mm), current guidelines recommend continuous annual or semi-annual examinations with computed tomography or magnetic resonance imaging. However, few data have shown the yield and benefit of such a protocol. This study aimed to investigate the fate of a moderately dilated ascending aorta and thereby determine the adequate imaging interval. In our institutional database, we identified adult patients having an ascending aortic diameter ≥40 mm in contrast-enhanced computed tomography and follow-up imaging(s) after ≥1 year. In the 509 patients (mean age 67.2 ± 10.4 years) enrolled in the study, the maximal diameter of the ascending aorta was compared between the first and last images. Also, their medical records were reviewed to investigate the associated illness and clinical events. The mean growth rate of the patients with a 40-44 mm (n = 321), 45-49 mm (n = 142) and ≥50 mm (n = 46) ascending aorta was 0.3 ± 0.5, 0.3 ± 0.5 and 0.7 ± 0.9 mm/year, respectively. During the mean interval of 4.3 ± 2.4 years, significant progression (diameter increase by ≥5 mm) occurred in 3.4, 5.6 and 21.7%, respectively. The 3- to 5-year rates of freedom from significant progression were 99.1%-96.5% (40-44 mm) and 97.8%-96.4% (45-49 mm). In multivariate analysis, initial ascending aortic diameter ≥45 mm and aortic valve regurgitation were significantly associated with significant progression. Acute type A aortic dissection occurred in 5 patients (1%), before the maximal diameter of the ascending aorta reached 55 mm or significant progression was observed. For a moderately dilated ascending aorta not exceeding 45 mm in maximal diameter and stable in the first annual follow-up image, a 3- to 4-year interval would be reasonable before subsequent imaging. More frequent imaging may be warranted in patients with aortic valve insufficiency or with an aortic diameter ≥45 mm. © The Author 2017.
Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
Radiation dose reduction in medical x-ray CT via Fourier-based iterative reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fahimian, Benjamin P.; Zhao Yunzhe; Huang Zhifeng
Purpose: A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymous pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. Methods: EST is a Fourier based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired by the Siemens z-flying focal spot technology, and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated to lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented by using the advanced single-slice rebinning (ASSR) method.
Results: Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions with the lowest scanner flux setting of 39 mAs produce comparable image quality, resolution, and contrast relative to FBP with the 140 mAs flux setting. Compared to the algebraic reconstruction technique and the expectation maximization statistical reconstruction algorithm, a significant reduction in computation time is achieved with EST. Finally, numerical experiments on helical cone-beam CT data suggest that the combination of EST and ASSR produces reconstructions with higher image quality and lower noise than the Feldkamp Davis and Kress (FDK) method and the conventional ASSR approach. Conclusions: A Fourier-based iterative method has been applied to the reconstruction of fan-beam CT data with reduced x-ray fluence. This method incorporates advantageous features in both real and Fourier space iterative schemes: using a fast and algebraically exact method to calculate forward projection, enforcing the measured data in Fourier space, and applying physical constraints and flexible regularization in real space. Our results suggest that EST can be utilized for radiation dose reduction in x-ray CT via the readily implementable technique of lowering mAs settings. Numerical experiments further indicate that EST requires less computation time than several other iterative algorithms and can, in principle, be extended to helical cone-beam geometry in combination with the ASSR method.
Radiation dose reduction in medical x-ray CT via Fourier-based iterative reconstruction.
Fahimian, Benjamin P; Zhao, Yunzhe; Huang, Zhifeng; Fung, Russell; Mao, Yu; Zhu, Chun; Khatonabadi, Maryam; DeMarco, John J; Osher, Stanley J; McNitt-Gray, Michael F; Miao, Jianwei
2013-03-01
A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymous pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. EST is a Fourier based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired by the Siemens z-flying focal spot technology, and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated to lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented by using the advanced single-slice rebinning (ASSR) method. 
Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions with the lowest scanner flux setting of 39 mAs produce comparable image quality, resolution, and contrast relative to FBP with the 140 mAs flux setting. Compared to the algebraic reconstruction technique and the expectation maximization statistical reconstruction algorithm, a significant reduction in computation time is achieved with EST. Finally, numerical experiments on helical cone-beam CT data suggest that the combination of EST and ASSR produces reconstructions with higher image quality and lower noise than the Feldkamp Davis and Kress (FDK) method and the conventional ASSR approach. A Fourier-based iterative method has been applied to the reconstruction of fan-beam CT data with reduced x-ray fluence. This method incorporates advantageous features in both real and Fourier space iterative schemes: using a fast and algebraically exact method to calculate forward projection, enforcing the measured data in Fourier space, and applying physical constraints and flexible regularization in real space. Our results suggest that EST can be utilized for radiation dose reduction in x-ray CT via the readily implementable technique of lowering mAs settings. Numerical experiments further indicate that EST requires less computation time than several other iterative algorithms and can, in principle, be extended to helical cone-beam geometry in combination with the ASSR method.
Radiation dose reduction in medical x-ray CT via Fourier-based iterative reconstruction
Fahimian, Benjamin P.; Zhao, Yunzhe; Huang, Zhifeng; Fung, Russell; Mao, Yu; Zhu, Chun; Khatonabadi, Maryam; DeMarco, John J.; Osher, Stanley J.; McNitt-Gray, Michael F.; Miao, Jianwei
2013-01-01
Purpose: A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymous pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. Methods: EST is a Fourier based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired by the Siemens z-flying focal spot technology, and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated to lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented by using the advanced single-slice rebinning (ASSR) method. 
Results: Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions with the lowest scanner flux setting of 39 mAs produce comparable image quality, resolution, and contrast relative to FBP with the 140 mAs flux setting. Compared to the algebraic reconstruction technique and the expectation maximization statistical reconstruction algorithm, a significant reduction in computation time is achieved with EST. Finally, numerical experiments on helical cone-beam CT data suggest that the combination of EST and ASSR produces reconstructions with higher image quality and lower noise than the Feldkamp Davis and Kress (FDK) method and the conventional ASSR approach. Conclusions: A Fourier-based iterative method has been applied to the reconstruction of fan-beam CT data with reduced x-ray fluence. This method incorporates advantageous features in both real and Fourier space iterative schemes: using a fast and algebraically exact method to calculate forward projection, enforcing the measured data in Fourier space, and applying physical constraints and flexible regularization in real space. Our results suggest that EST can be utilized for radiation dose reduction in x-ray CT via the readily implementable technique of lowering mAs settings. Numerical experiments further indicate that EST requires less computation time than several other iterative algorithms and can, in principle, be extended to helical cone-beam geometry in combination with the ASSR method. PMID:23464329
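EST's back-and-forth iteration can be reduced to a 1-D alternating-constraint toy in the spirit of Gerchberg-Papoulis: enforce the measured Fourier coefficients where they are known, then impose positivity in real space. The pseudopolar grid, regularization, and termination criterion are all omitted; the names and the naive O(n²) DFT are illustrative assumptions:

```python
import cmath

def dft(x):
    # Naive forward discrete Fourier transform (O(n^2), for clarity only).
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(big_x):
    # Naive inverse DFT; keeps the real part for a real-valued image.
    n = len(big_x)
    return [sum(big_x[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def est_like(measured, known, n_iter=3):
    # Alternate between spaces: overwrite the Fourier coefficients that were
    # measured (known[k] is True), then clip negatives in real space.
    n = len(measured)
    x = [0.0] * n
    for _ in range(n_iter):
        big_x = dft(x)
        for k in range(n):
            if known[k]:
                big_x[k] = measured[k]
        x = [max(v, 0.0) for v in idft(big_x)]
    return x

true_sig = [1.0, 2.0, 3.0, 4.0]
rec = est_like(dft(true_sig), [True] * 4)
```

With all coefficients known the toy recovers the signal exactly; the interesting regime, as in EST, is when only a subset is measured and the real-space constraints fill in the rest.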
Blind image quality assessment based on aesthetic and statistical quality-aware features
NASA Astrophysics Data System (ADS)
Jenadeleh, Mohsen; Masaeli, Mohammad Masood; Moghaddam, Mohsen Ebrahimi
2017-07-01
The main goal of image quality assessment (IQA) methods is the emulation of human perceptual image quality judgments, so the correlation between their objective scores and human perceptual scores is considered their performance metric. Human judgment of image quality implicitly includes many factors, such as aesthetics, semantics, context, and various types of visual distortion. The main idea of this paper is to use a host of features commonly employed in image aesthetics assessment to improve the accuracy of blind image quality assessment (BIQA) methods. We propose an approach that enriches the features of BIQA methods by integrating a host of aesthetic image features with natural image statistics features derived from multiple domains. The proposed features have been used to augment five different state-of-the-art BIQA methods that use natural scene statistics features. Experiments were performed on seven benchmark image quality databases, and the results showed a significant improvement in the accuracy of the methods.
Drobnitzky, Matthias; Klose, Uwe
2017-03-01
Magnetization-prepared rapid gradient-echo (MPRAGE) sequences are commonly employed for T1-weighted structural brain imaging. Following a contrast preparation radiofrequency (RF) pulse, the data acquisition proceeds under nonequilibrium conditions of the relaxing longitudinal magnetization. Variation of the flip angle can be used to maximize total available signal. Simulated annealing or greedy algorithms have so far been published to numerically solve this problem, with signal-to-noise ratios optimized for clinical imaging scenarios by adhering to a predefined shape of the signal evolution. We propose an unconstrained optimization of the MPRAGE experiment that employs techniques from resource allocation theory. A new dynamic programming solution is introduced that yields closed-form expressions for optimal flip angle variation. Flip angle series are proposed that maximize total transverse magnetization (Mxy) for a range of physiologic T1 values. A 3D MPRAGE sequence is modified to allow for a controlled variation of the excitation angle. Experiments employing a T1 contrast phantom are performed at 3T. 1D acquisitions without phase encoding permit measurement of the temporal development of Mxy. Image mean signal and standard deviation for reference flip angle trains are compared in 2D measurements. Signal profiles at sharp phantom edges are acquired to assess image blurring related to nonuniform Mxy development. A novel closed-form expression for flip angle variation is found that constitutes the optimal policy to reach maximum total signal. It numerically equals previously published results of other authors when evaluated under their simplifying assumptions. Longitudinal magnetization (Mz) is exhaustively used without causing abrupt changes in the measured MR signal, which is a prerequisite for artifact free images. Phantom experiments at 3T verify the expected benefit for total accumulated k-space signal when compared with published flip angle series.
Describing the MR signal collection in MPRAGE sequences as a Bellman problem is a new concept. By means of recursively solving a series of overlapping subproblems, this leads to an elegant solution for the problem of maximizing total available MR signal in k-space. A closed-form expression for flip angle variation avoids the complexity of numerical optimization and eases access to controlled variation in an attempt to identify potential clinical applications. © 2017 American Association of Physicists in Medicine.
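The Bellman formulation becomes transparent in the no-relaxation limit: with value function V_m = sqrt(m) for m remaining excitations, the backward recursion V_m = sqrt(1 + V_{m-1}^2) gives the optimal angle tan(theta) = 1/V_m, ending at 90 degrees and collecting a total signal of sqrt(N)·M0. This simplified limit is an assumption made here for tractability; the paper's closed-form policy covers the full signal model. A sketch:

```python
import math

def optimal_flip_angles(n):
    # Backward dynamic programming, relaxation neglected: maximize
    # sum_k sin(th_k) * Mz_k subject to Mz_{k+1} = Mz_k * cos(th_k).
    # The value function V_m = sqrt(m) yields tan(th) = 1 / sqrt(remaining).
    angles = []
    for k in range(n):
        remaining = n - k - 1  # excitations still to come after this one
        angles.append(math.degrees(math.atan2(1.0, math.sqrt(remaining))))
    return angles

def total_signal(n):
    # Simulate the train to verify the accumulated signal equals sqrt(n) * M0.
    mz, total = 1.0, 0.0
    for th in optimal_flip_angles(n):
        r = math.radians(th)
        total += mz * math.sin(r)
        mz *= math.cos(r)
    return total
```

For n = 3 the series is approximately 35.3, 45 and 90 degrees: the angles ramp up so that the last pulse spends all remaining longitudinal magnetization.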
Ernst, E J; Speck, Patricia M; Fitzpatrick, Joyce J
2011-12-01
With the patient's consent, physical injuries sustained in a sexual assault are evaluated and treated by the sexual assault nurse examiner (SANE) and documented on preprinted traumagrams and with photographs. Digital imaging is now available to the SANE for documentation of sexual assault injuries, but studies of the image quality of forensic digital imaging of female genital injuries after sexual assault were not found in the literature. The Photo Documentation Image Quality Scoring System (PDIQSS) was developed to rate the image quality of digital photo documentation of female genital injuries after sexual assault. Three expert observers performed evaluations on 30 separate images at two points in time. An image quality score, the sum of eight integral technical and anatomical attributes on the PDIQSS, was obtained for each image. Individual image quality ratings, defined by rating image quality for each of the data, were also determined. The results demonstrated a high level of image quality and agreement when measured in all dimensions. For the SANE in clinical practice, the results of this study indicate that a high degree of agreement exists between expert observers when using the PDIQSS to rate image quality of individual digital photographs of female genital injuries after sexual assault. © 2011 International Association of Forensic Nurses.
An Image Processing Algorithm Based On FMAT
NASA Technical Reports Server (NTRS)
Wang, Lui; Pal, Sankar K.
1995-01-01
Information deleted in ways minimizing adverse effects on reconstructed images. New grey-scale generalization of medial axis transformation (MAT), called FMAT (short for Fuzzy MAT), proposed. Formulated by making natural extension to fuzzy-set theory of all definitions and conditions (e.g., characteristic function of disk, subset condition of disk, and redundancy checking) used in defining MAT of crisp set. Does not need image to have any kind of a priori segmentation, and allows medial axis (and skeleton) to be fuzzy subset of input image. Resulting FMAT (consisting of maximal fuzzy disks) capable of reconstructing exactly original image.
Image aesthetic quality evaluation using convolution neural network embedded learning
NASA Astrophysics Data System (ADS)
Li, Yu-xin; Pu, Yuan-yuan; Xu, Dan; Qian, Wen-hua; Wang, Li-peng
2017-11-01
A method of embedded learning with a convolution neural network (ELCNN) based on image content is proposed in this paper to evaluate image aesthetic quality. Our approach can not only cope with the problem of small-scale data but also score the image aesthetic quality. First, we compared AlexNet and VGG_S to determine which is more suitable for this image aesthetic quality evaluation task. Second, to further boost the aesthetic quality classification performance, we employ the image content to train aesthetic quality classification models; however, this makes the training samples even smaller, and a single fine-tuning pass cannot make full use of the small-scale data set. Third, to solve this problem, we propose fine-tuning twice in succession, based on the aesthetic quality label and the content label respectively; the classification probability of the trained CNN models is then used to evaluate the image aesthetic quality. The experiments are carried out on the small-scale Photo Quality data set. The experimental results show that the classification accuracy of our approach is higher than that of existing image aesthetic quality evaluation approaches.
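The two-pass fine-tuning idea can be illustrated without a deep-learning framework. The sketch below is a hedged stand-in using a plain logistic model on made-up features: the weights learned in the first pass are reused as the starting point for a second training pass on a different label, mirroring the "fine-tune twice, continuing from the previous weights" scheme (all data, labels and hyperparameters here are synthetic illustrations, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_logistic(X, y, w, b, lr=0.5, epochs=200):
    """Plain gradient-descent logistic regression, starting from (w, b)."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

# Toy features: 200 "images", 5 descriptors; the content label correlates
# with feature 0, the aesthetic label with features 0 and 1.
X = rng.normal(size=(200, 5))
content = (X[:, 0] > 0).astype(float)
aesthetic = (X[:, 0] + X[:, 1] > 0).astype(float)

w, b = np.zeros(5), 0.0
w, b = train_logistic(X, content, w, b)    # first fine-tuning pass
w, b = train_logistic(X, aesthetic, w, b)  # second pass, reusing the weights
pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
print((pred == aesthetic).mean())          # training accuracy after both passes
```

The point of the second pass starting from non-zero weights is that knowledge from the first label is retained and refined rather than discarded, which is exactly what fine-tuning a pretrained CNN twice achieves.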
Angelis, G I; Reader, A J; Markiewicz, P J; Kotasidis, F A; Lionheart, W R; Matthews, J C
2013-08-07
Recent studies have demonstrated the benefits of a resolution model within iterative reconstruction algorithms in an attempt to account for effects that degrade the spatial resolution of the reconstructed images. However, these algorithms suffer from slower convergence rates, compared to algorithms where no resolution model is used, due to the additional need to solve an image deconvolution problem. In this paper, a recently proposed algorithm, which decouples the tomographic and image deconvolution problems within an image-based expectation maximization (EM) framework, was evaluated. This separation is convenient, because more computational effort can be placed on the image deconvolution problem and therefore accelerate convergence. Since the computational cost of solving the image deconvolution problem is relatively small, multiple image-based EM iterations do not significantly increase the overall reconstruction time. The proposed algorithm was evaluated using 2D simulations, as well as measured 3D data acquired on the high-resolution research tomograph. Results showed that bias reduction can be accelerated by interleaving multiple iterations of the image-based EM algorithm solving the resolution model problem, with a single EM iteration solving the tomographic problem. Significant improvements were observed particularly for voxels that were located on the boundaries between regions of high contrast within the object being imaged and for small regions of interest, where resolution recovery is usually more challenging. Minor differences were observed using the proposed nested algorithm, compared to the single iteration normally performed, when an optimal number of iterations are performed for each algorithm. However, using the proposed nested approach convergence is significantly accelerated enabling reconstruction using far fewer tomographic iterations (up to 70% fewer iterations for small regions). 
Nevertheless, the optimal number of nested image-based EM iterations is difficult to define and should be selected according to the given application.
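The nested scheme described above, one tomographic EM step interleaved with several image-space EM (deconvolution) steps, can be sketched on a 1-D toy problem. The system matrix A and the Gaussian resolution kernel R below are illustrative stand-ins, not the paper's scanner model, and the data are noiseless:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 32

# Resolution model R: a 1-D Gaussian blur written as a dense matrix.
x = np.arange(n)
R = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 1.5) ** 2)
R /= R.sum(axis=1, keepdims=True)

# Toy "tomographic" system A: random non-negative mixing (projection stand-in).
A = rng.uniform(0.0, 1.0, size=(48, n))

f_true = np.zeros(n)
f_true[8:12] = 4.0
f_true[20] = 8.0
y = A @ R @ f_true                      # noiseless projection data

f = np.ones(n)
ones_y = np.ones_like(y)
for _ in range(50):                     # outer (tomographic) iterations
    h = R @ f                           # current blurred-image estimate
    h *= (A.T @ (y / (A @ h))) / (A.T @ ones_y)   # one tomographic EM step
    for _ in range(10):                 # nested image-space EM (deconvolution)
        f *= (R.T @ (h / (R @ f))) / R.sum(axis=0)
print(np.abs(f - f_true).mean())        # residual error after 50 outer loops
```

Because the inner deconvolution updates involve only small image-space matrix products, many of them can be run per tomographic step at little extra cost, which is the acceleration argument made in the abstract.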
Digital radiography: optimization of image quality and dose using multi-frequency software.
Precht, H; Gerke, O; Rosendahl, K; Tingberg, A; Waaler, D
2012-09-01
New developments in the processing of digital radiographs (DR), including multi-frequency processing (MFP), allow optimization of image quality and radiation dose. This is particularly promising in children, as they are believed to be more sensitive to ionizing radiation than adults. To examine whether the use of MFP software reduces the radiation dose without compromising quality at DR of the femur in 5-year-old-equivalent anthropomorphic and technical phantoms. A total of 110 images of an anthropomorphic phantom were acquired on a DR system (Canon DR with CXDI-50 C detector and MLT(S) software) and analyzed by three pediatric radiologists using Visual Grading Analysis. In addition, 3,500 images taken of a technical contrast-detail phantom (CDRAD 2.0) provided an objective image-quality assessment. Optimal image quality was maintained at a dose reduction of 61% with MLT(S)-optimized images. Even for images of diagnostic quality, MLT(S) provided a dose reduction of 88% as compared to the reference image. Software impact on image quality was found to be significant for dose (mAs), dynamic range dark region and frequency band. By optimizing image processing parameters, a significant dose reduction is possible without significant loss of image quality.
Moore, C S; Wood, T J; Beavis, A W; Saunderson, J R
2013-07-01
The purpose of this study was to examine the correlation between the quality of visually graded patient (clinical) chest images and a quantitative assessment of chest phantom (physical) images acquired with a computed radiography (CR) imaging system. The results of a previously published study, in which four experienced image evaluators graded computer-simulated postero-anterior chest images using a visual grading analysis scoring (VGAS) scheme, were used for the clinical image quality measurement. Contrast-to-noise ratio (CNR) and effective dose efficiency (eDE) were used as physical image quality metrics measured in a uniform chest phantom. Although optimal values of these physical metrics for chest radiography were not derived in this work, their correlation with VGAS in images acquired without an antiscatter grid across the diagnostic range of X-ray tube voltages was determined using Pearson's correlation coefficient. Clinical and physical image quality metrics increased with decreasing tube voltage. Statistically significant correlations between VGAS and CNR (R=0.87, p<0.033) and eDE (R=0.77, p<0.008) were observed. Medical physics experts may use the physical image quality metrics described here in quality assurance programmes and optimisation studies with a degree of confidence that they reflect the clinical image quality in chest CR images acquired without an antiscatter grid. A statistically significant correlation has been found between the clinical and physical image quality in CR chest imaging. The results support the value of using CNR and eDE in the evaluation of quality in clinical thorax radiography.
Bonifazi, Giuseppe; Capobianco, Giuseppe; Serranti, Silvia
2018-06-05
The aim of this work was to recognize different polymer flakes from mixed plastic waste through an innovative hierarchical classification strategy based on hyperspectral imaging, with particular reference to low-density polyethylene (LDPE) and high-density polyethylene (HDPE). A plastic waste composition assessment, including LDPE and HDPE identification, may help to define optimal recycling strategies for product quality control. Correct handling of plastic waste is essential for its further "sustainable" recovery, maximizing the sorting performance, in particular for plastics with similar characteristics such as LDPE and HDPE. Five different plastic waste samples were chosen for the investigation: polypropylene (PP), LDPE, HDPE, polystyrene (PS) and polyvinyl chloride (PVC). A calibration dataset was created using the corresponding virgin polymers. Hyperspectral imaging in the short-wave infrared range (1000-2500 nm) was then applied to evaluate the spectral attributes of the different plastics in order to perform their recognition/classification. After exploring polymer spectral differences by principal component analysis (PCA), a hierarchical partial least squares discriminant analysis (PLS-DA) model was built, allowing the five different polymers to be recognized. The proposed methodology, based on hierarchical classification, is powerful and fast, enabling recognition of the five polymers in a single step. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Palmieri, Roberta; Bonifazi, Giuseppe; Serranti, Silvia
2014-05-01
The recovery of materials from Demolition Waste (DW) represents one of the main targets of the recycling industry, and its characterization is important in order to set up efficient sorting and/or quality control systems. End-Of-Life (EOL) concrete materials identification is necessary to maximize DW conversion into useful secondary raw materials, so it is fundamental to develop strategies for the implementation of an automatic recognition system for the recovered products. In this paper, the HyperSpectral Imaging (HSI) technique was applied in order to determine DW composition. Hyperspectral images were acquired by a laboratory device equipped with an HSI sensor working in the near infrared range (1000-1700 nm): a NIR Spectral Camera™ embedding an ImSpector™ N17E (SPECIM Ltd, Finland). Acquired spectral data were analyzed with the PLS_Toolbox (Version 7.5, Eigenvector Research, Inc.) in the Matlab® environment (Version 7.11.1, The MathWorks, Inc.), applying different chemometric methods: Principal Component Analysis (PCA) for exploratory data analysis and Partial Least Squares Discriminant Analysis (PLS-DA) to build classification models. Results showed that it is possible to recognize DW materials, distinguishing recycled aggregates from contaminants (e.g. bricks, gypsum, plastics, wood, foam, etc.). The developed procedure is cheap, fast and non-destructive: it could be used to make several steps of the recycling process more efficient and less expensive.
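The exploratory PCA step can be reproduced on synthetic NIR-like spectra with plain numpy (the band positions and absorption depths below are made up for illustration and do not correspond to real demolition-waste materials):

```python
import numpy as np

rng = np.random.default_rng(3)
bands = np.linspace(1000, 1700, 50)

def material(centre, depth, n=30):
    """Synthetic NIR reflectance: flat baseline minus an absorption dip."""
    dip = depth * np.exp(-0.5 * ((bands - centre) / 40.0) ** 2)
    return 0.8 - dip + 0.02 * rng.normal(size=(n, bands.size))

X = np.vstack([material(1200, 0.3),   # recycled-aggregate-like spectra
               material(1450, 0.4),   # gypsum-like contaminant
               material(1650, 0.2)])  # plastic-like contaminant

# PCA via SVD of the mean-centred data matrix.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                 # first two principal-component scores
explained = S[:2] ** 2 / (S ** 2).sum()
print(explained.sum())                 # variance captured by PC1 + PC2
```

With three well-separated absorption bands, the first two components capture almost all of the between-class variance, which is why a score plot cleanly separates aggregate from contaminants.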
[Diagnostic imaging of urolithiasis. Current recommendations and new developments].
Thalgott, M; Kurtz, F; Gschwend, J E; Straub, M
2015-07-01
The prevalence of urolithiasis is increasing in industrialized countries, in both adults and children, representing a unique diagnostic and therapeutic challenge. Risk-adapted diagnostic imaging currently means assessment with maximized sensitivity and specificity together with minimal radiation exposure. In clinical routine, imaging is performed by sonography, unenhanced computed tomography (NCCT) or intravenous urography (IVU), as well as plain kidney-ureter-bladder (KUB) radiographs. The aim of the present review is a critical guideline-based and therapy-aligned presentation of diagnostic imaging procedures for optimized treatment of urolithiasis, considering the specifics in children and pregnant women.
A threshold selection method based on edge preserving
NASA Astrophysics Data System (ADS)
Lou, Liantang; Dan, Wei; Chen, Jiaqi
2015-12-01
A method of automatic threshold selection for image segmentation is presented. An optimal threshold is selected so that image edges are preserved as well as possible in the segmentation. The shortcoming of Otsu's method, which is based on gray-level histograms, is analyzed. The edge energy function of a bivariate continuous function is expressed as a line integral, while the edge energy function of an image is approximated by discretizing the integral. An optimal threshold method that maximizes the edge energy function is given. Several experimental results are also presented for comparison with Otsu's method.
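A minimal numpy sketch of the idea: choose the threshold whose binarization boundary collects the most gradient magnitude, a crude discretization of the edge energy line integral (the synthetic test image and the 4-neighbour boundary rule are illustrative choices, not the paper's exact formulation):

```python
import numpy as np

def edge_energy_threshold(img):
    """Pick the threshold whose binarisation boundary lies on the strongest
    image gradients (a discrete stand-in for the edge energy integral)."""
    gy, gx = np.gradient(img.astype(float))
    grad = np.hypot(gx, gy)
    best_t, best_e = None, -1.0
    for t in np.unique(img)[1:]:
        b = img >= t
        # Boundary pixels: binarised value differs from a 4-neighbour.
        edge = np.zeros_like(b)
        edge[:-1, :] |= b[:-1, :] != b[1:, :]
        edge[:, :-1] |= b[:, :-1] != b[:, 1:]
        e = grad[edge].sum()
        if e > best_e:
            best_t, best_e = t, e
    return best_t

# Synthetic step image: bright square on a dark background plus mild noise.
rng = np.random.default_rng(4)
img = np.full((40, 40), 50.0)
img[10:30, 10:30] = 200.0
img += rng.normal(0, 2, img.shape)
t = edge_energy_threshold(np.round(img))
print(t)   # lands between the two grey levels, on the true edge
```

A histogram-based criterion such as Otsu's sees only grey-level populations; the edge energy criterion instead rewards thresholds whose boundary coincides with strong gradients, which is the paper's argument for better edge preservation.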
Reconstruction of shapes of near symmetric and asymmetric objects
Pizlo, Zygmunt; Sawada, Tadamasa; Li, Yunfeng
2013-03-26
A system processes 2D images of 2D or 3D objects, creating a model of the object that is consistent with the image and as veridical as the perception of the 2D image by humans. Vertices of the object that are hidden in the image are recovered by using planarity and symmetry constraints. The 3D shape is recovered by maximizing 3D compactness of the recovered object and minimizing its surface area. In some embodiments, these two criteria are weighted by using the geometric mean.
Focal-Plane Imaging of Crossed Beams in Nonlinear Optics Experiments
NASA Technical Reports Server (NTRS)
Bivolaru, Daniel; Herring, G. C.
2007-01-01
An application of focal-plane imaging that can be used as a real time diagnostic of beam crossing in various optical techniques is reported. We discuss two specific versions and demonstrate the capability of maximizing system performance with an example in a combined dual-pump coherent anti-Stokes Raman scattering interferometric Rayleigh scattering experiment (CARS-IRS). We find that this imaging diagnostic significantly reduces beam alignment time and loss of CARS-IRS signals due to inadvertent misalignments.
Process perspective on image quality evaluation
NASA Astrophysics Data System (ADS)
Leisti, Tuomas; Halonen, Raisa; Kokkonen, Anna; Weckman, Hanna; Mettänen, Marja; Lensu, Lasse; Ritala, Risto; Oittinen, Pirkko; Nyman, Göte
2008-01-01
The psychological complexity of multivariate image quality evaluation makes it difficult to develop general image quality metrics. Quality evaluation includes several mental processes, and ignoring these processes and using only a few test images can lead to biased results. Using a qualitative/quantitative (Interpretation Based Quality, IBQ) methodology, we examined the process of pair-wise comparison in a setting where the quality of images printed by a laser printer on different paper grades was evaluated. The test image consisted of a picture of a table covered with several objects. Three other images were also used: photographs of a woman, a cityscape and a countryside. In addition to the pair-wise comparisons, the observers (N=10) were interviewed about the subjective quality attributes they used in making their quality decisions. An examination of the individual pair-wise comparisons revealed serious inconsistencies in the observers' evaluations of the test image content, but not of the other contents. The qualitative analysis showed that this inconsistency was due to the observers' focus of attention. The lack of an easily recognizable context in the test image may have contributed to this inconsistency. To obtain reliable knowledge of the effect of image context or attention on subjective image quality, a qualitative methodology is needed.
Evidential analysis of difference images for change detection of multitemporal remote sensing images
NASA Astrophysics Data System (ADS)
Chen, Yin; Peng, Lijuan; Cremers, Armin B.
2018-03-01
In this article, we develop two methods for unsupervised change detection in multitemporal remote sensing images based on Dempster-Shafer's theory of evidence (DST). In most unsupervised change detection methods, the probability of difference image is assumed to be characterized by mixture models, whose parameters are estimated by the expectation maximization (EM) method. However, the main drawback of the EM method is that it does not consider spatial contextual information, which may entail rather noisy detection results with numerous spurious alarms. To remedy this, we firstly develop an evidence theory based EM method (EEM) which incorporates spatial contextual information in EM by iteratively fusing the belief assignments of neighboring pixels to the central pixel. Secondly, an evidential labeling method in the sense of maximizing a posteriori probability (MAP) is proposed in order to further enhance the detection result. It first uses the parameters estimated by EEM to initialize the class labels of a difference image. Then it iteratively fuses class conditional information and spatial contextual information, and updates labels and class parameters. Finally it converges to a fixed state which gives the detection result. A simulated image set and two real remote sensing data sets are used to evaluate the two evidential change detection methods. Experimental results show that the new evidential methods are comparable to other prevalent methods in terms of total error rate.
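The fusion step at the heart of EEM is Dempster's rule of combination. A minimal sketch over the two-hypothesis frame {change, no change} follows; the mass values are made-up illustrations, not taken from the paper:

```python
def dempster_combine(m1, m2):
    """Dempster's rule over the frame {'C','N'} with ignorance 'CN'.
    Intersections: C∩C=C, N∩N=N, X∩CN=X; C∩N=∅ contributes to conflict."""
    inter = {('C', 'C'): 'C', ('N', 'N'): 'N',
             ('C', 'CN'): 'C', ('CN', 'C'): 'C',
             ('N', 'CN'): 'N', ('CN', 'N'): 'N',
             ('CN', 'CN'): 'CN'}
    combined = {'C': 0.0, 'N': 0.0, 'CN': 0.0}
    conflict = 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            key = inter.get((a, b))
            if key is None:
                conflict += pa * pb          # empty intersection
            else:
                combined[key] += pa * pb
    # Normalise by the non-conflicting mass.
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# A central pixel's evidence fused with a neighbour's
# ('C' = change, 'N' = no change, 'CN' = ignorance).
centre    = {'C': 0.6, 'N': 0.3, 'CN': 0.1}
neighbour = {'C': 0.5, 'N': 0.4, 'CN': 0.1}
fused = dempster_combine(centre, neighbour)
print(fused)   # belief in 'C' is reinforced, ignorance shrinks
```

Iterating this fusion over a pixel's neighbourhood is what injects spatial context into the EM parameter estimation, suppressing isolated spurious alarms.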
Ziessman, Harvey A; Majd, Massoud
2009-07-01
We reviewed our experience with (99m)technetium dimercapto-succinic acid scintigraphy obtained during an imaging pilot study for a multicenter investigation (Randomized Intervention for Children With Vesicoureteral Reflux) of the effectiveness of daily antimicrobial prophylaxis for preventing recurrent urinary tract infection and renal scarring. We analyzed imaging methodology and its relation to diagnostic image quality. (99m)Technetium dimercapto-succinic acid imaging guidelines were provided to participating sites. High-resolution planar imaging with parallel hole or pinhole collimation was required. Two core reviewers evaluated all submitted images. Analysis included appropriate views, presence or lack of patient motion, adequate magnification, sufficient counts and diagnostic image quality. Inter-reader agreement was evaluated. We evaluated 70, (99m)technetium dimercapto-succinic acid studies from 14 institutions. Variability was noted in methodology and image quality. Correlation (r value) between dose administered and patient age was 0.780. For parallel hole collimator imaging good correlation was noted between activity administered and counts (r = 0.800). For pinhole imaging the correlation was poor (r = 0.110). A total of 10 studies (17%) were rejected for quality issues of motion, kidney overlap, inadequate magnification, inadequate counts and poor quality images. The submitting institution was informed and provided with recommendations for improving quality, and resubmission of another study was required. Only 4 studies (6%) were judged differently by the 2 reviewers, and the differences were minor. Methodology and image quality for (99m)technetium dimercapto-succinic acid scintigraphy varied more than expected between institutions. 
The most common reason for poor image quality was inadequate count acquisition with insufficient attention to the tradeoff between administered dose, length of image acquisition, start time of imaging and resulting image quality. Inter-observer core reader agreement was high. The pilot study ensured good diagnostic quality standardized images for the Randomized Intervention for Children With Vesicoureteral Reflux investigation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
DOREN,NEALL E.
Wavefront curvature defocus effects occur in spotlight-mode SAR imagery when reconstructed via the well-known polar-formatting algorithm (PFA) under certain imaging scenarios. These include imaging at close range, using a very low radar center frequency, utilizing high resolution, and/or imaging very large scenes. Wavefront curvature effects arise from the unrealistic assumption of strictly planar wavefronts illuminating the imaged scene. This dissertation presents a method for the correction of wavefront curvature defocus effects under these scenarios, concentrating on the generalized squint-mode imaging scenario and its computational aspects. This correction is accomplished through an efficient one-dimensional, image-domain filter applied as a post-processing step to PFA. This post-filter, referred to as SVPF, is precalculated from a theoretical derivation of the wavefront curvature effect and varies as a function of scene location. Prior to SVPF, severe restrictions were placed on the imaged scene size in order to avoid defocus effects under these scenarios when using PFA. The SVPF algorithm eliminates the need for scene size restrictions when wavefront curvature effects are present, correcting for wavefront curvature in broadside as well as squinted collection modes while imposing little additional computational penalty for squinted images. This dissertation covers the theoretical development, implementation and analysis of the generalized, squint-mode SVPF algorithm (of which broadside mode is a special case) and provides examples of its capabilities and limitations, as well as offering guidelines for maximizing its computational efficiency. Tradeoffs between the PFA/SVPF combination and other spotlight-mode SAR image formation techniques are discussed with regard to computational burden, image quality, and imaging geometry constraints.
It is demonstrated that other methods fail to exhibit a clear computational advantage over polar formatting in conjunction with SVPF. This research concludes that PFA in conjunction with SVPF provides a computationally efficient spotlight-mode image formation solution that solves the wavefront curvature problem for most standoff distances and patch sizes, regardless of squint, resolution or radar center frequency. Additional advantages are that SVPF is not iterative and has no dependence on the visual contents of the scene, resulting in a deterministic computational complexity which typically adds only thirty percent to the overall image formation time.
NASA Astrophysics Data System (ADS)
Wu, Z.; Luo, Z.; Zhang, Y.; Guo, F.; He, L.
2018-04-01
A Modulation Transfer Function (MTF)-based fuzzy comprehensive evaluation method is proposed in this paper for evaluating high-resolution satellite image quality. To establish the factor set, two MTF features and seven radiometric features were extracted from the knife-edge region of each image patch: Nyquist frequency, MTF0.5, entropy, peak signal-to-noise ratio (PSNR), average difference, edge intensity, average gradient, contrast and ground spatial distance (GSD). After analyzing the statistical distribution of these features, a fuzzy evaluation threshold table and fuzzy evaluation membership functions were established. Experiments on comprehensive quality assessment of different natural and artificial objects were performed with GF2 image patches. The results showed that the calibration field image has the highest quality score. The water image quality is closest to that of the calibration field; the building image quality is somewhat poorer than that of the water image, but much higher than that of the farmland image. To test the influence of different features on quality evaluation, experiments with different weights were performed on GF2 and SPOT7 images. The results showed that different weights yield different evaluation outcomes. When the weights emphasize the edge features and GSD, the image quality of GF2 is better than that of SPOT7; however, when MTF and PSNR are set as the main factors, the image quality of SPOT7 is better than that of GF2.
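Fuzzy comprehensive evaluation itself reduces to a weighted combination of membership degrees. The sketch below shows the generic computation B = W·R; the membership values, weights and grade scores are invented for illustration and are not the paper's thresholds:

```python
import numpy as np

# Membership matrix R: each row gives one feature's membership in the
# quality grades (excellent, good, fair, poor), e.g. from threshold-based
# membership functions fitted to the feature statistics.
R = np.array([
    [0.7, 0.2, 0.1, 0.0],   # Nyquist MTF
    [0.5, 0.3, 0.2, 0.0],   # MTF at 0.5
    [0.2, 0.5, 0.2, 0.1],   # entropy
    [0.6, 0.3, 0.1, 0.0],   # PSNR
    [0.3, 0.4, 0.2, 0.1],   # average gradient
])
# Factor weights W (sum to 1); here the MTF features are emphasised.
W = np.array([0.3, 0.2, 0.1, 0.25, 0.15])

B = W @ R                                   # fuzzy evaluation vector
grade = ["excellent", "good", "fair", "poor"][int(B.argmax())]
score = B @ np.array([90, 75, 60, 40])      # optional defuzzified score
print(grade, round(float(score), 2))
```

Changing W is exactly the weight experiment the abstract describes: shifting weight between MTF/PSNR and edge/GSD features changes which image scores higher.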
McCord, Layne K; Scarfe, William C; Naylor, Rachel H; Scheetz, James P; Silveira, Anibal; Gillespie, Kevin R
2007-05-01
The objectives of this study were to compare the effect of JPEG 2000 compression of hand-wrist radiographs on observers' qualitative assessment of image quality and to compare this with a software-derived quantitative image quality index. Fifteen hand-wrist radiographs were digitized and saved as TIFF and JPEG 2000 images at 4 levels of compression (20:1, 40:1, 60:1, and 80:1). The images, including rereads, were viewed by 13 orthodontic residents who determined the image quality rating on a scale of 1 to 5. A quantitative analysis was also performed by using readily available software based on the human visual system (Image Quality Measure Computer Program, version 6.2, Mitre, Bedford, Mass). ANOVA was used to determine the optimal compression level (P ≤ .05). When we compared subjective indexes, JPEG compression greater than 60:1 significantly reduced image quality. When we used quantitative indexes, the JPEG 2000 images had lower quality at all compression ratios compared with the original TIFF images. There was excellent correlation (R² > 0.92) between the qualitative and quantitative indexes. Image Quality Measure indexes are more sensitive than subjective image quality assessments in quantifying image degradation with compression. There is potential for this software-based quantitative method in determining the optimal compression ratio for any image without the use of subjective raters.
Image quality scaling of electrophotographic prints
NASA Astrophysics Data System (ADS)
Johnson, Garrett M.; Patil, Rohit A.; Montag, Ethan D.; Fairchild, Mark D.
2003-12-01
Two psychophysical experiments were performed scaling overall image quality of black-and-white electrophotographic (EP) images. Six different printers were used to generate the images. There were six different scenes included in the experiment, representing photographs, business graphics, and test-targets. The two experiments were split into a paired-comparison experiment examining overall image quality, and a triad experiment judging overall similarity and dissimilarity of the printed images. The paired-comparison experiment was analyzed using Thurstone's Law, to generate an interval scale of quality, and with dual scaling, to determine the independent dimensions used for categorical scaling. The triad experiment was analyzed using multidimensional scaling to generate a psychological stimulus space. The psychophysical results indicated that the image quality was judged mainly along one dimension and that the relationships among the images can be described with a single dimension in most cases. Regression of various physical measurements of the images to the paired comparison results showed that a small number of physical attributes of the images could be correlated with the psychophysical scale of image quality. However, global image difference metrics did not correlate well with image quality.
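Thurstone's Law (Case V) converts paired-comparison proportions into an interval quality scale via inverse-normal transforms. A minimal sketch with hypothetical data for three printers (the proportion matrix is invented, not the study's data):

```python
from statistics import NormalDist

# P[i][j]: proportion of observers preferring printer i over printer j.
P = [
    [0.5, 0.8, 0.9],
    [0.2, 0.5, 0.7],
    [0.1, 0.3, 0.5],
]
inv = NormalDist().inv_cdf
n = len(P)
# Case V: the scale value of printer i is the mean z-score of its wins.
z = [[inv(P[i][j]) for j in range(n)] for i in range(n)]
scale = [sum(row) / n for row in z]
# Shift so the lowest-quality printer sits at zero.
lo = min(scale)
scale = [s - lo for s in scale]
print([round(s, 3) for s in scale])   # interval scale of perceived quality
```

Equal-interval differences on this scale correspond to equal discriminability between printers, which is what makes it suitable for regressing physical print attributes against perceived quality.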
Maroules, Christopher D; Blaha, Michael J; El-Haddad, Mohamed A; Ferencik, Maros; Cury, Ricardo C
2013-01-01
Coronary CT angiography is an effective, evidence-based strategy for evaluating acute chest pain in the emergency department for patients at low-to-intermediate risk of acute coronary syndrome. Recent multicenter trials have reported that coronary CT angiography is safe, reduces time to diagnosis, facilitates discharge, and may lower overall cost compared with routine care. Herein, we provide a 10-step approach for establishing a successful coronary CT angiography program in the emergency department. The importance of strategic planning and multidisciplinary collaboration is emphasized. Patient selection and preparation guidelines for coronary CT angiography are reviewed with straightforward protocols that can be adapted and modified to clinical sites, depending on available cardiac imaging capabilities. Technical parameters and patient-specific modifications are also highlighted to maximize the likelihood of diagnostic quality examinations. Practical suggestions for quality control, process monitoring, and standardized reporting are reviewed. Finally, the role of a "triple rule-out" protocol is featured in the context of acute chest pain evaluation in the emergency department. Copyright © 2013 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Colwell, R. N. (Principal Investigator)
1984-01-01
The spatial, geometric, and radiometric qualities of LANDSAT 4 thematic mapper (TM) and multispectral scanner (MSS) data were evaluated by interpreting, through visual and computer means, film and digital products for selected agricultural and forest cover types in California. Multispectral analyses employing Bayesian maximum likelihood, discrete relaxation, and unsupervised clustering algorithms were used to compare the usefulness of TM and MSS data for discriminating individual cover types. Some of the significant results are as follows: (1) for maximizing the interpretability of agricultural and forest resources, TM color composites should contain spectral bands in the visible, near-reflectance infrared, and middle-reflectance infrared regions, namely TM 4 and TM 5, and must contain TM 4 in all cases, even at the expense of excluding TM 5; (2) using enlarged TM film products, planimetric accuracy of mapped points was within 91 meters (RMSE east) and 117 meters (RMSE north); (3) using TM digital products, planimetric accuracy of mapped points was within 12.0 meters (RMSE east) and 13.7 meters (RMSE north); and (4) applying a contextual classification algorithm to TM data provided classification accuracies competitive with Bayesian maximum likelihood.
Automated daily quality control analysis for mammography in a multi-unit imaging center.
Sundell, Veli-Matti; Mäkelä, Teemu; Meaney, Alexander; Kaasalainen, Touko; Savolainen, Sauli
2018-01-01
Background The high requirements for mammography image quality necessitate a systematic quality assurance process. Digital imaging allows automation of the image quality analysis, which can potentially improve repeatability and objectivity compared to a visual evaluation made by the users. Purpose To develop an automatic image quality analysis software for daily mammography quality control in a multi-unit imaging center. Material and Methods An automated image quality analysis software using the discrete wavelet transform and multiresolution analysis was developed for the American College of Radiology accreditation phantom. The software was validated by analyzing 60 randomly selected phantom images from six mammography systems and 20 phantom images with different dose levels from one mammography system. The results were compared to a visual analysis made by four reviewers. Additionally, long-term image quality trends of a full-field digital mammography system and a computed radiography mammography system were investigated. Results The automated software produced feature detection levels comparable to visual analysis. The agreement was good in the case of fibers, while the software detected somewhat more microcalcifications and characteristic masses. Long-term follow-up via a quality assurance web portal demonstrated the feasibility of using the software for monitoring the performance of mammography systems in a multi-unit imaging center. Conclusion Automated image quality analysis enables monitoring the performance of digital mammography systems in an efficient, centralized manner.
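The core of such an analysis, wavelet detail subbands highlighting small high-contrast phantom details, can be sketched with a one-level 2-D Haar transform in plain numpy (a simplified stand-in for the paper's multiresolution software; the phantom values are invented):

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar wavelet transform (image size must be even)."""
    a = (img[0::2] + img[1::2]) / 2.0          # rows: average
    d = (img[0::2] - img[1::2]) / 2.0          # rows: detail
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0       # approximation subband
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0       # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0       # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0       # diagonal detail
    return ll, lh, hl, hh

# Flat phantom background with one small high-contrast speck
# (a microcalcification-like detail).
img = np.full((64, 64), 100.0)
img[30, 41] = 160.0

ll, lh, hl, hh = haar2d(img)
detail = np.abs(lh) + np.abs(hl) + np.abs(hh)
peak = np.unravel_index(detail.argmax(), detail.shape)
print(peak)   # subband location of the speck: (15, 20)
```

The smooth background produces zero detail coefficients, so thresholding the detail energy isolates small features automatically, the basic mechanism behind automated phantom feature detection.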
Assessing product image quality for online shopping
NASA Astrophysics Data System (ADS)
Goswami, Anjan; Chung, Sung H.; Chittar, Naren; Islam, Atiq
2012-01-01
Assessing product-image quality is important in the context of online shopping. A high quality image that conveys more information about a product can boost the buyer's confidence and can get more attention. However, the notion of image quality for product-images is not the same as that in other domains. The perception of quality of product-images depends not only on various photographic quality features but also on various high-level features such as the clarity of the foreground or the goodness of the background. In this paper, we define a notion of product-image quality based on various such features. We conduct a crowd-sourced experiment to collect user judgments on thousands of eBay's images. We formulate a multi-class classification problem for modeling image quality by classifying images into good, fair and poor quality based on the guided perceptual notions from the judges. We also conduct experiments with regression using the average crowd-sourced human judgments as the target. We compute a pseudo-regression score with the expected average of predicted classes and also compute a score from the regression technique. We design many experiments with various sampling and voting schemes with crowd-sourced data and construct various experimental image quality models. Most of our models have reasonable accuracies (≥70%) on the test data set. We observe that our computed image quality score has a high (0.66) rank correlation with the average votes from the crowd-sourced human judgments.
Comparative Analysis of Reconstructed Image Quality in a Simulated Chromotomographic Imager
2014-03-01
Reconstructed image quality is compared for a variety of scenes and is highly dependent on the initial target hypercube; a total of 54 initial targets were used, including a backlit bar chart with random intensity and 100 nm separation. (Thesis.)
Humphrey, Clinton D; Tollefson, Travis T; Kriet, J David
2010-05-01
Facial plastic surgeons are accumulating massive digital image databases with the evolution of photodocumentation and widespread adoption of digital photography. Managing and maximizing the utility of these vast data repositories, or digital asset management (DAM), is a persistent challenge. Developing a DAM workflow that incorporates a file naming algorithm and metadata assignment will increase the utility of a surgeon's digital images. Copyright 2010 Elsevier Inc. All rights reserved.
Kakinuma, Ryutaro; Ashizawa, Kazuto; Kuriyama, Keiko; Fukushima, Aya; Ishikawa, Hiroyuki; Kamiya, Hisashi; Koizumi, Naoya; Maruyama, Yuichiro; Minami, Kazunori; Nitta, Norihisa; Oda, Seitaro; Oshiro, Yasuji; Kusumoto, Masahiko; Murayama, Sadayuki; Murata, Kiyoshi; Muramatsu, Yukio; Moriyama, Noriyuki
2012-04-01
To evaluate interobserver agreement in regard to measurements of focal ground-glass opacities (GGO) diameters on computed tomography (CT) images to identify increases in the size of GGOs. Approval by the institutional review board and informed consent by the patients were obtained. Ten GGOs (mean size, 10.4 mm; range, 6.5-15 mm), one each in 10 patients (mean age, 65.9 years; range, 58-78 years), were used to make the diameter measurements. Eleven radiologists independently measured the diameters of the GGOs on a total of 40 thin-section CT images (the first [n = 10], the second [n = 10], and the third [n = 10] follow-up CT examinations and remeasurement of the first [n = 10] follow-up CT examinations) without comparing time-lapse CT images. Interobserver agreement was assessed by means of Bland-Altman plots. The smallest range of the 95% limits of interobserver agreement between the members of the 55 pairs of the 11 radiologists in regard to maximal diameter was -1.14 to 1.72 mm, and the largest range was -7.7 to 1.7 mm. The mean value of the lower limit of the 95% limits of agreement was -3.1 ± 1.4 mm, and the mean value of their upper limit was 2.5 ± 1.1 mm. When measurements are made by any two radiologists, an increase in the length of the maximal diameter of more than 1.72 mm would be necessary in order to be able to state that the maximal diameter of a particular GGO had actually increased. Copyright © 2012 AUR. Published by Elsevier Inc. All rights reserved.
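The 95% limits of agreement reported above follow the standard Bland-Altman construction (mean of the paired differences ± 1.96 sample standard deviations); a minimal sketch with invented paired measurements, not the study's data:

```python
import statistics

def bland_altman_limits(a, b):
    """95% limits of agreement: mean difference +/- 1.96 sample std deviations."""
    diffs = [x - y for x, y in zip(a, b)]
    mean_diff = statistics.mean(diffs)
    sd_diff = statistics.stdev(diffs)
    return mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff

# Invented paired GGO diameter measurements (mm) from two hypothetical readers.
reader1 = [10.1, 8.4, 12.0, 6.9, 15.2, 9.8]
reader2 = [10.6, 8.1, 12.8, 7.3, 14.9, 10.5]
lo, hi = bland_altman_limits(reader1, reader2)
```

An observed size change outside the interval (lo, hi) is larger than measurement disagreement alone would explain, which is exactly how the 1.72 mm threshold above is interpreted.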
Law, James; Morris, David E.; Izzi-Engbeaya, Chioma; Salem, Victoria; Coello, Christopher; Robinson, Lindsay; Jayasinghe, Maduka; Scott, Rebecca; Gunn, Roger; Rabiner, Eugenii; Tan, Tricia; Dhillo, Waljit S.; Bloom, Stephen; Budge, Helen
2018-01-01
Obesity and its metabolic consequences are a major cause of morbidity and mortality. Brown adipose tissue (BAT) utilizes glucose and free fatty acids to produce heat, thereby increasing energy expenditure. Effective evaluation of human BAT stimulators is constrained by the current standard method of assessing BAT—PET/CT—as it requires exposure to high doses of ionizing radiation. Infrared thermography (IRT) is a potential noninvasive, safe alternative, although direct corroboration with PET/CT has not been established. Methods: IRT and 18F-FDG PET/CT data from 8 healthy men subjected to water-jacket cooling were directly compared. Thermal images were geometrically transformed to overlay PET/CT-derived maximum intensity projection (MIP) images from each subject, and the areas with the most intense temperature and glucose uptake within the supraclavicular regions were compared. Relationships between supraclavicular temperatures (TSCR) from IRT and the metabolic rate of glucose uptake (MR(gluc)) from PET/CT were determined. Results: Glucose uptake on MR(gluc)MIP was found to correlate positively with a change in TSCR relative to a reference region (r2 = 0.721; P = 0.008). Spatial overlap between areas of maximal MR(gluc)MIP and maximal TSCR was 29.5% ± 5.1%. Prolonged cooling, for 60 min, was associated with a further TSCR rise, compared with cooling for 10 min. Conclusion: The supraclavicular hotspot identified on IRT closely corresponded to the area of maximal uptake on PET/CT-derived MR(gluc)MIP images. Greater increases in relative TSCR were associated with raised glucose uptake. IRT should now be considered a suitable method for measuring BAT activation, especially in populations for whom PET/CT is not feasible, practical, or repeatable. PMID:28912148
Locating an imaging radar in Canada for identifying spaceborne objects
NASA Astrophysics Data System (ADS)
Schick, William G.
1992-12-01
This research presents a study of the maximal coverage p-median facility location problem as applied to the location of an imaging radar in Canada for imaging spaceborne objects. The classical mathematical formulation of the maximal coverage p-median problem is converted into network-flow with side constraint formulations that are developed using a scaled down version of the imaging radar location problem. Two types of network-flow with side constraint formulations are developed: a network using side constraints that simulates the gains in a generalized network; and a network resembling a multi-commodity flow problem that uses side constraints to force flow along identical arcs. These small formulations are expanded to encompass a case study using 12 candidate radar sites and 48 satellites divided into three states. SAS/OR PROC NETFLOW was used to solve the network-flow with side constraint formulations. The case study shows potential for both formulations, although the simulated gains formulation encountered singular matrix computational difficulties as a result of the very organized nature of its side constraint matrix. The multi-commodity flow formulation, when combined with equi-distribution of flow constraints, provided solutions for various values of p, the number of facilities to be selected.
Evaluation of a silicon photomultiplier PET insert for simultaneous PET and MR imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ko, Guen Bae; Kim, Kyeong Yun; Yoon, Hyun Suk
2016-01-15
Purpose: In this study, the authors present a silicon photomultiplier (SiPM)-based positron emission tomography (PET) insert dedicated to small animal imaging with high system performance and robustness to temperature change. Methods: The insert consists of 64 LYSO-SiPM detector blocks arranged in 4 rings of 16 detector blocks to yield a ring diameter of 64 mm and axial field of view of 55 mm. Each detector block consists of a 9 × 9 array of LYSO crystals (1.2 × 1.2 × 10 mm³) and a monolithic 4 × 4 SiPM array. The temperature of each monolithic SiPM is monitored, and the proper bias voltage is applied according to the temperature reading in real time to maintain uniform performance. The performance of this PET insert was characterized using National Electrical Manufacturers Association NU 4-2008 standards, and its feasibility was evaluated through in vivo mouse imaging studies. Results: The PET insert had a peak sensitivity of 3.4% and volumetric spatial resolutions of 1.92 (filtered back projection) and 0.53 (ordered subset expectation maximization) mm³ at center. The peak noise equivalent count rate and scatter fraction were 42.4 kcps at 15.08 MBq and 16.5%, respectively. By applying the real-time bias voltage adjustment, an energy resolution of 14.2% ± 0.3% was maintained and the count rate varied ≤1.2%, despite severe temperature changes (10–30 °C). The mouse imaging studies demonstrate that this PET insert can produce high-quality images useful for imaging studies on small animals. Conclusions: The developed MR-compatible PET insert is designed for insertion into a narrow-bore magnetic resonance imaging scanner, and it provides excellent imaging performance for PET/MR preclinical studies.
Retinal Image Quality Assessment for Spaceflight-Induced Vision Impairment Study
NASA Technical Reports Server (NTRS)
Vu, Amanda Cadao; Raghunandan, Sneha; Vyas, Ruchi; Radhakrishnan, Krishnan; Taibbi, Giovanni; Vizzeri, Gianmarco; Grant, Maria; Chalam, Kakarla; Parsons-Wingerter, Patricia
2015-01-01
Long-term exposure to space microgravity poses significant risks for visual impairment. Evidence suggests such vision changes are linked to cephalad fluid shifts, prompting a need to directly quantify microgravity-induced retinal vascular changes. The quality of retinal images used for such vascular remodeling analysis, however, is dependent on imaging methodology. For our exploratory study, we hypothesized that retinal images captured using fluorescein imaging methodologies would be of higher quality in comparison to images captured without fluorescein. A semi-automated image quality assessment was developed using Vessel Generation Analysis (VESGEN) software and MATLAB® image analysis toolboxes. An analysis of ten images found that the fluorescein imaging modality provided a 36% increase in overall image quality (two-tailed p=0.089) in comparison to nonfluorescein imaging techniques.
Joshi, Anuja; Gislason-Lee, Amber J; Keeble, Claire; Sivananthan, Uduvil M
2017-01-01
Objective: The aim of this research was to quantify the reduction in radiation dose facilitated by image processing alone for percutaneous coronary intervention (PCI) patient angiograms, without reducing the perceived image quality required to confidently make a diagnosis. Methods: Incremental amounts of image noise were added to five PCI angiograms, simulating the angiogram as having been acquired at corresponding lower dose levels (10–89% dose reduction). 16 observers with relevant experience scored the image quality of these angiograms in 3 states—with no image processing and with 2 different modern image processing algorithms applied. These algorithms are used on state-of-the-art and previous generation cardiac interventional X-ray systems. Ordinal regression allowing for random effects and the delta method were used to quantify the dose reduction possible by the processing algorithms, for equivalent image quality scores. Results: Observers rated the quality of the images processed with the state-of-the-art and previous generation image processing with a 24.9% and 15.6% dose reduction, respectively, as equivalent in quality to the unenhanced images. The dose reduction facilitated by the state-of-the-art image processing relative to previous generation processing was 10.3%. Conclusion: Results demonstrate that statistically significant dose reduction can be facilitated with no loss in perceived image quality using modern image enhancement; the most recent processing algorithm was more effective in preserving image quality at lower doses. Advances in knowledge: Image enhancement was shown to maintain perceived image quality in coronary angiography at a reduced level of radiation dose using computer software to produce synthetic images from real angiograms simulating a reduction in dose. PMID:28124572
Ritter, Lutz; Mischkowski, Robert A; Neugebauer, Jörg; Dreiseidler, Timo; Scheer, Martin; Keeve, Erwin; Zöller, Joachim E
2009-09-01
The aim was to determine the influence of patient age, gender, body mass index (BMI), amount of dental restorations, and implants on image quality of cone-beam computerized tomography (CBCT). Fifty CBCT scans of a preretail version of Galileos (Sirona, Germany) were investigated retrospectively by 4 observers regarding image quality of 6 anatomic structures, pathologic findings detection, subjective exposure quality, and artifacts. Patient age, BMI, gender, amount of dental restorations, and implants were recorded and statistically tested for correlations to image quality. A negative effect on image quality was found statistically significantly correlated with age and the amount of dental restorations. None of the investigated image features were garbled by any of the investigated influence factors. Age and the amount of dental restorations appear to have a negative impact on CBCT image quality, whereas gender and BMI do not. Image quality of mental foramen, mandibular canal, and nasal floor are affected negatively by age but not by the amount of dental restorations. Further studies are required to elucidate influence factors on CBCT image quality.
Wood, T J; Beavis, A W; Saunderson, J R
2013-01-01
Objective: The purpose of this study was to examine the correlation between the quality of visually graded patient (clinical) chest images and a quantitative assessment of chest phantom (physical) images acquired with a computed radiography (CR) imaging system. Methods: The results of a previously published study, in which four experienced image evaluators graded computer-simulated postero-anterior chest images using a visual grading analysis scoring (VGAS) scheme, were used for the clinical image quality measurement. Contrast-to-noise ratio (CNR) and effective dose efficiency (eDE) were used as physical image quality metrics measured in a uniform chest phantom. Although optimal values of these physical metrics for chest radiography were not derived in this work, their correlation with VGAS in images acquired without an antiscatter grid across the diagnostic range of X-ray tube voltages was determined using Pearson’s correlation coefficient. Results: Clinical and physical image quality metrics increased with decreasing tube voltage. Statistically significant correlations between VGAS and CNR (R=0.87, p<0.033) and eDE (R=0.77, p<0.008) were observed. Conclusion: Medical physics experts may use the physical image quality metrics described here in quality assurance programmes and optimisation studies with a degree of confidence that they reflect the clinical image quality in chest CR images acquired without an antiscatter grid. Advances in knowledge: A statistically significant correlation has been found between the clinical and physical image quality in CR chest imaging. The results support the value of using CNR and eDE in the evaluation of quality in clinical thorax radiography. PMID:23568362
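The contrast-to-noise ratio used above as a physical image quality metric can be sketched as follows. Definitions vary (e.g. which region's noise appears in the denominator); this version uses the background's sample standard deviation, and the ROI pixel values are invented:

```python
import statistics

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio: absolute difference of ROI means divided by
    the background noise (sample standard deviation)."""
    contrast = abs(statistics.mean(signal_roi) - statistics.mean(background_roi))
    return contrast / statistics.stdev(background_roi)

# Invented pixel values from two uniform regions of a chest phantom image.
signal = [120, 122, 118, 121]
background = [100, 101, 99, 100]
ratio = cnr(signal, background)
```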
The University Professor As a Utility Maximizer and Producer of Learning, Research, and Income
ERIC Educational Resources Information Center
Becker, William E., Jr.
1975-01-01
A professorial decision-making model is presented for the purpose of exploring alternative plans to raise teaching quality. It is demonstrated that an increase in the pecuniary return to teaching will raise teaching quality while exogenous changes in teaching and/or research technology need not. (Author/EA)
Chapter 8: Acoustic Assessment of Wood Quality in Trees and Logs
Xiping Wang; Peter Carter
2015-01-01
Assessing the quality of raw wood materials has become a crucial issue in the operational value chain as forestry and the wood processing industry are increasingly under economic pressure to maximize extracted value. A significant effort has been devoted toward developing robust nondestructive evaluation (NDE) technologies capable of predicting the intrinsic wood...
ERIC Educational Resources Information Center
Shaheen, Amer N.
2011-01-01
This research investigated Electronic Service Quality (E-SQ) features that contribute to customer satisfaction in an online environment. The aim was to develop an approach which improves E-CRM processes and enhances online customer satisfaction. The research design adopted mixed methods involving qualitative and quantitative methods to…
ERIC Educational Resources Information Center
Wilhelm, Anne Garrison; Kim, Sungyeun
2015-01-01
One crucial question for researchers who study teachers' classroom practice is how to maximize information about what is happening in classrooms while minimizing costs. This report extends prior studies of the reliability of the Instructional Quality Assessment (IQA), a widely used classroom observation toolkit, and offers insight into the often…
Optimal design of focused experiments and surveys
NASA Astrophysics Data System (ADS)
Curtis, Andrew
1999-10-01
Experiments and surveys are often performed to obtain data that constrain some previously underconstrained model. Often, constraints are most desired in a particular subspace of model space. Experiment design optimization requires that the quality of any particular design can be both quantified and then maximized. This study shows how the quality can be defined such that it depends on the amount of information that is focused in the particular subspace of interest. In addition, algorithms are presented which allow one particular focused quality measure (from the class of focused measures) to be evaluated efficiently. A subclass of focused quality measures is also related to the standard variance and resolution measures from linearized inverse theory. The theory presented here requires that the relationship between model parameters and data can be linearized around a reference model without significant loss of information. Physical and financial constraints define the space of possible experiment designs. Cross-well tomographic examples are presented, plus a strategy for survey design to maximize information about linear combinations of parameters such as bulk modulus, κ = λ + 2μ/3.
On pictures and stuff: image quality and material appearance
NASA Astrophysics Data System (ADS)
Ferwerda, James A.
2014-02-01
Realistic images are a puzzle because they serve as visual representations of objects while also being objects themselves. When we look at an image we are able to perceive both the properties of the image and the properties of the objects represented by the image. Research on image quality has typically focused on improving image properties (resolution, dynamic range, frame rate, etc.) while ignoring the issue of whether images are serving their role as visual representations. In this paper we describe a series of experiments that investigate how well images of different quality convey information about the properties of the objects they represent. In the experiments we focus on the effects that two image properties (contrast and sharpness) have on the ability of images to represent the gloss of depicted objects. We found that different experimental methods produced differing results. Specifically, when the stimulus images were presented using simultaneous pair comparison, observers were influenced by the surface properties of the images and conflated changes in image contrast and sharpness with changes in object gloss. On the other hand, when the stimulus images were presented sequentially, observers were able to disregard the image plane properties and more accurately match the gloss of the objects represented by the different quality images. These findings suggest that in understanding image quality it is useful to distinguish between quality of the imaging medium and the quality of the visual information represented by that medium.
Shaw, Leslee J; Blankstein, Ron; Jacobs, Jill E; Leipsic, Jonathon A; Kwong, Raymond Y; Taqueti, Viviany R; Beanlands, Rob S B; Mieres, Jennifer H; Flamm, Scott D; Gerber, Thomas C; Spertus, John; Di Carli, Marcelo F
2017-12-01
The aims of the current statement are to refine the definition of quality in cardiovascular imaging and to propose novel methodological approaches to inform the demonstration of quality in imaging in future clinical trials and registries. We propose defining quality in cardiovascular imaging using an analytical framework put forth by the Institute of Medicine whereby quality was defined as testing being safe, effective, patient-centered, timely, equitable, and efficient. The implications of each of these components of quality health care are as essential for cardiovascular imaging as they are for other areas within health care. Our proposed statement may serve as the foundation for integrating these quality indicators into establishing designations of quality laboratory practices and developing standards for value-based payment reform for imaging services. We also include recommendations for future clinical research to fulfill quality aims within cardiovascular imaging, including clinical hypotheses of improving patient outcomes, the importance of health status as an end point, and deferred testing options. Future research should evolve to define novel methods optimized for the role of cardiovascular imaging for detecting disease and guiding treatment and to demonstrate the role of cardiovascular imaging in facilitating healthcare quality. © 2017 American Heart Association, Inc.
ERIC Educational Resources Information Center
Nenty, H. J.; Adedoyin, O. O.; Odili, John N.; Major, T. E.
2007-01-01
More than any other of its aspects, assessment plays a central role in determining the quality of education. Quality of primary/basic education (QoE) can be viewed as the extent to which the process of education at the primary education level maximizes desirable outcomes in terms of cognitive, affective and psychomotor behaviour of the learners.…
2016-09-01
activity, and self-reported mobility, fatigue, activity restrictions, balance confidence, and satisfaction. Results to date (n=6 of 24 from this study, n...gait quality, energy expenditure, and perceived function and satisfaction are assessed. Participants are then provided the other prosthesis and...walking activity, endurance, walking performance, gait quality, energy expenditure, and perceived function and satisfaction) are compared between
The Ames MER microscopic imager toolkit
Sargent, R.; Deans, Matthew; Kunz, C.; Sims, M.; Herkenhoff, K.
2005-01-01
The Mars Exploration Rovers, Spirit and Opportunity, have spent several successful months on Mars, returning gigabytes of images and spectral data to scientists on Earth. One of the instruments on the MER rovers, the Athena Microscopic Imager (MI), is a fixed focus, megapixel camera providing a ±3 mm depth of field and a 31×31 mm field of view at a working distance of 63 mm from the lens to the object being imaged. In order to maximize the science return from this instrument, we developed the Ames MI Toolkit and supported its use during the primary mission. The MI Toolkit is a set of programs that operate on collections of MI images, with the goal of making the data more understandable to the scientists on the ground. Because of the limited depth of field of the camera, and the often highly variable topography of the terrain being imaged, MI images of a given rock are often taken as a stack, with the Instrument Deployment Device (IDD) moving along a computed normal vector, pausing every few millimeters for the MI to acquire an image. The MI Toolkit provides image registration and focal section merging, which combine these images to form a single, maximally in-focus image, while compensating for changes in lighting as well as parallax due to the motion of the camera. The MI Toolkit also provides a 3-D reconstruction of the surface being imaged using stereo and can embed 2-D MI images as texture maps into 3-D meshes produced by other imagers on board the rover to provide context. The 2-D images and 3-D meshes output from the Toolkit are easily viewed by scientists using other mission tools, such as Viz or the MI Browser. This paper describes the MI Toolkit in detail, as well as our experience using it with scientists at JPL during the primary MER mission. © 2005 IEEE.
The Ames MER Microscopic Imager Toolkit
NASA Technical Reports Server (NTRS)
Sargent, Randy; Deans, Matthew; Kunz, Clayton; Sims, Michael; Herkenhoff, Ken
2005-01-01
The Mars Exploration Rovers, Spirit and Opportunity, have spent several successful months on Mars, returning gigabytes of images and spectral data to scientists on Earth. One of the instruments on the MER rovers, the Athena Microscopic Imager (MI), is a fixed focus, megapixel camera providing a ±3 mm depth of field and a 31×31 mm field of view at a working distance of 63 mm from the lens to the object being imaged. In order to maximize the science return from this instrument, we developed the Ames MI Toolkit and supported its use during the primary mission. The MI Toolkit is a set of programs that operate on collections of MI images, with the goal of making the data more understandable to the scientists on the ground. Because of the limited depth of field of the camera, and the often highly variable topography of the terrain being imaged, MI images of a given rock are often taken as a stack, with the Instrument Deployment Device (IDD) moving along a computed normal vector, pausing every few millimeters for the MI to acquire an image. The MI Toolkit provides image registration and focal section merging, which combine these images to form a single, maximally in-focus image, while compensating for changes in lighting as well as parallax due to the motion of the camera. The MI Toolkit also provides a 3-D reconstruction of the surface being imaged using stereo and can embed 2-D MI images as texture maps into 3-D meshes produced by other imagers on board the rover to provide context. The 2-D images and 3-D meshes output from the Toolkit are easily viewed by scientists using other mission tools, such as Viz or the MI Browser. This paper describes the MI Toolkit in detail, as well as our experience using it with scientists at JPL during the primary MER mission.
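The focal section merging described in both records above — combining a focus stack into a single maximally in-focus image — can be sketched with a per-pixel focus measure. This toy version (absolute discrete Laplacian on grayscale images stored as nested lists) illustrates the general technique only; it is not the MI Toolkit's actual algorithm, which also handles registration, lighting changes, and parallax:

```python
def laplacian_abs(img, x, y):
    """Absolute discrete Laplacian at (x, y) -- a simple per-pixel focus measure."""
    return abs(4 * img[y][x]
               - img[y - 1][x] - img[y + 1][x]
               - img[y][x - 1] - img[y][x + 1])

def merge_focal_stack(stack):
    """For each interior pixel, keep the value from the slice that is sharpest
    there; border pixels default to the first slice in this sketch."""
    h, w = len(stack[0]), len(stack[0][0])
    merged = [row[:] for row in stack[0]]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            best = max(stack, key=lambda img: laplacian_abs(img, x, y))
            merged[y][x] = best[y][x]
    return merged

# Invented two-slice stack: a flat (defocused) slice and a centre-sharp slice.
blur = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]
sharp = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
merged = merge_focal_stack([blur, sharp])  # centre pixel comes from `sharp`
```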
Objective quality assessment for multiexposure multifocus image fusion.
Hassen, Rania; Wang, Zhou; Salama, Magdy M A
2015-09-01
There has been a growing interest in image fusion technologies, but how to objectively evaluate the quality of fused images has not been fully understood. Here, we propose a method for objective quality assessment of multiexposure multifocus image fusion based on the evaluation of three key factors of fused image quality: 1) contrast preservation; 2) sharpness; and 3) structure preservation. Subjective experiments are conducted to create an image fusion database, based on which, performance evaluation shows that the proposed fusion quality index correlates well with subjective scores, and gives a significant improvement over the existing fusion quality measures.
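Of the three factors above, sharpness is commonly proxied by gradient energy. A minimal sketch of a sharpness-preservation factor (the fused image's average gradient relative to the best input's, capped at 1), with invented 2×2 images; this illustrates the idea only and is not the proposed fusion quality index:

```python
def avg_gradient(img):
    """Mean absolute horizontal + vertical gradient -- a crude sharpness proxy."""
    h, w = len(img), len(img[0])
    total, n = 0, 0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                total += abs(img[y][x + 1] - img[y][x])
                n += 1
            if y + 1 < h:
                total += abs(img[y + 1][x] - img[y][x])
                n += 1
    return total / n

def sharpness_preservation(fused, inputs):
    """Ratio of the fused image's sharpness to the sharpest input's, capped at 1."""
    best = max(avg_gradient(im) for im in inputs)
    return min(avg_gradient(fused) / best, 1.0)

# Invented 2x2 grayscale images: one high-contrast input, one flat input.
sharp_img = [[0, 9], [9, 0]]
flat_img = [[5, 5], [5, 5]]
score = sharpness_preservation(sharp_img, [sharp_img, flat_img])  # → 1.0
```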
Perceptual quality prediction on authentically distorted images using a bag of features approach
Ghadiyaram, Deepti; Bovik, Alan C.
2017-01-01
Current top-performing blind perceptual image quality prediction models are generally trained on legacy databases of human quality opinion scores on synthetically distorted images. Therefore, they learn image features that effectively predict human visual quality judgments of inauthentic and usually isolated (single) distortions. However, real-world images usually contain complex composite mixtures of multiple distortions. We study the perceptually relevant natural scene statistics of such authentically distorted images in different color spaces and transform domains. We propose a “bag of feature maps” approach that avoids assumptions about the type of distortion(s) contained in an image and instead focuses on capturing consistencies—or departures therefrom—of the statistics of real-world images. Using a large database of authentically distorted images, human opinions of them, and bags of features computed on them, we train a regressor to conduct image quality prediction. We demonstrate the competence of the features toward improving automatic perceptual quality prediction by testing a learned algorithm using them on a benchmark legacy database as well as on a newly introduced distortion-realistic resource called the LIVE In the Wild Image Quality Challenge Database. We extensively evaluate the perceptual quality prediction model and algorithm and show that it is able to achieve good-quality prediction power that is better than other leading models. PMID:28129417
Depletion of deep marine food patches forces divers to give up early.
Thums, Michele; Bradshaw, Corey J A; Sumner, Michael D; Horsburgh, Judy M; Hindell, Mark A
2013-01-01
Many optimal foraging models for diving animals examine strategies that maximize time spent in the foraging zone, assuming that prey acquisition increases linearly with search time. Other models have considered the effect of patch quality and predict a net energetic benefit if dives where no prey is encountered early in the dive are abandoned. For deep divers, however, the energetic benefit of giving up is reduced owing to the elevated energy costs associated with descending to physiologically hostile depths, so patch residence time should be invariant. Others consider an asymptotic gain function where the decision to leave a patch is driven by patch-depletion effects - the marginal value theorem. As predator behaviour is increasingly being used as an index of marine resource density and distribution, it is important to understand the nature of this gain function. We investigated the dive behaviour of the world's deepest-diving seal, the southern elephant seal Mirounga leonina, in response to patch quality. Testing these models has largely been limited to controlled experiments on captive animals. By integrating in situ measurements of the seal's relative lipid content obtained from drift rate data (a measure of foraging success) with area-restricted search behaviour identified from first-passage time analysis, we identified regions of high- and low-quality patches. Dive durations and bottom times were not invariant and did not increase in regions of high quality; rather, both were longer when patches were of relatively low quality. This is consistent with the predictions of the marginal value theorem and provides support for a nonlinear relationship between search time and prey acquisition. We also found higher descent and ascent rates in high-quality patches suggesting that seals minimized travel time to the foraging patch when quality was high; however, this was not achieved by increasing speed or dive angle. 
Relative body lipid content was an important predictor of dive behaviour. Seals did not schedule their diving to maximize time spent in the foraging zone in higher-quality patches, challenging the widely held view that maximizing time in the foraging zone translates to greater foraging success. © 2012 The Authors. Journal of Animal Ecology © 2012 British Ecological Society.
Temporal Tuning of Word- and Face-selective Cortex.
Yeatman, Jason D; Norcia, Anthony M
2016-11-01
Sensitivity to temporal change places fundamental limits on object processing in the visual system. An emerging consensus from the behavioral and neuroimaging literature suggests that temporal resolution differs substantially for stimuli of different complexity and for brain areas at different levels of the cortical hierarchy. Here, we used steady-state visually evoked potentials to directly measure three fundamental parameters that characterize the underlying neural response to text and face images: temporal resolution, peak temporal frequency, and response latency. We presented full-screen images of text or a human face, alternated with a scrambled image, at temporal frequencies between 1 and 12 Hz. These images elicited a robust response at the first harmonic that showed differential tuning, scalp topography, and delay for the text and face images. Face-selective responses were maximal at 4 Hz, but text-selective responses, by contrast, were maximal at 1 Hz. The topography of the text image response was strongly left-lateralized at higher stimulation rates, whereas the response to the face image was slightly right-lateralized but nearly bilateral at all frequencies. Both text and face images elicited steady-state activity at more than one apparent latency; we observed early (141-160 msec) and late (>250 msec) text- and face-selective responses. These differences in temporal tuning profiles are likely to reflect differences in the nature of the computations performed by word- and face-selective cortex. Despite the close proximity of word- and face-selective regions on the cortical surface, our measurements demonstrate substantial differences in the temporal dynamics of word- versus face-selective responses.
General form of a cooperative gradual maximal covering location problem
NASA Astrophysics Data System (ADS)
Bagherinejad, Jafar; Bashiri, Mahdi; Nikzad, Hamideh
2018-07-01
Cooperative and gradual covering are two new methods for developing covering location models. In this paper, a cooperative maximal covering location-allocation model is developed (CMCLAP). In addition, both cooperative and gradual covering concepts are applied to the maximal covering location simultaneously (CGMCLP). Then, we develop an integrated form of a cooperative gradual maximal covering location problem, which is called a general CGMCLP. By setting the model parameters, the proposed general model can easily be transformed into other existing models, facilitating general comparisons. The proposed models are developed without allocation for physical signals and with allocation for non-physical signals in discrete location space. Comparison of the previously introduced gradual maximal covering location problem (GMCLP) and cooperative maximal covering location problem (CMCLP) models with our proposed CGMCLP model in similar data sets shows that the proposed model can cover more demands and acts more efficiently. Sensitivity analyses are performed to show the effect of related parameters and the model's validity. Simulated annealing (SA) and a tabu search (TS) are proposed as solution algorithms for the developed models for large-sized instances. The results show that the proposed algorithms are efficient solution approaches, considering solution quality and running time.
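As a rough illustration of the cooperative-gradual coverage idea summarized above, the sketch below combines a linearly decaying (gradual) signal with summed (cooperative) contributions against a coverage threshold. The function names, the linear decay rule, and the threshold model are illustrative assumptions, not taken from the paper.

```python
import math

def gradual_signal(dist, r1, r2):
    """Gradual coverage: full signal within radius r1, linear decay to zero at r2."""
    if dist <= r1:
        return 1.0
    if dist >= r2:
        return 0.0
    return (r2 - dist) / (r2 - r1)

def covered_demand(facilities, demands, r1, r2, threshold):
    """Cooperative coverage: a demand point counts as covered when the
    summed signal from all facilities reaches the threshold.
    facilities: list of (x, y); demands: list of (x, y, weight)."""
    total = 0.0
    for (dx, dy, w) in demands:
        signal = sum(gradual_signal(math.hypot(dx - fx, dy - fy), r1, r2)
                     for (fx, fy) in facilities)
        if signal >= threshold:
            total += w
    return total
```

A metaheuristic such as simulated annealing would then search over facility locations to maximize `covered_demand`.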
Lüpold, Stefan; Wistuba, Joachim; Damm, Oliver S; Rivers, James W; Birkhead, Tim R
2011-05-01
The outcome of sperm competition (i.e. competition for fertilization between ejaculates from different males) is primarily determined by the relative number and quality of rival sperm. Therefore, the testes are under strong selection to maximize both sperm number and quality, which are likely to result in trade-offs in the process of spermatogenesis (e.g. between the rate of spermatogenesis and sperm length or sperm energetics). Comparative studies have shown positive associations between the level of sperm competition and both relative testis size and the proportion of seminiferous (sperm-producing) tissue within the testes. However, it is unknown how the seminiferous tissue itself or the process of spermatogenesis might evolve in response to sperm competition. Therefore, we quantified the different germ cell types and Sertoli cells (SC) in testes to assess the efficiency of sperm production and its associations with sperm length and mating system across 10 species of New World Blackbirds (Icteridae) that show marked variation in sperm length and sperm competition level. We found that species under strong sperm competition generate more round spermatids (RS)/spermatogonium and have SC that support a greater number of germ cells, both of which are likely to increase the maximum sperm output. However, fewer of the RS appeared to elongate to mature spermatozoa in these species, which might be the result of selection for discarding spermatids with undesirable characteristics as they develop. Our results suggest that, in addition to overall size and gross morphology, testes have also evolved functional adaptations to maximize sperm quantity and quality.
Guidance for Efficient Small Animal Imaging Quality Control.
Osborne, Dustin R; Kuntner, Claudia; Berr, Stuart; Stout, David
2017-08-01
Routine quality control is a critical aspect of properly maintaining high-performance small animal imaging instrumentation. A robust quality control program helps produce more reliable data both for academic purposes and as proof of system performance for contract imaging work. For preclinical imaging laboratories, the combination of costs and available resources often limits their ability to produce efficient and effective quality control programs. This work presents a series of simplified quality control procedures that are accessible to a wide range of preclinical imaging laboratories. Our intent is to provide minimum guidelines for routine quality control that can assist preclinical imaging specialists in setting up an appropriate quality control program for their facility.
Feng, Sheng; Lotz, Thomas; Chase, J Geoffrey; Hann, Christopher E
2010-01-01
Digital Image Elasto Tomography (DIET) is a non-invasive elastographic breast cancer screening technology, based on image-based measurement of surface vibrations induced on a breast by mechanical actuation. Knowledge of frequency response characteristics of a breast prior to imaging is critical to maximize the imaging signal and diagnostic capability of the system. A feasibility analysis for a non-invasive image based modal analysis system is presented that is able to robustly and rapidly identify resonant frequencies in soft tissue. Three images per oscillation cycle are enough to capture the behavior at a given frequency. Thus, a sweep over critical frequency ranges can be performed prior to imaging to determine critical imaging settings of the DIET system to optimize its tumor detection performance.
Covariance estimation in Terms of Stokes Parameters with Application to Vector Sensor Imaging
2016-12-15
The Effects of Towfish Motion on Sidescan Sonar Images: Extension to a Multiple-Beam Device
1994-02-01
simulation, the raw simulated sidescan image is formed from pixels G, which are the sum of energies E assigned to the nearest range-bin k as noted in ... for stable motion at constant velocity V0, are applied to (divided into) the G, and the simulated sidescan image is ready to display. Maximal energy ... limitation is likely to apply to all multiple-beam sonars of similar construction. The yaw correction was incorporated in the MBEAM model by an
Analyzing Sub-Classifications of Glaucoma via SOM Based Clustering of Optic Nerve Images.
Yan, Sanjun; Abidi, Syed Sibte Raza; Artes, Paul Habib
2005-01-01
We present a data mining framework to cluster optic nerve images obtained by Confocal Scanning Laser Tomography (CSLT) in normal subjects and patients with glaucoma. We use self-organizing maps and expectation maximization methods to partition the data into clusters that provide insights into potential sub-classification of glaucoma based on morphological features. We conclude that our approach provides a first step towards a better understanding of morphological features in optic nerve images obtained from glaucoma patients and healthy controls.
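The expectation-maximization step used (alongside self-organizing maps) in the clustering framework above can be sketched with a minimal two-component, one-dimensional Gaussian mixture. This is a generic EM illustration, not the authors' implementation; the initialization and iteration count are arbitrary choices.

```python
import math

def em_gmm_1d(xs, iters=50):
    """Fit a two-component 1-D Gaussian mixture by EM.
    Returns (means, variances, mixing weights)."""
    mu = [min(xs), max(xs)]          # crude initialization at the extremes
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in xs:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate parameters from the responsibilities
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(1e-6, sum(r[k] * (x - mu[k]) ** 2
                                   for r, x in zip(resp, xs)) / nk)
            pi[k] = nk / len(xs)
    return mu, var, pi
```

On well-separated data the two recovered means land on the two cluster centers; in the paper, the analogous step partitions morphological feature vectors rather than scalars.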
Task-based measures of image quality and their relation to radiation dose and patient risk
Barrett, Harrison H.; Myers, Kyle J.; Hoeschen, Christoph; Kupinski, Matthew A.; Little, Mark P.
2015-01-01
The theory of task-based assessment of image quality is reviewed in the context of imaging with ionizing radiation, and objective figures of merit (FOMs) for image quality are summarized. The variation of the FOMs with the task, the observer and especially with the mean number of photons recorded in the image is discussed. Then various standard methods for specifying radiation dose are reviewed and related to the mean number of photons in the image and hence to image quality. Current knowledge of the relation between local radiation dose and the risk of various adverse effects is summarized, and some graphical depictions of the tradeoffs between image quality and risk are introduced. Then various dose-reduction strategies are discussed in terms of their effect on task-based measures of image quality. PMID:25564960
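One widely used figure of merit from this task-based framework (and the one applied in the multi-layer imager study in this collection) is the ideal-observer SNR d'. For a known signal in stationary Gaussian noise it reduces to a sum over frequency bins, d'^2 = sum |S(f)|^2 / NPS(f). A one-dimensional sketch, with an illustrative function name:

```python
def dprime_npw(signal_spectrum, nps):
    """Prewhitening ideal-observer SNR for a known signal in stationary
    Gaussian noise: d'^2 = sum over frequency bins of |S(f)|^2 / NPS(f)."""
    d2 = sum(abs(s) ** 2 / n for s, n in zip(signal_spectrum, nps))
    return d2 ** 0.5
```

In practice S(f) folds in the task contrast and the system MTF, and NPS(f) is the measured noise power spectrum.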
No-reference multiscale blur detection tool for content based image retrieval
NASA Astrophysics Data System (ADS)
Ezekiel, Soundararajan; Stocker, Russell; Harrity, Kyle; Alford, Mark; Ferris, David; Blasch, Erik; Gorniak, Mark
2014-06-01
In recent years, digital cameras have been widely used for image capturing. These devices are built into cell phones, laptops, tablets, webcams, etc. Image quality is an important component of digital image analysis. To assess image quality for these mobile products, a standard image is required as a reference image. In this case, Root Mean Square Error and Peak Signal to Noise Ratio can be used to measure the quality of the images. However, these methods are not possible if there is no reference image. In our approach, a discrete wavelet transformation is applied to the blurred image, which decomposes it into the approximate image and three detail sub-images, namely the horizontal, vertical, and diagonal images. We then measure noise in the detail sub-images and blur in the approximate image to assess the image quality. We then compute a noise mean and noise ratio from the detail images, and a blur mean and blur ratio from the approximate image. The Multi-scale Blur Detection (MBD) metric provides an assessment of both the noise and blur content. These values are weighted based on a linear regression against full-reference quality values. From these statistics, we can compare against the statistics of typical images to assess image quality without needing a reference image. We then test the validity of the obtained weights by R2 analysis, as well as by using them to estimate the quality of an image with a known quality measure. The results show that our method provides acceptable results for images containing low to mid noise levels and blur content.
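The decomposition step described above can be sketched with a one-level Haar transform, a stand-in for whichever discrete wavelet the authors used; the normalization and the summary statistic here are illustrative, not the paper's exact MBD weighting.

```python
def haar_dwt2(img):
    """One-level 2-D Haar transform of a 2-D list of gray values.
    Returns the approximation (LL) and the horizontal, vertical, and
    diagonal detail sub-images."""
    h, w = len(img), len(img[0])
    LL, LH, HL, HH = [], [], [], []
    for i in range(0, h - 1, 2):
        ll, lh, hl, hh = [], [], [], []
        for j in range(0, w - 1, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            ll.append((a + b + c + d) / 4)
            lh.append((a - b + c - d) / 4)   # horizontal detail
            hl.append((a + b - c - d) / 4)   # vertical detail
            hh.append((a - b - c + d) / 4)   # diagonal detail
        LL.append(ll); LH.append(lh); HL.append(hl); HH.append(hh)
    return LL, LH, HL, HH

def mean_abs(sub):
    """Mean absolute value of a sub-image, a simple noise/blur summary."""
    vals = [abs(v) for row in sub for v in row]
    return sum(vals) / len(vals)
```

Noise statistics would be aggregated from the three detail sub-images and blur statistics from the approximation, as in the MBD scheme above.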
Rodríguez-Olivares, Ramón; El Faquir, Nahid; Rahhab, Zouhair; Maugenest, Anne-Marie; Van Mieghem, Nicolas M; Schultz, Carl; Lauritsch, Guenter; de Jaegere, Peter P T
2016-07-01
To study the determinants of image quality of rotational angiography using dedicated research prototype software for motion compensation without rapid ventricular pacing after the implantation of four commercially available catheter-based valves. Prospective observational study including 179 consecutive patients who underwent transcatheter aortic valve implantation (TAVI) with either the Medtronic CoreValve (MCS), Edwards SAPIEN valve (ESV), Boston Sadra Lotus (BSL) or St. Jude Portico valve (SJP), in whom rotational angiography (R-angio) with motion-compensated 3D image reconstruction was performed. Image quality was graded from 1 (excellent image quality) to 5 (strongly degraded). A distinction was made between good (grades 1-2) and poor image quality (grades 3-5). Clinical (gender, body mass index, Agatston score, heart rate and rhythm, artifacts), procedural (valve type) and technical variables (isocentricity) were related to the image quality assessment. Image quality was good in 128 (72%) and poor in 51 (28%) patients. By univariable analysis, only valve type (BSL) and the presence of an artifact negatively affected image quality. By multivariable analysis (in which BMI was forced into the model), the BSL valve (odds ratio 3.5, 95% CI [1.3-9.6], p = 0.02), presence of an artifact (odds ratio 2.5, 95% CI [1.2-5.4], p = 0.02) and BMI (odds ratio 1.1, 95% CI [1.0-1.2], p = 0.04) were independent predictors of poor image quality. Rotational angiography with motion-compensated 3D image reconstruction using dedicated research prototype software offers good image quality for the evaluation of frame geometry after TAVI in the majority of patients. Valve type, presence of artifacts and higher BMI negatively affect image quality.
Fuzzy control system for a remote focusing microscope
NASA Astrophysics Data System (ADS)
Weiss, Jonathan J.; Tran, Luc P.
1992-01-01
Space Station Crew Health Care System procedures require the use of an on-board microscope whose slide images will be transmitted for analysis by ground-based microbiologists. Focusing of microscope slides is low on the list of crew priorities, so NASA is investigating the option of telerobotic focusing controlled by the microbiologist on the ground, using continuous video feedback. However, even at Space Station distances, the transmission time lag may disrupt the focusing process, severely limiting the number of slides that can be analyzed within a given bandwidth allocation. Substantial time could be saved if on-board automation could pre-focus each slide before transmission. The authors demonstrate the feasibility of on-board automatic focusing using a fuzzy logic rule-based system to bring the slide image into focus. The original prototype system was produced in under two months and at low cost. Slide images are captured by a video camera, then digitized by gray-scale value. A software function calculates an index of 'sharpness' based on gray-scale contrasts. The fuzzy logic rule-based system uses feedback to set the microscope's focusing control in an attempt to maximize sharpness. The system as currently implemented performs satisfactorily in focusing a variety of slide types at magnification levels ranging from 10x to 1000x. Although feasibility has been demonstrated, the system's performance and usability could be improved substantially in four ways: by upgrading the quality and resolution of the video imaging system (including the use of full color); by empirically defining and calibrating the index of image sharpness; by letting the overall focusing strategy vary depending on user-specified parameters; and by fine-tuning the fuzzy rules, set definitions, and procedures used.
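The feedback loop above can be approximated even without the fuzzy rule base. This sketch deliberately replaces the fuzzy controller with a plain coarse search over focus settings and uses a simple adjacent-pixel contrast measure as the sharpness index; all names and the `capture` interface are hypothetical.

```python
def sharpness(img):
    """Illustrative sharpness index: mean absolute gray-level difference
    between horizontally adjacent pixels (a simple contrast measure)."""
    diffs = [abs(row[j + 1] - row[j]) for row in img for j in range(len(row) - 1)]
    return sum(diffs) / len(diffs)

def autofocus(capture, lo, hi, steps=20):
    """Coarse search over focus settings in [lo, hi], maximizing the
    sharpness index. `capture(setting)` stands in for grabbing a video
    frame at that focus setting."""
    best_s, best_val = lo, -1.0
    for k in range(steps + 1):
        s = lo + (hi - lo) * k / steps
        v = sharpness(capture(s))
        if v > best_val:
            best_s, best_val = s, v
    return best_s
```

A fuzzy controller replaces the fixed grid with rules that take large steps when the image is very blurry and small steps near the peak.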
Yunlong, Bai; Hao, Huang; Kai, Yang; Hong, Tang
2014-10-01
To investigate in situ visualization using near-infrared quantum dots (QDs) conjugated with arginine-glycine-aspartic acid (RGD) peptide fluorescent probes in oral squamous cell carcinoma (OSCC). QDs with an emission wavelength of 800 nm (QD800) were conjugated with RGD peptides to produce QD800-RGD fluorescent probes. The human OSCC cell line BcaCD885 was inoculated into nude mice cheeks to establish OSCC mouse models. Frozen BcaCD885 tumor slices were immunofluorescence double-stained using QD800-RGD and CD105 monoclonal antibody and were observed using a laser scanning confocal microscope. QD800-RGD was injected into the OSCC models through the tail veins, and in situ visualization was analyzed at different time points. The mice were sacrificed 12 h after injection to isolate tumors for ex vivo analysis of probe localization in the tumors. QD800-RGD specifically targeted the integrin αvβ3 expressed in the endothelial cells of tumor angiogenic vessels in vitro and in vivo, producing clear tumor fluorescence images after intravenous injection. The most complete tumor images with maximal signal-to-noise ratios were observed 0.5 h to 6 h after injection of the probe and were significantly reduced 9 h after the injection. However, the tumor image was still clearly visible at 12 h. Intravenously injected QD800-RGD generates high-quality OSCC images when integrin αvβ3, which is expressed in the endothelial cells of tumor angiogenic vessels, is used as the target. The technique offers great potential in the diagnosis and individualized treatment of OSCC.
Digital processing of radiographic images from PACS to publishing.
Christian, M E; Davidson, H C; Wiggins, R H; Berges, G; Cannon, G; Jackson, G; Chapman, B; Harnsberger, H R
2001-03-01
Several studies have addressed the implications of filmless radiologic imaging on telemedicine, diagnostic ability, and electronic teaching files. However, many publishers still require authors to submit hard-copy images for publication of articles and textbooks. This study compares the quality of digital images directly exported from picture archive and communications systems (PACS) to images digitized from radiographic film. The authors evaluated the quality of publication-grade glossy photographs produced from digital radiographic images using 3 different methods: (1) film images digitized using a desktop scanner and then printed, (2) digital images obtained directly from PACS and then printed, and (3) digital images obtained from PACS and processed to improve sharpness prior to printing. Twenty images were printed using each of the 3 different methods and rated for quality by 7 radiologists. The results were analyzed for statistically significant differences among the image sets. Subjective evaluations of the filmless images found them to be of equal or better quality than the digitized images. Direct electronic transfer of PACS images reduces the number of steps involved in creating publication-quality images as well as providing the means to produce high-quality radiographic images in a digital environment.
Human visual system consistent quality assessment for remote sensing image fusion
NASA Astrophysics Data System (ADS)
Liu, Jun; Huang, Junyi; Liu, Shuguang; Li, Huali; Zhou, Qiming; Liu, Junchen
2015-07-01
Quality assessment for image fusion is essential for remote sensing application. Generally used indices require a high spatial resolution multispectral (MS) image for reference, which is not always readily available. Meanwhile, the fusion quality assessments using these indices may not be consistent with the Human Visual System (HVS). As an attempt to overcome this requirement and inconsistency, this paper proposes an HVS-consistent image fusion quality assessment index at the highest resolution without a reference MS image using Gaussian Scale Space (GSS) technology that could simulate the HVS. The spatial details and spectral information of original and fused images are first separated in GSS, and the qualities are evaluated using the proposed spatial and spectral quality index respectively. The overall quality is determined without a reference MS image by a combination of the proposed two indices. Experimental results on various remote sensing images indicate that the proposed index is more consistent with HVS evaluation compared with other widely used indices that may or may not require reference images.
Low-cost oblique illumination: an image quality assessment.
Ruiz-Santaquiteria, Jesus; Espinosa-Aranda, Jose Luis; Deniz, Oscar; Sanchez, Carlos; Borrego-Ramos, Maria; Blanco, Saul; Cristobal, Gabriel; Bueno, Gloria
2018-01-01
We study the effectiveness of several low-cost oblique illumination filters for improving overall image quality, in comparison with standard bright-field imaging. For this purpose, a dataset composed of 3360 diatom images belonging to 21 taxa was acquired. Subjective and objective image quality assessments were performed. The subjective evaluation was carried out by a group of diatom experts via a psychophysical test in which resolution, focus, and contrast were assessed. Moreover, several objective no-reference image quality metrics were applied to the same image dataset to complete the study, together with the calculation of several texture features to analyze the effect of these filters in terms of textural properties. Both image quality evaluation methods, subjective and objective, showed better results for images acquired using these illumination filters than for the unfiltered image. These promising results confirm that this kind of illumination filter can be a practical way to improve image quality, thanks to the simplicity and low cost of the design and manufacturing process. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Lindholm, E; Brevinge, H; Bergh, C H; Körner, U; Lundholm, K
2003-08-01
The purpose of this study was to evaluate to what extent self-reported health-related quality of life (HRQL), assessed by the Swedish standard version of the Medical Outcome Study Short-Form 36 (SF-36), is related to measured exercise capacity and metabolic efficiency in a cohort of healthy subjects from the Gothenburg area of Sweden. Individuals were invited to take part in the evaluation, in which HRQL was compared with the maximal power output, expressed in Watts, assessed during a standardized treadmill test with incremental work loads. Whole-body respiratory gas exchanges (CO2/O2) were simultaneously measured. An estimate of metabolic efficiency was derived from oxygen uptake per Watt produced (ml O2/min/W) near maximal work. The health status profile in the current population largely agreed with normative data from an age- and gender-matched reference group, although some measured scores were slightly better than reference scores. Males and females had a similar relationship between energy cost (ml O2/min) and maximal work produced (W), while the regressions for maximal exercise power and age were significantly different between males and females (p < 0.01). The overall metabolic efficiency was the same in individuals between 40 and 74 years of age (10.4 +/- 0.07 ml O2/min/Watt). Maximal exercise power was related only to the SF-36 subscale physical functioning (PF), but unrelated to other physical subscales such as role limitations due to physical problems, good general health and vitality. There was also a discrepancy between measured maximal power and PF in many subjects, particularly in males, who experienced either intact or severely reduced PF. Our results demonstrate that simultaneous measurements of self-reported and objective measures of PF should add a more integrated view for evaluation of therapeutic effectiveness, since the overall correlation was poor between objective and subjective scores among individuals.
NASA Astrophysics Data System (ADS)
Tingberg, Anders Martin
Optimisation in diagnostic radiology requires accurate methods for determination of patient absorbed dose and clinical image quality. Simple methods for evaluation of clinical image quality are at present scarce and this project aims at developing such methods. Two methods are used and further developed; fulfillment of image criteria (IC) and visual grading analysis (VGA). Clinical image quality descriptors are defined based on these two methods: image criteria score (ICS) and visual grading analysis score (VGAS), respectively. For both methods the basis is the Image Criteria of the ``European Guidelines on Quality Criteria for Diagnostic Radiographic Images''. Both methods have proved to be useful for evaluation of clinical image quality. The two methods complement each other: IC is an absolute method, which means that the quality of images of different patients and produced with different radiographic techniques can be compared with each other. The separating power of IC is, however, weaker than that of VGA. VGA is the best method for comparing images produced with different radiographic techniques and has strong separating power, but the results are relative, since the quality of an image is compared to the quality of a reference image. The usefulness of the two methods has been verified by comparing the results from both of them with results from a generally accepted method for evaluation of clinical image quality, receiver operating characteristics (ROC). The results of the comparison between the two methods based on visibility of anatomical structures and the method based on detection of pathological structures (free-response forced error) indicate that the former two methods can be used for evaluation of clinical image quality as efficiently as the method based on ROC. More studies are, however, needed for us to be able to draw a general conclusion, including studies of other organs, using other radiographic techniques, etc. 
The results of the experimental evaluation of clinical image quality are compared with physical quantities calculated with a theoretical model based on a voxel phantom, and correlations are found. The results demonstrate that the computer model can be a useful tool in planning further experimental studies.
Reconstructing liver shape and position from MR image slices using an active shape model
NASA Astrophysics Data System (ADS)
Fenchel, Matthias; Thesen, Stefan; Schilling, Andreas
2008-03-01
We present an algorithm for fully automatic reconstruction of 3D position, orientation and shape of the human liver from a sparsely covering set of n 2D MR slice images. Reconstructing the shape of an organ from slice images can be used for scan planning, for surgical planning or other purposes where 3D anatomical knowledge has to be inferred from sparse slices. The algorithm is based on adapting an active shape model of the liver surface to a given set of slice images. The active shape model is created from a training set of liver segmentations from a group of volunteers. The training set is set up with semi-manual segmentations of T1-weighted volumetric MR images. Searching for the optimal shape model that best fits to the image data is done by maximizing a similarity measure based on local appearance at the surface. Two different algorithms for the active shape model search are proposed and compared: both algorithms seek to maximize the a-posteriori probability of the grey level appearance around the surface while constraining the surface to the space of valid shapes. The first algorithm works by using grey value profile statistics in normal direction. The second algorithm uses average and variance images to calculate the local surface appearance on the fly. Both algorithms are validated by fitting the active shape model to abdominal 2D slice images and comparing the shapes, which have been reconstructed, to the manual segmentations and to the results of active shape model searches from 3D image data. The results turn out to be promising and competitive to active shape model segmentations from 3D data.
Research on assessment and improvement method of remote sensing image reconstruction
NASA Astrophysics Data System (ADS)
Sun, Li; Hua, Nian; Yu, Yanbo; Zhao, Zhanping
2018-01-01
Remote sensing image quality assessment and improvement is an important part of image processing. Generally, the use of compressive sampling theory in a remote sensing imaging system can compress images while sampling, which improves efficiency. In this paper, a two-dimensional principal component analysis (2DPCA) method is proposed to reconstruct the remote sensing image and improve the quality of the compressed image; it retains the useful information in the image while suppressing noise. Then, the factors influencing remote sensing image quality are analyzed, and evaluation parameters for quantitative assessment are introduced. On this basis, the quality of the reconstructed images is evaluated and the influence of the different factors on the reconstruction is analyzed, providing meaningful reference data for enhancing the quality of remote sensing images. The experimental results show that the evaluation results are consistent with human visual perception, and the proposed method has good application value in the field of remote sensing image processing.
Observation sequences and onboard data processing of Planet-C
NASA Astrophysics Data System (ADS)
Suzuki, M.; Imamura, T.; Nakamura, M.; Ishi, N.; Ueno, M.; Hihara, H.; Abe, T.; Yamada, T.
Planet-C, or VCO (Venus Climate Orbiter), will carry 5 cameras covering the UV-IR region: IR1 (1-micrometer IR camera), IR2 (2-micrometer IR camera), UVI (UV Imager), LIR (long-IR camera), and LAC (Lightning and Airglow Camera), to investigate the atmospheric dynamics of Venus. During the 30-hr orbit, designed to quasi-synchronize with the super-rotation of the Venus atmosphere, 3 groups of scientific observations will be carried out: (i) image acquisition with 4 cameras (IR1, IR2, UVI, LIR; 20 min in 2 hrs); (ii) LAC operation, only when VCO is within the Venus shadow; and (iii) radio occultation. These observation sequences will define the scientific outputs of the VCO program, but the sequences must be reconciled with command, telemetry downlink, thermal, and power conditions. To maximize science data downlink, the data must be well compressed, and the compression efficiency and image quality have significant scientific importance in the VCO program. Images from the 4 cameras (IR1, IR2, and UVI: 1K x 1K; LIR: 240 x 240) will be compressed using the JPEG2000 (J2K) standard. J2K was selected because of (a) no block noise, (b) efficiency, (c) both reversible and irreversible modes, (d) patent/royalty-free status, and (e) existing implementations as academic and commercial software, ICs, and ASIC logic designs. Data compression efficiencies of J2K are about 0.3 (reversible) and 0.1 to 0.01 (irreversible). The DE (Digital Electronics) unit, which controls the 4 cameras and handles onboard data processing and compression, is in the concept design stage. It is concluded that the J2K data compression logic circuits using space
Earth Observations taken by Expedition 34 crewmember
2013-01-05
ISS034-E-024622 (5 Jan. 2013) --- Polar mesospheric clouds over the South Pacific Ocean are featured in this image photographed by an Expedition 34 crew member on the International Space Station. Polar mesospheric clouds—also known as noctilucent, or “night shining” clouds—are formed 76 to 85 kilometers above Earth’s surface near the mesosphere-thermosphere boundary of the atmosphere, a region known as the mesopause. At these altitudes, water vapor can freeze into clouds of ice crystals. When the sun is below the horizon such that the ground is in darkness, these high clouds may still be illuminated—lending them their ethereal, “night shining” qualities. Noctilucent clouds have been observed from all human vantage points in both the Northern and Southern Hemispheres – from the surface, in aircraft, and in orbit from the space station—and tend to be most visible during the late spring and early summer seasons. Polar mesospheric clouds also are of interest to scientists studying the atmosphere. While some scientists seek to understand their mechanisms of formation, others have identified them as potential indicators of atmospheric changes resulting from increases in greenhouse gas concentrations. This photograph was taken when the station was over the Pacific Ocean south of French Polynesia. While most polar mesospheric cloud images are taken from the orbital complex with a relatively short focal-length lens to maximize the field of view, this image was taken with a long lens (400 mm), allowing additional detail of the cloud forms to be seen. Below the brightly lit noctilucent clouds in the center of the image, the pale orange band indicates the stratosphere.
Thomas, Christoph; Brodoefel, Harald; Tsiflikas, Ilias; Bruckner, Friederike; Reimann, Anja; Ketelsen, Dominik; Drosch, Tanja; Claussen, Claus D; Kopp, Andreas; Heuschmid, Martin; Burgstahler, Christof
2010-02-01
To prospectively evaluate the influence of the clinical pretest probability, assessed by the Morise score, on image quality and diagnostic accuracy in coronary dual-source computed tomography angiography (DSCTA). In 61 patients, DSCTA and invasive coronary angiography were performed. Subjective image quality and accuracy for stenosis detection (>50%) of DSCTA, with invasive coronary angiography as the gold standard, were evaluated. The influence of pretest probability on image quality and accuracy was assessed by logistic regression and chi-square testing. Correlations of image quality and accuracy with the Morise score were determined using linear regression. Thirty-eight patients were categorized into the high, 21 into the intermediate, and 2 into the low probability group. Accuracies for the detection of significant stenoses were 0.94, 0.97, and 1.00, respectively. Logistic regressions and chi-square tests showed statistically significant correlations between the Morise score and image quality (P < .0001 and P < .001) and accuracy (P = .0049 and P = .027). Linear regression revealed a cutoff Morise score of 16 for good image quality, and a cutoff for barely diagnostic image quality beyond the upper end of the Morise scale. Pretest probability is a weak predictor of image quality and diagnostic accuracy in coronary DSCTA. Sufficient image quality for diagnostic images can be reached at all pretest probabilities. Therefore, coronary DSCTA might also be suitable for patients with a high pretest probability. Copyright 2010 AUR. Published by Elsevier Inc. All rights reserved.
Modified-BRISQUE as no reference image quality assessment for structural MR images.
Chow, Li Sze; Rajagopal, Heshalini
2017-11-01
An effective and practical Image Quality Assessment (IQA) model is needed to assess the image quality produced by any new hardware or software in MRI. A highly competitive no-reference IQA (NR-IQA) model, the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE), initially designed for natural images, was modified to evaluate structural MR images. The BRISQUE model measures image quality using locally normalized luminance coefficients, from which the image features are calculated. The modified-BRISQUE model trained a new regression model using MR image features and Difference Mean Opinion Scores (DMOS) from 775 MR images. Two types of benchmarks, objective and subjective assessments, were used as performance evaluators for both the original and modified BRISQUE models. The modified-BRISQUE correlated highly with both benchmarks, and its correlations were higher than those of the original BRISQUE, a significant percentage improvement. The modified-BRISQUE was statistically better than the original BRISQUE, and can accurately measure the image quality of MR images. It is a practical NR-IQA model for MR images that requires no reference images. Copyright © 2017 Elsevier Inc. All rights reserved.
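Both the original and modified BRISQUE models start from the same quantity: locally normalized luminance, often called MSCN (mean-subtracted contrast-normalized) coefficients. A minimal sketch of that normalization step, assuming a Gaussian weighting window; the function name and defaults are illustrative, not the authors' code:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn_coefficients(image, sigma=7/6, c=1.0):
    """Mean-subtracted contrast-normalized (MSCN) coefficients: the local
    luminance normalization underlying BRISQUE-style features."""
    image = image.astype(np.float64)
    mu = gaussian_filter(image, sigma)                  # local weighted mean
    var = gaussian_filter(image**2, sigma) - mu**2      # local weighted variance
    sigma_map = np.sqrt(np.abs(var))                    # local std (guarded)
    return (image - mu) / (sigma_map + c)               # normalized coefficients
```

For natural images the MSCN histogram is close to Gaussian; BRISQUE fits generalized Gaussian parameters to these coefficients (and to products of neighbors) to form the feature vector fed to the regressor.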
RAMTaB: Robust Alignment of Multi-Tag Bioimages
Raza, Shan-e-Ahmed; Humayun, Ahmad; Abouna, Sylvie; Nattkemper, Tim W.; Epstein, David B. A.; Khan, Michael; Rajpoot, Nasir M.
2012-01-01
Background In recent years, new microscopic imaging techniques have evolved to allow us to visualize several different proteins (or other biomolecules) in a visual field. Analysis of protein co-localization becomes viable because molecules can interact only when they are located close to each other. We present a novel approach to align images in a multi-tag fluorescence image stack. The proposed approach is applicable to multi-tag bioimaging systems which (a) acquire fluorescence images by sequential staining and (b) simultaneously capture a phase contrast image corresponding to each of the fluorescence images. To the best of our knowledge, there is no existing method in the literature, which addresses simultaneous registration of multi-tag bioimages and selection of the reference image in order to maximize the overall overlap between the images. Methodology/Principal Findings We employ a block-based method for registration, which yields a confidence measure to indicate the accuracy of our registration results. We derive a shift metric in order to select the Reference Image with Maximal Overlap (RIMO), in turn minimizing the total amount of non-overlapping signal for a given number of tags. Experimental results show that the Robust Alignment of Multi-Tag Bioimages (RAMTaB) framework is robust to variations in contrast and illumination, yields sub-pixel accuracy, and successfully selects the reference image resulting in maximum overlap. The registration results are also shown to significantly improve any follow-up protein co-localization studies. Conclusions For the discovery of protein complexes and of functional protein networks within a cell, alignment of the tag images in a multi-tag fluorescence image stack is a key pre-processing step. The proposed framework is shown to produce accurate alignment results on both real and synthetic data. 
Our future work will use the aligned multi-channel fluorescence image data for normal and diseased tissue specimens to analyze molecular co-expression patterns and functional protein networks. PMID:22363510
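The registration step above estimates a translational shift between pairs of phase contrast images. A generic sketch of shift estimation by phase correlation, a simplified stand-in for the paper's block-based method (it recovers integer translations only; the function name is illustrative):

```python
import numpy as np

def estimate_shift(ref, moving):
    """Estimate the integer (row, col) translation mapping `ref` onto
    `moving` via phase correlation of their 2-D Fourier transforms."""
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(moving)
    cross /= np.abs(cross) + 1e-12                 # keep phase information only
    corr = np.fft.ifft2(cross).real                # correlation surface
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # shifts beyond half the image size wrap around to negative values
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```

Applying this per block, with the correlation peak height as a confidence measure, approximates the block-based scheme with per-block confidence that the abstract describes.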
No-reference quality assessment based on visual perception
NASA Astrophysics Data System (ADS)
Li, Junshan; Yang, Yawei; Hu, Shuangyan; Zhang, Jiao
2014-11-01
The visual quality assessment of images and videos is an ongoing hot research topic, which has become more and more important for numerous image and video processing applications with the rapid development of digital imaging and communication technologies. The goal of image quality assessment (IQA) algorithms is to automatically assess the quality of images or videos in agreement with human quality judgments. Up to now, two kinds of models have been used for IQA, namely full-reference (FR) and no-reference (NR) models. FR models interpret image quality as fidelity or similarity with a perfect image in some perceptual space. However, the reference image is not available in many practical applications, and an NR IQA approach is desired. Considering natural vision as optimized by millions of years of evolutionary pressure, many methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychological features of the human visual system (HVS). To reach this goal, researchers try to simulate the HVS with image sparse coding and supervised machine learning, reflecting two main features of the HVS: it captures scenes through sparse coding and draws on learned knowledge to perceive objects. In this paper, we propose a novel IQA approach based on visual perception. First, a standard model of the HVS is studied and analyzed, and the sparse representation of the image is computed with that model; then, the mapping between sparse codes and subjective quality scores is trained with least squares support vector machine (LS-SVM) regression, yielding a regressor that can predict image quality; finally, the visual quality of an image is predicted with the trained regressor.
We validate the performance of the proposed approach on the Laboratory for Image and Video Engineering (LIVE) database, which contains the following distortion types: 227 JPEG2000 images, 233 JPEG images, 174 white noise images, 174 Gaussian blur images, and 174 fast fading images. The database includes a subjective differential mean opinion score (DMOS) for each image. The experimental results show that the proposed approach not only assesses the quality of many kinds of distorted images, but also exhibits superior accuracy and monotonicity.
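The training step maps per-image sparse codes to subjective scores. In the linear case, LS-SVM regression reduces to regularized least squares, which can be sketched on synthetic stand-in data (the actual model is kernelized and trained on real sparse codes and DMOS values; everything below is illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical stand-ins: rows are per-image sparse-code features,
# y plays the role of subjective DMOS scores.
X = rng.normal(size=(200, 32))
y = X @ rng.normal(size=32) + rng.normal(scale=0.1, size=200)

def fit_regressor(X, y, lam=1.0):
    """Regularized least squares: w = (X^T X + lam*I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w = fit_regressor(X[:150], y[:150])    # train on 150 images
pred = X[150:] @ w                     # predict quality of held-out images
```

A kernelized version replaces the inner products with a kernel matrix, which is what distinguishes LS-SVM in practice from plain ridge regression.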
2013-12-30
18-02-2014 Final Mar 2012 - Jan 2014 Quality of cardiopulmonary resuscitation when directing the area of...1. Protocol Number: FWH20110158A 2. Type of Research: Animal Research 3. Title: Quality of cardiopulmonary resuscitation when directing...Compressions over the Left Ventricle During Cardiopulmonary Resuscitation Increases Coronary Perfusion Pressure and Return of Spontaneous Circulation
Advanced imaging programs: maximizing a multislice CT investment.
Falk, Robert
2008-01-01
Advanced image processing has moved from a luxury to a necessity in the practice of medicine. A hospital's adoption of sophisticated 3D imaging entails several important steps, with many factors to consider in order to be successful. Like any new hospital program, 3D post-processing should be introduced through a strategic planning process that includes administrators, physicians, and technologists to design, implement, and market a program that is scalable: one that minimizes up-front costs while providing top-level service. This article outlines the steps for planning, implementation, and growth of an advanced imaging program.
Findlay, Scott David; Huang, Rong; Ishikawa, Ryo; Shibata, Naoya; Ikuhara, Yuichi
2017-02-08
Annular bright field (ABF) scanning transmission electron microscopy has proven able to directly image lithium columns within crystalline environments, offering much insight into the structure and properties of lithium-ion battery materials. We summarize the image formation mechanisms underpinning ABF imaging, review the experimental application of this technique to imaging lithium in materials and overview the conditions that help maximize the visibility of lithium columns. © The Author 2016. Published by Oxford University Press on behalf of The Japanese Society of Microscopy. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Acoustic assessment of wood quality of raw forest materials : a path to increased profitability
Xiping Wang; Peter Carter; Robert J. Ross; Brian K. Brashaw
2007-01-01
Assessment of the quality of raw wood materials has become a crucial issue in the operational value chain as forestry and the wood processing industry are increasingly under economic pressure to maximize extracted value. A significant effort has been devoted toward developing robust nondestructive evaluation (NDE) technologies capable of predicting the intrinsic wood...
Toward High Quality Family Day Care for Infants and Toddlers. Final Report.
ERIC Educational Resources Information Center
Rauch, Marian D.; Crowell, Doris C.
Reported were the results of a project which established a cluster of family day care homes in Hawaii in which caregivers were selected, trained, and provided with supportive services and salaries. The primary objective of the program was to provide a replicable, high quality program for preschool children that would maximize social, emotional,…
Backward Registration Based Aspect Ratio Similarity (ARS) for Image Retargeting Quality Assessment.
Zhang, Yabin; Fang, Yuming; Lin, Weisi; Zhang, Xinfeng; Li, Leida
2016-06-28
During the past few years, various content-aware image retargeting operators have been proposed for image resizing. However, the lack of effective objective retargeting quality assessment metrics limits the further development of image retargeting techniques. Different from traditional Image Quality Assessment (IQA), the quality degradation during image retargeting is caused by artificial retargeting modifications, and the difficulty for Image Retargeting Quality Assessment (IRQA) lies in the alteration of the image resolution and content, which makes it impossible to evaluate the quality degradation directly as in traditional IQA. In this paper, we interpret image retargeting in a unified framework of resampling grid generation and forward resampling. We show that geometric change estimation is an efficient way to clarify the relationship between the images. We formulate geometric change estimation as a backward registration problem with a Markov Random Field (MRF) and provide an effective solution. The estimated geometric change provides evidence of how the original image is resized into the target image. Under the guidance of the geometric change, we develop a novel Aspect Ratio Similarity (ARS) metric to evaluate the visual quality of retargeted images by exploiting local block changes with a visual importance pooling strategy. Experimental results on the publicly available MIT RetargetMe and CUHK datasets demonstrate that the proposed ARS predicts the visual quality of retargeted images more accurately than state-of-the-art IRQA metrics.
Image quality assessment using deep convolutional networks
NASA Astrophysics Data System (ADS)
Li, Yezhou; Ye, Xiang; Li, Yong
2017-12-01
This paper proposes a method of accurately assessing image quality without a reference image by using a deep convolutional neural network. Existing training-based methods usually utilize a compact set of linear filters for learning features of images captured by different sensors to assess their quality. These methods may not be able to learn the semantic features that are intimately related with the features used in human subject assessment. Observing this drawback, this work proposes training a deep convolutional neural network (CNN) with labelled images for image quality assessment. The ReLU in the CNN allows non-linear transformations for extracting high-level image features, providing a more reliable assessment of image quality than linear filters. To enable the neural network to take images of arbitrary size as input, spatial pyramid pooling (SPP) is introduced between the top convolutional layer and the fully-connected layer. In addition, the SPP makes the CNN robust to object deformations to a certain extent. The proposed method takes an image as input, carries out an end-to-end learning process, and outputs the quality of the image. It is tested on public datasets. Experimental results show that it outperforms existing methods by a large margin and can accurately assess the image quality of images taken by different sensors at varying sizes.
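The spatial pyramid pooling layer is what frees the network from a fixed input size: whatever the spatial dimensions of the top convolutional feature map, pooling over fixed grids yields a fixed-length vector. A small sketch, assuming max pooling over 1x1, 2x2 and 4x4 grids (the level set popularized by SPP-net; details here are illustrative, not the paper's exact configuration):

```python
import numpy as np

def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    """Max-pool a C x H x W feature map over 1x1, 2x2 and 4x4 grids,
    producing a fixed-length vector regardless of H and W."""
    C, H, W = feature_map.shape
    pooled = []
    for n in levels:
        # grid cell boundaries; cells tile the map even when H, W not divisible by n
        hs = np.linspace(0, H, n + 1).astype(int)
        ws = np.linspace(0, W, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                cell = feature_map[:, hs[i]:hs[i+1], ws[j]:ws[j+1]]
                pooled.append(cell.max(axis=(1, 2)))    # one C-vector per cell
    return np.concatenate(pooled)                       # length C * (1 + 4 + 16)
```

Because the output length depends only on C and the grid levels, the fully-connected layer that follows sees the same input dimension for every image size.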
In vivo maximal fascicle-shortening velocity during plantar flexion in humans.
Hauraix, Hugo; Nordez, Antoine; Guilhem, Gaël; Rabita, Giuseppe; Dorel, Sylvain
2015-12-01
Interindividual variability in performance of fast movements is commonly explained by a difference in maximal muscle-shortening velocity due to differences in the proportion of fast-twitch fibers. To provide a better understanding of the capacity to generate fast motion, this study aimed to 1) measure for the first time in vivo the maximal fascicle-shortening velocity of human muscle; 2) evaluate the relationship between angular velocity and fascicle-shortening velocity from low to maximal angular velocities; and 3) investigate the influence of musculo-articular features (moment arm, tendinous tissues stiffness, and muscle architecture) on maximal angular velocity. Ultrafast ultrasound images of the gastrocnemius medialis were obtained from 31 participants during maximal isokinetic and light-loaded plantar flexions. A strong linear relationship between fascicle-shortening velocity and angular velocity was reported for all subjects (mean R(2) = 0.97). The maximal shortening velocity (V(Fmax)) obtained during the no-load condition (NLc) ranged between 18.8 and 43.3 cm/s. V(Fmax) values were very close to those of the maximal shortening velocity (V(max)), which was extrapolated from the F-V curve (the Hill model). Angular velocity reached during the NLc was significantly correlated with this V(Fmax) (r = 0.57; P < 0.001). This finding was in agreement with assumptions about the role of muscle fiber type, whereas interindividual comparisons clearly support the fact that other parameters may also contribute to performance during fast movements. Nevertheless, none of the biomechanical features considered in the present study were found to be directly related to the highest angular velocity, highlighting the complexity of the upstream mechanics that lead to maximal-velocity muscle contraction. Copyright © 2015 the American Physiological Society.
Peteiro, Jesús; Bouzas-Mosquera, Alberto; Estevez, Rodrigo; Pazos, Pablo; Piñeiro, Miriam; Castro-Beiras, Alfonso
2012-03-01
Supine bicycle exercise (SBE) echocardiography and treadmill exercise (TME) echocardiography have been used for evaluation of coronary artery disease (CAD). Although peak imaging acquisition has been considered unfeasible with TME, higher sensitivity for the detection of CAD has been recently found with this method compared with post-TME echocardiography. However, peak TME echocardiography has not been previously compared with the more standardized peak SBE echocardiography. The aim of this study was to compare peak TME echocardiography, peak SBE echocardiography, and post-TME echocardiography for the detection of CAD. A series of 116 patients (mean age, 61 ± 10 years) referred for evaluation of CAD underwent SBE (starting at 25 W, with 25-W increments every 2-3 min) and TME with peak and postexercise imaging acquisition, in a random sequence. Digitized images at baseline, at peak TME, after TME, and at peak SBE were interpreted in a random and blinded fashion. All patients underwent coronary angiography. Maximal heart rate was higher during TME, whereas systolic blood pressure was higher during SBE, resulting in similar rate-pressure products. On quantitative angiography, 75 patients had coronary stenosis (≥50%). In these patients, wall motion score indexes at maximal exercise were higher at peak TME (median, 1.45; interquartile range [IQR], 1.13-1.75) than at peak SBE (median, 1.25; IQR, 1.0-1.56) or after TME (median, 1.13; IQR, 1.0-1.38) (P = .002 between peak TME and peak SBE imaging, P < .001 between post-TME imaging and the other modalities). The extent of myocardial ischemia (number of ischemic segments) was also higher during peak TME (median, 5; IQR, 2-12) compared with peak SBE (median, 3; IQR, 0-8) or after TME (median, 2; IQR, 0-4) (P < .001 between peak TME and peak SBE imaging, P < .001 between post-TME imaging and the other modalities). 
ST-segment changes in patients with CAD and normal baseline ST segments were higher during TME (median, 1 mm [IQR, 0-1.9 mm] vs 0 mm [IQR, 0-1.5 mm]; P = .006). The sensitivity of peak TME, peak SBE, and post-TME echocardiography for CAD was 84%, 75%, and 60% (P = .001 between post-TME and peak TME echocardiography, P = .055 between post-TME and peak SBE echocardiography), with specificity of 63%, 80%, and 78%, respectively (P = NS) and accuracy of 77%, 77%, and 66%, respectively (P = NS). Peak TME echocardiography diagnosed multivessel disease in 27 of the 40 patients with stenoses in more than one coronary artery, in contrast to 17 patients with peak SBE imaging and 12 with post-TME imaging (P < .05 between peak TME imaging and the other modalities). Image quality was similar with the three techniques. The duration of the test was longer with SBE echocardiography (9.5 ± 3.8 vs 7.6 ± 2.5 min, P < .001). During TME and SBE, patients achieve similar double products. Ischemia is more extensive and frequent with peak TME, which makes peak TME a more valuable exercise echocardiographic modality to increase sensitivity. However, peak SBE should be preferred to TME if the latter is performed with postexercise imaging acquisition. Copyright © 2012 American Society of Echocardiography. Published by Mosby, Inc. All rights reserved.
Podkowinski, Dominika; Sharian Varnousfaderani, Ehsan; Simader, Christian; Bogunovic, Hrvoje; Philip, Ana-Maria; Gerendas, Bianca S.
2017-01-01
Background and Objective To determine optimal image averaging settings for Spectralis optical coherence tomography (OCT) in patients with and without cataract. Study Design/Material and Methods In a prospective study, the eyes were imaged before and after cataract surgery using seven different image averaging settings. Image quality was quantitatively evaluated using signal-to-noise ratio, distinction between retinal layer image intensity distributions, and retinal layer segmentation performance. Measures were compared pre- and postoperatively across different degrees of averaging. Results 13 eyes of 13 patients were included and 1092 layer boundaries analyzed. Preoperatively, increasing image averaging led to a logarithmic growth in all image quality measures up to 96 frames. Postoperatively, increasing averaging beyond 16 images resulted in a plateau without further benefits to image quality. Averaging 16 frames postoperatively provided comparable image quality to 96 frames preoperatively. Conclusion In patients with clear media, averaging 16 images provided optimal signal quality. A further increase in averaging was only beneficial in the eyes with senile cataract. However, prolonged acquisition time and possible loss of details have to be taken into account. PMID:28630764
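The logarithmic growth and eventual plateau reported above ride on a basic fact: averaging N frames of uncorrelated noise lowers the residual noise standard deviation by roughly a factor of the square root of N, so gains flatten once other error sources dominate. A synthetic illustration (the signal profile and noise level are made up, not OCT data):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 4 * np.pi, 512))      # stand-in intensity profile

def residual_noise_std(n_frames, noise_sigma=0.5):
    """Std of the remaining noise after averaging n_frames noisy copies."""
    frames = signal + rng.normal(scale=noise_sigma, size=(n_frames, signal.size))
    return (frames.mean(axis=0) - signal).std()
```

Here residual_noise_std(16) comes out near a quarter of the single-frame value, the sqrt(16) = 4 reduction; in patients, registration error and motion set the floor that makes further averaging unrewarding.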
Jeong, Jong Seob; Chang, Jin Ho; Shung, K. Kirk
2009-01-01
For noninvasive treatment of prostate tissue using high intensity focused ultrasound (HIFU), this paper proposes a design of an integrated multi-functional confocal phased array (IMCPA) and a strategy to perform both imaging and therapy simultaneously with this array. IMCPA is composed of triple-row phased arrays: a 6 MHz array in the center row for imaging and two 4 MHz arrays in the outer rows for therapy. Different types of piezoelectric materials and stack configurations may be employed to maximize their respective functionalities, i.e., therapy and imaging. Fabrication complexity of IMCPA may be reduced by assembling already constructed arrays. In IMCPA, reflected therapeutic signals may corrupt the quality of imaging signals received by the center row array. This problem can be overcome by implementing a coded excitation approach and/or a notch filter when B-mode images are formed during therapy. The 13-bit Barker code, which is a binary code with unique autocorrelation properties, is preferred for implementing coded excitation, although other codes may also be used. From both Field II simulations and experimental results, we verified whether these remedial approaches would make it feasible to simultaneously carry out imaging and therapy with IMCPA. The results showed that the 13-bit Barker code with 3 cycles per bit provided acceptable performance. The measured −6 dB and −20 dB range mainlobe widths were 0.52 mm and 0.91 mm, respectively, and the range sidelobe level was measured to be −48 dB regardless of whether a notch filter was used. The 13-bit Barker code with 2 cycles per bit yielded −6 dB and −20 dB range mainlobe widths of 0.39 mm and 0.67 mm. Its range sidelobe level was found to be −40 dB after notch filtering. These results indicate the feasibility of the proposed transducer design and system for real-time imaging during therapy. PMID:19811994
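The appeal of the 13-bit Barker code is its aperiodic autocorrelation: a mainlobe of 13 with every sidelobe bounded in magnitude by 1 (about 22 dB down), which is what keeps range sidelobes low after pulse compression. The property is easy to check directly:

```python
import numpy as np

# 13-bit Barker code: the longest known binary code whose aperiodic
# autocorrelation sidelobes all have magnitude <= 1.
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])

acf = np.correlate(barker13, barker13, mode="full")      # length 25
peak = acf.max()                                         # mainlobe: 13
sidelobe = np.abs(np.delete(acf, len(acf) // 2)).max()   # worst sidelobe: 1
```

In the array described above, each bit is carried by 2 or 3 carrier cycles and the received echo is compressed with a matched filter; the 13:1 mainlobe-to-sidelobe ratio (20*log10(13), roughly 22 dB) is what the measured range sidelobe levels build on.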
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andreyev, A.
Purpose: Compton cameras (CCs) use electronic collimation to reconstruct the images of activity distribution. Although this approach can greatly improve imaging efficiency, due to complex geometry of the CC principle, image reconstruction with the standard iterative algorithms, such as ordered subset expectation maximization (OSEM), can be very time-consuming, even more so if resolution recovery (RR) is implemented. We have previously shown that the origin ensemble (OE) algorithm can be used for the reconstruction of the CC data. Here we propose a method of extending our OE algorithm to include RR. Methods: To validate the proposed algorithm we used Monte Carlo simulations of a CC composed of multiple layers of pixelated CZT detectors and designed for imaging small animals. A series of CC acquisitions of small hot spheres and the Derenzo phantom placed in air were simulated. Images obtained from (a) the exact data, (b) blurred data but reconstructed without resolution recovery, and (c) blurred and reconstructed with resolution recovery were compared. Furthermore, the reconstructed contrast-to-background ratios were investigated using the phantom with nine spheres placed in a hot background. Results: Our simulations demonstrate that the proposed method allows for the recovery of the resolution loss that is due to imperfect accuracy of event detection. Additionally, tests of camera sensitivity corresponding to different detector configurations demonstrate that the proposed CC design has sensitivity comparable to PET. When the same number of events were considered, the computation time per iteration increased only by a factor of 2 when OE reconstruction with the resolution recovery correction was performed relative to the original OE algorithm. We estimate that the addition of resolution recovery to the OSEM would increase reconstruction times by 2–3 orders of magnitude per iteration.
Conclusions: The results of our tests demonstrate the improvement of image resolution provided by the OE reconstructions with resolution recovery. The quality of images and their contrast are similar to those obtained from the OE reconstructions from scans simulated with perfect energy and spatial resolutions.
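For contrast with OE, the OSEM baseline mentioned above is built on the MLEM update, which in its single-subset form is only a few lines. A toy sketch on a dense random system matrix (a real Compton camera system matrix is huge and sparse, and resolution recovery widens it further, which is where the 2-3 orders of magnitude estimate comes from):

```python
import numpy as np

def mlem(A, counts, n_iter=500):
    """MLEM update x <- x * A^T(counts / (A x)) / (A^T 1).
    OSEM applies the same update cyclically over subsets of the data."""
    x = np.ones(A.shape[1])                 # flat initial activity estimate
    sens = np.maximum(A.sum(axis=0), 1e-12) # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = np.maximum(A @ x, 1e-12)     # forward projection
        x *= (A.T @ (counts / proj)) / sens # ratio back-projection update
    return x
```

Each iteration is one forward projection, a ratio, and one back projection; resolution recovery enters by blurring the rows of A, which multiplies the per-iteration cost.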
Resolution recovery for Compton camera using origin ensemble algorithm.
Andreyev, A; Celler, A; Ozsahin, I; Sitek, A
2016-08-01
Compton cameras (CCs) use electronic collimation to reconstruct the images of activity distribution. Although this approach can greatly improve imaging efficiency, due to complex geometry of the CC principle, image reconstruction with the standard iterative algorithms, such as ordered subset expectation maximization (OSEM), can be very time-consuming, even more so if resolution recovery (RR) is implemented. We have previously shown that the origin ensemble (OE) algorithm can be used for the reconstruction of the CC data. Here we propose a method of extending our OE algorithm to include RR. To validate the proposed algorithm we used Monte Carlo simulations of a CC composed of multiple layers of pixelated CZT detectors and designed for imaging small animals. A series of CC acquisitions of small hot spheres and the Derenzo phantom placed in air were simulated. Images obtained from (a) the exact data, (b) blurred data but reconstructed without resolution recovery, and (c) blurred and reconstructed with resolution recovery were compared. Furthermore, the reconstructed contrast-to-background ratios were investigated using the phantom with nine spheres placed in a hot background. Our simulations demonstrate that the proposed method allows for the recovery of the resolution loss that is due to imperfect accuracy of event detection. Additionally, tests of camera sensitivity corresponding to different detector configurations demonstrate that the proposed CC design has sensitivity comparable to PET. When the same number of events were considered, the computation time per iteration increased only by a factor of 2 when OE reconstruction with the resolution recovery correction was performed relative to the original OE algorithm. We estimate that the addition of resolution recovery to the OSEM would increase reconstruction times by 2-3 orders of magnitude per iteration. 
The results of our tests demonstrate the improvement of image resolution provided by the OE reconstructions with resolution recovery. The quality of images and their contrast are similar to those obtained from the OE reconstructions from scans simulated with perfect energy and spatial resolutions.
7 CFR 3405.7 - Joint project proposals.
Code of Federal Regulations, 2014 CFR
2014-01-01
... agricultural sciences. The goals of such joint initiatives should include maximizing the use of limited...), increasing cost-effectiveness through achieving economies of scale, strengthening the scope and quality of a...
7 CFR 3405.7 - Joint project proposals.
Code of Federal Regulations, 2013 CFR
2013-01-01
... agricultural sciences. The goals of such joint initiatives should include maximizing the use of limited...), increasing cost-effectiveness through achieving economies of scale, strengthening the scope and quality of a...
7 CFR 3405.7 - Joint project proposals.
Code of Federal Regulations, 2012 CFR
2012-01-01
... agricultural sciences. The goals of such joint initiatives should include maximizing the use of limited...), increasing cost-effectiveness through achieving economies of scale, strengthening the scope and quality of a...
Linear discriminant analysis based on L1-norm maximization.
Zhong, Fujin; Zhang, Jiashu
2013-08-01
Linear discriminant analysis (LDA) is a well-known dimensionality reduction technique, which is widely used for many purposes. However, conventional LDA is sensitive to outliers because its objective function is based on an L2-norm distance criterion. This paper proposes a simple but effective robust LDA variant based on L1-norm maximization, which learns a set of locally optimal projection vectors by maximizing the ratio of the L1-norm-based between-class dispersion to the L1-norm-based within-class dispersion. The proposed method is theoretically proved to be feasible and robust to outliers, while overcoming the singularity problem of the within-class scatter matrix in conventional LDA. Experiments on artificial datasets, standard classification datasets and three popular image databases demonstrate the efficacy of the proposed method.
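The objective being maximized can be written down directly: the ratio of L1 between-class dispersion to L1 within-class dispersion along a projection w. A sketch that only evaluates the objective for a candidate w (the paper's contribution is an iterative procedure for maximizing it, which is not reproduced here):

```python
import numpy as np

def l1_dispersion_ratio(X, labels, w):
    """L1-LDA objective: sum_c n_c |w.(m_c - m)| / sum_i |w.(x_i - m_c(i))|."""
    w = w / np.linalg.norm(w)
    m = X.mean(axis=0)                      # global mean
    between, within = 0.0, 0.0
    for c in np.unique(labels):
        Xc = X[labels == c]
        mc = Xc.mean(axis=0)                # class mean
        between += len(Xc) * abs(w @ (mc - m))
        within += np.abs((Xc - mc) @ w).sum()
    return between / within
```

Replacing the absolute values with squares recovers the conventional L2 Rayleigh-quotient objective; the L1 sums are what blunt the influence of outlying samples.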
Alonso-Caneiro, David; Sampson, Danuta M.; Chew, Avenell L.; Collins, Michael J.; Chen, Fred K.
2018-01-01
Adaptive optics flood illumination ophthalmoscopy (AO-FIO) allows imaging of the cone photoreceptor in the living human retina. However, clinical interpretation of the AO-FIO image remains challenging due to suboptimal quality arising from residual uncorrected wavefront aberrations and rapid eye motion. An objective method of assessing image quality is necessary to determine whether an AO-FIO image is suitable for grading and diagnostic purpose. In this work, we explore the use of focus measure operators as a surrogate measure of AO-FIO image quality. A set of operators are tested on data sets acquired at different focal depths and different retinal locations from healthy volunteers. Our results demonstrate differences in focus measure operator performance in quantifying AO-FIO image quality. Further, we discuss the potential application of the selected focus operators in (i) selection of the best quality AO-FIO image from a series of images collected at the same retinal location and (ii) assessment of longitudinal changes in the diseased retina. Focus function could be incorporated into real-time AO-FIO image processing and provide an initial automated quality assessment during image acquisition or reading center grading. PMID:29552404
Gradient Magnitude Similarity Deviation: A Highly Efficient Perceptual Image Quality Index.
Xue, Wufeng; Zhang, Lei; Mou, Xuanqin; Bovik, Alan C
2014-02-01
It is an important task to faithfully evaluate the perceptual quality of output images in many applications, such as image compression, image restoration, and multimedia streaming. A good image quality assessment (IQA) model should not only deliver high prediction accuracy, but also be computationally efficient. The efficiency of IQA metrics is becoming particularly important due to the increasing proliferation of high-volume visual data in high-speed networks. We present a new effective and efficient IQA model, called gradient magnitude similarity deviation (GMSD). Image gradients are sensitive to image distortions, and different local structures in a distorted image suffer different degrees of degradation. This motivates us to explore the global variation of a gradient-based local quality map for overall image quality prediction. We find that the pixel-wise gradient magnitude similarity (GMS) between the reference and distorted images, combined with a novel pooling strategy, the standard deviation of the GMS map, can accurately predict perceptual image quality. The resulting GMSD algorithm is much faster than most state-of-the-art IQA methods, and delivers highly competitive prediction accuracy. MATLAB source code of GMSD can be downloaded at http://www4.comp.polyu.edu.hk/~cslzhang/IQA/GMSD/GMSD.htm.
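The GMSD computation itself is compact: Prewitt gradient magnitudes of both images, a pixel-wise similarity map, and standard-deviation pooling. A sketch assuming grayscale images in [0, 255]; the published method also downsamples by a factor of 2 first, which is omitted here, and c = 170 is the stability constant used in the paper:

```python
import numpy as np
from scipy.ndimage import convolve

def gmsd(ref, dist, c=170.0):
    """Gradient magnitude similarity deviation between two grayscale images."""
    hx = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]]) / 3.0  # Prewitt x
    hy = hx.T                                                  # Prewitt y
    def grad_mag(img):
        gx = convolve(img.astype(np.float64), hx)
        gy = convolve(img.astype(np.float64), hy)
        return np.sqrt(gx**2 + gy**2)
    g1, g2 = grad_mag(ref), grad_mag(dist)
    gms = (2 * g1 * g2 + c) / (g1**2 + g2**2 + c)   # similarity map in (0, 1]
    return gms.std()                                # deviation pooling
```

Lower GMSD predicts higher quality; identical images score exactly 0, and the std pooling rewards uniformly degraded images over images with locally concentrated damage.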
Alonso-Caneiro, David; Sampson, Danuta M; Chew, Avenell L; Collins, Michael J; Chen, Fred K
2018-02-01
Adaptive optics flood illumination ophthalmoscopy (AO-FIO) allows imaging of the cone photoreceptor in the living human retina. However, clinical interpretation of the AO-FIO image remains challenging due to suboptimal quality arising from residual uncorrected wavefront aberrations and rapid eye motion. An objective method of assessing image quality is necessary to determine whether an AO-FIO image is suitable for grading and diagnostic purpose. In this work, we explore the use of focus measure operators as a surrogate measure of AO-FIO image quality. A set of operators are tested on data sets acquired at different focal depths and different retinal locations from healthy volunteers. Our results demonstrate differences in focus measure operator performance in quantifying AO-FIO image quality. Further, we discuss the potential application of the selected focus operators in (i) selection of the best quality AO-FIO image from a series of images collected at the same retinal location and (ii) assessment of longitudinal changes in the diseased retina. Focus function could be incorporated into real-time AO-FIO image processing and provide an initial automated quality assessment during image acquisition or reading center grading.
Comprehensive model for predicting perceptual image quality of smart mobile devices.
Gong, Rui; Xu, Haisong; Luo, M R; Li, Haifeng
2015-01-01
An image quality model for smart mobile devices was proposed based on visual assessments of several image quality attributes. A series of psychophysical experiments was carried out on two kinds of smart mobile devices, i.e., smart phones and tablet computers, in which naturalness, colorfulness, brightness, contrast, sharpness, clearness, and overall image quality were visually evaluated under three lighting environments via the categorical judgment method for various application types of test images. On the basis of Pearson correlation coefficients and factor analysis, the overall image quality could first be predicted from its two constituent attributes using multiple linear regression functions for each type of image; mathematical expressions were then built to link the constituent image quality attributes with the physical parameters of the smart mobile devices and image appearance factors. The procedure and algorithms are applicable to various smart mobile devices, different lighting conditions, and multiple types of images, and their performance was verified against the visual data.
Image Quality Performance Measurement of the microPET Focus 120
NASA Astrophysics Data System (ADS)
Ballado, Fernando Trejo; López, Nayelli Ortega; Flores, Rafael Ojeda; Ávila-Rodríguez, Miguel A.
2010-12-01
The aim of this work is to evaluate the characteristics involved in image reconstruction on the microPET Focus 120. Two different phantoms were used for this evaluation: a miniature hot-rod Derenzo phantom and a National Electrical Manufacturers Association (NEMA) NU4-2008 image quality (IQ) phantom. The best image quality was obtained when using OSEM3D as the reconstruction method, reaching a spatial resolution of 1.5 mm with the Derenzo phantom filled with 18F. Image quality test results indicate superior image quality for the Focus 120 when compared to previous microPET models.
NASA Astrophysics Data System (ADS)
Wang, J.; Qu, M.; Leng, S.; McCollough, C. H.
2010-04-01
In this study, the feasibility of differentiating uric acid from non-uric acid kidney stones in the presence of iodinated contrast material was evaluated using dual-energy CT (DECT). Iodine subtraction was accomplished with a commercial three-material decomposition algorithm to create a virtual non-contrast (VNC) image set. VNC images were then used to segment stone regions from the tissue background. The DE ratio of each stone was calculated from the CT images acquired at the two energies using the stone map generated from the VNC images. The performance of DE ratio-based stone differentiation was evaluated at five different iodine concentrations (21, 42, 63, 84 and 105 mg/ml). The DE ratio of stones in iodine solution was found to be larger than that obtained in non-iodine cases, mainly because of the partial volume effect at the boundary between the stone and the iodine solution. The overestimation of the DE ratio leads to substantial overlap between different stone types. To address the partial volume effect, an expectation-maximization (EM) approach was implemented to estimate the contribution of iodine and stone within each image pixel in their mixture area. The DE ratio of each stone was corrected to maximally remove the influence of the iodine solution. The separation of uric-acid and non-uric-acid stones was improved in the presence of iodine solution.
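The dual-energy ratio at the heart of the method above can be sketched as a ratio of mean CT numbers within the segmented stone region. This is a minimal reading of the abstract; the exact definition, and the VNC-based segmentation and EM correction used in the study, are not reproduced here.

```python
import numpy as np

def dual_energy_ratio(img_low, img_high, stone_mask):
    """Sketch of a dual-energy (DE) ratio for stone characterization: mean CT
    number inside the stone mask at the low tube energy divided by the mean
    at the high energy (assumed form; the study's definition may differ)."""
    low = np.asarray(img_low, dtype=float)[stone_mask]
    high = np.asarray(img_high, dtype=float)[stone_mask]
    return float(low.mean() / high.mean())
```

Because the ratio is computed only over the stone mask, its accuracy depends on the mask excluding boundary pixels, which is exactly where the partial volume effect described above corrupts the estimate.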
Image quality assessment metric for frame accumulated image
NASA Astrophysics Data System (ADS)
Yu, Jianping; Li, Gang; Wang, Shaohui; Lin, Ling
2018-01-01
The quality of a medical image determines the accuracy of diagnosis, and gray-scale resolution is an important parameter of image quality. However, current objective metrics are not well suited to assessing medical images obtained by frame accumulation: they pay little attention to gray-scale resolution, rely mainly on spatial resolution, and are limited to the 256-level gray scale of existing display devices. This paper therefore proposes a metric, "mean signal-to-noise ratio" (MSNR), based on the signal-to-noise ratio, to more reasonably evaluate the quality of frame-accumulated medical images. We demonstrate its potential application through a series of images acquired under a constant illumination signal. The mean of a sufficiently large number of images was regarded as the reference image, and MSNR was calculated for several groups of images formed by different numbers of accumulated frames. The results of the experiment show that, compared with other quality assessment methods, the metric is simpler, more effective, and more suitable for assessing frame-accumulated images that surpass the gray scale and precision of the original image.
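One plausible reading of the setup above (a long-run mean image as reference, accumulation of k frames, a signal-to-noise score) can be sketched as follows. The exact MSNR definition in the paper may differ; this only illustrates why accumulating more frames raises the score.

```python
import numpy as np

def msnr(frames, reference):
    """Hedged sketch of a mean signal-to-noise ratio for a frame-accumulated
    image: average the captured frames, then divide the mean signal by the
    standard deviation of the residual against the reference mean image."""
    acc = np.mean(frames, axis=0)          # frame accumulation
    noise = np.std(acc - reference)
    return np.inf if noise == 0 else float(np.mean(acc) / noise)
```

Averaging k frames shrinks the residual noise roughly by 1/sqrt(k), so groups with more accumulated frames should score higher, consistent with the experiment described above.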
Synthesized view comparison method for no-reference 3D image quality assessment
NASA Astrophysics Data System (ADS)
Luo, Fangzhou; Lin, Chaoyi; Gu, Xiaodong; Ma, Xiaojun
2018-04-01
We develop a no-reference image quality assessment metric to evaluate the quality of synthesized views rendered from the Multi-view Video plus Depth (MVD) format. Our metric, named Synthesized View Comparison (SVC), is designed for real-time quality monitoring at the receiver side of a 3D-TV system. The metric uses intermediate virtual views warped from the left and right views by a depth-image-based rendering (DIBR) algorithm, and compares the virtual views rendered from different cameras using the Structural SIMilarity (SSIM) index, a popular 2D full-reference image quality assessment metric. The experimental results indicate that our no-reference quality assessment metric for synthesized images has competitive prediction performance compared with some classic full-reference image quality assessment metrics.
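The SSIM comparison underlying SVC can be sketched with a single-window (global) SSIM between two views. SVC proper applies SSIM locally to DIBR-warped virtual views; this global variant, with the standard stability constants, only illustrates the similarity measure being used.

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Global (single-window) SSIM between two same-size views in
    [0, data_range]. Returns 1.0 for identical images, less otherwise."""
    c1 = (0.01 * data_range) ** 2  # standard SSIM stability constants
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

In an SVC-like setup, a low SSIM between the two virtual views warped from different source cameras would flag degraded synthesis quality without needing any reference image.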
Deep supervised dictionary learning for no-reference image quality assessment
NASA Astrophysics Data System (ADS)
Huang, Yuge; Liu, Xuesong; Tian, Xiang; Zhou, Fan; Chen, Yaowu; Jiang, Rongxin
2018-03-01
We propose a deep convolutional neural network (CNN) for general no-reference image quality assessment (NR-IQA), i.e., accurate prediction of image quality without a reference image. The proposed model consists of three components: a local feature extractor that is a fully convolutional network; an encoding module with an inherent dictionary that aggregates local features into a fixed-length, global, quality-aware image representation; and a regression module that maps the representation to an image quality score. Our model can be trained in an end-to-end manner, and all of the parameters, including the weights of the convolutional layers, the dictionary, and the regression weights, are learned simultaneously from the loss function. In addition, the model can predict quality scores for input images of arbitrary size in a single step. We tested our method on commonly used image quality databases and showed that its performance is comparable with that of state-of-the-art general-purpose NR-IQA algorithms.
Gang, Grace J; Siewerdsen, Jeffrey H; Stayman, J Webster
2017-12-01
This paper presents a joint optimization of dynamic fluence field modulation (FFM) and regularization in quadratic penalized-likelihood reconstruction that maximizes a task-based imaging performance metric. We adopted a task-driven imaging framework for prospective design of the imaging parameters. A maxi-min objective function was adopted to maximize the minimum detectability index (d') throughout the image. The optimization algorithm alternates between FFM (represented by low-dimensional basis functions) and local regularization (including the regularization strength and directional penalty weights). The task-driven approach was compared with three FFM strategies commonly proposed for FBP reconstruction (as well as a task-driven TCM strategy) for a discrimination task in an abdomen phantom. The task-driven FFM assigned more fluence to less attenuating anteroposterior views and yielded approximately constant fluence behind the object. The optimal regularization was almost uniform throughout the image. Furthermore, the task-driven FFM strategy redistributed fluence across detector elements in order to prescribe more fluence to the more attenuating central region of the phantom. Compared with all other strategies, the task-driven FFM strategy not only improved the minimum d' by at least 17.8%, but also yielded higher d' over a large area inside the object. The optimal FFM was highly dependent on the amount of regularization, indicating the importance of a joint optimization. Sample reconstructions of simulated data generally support the performance estimates based on the computed d'. The improvements in detectability show the potential of the task-driven imaging framework to improve imaging performance at a fixed dose, or, equivalently, to provide a similar level of performance at reduced dose.
An Underwater Color Image Quality Evaluation Metric.
Yang, Miao; Sowmya, Arcot
2015-12-01
Quality evaluation of underwater images is a key goal of underwater video image retrieval and intelligent processing. To date, no metric has been proposed specifically for underwater color image quality evaluation (UCIQE). The special absorption and scattering characteristics of the water medium do not allow direct application of natural color image quality metrics, especially across different underwater environments. In this paper, subjective testing of underwater image quality was organized. The statistical distribution of underwater image pixels in the CIELab color space, related to the subjective evaluations, indicates that sharpness and colorfulness correlate well with subjective image quality perception. On this basis, a new UCIQE metric, a linear combination of chroma, saturation, and contrast, is proposed to quantify the non-uniform color cast, blurring, and low contrast that characterize underwater engineering and monitoring images. Experiments are conducted to illustrate the performance of the proposed UCIQE metric and its capability to measure underwater image enhancement results. They show that the proposed metric has performance comparable to the leading natural color image quality metrics and the underwater grayscale image quality metrics available in the literature, and can predict with higher accuracy the relative amount of degradation for similar image content in underwater environments. Importantly, UCIQE is a simple and fast solution for real-time underwater video processing. The effectiveness of the presented measure is also demonstrated by subjective evaluation, with results showing good correlation between UCIQE and the subjective mean opinion score.
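The linear-combination form described above can be sketched directly on CIELab channels. Both the coefficient values (the trained weights commonly quoted for UCIQE) and the saturation definition below are assumptions that should be checked against the paper before use.

```python
import numpy as np

def uciqe(L, a, b, c1=0.4680, c2=0.2745, c3=0.2576):
    """Sketch of the UCIQE form: a linear combination of chroma standard
    deviation, luminance contrast, and mean saturation. L, a, b are CIELab
    channel arrays; coefficients are assumed, not verified."""
    chroma = np.sqrt(np.asarray(a, float) ** 2 + np.asarray(b, float) ** 2)
    sigma_c = chroma.std()                                # chroma spread
    con_l = np.quantile(L, 0.99) - np.quantile(L, 0.01)   # luminance contrast
    mu_s = float(np.mean(chroma / np.maximum(L, 1e-6)))   # mean saturation
    return c1 * sigma_c + c2 * con_l + c3 * mu_s
```

A flat gray image (zero chroma, constant luminance) scores 0, while a colorful, high-contrast image scores higher, matching the color-cast/blur/low-contrast degradations the metric is meant to penalize.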
Learning to rank for blind image quality assessment.
Gao, Fei; Tao, Dacheng; Gao, Xinbo; Li, Xuelong
2015-10-01
Blind image quality assessment (BIQA) aims to predict perceptual image quality scores without access to reference images. State-of-the-art BIQA methods typically require subjects to score a large number of images to train a robust model. However, subjective quality scores are imprecise, biased, and inconsistent, and it is challenging to obtain a large-scale database, or to extend existing databases, because of the inconvenience of collecting images, training the subjects, conducting subjective experiments, and realigning human quality evaluations. To overcome these limitations, this paper explores and exploits preference image pairs (PIPs), such as "the quality of image Ia is better than that of image Ib," for training a robust BIQA model. The preference label, representing the relative quality of two images, is generally precise and consistent, is not sensitive to image content, distortion type, or subject identity, and such PIPs can be generated at very low cost. The proposed BIQA method is one of learning to rank. We first formulate the problem of learning the mapping from image features to preference labels as one of classification. In particular, we investigate the use of a multiple kernel learning algorithm based on group lasso to provide a solution. A simple but effective strategy to estimate perceptual image quality scores is then presented. Experiments show that the proposed BIQA method is highly effective and achieves performance comparable with that of state-of-the-art BIQA algorithms. Moreover, the proposed method can easily be extended to new distortion categories.
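The pairwise formulation above (classify the difference of feature vectors of a preference pair) can be sketched with a plain logistic-regression stand-in. The paper uses multiple kernel learning with group lasso; this linear version, with made-up features, only illustrates how preference labels train a quality scorer.

```python
import numpy as np

def train_preference_model(feat_pref, feat_nonpref, lr=0.5, epochs=300):
    """Learn weights w such that w.(f_pref - f_nonpref) > 0 for each
    preference pair, via gradient ascent on the logistic log-likelihood.
    A linear stand-in for the paper's kernel method."""
    X = feat_pref - feat_nonpref                 # one difference per pair
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))       # P(pair correctly ordered)
        w += lr * (X.T @ (1.0 - p)) / len(X)     # ascend the log-likelihood
    return w

def quality_score(w, features):
    # larger scores mean higher predicted perceptual quality
    return float(features @ w)
```

Once trained, the same weight vector scores single images, which is the step the abstract calls "a simple but effective strategy to estimate perceptual image quality scores".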
Standardizing Quality Assessment of Fused Remotely Sensed Images
NASA Astrophysics Data System (ADS)
Pohl, C.; Moellmann, J.; Fries, K.
2017-09-01
The multitude of available operational remote sensing satellites has led to the development of many image fusion techniques that provide high spatial, spectral and temporal resolution images. The comparison of different techniques is necessary to obtain an optimized image for the various applications of remote sensing. There are two approaches to assessing image quality: (1) qualitatively, by visual interpretation, and (2) quantitatively, using image quality indices. However, an objective comparison is difficult because visual assessment is always subjective and quantitative assessments use different criteria; depending on the criteria and indices chosen, the result varies. It is therefore necessary to standardize both processes (qualitative and quantitative assessment) in order to allow objective evaluation of image fusion quality. Various studies have been conducted at the University of Osnabrueck (UOS) to establish a standardized process for objectively comparing fused image quality. First, established image fusion quality assessment protocols, i.e. Quality with No Reference (QNR) and Khan's protocol, were compared across various fusion experiments. Second, the process of visual quality assessment was structured and standardized with the aim of providing an evaluation protocol. This manuscript reports the results of the comparison and provides recommendations for future research.
Bayesian framework inspired no-reference region-of-interest quality measure for brain MRI images
Osadebey, Michael; Pedersen, Marius; Arnold, Douglas; Wendel-Mitoraj, Katrina
2017-01-01
We describe a postacquisition, attribute-based quality assessment method for brain magnetic resonance imaging (MRI) images. It is based on the application of Bayes theory to the relationship between entropy and image quality attributes. The entropy feature image of a slice is segmented into low- and high-entropy regions. For each entropy region, there are three separate observations of the contrast, standard deviation, and sharpness quality attributes. The quality index for a quality attribute is the posterior probability of an entropy region given any corresponding region in a feature image where the quality attribute is observed. Prior belief in each entropy region is determined from the normalized total clique potential (TCP) energy of the slice. For TCP below a predefined threshold, the prior probability for a region is determined by the deviation of its percentage composition in the slice from a standard normal distribution built from 250 MRI volume data sets provided by the Alzheimer's Disease Neuroimaging Initiative. For TCP above the threshold, the prior is computed using a mathematical model that describes the TCP–noise level relationship in brain MRI images. Our proposed method assesses the image quality of each entropy region and of the global image. Experimental results demonstrate good correlation with the subjective opinions of radiologists for different types and levels of quality distortion. PMID:28630885
Kotasidis, F A; Matthews, J C; Angelis, G I; Noonan, P J; Jackson, A; Price, P; Lionheart, W R; Reader, A J
2011-05-21
Incorporation of a resolution model during statistical image reconstruction often produces images of improved resolution and signal-to-noise ratio. A novel and practical methodology to rapidly and accurately determine the overall emission and detection blurring component of the system matrix using a printed point source array within a custom-made Perspex phantom is presented. The array was scanned at different positions and orientations within the field of view (FOV) to examine the feasibility of extrapolating the measured point source blurring to other locations in the FOV and the robustness of measurements from a single point source array scan. We measured the spatially-variant image-based blurring on two PET/CT scanners, the B-Hi-Rez and the TruePoint TrueV. These measured spatially-variant kernels and the spatially-invariant kernel at the FOV centre were then incorporated within an ordinary Poisson ordered subset expectation maximization (OP-OSEM) algorithm and compared to the manufacturer's implementation using projection space resolution modelling (RM). Comparisons were based on a point source array, the NEMA IEC image quality phantom, the Cologne resolution phantom and two clinical studies (carbon-11 labelled anti-sense oligonucleotide [(11)C]-ASO and fluorine-18 labelled fluoro-l-thymidine [(18)F]-FLT). Robust and accurate measurements of spatially-variant image blurring were successfully obtained from a single scan. Spatially-variant resolution modelling resulted in notable resolution improvements away from the centre of the FOV. Comparison between spatially-variant image-space methods and the projection-space approach (the first such report, using a range of studies) demonstrated very similar performance with our image-based implementation producing slightly better contrast recovery (CR) for the same level of image roughness (IR). 
These results demonstrate that image-based resolution modelling within reconstruction is a valid alternative to projection-based modelling, and that, when using the proposed practical methodology, the necessary resolution measurements can be obtained from a single scan. This approach avoids the relatively time-consuming and involved procedures previously proposed in the literature.
Suh, Young Joo; Kim, Young Jin; Kim, Jin Young; Chang, Suyon; Im, Dong Jin; Hong, Yoo Jin; Choi, Byoung Wook
2017-11-01
We aimed to determine the effect of a whole-heart motion-correction algorithm (new-generation snapshot freeze, NG SSF) on the image quality of cardiac computed tomography (CT) in patients with mechanical valve prostheses, compared to standard images without motion correction, and to compare the diagnostic accuracy of NG SSF and standard CT image sets for the detection of prosthetic valve abnormalities. A total of 20 patients with 32 mechanical valves who underwent wide-coverage-detector cardiac CT with single-heartbeat acquisition were included. CT image quality for the subvalvular (below the prosthesis) and valvular regions (valve leaflets) of the mechanical valves was assessed by two observers on a four-point scale (1 = poor, 2 = fair, 3 = good, and 4 = excellent). Paired t-tests or Wilcoxon signed rank tests were used to compare image quality scores and the number of diagnostic phases (image quality score ≥ 3) between the standard image sets and the NG SSF image sets. Diagnostic performance for the detection of prosthetic valve abnormalities was compared between the two image sets, with the final diagnosis, established by re-operation or clinical findings, as the reference standard. NG SSF image sets had better image quality scores than standard image sets for both the valvular and subvalvular regions (P < 0.05 for both). The number of phases of diagnostic image quality per patient was significantly greater in the NG SSF image sets than in the standard image sets for both regions (P < 0.0001). The diagnostic performance of NG SSF image sets for the detection of prosthetic abnormalities (20 pannus and two paravalvular leaks) was greater than that of the standard image sets (P < 0.05). Application of NG SSF can improve CT image quality and diagnostic accuracy in patients with mechanical valves compared to standard images. Copyright © 2017 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.