Sample records for image processing parameters

  1. Remote Sensing Image Quality Assessment Experiment with Post-Processing

    NASA Astrophysics Data System (ADS)

    Jiang, W.; Chen, S.; Wang, X.; Huang, Q.; Shi, H.; Man, Y.

    2018-04-01

    This paper briefly describes a post-processing influence assessment experiment comprising three steps: physical simulation, image processing, and image quality assessment. The physical simulation models a sampled imaging system in the laboratory; the imaging system parameters are measured, and the digital images serving as image processing input are produced by this imaging system with those same parameters. The gathered optically sampled images, with their tested imaging parameters, are processed by three digital image processes: calibration pre-processing, lossy compression at different compression ratios, and image post-processing with different kernels. Image quality is assessed by just noticeable difference (JND) subjective assessment based on ISO 20462; through subjective assessment of the gathered and processed images, the influence of the different imaging parameters and of the post-processing on image quality can be determined. The six JND subjective assessment data sets can be used to validate each other. The main conclusions are that image post-processing can improve image quality, that it can do so even under lossy compression (although image quality improves less at higher compression ratios than at lower ones), and that with our post-processing method image quality is better when the camera MTF lies within a small range.

  2. A New Feedback-Based Method for Parameter Adaptation in Image Processing Routines.

    PubMed

    Khan, Arif Ul Maula; Mikut, Ralf; Reischl, Markus

    2016-01-01

    The parametrization of automatic image processing routines is time-consuming when many image processing parameters are involved. An expert can tune parameters sequentially to get the desired results, but this may not be productive for applications with difficult image analysis tasks, e.g., when high noise and shading levels are present in an image or when images vary in their characteristics due to different acquisition conditions; the parameters then need to be tuned simultaneously. We propose a framework to improve standard image segmentation methods by using feedback-based automatic parameter adaptation. Moreover, we compare algorithms by implementing them in a feedforward fashion and then adapting their parameters. This comparison is evaluated on a benchmark data set containing challenging image distortions of increasing severity, which enables us to compare different standard image segmentation algorithms in feedback versus feedforward implementations by evaluating their segmentation quality and robustness. We also propose an efficient way of performing automatic image analysis when only abstract ground truth is present; such a framework evaluates the robustness of different image processing pipelines using a graded data set. This is useful for both end-users and experts.
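
    The closed-loop idea can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the thresholding "routine", the foreground-fraction quality criterion, and the update rate are all hypothetical stand-ins for a real segmentation pipeline and its feedback measure.

    ```python
    import numpy as np

    def segment(image, threshold):
        """Stand-in segmentation routine: simple binarization."""
        return image > threshold

    def feedback(mask, expected_fraction=0.2):
        """Feedback signal: deviation of the foreground fraction from a target."""
        return mask.mean() - expected_fraction

    def adapt_threshold(image, t=0.5, rate=0.5, iters=100, tol=1e-3):
        """Nudge the parameter until the feedback criterion is satisfied."""
        for _ in range(iters):
            err = feedback(segment(image, t))
            if abs(err) < tol:
                break
            t += rate * err          # too much foreground -> raise the threshold
        return t

    rng = np.random.default_rng(0)
    image = np.clip(rng.normal(0.3, 0.15, (128, 128)), 0, 1)
    image[40:70, 40:70] += 0.4       # a bright "object"
    print(f"adapted threshold: {adapt_threshold(image):.3f}")
    ```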

  3. A New Feedback-Based Method for Parameter Adaptation in Image Processing Routines

    PubMed Central

    Mikut, Ralf; Reischl, Markus

    2016-01-01

    The parametrization of automatic image processing routines is time-consuming when many image processing parameters are involved. An expert can tune parameters sequentially to get the desired results, but this may not be productive for applications with difficult image analysis tasks, e.g., when high noise and shading levels are present in an image or when images vary in their characteristics due to different acquisition conditions; the parameters then need to be tuned simultaneously. We propose a framework to improve standard image segmentation methods by using feedback-based automatic parameter adaptation. Moreover, we compare algorithms by implementing them in a feedforward fashion and then adapting their parameters. This comparison is evaluated on a benchmark data set containing challenging image distortions of increasing severity, which enables us to compare different standard image segmentation algorithms in feedback versus feedforward implementations by evaluating their segmentation quality and robustness. We also propose an efficient way of performing automatic image analysis when only abstract ground truth is present; such a framework evaluates the robustness of different image processing pipelines using a graded data set. This is useful for both end-users and experts. PMID:27764213

  4. Tumor image signatures and habitats: a processing pipeline of multimodality metabolic and physiological images.

    PubMed

    You, Daekeun; Kim, Michelle M; Aryal, Madhava P; Parmar, Hemant; Piert, Morand; Lawrence, Theodore S; Cao, Yue

    2018-01-01

    To create tumor "habitats" from the "signatures" discovered in multimodality metabolic and physiological images, we developed a processing-pipeline framework. The pipeline consists of six major steps: (1) creating superpixels as the spatial unit in a tumor volume; (2) forming a data matrix [Formula: see text] containing all multimodality image parameters at the superpixels; (3) forming and clustering a covariance or correlation matrix [Formula: see text] of the image parameters to discover the major image "signatures;" (4) clustering the superpixels and organizing the parameter order of the [Formula: see text] matrix according to the one found in step 3; (5) creating "habitats" in the image space from the superpixels associated with the "signatures;" and (6) pooling and clustering a matrix consisting of correlation coefficients of each pair of image parameters from all patients to discover subgroup patterns of the tumors. The pipeline was first applied to a dataset of multimodality images in glioblastoma (GBM), which consisted of 10 image parameters. Three major image "signatures" were identified, and the three major "habitats" plus their overlaps were created. To test the generalizability of the processing pipeline, a second GBM image dataset, acquired on scanners different from the first, was processed. To demonstrate the clinical association of image-defined "signatures" and "habitats," the patients' patterns of recurrence were analyzed together with image parameters acquired before chemoradiation therapy, and an association of the recurrence patterns with the image-defined "signatures" and "habitats" was revealed. These image-defined "signatures" and "habitats" can be used to guide stereotactic tissue biopsy for genetic and mutation status analysis and to predict treatment outcomes, e.g., patterns of failure.
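
    Steps (3) and (5) of the pipeline can be sketched with standard clustering tools. The sketch below uses synthetic data, generic hierarchical clustering for the "signatures" and k-means for the "habitats" (requires a recent SciPy for `minit="++"`); the matrix sizes, cluster counts and the 1 − |r| distance are illustrative assumptions rather than the authors' exact choices.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.cluster.vq import kmeans2
    from scipy.spatial.distance import squareform

    rng = np.random.default_rng(1)
    D = rng.normal(size=(500, 10))          # 500 superpixels x 10 image parameters
    D[:, 3:6] += D[:, :1]                   # induce a correlated parameter group

    # Step 3: cluster the parameter correlation matrix to find "signatures".
    C = np.corrcoef(D, rowvar=False)
    dist = squareform(1 - np.abs(C), checks=False)   # distance = 1 - |correlation|
    Z = linkage(dist, method="average")
    signatures = fcluster(Z, t=3, criterion="maxclust")
    print("parameter -> signature:", signatures)

    # Step 5: cluster superpixels in parameter space to form spatial "habitats".
    _, habitats = kmeans2(D, k=3, seed=1, minit="++")
    print("superpixels per habitat:", np.bincount(habitats))
    ```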

  5. Dependence of quantitative accuracy of CT perfusion imaging on system parameters

    NASA Astrophysics Data System (ADS)

    Li, Ke; Chen, Guang-Hong

    2017-03-01

    Deconvolution is a popular method for calculating parametric perfusion maps from four-dimensional CT perfusion (CTP) source images. During deconvolution, the four-dimensional space is collapsed into three-dimensional space by removing the temporal dimension, and prior knowledge is often used to suppress the noise associated with the process. These additional complexities confound the understanding of the deconvolution-based CTP imaging system and of how its quantitative accuracy depends on the parameters and sub-operations involved in the image formation process. Meanwhile, there has been a strong clinical need to answer this question, as physicians often rely heavily on the quantitative values of perfusion parameters to make diagnostic decisions, particularly in emergent clinical situations (e.g., diagnosis of acute ischemic stroke). The purpose of this work was to develop a theoretical framework that quantitatively relates the quantification accuracy of perfusion parameters to the CTP acquisition and post-processing parameters. This goal was achieved with the help of a cascaded systems analysis of deconvolution-based CTP imaging systems, from which the quantitative relationship between regularization strength, source image noise, the arterial input function, and the quantification accuracy of perfusion parameters was established. The theory could potentially be used to guide the development of CTP imaging technology for better quantification accuracy and lower radiation dose.
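
    The role of the regularization strength in the deconvolution step can be illustrated with a small Tikhonov-regularized sketch. The AIF shape, residue function and noise level below are synthetic assumptions for demonstration; this is not the paper's cascaded-systems model.

    ```python
    import numpy as np

    def aif_matrix(aif, dt):
        """Lower-triangular convolution matrix built from the arterial input function."""
        n = len(aif)
        A = np.zeros((n, n))
        for i in range(n):
            A[i, : i + 1] = aif[i::-1]
        return A * dt

    def tikhonov_deconvolve(A, c, lam):
        """Regularized residue function: argmin ||A k - c||^2 + lam^2 ||k||^2, via SVD."""
        U, s, Vt = np.linalg.svd(A)
        f = s / (s**2 + lam**2)                  # Tikhonov filter factors
        return Vt.T @ (f * (U.T @ c))

    dt, t = 1.0, np.arange(0, 60, 1.0)
    aif = t**3 * np.exp(-t / 2); aif /= aif.max()     # gamma-variate-like AIF
    k_true = 0.6 * np.exp(-t / 8)                     # true flow-scaled residue function
    A = aif_matrix(aif, dt)
    c = A @ k_true + np.random.default_rng(2).normal(0, 0.01, t.size)

    for lam in (0.01, 0.1, 1.0):
        cbf = tikhonov_deconvolve(A, c, lam).max()    # CBF ~ max of residue function
        print(f"lambda={lam:<4} estimated CBF ~ {cbf:.3f} (true 0.6)")
    ```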

  6. WE-G-204-01: BEST IN PHYSICS (IMAGING): Effect of Image Processing Parameters On Nodule Detectability in Chest Radiography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Little, K; Lu, Z; MacMahon, H

    Purpose: To investigate the effect of varying system image processing parameters on lung nodule detectability in digital radiography. Methods: An anthropomorphic chest phantom was imaged in the posterior-anterior position using a GE Discovery XR656 digital radiography system. To simulate lung nodules, a polystyrene board with 6.35mm diameter PMMA spheres was placed adjacent to the phantom (into the x-ray path). Due to magnification, the projected simulated nodules had a diameter in the radiographs of approximately 7.5 mm. The images were processed using one of GE’s default chest settings (Factory3) and reprocessed by varying the “Edge” and “Tissue Contrast” processing parameters, which were the two user-configurable parameters for a single edge and contrast enhancement algorithm. For each parameter setting, the nodule signals were calculated by subtracting the chest-only image from the image with simulated nodules. Twenty nodule signals were averaged, Gaussian filtered, and radially averaged in order to generate an approximately noiseless signal. For each processing parameter setting, this noise-free signal and 180 background samples from across the lung were used to estimate ideal observer performance in a signal-known-exactly detection task. Performance was estimated using a channelized Hotelling observer with 10 Laguerre-Gauss channel functions. Results: The “Edge” and “Tissue Contrast” parameters each had an effect on the detectability as calculated by the model observer. The CHO-estimated signal detectability ranged from 2.36 to 2.93 and was highest for “Edge” = 4 and “Tissue Contrast” = −0.15. In general, detectability tended to decrease as “Edge” was increased and as “Tissue Contrast” was increased. A human observer study should be performed to validate the relation to human detection performance. Conclusion: Image processing parameters can affect lung nodule detection performance in radiography. While validation with a human observer study is needed, model observer detectability for common tasks could provide a means for optimizing image processing parameters.
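
    The detectability index of a channelized Hotelling observer with Laguerre-Gauss channels can be computed along the following lines. The channel width, the Gaussian nodule surrogate and the white-noise backgrounds are placeholder assumptions; the study itself used measured nodule signals and 180 lung background samples.

    ```python
    import numpy as np
    from scipy.special import eval_laguerre

    def lg_channels(size, a, n_channels=10):
        """Rotationally symmetric Laguerre-Gauss channels on a size x size grid."""
        y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
        g = 2 * np.pi * (x**2 + y**2) / a**2
        U = np.stack([np.exp(-g / 2) * eval_laguerre(j, g) for j in range(n_channels)])
        return U.reshape(n_channels, -1)

    def cho_detectability(signal, backgrounds, a=15.0):
        """CHO SNR for a signal-known-exactly detection task."""
        U = lg_channels(signal.shape[0], a)
        vs = U @ signal.ravel()                         # channelized signal
        vb = backgrounds.reshape(len(backgrounds), -1) @ U.T
        K = np.cov(vb, rowvar=False)                    # channel covariance of backgrounds
        return float(np.sqrt(vs @ np.linalg.solve(K, vs)))

    rng = np.random.default_rng(3)
    size = 64
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    signal = np.exp(-(x**2 + y**2) / (2 * 4.0**2))      # ~7.5-px nodule surrogate
    backgrounds = rng.normal(0, 1, (180, size, size))   # 180 background samples
    print(f"CHO detectability d' = {cho_detectability(signal, backgrounds):.2f}")
    ```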

  7. SU-E-J-16: Automatic Image Contrast Enhancement Based On Automatic Parameter Optimization for Radiation Therapy Setup Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qiu, J; Washington University in St Louis, St Louis, MO; Li, H. Harold

    Purpose: In RT patient setup 2D images, tissues often cannot be seen well due to the lack of image contrast. Contrast enhancement features provided by image reviewing software, e.g., Mosaiq and ARIA, require manual selection of the image processing filters and parameters, which is inefficient and cannot be automated. In this work, we developed a novel method to automatically enhance 2D RT image contrast to allow automatic verification of patient daily setups as a prerequisite step of automatic patient safety assurance. Methods: The new method is based on contrast limited adaptive histogram equalization (CLAHE) and high-pass filtering algorithms. The most important innovation is to select the optimal parameters automatically by optimizing the image contrast. The image processing procedure includes the following steps: 1) background and noise removal, 2) high-pass filtering by subtracting the Gaussian-smoothed result, and 3) histogram equalization using the CLAHE algorithm. Three parameters were determined through an iterative optimization based on the interior-point constrained optimization algorithm: the Gaussian smoothing weighting factor, and the CLAHE algorithm block size and clip limiting parameters. The goal of the optimization is to maximize the entropy of the processed result. Results: A total of 42 RT images were processed. The results were visually evaluated and ranked by RT physicians and physicists. About 48% of the images processed by the new method were ranked as excellent; in comparison, only 29% and 18% of the images processed by the basic CLAHE algorithm and by basic window-level adjustment, respectively, were ranked as excellent. Conclusion: This new image contrast enhancement method is robust and automatic, and significantly outperforms the basic CLAHE algorithm and the manual window-level adjustment process currently used in clinical 2D image review software tools.
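
    A sketch of the filter chain and the entropy objective is given below, using scikit-image's CLAHE. A coarse grid search stands in for the interior-point optimizer of the abstract, and the test image, parameter grids and kernel-size heuristic are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage import data, exposure

    def enhance(img, sigma, clip_limit, kernel_frac):
        """High-pass boost (subtract the Gaussian-smoothed image) followed by CLAHE."""
        hp = img - gaussian_filter(img, sigma)
        boosted = np.clip(img + hp, 0, 1)
        ks = max(8, int(kernel_frac * min(img.shape)))
        return exposure.equalize_adapthist(boosted, kernel_size=ks, clip_limit=clip_limit)

    def entropy(img, bins=256):
        """Shannon entropy of the gray-level histogram, the optimization objective."""
        p, _ = np.histogram(img, bins=bins, range=(0, 1), density=True)
        p = p[p > 0] / bins
        return float(-(p * np.log2(p)).sum())

    img = data.camera() / 255.0                       # stand-in for a setup radiograph
    best = max(
        ((sigma, clip, frac) for sigma in (2, 5, 10)
         for clip in (0.005, 0.01, 0.02) for frac in (1 / 16, 1 / 8)),
        key=lambda p: entropy(enhance(img, *p)),
    )
    print("entropy-optimal (sigma, clip_limit, kernel_fraction):", best)
    ```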

  8. Digital image processing and analysis for activated sludge wastewater treatment.

    PubMed

    Khan, Muhammad Burhan; Lee, Xue Yong; Nisar, Humaira; Ng, Choon Aun; Yeap, Kim Ho; Malik, Aamir Saeed

    2015-01-01

    The activated sludge system is generally used in wastewater treatment plants for processing domestic influent. Conventionally, activated sludge wastewater treatment is monitored by measuring physico-chemical parameters such as total suspended solids (TSSol), sludge volume index (SVI), and chemical oxygen demand (COD). For these measurements, tests are conducted in the laboratory and take many hours to yield a final value. Digital image processing and analysis offers a better alternative, not only to monitor and characterize the current state of activated sludge but also to predict its future state. Characterization by image processing and analysis is done by correlating the time evolution of parameters extracted by image analysis of flocs and filaments with the physico-chemical parameters. This chapter briefly reviews activated sludge wastewater treatment and the procedures of image acquisition, preprocessing, segmentation, and analysis in this specific context. In the latter part, additional preprocessing procedures such as z-stacking and image stitching are introduced for wastewater images, which have not previously been used in the context of activated sludge. Different preprocessing and segmentation techniques are proposed, along with a survey of imaging procedures reported in the literature. Finally, the image-analysis-based morphological parameters and their correlation with the monitoring and prediction of activated sludge are discussed. It is observed that image analysis can play a very useful role in the monitoring of activated sludge wastewater treatment plants.

  9. TU-FG-209-11: Validation of a Channelized Hotelling Observer to Optimize Chest Radiography Image Processing for Nodule Detection: A Human Observer Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanchez, A; Little, K; Chung, J

    Purpose: To validate the use of a Channelized Hotelling Observer (CHO) model for guiding image processing parameter selection and enable improved nodule detection in digital chest radiography. Methods: In a previous study, an anthropomorphic chest phantom was imaged with and without PMMA simulated nodules using a GE Discovery XR656 digital radiography system. The impact of image processing parameters was then explored using a CHO with 10 Laguerre-Gauss channels. In this work, we validate the CHO’s trend in nodule detectability as a function of two processing parameters by conducting a signal-known-exactly, multi-reader-multi-case (MRMC) ROC observer study. Five naive readers scored confidence of nodule visualization in 384 images with 50% nodule prevalence. The image backgrounds were regions-of-interest extracted from 6 normal patient scans, and the digitally inserted simulated nodules were obtained from phantom data in previous work. Each patient image was processed with both a near-optimal and a worst-case parameter combination, as determined by the CHO for nodule detection. The same 192 ROIs were used for each image processing method, with 32 randomly selected lung ROIs per patient image. Finally, the MRMC data was analyzed using the freely available iMRMC software of Gallas et al. Results: The image processing parameters which were optimized for the CHO led to a statistically significant improvement (p=0.049) in human observer AUC from 0.78 to 0.86, relative to the image processing implementation which produced the lowest CHO performance. Conclusion: Differences in user-selectable image processing methods on a commercially available digital radiography system were shown to have a marked impact on performance of human observers in the task of lung nodule detection. Further, the effect of processing on humans was similar to the effect on CHO performance. Future work will expand this study to include a wider range of detection/classification tasks and more observers, including experienced chest radiologists.

  10. Image processing methods in two and three dimensions used to animate remotely sensed data. [cloud cover]

    NASA Technical Reports Server (NTRS)

    Hussey, K. J.; Hall, J. R.; Mortensen, R. A.

    1986-01-01

    Image processing methods and software used to animate nonimaging remotely sensed data on cloud cover are described. Three FORTRAN programs were written in the VICAR2/TAE image processing domain to perform 3D perspective rendering, to interactively select parameters controlling the projection, and to interpolate parameter sets for animation images between key frames. Operation of the 3D programs and transferring the images to film is automated using executive control language and custom hardware to link the computer and camera.

  11. A method to optimize the processing algorithm of a computed radiography system for chest radiography.

    PubMed

    Moore, C S; Liney, G P; Beavis, A W; Saunderson, J R

    2007-09-01

    A test methodology using an anthropomorphic-equivalent chest phantom is described for the optimization of the Agfa computed radiography "MUSICA" processing algorithm for chest radiography. The contrast-to-noise ratio (CNR) in the lung, heart and diaphragm regions of the phantom, and the "system modulation transfer function" (sMTF) in the lung region, were measured using test tools embedded in the phantom. Using these parameters the MUSICA processing algorithm was optimized with respect to low-contrast detectability and spatial resolution. Two optimum "MUSICA parameter sets" were derived respectively for maximizing the CNR and sMTF in each region of the phantom. Further work is required to find the relative importance of low-contrast detectability and spatial resolution in chest images, from which the definitive optimum MUSICA parameter set can then be derived. Prior to this further work, a compromised optimum MUSICA parameter set was applied to a range of clinical images. A group of experienced image evaluators scored these images alongside images produced from the same radiographs using the MUSICA parameter set in clinical use at the time. The compromised optimum MUSICA parameter set was shown to produce measurably better images.
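
    The CNR figure of merit used to rank the MUSICA parameter sets can be computed from phantom ROIs as below; the ROI locations and the synthetic image are illustrative stand-ins for the test tools embedded in the phantom.

    ```python
    import numpy as np

    def cnr(image, roi_signal, roi_background):
        """Contrast-to-noise ratio between two rectangular ROIs (given as slices)."""
        s, b = image[roi_signal], image[roi_background]
        return abs(s.mean() - b.mean()) / b.std()

    rng = np.random.default_rng(4)
    img = rng.normal(100, 5, (256, 256))            # uniform "lung" background
    img[60:100, 60:100] += 12                       # low-contrast detail
    value = cnr(img, np.s_[60:100, 60:100], np.s_[150:200, 150:200])
    print(f"CNR = {value:.2f}")
    ```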

  12. Dynamic imaging model and parameter optimization for a star tracker.

    PubMed

    Yan, Jinyun; Jiang, Jie; Zhang, Guangjun

    2016-03-21

    Under dynamic conditions, star spots move across the image plane of a star tracker and form a smeared star image. This smearing effect increases errors in star position estimation and degrades attitude accuracy. First, an analytical energy distribution model of a smeared star spot is established based on a line segment spread function because the dynamic imaging process of a star tracker is equivalent to the static imaging process of linear light sources. The proposed model, which has a clear physical meaning, explicitly reflects the key parameters of the imaging process, including incident flux, exposure time, velocity of a star spot in an image plane, and Gaussian radius. Furthermore, an analytical expression of the centroiding error of the smeared star spot is derived using the proposed model. An accurate and comprehensive evaluation of centroiding accuracy is obtained based on the expression. Moreover, analytical solutions of the optimal parameters are derived to achieve the best performance in centroid estimation. Finally, we perform numerical simulations and a night sky experiment to validate the correctness of the dynamic imaging model, the centroiding error expression, and the optimal parameters.
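
    The dynamic imaging model can be emulated numerically by integrating a Gaussian point-spread function along the motion path and comparing the measured centroid with the mid-exposure ground truth. The grid size, Gaussian radius, velocities and noise level below are assumed values for illustration, not the paper's analytical line-segment-spread model.

    ```python
    import numpy as np

    def smeared_spot(size, x0, y0, vx, t_exp, sigma, flux, steps=200):
        """Smeared star image: a Gaussian PSF integrated along the motion path."""
        y, x = np.mgrid[:size, :size].astype(float)
        img = np.zeros((size, size))
        for t in np.linspace(0, t_exp, steps):
            cx = x0 + vx * t
            img += np.exp(-((x - cx)**2 + (y - y0)**2) / (2 * sigma**2))
        return flux * img / steps

    size, x0, y0, sigma = 32, 12.0, 16.0, 1.2
    for vx in (0.0, 2.0, 6.0):                       # spot velocity, pixels per unit time
        img = smeared_spot(size, x0, y0, vx, t_exp=1.0, sigma=sigma, flux=1000.0)
        img += np.random.default_rng(5).normal(0, 0.5, img.shape)   # readout noise
        yy, xx = np.mgrid[:size, :size]
        w = np.clip(img, 0, None)
        cx = (w * xx).sum() / w.sum()                # intensity-weighted centroid
        true_mid = x0 + vx * 0.5                     # mid-exposure ground truth
        print(f"v={vx}: centroid error = {cx - true_mid:+.3f} px")
    ```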

  13. Auto-SEIA: simultaneous optimization of image processing and machine learning algorithms

    NASA Astrophysics Data System (ADS)

    Negro Maggio, Valentina; Iocchi, Luca

    2015-02-01

    Object classification from images is an important task for machine vision and it is a crucial ingredient for many computer vision applications, ranging from security and surveillance to marketing. Image based object classification techniques properly integrate image processing and machine learning (i.e., classification) procedures. In this paper we present a system for automatic simultaneous optimization of algorithms and parameters for object classification from images. More specifically, the proposed system is able to process a dataset of labelled images and to return a best configuration of image processing and classification algorithms and of their parameters with respect to the accuracy of classification. Experiments with real public datasets are used to demonstrate the effectiveness of the developed system.

  14. Reconstruction of biofilm images: combining local and global structural parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Resat, Haluk; Renslow, Ryan S.; Beyenal, Haluk

    2014-10-20

    Digitized images can be used for quantitative comparison of biofilms grown under different conditions. Using biofilm image reconstruction, it was previously found that biofilms with a completely different look can have nearly identical structural parameters and that the most commonly utilized global structural parameters were not sufficient to uniquely define these biofilms. Here, additional local and global parameters are introduced to show that these parameters considerably increase the reliability of the image reconstruction process. Assessment using human evaluators indicated that the correct identification rate of the reconstructed images increased from 50% to 72% with the introduction of the new parameters into the reconstruction procedure. An expanded set of parameters especially improved the identification of biofilm structures with internal orientational features and of structures in which colony sizes and spatial locations varied. Hence, the newly introduced structural parameter sets helped to better classify the biofilms by incorporating finer local structural details into the reconstruction process.

  15. Effects of image processing on the detective quantum efficiency

    NASA Astrophysics Data System (ADS)

    Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na

    2010-04-01

    Digital radiography has gained popularity in many areas of clinical practice, and this transition brings interest in advancing the methodologies for image quality characterization. However, because such methodologies have not been standardized, the results of different studies cannot be directly compared. The primary objective of this study was to standardize methodologies for image quality characterization; the secondary objective was to evaluate how the modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE) are affected by the image processing algorithm. The image performance parameters MTF, NPS, and DQE were evaluated using the International Electrotechnical Commission (IEC 62220-1)-defined RQA5 radiographic technique. Computed radiography (CR) images of a hand in the posterior-anterior (PA) position for measuring the signal-to-noise ratio (SNR), a slit image for measuring the MTF, and a white image for measuring the NPS were obtained, and various Multi-Scale Image Contrast Amplification (MUSICA) parameters were applied to each of the acquired images. The results show that the modifications considerably influenced the evaluated SNR, MTF, NPS, and DQE; images modified by the post-processing had higher DQE than the MUSICA=0 image. This suggests that MUSICA values, as a post-processing step, affect the image when its quality is evaluated. In conclusion, the control parameters of image processing should be accounted for when characterizing image quality in a consistent way. The results of this study could serve as a baseline for evaluating imaging systems and their imaging characteristics by measuring MTF, NPS, and DQE.
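
    Of the three metrics, the NPS is the most mechanical to compute. A common ROI-averaged periodogram estimate, in the spirit of IEC 62220-1 though not a validated implementation of it, looks as follows; the pixel pitch, ROI size and synthetic flat-field image are assumptions.

    ```python
    import numpy as np

    def nps_2d(flat, roi=128, pixel_pitch=0.1):
        """2-D noise power spectrum from half-overlapping ROIs of a flat image."""
        rois, step = [], roi // 2
        for i in range(0, flat.shape[0] - roi + 1, step):
            for j in range(0, flat.shape[1] - roi + 1, step):
                r = flat[i:i + roi, j:j + roi]
                rois.append(r - r.mean())            # detrend each ROI
        spectra = [np.abs(np.fft.fft2(r))**2 for r in rois]
        return pixel_pitch**2 / roi**2 * np.mean(spectra, axis=0)

    rng = np.random.default_rng(6)
    flat = rng.normal(1000, 20, (1024, 1024))        # uniform-exposure "white" image
    nps = nps_2d(flat)
    # For white noise the NPS is flat at pitch^2 * variance, a quick sanity check.
    print(f"mean NPS = {nps.mean():.3f}; white-noise expectation = {0.1**2 * 20**2:.3f}")
    ```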

  16. Geometric correction of synchronous scanned Operational Modular Imaging Spectrometer II hyperspectral remote sensing images using spatial positioning data of an inertial navigation system

    NASA Astrophysics Data System (ADS)

    Zhou, Xiaohu; Neubauer, Franz; Zhao, Dong; Xu, Shichao

    2015-01-01

    High-precision geometric correction in airborne hyperspectral remote sensing image processing is a difficult problem, and conventional correction methods based on selecting ground control points are not suitable for airborne hyperspectral images. An optical scanning system combining an inertial measurement unit with a differential global positioning system (IMU/DGPS) is introduced to correct synchronously scanned Operational Modular Imaging Spectrometer II (OMIS II) hyperspectral remote sensing images. Attitude parameters synchronized with OMIS II were first obtained from the IMU/DGPS. Second, coordinate conversion and flight attitude parameter calculations were conducted. Third, according to the imaging principle of OMIS II, a mathematical correction was applied and the corrected image pixels were resampled. Better image processing results were thereby achieved.

  17. An Automatic Image Processing Workflow for Daily Magnetic Resonance Imaging Quality Assurance.

    PubMed

    Peltonen, Juha I; Mäkelä, Teemu; Sofiev, Alexey; Salli, Eero

    2017-04-01

    The performance of magnetic resonance imaging (MRI) equipment is typically monitored with a quality assurance (QA) program. The QA program includes various tests performed at regular intervals. Users may execute specific tests, e.g., daily, weekly, or monthly. The exact interval of these measurements varies according to the department policies, machine setup and usage, manufacturer's recommendations, and available resources. In our experience, a single image acquired before the first patient of the day offers a low effort and effective system check. When this daily QA check is repeated with identical imaging parameters and phantom setup, the data can be used to derive various time series of the scanner performance. However, daily QA with manual processing can quickly become laborious in a multi-scanner environment. Fully automated image analysis and results output can positively impact the QA process by decreasing reaction time, improving repeatability, and by offering novel performance evaluation methods. In this study, we have developed a daily MRI QA workflow that can measure multiple scanner performance parameters with minimal manual labor required. The daily QA system is built around a phantom image taken by the radiographers at the beginning of day. The image is acquired with a consistent phantom setup and standardized imaging parameters. Recorded parameters are processed into graphs available to everyone involved in the MRI QA process via a web-based interface. The presented automatic MRI QA system provides an efficient tool for following the short- and long-term stability of MRI scanners.

  18. Automatic brain MR image denoising based on texture feature-based artificial neural networks.

    PubMed

    Chang, Yu-Ning; Chang, Herng-Hua

    2015-01-01

    Noise is one of the main sources of quality deterioration not only for visual inspection but also in computerized processing in brain magnetic resonance (MR) image analysis such as tissue classification, segmentation and registration. Accordingly, noise removal in brain MR images is important for a wide variety of subsequent processing applications. However, most existing denoising algorithms require laborious tuning of parameters that are often sensitive to specific image features and textures. Automation of these parameters through artificial intelligence techniques will be highly beneficial. In the present study, an artificial neural network associated with image texture feature analysis is proposed to establish a predictable parameter model and automate the denoising procedure. In the proposed approach, a total of 83 image attributes were extracted based on four categories: 1) basic image statistics, 2) the gray-level co-occurrence matrix (GLCM), 3) the gray-level run-length matrix (GLRLM), and 4) Tamura texture features. To obtain the ranking of discrimination in these texture features, a paired-samples t-test was applied to each individual image feature computed in every image. Subsequently, the sequential forward selection (SFS) method was used to select the best texture features according to the ranking of discrimination. The selected optimal features were further incorporated into a back propagation neural network to establish a predictable parameter model. A wide variety of MR images with various scenarios were adopted to evaluate the performance of the proposed framework. Experimental results indicated that this new automation system accurately predicted the bilateral filtering parameters and effectively removed the noise in a number of MR images. Compared with the manually tuned filtering process, our approach not only produced better denoised results but also saved significant processing time.
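
    The GLCM portion of the feature extraction can be reproduced with scikit-image (version 0.19 or later for the `gray*` spelling). The distances, angles and test image below are illustrative choices; the trained network that maps such features to bilateral-filter parameters is not shown.

    ```python
    import numpy as np
    from skimage import data
    from skimage.feature import graycomatrix, graycoprops

    img = data.camera()                                  # stand-in for an MR slice
    glcm = graycomatrix(
        img, distances=[1, 2], angles=[0, np.pi / 2],
        levels=256, symmetric=True, normed=True,
    )
    features = {
        prop: graycoprops(glcm, prop).ravel()            # one value per (distance, angle)
        for prop in ("contrast", "homogeneity", "energy", "correlation")
    }
    for name, vals in features.items():
        print(f"{name:12s} {np.round(vals, 4)}")
    # In the paper's scheme, vectors like these (plus GLRLM, Tamura and basic
    # statistics) feed a trained network that predicts the filter parameters.
    ```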

  19. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters

    PubMed Central

    Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei

    2016-01-01

    Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images, and the accuracy of remote sensing thematic information depends on this extraction. On the basis of WorldView-2 high-resolution data, an optimal-segmentation-parameters method for object-oriented image segmentation and high-resolution image information extraction was developed through the following steps. Firstly, the best combination of bands and weights was determined for information extraction from the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed using control variables and a combination of heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features, and the high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can replace subjective expert judgment with reproducible quantitative measurements, and the results of this procedure may be incorporated into a classification scheme. PMID:27362762

  20. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters.

    PubMed

    Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei

    2016-01-01

    Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images, and the accuracy of remote sensing thematic information depends on this extraction. On the basis of WorldView-2 high-resolution data, an optimal-segmentation-parameters method for object-oriented image segmentation and high-resolution image information extraction was developed through the following steps. Firstly, the best combination of bands and weights was determined for information extraction from the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed using control variables and a combination of heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features, and the high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can replace subjective expert judgment with reproducible quantitative measurements, and the results of this procedure may be incorporated into a classification scheme.

  1. Color separation in forensic image processing using interactive differential evolution.

    PubMed

    Mushtaq, Harris; Rahnamayan, Shahryar; Siddiqi, Areeb

    2015-01-01

    Color separation is an image processing technique that has often been used in forensic applications to differentiate among variant colors and to remove unwanted image interference. This process can reveal important information such as covered text or fingerprints in forensic investigation procedures. However, several limitations prevent users from selecting the appropriate parameters pertaining to the desired and undesired colors. This study proposes the hybridization of interactive differential evolution (IDE) with a color separation technique so that users no longer need to guess the required control parameters. The IDE algorithm optimizes these parameters interactively by utilizing human visual judgment to uncover desired objects. A comprehensive experimental verification has been conducted on various sample test images, including heavily obscured texts, texts with subtle color variations, and fingerprint smudges. The advantage of IDE is apparent as it effectively optimizes the color separation parameters at a level indiscernible to the naked eye.
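
    The optimization core is ordinary differential evolution with the fitness evaluation delegated to a person. The sketch below replaces the human judgment with a numeric stand-in so it can run unattended; the three "separation parameters" and their target values are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def de_generation(pop, fitness, F=0.8, CR=0.9):
        """One generation of classic DE/rand/1/bin with greedy selection."""
        new = pop.copy()
        for i in range(len(pop)):
            a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
            cross = rng.random(pop.shape[1]) < CR
            trial = np.where(cross, a + F * (b - c), pop[i])
            # In interactive DE this comparison is a human judging two separated
            # images side by side; here a numeric stand-in decides instead.
            if fitness(trial) >= fitness(pop[i]):
                new[i] = trial
        return new

    # Parameters: (hue center, hue tolerance, saturation cut) for color separation.
    target = np.array([0.12, 0.05, 0.30])            # hidden "ideal" settings
    fitness = lambda p: -np.linalg.norm(p - target)
    pop = rng.random((12, 3))
    for _ in range(80):
        pop = de_generation(pop, fitness)
    print("evolved parameters:", np.round(max(pop, key=fitness), 3))
    ```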

  2. Online measurement for geometrical parameters of wheel set based on structure light and CUDA parallel processing

    NASA Astrophysics Data System (ADS)

    Wu, Kaihua; Shao, Zhencheng; Chen, Nian; Wang, Wenjie

    2018-01-01

    The wear of the wheel-set tread is one of the main factors influencing the safety and stability of a running train; the geometrical parameters of interest mainly include flange thickness and flange height. A structured laser line is projected onto the wheel tread surface, and the geometrical parameters can be deduced from the profile image. An online image acquisition system was designed based on asynchronous reset of the CCD and a CUDA parallel processing unit, with image acquisition driven by hardware interrupts. A high-efficiency parallel segmentation algorithm based on CUDA is proposed: the algorithm first divides the image into smaller squares, then extracts the squares belonging to the target by a fusion of the k-means and STING clustering image segmentation algorithms. The segmentation time is less than 0.97 ms, a considerable acceleration ratio compared with serial CPU calculation, which greatly improves the real-time image processing capacity. When a wheel set passes at limited speed, the system, placed along the railway line, measures the geometrical parameters automatically; the maximum measuring speed is 120 km/h.

  3. Automatic x-ray image contrast enhancement based on parameter auto-optimization.

    PubMed

    Qiu, Jianfeng; Harold Li, H; Zhang, Tiezhi; Ma, Fangfang; Yang, Deshan

    2017-11-01

    Insufficient image contrast in radiation therapy daily setup x-ray images can negatively affect accurate patient treatment setup. We developed a method to perform automatic, user-independent contrast enhancement on 2D kilovoltage (kV) and megavoltage (MV) x-ray images. The goal was to provide tissue contrast optimized for each treatment site in order to support accurate patient daily treatment setup and the subsequent offline review. The proposed method processes the 2D x-ray images with an optimized image processing filter chain, which consists of a noise reduction filter and a high-pass filter followed by a contrast limited adaptive histogram equalization (CLAHE) filter. The most important innovation is to optimize the image processing parameters automatically so as to determine the required image contrast settings per disease site and imaging modality. Three major parameters controlling the image processing chain, i.e., the Gaussian smoothing weighting factor for the high-pass filter, and the block size and clip limiting parameter for the CLAHE filter, were determined automatically using an interior-point constrained optimization algorithm. Fifty-two kV and MV x-ray images were included in this study. The results were manually evaluated and ranked by physicians and physicists with scores from 1 (worst, unacceptable) to 5 (significantly better than adequate and visually praiseworthy). The average scores for the images processed by the proposed method, by CLAHE alone, and by the best window-level adjustment were 3.92, 2.83, and 2.27, respectively; the percentages of processed images receiving a score of 5 were 48%, 29%, and 18%, respectively. The proposed method outperforms the standard image contrast adjustment procedures currently used in commercial clinical systems. Implemented in clinical systems as an automatic image processing filter, it could allow quicker and potentially more accurate treatment setup and facilitate the subsequent offline review and verification.

  4. A modified approach combining FNEA and watershed algorithms for segmenting remotely-sensed optical images

    NASA Astrophysics Data System (ADS)

    Liu, Likun

    2018-01-01

    In the field of remote sensing image processing, image segmentation is a preliminary step for later analysis, including semi-automatic human interpretation and fully automatic machine recognition and learning. Since 2000, object-oriented remote sensing image processing methods have prevailed, the core of which is the Fractal Net Evolution Approach (FNEA) multi-scale segmentation algorithm. This paper focuses on research and improvement of that algorithm: present segmentation algorithms are analyzed and the optimal watershed algorithm is selected as an initialization. The algorithm is then modified by adjusting an area parameter, and further by combining the area parameter with a heterogeneity parameter. Several experiments demonstrate that the modified FNEA algorithm yields a better segmentation result than a traditional pixel-based method (an FCM algorithm based on neighborhood information) and than the plain combination of FNEA and watershed.

  5. Restoration of motion blurred images

    NASA Astrophysics Data System (ADS)

    Gaxiola, Leopoldo N.; Juarez-Salazar, Rigoberto; Diaz-Ramirez, Victor H.

    2017-08-01

    Image restoration is a classic problem in image processing. Image degradations can occur for several reasons, for instance, imperfections of imaging systems, quantization errors, atmospheric turbulence, or relative motion between the camera and objects. Motion blur is a typical degradation in dynamic imaging systems. In this work, we present a method to estimate the parameters of linear motion blur degradation from a captured blurred image. The proposed method analyzes the frequency spectrum of the captured image in order to first estimate the degradation parameters and then restore the image with a linear filter. The performance of the proposed method is evaluated by processing synthetic and real-life images, and the results are characterized in terms of the accuracy of image restoration given by an objective criterion.
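
    Once the blur length and angle have been estimated from the spectral zero pattern, restoration with a linear filter is straightforward. The sketch below assumes the parameters are already known and uses a constant-k Wiener filter, which is one common choice of linear filter rather than necessarily the authors' filter; the image, blur length and angle are assumed values.

    ```python
    import numpy as np

    def motion_psf(shape, length, angle_deg):
        """Linear motion-blur PSF of a given length and direction, centered at the origin."""
        psf = np.zeros(shape)
        cy, cx = shape[0] // 2, shape[1] // 2
        th = np.deg2rad(angle_deg)
        for t in np.linspace(-length / 2, length / 2, 4 * int(length) + 1):
            psf[int(round(cy + t * np.sin(th))), int(round(cx + t * np.cos(th)))] = 1
        return np.fft.ifftshift(psf / psf.sum())

    def wiener(blurred, psf, k=1e-2):
        """Frequency-domain Wiener restoration with constant noise-to-signal ratio k."""
        H = np.fft.fft2(psf)
        G = np.fft.fft2(blurred)
        return np.real(np.fft.ifft2(np.conj(H) / (np.abs(H)**2 + k) * G))

    rng = np.random.default_rng(8)
    img = np.zeros((128, 128)); img[48:80, 48:80] = 1.0
    psf = motion_psf(img.shape, length=15, angle_deg=30)
    blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
    blurred += rng.normal(0, 0.01, img.shape)
    restored = wiener(blurred, psf, k=1e-2)
    print("restoration RMSE:", np.sqrt(np.mean((restored - img)**2)).round(4))
    ```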

  6. Image data-processing system for solar astronomy

    NASA Technical Reports Server (NTRS)

    Wilson, R. M.; Teuber, D. L.; Watkins, J. R.; Thomas, D. T.; Cooper, C. M.

    1977-01-01

    The paper describes an image data processing system (IDAPS), its hardware/software configuration, and interactive and batch modes of operation for the analysis of the Skylab/Apollo Telescope Mount S056 X-Ray Telescope experiment data. Interactive IDAPS is primarily designed to provide on-line interactive user control of image processing operations for image familiarization, sequence and parameter optimization, and selective feature extraction and analysis. Batch IDAPS follows the normal conventions of card control and data input and output, and is best suited where the desired parameters and sequence of operations are known and when long image-processing times are required. Particular attention is given to the way in which this system has been used in solar astronomy and other investigations. Some recent results obtained by means of IDAPS are presented.

  7. Display system for imaging scientific telemetric information

    NASA Technical Reports Server (NTRS)

    Zabiyakin, G. I.; Rykovanov, S. N.

    1979-01-01

    A system for imaging scientific telemetric information, based on the M-6000 minicomputer and the SIGD graphic display, is described. It provides two-dimensional graphic display of telemetric information and interaction with the computer for the analysis and processing of the telemetric parameters displayed on the screen. The running-parameter information output method is presented, and user capabilities in the analysis and processing of telemetric information imaged on the display screen, along with the user language, are discussed and illustrated.

  8. A simplified and powerful image processing methods to separate Thai jasmine rice and sticky rice varieties

    NASA Astrophysics Data System (ADS)

    Khondok, Piyoros; Sakulkalavek, Aparporn; Suwansukho, Kajpanya

    2018-03-01

    A simplified yet powerful image processing procedure to separate the paddy of KHAW DOK MALI 105 (Thai jasmine rice) from the paddy of the sticky rice variety RD6 is proposed. The procedure consists of image thresholding, image chain coding, and curve fitting using a polynomial function. From the fitting, three parameters of each variety were calculated: perimeter, area, and eccentricity. Finally, the overall parameters were combined using principal component analysis. The results show that these procedures can significantly separate the two varieties.
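
    With scikit-image, the perimeter/area/eccentricity extraction and the PCA combination can be sketched as follows. The synthetic elliptical "grains" stand in for thresholded paddy images, so the grain sizes, counts and the area cutoff are assumptions.

    ```python
    import numpy as np
    from skimage import measure

    rng = np.random.default_rng(9)

    def shape_features(binary):
        """Perimeter, area and eccentricity of each grain in a thresholded image."""
        labels = measure.label(binary)
        return np.array([
            (r.perimeter, r.area, r.eccentricity)
            for r in measure.regionprops(labels) if r.area > 20
        ])

    def grains(n, ry, rx):
        """Synthetic field of elliptical grains with semi-axes ry, rx."""
        img = np.zeros((300, 300), bool)
        for _ in range(n):
            cy, cx = rng.integers(30, 270, 2)
            y, x = np.ogrid[:300, :300]
            img |= ((y - cy) / ry)**2 + ((x - cx) / rx)**2 < 1
        return img

    feats = np.vstack([shape_features(grains(8, 6, 14)),     # long-grain variety
                       shape_features(grains(8, 8, 10))])    # rounder variety
    # PCA via SVD of the standardized feature matrix; PC1 separates the shapes.
    z = (feats - feats.mean(0)) / feats.std(0)
    _, _, Vt = np.linalg.svd(z, full_matrices=False)
    print("first principal component per grain:", np.round(z @ Vt[0], 2))
    ```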

  9. A Graph Based Interface for Representing Volume Visualization Results

    NASA Technical Reports Server (NTRS)

    Patten, James M.; Ma, Kwan-Liu

    1998-01-01

    This paper discusses a graph based user interface for representing the results of the volume visualization process. As images are rendered, they are connected to other images in a graph based on their rendering parameters. The user can take advantage of the information in this graph to understand how certain rendering parameter changes affect a dataset, making the visualization process more efficient. Because the graph contains more information than is contained in an unstructured history of images, the image graph is also helpful for collaborative visualization and animation.

  10. Detecting the Extent of Cellular Decomposition after Sub-Eutectoid Annealing in Rolled UMo Foils

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kautz, Elizabeth J.; Jana, Saumyadeep; Devaraj, Arun

    2017-07-31

    This report presents an automated image processing approach to quantifying microstructure image data, specifically the extent of eutectoid (cellular) decomposition in rolled U-10Mo foils. An image processing approach is used here to be able to quantitatively describe microstructure image data in order to relate microstructure to processing parameters (time, temperature, deformation).

  11. Camera calibration based on the back projection process

    NASA Astrophysics Data System (ADS)

    Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui

    2015-12-01

    Camera calibration plays a crucial role in 3D measurement tasks of machine vision. In typical calibration processes, camera parameters are iteratively optimized in the forward imaging process (FIP). However, the results can only guarantee the minimum of 2D projection errors on the image plane, but not the minimum of 3D reconstruction errors. In this paper, we propose a universal method for camera calibration, which uses the back projection process (BPP). In our method, a forward projection model is used to obtain initial intrinsic and extrinsic parameters with a popular planar checkerboard pattern. Then, the extracted image points are projected back into 3D space and compared with the ideal point coordinates. Finally, the estimation of the camera parameters is refined by a non-linear function minimization process. The proposed method can obtain a more accurate calibration result, which is more physically useful. Simulation and practical data are given to demonstrate the accuracy of the proposed method.

  12. Spaceborne SAR Imaging Algorithm for Coherence Optimized.

    PubMed

    Qiu, Zhiwei; Yue, Jianping; Wang, Xueqin; Yue, Shun

    2016-01-01

    This paper proposes a SAR imaging algorithm with the largest coherence, based on existing SAR imaging algorithms. The basic idea of conventional SAR imaging processing is that the output signal attains the maximum signal-to-noise ratio (SNR) when the optimal imaging parameters are used. A traditional imaging algorithm can achieve the best focusing effect, but it introduces decoherence in the subsequent interferometric processing. In the algorithm proposed in this paper, the SAR echoes instead adopt consistent imaging parameters during focusing. Although the SNR of the output signal is reduced slightly, coherence is largely preserved, and an interferogram of high quality is finally obtained. Two scenes of Envisat ASAR data over Zhangbei are employed to test the algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is more suitable for SAR interferometry (InSAR) research and application.

  13. Spaceborne SAR Imaging Algorithm for Coherence Optimized

    PubMed Central

    Qiu, Zhiwei; Yue, Jianping; Wang, Xueqin; Yue, Shun

    2016-01-01

    This paper proposes a SAR imaging algorithm with the largest coherence, based on existing SAR imaging algorithms. The basic idea of conventional SAR imaging processing is that the output signal attains the maximum signal-to-noise ratio (SNR) when the optimal imaging parameters are used. A traditional imaging algorithm can achieve the best focusing effect, but it introduces decoherence in the subsequent interferometric processing. In the algorithm proposed in this paper, the SAR echoes instead adopt consistent imaging parameters during focusing. Although the SNR of the output signal is reduced slightly, coherence is largely preserved, and an interferogram of high quality is finally obtained. Two scenes of Envisat ASAR data over Zhangbei are employed to test the algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is more suitable for SAR interferometry (InSAR) research and application. PMID:26871446

  14. Automated synthesis of image processing procedures using AI planning techniques

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Mortensen, Helen

    1994-01-01

    This paper describes the Multimission VICAR (Video Image Communication and Retrieval) Planner (MVP) (Chien 1994) system, which uses artificial intelligence planning techniques (Iwasaki & Friedland, 1985, Pemberthy & Weld, 1992, Stefik, 1981) to automatically construct executable complex image processing procedures (using models of the smaller constituent image processing subprograms) in response to image processing requests made to the JPL Multimission Image Processing Laboratory (MIPL). The MVP system allows the user to specify the image processing requirements in terms of the various types of correction required. Given this information, MVP derives unspecified required processing steps and determines appropriate image processing programs and parameters to achieve the specified image processing goals. This information is output as an executable image processing program which can then be executed to fill the processing request.

  15. Noise parameter estimation for Poisson-corrupted images using variance stabilization transforms.

    PubMed

    Jin, Xiaodan; Xu, Zhenyu; Hirakawa, Keigo

    2014-03-01

    Noise is present in all images captured by real-world image sensors. The Poisson distribution is said to model the stochastic nature of the photon arrival process and agrees with the distribution of measured pixel values. We propose a method for estimating unknown noise parameters from Poisson-corrupted images using properties of variance stabilization. With a significantly lower computational complexity and improved stability, the proposed estimation technique yields noise parameters that are comparable in accuracy to state-of-the-art methods.
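
    The principle can be demonstrated with the Anscombe transform: for a gain-scaled Poisson image, the transform stabilizes the noise variance to one exactly when the trial gain matches the true gain. The gain model, search grid and use of horizontal pixel differences below are illustrative assumptions, not the paper's estimator.

    ```python
    import numpy as np

    def stabilized_variance(x, alpha):
        """Noise variance after the Anscombe transform under a trial gain alpha."""
        t = 2.0 * np.sqrt(np.clip(x / alpha, 0, None) + 3.0 / 8.0)
        # Horizontal differences cancel the (smooth) image content, leaving noise.
        d = (t[:, 1:] - t[:, :-1]) / np.sqrt(2.0)
        return d.var()

    rng = np.random.default_rng(10)
    true_gain = 2.5
    lam = 40 + 30 * np.sin(np.linspace(0, 3, 512))[None, :] * np.ones((512, 1))
    img = true_gain * rng.poisson(lam)                 # gain-scaled Poisson image

    # The correct gain is the one for which the stabilized variance equals 1.
    gains = np.linspace(0.5, 5.0, 200)
    errs = [abs(stabilized_variance(img, a) - 1.0) for a in gains]
    print(f"estimated gain ~ {gains[int(np.argmin(errs))]:.2f} (true {true_gain})")
    ```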

  16. Image Display and Manipulation System (IDAMS) program documentation, Appendixes A-D. [including routines, convolution filtering, image expansion, and fast Fourier transformation

    NASA Technical Reports Server (NTRS)

    Cecil, R. W.; White, R. A.; Szczur, M. R.

    1972-01-01

    The IDAMS Processor is a package of task routines and support software that performs convolution filtering, image expansion, fast Fourier transformation, and other operations on a digital image tape. A unique task control card for that program, together with any necessary parameter cards, selects each processing technique to be applied to the input image. A variable number of tasks can be selected for execution by including the proper task and parameter cards in the input deck. An executive maintains control of the run; it initiates execution of each task in turn and handles any necessary error processing.

  17. Image informative maps for component-wise estimating parameters of signal-dependent noise

    NASA Astrophysics Data System (ADS)

    Uss, Mykhail L.; Vozel, Benoit; Lukin, Vladimir V.; Chehdi, Kacem

    2013-01-01

    We deal with the problem of blind parameter estimation of signal-dependent noise from mono-component image data. Multispectral or color images can be processed in a component-wise manner. The main results obtained rest on the assumption that the image texture and noise parameters estimation problems are interdependent. A two-dimensional fractal Brownian motion (fBm) model is used for locally describing image texture. A polynomial model is assumed for the purpose of describing the signal-dependent noise variance dependence on image intensity. Using the maximum likelihood approach, estimates of both fBm-model and noise parameters are obtained. It is demonstrated that Fisher information (FI) on noise parameters contained in an image is distributed nonuniformly over intensity coordinates (an image intensity range). It is also shown how to find the most informative intensities and the corresponding image areas for a given noisy image. The proposed estimator benefits from these detected areas to improve the estimation accuracy of signal-dependent noise parameters. Finally, the potential estimation accuracy (Cramér-Rao Lower Bound, or CRLB) of noise parameters is derived, providing confidence intervals of these estimates for a given image. In the experiment, the proposed and existing state-of-the-art noise variance estimators are compared for a large image database using CRLB-based statistical efficiency criteria.

  18. An automatic segmentation method of a parameter-adaptive PCNN for medical images.

    PubMed

    Lian, Jing; Shi, Bin; Li, Mingcong; Nan, Ziwei; Ma, Yide

    2017-09-01

    Since the pre-processing and initial segmentation steps directly affect the final segmentation results for the regions of interest in medical images, an automatic segmentation method using a parameter-adaptive pulse-coupled neural network is proposed to integrate these two steps into one. The method has low computational complexity for different kinds of medical images and high segmentation precision. It comprises four steps. Firstly, an optimal histogram threshold is used to determine the parameter [Formula: see text] for different kinds of images. Secondly, we acquire the parameter [Formula: see text] according to a simplified pulse-coupled neural network (SPCNN). Thirdly, we redefine the parameter V of the SPCNN model by the sub-intensity distribution range of firing pixels. Fourthly, we add an offset [Formula: see text] to improve the initial segmentation precision. Compared with state-of-the-art algorithms, the new method achieves a comparable performance in experiments on ultrasound images of the gallbladder and gallstones, magnetic resonance images of the left ventricle, and mammogram images of the left and right breast, presenting an overall metric UM of 0.9845, CM of 0.8142, and TM of 0.0726. The algorithm has great potential for handling the pre-processing and initial segmentation steps in various medical images, which is a premise for assisting physicians to detect and diagnose clinical cases.

  19. A Digital Sensor Simulator of the Pushbroom Offner Hyperspectral Imaging Spectrometer

    PubMed Central

    Tao, Dongxing; Jia, Guorui; Yuan, Yan; Zhao, Huijie

    2014-01-01

    Sensor simulators can be used for forecasting the imaging quality of a new hyperspectral imaging spectrometer and for generating simulated data for the development and validation of data processing algorithms. This paper presents a novel digital sensor simulator for the pushbroom Offner hyperspectral imaging spectrometer, which is widely used in hyperspectral remote sensing. Based on the imaging process, the sensor simulator consists of a spatial response module, a spectral response module, and a radiometric response module. In order to enhance the simulation accuracy, spatial interpolation-resampling, implemented before the spatial degradation, is developed to compromise between the direction error and the extra aliasing effect. Instead of using the spectral response function (SRF), the dispersive imaging characteristics of the Offner convex grating optical system are accurately modeled by its configuration parameters. Non-uniformity characteristics, such as keystone and smile effects, are simulated in the corresponding modules. In this work, the spatial, spectral, and radiometric calibration processes are simulated to provide the modulation transfer function (MTF), SRF, and radiometric calibration parameters of the sensor simulator. Some uncertainty factors (the stability and bandwidth of the monochromator for the spectral calibration, and the integrating sphere uncertainty for the radiometric calibration) are considered in the simulation of the calibration process. With the calibration parameters, several experiments were designed to validate the spatial, spectral, and radiometric response of the sensor simulator, respectively. The experimental results indicate that the sensor simulator is valid. PMID:25615727

  20. Fast Image Restoration for Spatially Varying Defocus Blur of Imaging Sensor

    PubMed Central

    Cheong, Hejin; Chae, Eunjung; Lee, Eunsung; Jo, Gwanghyun; Paik, Joonki

    2015-01-01

    This paper presents a fast adaptive image restoration method for removing spatially varying out-of-focus blur of a general imaging sensor. After estimating the parameters of the space-variant point spread function (PSF) using the derivative in each uniformly blurred region, the proposed method performs spatially adaptive image restoration by selecting the optimal restoration filter according to the estimated blur parameters. Each restoration filter is implemented as a combination of multiple FIR filters, which guarantees fast image restoration without the need for iterative or recursive processing. Experimental results show that the proposed method outperforms existing space-invariant restoration methods in terms of both objective and subjective performance measures. The proposed algorithm can be employed in a wide range of image restoration applications, such as mobile imaging devices, robot vision, and satellite image processing. PMID:25569760
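
    A toy sketch of the selection idea follows: estimate a blur parameter per region, then apply a precomputed FIR restoration filter matched to it. Real restoration kernels would be designed from the estimated PSF; here each bank entry is a simple unsharp-mask kernel standing in for a filter tuned to one blur level, and the blur-to-index mapping is ad hoc. All names and values are illustrative, not the authors' design.

        import numpy as np
        from scipy.ndimage import convolve

        def make_bank(strengths=(0.0, 0.5, 1.0, 2.0)):
            lap = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], float)
            eye = np.zeros((3, 3)); eye[1, 1] = 1.0
            return [eye + s * lap for s in strengths]  # identity + scaled Laplacian

        def estimate_blur(region):
            """Crude proxy: low mean gradient magnitude suggests more defocus."""
            gy, gx = np.gradient(region.astype(float))
            return np.mean(np.hypot(gx, gy))

        def restore(image, bank, block=32):
            out = np.zeros_like(image, dtype=float)
            for i in range(0, image.shape[0], block):
                for j in range(0, image.shape[1], block):
                    r = image[i:i+block, j:j+block]
                    # sharper region -> weaker filter; blurrier -> stronger
                    idx = min(len(bank) - 1, int(0.05 / (estimate_blur(r) + 1e-6)))
                    out[i:i+block, j:j+block] = convolve(r.astype(float),
                                                         bank[idx], mode='nearest')
            return out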

  1. Efficient HIK SVM learning for image classification.

    PubMed

    Wu, Jianxin

    2012-10-01

    Histograms are used in almost every aspect of image processing and computer vision, from visual descriptors to image representations. Histogram intersection kernel (HIK) and support vector machine (SVM) classifiers are shown to be very effective in dealing with histograms. This paper presents contributions concerning HIK SVM for image classification. First, we propose intersection coordinate descent (ICD), a deterministic and scalable HIK SVM solver. ICD is much faster than, and has similar accuracies to, general purpose SVM solvers and other fast HIK SVM training methods. We also extend ICD to the efficient training of a broader family of kernels. Second, we show an important empirical observation that ICD is not sensitive to the C parameter in SVM, and we provide some theoretical analyses to explain this observation. ICD achieves high accuracies in many problems, using its default parameters. This is an attractive property for practitioners, because many image processing tasks are too large to choose SVM parameters using cross-validation.
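
    For reference, the histogram intersection kernel at the heart of the method is simple to state and compute: K(h1, h2) = sum over bins d of min(h1[d], h2[d]). A minimal sketch:

        import numpy as np

        def hik(H1, H2):
            """Gram matrix between two sets of histograms (rows are histograms)."""
            return np.array([[np.minimum(a, b).sum() for b in H2] for a in H1])

        # toy usage with three 8-bin histograms
        H = np.random.rand(3, 8)
        K = hik(H, H)   # symmetric positive semi-definite kernel matrix

    Such a precomputed Gram matrix can be fed to any generic SVM solver that accepts precomputed kernels, although the point of ICD is to exploit the structure of min() so that the full kernel matrix never has to be formed.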

  2. An indirect method of imaging the Stokes parameters of a submicron particle with sub-diffraction scattering

    NASA Astrophysics Data System (ADS)

    Ullah, Kaleem; Garcia-Camara, Braulio; Habib, Muhammad; Yadav, N. P.; Liu, Xuefeng

    2018-07-01

    In this work, we report an indirect way to image the Stokes parameters of a sample under test (SUT) from sub-diffraction scattering information. We apply our previously reported technique, parametric indirect microscopic imaging (PIMI), which is based on a fitting and filtration process, to measure the Stokes parameters of a submicron particle; a comparison with a classical Stokes measurement is also shown. By modulating the incident field in a precise way, the fitting and filtration process at each pixel of the detector enables PIMI to resolve the scattering information of the SUT and map it in terms of the Stokes parameters. We believe that our finding can be useful in fields such as singular optics, optical nanoantennas, and biomedicine. The spatial signature of the Stokes parameters given by our method has been confirmed with the finite-difference time-domain (FDTD) method.
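
    For orientation, the classical per-pixel Stokes computation that the PIMI result is compared against can be written directly from six analyzer measurements; the PIMI fitting and filtration steps themselves are not reproduced here.

        import numpy as np

        def stokes_images(I0, I90, I45, I135, Ircp, Ilcp):
            """Each argument is an intensity image behind the named analyzer."""
            S0 = I0 + I90      # total intensity
            S1 = I0 - I90      # horizontal vs vertical linear polarization
            S2 = I45 - I135    # +45 deg vs -45 deg linear polarization
            S3 = Ircp - Ilcp   # right vs left circular polarization
            return S0, S1, S2, S3

        # the degree of polarization follows directly:
        # DOP = np.sqrt(S1**2 + S2**2 + S3**2) / S0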

  3. Modeling of 2D diffusion processes based on microscopy data: parameter estimation and practical identifiability analysis.

    PubMed

    Hock, Sabrina; Hasenauer, Jan; Theis, Fabian J

    2013-01-01

    Diffusion is a key component of many biological processes such as chemotaxis, developmental differentiation and tissue morphogenesis. The spatial gradients caused by diffusion can now be assessed in vitro and in vivo using microscopy-based imaging techniques. The resulting time series of two-dimensional, high-resolution images, in combination with mechanistic models, enable the quantitative analysis of the underlying mechanisms. However, such a model-based analysis is still challenging due to measurement noise and sparse observations, which result in uncertainties in the model parameters. We introduce a likelihood function for image-based measurements with log-normally distributed noise. Based upon this likelihood function we formulate the maximum likelihood estimation problem, which is solved using PDE-constrained optimization methods. To assess the uncertainty and practical identifiability of the parameters we introduce profile likelihoods for diffusion processes. As proof of concept, we model certain aspects of the guidance of dendritic cells towards lymphatic vessels, an example of haptotaxis. Using a realistic set of artificial measurement data, we estimate the five kinetic parameters of this model and compute profile likelihoods. Our novel approach for the estimation of model parameters from image data, as well as the proposed identifiability analysis, is widely applicable to diffusion processes. The profile-likelihood-based method provides more rigorous uncertainty bounds than local approximation methods.
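
    The log-normal likelihood at the center of the approach is compact; a minimal sketch, assuming the model prediction mu(theta) is supplied by an external PDE solver (the solver itself is outside this sketch):

        import numpy as np

        def neg_log_likelihood(y, mu, sigma):
            """y, mu: arrays of observed and predicted intensities (> 0);
            sigma: standard deviation of the noise on the log scale."""
            r = np.log(y) - np.log(mu)
            return np.sum(np.log(y * sigma * np.sqrt(2 * np.pi))
                          + r**2 / (2 * sigma**2))

        # A profile likelihood for one parameter fixes that parameter on a
        # grid, re-optimizes all remaining parameters at each grid point, and
        # records the resulting minimum of this function.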

  4. Information theoretic methods for image processing algorithm optimization

    NASA Astrophysics Data System (ADS)

    Prokushkin, Sergey F.; Galil, Erez

    2015-01-01

    Modern image processing pipelines (e.g., those used in digital cameras) are full of advanced, highly adaptive filters that often have a large number of tunable parameters (sometimes > 100). This makes the calibration procedure for these filters very complex, and optimal results are barely achievable by manual calibration; an automated approach is therefore a must. We discuss an information-theoretic metric for evaluating algorithm adaptivity ("adaptivity criterion"), using noise reduction algorithms as an example. The method allows finding an "orthogonal decomposition" of the filter parameter space into "filter adaptivity" and "filter strength" directions. This metric can be used as a cost function in automatic filter optimization. Since it measures physical "information restoration" rather than perceived image quality, it helps to reduce the set of filter parameters to a smaller subset that is easier for a human operator to tune to achieve better subjective image quality. With appropriate adjustments, the criterion can be used to assess the whole imaging system (sensor plus post-processing).

  5. Measurement methods and accuracy analysis of Chang'E-5 Panoramic Camera installation parameters

    NASA Astrophysics Data System (ADS)

    Yan, Wei; Ren, Xin; Liu, Jianjun; Tan, Xu; Wang, Wenrui; Chen, Wangli; Zhang, Xiaoxia; Li, Chunlai

    2016-04-01

    Chang'E-5 (CE-5) is a lunar probe for the third phase of the China Lunar Exploration Project (CLEP), whose main scientific objectives are to sample the lunar surface and return the samples to Earth. To achieve these goals, investigation of the lunar surface topography and geological structure within the sampling area is extremely important. The Panoramic Camera (PCAM) is one of the payloads mounted on the CE-5 lander. It consists of two optical systems installed on a camera rotating platform. Optical images of the sampling area are obtained by the PCAM as two-dimensional images, and a stereo image pair can be formed from the left and right PCAM images, from which the lunar terrain can be reconstructed by photogrammetry. The installation parameters of the PCAM with respect to the CE-5 lander are critical for calculating the exterior orientation (EO) elements of PCAM images, which are used for lunar terrain reconstruction. In this paper, the types of PCAM installation parameters and the coordinate systems involved are defined. Measurement methods combining camera images and optical coordinate observations are studied, and the observation program and the specific solution methods for the installation parameters are introduced. The accuracy of the parametric solution is analyzed using observations obtained in the PCAM scientific validation experiment, which tests the authenticity of the PCAM detection process, ground data processing methods, product quality and so on. The analysis shows that the accuracy of the installation parameters affects the positional accuracy of corresponding image points of PCAM stereo images by less than 1 pixel, so the measurement methods and parameter accuracy studied in this paper meet the needs of engineering and scientific applications. Keywords: Chang'E-5 Mission; Panoramic Camera; Installation Parameters; Total Station; Coordinate Conversion

  6. Non-Cooperative Target Imaging and Parameter Estimation with Narrowband Radar Echoes.

    PubMed

    Yeh, Chun-mao; Zhou, Wei; Lu, Yao-bing; Yang, Jian

    2016-01-20

    This study focuses on rotating-target imaging and parameter estimation with narrowband radar echoes, which is essential for radar target recognition. First, a two-dimensional (2D) imaging model with narrowband echoes is established, and two images of the target are formed on the velocity-acceleration plane at two neighboring coherent processing intervals (CPIs). Then, the rotating velocity (RV) is estimated by utilizing the relationship between the positions of the scattering centers in the two images. Finally, the target image is rescaled to the range-cross-range plane with the estimated rotational parameter. The validity of the proposed approach is confirmed by numerical simulations.

  7. Systematic Parameterization, Storage, and Representation of Volumetric DICOM Data.

    PubMed

    Fischer, Felix; Selver, M Alper; Gezer, Sinem; Dicle, Oğuz; Hillen, Walter

    Tomographic medical imaging systems produce hundreds to thousands of slices, enabling three-dimensional (3D) analysis. Radiologists process these images through various tools and techniques in order to generate 3D renderings for applications such as surgical planning, medical education, and volumetric measurements. To save and store these visualizations, current systems use snapshots or video exporting, which prevents further optimization and requires the storage of significant additional data. The Grayscale Softcopy Presentation State extension of the Digital Imaging and Communications in Medicine (DICOM) standard resolves this issue for two-dimensional (2D) data by introducing an extensive set of parameters, namely 2D Presentation States (2DPR), that describe how an image should be displayed. 2DPR allows storing these parameters instead of parameter-applied images, which would cause unnecessary duplication of the image data. Since there is currently no corresponding extension for 3D data, this study proposes a DICOM-compliant object called 3D Presentation States (3DPR) for the parameterization and storage of 3D medical volumes. To accomplish this, the 3D medical visualization process is divided into four tasks, namely pre-processing, segmentation, post-processing, and rendering, and the important parameters of each task are determined. Special focus is given to the compression of segmented data, parameterization of the rendering process, and DICOM-compliant implementation of the 3DPR object. The use of 3DPR was tested in a radiology department on three clinical cases, which require multiple segmentations and visualizations during the workflow of radiologists. The results show that 3DPR can effectively simplify the workload of physicians by directly regenerating 3D renderings without repeating intermediate tasks, increase efficiency by preserving all user interactions, and provide efficient storage as well as transfer of visualized data.

  8. a Region-Based Multi-Scale Approach for Object-Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Kavzoglu, T.; Yildiz Erdemir, M.; Tonbul, H.

    2016-06-01

    Within the last two decades, object-based image analysis (OBIA), which considers objects (i.e. groups of pixels) instead of individual pixels, has gained popularity and attracted increasing interest. The most important stage of OBIA is image segmentation, which groups spectrally similar adjacent pixels considering not only spectral features but also spatial and textural features. Although there are several parameters (scale, shape, compactness and band weights) to be set by the analyst, the scale parameter stands out as the most important parameter in the segmentation process. Estimating the optimal scale parameter is crucial to increasing the classification accuracy, and it depends on image resolution, image object size and the characteristics of the study area. In this study, two scale-selection strategies were implemented in the image segmentation process using a pan-sharpened QuickBird-2 image. The first strategy estimates optimal scale parameters for eight sub-regions. For this purpose, the local variance/rate of change (LV-RoC) graphs produced by the ESP-2 tool were analysed to determine fine, moderate and coarse scales for each region. In the second strategy, the image was segmented using the three candidate scale values (fine, moderate, coarse) determined from the LV-RoC graph calculated for the whole image. The nearest-neighbour classifier was applied in all segmentation experiments, and equal numbers of pixels were randomly selected to calculate accuracy metrics (overall accuracy and kappa coefficient). Comparison of region-based and image-based segmentation on the classified images showed that region-based multi-scale OBIA produced significantly more accurate results than image-based single-scale OBIA; the difference in classification accuracy reached 10% in terms of overall accuracy.
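
    A sketch of the local variance / rate of change (LV-RoC) idea behind the ESP-2 tool: local variance is computed at increasing scales and its rate of change is inspected for peaks, which mark candidate scale parameters. The real tool computes LV over the image objects produced by segmentation at each scale; the moving-window standard deviation below is only a stand-in for that object-level statistic.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def local_std(img, size):
            m = uniform_filter(img.astype(float), size)
            m2 = uniform_filter(img.astype(float) ** 2, size)
            return np.sqrt(np.maximum(m2 - m**2, 0.0))

        def lv_roc(img, scales=range(3, 41, 2)):
            lv = [local_std(img, s).mean() for s in scales]
            roc = [100.0 * (lv[i] - lv[i-1]) / lv[i-1] for i in range(1, len(lv))]
            return list(scales), lv, roc   # peaks in roc suggest candidate scales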

  9. Metric Aspects of Digital Images and Digital Image Processing.

    DTIC Science & Technology

    1984-09-01

    ...produced in a reconstructed digital image. Synthesized aerial photographs were formed by processing a combined elevation and orthophoto database. These ... brightness values h11 and h12, and a line equation whose two parameters are calculated, along with the borderline that separates the two intensity regions.

  10. Determination of Hydrodynamic Parameters on Two--Phase Flow Gas - Liquid in Pipes with Different Inclination Angles Using Image Processing Algorithm

    NASA Astrophysics Data System (ADS)

    Montoya, Gustavo; Valecillos, María; Romero, Carlos; Gonzáles, Dosinda

    2009-11-01

    In the present research, a digital image processing-based automated algorithm was developed to determine the phase heights, holdup, and statistical distribution of drop size in a two-phase water-air system, using pipes inclined at 0, 10, and 90 degrees. Digital images were acquired with a high-speed camera (up to 4500 fps), using equipment consisting of three acrylic pipes with diameters of 1.905, 3.175, and 4.445 cm, each arranged in two sections of 8 m in length. Various flow patterns were visualized for different superficial velocities of water and air. Finally, using an image processing program written in Matlab/Simulink, the captured images were processed to determine the parameters mentioned above. The image processing algorithm is based on frequency-domain analysis of the source pictures: the interface between water and air is found as an edge by a Sobel filter, which extracts the high-frequency components of the image. The drop size was obtained from the Feret diameter. Three flow patterns were observed: annular, ST, and ST&MI.
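
    The interface-detection step can be sketched in a few lines: a Sobel filter emphasizes the high-frequency components, and the strongest horizontal edge in each pixel column is taken as the water-air interface. This is illustrative logic, not the authors' full Matlab/Simulink pipeline.

        import numpy as np
        from scipy.ndimage import sobel

        def interface_height(frame):
            """frame: grayscale image with flow roughly horizontal. Returns,
            per column, the row index of the strongest vertical gradient."""
            edges = np.abs(sobel(frame.astype(float), axis=0))  # d/dy: horizontal edges
            return np.argmax(edges, axis=0)

        # phase height per column; for a crude rectangular approximation the
        # holdup is the interface height as a fraction of the image height:
        # holdup = interface_height(frame) / frame.shape[0]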

  11. An invertebrate embryologist's guide to routine processing of confocal images.

    PubMed

    von Dassow, George

    2014-01-01

    It is almost impossible to use a confocal microscope without encountering the need to transform the raw data through image processing. Adherence to a set of straightforward guidelines will help ensure that image manipulations are both credible and repeatable. Meanwhile, attention to optimal data collection parameters will greatly simplify image processing, not only for convenience but for quality and credibility as well. Here I describe how to conduct routine confocal image processing tasks, including creating 3D animations or stereo images, false coloring or merging channels, background suppression, and compressing movie files for display.

  12. Skin texture parameters of the dorsal hand in evaluating skin aging in China.

    PubMed

    Gao, Qian; Hu, Li-Wen; Wang, Yang; Xu, Wen-Ying; Ouyang, Nan-Ning; Dong, Guo-Qing; Shi, Song-Tian; Liu, Yang

    2011-11-01

    There are various non-invasive morphological methods for assessing skin aging, and the use of digital photography makes this easier and more convenient. In this study, we explored several skin texture parameters for evaluating skin aging using digital image processing. Two hundred and twenty-eight subjects living in Sanya, China, were involved. Individual sun exposure history and other factors influencing skin aging were collected by questionnaire, and photos of the subjects' dorsal hands were taken. Skin images were graded according to the Beagley-Gibson system and were also processed using image analysis software. Five skin texture parameters, Angle Num., Angle Max., Angle Diff., Distance and Grids, were produced with reference to the Beagley-Gibson system. All texture parameters were significantly associated with the Beagley-Gibson score. Among the parameters, the distance between primary lines (Distance) and the values of angles formed by intersecting textures (Angle Max., Angle Diff.) were positively associated with the Beagley-Gibson score, whereas the number of grids (Grids) and the number of angles (Angle Num.) were negatively correlated with it. These texture parameters were also correlated with factors influencing skin aging such as sun exposure, age, smoking, drinking and body mass index. In multivariate analysis, Grids and Distance were mainly affected by age, while Angle Max. and Angle Diff. were mainly affected by sun exposure. The skin surface morphological parameters presented in this study thus reflect skin-aging changes to some extent and could be used to describe skin aging using digital image processing. © 2011 John Wiley & Sons A/S.

  13. Efficient generalized cross-validation with applications to parametric image restoration and resolution enhancement.

    PubMed

    Nguyen, N; Milanfar, P; Golub, G

    2001-01-01

    In many image restoration/resolution enhancement applications, the blurring process, i.e., the point spread function (PSF) of the imaging system, is not known or is known only to within a set of parameters. We estimate these PSF parameters for this ill-posed class of inverse problems from raw data, along with the regularization parameters required to stabilize the solution, using the generalized cross-validation (GCV) method. We propose efficient approximation techniques based on the Lanczos algorithm and Gauss quadrature theory, reducing the computational complexity of the GCV. Data-driven PSF and regularization parameter estimation experiments with synthetic and real image sequences are presented to demonstrate the effectiveness and robustness of our method.
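
    For a small, dense problem the GCV score can be evaluated directly from the SVD; the paper's contribution is precisely to avoid this cost with Lanczos and Gauss quadrature approximations. A direct Tikhonov-regularization sketch for reference:

        import numpy as np

        def gcv_curve(A, b, lams):
            """GCV score for each candidate regularization parameter in lams.
            (Assumes b lies in the range of A; otherwise add the norm of the
            component of b outside range(A) to the residual.)"""
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            beta = U.T @ b
            n = len(b)
            scores = []
            for lam in lams:
                f = s**2 / (s**2 + lam**2)            # Tikhonov filter factors
                resid = np.sum(((1 - f) * beta) ** 2)  # residual norm squared
                denom = (n - np.sum(f)) ** 2           # effective-dof term
                scores.append(n * resid / denom)
            return np.array(scores)

        # choose the parameter minimizing the curve:
        # lams = np.logspace(-6, 1, 50)
        # lam_best = lams[np.argmin(gcv_curve(A, b, lams))]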

  14. RayPlus: a Web-Based Platform for Medical Image Processing.

    PubMed

    Yuan, Rong; Luo, Ming; Sun, Zhi; Shi, Shuyue; Xiao, Peng; Xie, Qingguo

    2017-04-01

    Medical images provide valuable information for preclinical research, clinical diagnosis, and treatment. With the widespread use of digital medical imaging, many researchers are developing medical image processing algorithms and systems to deliver better results to the clinical community, whether accurate clinical parameters or images processed from the originals. In this paper, we propose a web-based platform for presenting and processing medical images. Using Internet and novel database technologies, authorized users can easily access medical images and run their processing workflows on powerful server-side computing resources without any installation. We implement a series of image processing and visualization algorithms in the initial version of RayPlus. The integrated system allows much flexibility and convenience for both research and clinical communities.

  15. Development of a fusion approach selection tool

    NASA Astrophysics Data System (ADS)

    Pohl, C.; Zeng, Y.

    2015-06-01

    During the last decades, the number and quality of available remote sensing satellite sensors for Earth observation have grown significantly. The amount of available multi-sensor imagery, along with its increased spatial and spectral resolution, provides new challenges to Earth scientists. With a Fusion Approach Selection Tool (FAST), the remote sensing community would obtain access to an optimized and improved image processing technology. Remote sensing image fusion is a means to produce images containing information that is not inherent in any single image alone. The user meanwhile has access to sophisticated commercial image fusion techniques, plus the option to tune the parameters of each individual technique to match the anticipated application. This leaves the operator with a vast number of options for combining remote sensing images, not to mention the selection of the appropriate images, resolutions and bands. Image fusion can be a machine- and time-consuming endeavour, and it requires knowledge of remote sensing, image fusion, digital image processing and the application. FAST shall provide the user with a quick overview of the processing flows available to reach the target. FAST will ask for the available images, application parameters and desired information, and will process this input to produce a workflow that quickly obtains the best results. It will optimize the data and image fusion techniques and provide an overview of the possible results from which the user can choose the best. FAST will enable even inexperienced users to use advanced processing methods to maximize the benefit of multi-sensor image exploitation.

  16. Optical asymmetric image encryption using gyrator wavelet transform

    NASA Astrophysics Data System (ADS)

    Mehra, Isha; Nishchal, Naveen K.

    2015-11-01

    In this paper, we propose a new optical information processing tool, termed the gyrator wavelet transform, to secure a fully phase image based on an amplitude- and phase-truncation approach. The gyrator wavelet transform has four basic parameters: the gyrator transform order, the type and level of the mother wavelet, and the position of the different frequency bands. These parameters are used as encryption keys in addition to the random phase codes of the optical cryptosystem. The tool has also been applied to simultaneous compression and encryption of an image. The system's performance, its sensitivity to encryption parameters such as the gyrator transform order, and its robustness have also been analyzed. It is expected that this tool will not only update current optical security systems, but may also shed some light on future developments. The computer simulation results demonstrate the abilities of the gyrator wavelet transform as an effective tool for various optical information processing applications, including image encryption and image compression. The tool can also be applied to securing color, multispectral, and three-dimensional images.

  17. Influence of Averaging Preprocessing on Image Analysis with a Markov Random Field Model

    NASA Astrophysics Data System (ADS)

    Sakamoto, Hirotaka; Nakanishi-Ohno, Yoshinori; Okada, Masato

    2018-02-01

    This paper describes our investigation into the influence of averaging preprocessing on the performance of image analysis. Averaging preprocessing involves a trade-off: image averaging is often undertaken to reduce noise, but it decreases the number of images available for analysis. We formulated a process of generating image data by using a Markov random field (MRF) model to achieve image analysis tasks such as image restoration and hyper-parameter estimation by a Bayesian approach. Following the notions of Bayesian inference, posterior distributions were analyzed to evaluate the influence of averaging. There are three main results. First, we found that the performance of image restoration with a predetermined value of the hyper-parameters is invariant regardless of whether averaging is conducted. Second, we found that the performance of hyper-parameter estimation deteriorates due to averaging. Our analysis of the negative logarithm of the posterior probability, which is called the free energy based on an analogy with statistical mechanics, indicated that the confidence of hyper-parameter estimation remains higher without averaging. Finally, we found that when the hyper-parameters are estimated from the data, the performance of image restoration worsens as averaging is undertaken. We conclude that averaging adversely influences the performance of image analysis through hyper-parameter estimation.

  18. Robust image modeling techniques with an image restoration application

    NASA Astrophysics Data System (ADS)

    Kashyap, Rangasami L.; Eom, Kie-Bum

    1988-08-01

    A robust parameter-estimation algorithm for a nonsymmetric half-plane (NSHP) autoregressive model, where the driving noise is a mixture of a Gaussian and an outlier process, is presented. The convergence of the estimation algorithm is proved. An algorithm to estimate parameters and original image intensity simultaneously from the impulse-noise-corrupted image, where the model governing the image is not available, is also presented. The robustness of the parameter estimates is demonstrated by simulation. Finally, an algorithm to restore realistic images is presented. The entire image generally does not obey a simple image model, but a small portion (e.g., 8 x 8) of the image is assumed to obey an NSHP model. The original image is divided into windows and the robust estimation algorithm is applied for each window. The restoration algorithm is tested by comparing it to traditional methods on several different images.

  19. Motion compensated image processing and optimal parameters for egg crack detection using modified pressure

    USDA-ARS?s Scientific Manuscript database

    Shell eggs with microcracks are often undetected during egg grading processes. In the past, a modified pressure imaging system was developed to detect eggs with microcracks without adversely affecting the quality of normal intact eggs. The basic idea of the modified pressure imaging system was to ap...

  20. CLASSIFYING MEDICAL IMAGES USING MORPHOLOGICAL APPEARANCE MANIFOLDS.

    PubMed

    Varol, Erdem; Gaonkar, Bilwaj; Davatzikos, Christos

    2013-12-31

    Input features for medical image classification algorithms are extracted from raw images using a series of preprocessing steps. One common preprocessing step in computational neuroanatomy and functional brain mapping is the nonlinear registration of raw images to a common template space. Typically, the registration methods used are parametric, and their output varies greatly with changes in parameters. Most previously reported results perform registration using a fixed parameter setting and use the results as input to the subsequent classification step. The variation in registration results due to the choice of parameters thus translates into variation in the performance of the classifiers that depend on the registration step for input. Analogous issues have been investigated in the computer vision literature, where image appearance varies with pose and illumination, making classification vulnerable to these confounding parameters. The proposed methodology addresses this issue by sampling image appearances as the registration parameters vary, and shows that better classification accuracies can be obtained this way compared with the conventional approach.

  1. MUSE: MUlti-atlas region Segmentation utilizing Ensembles of registration algorithms and parameters, and locally optimal atlas selection

    PubMed Central

    Ou, Yangming; Resnick, Susan M.; Gur, Ruben C.; Gur, Raquel E.; Satterthwaite, Theodore D.; Furth, Susan; Davatzikos, Christos

    2016-01-01

    Atlas-based automated anatomical labeling is a fundamental tool in medical image segmentation, as it defines regions of interest for subsequent analysis of structural and functional image data. The extensive investigation of multi-atlas warping and fusion techniques over the past 5 or more years has clearly demonstrated the advantages of consensus-based segmentation. However, the common approach is to use multiple atlases with a single registration method and parameter set, which is not necessarily optimal for every individual scan, anatomical region, and problem/data-type. Different registration criteria and parameter sets yield different solutions, each providing complementary information. Herein, we present a consensus labeling framework that generates a broad ensemble of labeled atlases in target image space via the use of several warping algorithms, regularization parameters, and atlases. The label fusion integrates two complementary sources of information: a local similarity ranking to select locally optimal atlases and a boundary modulation term to refine the segmentation consistently with the target image's intensity profile. The ensemble approach consistently outperforms segmentations using individual warping methods alone, achieving high accuracy on several benchmark datasets. The MUSE methodology has been used for processing thousands of scans from various datasets, producing robust and consistent results. MUSE is publicly available both as a downloadable software package, and as an application that can be run on the CBICA Image Processing Portal (https://ipp.cbica.upenn.edu), a web based platform for remote processing of medical images. PMID:26679328
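
    The flavour of locally weighted consensus labeling can be sketched briefly: each warped atlas votes for its label at every voxel, weighted by its local intensity similarity to the target. MUSE's locally optimal atlas ranking and boundary modulation term are not reproduced in this toy version; the window size and weighting are illustrative.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def fuse_labels(target, atlas_images, atlas_labels, win=5, n_labels=2):
            """target: image; atlas_images/atlas_labels: lists of warped
            atlases and their label maps, all on the target grid."""
            t = target.astype(float)
            votes = np.zeros(target.shape + (n_labels,))
            for img, lab in zip(atlas_images, atlas_labels):
                # local mean squared difference -> per-voxel similarity weight
                mse = uniform_filter((t - img.astype(float)) ** 2, win)
                w = 1.0 / (mse + 1e-6)
                for k in range(n_labels):
                    votes[..., k] += w * (lab == k)
            return np.argmax(votes, axis=-1)   # consensus segmentation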

  2. Haralick texture features from apparent diffusion coefficient (ADC) MRI images depend on imaging and pre-processing parameters.

    PubMed

    Brynolfsson, Patrik; Nilsson, David; Torheim, Turid; Asklund, Thomas; Karlsson, Camilla Thellenberg; Trygg, Johan; Nyholm, Tufve; Garpebring, Anders

    2017-06-22

    In recent years, texture analysis of medical images has become increasingly popular in studies investigating diagnosis, classification and treatment response assessment of cancerous disease. Despite numerous applications in oncology and medical imaging in general, there is no consensus regarding texture analysis workflow, or reporting of parameter settings crucial for replication of results. The aim of this study was to assess how sensitive Haralick texture features of apparent diffusion coefficient (ADC) MR images are to changes in five parameters related to image acquisition and pre-processing: noise, resolution, how the ADC map is constructed, the choice of quantization method, and the number of gray levels in the quantized image. We found that noise, resolution, choice of quantization method and the number of gray levels in the quantized images had a significant influence on most texture features, and that the effect size varied between different features. Different methods for constructing the ADC maps did not have an impact on any texture feature. Based on our results, we recommend using images with similar resolutions and noise levels, using one quantization method, and the same number of gray levels in all quantized images, to make meaningful comparisons of texture feature results between different subjects.
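
    The sensitivity to quantization is easy to see from the definition of the gray-level co-occurrence matrix (GLCM) on which every Haralick feature is built; a minimal sketch for the horizontal offset:

        import numpy as np

        def glcm_h(img, levels=16):
            """GLCM for the horizontal (0 deg, distance 1) offset: uniform
            quantization to `levels` bins, symmetrized pair counts."""
            lo, hi = float(img.min()), float(img.max())
            q = np.minimum((img - lo) / (hi - lo + 1e-12) * levels,
                           levels - 1).astype(int)
            a, b = q[:, :-1].ravel(), q[:, 1:].ravel()
            P = np.zeros((levels, levels))
            np.add.at(P, (a, b), 1)
            P = P + P.T                      # count each pair in both orders
            return P / P.sum()

        def haralick_contrast(P):
            i, j = np.indices(P.shape)
            return np.sum(P * (i - j) ** 2)  # one of the Haralick features

        # the same ADC map quantized to 16 vs 64 gray levels yields different
        # feature values, which is exactly the effect studied above:
        # haralick_contrast(glcm_h(adc, 16)) != haralick_contrast(glcm_h(adc, 64))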

  3. Graphical user interface to optimize image contrast parameters used in object segmentation - biomed 2009.

    PubMed

    Anderson, Jeffrey R; Barrett, Steven F

    2009-01-01

    Image segmentation is the process of isolating distinct objects within an image. Computer algorithms have been developed to aid in the process of object segmentation, but a completely autonomous segmentation algorithm has yet to be developed [1], because computers cannot understand images and recognize complex objects within them. However, computer segmentation methods [2] requiring user input have been developed to quickly segment objects in serially sectioned images, such as magnetic resonance images (MRI) and confocal laser scanning microscope (CLSM) images. In these cases, the segmentation process becomes a powerful tool for visualizing the 3D nature of an object, and the user input is an important part of improving the performance of many segmentation methods. A double-threshold segmentation method has been investigated [3] to separate objects in gray-scale images where the gray level of the object lies among the gray levels of the background. To best determine the threshold values for this segmentation method, the image must be manipulated for optimal contrast; the same is true of other segmentation and edge detection methods, and typically, the better the image contrast, the better the segmentation results. This paper describes a graphical user interface (GUI) that allows the user to easily change the image contrast parameters that will optimize the performance of subsequent object segmentation. The approach exploits the fact that the human brain is extremely effective at object recognition and understanding. The GUI lets the user define the gray-scale range of the object of interest; the lower and upper bounds of this range are used in a histogram-stretching process to improve image contrast. The user can also interactively modify the gamma correction factor, which provides a non-linear redistribution of gray-scale values, while observing the corresponding changes to the image. This interactive approach gives the user the power to make optimal choices of the contrast enhancement parameters.
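
    The two contrast controls the GUI exposes reduce to a short computation; a minimal sketch, with parameter names chosen for illustration:

        import numpy as np

        def stretch_gamma(img, lo, hi, gamma=1.0):
            """lo, hi: user-selected bounds of the object's gray-scale range."""
            x = (np.clip(img.astype(float), lo, hi) - lo) / (hi - lo)  # stretch to [0, 1]
            return x ** gamma   # gamma < 1 brightens mid-tones, > 1 darkens them

        # a double-threshold segmentation then operates on the enhanced image:
        # out = stretch_gamma(img, 40, 200, gamma=0.8)
        # mask = (out > t_low) & (out < t_high)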

  4. Classifying Physical Morphology of Cocoa Beans Digital Images using Multiclass Ensemble Least-Squares Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Lawi, Armin; Adhitya, Yudhi

    2018-03-01

    The objective of this research is to determine the quality of cocoa beans through the morphology of their digital images. Samples of cocoa beans were scattered on bright white paper under controlled lighting, and a compact digital camera was used to capture the images. The images were then processed to extract morphological parameters. Classification begins with an analysis of the cocoa bean images based on morphological feature extraction; the extracted morphological (physical) features are area, perimeter, major axis length, minor axis length, aspect ratio, circularity, roundness, and Feret diameter. The cocoa beans are classified into four groups: normal beans, broken beans, fractured beans, and skin-damaged beans. The classification model used in this paper is the Multiclass Ensemble Least-Squares Support Vector Machine (MELS-SVM), a proposed improvement of SVM using an ensemble method in which the separating hyperplanes are obtained by a least-squares approach and the multiclass procedure uses the One-Against-All method. The results show that classification with the morphological feature input parameters reached an accuracy of 99.705% over the four classes.

  5. iMOSFLM: a new graphical interface for diffraction-image processing with MOSFLM

    PubMed Central

    Battye, T. Geoff G.; Kontogiannis, Luke; Johnson, Owen; Powell, Harold R.; Leslie, Andrew G. W.

    2011-01-01

    iMOSFLM is a graphical user interface to the diffraction data-integration program MOSFLM. It is designed to simplify data processing by dividing the process into a series of steps, which are normally carried out sequentially. Each step has its own display pane, allowing control over parameters that influence that step and providing graphical feedback to the user. Suitable values for integration parameters are set automatically, but additional menus provide a detailed level of control for experienced users. The image display and the interfaces to the different tasks (indexing, strategy calculation, cell refinement, integration and history) are described. The most important parameters for each step and the best way of assessing success or failure are discussed. PMID:21460445

  6. Automatic and quantitative measurement of laryngeal video stroboscopic images.

    PubMed

    Kuo, Chung-Feng Jeffrey; Kuo, Joseph; Hsiao, Shang-Wun; Lee, Chi-Lung; Lee, Jih-Chin; Ke, Bo-Han

    2017-01-01

    The laryngeal video stroboscope is an important instrument for physicians to analyze abnormalities and diseases in the glottal area, and it is widely used around the world. However, without quantified indices, physicians can only make subjective judgments on glottal images. We designed a new laser projection marking module and applied it to the laryngeal video stroboscope to provide scale-conversion reference parameters for glottal imaging and to convert the physiological parameters of the glottis. Image processing technology was used to segment the important regions of interest, information about the glottis was quantified, and a vocal fold image segmentation system was completed to assist clinical diagnosis and increase accuracy. Regarding image processing, histogram equalization was used to enhance glottal image contrast, and a center-weighted median filter suppressed image noise while retaining the texture of the glottal image. Statistical threshold determination was used for automatic segmentation of the glottal image. As the glottal image contains saliva and light spots, which count as image noise, this noise was eliminated by morphological erosion, dilation, opening, and closing to highlight the vocal area. We also used image processing to automatically identify the vocal fold region in order to quantify information from the glottal image, such as glottal area, vocal fold perimeter, vocal fold length, glottal width, and vocal fold angle. A quantified glottal image database was created to assist physicians in diagnosing glottal diseases more objectively.
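
    A rough sketch of such a segmentation chain (global statistical threshold plus morphological clean-up) is given below; the thresholding direction, filter sizes and iteration counts are illustrative, not the authors' exact settings.

        import numpy as np
        from scipy.ndimage import binary_opening, binary_closing, median_filter

        def otsu_threshold(img, nbins=256):
            """Threshold maximizing the between-class variance (Otsu)."""
            hist, edges = np.histogram(img.ravel(), bins=nbins)
            p = hist / hist.sum()
            centers = (edges[:-1] + edges[1:]) / 2
            w0 = np.cumsum(p); w1 = 1 - w0
            m0 = np.cumsum(p * centers) / np.maximum(w0, 1e-12)
            mT = np.sum(p * centers)
            m1 = (mT - np.cumsum(p * centers)) / np.maximum(w1, 1e-12)
            between = w0 * w1 * (m0 - m1) ** 2
            return centers[np.argmax(between)]

        def segment_glottis(frame):
            f = median_filter(frame.astype(float), 3)   # noise suppression
            mask = f < otsu_threshold(f)                # glottal gap is dark
            mask = binary_opening(mask, iterations=2)   # drop small speckle
            return binary_closing(mask, iterations=2)   # fill pinholes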

  7. Automated egg grading system using computer vision: Investigation on weight measure versus shape parameters

    NASA Astrophysics Data System (ADS)

    Nasir, Ahmad Fakhri Ab; Suhaila Sabarudin, Siti; Majeed, Anwar P. P. Abdul; Ghani, Ahmad Shahrizan Abdul

    2018-04-01

    Chicken eggs are a food in high demand. Human operators cannot work perfectly and continuously when grading eggs. Instead of grading eggs by weight, an automatic egg grading system using computer vision (based on egg shape parameters) can be used to improve productivity. However, an early hypothesis indicated that the class assignments of some eggs would change when using shape parameters rather than weight. This paper presents a comparison of egg classification by the two methods. First, 120 images of chicken eggs of various grades (A–D) produced in Malaysia were captured. The egg images were then processed using image pre-processing techniques such as cropping, smoothing and segmentation, and eight egg shape features, including area, major axis length, minor axis length, volume, diameter and perimeter, were extracted. Lastly, feature selection (information gain ratio) and feature extraction (principal component analysis) were performed, with a k-nearest-neighbour classifier used in the classification process. Two approaches, namely supervised learning (labels from the weight measure as graded by the egg supplier) and unsupervised learning (labels from egg shape parameters as graded by ourselves), were used in the experiments. The clustering results reveal many changes in egg classes after shape-based grading. On average, the best recognition result using shape-based grading labels is 94.16%, while that using weight-based labels is 44.17%. In conclusion, an automated egg grading system using computer vision is better served by shape-based features, since it operates on images, whereas the weight parameter is more suitable for a weight-based grading system.

  8. SAR target recognition using behaviour library of different shapes in different incidence angles and polarisations

    NASA Astrophysics Data System (ADS)

    Fallahpour, Mojtaba Behzad; Dehghani, Hamid; Jabbar Rashidi, Ali; Sheikhi, Abbas

    2018-05-01

    Target recognition is one of the most important issues in the interpretation of synthetic aperture radar (SAR) images. Modelling, analysing, and recognising the effects of influential parameters in SAR can provide a better understanding of SAR imaging systems and therefore facilitates the interpretation of the produced images. Influential parameters in SAR images can be divided into five general categories (radar, radar platform, channel, imaging region, and processing section), each of which has different physical, structural, hardware, and software sub-parameters with clear roles in the final images. In this paper, for the first time, a behaviour library is extracted that includes the effects of polarisation, incidence angle, and target shape (radar and imaging-region sub-parameters) on SAR images. This library shows that the pattern created by each of the cylindrical, conical, and cubic shapes is unique, and owing to these unique properties such shapes can be recognised in SAR images. This capability is applied to data acquired with the Canadian RADARSAT-1 satellite.

  9. Analysis on unevenness of skin color using the melanin and hemoglobin components separated by independent component analysis of skin color image

    NASA Astrophysics Data System (ADS)

    Ojima, Nobutoshi; Fujiwara, Izumi; Inoue, Yayoi; Tsumura, Norimichi; Nakaguchi, Toshiya; Iwata, Kayoko

    2011-03-01

    Uneven distribution of skin color is one of the biggest concerns about facial skin appearance. Several techniques have recently been introduced that analyze skin color by separating it into chromophore components such as melanin and hemoglobin. However, there are few reports on quantitative analysis of skin-color unevenness that consider the type of chromophore, clusters of different sizes, and the concentration of each chromophore. We propose a new image analysis and simulation method based on chromophore analysis and spatial frequency analysis. The method comprises three main techniques: independent component analysis (ICA) to extract hemoglobin and melanin chromophores from a single skin color image; an image pyramid technique that decomposes each chromophore into multi-resolution images, which can be used to identify different cluster sizes or spatial frequencies; and analysis of the histogram obtained from each multi-resolution image to extract unevenness parameters. As an application of the method, we also introduce an image processing technique to change the unevenness of the melanin component. The method showed a high capability to analyze the unevenness of each skin chromophore: 1) vague unevenness of the skin could be discriminated from noticeable pigmentation such as freckles or acne; 2) by analyzing the unevenness parameters obtained from each multi-resolution image for Japanese women, age-related changes were observed in the parameters at middle spatial frequencies; and 3) an image processing system modulating the parameters was proposed to change the unevenness of skin images along the axis of the observed age-related change in real time.

  10. Quantification of chromatin condensation level by image processing.

    PubMed

    Irianto, Jerome; Lee, David A; Knight, Martin M

    2014-03-01

    The level of chromatin condensation is related to the silencing/activation of chromosomal territories and therefore impacts gene expression. Chromatin condensation changes during cell cycle progression and differentiation, and is influenced by various physicochemical and epigenetic factors. This study describes a validated experimental technique to quantify chromatin condensation. A novel image processing procedure is developed using Sobel edge detection to quantify the level of chromatin condensation from nuclear images taken by confocal microscopy. The algorithm was developed in MATLAB and used to quantify different levels of chromatin condensation in chondrocyte nuclei achieved through alteration of osmotic pressure. The resulting chromatin condensation parameter (CCP) is in good agreement with independent multi-observer qualitative visual assessment. This image processing technique thereby provides a validated, unbiased parameter for rapid and highly reproducible quantification of the level of chromatin condensation. Copyright © 2013 IPEM. Published by Elsevier Ltd. All rights reserved.
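
    A plausible reading of an edge-based condensation measure is sketched below: the Sobel gradient magnitude is thresholded inside the nucleus mask and the edge density serves as the condensation parameter. This follows the spirit of the method rather than the exact published formula; the threshold value is illustrative.

        import numpy as np
        from scipy.ndimage import sobel

        def ccp(nucleus, mask, edge_thresh=0.1):
            """nucleus: grayscale confocal image; mask: boolean nucleus mask."""
            f = nucleus.astype(float)
            g = np.hypot(sobel(f, axis=0), sobel(f, axis=1))  # gradient magnitude
            g = g / (g.max() + 1e-12)
            edges = (g > edge_thresh) & mask
            return edges.sum() / mask.sum()   # edge pixels per nucleus pixel

    More condensed chromatin produces more internal boundaries, so the edge density rises with condensation, which is why an edge-based parameter tracks the visual grading.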

  11. Medical image registration based on normalized multidimensional mutual information

    NASA Astrophysics Data System (ADS)

    Li, Qi; Ji, Hongbing; Tong, Ming

    2009-10-01

    Registration of medical images is an essential research topic in medical image processing and applications, and in particular a preliminary and key step for multimodality image fusion. This paper offers a solution to medical image registration based on normalized multi-dimensional mutual information. Firstly, an affine transformation with translational and rotational parameters is applied to the floating image. Then, ordinal features are extracted by ordinal filters with different orientations to represent spatial information in the medical images. Integrating the ordinal features with pixel intensities, normalized multi-dimensional mutual information is defined as the similarity criterion for registering multimodality images. Finally, an immune algorithm is used to search for the registration parameters. The experimental results demonstrate the effectiveness of the proposed registration scheme.
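
    The core of the similarity criterion can be illustrated with the two-image case: mutual information estimated from a joint gray-level histogram and normalized in Studholme's sense. The paper extends the joint distribution with ordinal-filter responses, which makes the histogram multi-dimensional; that extension is omitted here.

        import numpy as np

        def normalized_mi(a, b, bins=64):
            """Normalized mutual information between two images a and b."""
            joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
            p = joint / joint.sum()
            px, py = p.sum(axis=1), p.sum(axis=0)
            hx = -np.sum(px[px > 0] * np.log(px[px > 0]))   # marginal entropies
            hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
            hxy = -np.sum(p[p > 0] * np.log(p[p > 0]))      # joint entropy
            return (hx + hy) / hxy   # maximized when the images are aligned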

  12. Analyses of requirements for computer control and data processing experiment subsystems. Volume 2: ATM experiment S-056 image data processing system software development

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The IDAPS (Image Data Processing System) is a user-oriented, computer-based language and control system which provides a framework or standard for implementing image data processing applications, simplifies the set-up of image processing runs so that the system may be used without a working knowledge of computer programming or operation, streamlines operation of the image processing facility, and allows multiple applications to be run in sequence without operator interaction. The control system loads the operators, interprets the input, constructs the necessary parameters for each application, and calls the application. The overlay feature of the IBSYS loader (IBLDR) provides the means of running multiple operators which would otherwise overflow core storage.

  13. A Robust Post-Processing Workflow for Datasets with Motion Artifacts in Diffusion Kurtosis Imaging

    PubMed Central

    Li, Xianjun; Yang, Jian; Gao, Jie; Luo, Xue; Zhou, Zhenyu; Hu, Yajie; Wu, Ed X.; Wan, Mingxi

    2014-01-01

    Purpose: The aim of this study was to develop a robust post-processing workflow for motion-corrupted datasets in diffusion kurtosis imaging (DKI). Materials and methods: The proposed workflow consisted of brain extraction, rigid registration, distortion correction, artifact rejection, spatial smoothing and tensor estimation. Rigid registration was utilized to correct misalignments. Motion artifacts were rejected by using the local Pearson correlation coefficient (LPCC). The performance of LPCC in characterizing relative differences between artifacts and artifact-free images was compared with that of the conventional correlation coefficient in 10 randomly selected DKI datasets. The influence of rejecting artifacts, which carry information about gradient directions and b values, on parameter estimation was investigated using the mean square error (MSE), with the variance of the noise as the criterion for the MSEs. The clinical practicality of the proposed workflow was evaluated by the image quality and by measurements in regions of interest on 36 DKI datasets, including 18 artifact-free datasets (18 pediatric subjects) and 18 motion-corrupted datasets (15 pediatric subjects and 3 essential tremor patients). Results: The relative difference between artifacts and artifact-free images calculated by LPCC was larger than that of the conventional correlation coefficient (p<0.05), indicating that LPCC is more sensitive in detecting motion artifacts. The MSEs of all derived parameters from the data retained after artifact rejection were smaller than the variance of the noise, suggesting that the influence of the rejected artifacts on the precision of the derived parameters is less than that of noise. The proposed workflow significantly improved the image quality and reduced the measurement biases on motion-corrupted datasets (p<0.05). Conclusion: The proposed post-processing workflow reliably improves the image quality and the measurement precision of the derived parameters on motion-corrupted DKI datasets, providing an effective post-processing method for clinical applications of DKI in subjects with involuntary movements. PMID:24727862

  14. A robust post-processing workflow for datasets with motion artifacts in diffusion kurtosis imaging.

    PubMed

    Li, Xianjun; Yang, Jian; Gao, Jie; Luo, Xue; Zhou, Zhenyu; Hu, Yajie; Wu, Ed X; Wan, Mingxi

    2014-01-01

    The aim of this study was to develop a robust post-processing workflow for motion-corrupted datasets in diffusion kurtosis imaging (DKI). The proposed workflow consisted of brain extraction, rigid registration, distortion correction, artifact rejection, spatial smoothing and tensor estimation. Rigid registration was utilized to correct misalignments. Motion artifacts were rejected by using the local Pearson correlation coefficient (LPCC). The performance of LPCC in characterizing relative differences between artifacts and artifact-free images was compared with that of the conventional correlation coefficient in 10 randomly selected DKI datasets. The influence of rejecting artifacts, which carry information about gradient directions and b values, on parameter estimation was investigated using the mean square error (MSE), with the variance of the noise as the criterion for the MSEs. The clinical practicality of the proposed workflow was evaluated by the image quality and by measurements in regions of interest on 36 DKI datasets, including 18 artifact-free datasets (18 pediatric subjects) and 18 motion-corrupted datasets (15 pediatric subjects and 3 essential tremor patients). The relative difference between artifacts and artifact-free images calculated by LPCC was larger than that of the conventional correlation coefficient (p<0.05), indicating that LPCC is more sensitive in detecting motion artifacts. The MSEs of all derived parameters from the data retained after artifact rejection were smaller than the variance of the noise, suggesting that the influence of the rejected artifacts on the precision of the derived parameters is less than that of noise. The proposed workflow significantly improved the image quality and reduced the measurement biases on motion-corrupted datasets (p<0.05). The proposed post-processing workflow is reliable for improving the image quality and the measurement precision of the derived parameters on motion-corrupted DKI datasets, and provides an effective post-processing method for clinical applications of DKI in subjects with involuntary movements.
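
    The LPCC criterion used for artifact rejection can be sketched with moving-window statistics; the window size and rejection cutoff below are illustrative.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def lpcc(a, b, win=9):
            """Local Pearson correlation between images a and b in a moving
            window; values near 1 indicate locally consistent structure."""
            a, b = a.astype(float), b.astype(float)
            ma, mb = uniform_filter(a, win), uniform_filter(b, win)
            cov = uniform_filter(a * b, win) - ma * mb
            va = uniform_filter(a * a, win) - ma**2
            vb = uniform_filter(b * b, win) - mb**2
            return cov / np.sqrt(np.maximum(va * vb, 1e-12))

        # e.g. reject a diffusion-weighted volume whose mean LPCC against a
        # reference drops below a cutoff (0.6 here is arbitrary):
        # keep = lpcc(vol, reference).mean() > 0.6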

  15. MicroCT parameters for multimaterial elements assessment

    NASA Astrophysics Data System (ADS)

    de Araújo, Olga M. O.; Silva Bastos, Jaqueline; Machado, Alessandra S.; dos Santos, Thaís M. P.; Ferreira, Cintia G.; Rosifini Alves Claro, Ana Paula; Lopes, Ricardo T.

    2018-03-01

    Microtomography is a non-destructive testing technique for quantitative and qualitative analysis. The investigation of multimaterial elements with large differences in density can produce artifacts that degrade image quality, depending on the combination of additional filters used. The aim of this study is to select the most appropriate parameters for analyzing bone tissue containing a metallic implant. The results show MCNPX-code simulations of the energy distribution without an additional filter and with aluminum, copper, and brass filters, together with the respective reconstructed images, demonstrating the importance of the choice of these parameters in the image acquisition process in computed microtomography.

  16. Special Software for Planetary Image Processing and Research

    NASA Astrophysics Data System (ADS)

    Zubarev, A. E.; Nadezhdina, I. E.; Kozlova, N. A.; Brusnikin, E. S.; Karachevtseva, I. P.

    2016-06-01

    Special modules for photogrammetric processing of remote sensing data were developed that provide the opportunity to effectively organize and optimize planetary studies, with the commercial software package PHOTOMOD™ used as the base application. The special modules perform various types of data processing: calculation of preliminary navigation parameters, calculation of the shape parameters of a celestial body, global-view image orthorectification, and estimation of Sun illumination and Earth visibility from the planetary surface. For the photogrammetric processing, different types of data have been used, including images of the Moon, Mars, Mercury, Phobos, the Galilean satellites and Enceladus obtained by frame or push-broom cameras. We used modern planetary data and images taken over the years from orbit under various illumination conditions and at various resolutions, as well as images obtained by planetary rovers from the surface. Planetary image processing is a complex task that can take from a few months to years. We present an efficient pipeline procedure that makes it possible to obtain different data products and covers the long path from planetary images to celestial-body maps. The obtained data (new three-dimensional control point networks, elevation models, and orthomosaics) supported accurate map production: a new Phobos atlas (Karachevtseva et al., 2015) and various thematic maps derived from studies of the planetary surface (Karachevtseva et al., 2016a).

  17. Benchmarking image fusion system design parameters

    NASA Astrophysics Data System (ADS)

    Howell, Christopher L.

    2013-06-01

    A clear and absolute method for discriminating between image fusion algorithm performances is presented. This method can effectively be used to assist in the design and modeling of image fusion systems. Specifically, it is postulated that human task performance using image fusion should be benchmarked against whether the fusion algorithm, at a minimum, retains the performance benefit achievable by each independent spectral band being fused. The established benchmark then clearly represents the threshold that a fusion system should surpass to be considered beneficial to a particular task. A genetic algorithm is employed to characterize the fused system parameters, using a Matlab® implementation of NVThermIP as the objective function. By setting the problem up as a mixed-integer constraint optimization problem, one can effectively look backwards through the image acquisition process, optimizing the fused system parameters by minimizing the difference between the modeled task difficulty measure and the benchmark task difficulty measure. The results of an identification perception experiment, in which human observers were asked to identify a standard set of military targets, are presented and used to demonstrate the effectiveness of the benchmarking process.

  18. Fast segmentation and high-quality three-dimensional volume mesh creation from medical images for diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Jermyn, Michael; Ghadyani, Hamid; Mastanduno, Michael A.; Turner, Wes; Davis, Scott C.; Dehghani, Hamid; Pogue, Brian W.

    2013-08-01

    Multimodal approaches that combine near-infrared (NIR) and conventional imaging modalities have been shown to improve optical parameter estimation dramatically and thus represent a prevailing trend in NIR imaging. These approaches typically involve applying anatomical templates from magnetic resonance imaging/computed tomography/ultrasound images to guide the recovery of optical parameters. However, merging these data sets with current technology requires multiple software packages, substantial expertise and a significant time commitment, and often results in unacceptably poor mesh quality for optical image reconstruction, a reality that represents a significant roadblock for translational research in multimodal NIR imaging. This work addresses these challenges directly by introducing automated Digital Imaging and Communications in Medicine (DICOM) image stack segmentation and a new one-click three-dimensional mesh generator optimized for multimodal NIR imaging, and by combining these capabilities into a single software package (available for free download) with a streamlined workflow. Image processing time and mesh quality benchmarks were examined for four common multimodal NIR use cases (breast, brain, pancreas, and small animal) and compared with a commercial image processing package. Applying these tools resulted in a fivefold decrease in image processing time and a 62% improvement in minimum mesh quality, in the absence of extra mesh post-processing. These capabilities represent a significant step toward enabling translational multimodal NIR research for both expert and non-expert users in an open-source platform.

  19. Investigating Musical Disorders with Diffusion Tensor Imaging: a Comparison of Imaging Parameters

    PubMed Central

    Loui, Psyche; Schlaug, Gottfried

    2009-01-01

    The Arcuate Fasciculus (AF) is a bundle of white matter traditionally thought to be responsible for language function. However, its role in music is not known. Here we investigate the connectivity of the AF using Diffusion Tensor Imaging (DTI) and show that musically tone-deaf individuals, who show impairments in pitch discrimination, have reduced connectivity in the AF relative to musically normal-functioning control subjects. Results were robust to variations in imaging parameters and emphasize the importance of brain connectivity in para-linguistic processes such as music. PMID:19673766

  20. Flash trajectory imaging of target 3D motion

    NASA Astrophysics Data System (ADS)

    Wang, Xinwei; Zhou, Yan; Fan, Songtao; He, Jun; Liu, Yuliang

    2011-03-01

    We present a flash trajectory imaging technique that directly captures target trajectories and enables non-contact measurement of motion parameters through range-gated imaging and time delay integration. Range-gated imaging gives the range of targets and enables silhouette detection, which directly extracts targets from a complex background and reduces the complexity of moving-target image processing. Time delay integration increases the information in a single frame so that the motion trajectory can be obtained directly. In this paper, we study the algorithm behind flash trajectory imaging and report initial experiments that successfully captured the trajectory of a falling badminton shuttlecock. Our research demonstrates that flash trajectory imaging is an effective approach to imaging target trajectories and can provide the motion parameters of moving targets.
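
    The time-delay-integration step can be illustrated with a minimal sketch: once range gating has produced clean per-frame silhouettes, integrating (here, max-projecting) the stack collapses the motion into a single trajectory image. The constant-velocity toy target below is an assumption for demonstration only.

    ```python
    import numpy as np

    def trajectory_image(gated_frames):
        """Collapse a stack of range-gated silhouette frames into one
        trajectory image (time delay integration as a max-projection)."""
        return np.asarray(gated_frames, dtype=float).max(axis=0)

    # Toy demo: a gated silhouette falling through a 64x64 scene over 20 frames.
    frames = []
    for t in range(20):
        img = np.zeros((64, 64))
        row = 3 * t + 2                       # assumed constant-velocity fall
        img[max(row - 2, 0):row + 2, 30:34] = 1.0
        frames.append(img)

    traj = trajectory_image(frames)
    print(traj.shape, int(traj.sum()))        # a single streak traces the motion
    ```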

  1. Process parameters in the manufacture of ceramic ZnO nanofibers made by electrospinning

    NASA Astrophysics Data System (ADS)

    Nonato, Renato C.; Morales, Ana R.; Rocha, Mateus C.; Nista, Silvia V. G.; Mei, Lucia H. I.; Bonse, Baltus C.

    2017-01-01

    Zinc oxide (ZnO) nanofibers were prepared by electrospinning under different conditions using a solution of poly(vinyl alcohol) and zinc acetate as precursor. A 2³ factorial design was used to study the influence of the electrospinning process parameters (collector distance, flow rate and voltage), and a 2² factorial design was used to study the influence of the calcination process (time and temperature). SEM images were acquired to analyze the fiber morphology before and after the calcination process and to measure the nanofiber diameters. X-ray diffraction was used to verify the complete conversion of the precursor to ZnO and the elimination of the polymeric carrier.

  2. The potential of multiparametric MRI of the breast

    PubMed Central

    Pinker, Katja; Helbich, Thomas H

    2017-01-01

    MRI is an essential tool in breast imaging, with multiple established indications. Dynamic contrast-enhanced MRI (DCE-MRI) is the backbone of any breast MRI protocol and has an excellent sensitivity and good specificity for breast cancer diagnosis. DCE-MRI provides high-resolution morphological information, as well as some functional information about neoangiogenesis as a tumour-specific feature. To overcome limitations in specificity, several other functional MRI parameters have been investigated and the application of these combined parameters is defined as multiparametric MRI (mpMRI) of the breast. MpMRI of the breast can be performed at different field strengths (1.5–7 T) and includes both established (diffusion-weighted imaging, MR spectroscopic imaging) and novel MRI parameters (sodium imaging, chemical exchange saturation transfer imaging, blood oxygen level-dependent MRI), as well as hybrid imaging with positron emission tomography (PET)/MRI and different radiotracers. Available data suggest that multiparametric imaging using different functional MRI and PET parameters can provide detailed information about the underlying oncogenic processes of cancer development and progression and can provide additional specificity. This article will review the current and emerging functional parameters for mpMRI of the breast for improved diagnostic accuracy in breast cancer. PMID:27805423

  3. The impact of temporal sampling resolution on parameter inference for biological transport models.

    PubMed

    Harrison, Jonathan U; Baker, Ruth E

    2018-06-25

    Imaging data has become an essential tool to explore key biological questions at various scales, for example the motile behaviour of bacteria or the transport of mRNA, and it has the potential to transform our understanding of important transport mechanisms. Often these imaging studies require us to compare biological species or mutants, and to do this we need to quantitatively characterise their behaviour. Mathematical models offer a quantitative description of a system that enables us to perform this comparison, but to relate mechanistic mathematical models to imaging data, we need to estimate their parameters. In this work we study how collecting data at different temporal resolutions impacts our ability to infer parameters of biological transport models; performing exact inference for simple velocity jump process models in a Bayesian framework. The question of how best to choose the frequency with which data is collected is prominent in a host of studies because the majority of imaging technologies place constraints on the frequency with which images can be taken, and the discrete nature of observations can introduce errors into parameter estimates. In this work, we mitigate such errors by formulating the velocity jump process model within a hidden states framework. This allows us to obtain estimates of the reorientation rate and noise amplitude for noisy observations of a simple velocity jump process. We demonstrate the sensitivity of these estimates to temporal variations in the sampling resolution and extent of measurement noise. We use our methodology to provide experimental guidelines for researchers aiming to characterise motile behaviour that can be described by a velocity jump process. In particular, we consider how experimental constraints resulting in a trade-off between temporal sampling resolution and observation noise may affect parameter estimates. Finally, we demonstrate the robustness of our methodology to model misspecification, and then apply our inference framework to a dataset that was generated with the aim of understanding the localization of RNA-protein complexes.
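
    The sampling-resolution effect discussed above can be sketched for the simplest case: a 1D two-state velocity jump (telegraph) process observed noise-free at interval dt. A naive switch-count estimator misses reorientations that occur in pairs between observations; inverting the probability of an odd number of events over an interval recovers the rate. This is a minimal illustration, not the paper's Bayesian hidden-state inference.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    lam, T = 1.0, 2000.0          # true reorientation rate, total observation time

    # Simulate exact reorientation times of a 1D two-state velocity jump process.
    times = [0.0]
    while times[-1] < T:
        times.append(times[-1] + rng.exponential(1.0 / lam))
    times = np.array(times)

    def sign_at(t):
        # velocity sign flips at each event: parity of the event count sets it
        return 1 - 2 * (np.searchsorted(times, t, side="right") % 2)

    for dt in (0.01, 0.1, 0.5, 1.0):
        s = sign_at(np.arange(0.0, T, dt))
        p = np.mean(s[1:] != s[:-1])                     # observed flips per interval
        naive = p / dt                                   # misses paired switches
        corrected = -np.log(1.0 - 2.0 * p) / (2.0 * dt)  # inverts P(odd #events)
        print(f"dt={dt:4.2f}  naive={naive:.3f}  corrected={corrected:.3f}")
    ```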

  4. Estimation of forest biomass using remote sensing

    NASA Astrophysics Data System (ADS)

    Sarker, Md. Latifur Rahman

    Forest biomass estimation is essential for greenhouse gas inventories, terrestrial carbon accounting and climate change modelling studies. The availability of new SAR (C-band RADARSAT-2 and L-band PALSAR) and optical sensors (SPOT-5 and AVNIR-2) has opened new possibilities for biomass estimation because these new SAR sensors can provide data with varying polarizations, incidence angles and fine spatial resolutions. Therefore, this study investigated the potential of two SAR sensors (RADARSAT-2 with C-band and PALSAR with L-band) and two optical sensors (SPOT-5 and AVNIR-2) for the estimation of biomass in Hong Kong. Three common major processing steps were used for data processing, namely (i) spectral reflectance/intensity, (ii) texture measurements and (iii) polarization or band ratios of texture parameters. Simple linear and stepwise multiple regression models were developed to establish a relationship between the image parameters and the biomass of field plots. The results demonstrate the ineffectiveness of raw data. However, significant improvements in performance (r2) (RADARSAT-2=0.78; PALSAR=0.679; AVNIR-2=0.786; SPOT-5=0.854; AVNIR-2 + SPOT-5=0.911) were achieved using texture parameters of all sensors. The performances were further improved and very promising performances (r2) were obtained using the ratio of texture parameters (RADARSAT-2=0.91; PALSAR=0.823; PALSAR two-date=0.921; AVNIR-2=0.899; SPOT-5=0.916; AVNIR-2 + SPOT-5=0.939). These performances suggest four main contributions arising from this research, namely (i) biomass estimation can be significantly improved by using texture parameters, (ii) further improvements can be obtained using the ratio of texture parameters, (iii) multisensor texture parameters and their ratios have more potential than texture from a single sensor, and (iv) biomass can be accurately estimated far beyond the previously perceived saturation levels of SAR and optical data using texture parameters or the ratios of texture parameters. A further important contribution came from the fusion of SAR and optical images, which produced accuracies (r2) of 0.706 and 0.77 from the simple fusion and from texture processing of the fused image, respectively. Although these performances were not as attractive as those obtained from the other four processing steps, the wavelet fusion procedure improved the saturation level of the optical (AVNIR-2) image very significantly after fusion with the SAR image. Keywords: biomass, climate change, SAR, optical, multisensors, RADARSAT-2, PALSAR, AVNIR-2, SPOT-5, texture measurement, ratio of texture parameters, wavelets, fusion, saturation
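
    A minimal sketch of the texture-and-ratio processing chain (not the author's exact texture set) is given below: a moving-window standard deviation serves as the texture measure, a band-to-band ratio of textures is formed, and plot-level features are regressed against biomass. All data here are synthetic.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_std(band, size=7):
        """Moving-window standard deviation: a simple texture measure."""
        m = uniform_filter(band.astype(float), size)
        m2 = uniform_filter(band.astype(float) ** 2, size)
        return np.sqrt(np.maximum(m2 - m ** 2, 0.0))

    rng = np.random.default_rng(1)
    bands = [rng.gamma(2.0, 10.0, (200, 200)) for _ in range(2)]   # synthetic bands
    rows = rng.integers(10, 190, 30)                               # 30 field plots
    cols = rng.integers(10, 190, 30)
    y = rng.normal(100, 20, 30)                                    # stand-in biomass

    t0, t1 = local_std(bands[0]), local_std(bands[1])
    ratio = t0 / (t1 + 1e-9)                        # ratio of texture parameters
    X = np.stack([t0[rows, cols], t1[rows, cols], ratio[rows, cols]], axis=1)

    # Ordinary least squares and the resulting r2 of the fit.
    A = np.column_stack([X, np.ones(len(y))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
    print("r2 =", round(r2, 3))
    ```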

  5. Three-Dimensional Imaging of the Mouse Organ of Corti Cytoarchitecture for Mechanical Modeling

    NASA Astrophysics Data System (ADS)

    Puria, Sunil; Hartman, Byron; Kim, Jichul; Oghalai, John S.; Ricci, Anthony J.; Liberman, M. Charles

    2011-11-01

    Cochlear models typically use continuous anatomical descriptions and homogenized parameters based on two-dimensional images for describing the organ of Corti. To produce refined models based more closely on the actual cochlear cytoarchitecture, three-dimensional morphometric parameters of key mechanical structures are required. Towards this goal, we developed and compared three different imaging methods: (1) a fixed cochlear whole-mount preparation using the fluorescent dye CellMask®, which is taken up by cell membranes and clearly delineates Deiters' cells, outer hair cells, and the phalangeal process, imaged using confocal microscopy; (2) an in situ fixed preparation with hair cells labeled using anti-prestin and supporting structures labeled using phalloidin, imaged using two-photon microscopy; and (3) a membrane-tomato (mT) mouse with fluorescent proteins expressed in all cell membranes, which enables two-photon imaging of an in situ live preparation with excellent visualization of the organ of Corti. Morphometric parameters, including lengths, diameters, and angles, were extracted from 3D cellular surface reconstructions of the resulting images. Preliminary results indicate that the length of the phalangeal processes decreases from the first (innermost) to the third (outermost) row of outer hair cells, and that their length also likely varies from base to apex and across species.

  6. Intelligent tuning method of PID parameters based on iterative learning control for atomic force microscopy.

    PubMed

    Liu, Hui; Li, Yingzi; Zhang, Yingxu; Chen, Yifu; Song, Zihang; Wang, Zhenyu; Zhang, Suoxin; Qian, Jianqiang

    2018-01-01

    Proportional-integral-derivative (PID) parameters play a vital role in the imaging process of an atomic force microscope (AFM). Traditional parameter tuning methods require considerable manpower, and it is difficult to set PID parameters in unattended working environments. In this manuscript, an intelligent tuning method of PID parameters based on iterative learning control is proposed to self-adjust the PID parameters of the AFM according to the sample topography. Before normal scanning, the method learns the topography by repeated line scanning until convergence, gathering enough information about the PID controller's output signals and the tracking error to calculate proper PID parameters. The appropriate PID parameters are then obtained by fitting and applied to the normal scanning process. The feasibility of the method is demonstrated by a convergence analysis. Simulations and experimental results indicate that the proposed method can intelligently tune the PID parameters of the AFM for imaging different topographies and thus achieve good tracking performance. Copyright © 2017 Elsevier Ltd. All rights reserved.
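
    The learn-then-fit idea can be sketched as follows, under an assumed first-order scanner model: repeated line scans refine a feedforward control signal by iterative learning, the error and control histories are logged, and PID gains are then obtained by least-squares fitting of the control signal to the error, its integral, and its derivative. This is a toy illustration, not the authors' implementation.

    ```python
    import numpy as np

    # Assumed discrete first-order scanner model: z[t+1] = a*z[t] + b*u[t]
    a, b, dt = 0.8, 0.5, 1e-3
    h = np.sin(np.linspace(0, 4 * np.pi, 400))        # one line of topography

    u = np.zeros_like(h)
    errors, controls = [], []
    for _ in range(40):                               # repeated line scans
        z, e = 0.0, np.zeros_like(h)
        for t in range(len(h) - 1):
            z = a * z + b * u[t]
            e[t + 1] = h[t + 1] - z                   # tracking error
        errors.append(e.copy())
        controls.append(u.copy())
        u = u + 0.6 * np.r_[e[1:], 0.0]               # ILC update: u_k+1 = u_k + L*e_k

    # Fit PID gains that reproduce the learned control from the error history.
    feats = []
    for e in errors:
        feats.append(np.column_stack([e, np.cumsum(e) * dt, np.gradient(e, dt)]))
    A = np.vstack(feats)
    (kp, ki, kd), *_ = np.linalg.lstsq(A, np.concatenate(controls), rcond=None)
    print(f"fitted gains: Kp={kp:.3f}  Ki={ki:.3g}  Kd={kd:.3g}")
    ```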

  7. Image processing analysis on the air-water slug two-phase flow in a horizontal pipe

    NASA Astrophysics Data System (ADS)

    Dinaryanto, Okto; Widyatama, Arif; Majid, Akmal Irfan; Deendarlianto, Indarto

    2016-06-01

    Slug flow is an intermittent flow regime that is avoided in industrial applications because of its irregularity and high pressure fluctuations. These characteristics cause problems such as internal corrosion and damage to pipeline constructions. To understand slug characteristics, several measurement techniques can be applied, such as wire-mesh sensors, CECM, and high-speed cameras. The present study aimed to determine slug characteristics using image processing techniques. The experiment was carried out in a 26 mm i.d., 9 m long acrylic horizontal pipe. The air-water flow was recorded 5 m downstream of the air-water mixer using a high-speed video camera. Each image sequence was processed using MATLAB. The algorithm comprises several steps, including image complementing, background subtraction, and image filtering, to produce binary images. Special treatments were also applied to reduce the disturbance of dispersed bubbles around the gas slug. The binary images were then used to describe the bubble contour and to calculate slug parameters such as gas slug length, gas slug velocity, and slug frequency. As a result, the effects of superficial gas velocity and superficial liquid velocity on these fundamental parameters can be understood. Comparison with previous experimental results shows that image processing is a useful and promising technique for explaining slug characteristics.
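
    A minimal Python sketch of the binarization chain described above (the study itself used MATLAB) might look like this; the threshold value and the toy frames are assumptions.

    ```python
    import numpy as np
    from scipy import ndimage

    def binarize(frame, background, thresh=0.15):
        """Complement, subtract the static background, filter, threshold."""
        comp = 1.0 - frame                           # gas slug becomes bright
        sub = comp - (1.0 - background)              # background subtraction
        filt = ndimage.median_filter(sub, size=3)    # suppress dispersed bubbles
        return filt > thresh

    def slug_length_px(binary):
        """Axial extent (pixels) of the largest connected gas region."""
        labels, n = ndimage.label(binary)
        if n == 0:
            return 0
        sizes = ndimage.sum(binary, labels, range(1, n + 1))
        cols = np.where((labels == 1 + int(np.argmax(sizes))).any(axis=0))[0]
        return int(cols[-1] - cols[0] + 1)

    # Toy frames: a dark gas slug moving right along a light pipe.
    bg = np.full((40, 200), 0.9)
    f1, f2 = bg.copy(), bg.copy()
    f1[10:30, 20:80] = 0.2
    f2[10:30, 50:110] = 0.2
    print(slug_length_px(binarize(f1, bg)), slug_length_px(binarize(f2, bg)))
    # Slug velocity would follow from the nose displacement between frames:
    # v = delta_cols * metres_per_pixel * frame_rate
    ```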

  8. IOTA: integration optimization, triage and analysis tool for the processing of XFEL diffraction images.

    PubMed

    Lyubimov, Artem Y; Uervirojnangkoorn, Monarin; Zeldin, Oliver B; Brewster, Aaron S; Murray, Thomas D; Sauter, Nicholas K; Berger, James M; Weis, William I; Brunger, Axel T

    2016-06-01

    Serial femtosecond crystallography (SFX) uses an X-ray free-electron laser to extract diffraction data from crystals not amenable to conventional X-ray light sources owing to their small size or radiation sensitivity. However, a limitation of SFX is the high variability of the diffraction images that are obtained. As a result, it is often difficult to determine optimal indexing and integration parameters for the individual diffraction images. Presented here is a software package, called IOTA, which uses a grid-search technique to determine optimal spot-finding parameters that can in turn affect the success of indexing and the quality of integration on an image-by-image basis. Integration results can be filtered using a priori information about the Bravais lattice and unit-cell dimensions and analyzed for unit-cell isomorphism, facilitating an improvement in subsequent data-processing steps.
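
    In the spirit of IOTA's per-image grid search (the real package wraps cctbx spot finding and indexing, which is not reproduced here), a skeleton might look like the following; `try_index` and the parameter names are hypothetical stand-ins.

    ```python
    import itertools
    import numpy as np

    # Per-image grid of spot-finding parameters (names are hypothetical).
    GRID = {
        "gain": [0.5, 1.0, 2.0],
        "sigma_strong": [2, 3, 4, 5],
        "min_spot_area": [3, 5, 8],
    }

    def try_index(image, gain, sigma_strong, min_spot_area):
        """Stand-in for the real spot-finding + indexing call: returns a
        quality score (e.g. indexed reflections) or None on failure."""
        score = int(image.sum() / (gain * sigma_strong * min_spot_area))
        return score if score > 10 else None

    def best_params(image):
        best, best_score = None, -1
        for values in itertools.product(*GRID.values()):
            params = dict(zip(GRID, values))
            score = try_index(image, **params)
            if score is not None and score > best_score:
                best, best_score = params, score
        return best, best_score

    print(best_params(np.ones((64, 64))))
    ```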

  9. Study of spin-scan imaging for outer planets missions. [imaging techniques for Jupiter orbiter missions

    NASA Technical Reports Server (NTRS)

    Russell, E. E.; Chandos, R. A.; Kodak, J. C.; Pellicori, S. F.; Tomasko, M. G.

    1974-01-01

    The constraints that are imposed on the Outer Planet Missions (OPM) imager design are of critical importance. Imager system modeling analyses define important parameters and systematic means for trade-offs applied to specific Jupiter orbiter missions. Possible image sequence plans for Jupiter missions are discussed in detail. Considered is a series of orbits that allow repeated near encounters with three of the Jovian satellites. The data handling involved in the image processing is discussed, and it is shown that only minimal processing is required for the majority of images for a Jupiter orbiter mission.

  10. Dynamic single photon emission computed tomography—basic principles and cardiac applications

    PubMed Central

    Gullberg, Grant T; Reutter, Bryan W; Sitek, Arkadiusz; Maltz, Jonathan S; Budinger, Thomas F

    2011-01-01

    The very nature of nuclear medicine, the visual representation of injected radiopharmaceuticals, implies imaging of dynamic processes such as the uptake and wash-out of radiotracers from body organs. For years, nuclear medicine has been touted as the modality of choice for evaluating function in health and disease. This evaluation is greatly enhanced using single photon emission computed tomography (SPECT), which permits three-dimensional (3D) visualization of tracer distributions in the body. However, to fully realize the potential of the technique requires the imaging of in vivo dynamic processes of flow and metabolism. Tissue motion and deformation must also be addressed. Absolute quantification of these dynamic processes in the body has the potential to improve diagnosis. This paper presents a review of advancements toward the realization of the potential of dynamic SPECT imaging and a brief history of the development of the instrumentation. A major portion of the paper is devoted to the review of special data processing methods that have been developed for extracting kinetics from dynamic cardiac SPECT data acquired using rotating detector heads that move as radiopharmaceuticals exchange between biological compartments. Recent developments in multi-resolution spatiotemporal methods enable one to estimate kinetic parameters of compartment models of dynamic processes using data acquired from a single camera head with slow gantry rotation. The estimation of kinetic parameters directly from projection measurements improves bias and variance over the conventional method of first reconstructing 3D dynamic images, generating time–activity curves from selected regions of interest and then estimating the kinetic parameters from the generated time–activity curves. Although the potential applications of SPECT for imaging dynamic processes have not been fully realized in the clinic, it is hoped that this review illuminates the potential of SPECT for dynamic imaging, especially in light of new developments that enable measurement of dynamic processes directly from projection measurements. PMID:20858925
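
    For context, the conventional ROI-based approach that direct projection-domain estimation improves upon can be sketched as fitting a one-compartment model to a time–activity curve; the input function and rate constants below are synthetic assumptions.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    t = np.linspace(0, 60, 61)                  # minutes
    blood = t * np.exp(-t / 4.0)                # assumed arterial input function

    def tissue_tac(t, k1, k2):
        """One-compartment model: C_t = k1 * exp(-k2 t) convolved with C_b."""
        dt = t[1] - t[0]
        return k1 * np.convolve(blood, np.exp(-k2 * t))[: len(t)] * dt

    true = tissue_tac(t, 0.8, 0.15)
    noisy = true + np.random.default_rng(2).normal(0, 0.02 * true.max(), t.size)

    (k1, k2), _ = curve_fit(tissue_tac, t, noisy, p0=(0.5, 0.1), bounds=(0, np.inf))
    print(f"k1={k1:.3f}  k2={k2:.3f}")
    ```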

  11. TOPICAL REVIEW: Dynamic single photon emission computed tomography—basic principles and cardiac applications

    NASA Astrophysics Data System (ADS)

    Gullberg, Grant T.; Reutter, Bryan W.; Sitek, Arkadiusz; Maltz, Jonathan S.; Budinger, Thomas F.

    2010-10-01

    The very nature of nuclear medicine, the visual representation of injected radiopharmaceuticals, implies imaging of dynamic processes such as the uptake and wash-out of radiotracers from body organs. For years, nuclear medicine has been touted as the modality of choice for evaluating function in health and disease. This evaluation is greatly enhanced using single photon emission computed tomography (SPECT), which permits three-dimensional (3D) visualization of tracer distributions in the body. However, to fully realize the potential of the technique requires the imaging of in vivo dynamic processes of flow and metabolism. Tissue motion and deformation must also be addressed. Absolute quantification of these dynamic processes in the body has the potential to improve diagnosis. This paper presents a review of advancements toward the realization of the potential of dynamic SPECT imaging and a brief history of the development of the instrumentation. A major portion of the paper is devoted to the review of special data processing methods that have been developed for extracting kinetics from dynamic cardiac SPECT data acquired using rotating detector heads that move as radiopharmaceuticals exchange between biological compartments. Recent developments in multi-resolution spatiotemporal methods enable one to estimate kinetic parameters of compartment models of dynamic processes using data acquired from a single camera head with slow gantry rotation. The estimation of kinetic parameters directly from projection measurements improves bias and variance over the conventional method of first reconstructing 3D dynamic images, generating time-activity curves from selected regions of interest and then estimating the kinetic parameters from the generated time-activity curves. Although the potential applications of SPECT for imaging dynamic processes have not been fully realized in the clinic, it is hoped that this review illuminates the potential of SPECT for dynamic imaging, especially in light of new developments that enable measurement of dynamic processes directly from projection measurements.

  12. Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines

    PubMed Central

    Kurç, Tahsin M.; Taveira, Luís F. R.; Melo, Alba C. M. A.; Gao, Yi; Kong, Jun; Saltz, Joel H.

    2017-01-01

    Motivation: Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. Results: The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Conclusions: Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Availability and Implementation: Source code: https://github.com/SBU-BMI/region-templates/. Contact: teodoro@unb.br Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28062445
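
    A toy version of the screen-then-search strategy is sketched below: one-at-a-time screening prunes parameters whose variation barely moves the Dice score, and a random search then explores only the remaining space. The placeholder segmentation pipeline is an assumption, not the paper's region-templates framework.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(3)

    def dice(a, b):
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + 1e-9)

    def segment(image, blur, thresh, min_size):
        """Placeholder pipeline (assumption): smooth then threshold.
        It ignores min_size, so screening should prune that parameter."""
        return gaussian_filter(image, blur) > thresh

    image = rng.random((64, 64))
    truth = image > 0.6
    space = {"blur": (0.1, 5.0), "thresh": (0.3, 0.9), "min_size": (1, 50)}
    base = {k: float(np.mean(v)) for k, v in space.items()}

    # 1) one-at-a-time screening: prune parameters that barely move the score
    influence = {}
    for k, (lo, hi) in space.items():
        scores = [dice(segment(image, **{**base, k: x}), truth)
                  for x in np.linspace(lo, hi, 5)]
        influence[k] = max(scores) - min(scores)
    keep = [k for k, v in influence.items() if v > 0.01]
    print("influential:", keep)

    # 2) random search over the pruned space only
    def candidate():
        p = dict(base)
        p.update({k: rng.uniform(*space[k]) for k in keep})
        return p

    best = max((candidate() for _ in range(200)),
               key=lambda p: dice(segment(image, **p), truth))
    print("best Dice:", round(dice(segment(image, **best), truth), 3))
    ```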

  13. Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines.

    PubMed

    Teodoro, George; Kurç, Tahsin M; Taveira, Luís F R; Melo, Alba C M A; Gao, Yi; Kong, Jun; Saltz, Joel H

    2017-04-01

    Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Source code: https://github.com/SBU-BMI/region-templates/. Contact: teodoro@unb.br. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  14. Ill-posed problem and regularization in reconstruction of radiobiological parameters from serial tumor imaging data

    NASA Astrophysics Data System (ADS)

    Chvetsov, Alevei V.; Sandison, George A.; Schwartz, Jeffrey L.; Rengan, Ramesh

    2015-11-01

    The main objective of this article is to improve the stability of reconstruction algorithms for the estimation of radiobiological parameters using serial tumor imaging data acquired during radiation therapy. Serial images of tumor response to radiation therapy represent a complex summation of several exponential processes, such as treatment-induced cell inactivation, tumor growth, and the rate of cell loss. Accurate assessment of treatment response requires separation of these processes because they define the radiobiological determinants of treatment response and, correspondingly, tumor control probability. However, the estimation of radiobiological parameters from imaging data can be considered an inverse ill-posed problem, because a sum of several exponentials produces a Fredholm integral equation of the first kind, which is ill-posed. Therefore, the stability of reconstructing radiobiological parameters is a problem even for the simplest models of tumor response. To study the stability of the parameter reconstruction problem, we used a set of serial CT imaging data for head and neck cancer and the simplest case of a two-level cell population model of tumor response. Inverse reconstruction was performed using a simulated annealing algorithm to minimize a least-squares objective function. Results show that the reconstructed values of cell surviving fractions and cell doubling time exhibit significant nonphysical fluctuations if no stabilization is applied. However, after applying a stabilization algorithm based on variational regularization, the reconstruction produces statistical distributions for surviving fractions and doubling time that are comparable to published in vitro data. This algorithm is an advance over our previous work, in which only cell surviving fractions were reconstructed. We conclude that variational regularization allows an increase in the number of free parameters in our model, which enables the development of more advanced parameter reconstruction algorithms.
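
    The stabilized reconstruction can be sketched as penalized least squares on a two-exponential response model: a Tikhonov-style term pulls the surviving fraction and rate parameters toward prior values, suppressing the nonphysical fluctuations noted above. The model form, priors, and data below are illustrative assumptions, not the paper's exact model.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    t = np.array([0, 7, 14, 21, 28, 35], float)             # imaging days (toy)
    v_obs = np.array([1.0, 0.82, 0.60, 0.45, 0.36, 0.30])   # relative tumor volume

    def model(p, t):
        s, alpha, beta = p    # surviving fraction, regrowth rate, clearance rate
        return s * np.exp(alpha * t) + (1 - s) * np.exp(-beta * t)

    prior = np.array([0.3, 0.01, 0.1])    # assumed prior parameter values
    mu = 1e-2                             # regularization weight

    def objective(p):
        resid = model(p, t) - v_obs
        # least squares plus a Tikhonov-style (variational) penalty
        return np.sum(resid ** 2) + mu * np.sum((p - prior) ** 2)

    res = minimize(objective, x0=prior, bounds=[(0, 1), (0, 0.2), (0, 1)])
    print(res.x)
    ```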

  15. TestSTORM: Simulator for optimizing sample labeling and image acquisition in localization based super-resolution microscopy

    PubMed Central

    Sinkó, József; Kákonyi, Róbert; Rees, Eric; Metcalf, Daniel; Knight, Alex E.; Kaminski, Clemens F.; Szabó, Gábor; Erdélyi, Miklós

    2014-01-01

    Localization-based super-resolution microscopy image quality depends on several factors, such as dye choice and labeling strategy, microscope quality, and user-defined parameters such as frame rate and frame number, as well as the image processing algorithm. Experimental optimization of these parameters can be time-consuming and expensive, so we present TestSTORM, a simulator that can be used to optimize these steps. TestSTORM users can select from among four different structures with specific patterns, dye and acquisition parameters. Example results are shown, and the results for the vesicle pattern are compared with experimental data. Moreover, image stacks can be generated for further evaluation using localization algorithms, offering a tool for further software development. PMID:24688813
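
    A minimal simulator in the TestSTORM spirit (not its actual code) renders blinking emitters as Gaussian PSFs with Poisson noise; the dye and acquisition parameters below are placeholders of the kind one would sweep when optimizing an experiment.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def simulate_stack(n_frames=100, size=64, n_emitters=40,
                       on_prob=0.05, sigma_psf=1.2, photons=500, bg=10):
        """Blinking emitters rendered as Gaussian PSFs plus Poisson noise."""
        xy = rng.uniform(5, size - 5, (n_emitters, 2))
        yy, xx = np.mgrid[0:size, 0:size]
        stack = np.empty((n_frames, size, size))
        for f in range(n_frames):
            img = np.full((size, size), float(bg))
            for x, y in xy[rng.random(n_emitters) < on_prob]:   # 'on' emitters
                img += photons / (2 * np.pi * sigma_psf ** 2) * np.exp(
                    -((xx - x) ** 2 + (yy - y) ** 2) / (2 * sigma_psf ** 2))
            stack[f] = rng.poisson(img)
        return stack, xy

    stack, ground_truth = simulate_stack()
    print(stack.shape)     # feed this stack to any localization algorithm
    ```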

  16. A comprehensive numerical analysis of background phase correction with V-SHARP.

    PubMed

    Özbay, Pinar Senay; Deistung, Andreas; Feng, Xiang; Nanz, Daniel; Reichenbach, Jürgen Rainer; Schweser, Ferdinand

    2017-04-01

    Sophisticated harmonic artifact reduction for phase data (SHARP) is a method to remove background field contributions in MRI phase images, which is an essential processing step for quantitative susceptibility mapping (QSM). To perform SHARP, a spherical kernel radius and a regularization parameter need to be defined. In this study, we carried out an extensive analysis of the effect of these two parameters on the corrected phase images and on the reconstructed susceptibility maps. As a result of the dependence of the parameters on acquisition and processing characteristics, we propose a new SHARP scheme with generalized parameters. The new SHARP scheme uses a high-pass filtering approach to define the regularization parameter. We employed the variable-kernel SHARP (V-SHARP) approach, using different maximum radii (R_m) between 1 and 15 mm and varying regularization parameters (f) in a numerical brain model. The local root-mean-square error (RMSE) between the ground-truth, background-corrected field map and the results from SHARP decreased towards the center of the brain. The RMSE of susceptibility maps calculated with a spatial-domain algorithm was smallest for R_m between 6 and 10 mm and f between 0 and 0.01 mm⁻¹, and for maps calculated with a Fourier-domain algorithm for R_m between 10 and 15 mm and f between 0 and 0.0091 mm⁻¹. We demonstrated and confirmed the new parameter scheme in vivo. The novel regularization scheme allows the use of the same regularization parameter irrespective of other imaging parameters, such as image resolution. Copyright © 2016 John Wiley & Sons, Ltd.
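
    A bare-bones SHARP-style filter (brain-mask erosion and the paper's specific parameter scheme omitted) can be sketched as spherical-mean filtering followed by truncated Fourier-domain deconvolution, with the truncation threshold playing the role of the regularization parameter. The radius and threshold values below are arbitrary.

    ```python
    import numpy as np

    def sphere_kernel(shape, radius):
        """Normalized spherical-mean kernel, centered at the origin."""
        grids = np.meshgrid(*[np.fft.fftfreq(n) * n for n in shape], indexing="ij")
        k = (sum(g ** 2 for g in grids) <= radius ** 2).astype(float)
        return k / k.sum()

    def sharp(phase, radius=6, thresh=0.05):
        """(delta - sphere) filtering plus truncated Fourier deconvolution;
        thresh plays the role of the regularization parameter."""
        delta = np.zeros(phase.shape)
        delta[(0,) * phase.ndim] = 1.0
        C = np.fft.fftn(delta - sphere_kernel(phase.shape, radius))
        filtered = np.fft.fftn(phase) * C
        inv = np.zeros_like(C)
        mask = np.abs(C) > thresh            # truncate ill-conditioned frequencies
        inv[mask] = 1.0 / C[mask]
        return np.real(np.fft.ifftn(filtered * inv))

    phase = np.random.default_rng(5).normal(size=(32, 32, 32))
    print(sharp(phase).shape)
    ```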

  17. Fitting-free algorithm for efficient quantification of collagen fiber alignment in SHG imaging applications.

    PubMed

    Hall, Gunnsteinn; Liang, Wenxuan; Li, Xingde

    2017-10-01

    Collagen fiber alignment derived from second harmonic generation (SHG) microscopy images can be important for disease diagnostics. Image processing algorithms are needed to quantify the alignment in images robustly, with high sensitivity and reliability. Fourier transform (FT) magnitude, 2D power spectrum, and image autocorrelation have previously been used to extract fiber information from images by assuming a certain mathematical model (e.g. a Gaussian distribution of the fiber-related parameters) and fitting. The fitting process is slow and fails to converge when the data are not Gaussian. Herein we present an efficient constant-time deterministic algorithm that characterizes the symmetry of the FT magnitude image in terms of a single parameter, the fiber alignment anisotropy R, ranging from 0 (randomized fibers) to 1 (perfect alignment). This represents an important improvement of the technology and may bring us one step closer to utilizing it for various applications in real time. In addition, we present a digital image phantom-based framework for characterizing and validating the algorithm, as well as assessing its robustness against different perturbations.
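
    One plausible way to realize a single alignment parameter from the FT magnitude (the paper's exact estimator may differ) is via the second moments of the power spectrum: R = 1 - lambda_min/lambda_max of the frequency covariance matrix, which is 0 for isotropic images and approaches 1 for perfectly aligned fibers.

    ```python
    import numpy as np

    def alignment_anisotropy(img):
        """R = 1 - lambda_min/lambda_max of the FT power-spectrum covariance:
        0 for isotropic images, approaching 1 for perfectly aligned fibers."""
        f = np.fft.fftshift(np.abs(np.fft.fft2(img - img.mean())) ** 2)
        h, w = f.shape
        y, x = np.mgrid[0:h, 0:w]
        y, x = y - h // 2, x - w // 2
        m = f.sum()
        cov = np.array([[(f * x * x).sum(), (f * x * y).sum()],
                        [(f * x * y).sum(), (f * y * y).sum()]]) / m
        evals = np.linalg.eigvalsh(cov)            # ascending order
        return 1.0 - evals[0] / evals[1]

    # Horizontal stripes: spectral power concentrates along one axis, R ~ 1.
    img = np.sin(np.linspace(0, 20 * np.pi, 128))[:, None] * np.ones((1, 128))
    print(round(alignment_anisotropy(img), 3))
    ```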

  18. Image parameters for maturity determination of a composted material containing sewage sludge

    NASA Astrophysics Data System (ADS)

    Kujawa, S.; Nowakowski, K.; Tomczak, R. J.; Boniecki, P.; Dach, J.

    2013-07-01

    Composting is one of the best methods for the management of sewage sludge. In a properly conducted composting process it is important to identify early the moment at which the material reaches the young compost stage. The objective of this study was to determine parameters contained in images of composted material samples that can be used to evaluate the degree of compost maturity. The study focused on two types of compost: sewage sludge with corn straw and sewage sludge with rapeseed straw. The samples were photographed on a stand prepared for image acquisition using VIS, UV-A and mixed (VIS + UV-A) light. In the case of UV-A light, three values of the exposure time were used. The values of 46 parameters were estimated for each of the images extracted from the photographs of the composted material samples. Exemplary averaged values of selected parameters obtained from images of the composted material on successive sampling days are presented. All of the parameters obtained from the composted material images form the basis for preparing the training, validation and test data sets necessary for the development of neural models for classifying the young compost stage.
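
    A small subset of the kind of per-image parameters described above (the study's full set has 46) can be sketched as channel statistics and histogram features; the feature choices below are illustrative, not the authors' list.

    ```python
    import numpy as np

    def image_features(rgb):
        """Channel statistics and a histogram entropy as example parameters."""
        feats = {}
        for i, name in enumerate("RGB"):
            ch = rgb[..., i].astype(float)
            feats[f"mean_{name}"] = ch.mean()
            feats[f"std_{name}"] = ch.std()
            feats[f"skew_{name}"] = ((ch - ch.mean()) ** 3).mean() / (ch.std() ** 3 + 1e-9)
        gray = rgb.astype(float).mean(axis=2)
        hist, _ = np.histogram(gray, bins=16, range=(0, 255))
        p = hist / hist.sum()
        feats["entropy"] = -np.sum(p[p > 0] * np.log2(p[p > 0]))
        return feats

    sample = np.random.default_rng(6).integers(0, 256, (120, 160, 3), dtype=np.uint8)
    print(len(image_features(sample)), "features")
    ```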

  19. Image Discrimination Models With Stochastic Channel Selection

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Beard, Bettina L.; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    Many models of human image processing feature a large fixed number of channels representing cortical units varying in spatial position (visual field direction and eccentricity) and spatial frequency (radial frequency and orientation). These parameters are usually sampled at fixed values selected to ensure adequate overlap given the bandwidth and/or spread parameters, which are usually fixed. Even high levels of overlap do not always ensure that the performance of the model will vary smoothly with image translation or scale changes. Physiological measurements of bandwidth and/or spread parameters yield a broad distribution of estimated parameter values, and the prediction of some psychophysical results is facilitated by the assumption that these parameters also take on a range of values. Selecting a sample of channels from a continuum of channels, rather than using a fixed set, can make model performance vary smoothly with changes in image position, scale, and orientation. It also facilitates the addition of spatial inhomogeneity, nonlinear feature channels, and focus of attention to channel models.
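
    In code, the stochastic channel selection idea reduces to drawing channel parameters from continuous distributions rather than a fixed lattice; the particular distributions below are assumptions for illustration only.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def sample_channels(n):
        """Draw channel parameters from continuous distributions rather than
        a fixed lattice (all distributions here are illustrative)."""
        return {
            "x": rng.uniform(0, 1, n),                     # visual-field position
            "y": rng.uniform(0, 1, n),
            "freq": 2.0 ** rng.uniform(-1, 4, n),          # radial frequency (c/deg)
            "orient": rng.uniform(0, np.pi, n),            # orientation (rad)
            "bw": rng.normal(1.4, 0.3, n).clip(0.5, None)  # bandwidth spread
        }

    channels = sample_channels(500)   # a fresh sample per model evaluation
    print(channels["freq"].min(), channels["freq"].max())
    ```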

  20. Advances in interpretation of subsurface processes with time-lapse electrical imaging

    USGS Publications Warehouse

    Singha, Kamini; Day-Lewis, Frederick D.; Johnson, Tim B.; Slater, Lee D.

    2015-01-01

    Electrical geophysical methods, including electrical resistivity, time-domain induced polarization, and complex resistivity, have become commonly used to image the near subsurface. Here, we outline their utility for time-lapse imaging of hydrological, geochemical, and biogeochemical processes, focusing on new instrumentation, processing, and analysis techniques specific to monitoring. We review data collection procedures, parameters measured, and petrophysical relationships and then outline the state of the science with respect to inversion methodologies, including coupled inversion. We conclude by highlighting recent research focused on innovative applications of time-lapse imaging in hydrology, biology, ecology, and geochemistry, among other areas of interest.

  1. Advances in interpretation of subsurface processes with time-lapse electrical imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singha, Kamini; Day-Lewis, Frederick D.; Johnson, Timothy C.

    2015-03-15

    Electrical geophysical methods, including electrical resistivity, time-domain induced polarization, and complex resistivity, have become commonly used to image the near subsurface. Here, we outline their utility for time-lapse imaging of hydrological, geochemical, and biogeochemical processes, focusing on new instrumentation, processing, and analysis techniques specific to monitoring. We review data collection procedures, parameters measured, and petrophysical relationships and then outline the state of the science with respect to inversion methodologies, including coupled inversion. We conclude by highlighting recent research focused on innovative applications of time-lapse imaging in hydrology, biology, ecology, and geochemistry, among other areas of interest.

  2. Direct Metal Deposition of H13 Tool Steel on Copper Alloy Substrate: Parametric Investigation

    NASA Astrophysics Data System (ADS)

    Imran, M. Khalid; Masood, S. H.; Brandt, Milan

    2015-12-01

    Over the past decade, researchers have shown interest in tribology and prototyping by laser-aided material deposition processes. Laser-aided direct metal deposition (DMD) enables the formation of a uniform clad by melting powder to form a desired component from metal powder materials. In this research, H13 tool steel was clad onto a copper alloy substrate using DMD. The effects of laser parameters on the quality of the DMD-deposited clad were investigated, and acceptable processing parameters were determined largely through trial-and-error approaches. The relationships between DMD process parameters and product characteristics such as porosity, micro-cracks and microhardness were analysed using a scanning electron microscope (SEM), image analysis software (ImageJ) and a microhardness tester. It was found that DMD parameters such as laser power, powder mass flow rate, feed rate and focus size play an important role in clad quality and crack formation.

  3. High-volume image quality assessment systems: tuning performance with an interactive data visualization tool

    NASA Astrophysics Data System (ADS)

    Bresnahan, Patricia A.; Pukinskis, Madeleine; Wiggins, Michael

    1999-03-01

    Image quality assessment systems differ greatly with respect to the number and types of images they need to evaluate, and their overall architectures. Managers of these systems, however, all need to be able to tune and evaluate system performance, requirements often overlooked or under-designed during project planning. Performance tuning tools allow users to define acceptable quality standards for image features and attributes by adjusting parameter settings. Performance analysis tools allow users to evaluate and/or predict how well a system performs in a given parameter state. While image assessment algorithms are becoming quite sophisticated, duplicating or surpassing the human decision making process in their speed and reliability, they often require a greater investment in 'training' or fine tuning of parameters in order to achieve optimum performance. This process may involve the analysis of hundreds or thousands of images, generating a large database of files and statistics that can be difficult to sort through and interpret. Compounding the difficulty is the fact that personnel charged with tuning and maintaining the production system may not have the statistical or analytical background required for the task. Meanwhile, hardware innovations have greatly increased the volume of images that can be handled in a given time frame, magnifying the consequences of running a production site with an inadequately tuned system. In this paper, some general requirements for a performance evaluation and tuning data visualization system are discussed. A custom engineered solution to the tuning and evaluation problem is then presented, developed within the context of a high volume image quality assessment, data entry, OCR, and image archival system. A key factor influencing the design of the system was the context-dependent definition of image quality, as perceived by a human interpreter. This led to the development of a five-level, hierarchical approach to image quality evaluation. Lower-level pass-fail conditions and decision rules were coded into the system. Higher-level image quality states were defined by allowing the users to interactively adjust the system's sensitivity to various image attributes by manipulating graphical controls. Results were presented in easily interpreted bar graphs. These graphs were mouse-sensitive, allowing the user to more fully explore the subsets of data indicated by various color blocks. In order to simplify the performance evaluation and tuning process, users could choose to view the results of (1) the existing system parameter state, (2) the results of any arbitrary parameter values they chose, or (3) the results of a quasi-optimum parameter state, derived by applying a decision rule to a large set of possible parameter states. Giving managers easy-to-use tools for defining the more subjective aspects of quality resulted in a system that responded to contextual cues that are difficult to hard-code. It had the additional advantage of allowing the definition of quality to evolve over time, as users became more knowledgeable as to the strengths and limitations of an automated quality inspection system.

  4. The Design and Development of Test Platform for Wheat Precision Seeding Based on Image Processing Techniques

    NASA Astrophysics Data System (ADS)

    Li, Qing; Lin, Haibo; Xiu, Yu-Feng; Wang, Ruixue; Yi, Chuijie

    A test platform for wheat precision seeding based on image processing techniques was designed to support development of a wheat precision seed metering device with high efficiency and precision. Using image processing techniques, the platform gathers images of seeds (wheat) falling from the seed metering device onto a conveyor belt. These data are processed and analyzed to calculate the qualified rate, reseeding rate and leakage (missed) sowing rate, etc. This paper introduces the overall structure and design parameters of the platform and the hardware and software of the image acquisition system, as well as the method of seed identification and seed-spacing measurement based on image thresholding and locating each seed's center. Analysis of the experimental results shows that the measurement error is less than ±1 mm.
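
    The seed identification and spacing measurement can be sketched with thresholding plus connected-component centroids; the threshold value and the toy belt image below are assumptions.

    ```python
    import numpy as np
    from scipy import ndimage

    def seed_spacings(image, thresh):
        """Threshold the belt image, label the seeds, and return spacings
        between consecutive seed centers along the belt direction."""
        binary = image < thresh                      # dark seeds on a light belt
        labels, n = ndimage.label(binary)
        centers = ndimage.center_of_mass(binary, labels, range(1, n + 1))
        xs = sorted(c[1] for c in centers)           # order along the belt axis
        return np.diff(xs)                           # in pixels; scale by mm/px

    # Toy belt image with three seeds centered at x = 20, 60, 105.
    img = np.full((40, 140), 200, dtype=np.uint8)
    for x in (20, 60, 105):
        img[18:23, x - 2:x + 3] = 30
    print(seed_spacings(img, thresh=100))            # ~[40, 45]
    ```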

  5. Colony image acquisition and genetic segmentation algorithm and colony analyses

    NASA Astrophysics Data System (ADS)

    Wang, W. X.

    2012-01-01

    Colony analysis is used in a large number of industries such as food, dairy, beverages, hygiene, environmental monitoring, water, toxicology and sterility testing. To reduce labor and increase analysis accuracy, many researchers and developers have worked on image analysis systems. The main problems in such systems are image acquisition, image segmentation and image analysis. In this paper, to acquire colony images with good quality, an illumination box was constructed in which the distances between the lights and the dish, the camera lens and the lights, and the camera lens and the dish are adjusted optimally. Image segmentation is based on a genetic approach that allows one to treat the segmentation problem as a global optimization. After image pre-processing and segmentation, the colony analyses are performed. The colony image analysis consists of (1) basic colony parameter measurements; (2) colony size analysis; (3) colony shape analysis; and (4) colony surface measurements. All of the above visual colony parameters can be selected and combined to form new engineering parameters, and the colony analysis can be applied to different applications.
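
    A toy version of segmentation-as-global-optimization by a genetic approach: a GA evolves a global threshold that maximizes Otsu's between-class variance. The abstract does not specify the real system's fitness function or operators, so this is only a schematic under those assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    def fitness(thresh, img):
        """Between-class variance (Otsu's criterion) as the global objective."""
        fg, bg = img[img >= thresh], img[img < thresh]
        if fg.size == 0 or bg.size == 0:
            return 0.0
        w1, w2 = fg.size / img.size, bg.size / img.size
        return w1 * w2 * (fg.mean() - bg.mean()) ** 2

    def ga_threshold(img, pop=30, gens=50, sigma=8.0):
        genes = rng.uniform(img.min(), img.max(), pop)
        for _ in range(gens):
            scores = np.array([fitness(g, img) for g in genes])
            parents = genes[np.argsort(scores)[-pop // 2:]]          # selection
            children = parents + rng.normal(0, sigma, parents.size)  # mutation
            genes = np.concatenate([parents, children])
        return max(genes, key=lambda g: fitness(g, img))

    # Toy dish: bright colonies on a darker agar background.
    img = rng.normal(60, 10, (128, 128))
    img[30:40, 30:40] += 120
    img[80:95, 70:85] += 120
    print(round(ga_threshold(img), 1))
    ```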

  6. Research on polarization imaging information parsing method

    NASA Astrophysics Data System (ADS)

    Yuan, Hongwu; Zhou, Pucheng; Wang, Xiaolong

    2016-11-01

    Polarization information parsing plays an important role in polarization imaging detection. This paper focuses on polarization information parsing methods. First, the general process of polarization information parsing is given, comprising polarization image preprocessing, calculation of multiple polarization parameters, polarization image fusion and polarization image tracking. Research achievements for each step are then presented. For polarization image preprocessing, a polarization image registration method based on maximum mutual information is designed; experiments show that this method improves registration precision and satisfies the needs of polarization information parsing. For the calculation of multiple polarization parameters, an omnidirectional polarization inversion model is built, from which a variety of polarization parameter images are obtained with clearly improved inversion precision. For polarization image fusion, an adaptive optimal fusion method for multiple polarization parameters using fuzzy integrals and sparse representation is given, and target detection in complex scenes is completed using a clustering image segmentation algorithm based on fractal characteristics. For polarization image tracking, a mean-shift fusion tracking algorithm on polarization image features assisted by particle filtering is put forward to achieve smooth tracking of moving targets. Finally, the polarization information parsing method is applied to polarization imaging detection of typical targets such as camouflaged targets, fog and latent fingerprints.
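
    The multiple-polarization-parameter calculation step commonly reduces to Stokes parameters computed from intensity images at four polarizer angles, from which the degree and angle of linear polarization follow; a standard sketch (not necessarily the authors' inversion model) is:

    ```python
    import numpy as np

    def polarization_parameters(i0, i45, i90, i135):
        """Stokes parameters and derived DoLP / AoP images from intensities
        behind a linear polarizer at 0/45/90/135 degrees."""
        s0 = 0.5 * (i0 + i45 + i90 + i135)
        s1 = i0 - i90
        s2 = i45 - i135
        dolp = np.sqrt(s1 ** 2 + s2 ** 2) / (s0 + 1e-9)  # degree of linear pol.
        aop = 0.5 * np.arctan2(s2, s1)                   # angle of polarization
        return s0, s1, s2, dolp, aop

    rng = np.random.default_rng(9)
    i0, i45, i90, i135 = (rng.random((32, 32)) for _ in range(4))
    s0, s1, s2, dolp, aop = polarization_parameters(i0, i45, i90, i135)
    print(float(dolp.mean()))
    ```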

  7. Surface defect detection in tiling Industries using digital image processing methods: analysis and evaluation.

    PubMed

    Karimi, Mohammad H; Asemani, Davud

    2014-05-01

    Ceramic and tile industries must include a grading stage to quantify the quality of products. In practice, human inspection is often used for grading, but an automatic grading system is essential to enhance the quality control and marketing of the products. Since there are generally six different types of defects, originating from various stages of tile manufacturing lines, with distinct textures and morphologies, many image processing techniques have been proposed for defect detection. In this paper, a survey is made of the pattern recognition and image processing algorithms that have been used to detect surface defects. Each method appears to be limited to detecting some subgroup of defects. The detection techniques may be divided into three main groups: statistical pattern recognition, feature vector extraction and texture/image classification. Methods such as the wavelet transform, filtering, morphology and the contourlet transform are more effective for pre-processing tasks. Others, including statistical methods, neural networks and model-based algorithms, can be applied to extract the surface defects. Although statistical methods are often appropriate for the identification of large defects such as Spots, techniques such as wavelet processing provide an acceptable response for the detection of small defects such as Pinholes. A thorough survey is made in this paper of the existing algorithms in each subgroup. The evaluation parameters are also discussed, including supervised and unsupervised parameters. Using various performance parameters, different defect detection algorithms are compared and evaluated. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  8. A technique for automatically extracting useful field of view and central field of view images.

    PubMed

    Pandey, Anil Kumar; Sharma, Param Dev; Aheer, Deepak; Kumar, Jay Prakash; Sharma, Sanjay Kumar; Patel, Chetan; Kumar, Rakesh; Bal, Chandra Sekhar

    2016-01-01

    It is essential to ensure the uniform response of a single photon emission computed tomography gamma camera system before using it for clinical studies by exposing it to a uniform flood source. Vendor-specific acquisition and processing protocols provide for studying flood source images along with quantitative uniformity parameters such as integral and differential uniformity. A significant difficulty, however, is that the time required to acquire a flood source image varies from 10 to 35 min, depending both on the activity of the Cobalt-57 flood source and on the prespecified counts in the vendor's protocol (usually 4000K-10,000K counts). If the acquired total counts are less than the prespecified total counts, the vendor's uniformity processing protocol does not proceed with the computation of the quantitative uniformity parameters. In this study, we developed and verified a technique that reads the flood source image, removes unwanted information, and automatically extracts and saves the useful field of view and central field of view images for the calculation of the uniformity parameters. This was implemented using MATLAB R2013b running on the Ubuntu operating system and was verified by applying it to simulated and real flood source images. The accuracy of the technique was found to be encouraging, especially in view of practical difficulties with vendor-specific protocols. It may be used as a preprocessing step while calculating uniformity parameters of the gamma camera in less time and with fewer constraints.
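
    After the UFOV and CFOV images are extracted, uniformity follows from the NEMA-style formula. The sketch below approximates UFOV/CFOV extraction as central crops (the real method detects the flood edges first) and computes integral uniformity after 9-point smoothing; the crop fractions are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import convolve

    def central_crop(img, fraction):
        """Crop the central `fraction` of each image dimension."""
        h, w = img.shape
        dh, dw = int(h * (1 - fraction) / 2), int(w * (1 - fraction) / 2)
        return img[dh:h - dh, dw:w - dw]

    def integral_uniformity(img):
        """NEMA-style integral uniformity (%) after 9-point smoothing."""
        kernel = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 16.0
        s = convolve(img.astype(float), kernel, mode="nearest")
        return 100.0 * (s.max() - s.min()) / (s.max() + s.min())

    flood = np.random.default_rng(10).poisson(1000, (64, 64)).astype(float)
    ufov = central_crop(flood, 0.95)    # useful field of view (approximation)
    cfov = central_crop(ufov, 0.75)     # central field of view: 75% of UFOV
    print(integral_uniformity(ufov), integral_uniformity(cfov))
    ```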

  9. VESGEN Software for Mapping and Quantification of Vascular Regulators

    NASA Technical Reports Server (NTRS)

    Parsons-Wingerter, Patricia A.; Vickerman, Mary B.; Keith, Patricia A.

    2012-01-01

    VESsel GENeration (VESGEN) Analysis is automated software that maps and quantifies the effects of vascular regulators on vascular morphology by analyzing important vessel parameters. Quantification parameters include vessel diameter, length, branch points, density, and fractal dimension. For vascular trees, measurements are reported as dependent functions of vessel branching generation. VESGEN maps and quantifies vascular morphological events according to fractal-based vascular branching generation. It also relies on careful imaging of branching and networked vascular form. It was developed as a plug-in for ImageJ (National Institutes of Health, USA). VESGEN uses the image-processing concepts of 8-neighbor pixel connectivity, the skeleton, and the distance map to analyze 2D, black-and-white (binary) images of vascular trees, networks, and tree-network composites. VESGEN maps typically 5 to 12 (or more) generations of vascular branching, starting from a single parent vessel. These generations are tracked and measured for critical vascular parameters that include vessel diameter, length, density and number, and tortuosity per branching generation. The effects of vascular therapeutics and regulators on vascular morphology and branching tested in human clinical or laboratory animal experimental studies are quantified by comparing vascular parameters with those of control groups. VESGEN provides a user interface to both guide and allow control over the user's vascular analysis process. An option is provided to select a morphological tissue type of vascular trees, networks or tree-network composites, which determines the general collection of algorithms, intermediate images, and output images and measurements that will be produced.
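
    The skeleton-plus-distance-map idea for vessel measurement can be sketched directly: the Euclidean distance transform evaluated at centerline (skeleton) pixels gives the local radius, hence diameter. This only illustrates the concept; VESGEN itself is an ImageJ plug-in, not this Python code.

    ```python
    import numpy as np
    from scipy.ndimage import distance_transform_edt
    from skimage.morphology import skeletonize

    def vessel_diameters(binary_vessels):
        """Local vessel diameter along the centerline: the Euclidean distance
        map evaluated at skeleton pixels gives the local radius."""
        skel = skeletonize(binary_vessels)
        dist = distance_transform_edt(binary_vessels)
        return 2.0 * dist[skel]          # diameters in pixels

    # Toy vessel: a 7-pixel-thick horizontal bar.
    img = np.zeros((50, 120), bool)
    img[22:29, 10:110] = True
    print(vessel_diameters(img).mean())  # ~ bar thickness, up to discretization
    ```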

  10. Super-resolution for everybody: An image processing workflow to obtain high-resolution images with a standard confocal microscope.

    PubMed

    Lam, France; Cladière, Damien; Guillaume, Cyndélia; Wassmann, Katja; Bolte, Susanne

    2017-02-15

    In the presented work we aimed at improving confocal imaging to obtain the highest possible resolution in thick biological samples, such as the mouse oocyte. We therefore developed an image processing workflow that improves the lateral and axial resolution of a standard confocal microscope. Our workflow comprises refractive index matching, the optimization of microscope hardware parameters and image restoration by deconvolution. We compare two different deconvolution algorithms, evaluate the necessity of denoising and establish the optimal image restoration procedure. We validate our workflow by imaging sub-resolution fluorescent beads and measuring the maximum lateral and axial resolution of the confocal system. Subsequently, we apply the parameters to the imaging and data restoration of fluorescently labelled meiotic spindles of mouse oocytes. We measure a resolution increase of approximately 2-fold in the lateral and 3-fold in the axial direction throughout a depth of 60 μm. This demonstrates that with our optimized workflow we reach a resolution that is comparable to 3D-SIM imaging, but with better depth penetration, for confocal images of beads and the biological sample. Copyright © 2016 Elsevier Inc. All rights reserved.
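
    For the deconvolution step, a plain Richardson–Lucy iteration (one common algorithm family for confocal restoration; the workflow's exact algorithms are not reproduced here) looks like this, demonstrated on a synthetic bead:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(image, psf, n_iter=30):
        """Plain Richardson-Lucy deconvolution."""
        est = np.full_like(image, image.mean(), dtype=float)
        psf_mirror = psf[::-1, ::-1]
        for _ in range(n_iter):
            conv = fftconvolve(est, psf, mode="same")
            ratio = image / np.maximum(conv, 1e-12)
            est *= fftconvolve(ratio, psf_mirror, mode="same")
        return est

    # Toy bead: blur a point source with a Gaussian PSF, then restore it.
    yy, xx = np.mgrid[-7:8, -7:8]
    psf = np.exp(-(xx ** 2 + yy ** 2) / 4.0)
    psf /= psf.sum()
    truth = np.zeros((64, 64))
    truth[32, 32] = 1.0
    blurred = fftconvolve(truth, psf, mode="same") + 1e-3
    print(richardson_lucy(blurred, psf).max())   # energy re-concentrates
    ```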

  11. Automatic Reacquisition of Satellite Positions by Detecting Their Expected Streaks in Astronomical Images

    NASA Astrophysics Data System (ADS)

    Levesque, M.

    Artificial satellites, and particularly space junk, drift continuously from their known orbits. In the surveillance-of-space context, they must be observed frequently to ensure that the corresponding orbital parameter database entries are up to date. Autonomous ground-based optical systems are periodically tasked to observe these objects, calculate the difference between their predicted and real positions, and update the object orbital parameters. The real satellite positions are provided by detecting the satellite streaks in astronomical images acquired specifically for this purpose. This paper presents the image processing techniques used to detect and extract the satellite positions. The methodology includes several processing steps: image background estimation and removal, star detection and removal, an iterative matched filter for streak detection, and finally false alarm rejection algorithms. This detection methodology is able to detect very faint objects. Simulated data were used to evaluate the methodology's performance and determine the sensitivity limits within which the algorithm can perform detection without false alarms, which is essential to avoid corruption of the orbital parameter database.
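
    The matched filter for streaks can be sketched as correlation with line kernels over candidate orientations, taking the maximum response per pixel; the kernel length, angle step, and the toy streak below are assumptions, and the paper's iterative refinement is omitted.

    ```python
    import numpy as np
    from scipy.ndimage import correlate, rotate

    def line_kernel(length=15, angle_deg=0.0):
        """Unit-norm line kernel: the matched filter for one streak angle."""
        k = np.zeros((length, length))
        k[length // 2, :] = 1.0
        k = rotate(k, angle_deg, reshape=False, order=1)
        return k / np.linalg.norm(k)

    def streak_response(img, angles=range(0, 180, 10)):
        """Max matched-filter response over candidate streak orientations
        (background and stars assumed already removed)."""
        img = img - np.median(img)
        return np.max([correlate(img, line_kernel(15, a)) for a in angles], axis=0)

    rng = np.random.default_rng(11)
    frame = rng.normal(0, 1.0, (128, 128))
    for i in range(40):                          # a faint diagonal streak
        frame[40 + i, 30 + i] += 1.5
    resp = streak_response(frame)
    print(np.unravel_index(resp.argmax(), resp.shape))   # lands on the streak
    ```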

  12. 3D space positioning and image feature extraction for workpiece

    NASA Astrophysics Data System (ADS)

    Ye, Bing; Hu, Yi

    2008-03-01

    An optical system for measuring the 3D parameters of a specific area of a workpiece is presented and discussed in this paper. A number of CCD image sensors are employed to construct the 3D coordinate system for the measured area. The CCD image sensor monitoring the target is used to lock onto the measured workpiece when it enters the field of view. The other sensors, placed symmetrically with the laser beam scanners, measure the shape of the workpiece and its characteristic parameters. The paper establishes a target image segmentation and image feature extraction algorithm to lock onto the target; based on the geometric similarity of object characteristics, rapid target lock-on can be realized. While the line laser beam scans the tested workpiece, a number of images are extracted at equal time intervals and the overlapping images are processed to complete the image reconstruction and obtain the 3D image information. From the 3D coordinate reconstruction model, the 3D characteristic parameters of the tested workpiece are obtained. Experimental results are provided in the paper.

  13. Segmentation of anatomical structures of the heart based on echocardiography

    NASA Astrophysics Data System (ADS)

    Danilov, V. V.; Skirnevskiy, I. P.; Gerget, O. M.

    2017-01-01

    Nowadays, many practical applications in the field of medical image processing require valid and reliable segmentation of images as input data. Some of the commonly used imaging techniques are ultrasound, CT, and MRI. The main difference between EchoCG and other medical imaging equipment is that it is safer, low cost, non-invasive and non-traumatic. Three-dimensional EchoCG is a non-invasive imaging modality that is complementary and supplementary to two-dimensional imaging and can be used to examine cardiovascular function and anatomy in different medical settings. The challenging problems presented by EchoCG image processing, such as speckle phenomena, noise, temporal non-stationarity of processes, unsharp boundaries, attenuation, etc., led us to consider and compare existing methods and then to develop an innovative approach that can tackle the problems connected with clinical applications. The present study concerns the analysis and development of an automatic system for the detection of cardiac parameters from EchoCG that will provide new data on the dynamics of changes in cardiac parameters and improve the accuracy and reliability of diagnosis. Research in image segmentation has highlighted the capabilities of image-based methods for medical applications. The focus of the research is on both theoretical and practical aspects of the application of the methods. Some of the segmentation approaches may be of interest to the imaging and medical communities. Performance evaluation is carried out by comparing the borders obtained from the considered methods to those manually prescribed by a medical specialist. Promising results demonstrate the possibilities and the limitations of each technique for image segmentation problems. The developed approach makes it possible to eliminate errors in calculating the geometric parameters of the heart; to meet requirements such as speed, accuracy and reliability; and to build a master model that can serve as an indispensable assistant for operations on a beating heart.

  14. Error Estimation Techniques to Refine Overlapping Aerial Image Mosaic Processes via Detected Parameters

    ERIC Educational Resources Information Center

    Bond, William Glenn

    2012-01-01

    In this paper, I propose to demonstrate a means of error estimation preprocessing in the assembly of overlapping aerial image mosaics. The mosaic program automatically assembles several hundred aerial images from a data set by aligning them, via image registration using a pattern search method, onto a GIS grid. The method presented first locates…

  15. Computer image processing - The Viking experience. [digital enhancement techniques

    NASA Technical Reports Server (NTRS)

    Green, W. B.

    1977-01-01

    Computer processing of digital imagery from the Viking mission to Mars is discussed, with attention given to subjective enhancement and quantitative processing. Contrast stretching and high-pass filtering techniques of subjective enhancement are described; algorithms developed to determine optimal stretch and filtering parameters are also mentioned. In addition, geometric transformations to rectify the distortion of shapes in the field of view and to alter the apparent viewpoint of the image are considered. Perhaps the most difficult problem in quantitative processing of Viking imagery was the production of accurate color representations of Orbiter and Lander camera images.

  16. The development of machine technology processing for earth resource survey

    NASA Technical Reports Server (NTRS)

    Landgrebe, D. A.

    1970-01-01

    The following technologies are considered for automatic processing of earth resources data: (1) registration of multispectral and multitemporal images, (2) digital image display systems, (3) data system parameter effects on satellite remote sensing systems, and (4) data compression techniques based on spectral redundancy. The importance of proper spectral band and compression algorithm selections is pointed out.

  17. Using hyperspectral imaging technology to identify diseased tomato leaves

    NASA Astrophysics Data System (ADS)

    Li, Cuiling; Wang, Xiu; Zhao, Xueguan; Meng, Zhijun; Zou, Wei

    2016-11-01

    During tomato plant growth, plant genetic factors, poor environmental conditions, or parasite activity can produce a series of abnormal symptoms in the plants' physiology, tissue structure and external form; as a result, the plants cannot grow normally, which in turn reduces tomato yield and economic benefit. Hyperspectral images usually have high spectral resolution and contain not only spectral but also spatial image information, so this study adopted hyperspectral imaging technology to identify diseased tomato leaves. A simple hyperspectral imaging system was developed, comprising a halogen lamp light source unit, a hyperspectral image acquisition unit and a data processing unit. The spectrometer detection wavelength ranged from 400 nm to 1000 nm. After the hyperspectral images of tomato leaves were captured, they were calibrated. Spectral angle matching and a discriminant method based on spectral red-edge parameters were then used to identify diseased tomato leaves. The red-edge discriminant method produced the higher recognition accuracy, above 90%. The results show that using hyperspectral imaging technology to identify diseased tomato leaves is feasible and provides a discriminant basis for subsequent disease control of tomato plants.
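
    For reference, the spectral angle matching step can be written compactly: each pixel spectrum is compared to a reference spectrum through the angle between them, and small angles indicate similar material. This sketch assumes a cube already flattened to (pixels x bands); the 0.15 rad threshold is illustrative, not a value from the study.

      import numpy as np

      def spectral_angle(pixels, reference):
          """Angle (radians) between each pixel spectrum and a reference spectrum."""
          dots = pixels @ reference
          norms = np.linalg.norm(pixels, axis=1) * np.linalg.norm(reference)
          return np.arccos(np.clip(dots / norms, -1.0, 1.0))

      # Flag pixels whose spectra deviate strongly from a healthy-leaf reference:
      # diseased_mask = spectral_angle(cube_2d, healthy_mean) > 0.15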

  18. Chain of evidence generation for contrast enhancement in digital image forensics

    NASA Astrophysics Data System (ADS)

    Battiato, Sebastiano; Messina, Giuseppe; Strano, Daniela

    2010-01-01

    The quality of images obtained by digital cameras has improved greatly since the early days of digital photography. Unfortunately, it is not unusual in image forensics to encounter wrongly exposed pictures, mainly due to obsolete techniques or old technology, but also due to backlight conditions. To bring out invisible details, stretching the image contrast is obviously required. Forensic rules for producing evidence require complete documentation of the processing steps, enabling replication of the entire process. The automation of enhancement techniques is thus quite difficult and must be carefully documented. This work presents an automatic procedure for finding contrast enhancement settings, allowing both image correction and automatic script generation. The technique is based on a preprocessing step that extracts the features of the image and selects correction parameters. The parameters are then saved in JavaScript code that is used in the second step of the approach to correct the image. The generated script is Adobe Photoshop compliant (Photoshop being widely used in forensic image analysis), thus permitting replication of the enhancement steps. Experiments on a dataset of images are also reported, showing the effectiveness of the proposed methodology.
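
    A minimal sketch of such a two-step procedure is shown below, assuming a percentile-based choice of black and white points (the paper's feature extraction is more elaborate). The emitted script uses Photoshop's adjustLevels scripting call, but treat the exact call as an assumption; the point is that the chosen parameters are persisted so the enhancement is replayable.

      import numpy as np

      def stretch_parameters(gray, low_pct=0.5, high_pct=99.5):
          """Step 1: derive black/white points from histogram percentiles."""
          lo, hi = np.percentile(gray, [low_pct, high_pct])
          return float(lo), float(hi)

      def emit_levels_script(lo, hi, path="enhance.jsx"):
          """Step 2: persist the parameters as a Photoshop-compliant script."""
          js = ("var doc = app.activeDocument;\n"
                f"doc.activeLayer.adjustLevels({lo:.0f}, {hi:.0f}, 1.0, 0, 255);\n")
          with open(path, "w") as f:
              f.write(js)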

  19. Nondestructive, fast, and cost-effective image processing method for roughness measurement of randomly rough metallic surfaces.

    PubMed

    Ghodrati, Sajjad; Kandi, Saeideh Gorji; Mohseni, Mohsen

    2018-06-01

    In recent years, various surface roughness measurement methods have been proposed as alternatives to the commonly used stylus profilometry, which is a low-speed, destructive, and expensive but precise method. In this study, a novel method, called "image profilometry," is introduced for nondestructive, fast, and low-cost surface roughness measurement of randomly rough metallic samples, based on image processing and machine vision. The impacts of influential parameters, such as image resolution and the filtering approach for eliminating long-wavelength surface undulations, on the accuracy of the image profilometry results are comprehensively investigated. Ten surface roughness parameters were measured for the samples using both stylus and image profilometry. Based on the results, the best image resolution was 800 dpi, and the most practical filtering method was Gaussian convolution + cutoff. Under these conditions, the best and worst correlation coefficients (R²) between the stylus and image profilometry results were 0.9892 and 0.9313, respectively. Our results indicate that image profilometry predicts the stylus profilometry results with high accuracy. Consequently, it could be a viable alternative to stylus profilometry, particularly in online applications.
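
    The filtering step the authors compare can be illustrated as follows: a Gaussian filter with a long-wavelength cutoff separates waviness from the profile, and roughness parameters are computed on the residual. The sigma-to-cutoff conversion follows the ISO 16610-21 Gaussian profile filter; applying it to the paper's "Gaussian convolution + cutoff" option is our assumption.

      import numpy as np
      from scipy.ndimage import gaussian_filter1d

      def roughness_Ra_Rq(profile, dx, cutoff=0.8):
          """Ra and Rq of a height profile after removing waviness.

          profile: height samples; dx: sample spacing; cutoff: cutoff
          wavelength (same length units as dx).
          """
          alpha = np.sqrt(np.log(2.0) / np.pi)                 # ISO Gaussian constant
          sigma = alpha * cutoff / np.sqrt(2.0 * np.pi) / dx   # sigma in samples
          waviness = gaussian_filter1d(profile, sigma)
          r = profile - waviness                               # roughness profile
          return np.mean(np.abs(r)), np.sqrt(np.mean(r ** 2))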

  20. An Analysis of the Influence of Flight Parameters in the Generation of Unmanned Aerial Vehicle (UAV) Orthomosaicks to Survey Archaeological Areas.

    PubMed

    Mesas-Carrascosa, Francisco-Javier; Notario García, María Dolores; Meroño de Larriva, Jose Emilio; García-Ferrer, Alfonso

    2016-11-01

    This article describes the configuration and technical specifications of a multi-rotor unmanned aerial vehicle (UAV) using a red-green-blue (RGB) sensor for the acquisition of images needed for the production of orthomosaics to be used in archaeological applications. Several flight missions were programmed as follows: flight altitudes at 30, 40, 50, 60, 70 and 80 m above ground level; two forward and side overlap settings (80%-50% and 70%-40%); and the use, or lack thereof, of ground control points. These settings were chosen to analyze their influence on the spatial quality of orthomosaicked images processed by Inpho UASMaster (Trimble, CA, USA). Changes in illumination over the study area, their impact on flight duration, and how they relate to these settings are also considered. The combined effect of these parameters on spatial quality is presented as well, defining a ratio between the ground sample distance of the UAV images and the expected root mean square error of a UAV orthomosaick. The results indicate that a balance among all the proposed parameters is useful for optimizing mission planning and image processing, with altitude above ground level (AGL) being the main parameter because of its influence on root mean square error (RMSE).
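
    The altitude dependence enters mainly through the ground sample distance (GSD), which scales linearly with flying height for a nadir view. The sketch below shows that relation; the focal length and pixel pitch are placeholder sensor values, not the specifications of the camera used in the study.

      def ground_sample_distance(altitude_m, focal_mm=16.0, pixel_pitch_um=4.8):
          """Nadir GSD in meters per pixel: altitude * pixel pitch / focal length."""
          return altitude_m * (pixel_pitch_um * 1e-6) / (focal_mm * 1e-3)

      # GSD for the tested flight altitudes; the paper relates this quantity
      # to the expected RMSE of the resulting orthomosaick.
      for agl in (30, 40, 50, 60, 70, 80):
          print(f"{agl} m AGL -> {ground_sample_distance(agl):.3f} m/pixel")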

  1. An Analysis of the Influence of Flight Parameters in the Generation of Unmanned Aerial Vehicle (UAV) Orthomosaicks to Survey Archaeological Areas

    PubMed Central

    Mesas-Carrascosa, Francisco-Javier; Notario García, María Dolores; Meroño de Larriva, Jose Emilio; García-Ferrer, Alfonso

    2016-01-01

    This article describes the configuration and technical specifications of a multi-rotor unmanned aerial vehicle (UAV) using a red–green–blue (RGB) sensor for the acquisition of images needed for the production of orthomosaics to be used in archaeological applications. Several flight missions were programmed as follows: flight altitudes at 30, 40, 50, 60, 70 and 80 m above ground level; two forward and side overlap settings (80%–50% and 70%–40%); and the use, or lack thereof, of ground control points. These settings were chosen to analyze their influence on the spatial quality of orthomosaicked images processed by Inpho UASMaster (Trimble, CA, USA). Changes in illumination over the study area, their impact on flight duration, and how they relate to these settings are also considered. The combined effect of these parameters on spatial quality is presented as well, defining a ratio between the ground sample distance of the UAV images and the expected root mean square error of a UAV orthomosaick. The results indicate that a balance among all the proposed parameters is useful for optimizing mission planning and image processing, with altitude above ground level (AGL) being the main parameter because of its influence on root mean square error (RMSE). PMID:27809293

  2. Joint image registration and fusion method with a gradient strength regularization

    NASA Astrophysics Data System (ADS)

    Lidong, Huang; Wei, Zhao; Jun, Wang

    2015-05-01

    Image registration is an essential step for image fusion, and fusion performance can be used to evaluate registration accuracy. We propose a maximum likelihood (ML) approach to joint image registration and fusion, instead of treating them as two independent processes in the conventional way. To improve the visual quality of the fused image, a gradient strength (GS) regularization is introduced into the ML cost function. The GS of the fused image is controllable by setting the target GS value in the regularization term. This is useful because a larger target GS yields a clearer fused image, while a smaller target GS makes the fused image smoother and thus suppresses noise. Hence, the subjective quality of the fused image can be improved whether or not the source images are polluted by noise. We obtain the fused image and the registration parameters successively by minimizing the cost function with an iterative optimization method. Experimental results show that our method is effective for translation, rotation, and scale parameters in the ranges of [-2.0, 2.0] pixels, [-1.1 deg, 1.1 deg], and [0.95, 1.05], respectively, and for noise variances smaller than 300. They also demonstrate that our method yields a more visually pleasing fused image and higher registration accuracy compared with a state-of-the-art algorithm.

  3. Information theoretic analysis of linear shift-invariant edge-detection operators

    NASA Astrophysics Data System (ADS)

    Jiang, Bo; Rahman, Zia-ur

    2012-06-01

    Generally, digital image processing algorithms and image gathering devices are designed separately. Consequently, the performance of digital image processing algorithms is evaluated without taking into account the influence of the image gathering process. However, experiments show that the image gathering process has a profound impact on the performance of digital image processing and the quality of the resulting images. Huck et al. proposed a definitive theoretical analysis of visual communication channels, in which the different parts, such as image gathering, processing, and display, are assessed in an integrated manner using Shannon's information theory. We perform an end-to-end, information-theory-based system analysis to assess linear shift-invariant edge-detection algorithms. We evaluate the performance of the different algorithms as a function of the characteristics of the scene and of the parameters, such as sampling and additive noise, that define the image gathering system. An edge-detection algorithm is regarded as having high performance only if the information rate from the scene to the edge image approaches the maximum possible. This goal can be achieved only by jointly optimizing all processes. Our information-theoretic assessment provides a new tool for comparing different linear shift-invariant edge detectors in a common environment.

  4. On-Ground Processing of Yaogan-24 Remote Sensing Satellite Attitude Data and Verification Using Geometric Field Calibration

    PubMed Central

    Wang, Mi; Fan, Chengcheng; Yang, Bo; Jin, Shuying; Pan, Jun

    2016-01-01

    Satellite attitude accuracy is an important factor affecting the geometric processing accuracy of high-resolution optical satellite imagery. To address the problem that the Yaogan-24 remote sensing satellite's on-board attitude data processing is not accurate enough to meet its image geometry processing requirements, we developed an approach involving on-ground attitude data processing and verification against the digital orthophoto (DOM) and digital elevation model (DEM) of a geometric calibration field. The approach comprises three modules: on-ground processing based on a bidirectional filter, overall weighted smoothing and fitting, and evaluation in the geometric calibration field. Our experimental results demonstrate that the proposed on-ground processing method is both robust and feasible, ensuring the quality of the observation data and the convergence and stability of the parameter estimation model. In addition, both Euler angles and quaternions can be used to build the mathematical fitting model, while the orthogonal polynomial fitting model is more suitable for modeling the attitude parameters. Furthermore, compared to image geometric processing based on the on-board attitude data, the accuracy of uncontrolled and relative geometric positioning of the imagery can be increased by about 50%. PMID:27483287

  5. MIDAS - ESO's new image processing system

    NASA Astrophysics Data System (ADS)

    Banse, K.; Crane, P.; Grosbol, P.; Middleburg, F.; Ounnas, C.; Ponz, D.; Waldthausen, H.

    1983-03-01

    The Munich Image Data Analysis System (MIDAS) is an image processing system whose heart is a pair of VAX 11/780 computers linked together via DECnet. One of these computers, VAX-A, is equipped with 3.5 Mbytes of memory, 1.2 Gbytes of disk storage, and two tape drives with 800/1600 bpi density. The other computer, VAX-B, has 4.0 Mbytes of memory, 688 Mbytes of disk storage, and one tape drive with 1600/6250 bpi density. MIDAS is a command-driven system geared toward the interactive user. The type and number of parameters in a command depend on the particular command invoked. MIDAS is a highly modular system that provides building blocks for the undertaking of more sophisticated applications. Presently, 175 commands are available; these include interactive modification of the color lookup table to enhance various image features, and interactive extraction of subimages.

  6. Computer image analysis in obtaining characteristics of images: greenhouse tomatoes in the process of generating learning sets of artificial neural networks

    NASA Astrophysics Data System (ADS)

    Zaborowicz, M.; Przybył, J.; Koszela, K.; Boniecki, P.; Mueller, W.; Raba, B.; Lewicki, A.; Przybył, K.

    2014-04-01

    The aim of the project was to develop software that extracts the characteristics of a greenhouse tomato from its image. Data gathered during image analysis and processing were used to build learning sets for artificial neural networks. The program can process pictures in JPEG format, acquire statistical information from the picture and export it to an external file. The software is intended for batch analysis of the collected research material, with the obtained information saved as a CSV file. The program analyzes 33 independent parameters to describe the tested image. The application is dedicated to the processing and image analysis of greenhouse tomatoes, but it can also be used to analyze other fruits and vegetables of spherical shape.

  7. Graphical user interface for image acquisition and processing

    DOEpatents

    Goldberg, Kenneth A.

    2002-01-01

    An event-driven, GUI-based image acquisition interface for the IDL programming environment, designed for CCD camera control and image acquisition directly into the IDL environment, where image manipulation and data analysis can be performed, together with a toolbox of real-time analysis applications. Running the image acquisition hardware directly from IDL removes the necessity of first saving images in one program and then importing the data into IDL for analysis in a second step. Bringing the data directly into IDL creates an opportunity for the implementation of IDL image processing and display functions in real time. The program allows control over the available charge-coupled device (CCD) detector parameters, data acquisition, file saving and loading, and image manipulation and processing, all from within IDL. The program is built using IDL's widget libraries to control the on-screen display and user interface.

  8. An automatic alignment tool to improve repeatability of left ventricular function and dyssynchrony parameters in serial gated myocardial perfusion SPECT studies

    PubMed Central

    Zhou, Yanli; Faber, Tracy L.; Patel, Zenic; Folks, Russell D.; Cheung, Alice A.; Garcia, Ernest V.; Soman, Prem; Li, Dianfu; Cao, Kejiang; Chen, Ji

    2013-01-01

    Objective: Left ventricular (LV) function and dyssynchrony parameters measured from serial gated single-photon emission computed tomography (SPECT) myocardial perfusion imaging (MPI) using blinded processing had poorer repeatability than when manual side-by-side processing was used. The objective of this study was to validate whether an automatic alignment tool can reduce the variability of LV function and dyssynchrony parameters in serial gated SPECT MPI. Methods: Thirty patients who had undergone serial gated SPECT MPI were prospectively enrolled in this study. Thirty minutes after the first acquisition, each patient was repositioned and a gated SPECT MPI image was reacquired. The two data sets were first processed blinded from each other by the same technologist in different weeks. These processed data were then realigned by the automatic tool, and manual side-by-side processing was carried out. All processing methods used standard iterative reconstruction and Butterworth filtering. The Emory Cardiac Toolbox was used to measure the LV function and dyssynchrony parameters. Results: The automatic tool failed in one patient, who had a large, severe scar in the inferobasal wall. In the remaining 29 patients, the repeatability of the LV function and dyssynchrony parameters after automatic alignment was significantly improved over blinded processing and was comparable to manual side-by-side processing. Conclusion: The automatic alignment tool can be an alternative to manual side-by-side processing for improving the repeatability of LV function and dyssynchrony measurements by serial gated SPECT MPI. PMID:23211996

  9. Modeling envelope statistics of blood and myocardium for segmentation of echocardiographic images.

    PubMed

    Nillesen, Maartje M; Lopata, Richard G P; Gerrits, Inge H; Kapusta, Livia; Thijssen, Johan M; de Korte, Chris L

    2008-04-01

    The objective of this study was to investigate the use of speckle statistics as a preprocessing step for segmentation of the myocardium in echocardiographic images. Three-dimensional (3D) and biplane image sequences of the left ventricle of two healthy children and one dog (beagle) were acquired. Pixel-based speckle statistics of manually segmented blood and myocardial regions were investigated by fitting various probability density functions (pdf). The statistics of heart muscle and blood could both be optimally modeled by a K-pdf or Gamma-pdf (Kolmogorov-Smirnov goodness-of-fit test). Scale and shape parameters of both distributions could differentiate between blood and myocardium. Local estimation of these parameters was used to obtain parametric images, where the window size was related to the speckle size (5 × 2 speckles). Moment-based and maximum-likelihood estimators were used. Scale parameters were still able to differentiate blood from myocardium; however, smoothing of the edges of anatomical structures occurred. Estimation of the shape parameter required a larger window size, leading to unacceptable blurring. Using these parameters as an input for segmentation resulted in unreliable segmentation. Adaptive mean-squares filtering was then introduced, using the moment-based scale parameter (σ²/μ) of the Gamma-pdf to automatically steer the two-dimensional (2D) local filtering process. This method adequately preserved the sharpness of the edges. In conclusion, a trade-off between preservation of edge sharpness and goodness-of-fit when estimating local shape and scale parameters is evident for parametric images. For this reason, adaptive filtering outperforms parametric imaging for the segmentation of echocardiographic images.
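
    The moment-based estimators used here follow directly from the Gamma distribution: with local mean mu and variance sigma^2, the scale is sigma^2/mu and the shape is mu^2/sigma^2. A windowed version of these estimators can be sketched as below; the window size is a stand-in for the speckle-size-derived window in the study.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def gamma_parameter_maps(img, win=9):
          """Local moment estimates of Gamma-pdf scale and shape parameters."""
          x = img.astype(float)
          mu = uniform_filter(x, win)                           # local mean
          var = np.maximum(uniform_filter(x ** 2, win) - mu ** 2, 1e-12)
          scale = var / np.maximum(mu, 1e-12)                   # sigma^2 / mu
          shape = mu ** 2 / var                                 # mu^2 / sigma^2
          return scale, shape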

  10. Classification of rice grain varieties arranged in scattered and heap fashion using image processing

    NASA Astrophysics Data System (ADS)

    Bhat, Sudhanva; Panat, Sreedath; N, Arunachalam

    2017-03-01

    Inspection and classification of food grains is a manual process in many food grain processing industries. Automating such a process would benefit industries facing a shortage of skilled workers, and machine vision techniques are among the popular approaches for developing such automation. Most existing work on the topic deals with identifying the rice variety by analyzing images of well-separated, isolated rice grains, from which many geometrical features can be extracted. This paper proposes techniques to estimate geometrical parameters from images of scattered as well as heaped rice grains, where the grain boundaries are not clearly identifiable. A convexity-based methodology is proposed to separate touching rice grains in the scattered rice grain images and obtain their geometrical parameters. For the heaped arrangement, a Pixel-Distance Contribution Function is defined and used to find points inside rice grains and then the boundary points of the grains. These points are fitted with the equation of an ellipse to estimate the grains' lengths and breadths. The proposed techniques are applied to images of scattered and heaped rice grains of different varieties, and it is shown that each variety gives a unique set of results.

  11. High-throughput method for ear phenotyping and kernel weight estimation in maize using ear digital imaging.

    PubMed

    Makanza, R; Zaman-Allah, M; Cairns, J E; Eyre, J; Burgueño, J; Pacheco, Ángela; Diepenbrock, C; Magorokosho, C; Tarekegne, A; Olsen, M; Prasanna, B M

    2018-01-01

    Grain yield and ear and kernel attributes can help in understanding the performance of maize plants under different environmental conditions and can be used in the variety development process to address farmers' preferences. These parameters are, however, still laborious and expensive to measure. A low-cost ear digital imaging method was developed that provides estimates of ear and kernel attributes, i.e., ear number and size, kernel number and size, as well as kernel weight, from photos of ears harvested from field trial plots. The image processing method uses a script that runs in batch mode on ImageJ, an open-source software package. Kernel weight was estimated using the total kernel number, derived from the number of kernels visible on the image, and the average kernel size. The data showed good agreement in terms of accuracy and precision between ground-truth measurements and data generated through image processing. Broad-sense heritability of the estimated parameters was in the range of, or higher than, that of measured grain weight. Limitations of the method for kernel weight estimation are discussed. The method developed in this work provides an opportunity to significantly reduce the cost of selection in the breeding process, especially for resource-constrained crop improvement programs, and can be used to learn more about the genetic bases of grain yield determinants.

  12. Galaxy evolution in the densest environments: HST imaging

    NASA Astrophysics Data System (ADS)

    Jorgensen, Inger

    2013-10-01

    We propose to process in a consistent fashion all available HST/ACS and WFC3 imaging of seven rich clusters of galaxies at z=1.2-1.6. The clusters are part of our larger project aimed at constraining models for galaxy evolution in dense environments from observations of stellar populations in rich z=1.2-2 galaxy clusters. The main objective is to establish the star formation (SF) history and structural evolution over this epoch, during which large changes in SF rates and galaxy structure are expected to take place in cluster galaxies. The observational data required to meet our main objective are deep HST imaging and high-S/N spectroscopy of individual cluster members. The HST imaging already exists for the seven rich clusters at z=1.2-1.6 included in this archive proposal. However, the data have not been consistently processed to derive colors, magnitudes, sizes and morphological parameters for all potential cluster members bright enough to be suitable for spectroscopic observations with 8-m class telescopes. We propose to carry out this processing and make all derived parameters publicly available. We will use the parameters derived from the HST imaging to (1) study the structural evolution of the galaxies, (2) select clusters and galaxies for spectroscopic observations, and (3) use the photometry and spectroscopy together for a unified analysis aimed at the SF history and structural changes. The analysis will also utilize data from the Gemini/HST Cluster Galaxy Project, which covers rich clusters at z=0.2-1.0 and for which we have similar HST imaging and high-S/N spectroscopy available.

  13. A real-time MTFC algorithm of space remote-sensing camera based on FPGA

    NASA Astrophysics Data System (ADS)

    Zhao, Liting; Huang, Gang; Lin, Zhe

    2018-01-01

    A real-time MTF compensation (MTFC) algorithm for a space remote-sensing camera, implemented on an FPGA, was designed. The algorithm provides real-time image processing to enhance image clarity while the remote-sensing camera is running on-orbit. The image restoration algorithm adopts a modular design. The on-orbit MTF measurement module calculates the edge spread function (ESF), the line spread function, the ESF difference operation, the normalized MTF and the MTFC parameters. The MTFC image filtering and noise suppression module implements the filtering algorithm and effectively suppresses noise. System Generator was used to design the image processing algorithms, simplifying the system design structure and the redesign process. Image gray gradient, dot sharpness, edge contrast and medium-high frequency content were enhanced, while the SNR of the restored image decreased by less than 1 dB compared to the original image. The image restoration system can be widely used in various fields.
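
    The MTF measurement chain described for the on-orbit module (edge spread function to line spread function to normalized MTF) can be summarized in a few lines of plain numpy; this is a generic sketch of that computation, not the FPGA implementation itself.

      import numpy as np

      def mtf_from_esf(esf):
          """ESF -> LSF (derivative) -> |FFT| normalized to 1 at DC."""
          lsf = np.diff(esf)                       # LSF is the ESF derivative
          lsf = lsf * np.hanning(lsf.size)         # window to limit noise leakage
          mtf = np.abs(np.fft.rfft(lsf))
          return mtf / mtf[0]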

  14. A physiology-based parametric imaging method for FDG-PET data

    NASA Astrophysics Data System (ADS)

    Scussolini, Mara; Garbarino, Sara; Sambuceti, Gianmario; Caviglia, Giacomo; Piana, Michele

    2017-12-01

    Parametric imaging is a compartmental approach that processes nuclear imaging data to estimate the spatial distribution of the kinetic parameters governing tracer flow. The present paper proposes a novel and efficient computational method for parametric imaging which is potentially applicable to several compartmental models of diverse complexity and which is effective in determining the parametric maps of all kinetic coefficients. We consider applications to [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) data and analyze the two-compartment catenary model describing the standard FDG metabolization by a homogeneous tissue and the three-compartment non-catenary model representing renal physiology. We show uniqueness theorems for both models. The proposed imaging method starts from the reconstructed FDG-PET images of tracer concentration and preliminarily applies image processing algorithms for noise reduction and image segmentation. The optimization procedure solves pixel-wise the non-linear inverse problem of determining the kinetic parameters from dynamic concentration data through a regularized Gauss-Newton iterative algorithm. The reliability of the method is validated against synthetic data for the two-compartment system, and against experimental real data of murine models for the renal three-compartment system.
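
    The pixel-wise optimization can be illustrated with a generic regularized Gauss-Newton step; the forward model, its Jacobian, and the regularization weight below are placeholders standing in for the paper's compartment equations.

      import numpy as np

      def gauss_newton(residual, jacobian, k0, lam=1e-2, iters=20):
          """Fit one pixel's kinetic parameters k to its time-activity data.

          residual(k): model(t; k) - measured concentration; jacobian(k): its
          Jacobian; lam: Tikhonov regularization weight.
          """
          k = np.asarray(k0, dtype=float)
          for _ in range(iters):
              r, J = residual(k), jacobian(k)
              A = J.T @ J + lam * np.eye(k.size)   # regularized normal equations
              k = k - np.linalg.solve(A, J.T @ r)
              k = np.maximum(k, 0.0)               # rate constants stay non-negative
          return k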

  15. A Procedure for High Resolution Satellite Imagery Quality Assessment

    PubMed Central

    Crespi, Mattia; De Vendictis, Laura

    2009-01-01

    Data products generated from High Resolution Satellite Imagery (HRSI) are routinely evaluated during the so-called in-orbit test period, in order to verify if their quality fits the desired features and, if necessary, to obtain the image correction parameters to be used at the ground processing center. Nevertheless, it is often useful to have tools to evaluate image quality also at the final user level. Image quality is defined by some parameters, such as the radiometric resolution and its accuracy, represented by the noise level, and the geometric resolution and sharpness, described by the Modulation Transfer Function (MTF). This paper proposes a procedure to evaluate these image quality parameters; the procedure was implemented in a suitable software and tested on high resolution imagery acquired by the QuickBird, WorldView-1 and Cartosat-1 satellites. PMID:22412312

  16. Complex Spiral Structure in the HD 100546 Transitional Disk as Revealed by GPI and MagAO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Follette, Katherine B.; Macintosh, Bruce; Mullen, Wyatt

    We present optical and near-infrared high-contrast images of the transitional disk HD 100546 taken with the Magellan Adaptive Optics system (MagAO) and the Gemini Planet Imager (GPI). GPI data include both polarized intensity and total intensity imagery, and MagAO data are taken in Simultaneous Differential Imaging mode at Hα. The new GPI H-band total intensity data represent a significant enhancement in sensitivity and field rotation compared to previous data sets and enable a detailed exploration of substructure in the disk. The data are processed with a variety of differential imaging techniques (polarized, angular, reference, and simultaneous differential imaging) in an attempt to identify the disk structures that are most consistent across wavelengths, processing techniques, and algorithmic parameters. The inner disk cavity at 15 au is clearly resolved in multiple data sets, as are a variety of spiral features. While the cavity and spiral structures are identified at levels significantly distinct from the neighboring regions of the disk under several algorithms and with a range of algorithmic parameters, emission at the location of HD 100546 "c" varies from point-like under aggressive algorithmic parameters to a smooth continuous structure with conservative parameters, and is consistent with disk emission. Features identified in the HD 100546 disk bear qualitative similarity to computational models of a moderately inclined two-armed spiral disk, where projection effects and wrapping of the spiral arms around the star result in a number of truncated spiral features in forward-modeled images.

  17. A new method of cardiographic image segmentation based on grammar

    NASA Astrophysics Data System (ADS)

    Hamdi, Salah; Ben Abdallah, Asma; Bedoui, Mohamed H.; Alimi, Adel M.

    2011-10-01

    The measurement of the most common ultrasound parameters, such as aortic area, mitral area and left ventricle (LV) volume, requires the delineation of the organ in order to estimate the area. In terms of medical image processing this translates into the need to segment the image and define the contours as accurately as possible. The aim of this work is to segment an image and make an automated area estimation based on grammar. The entity "language" will be projected to the entity "image" to perform structural analysis and parsing of the image. We will show how the idea of segmentation and grammar-based area estimation is applied to real problems of cardio-graphic image processing.

  18. Examination of the semi-automatic calculation technique of vegetation cover rate by digital camera images.

    NASA Astrophysics Data System (ADS)

    Takemine, S.; Rikimaru, A.; Takahashi, K.

    Rice is one of the staple foods in the world. High-quality rice production requires periodically collecting rice growth data to control the growth of the rice. Plant height, stem number and leaf color are well-known parameters indicating rice growth, and a rice growth diagnosis method based on these parameters is used operationally in Japan, although collecting these parameters by field survey requires a lot of labor and time. Recently, a labor-saving method for rice growth diagnosis has been proposed, based on the vegetation cover rate of rice. The vegetation cover rate is calculated by discriminating rice plant areas in a digital camera image photographed in the nadir direction. Discrimination of rice plant areas in the image was done by automatic binarization processing. However, with a calculation method that depends on automatic binarization alone, the vegetation cover rate may decrease even as the rice grows. In this paper, a calculation method for the vegetation cover rate is proposed that is based on automatic binarization and refers to growth hysteresis information. For several images obtained by field survey during the rice growing season, the vegetation cover rate was calculated by the conventional automatic binarization processing and by the proposed method, and the results of both methods were compared with reference values obtained by visual interpretation. The comparison showed that the accuracy of discriminating rice plant areas was increased by the proposed method.

  19. Image processing analysis of geospatial uav orthophotos for palm oil plantation monitoring

    NASA Astrophysics Data System (ADS)

    Fahmi, F.; Trianda, D.; Andayani, U.; Siregar, B.

    2018-03-01

    An unmanned aerial vehicle (UAV) is one of the tools that can be used to monitor a palm oil plantation remotely. With geospatial orthophotos, it is possible to identify which parts of the plantation are fertile, where the planted crops grow well; which parts are less fertile, where the crops grow but not perfectly; and which parts of the plantation field are not growing at all. This information can be obtained quickly with the use of UAV photos. In this study, we applied image processing algorithms to the orthophotos for more accurate and faster analysis. The resulting orthophoto images were processed using Matlab, classifying fertile, infertile, and dead palm oil plants by means of the Gray Level Co-occurrence Matrix (GLCM) method. The GLCM was computed for four directions, at 0°, 45°, 90°, and 135°. From results obtained with 30 image samples, it was found that good system accuracy can be achieved using the Contrast, Correlation, Energy, and Homogeneity features extracted from the matrix as parameters.
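
    The GLCM feature extraction step maps naturally onto scikit-image (the study used Matlab; the Python sketch below is an assumed equivalent). The co-occurrence matrix is built for the four stated directions, and the four properties are averaged over them.

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops

      def glcm_features(gray_uint8):
          """Contrast, correlation, energy and homogeneity over 0/45/90/135 deg."""
          angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
          glcm = graycomatrix(gray_uint8, distances=[1], angles=angles,
                              levels=256, symmetric=True, normed=True)
          return {p: float(graycoprops(glcm, p).mean())   # mean over directions
                  for p in ("contrast", "correlation", "energy", "homogeneity")}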

  20. Non-stationary noise estimation using dictionary learning and Gaussian mixture models

    NASA Astrophysics Data System (ADS)

    Hughes, James M.; Rockmore, Daniel N.; Wang, Yang

    2014-02-01

    Stationarity of the noise distribution is a common assumption in image processing. This assumption greatly simplifies denoising estimators and other model parameters, and consequently assuming stationarity is often a matter of convenience rather than an accurate model of noise characteristics. The problematic nature of this assumption is exacerbated in real-world contexts, where noise is often highly non-stationary and can possess time- and space-varying characteristics. Regardless of model complexity, estimating the parameters of noise distributions in digital images is a difficult task, and estimates are often based on heuristic assumptions. Recently, sparse Bayesian dictionary learning methods were shown to produce accurate estimates of the level of additive white Gaussian noise in images with minimal assumptions. We show that a similar model is capable of accurately modeling certain kinds of non-stationary noise processes, allowing for space-varying noise in images to be estimated, detected, and removed. We apply this modeling concept to several types of non-stationary noise and demonstrate the model's effectiveness on real-world problems, including denoising and segmentation of images according to noise characteristics, which has applications in image forensics.

  1. Improving fault image by determination of optimum seismic survey parameters using ray-based modeling

    NASA Astrophysics Data System (ADS)

    Saffarzadeh, Sadegh; Javaherian, Abdolrahim; Hasani, Hossein; Talebi, Mohammad Ali

    2018-06-01

    In complex structures such as faults, salt domes and reefs, specifying the survey parameters is more challenging and critical owing to the complicated wave-field behavior involved in such structures. In the petroleum industry, detecting faults has become crucial for assessing reservoir potential, since faults can act as traps for hydrocarbons. In this regard, seismic survey modeling is employed to construct a model close to the real structure and to obtain realistic synthetic seismic data. Seismic modeling software, the velocity model and parameters pre-determined by conventional methods enable a seismic survey designer to run a shot-by-shot virtual survey operation. A reliable velocity model of the structures can be constructed by integrating 2D seismic data, geological reports and well information. The effects of various survey designs can be investigated through the analysis of illumination maps and flower plots, and seismic processing of the synthetic data output can characterize the target image under different survey parameters. Seismic modeling is therefore one of the most economical ways to establish and test optimum acquisition parameters for obtaining the best image when dealing with complex geological structures. The primary objective of this study is to design a proper 3D seismic survey orientation for imaging fault zone structures through ray-tracing seismic modeling. The results show that a seismic survey designer can enhance the image of fault planes in a seismic section by utilizing the proposed modeling and processing approach.

  2. Visually enhanced CCTV digital surveillance utilizing Intranet and Internet.

    PubMed

    Ozaki, Nobuyuki

    2002-07-01

    This paper describes a solution for integrated plant supervision utilizing closed circuit television (CCTV) digital surveillance. Three basic requirements are first addressed as the platform of the system, with a discussion of suitable video compression. The system configuration is described in blocks. The system provides surveillance functionality (real-time monitoring) and process analysis functionality (a troubleshooting tool). The paper describes the formulation of a practical performance design for determining the various encoder parameters. It also introduces image processing techniques for enhancing the original CCTV digital image to lessen the burden on operators. Screenshots are provided for the surveillance functionality. For process analysis, an image searching filter supported by image processing techniques is explained with screenshots. Multimedia surveillance, i.e., the merger with process data surveillance or the SCADA system, is also explained.

  3. Development of an inexpensive optical method for studies of dental erosion process in vitro

    NASA Astrophysics Data System (ADS)

    Nasution, A. M. T.; Noerjanto, B.; Triwanto, L.

    2008-09-01

    Teeth play important roles in the digestion of food, in supporting the facial structure, and in the articulation of speech. Abnormalities in tooth structure can be initiated by an erosion process, due to diet or beverage consumption, that leads to destruction affecting the teeth's functionality. Research into the erosion processes that lead to such abnormalities is important for care and prevention purposes, and accurate measurement methods capable of quantifying the degree of dental destruction are necessary research tools. In this work, an inexpensive optical method for studying the dental erosion process is developed. It is based on extracting parameters from 3D dental visual information. The 3D visual image is obtained by reconstruction from multiple lateral 2D projections captured from many angles. Using a simple stepper motor and a pocket digital camera, a sequence of multi-projection 2D images of a premolar tooth is obtained. These images are then reconstructed to produce a 3D image, which is useful for quantifying the related dental erosion parameters. The quantification is based on the shrinkage of the dental volume as well as on surface properties altered by the erosion process. The quantification results are correlated with measurements, by atomic absorption spectrometry, of the calcium dissolved from the tooth. The proposed method would be useful as a visualization tool in engineering, dentistry and medical research, as well as for educational purposes.

  4. Influence of additive laser manufacturing parameters on surface using density of partially melted particles

    NASA Astrophysics Data System (ADS)

    Rosa, Benoit; Brient, Antoine; Samper, Serge; Hascoët, Jean-Yves

    2016-12-01

    Mastering additive laser manufacturing surfaces is a real challenge that would allow functional surfaces to be obtained without finishing. Direct Metal Deposition (DMD) surfaces are composed of directional and chaotic textures that are directly linked to the process principles. The aim of this work is to obtain desired surface topographies by mastering the operating process parameters. Based on an experimental investigation, the influence of operating parameters on the surface finish has been modeled. Topography parameters and multi-scale analysis have been used to characterize the DMD surfaces. This study also proposes a methodology to characterize the chaotic DMD texture through topography filtering and 3D image processing. In parallel, a new parameter is proposed: the density of particles (D_p). Finally, the study proposes a regression model relating the process parameters to the density-of-particles parameter.

  5. Celestial Object Imaging Model and Parameter Optimization for an Optical Navigation Sensor Based on the Well Capacity Adjusting Scheme.

    PubMed

    Wang, Hao; Jiang, Jie; Zhang, Guangjun

    2017-04-21

    The simultaneous extraction of optical navigation measurements from a target celestial body and star images is essential for autonomous optical navigation. Generally, a single optical navigation sensor cannot simultaneously image the target celestial body and stars well-exposed because their irradiance difference is generally large. Multi-sensor integration or complex image processing algorithms are commonly utilized to solve the said problem. This study analyzes and demonstrates the feasibility of simultaneously imaging the target celestial body and stars well-exposed within a single exposure through a single field of view (FOV) optical navigation sensor using the well capacity adjusting (WCA) scheme. First, the irradiance characteristics of the celestial body are analyzed. Then, the celestial body edge model and star spot imaging model are established when the WCA scheme is applied. Furthermore, the effect of exposure parameters on the accuracy of star centroiding and edge extraction is analyzed using the proposed model. Optimal exposure parameters are also derived by conducting Monte Carlo simulation to obtain the best performance of the navigation sensor. Finally, laboratorial and night sky experiments are performed to validate the correctness of the proposed model and optimal exposure parameters.

  6. Celestial Object Imaging Model and Parameter Optimization for an Optical Navigation Sensor Based on the Well Capacity Adjusting Scheme

    PubMed Central

    Wang, Hao; Jiang, Jie; Zhang, Guangjun

    2017-01-01

    The simultaneous extraction of optical navigation measurements from a target celestial body and star images is essential for autonomous optical navigation. Generally, a single optical navigation sensor cannot simultaneously image the target celestial body and stars well-exposed because their irradiance difference is generally large. Multi-sensor integration or complex image processing algorithms are commonly utilized to solve the said problem. This study analyzes and demonstrates the feasibility of simultaneously imaging the target celestial body and stars well-exposed within a single exposure through a single field of view (FOV) optical navigation sensor using the well capacity adjusting (WCA) scheme. First, the irradiance characteristics of the celestial body are analyzed. Then, the celestial body edge model and star spot imaging model are established when the WCA scheme is applied. Furthermore, the effect of exposure parameters on the accuracy of star centroiding and edge extraction is analyzed using the proposed model. Optimal exposure parameters are also derived by conducting Monte Carlo simulation to obtain the best performance of the navigation sensor. Finally, laboratorial and night sky experiments are performed to validate the correctness of the proposed model and optimal exposure parameters. PMID:28430132

  7. Image quality enhancement for skin cancer optical diagnostics

    NASA Astrophysics Data System (ADS)

    Bliznuks, Dmitrijs; Kuzmina, Ilona; Bolocko, Katrina; Lihachev, Alexey

    2017-12-01

    This research presents image quality analysis and enhancement proposals in the biophotonics area. The sources of image quality problems are reviewed and analyzed. The problems with the greatest impact in biophotonics are analyzed in terms of a specific task: skin cancer diagnostics. The results point out that the main problem for skin cancer analysis is uneven skin illumination. Since illumination problems often cannot be prevented, the paper proposes an image post-processing algorithm: low-frequency filtering. Practical results show improved diagnostic results after using the proposed filter, while the filter does not reduce diagnostic quality for images without illumination defects. The current filtering algorithm requires empirical tuning of the filter parameters. Further work is needed to test the algorithm in other biophotonic applications and to propose automatic filter parameter selection.
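
    One plausible realization of such a low-frequency filter, offered as an assumption rather than the paper's exact algorithm, is a flat-field style correction: estimate the slowly varying illumination with a heavy Gaussian blur and divide it out. The sigma value is the kind of empirically tuned parameter the abstract refers to.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def remove_low_frequency_illumination(img, sigma=60.0):
          """Divide out a blurred illumination estimate, preserving mean brightness."""
          x = img.astype(float)
          background = gaussian_filter(x, sigma) + 1e-6   # low-frequency component
          corrected = x / background * background.mean()
          return np.clip(corrected, 0, 255).astype(np.uint8)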

  8. Making the PACS workstation a browser of image processing software: a feasibility study using inter-process communication techniques.

    PubMed

    Wang, Chunliang; Ritter, Felix; Smedby, Orjan

    2010-07-01

    To enhance the functional expandability of a picture archiving and communication system (PACS) workstation and to facilitate the integration of third-party image-processing modules, we propose a browser-server style method. In the proposed solution, the PACS workstation shows the front-end user interface defined in an XML file while the image processing software runs in the background as a server. Inter-process communication (IPC) techniques allow an efficient exchange of image data, parameters, and user input between the PACS workstation and the stand-alone image-processing software. Using a predefined communication protocol, the PACS workstation developer or image processing software developer does not need detailed information about the other system, but will still be able to achieve seamless integration between the two systems, and the IPC procedure is totally transparent to the final user. A browser-server style solution was built between OsiriX (PACS workstation software) and MeVisLab (image-processing software). Ten example image-processing modules were easily added to OsiriX by converting existing MeVisLab image processing networks. Image data transfer using shared memory added <10 ms of processing time, while the other IPC methods cost 1-5 s in our experiments. The browser-server style communication based on IPC techniques is an appealing method that allows PACS workstation developers and image processing software developers to cooperate while focusing on different interests.

  9. Parallel and Efficient Sensitivity Analysis of Microscopy Image Segmentation Workflows in Hybrid Systems

    PubMed Central

    Barreiros, Willian; Teodoro, George; Kurc, Tahsin; Kong, Jun; Melo, Alba C. M. A.; Saltz, Joel

    2017-01-01

    We investigate efficient sensitivity analysis (SA) of algorithms that segment and classify image features in a large dataset of high-resolution images. Algorithm SA is the process of evaluating variations of methods and parameter values to quantify differences in the output. An SA can be very compute-demanding because it requires re-processing the input dataset several times with different parameters to assess variations in output. In this work, we introduce strategies to efficiently speed up SA via runtime optimizations targeting distributed hybrid systems and reuse of computations from runs with different parameters. We evaluate our approach using a cancer image analysis workflow on a hybrid cluster with 256 nodes, each with an Intel Phi and a dual-socket CPU. The SA attained a parallel efficiency of over 90% on 256 nodes. The cooperative execution using the CPUs and the Phi available in each node with smart task assignment strategies resulted in an additional speedup of about 2×. Finally, multi-level computation reuse led to an additional speedup of up to 2.46× on the parallel version. The level of performance attained with the proposed optimizations will allow the use of SA in large-scale studies. PMID:29081725

  10. Numerical Approaches about the Morphological Description Parameters for the Manganese Deposits on the Magnesite Ore Surface

    NASA Astrophysics Data System (ADS)

    Bayirli, Mehmet; Ozbey, Tuba

    2013-07-01

    Black deposits usually found on the surface of magnesite ore or limestone, as well as red deposits in quartz veins, are known as natural manganese dendrites. Depending on their geometrical structure, they may take various fractal shapes. The characteristic origins of these morphologies have rarely been studied by means of numerical analysis. Here, digital images of the magnesite ore surface are acquired with a scanner and converted to 8-bit binary images in bitmap format. As a next step, the morphological description parameters of the manganese dendrites are computed by means of scaling methods, yielding occupied fractions, fractal dimensions, divergence ratios, and critical scaling exponents. The fractal dimension and the scaling range depend on the fraction of occupied particles. Morphological description parameters can thus be determined from the geometrical evaluation of the natural manganese dendrites, independently of the formation process. The formation of manganese dendrites may also illustrate stochastic selection processes in nature. These results may therefore be useful for understanding the parameters of deposits in quartz veins in geophysics.
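
    Among the scaling methods mentioned, the fractal dimension of a binary deposit image is commonly estimated by box counting: count occupied boxes at a series of box sizes and fit a line in log-log space. The sketch below shows that standard estimator; it is one way to realize such an analysis, not the paper's exact procedure.

      import numpy as np

      def box_counting_dimension(binary):
          """Fractal dimension of a binary image from the log-log box-count slope."""
          n = 2 ** int(np.floor(np.log2(min(binary.shape))))
          img = binary[:n, :n].astype(bool)                  # crop to a power of two
          sizes, counts = [], []
          size = n
          while size >= 2:
              blocks = img.reshape(n // size, size, n // size, size)
              occupied = int(blocks.any(axis=(1, 3)).sum())  # boxes touching deposit
              sizes.append(size)
              counts.append(max(occupied, 1))
              size //= 2
          slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
          return -slope                                      # N(s) ~ s^(-D)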

  11. Mueller matrix imaging and analysis of cancerous cells

    NASA Astrophysics Data System (ADS)

    Fernández, A.; Fernández-Luna, J. L.; Moreno, F.; Saiz, J. M.

    2017-08-01

    Imaging polarimetry is a focus of increasing interest in diagnostic medicine because of its non-invasive nature and its potential for recognizing abnormal tissues. However, handling polarimetric images is not an easy task, and different intermediate steps have been proposed to introduce physical parameters that may be helpful to interpret results. In this work, transmission Mueller matrices (MM) corresponding to cancer cell samples have been experimentally obtained, and three different transformations have been applied: MM-Polar Decomposition, MM-Transformation and MM-Differential Decomposition. Special attention has been paid to diattenuation as a sensitive parameter to identify apoptosis processes induced by cisplatin and etoposide.

  12. Utilizing remote sensing of thematic mapper data to improve our understanding of estuarine processes and their influence on the productivity of estuarine-dependent fisheries

    NASA Technical Reports Server (NTRS)

    Browder, Joan A.; May, L. Nelson, Jr.; Rosenthal, Alan; Baumann, Robert H.; Gosselink, James G.

    1987-01-01

    A stochastic spatial computer model addressing coastal resource problems in Louisiana is being refined and validated using Thematic Mapper (TM) imagery. The TM images of brackish marsh sites were processed, and data were tabulated on spatial parameters from TM images of the salt marsh sites. The Fisheries Image Processing System (FIPS) was used to analyze the TM scene. Activities concentrated on improving the structure of the model and on developing a structure and methodology for calibrating the model with spatial-pattern data from the TM imagery.

  13. High-throughput imaging of heterogeneous cell organelles with an X-ray laser (CXIDB ID 25)

    DOE Data Explorer

    Hantke, Max F.

    2014-11-17

    Preprocessed detector images that were used for the paper "High-throughput imaging of heterogeneous cell organelles with an X-ray laser". The CXI file contains the entire recorded data - including both hits and blanks. It also includes down-sampled images and LCLS machine parameters. Additionally, the Cheetah configuration file is attached that was used to create the pre-processed data.

  14. Color image lossy compression based on blind evaluation and prediction of noise characteristics

    NASA Astrophysics Data System (ADS)

    Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Egiazarian, Karen O.; Lepisto, Leena

    2011-03-01

    The paper deals with adaptive JPEG lossy compression of color images produced by digital cameras, with adaptation to the noise characteristics and blur estimated for each given image. The dominant factor degrading image quality is determined in a blind manner, and the characteristics of this dominant factor are then estimated. Finally, a scaling factor that determines the quantization steps for the default JPEG table is adaptively selected. Within this general framework, two possible strategies are considered. The first performs blind estimation on the image after all operations in the digital image processing chain, just before compression of the raster image. The second is based on predicting the noise and blur parameters from analysis of the RAW image, under quite general assumptions about the characteristics of the transformations the image will undergo at later processing stages. The advantages of both strategies are discussed. The first strategy provides more accurate estimation and a larger benefit in image compression ratio (CR) compared to the super-high quality (SHQ) mode; however, it is more complicated and requires more resources. The second strategy is simpler but less beneficial. The proposed approaches are tested on many real-life color images acquired by digital cameras and are shown to provide more than a twofold increase in average CR compared to the SHQ mode, without introducing visible distortions with respect to SHQ-compressed images.

  15. A new template matching method based on contour information

    NASA Astrophysics Data System (ADS)

    Cai, Huiying; Zhu, Feng; Wu, Qingxiao; Li, Sicong

    2014-11-01

    Template matching is a significant approach in machine vision due to its effectiveness and robustness. However, most template matching methods are so time consuming that they cannot be used in many real-time applications. Closed-contour matching is a popular class of template matching methods. This paper presents a new closed-contour template matching method suitable for two-dimensional objects. A coarse-to-fine searching strategy is used to improve the matching efficiency, and a partial computation elimination scheme is proposed to further speed up the searching process. The method consists of offline model construction and online matching. In the process of model construction, triples and a distance image are obtained from the template image. A certain number of triples, each composed of three points, are created from the contour information extracted from the template image; the three points are selected so that they divide the template contour into three equal parts. The distance image is obtained by distance transform: each point on the distance image represents the nearest distance between the current point and the points on the template contour. During matching, triples of the searching image are created with the same rule as the triples of the model. Through the similarity between triangles, which is invariant to rotation, translation and scaling, the triples corresponding to the triples of the model are found. From these we obtain the initial RST (rotation, translation and scaling) parameters mapping the searching contour to the template contour. To speed up the searching process, the points on the searching contour are sampled to reduce the number of triples. To verify the RST parameters, the searching contour is projected into the distance image, and the mean distance can be computed rapidly by simple operations of addition and multiplication. In the fine searching process, the initial RST parameters are refined over a discrete neighborhood to obtain the final accurate pose of the object. Experimental results show that the proposed method is reasonable and efficient, and can be used in many real-time applications.
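
    A minimal sketch of the verification step described above, assuming contours are given as (N, 2) arrays of (x, y) points; function names are illustrative, and scipy's Euclidean distance transform stands in for whatever transform the authors used:

    ```python
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def build_distance_image(contour_pts, shape):
        """Distance image: each pixel holds the distance to the nearest
        template contour point (computed via Euclidean distance transform)."""
        mask = np.ones(shape, dtype=bool)
        rows = np.clip(contour_pts[:, 1].astype(int), 0, shape[0] - 1)
        cols = np.clip(contour_pts[:, 0].astype(int), 0, shape[1] - 1)
        mask[rows, cols] = False          # zero distance on the contour itself
        return distance_transform_edt(mask)

    def mean_contour_distance(dist_img, pts, angle, scale, tx, ty):
        """Apply an RST transform to candidate contour points and score the
        match as the mean distance-image value at the transformed locations."""
        c, s = np.cos(angle), np.sin(angle)
        R = scale * np.array([[c, -s], [s, c]])
        q = pts @ R.T + np.array([tx, ty])
        rows = np.clip(q[:, 1].astype(int), 0, dist_img.shape[0] - 1)
        cols = np.clip(q[:, 0].astype(int), 0, dist_img.shape[1] - 1)
        return dist_img[rows, cols].mean()   # low mean distance = good match
    ```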

  16. The artificial object detection and current velocity measurement using SAR ocean surface images

    NASA Astrophysics Data System (ADS)

    Alpatov, Boris; Strotov, Valery; Ershov, Maksim; Muraviev, Vadim; Feldman, Alexander; Smirnov, Sergey

    2017-10-01

    Because water surfaces cover wide areas, remote sensing is the most appropriate way of obtaining information about the ocean environment for vessel tracking, security purposes, ecological studies and other applications. Processing of synthetic aperture radar (SAR) images is extensively used for control and monitoring of the ocean surface. Image data can be acquired from Earth observation satellites such as TerraSAR-X, ERS, and COSMO-SkyMed. Thus, SAR image processing can be used to solve many problems arising in this field of research. This paper discusses some of them, including ship detection, oil pollution control and ocean currents mapping. Due to the complexity of the problem, several specialized algorithms need to be developed. The oil spill detection algorithm consists of the following main steps: image preprocessing, detection of dark areas, parameter extraction and classification. The ship detection algorithm consists of the following main steps: prescreening, land masking, image segmentation combined with parameter measurement, ship orientation estimation and object discrimination. The proposed approach to ocean currents mapping is based on the Doppler effect. The results of computer modeling on real SAR images are presented. Based on these results, it is concluded that the proposed approaches can be used in maritime applications.

  17. Diffusion Weighted Image Denoising Using Overcomplete Local PCA

    PubMed Central

    Manjón, José V.; Coupé, Pierrick; Concha, Luis; Buades, Antonio; Collins, D. Louis; Robles, Montserrat

    2013-01-01

    Diffusion Weighted Images (DWI) normally show a low Signal to Noise Ratio (SNR) due to the presence of noise from the measurement process, which complicates and biases the estimation of quantitative diffusion parameters. In this paper, a new denoising methodology is proposed that takes into consideration the multicomponent nature of multi-directional DWI datasets such as those employed in diffusion imaging. This new filter reduces random noise in multicomponent DWI by locally shrinking less significant Principal Components using an overcomplete approach. The proposed method is compared with state-of-the-art methods using synthetic and real clinical MR images, showing improved performance in terms of denoising quality and estimation of diffusion parameters. PMID:24019889
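
    The core of such a filter can be sketched as follows for a single slice, assuming an (H, W, K) stack of K diffusion-weighted components; hard thresholding of principal-component variances stands in for the paper's shrinkage rule, and the noise estimate is deliberately crude:

    ```python
    import numpy as np

    def lpca_denoise(dwi, patch=4, tau=2.3):
        """Overcomplete local PCA denoising sketch for a multicomponent image.

        dwi : (H, W, K) array, K diffusion-weighted components.
        Patches overlap (stride 1), so each pixel is estimated many times and
        the estimates are averaged -- the 'overcomplete' part of the method.
        tau scales a noise-level threshold below which eigenvalues are zeroed.
        """
        H, W, K = dwi.shape
        out = np.zeros_like(dwi)
        weight = np.zeros((H, W, 1))
        sigma2 = np.var(dwi[:8, :8])          # crude noise estimate from a corner
        for i in range(H - patch + 1):
            for j in range(W - patch + 1):
                X = dwi[i:i+patch, j:j+patch].reshape(-1, K)   # rows = pixels
                mu = X.mean(axis=0)
                U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
                lam = s**2 / X.shape[0]                        # PC variances
                s[lam < tau * sigma2] = 0.0                    # shrink weak PCs
                Xd = U @ np.diag(s) @ Vt + mu
                out[i:i+patch, j:j+patch] += Xd.reshape(patch, patch, K)
                weight[i:i+patch, j:j+patch] += 1.0
        return out / weight
    ```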

  18. Mode extraction on wind turbine blades via phase-based video motion estimation

    NASA Astrophysics Data System (ADS)

    Sarrafi, Aral; Poozesh, Peyman; Niezrecki, Christopher; Mao, Zhu

    2017-04-01

    In recent years, image processing techniques have been applied increasingly often for structural dynamics identification, characterization, and structural health monitoring. Although it is a non-contact and full-field measurement method, image processing still has a long way to go to outperform conventional sensing instruments (i.e., accelerometers, strain gauges, laser vibrometers, etc.). However, the technologies associated with image processing are developing rapidly and gaining more attention in a variety of engineering applications, including structural dynamics identification and modal analysis. Among numerous motion estimation and image-processing methods, phase-based video motion estimation is considered one of the most efficient in terms of computational cost and noise robustness. In this paper, phase-based video motion estimation is adopted for structural dynamics characterization on a 2.3-meter-long Skystream wind turbine blade, and the modal parameters (natural frequencies, operating deflection shapes) are extracted. The phase-based video processing adopted in this paper provides reliable full-field 2-D motion information, which is beneficial for manufacturing certification and model updating at the design stage. The approach is demonstrated by processing data on a full-scale commercial structure (i.e., a wind turbine blade) with complex geometry and properties, and the results obtained show good correlation with the modal parameters extracted from accelerometer measurements, especially for the first four bending modes, which have significant importance in blade characterization.
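
    The central idea of phase-based motion estimation, that the local phase of a band-passed image shifts in proportion to sub-pixel motion, can be sketched as follows; the single-orientation Gabor filter and the frequency value are simplifications of the steerable-filter machinery typically used:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def gabor_kernel(freq, size=21):
        """1-D complex Gabor kernel tuned to spatial frequency `freq` (cyc/px)."""
        x = np.arange(size) - size // 2
        return np.exp(-x**2 / (2 * (size / 6)**2)) * np.exp(2j * np.pi * freq * x)

    def phase_displacement(frame0, frame1, freq=0.1):
        """Estimate horizontal sub-pixel motion from local phase change.

        The local phase of a band-passed signal shifts by 2*pi*freq*dx when
        the underlying pattern moves by dx pixels, so dx = dphase / (2*pi*freq).
        """
        k = gabor_kernel(freq)[np.newaxis, :]            # filter along rows
        r0 = fftconvolve(frame0, k, mode="same")
        r1 = fftconvolve(frame1, k, mode="same")
        dphase = np.angle(r1 * np.conj(r0))              # per-pixel phase change
        return dphase / (2 * np.pi * freq)               # displacement in pixels
    ```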

  19. a Fast Approach for Stitching of Aerial Images

    NASA Astrophysics Data System (ADS)

    Moussa, A.; El-Sheimy, N.

    2016-06-01

    The last few years have witnessed an increasing volume of aerial image data because of extensive improvements in Unmanned Aerial Vehicles (UAVs). These newly developed UAVs have led to a wide variety of applications. A fast assessment of the achieved coverage and overlap of the images acquired during a UAV flight mission is of great help in saving the time and cost of further steps, and fast automatic stitching of the acquired images can support this visual assessment. This paper proposes an automatic image stitching approach that creates a single overview stitched image from the images acquired during a UAV flight mission, along with a coverage image that represents the count of overlaps between the acquired images. The main challenge of such a task is the huge number of images typically involved in such scenarios: a short flight mission with an image acquisition frequency of one second can capture hundreds to thousands of images. The main focus of the proposed approach is to reduce the processing time of the image stitching procedure by exploiting the initial knowledge about the image positions provided by the navigation sensors. The proposed approach also avoids solving for all the transformation parameters of all the photos together, to save the long computation time expected if all the parameters were considered simultaneously. After extracting the points of interest of all the involved images using the Scale-Invariant Feature Transform (SIFT) algorithm, the proposed approach uses the initial image coordinates to build an incremental constrained Delaunay triangulation that represents the neighborhood of each image. This triangulation restricts matching to neighboring images only and therefore reduces the time-consuming feature matching step. The estimated relative orientation between the matched images is used to find a candidate seed image for the stitching process. The pre-estimated transformation parameters of the images are employed successively in a growing fashion to create the stitched image and the coverage image. The proposed approach is implemented and tested on the images acquired through a UAV flight mission, and the achieved results are presented and discussed.
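
    The neighborhood-restriction step can be sketched as below, using scipy's plain Delaunay triangulation in place of the incremental constrained triangulation described in the paper; only the triangulation edges are matched rather than all image pairs:

    ```python
    import numpy as np
    from scipy.spatial import Delaunay

    def neighbor_pairs(positions):
        """Candidate image pairs for feature matching, from a Delaunay
        triangulation of the navigation-derived image positions.

        positions : (N, 2) array of approximate (x, y) image centers.
        Returns the set of triangulation edges; only these pairs are matched,
        instead of all N*(N-1)/2 combinations.
        """
        tri = Delaunay(np.asarray(positions))
        pairs = set()
        for simplex in tri.simplices:        # each simplex is a triangle (i, j, k)
            for a in range(3):
                i, j = sorted((simplex[a], simplex[(a + 1) % 3]))
                pairs.add((int(i), int(j)))
        return sorted(pairs)
    ```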

  20. Infrared Thermography For Welding

    NASA Technical Reports Server (NTRS)

    Gilbert, Jeffrey L.; Lucky, Brian D.; Spiegel, Lyle B.; Hudyma, Russell M.

    1992-01-01

    Infrared imaging and image-data-processing system shows temperatures of joint during welding and provides data from which rates of heating and cooling are determined. Information used to control welding parameters to ensure reliable joints in materials whose microstructures and associated metallurgical and mechanical properties depend strongly on rates of heating and cooling. Applicable to variety of processes, including tungsten/inert-gas welding; plasma, laser, and resistance welding; cutting; and brazing.

  1. Concurrent Image Processing Executive (CIPE). Volume 3: User's guide

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Cooper, Gregory T.; Groom, Steven L.; Mazer, Alan S.; Williams, Winifred I.; Kong, Mih-Seh

    1990-01-01

    CIPE (the Concurrent Image Processing Executive) is both an executive which organizes the parameter inputs for hypercube applications and an environment which provides temporary data workspace and simple real-time function definition facilities for image analysis. CIPE provides two types of user interface. The Command Line Interface (CLI) provides a simple command-driven environment allowing interactive function definition and evaluation of algebraic expressions. The menu interface employs a hierarchical screen-oriented menu system where the user is led through a menu tree to any specific application and then given a formatted panel screen for parameter entry. How to initialize the system through the setup function, how to read data into CIPE symbols, how to manipulate and display data through the use of executive functions, and how to run an application in either user interface mode, are described.

  2. Normalized Temperature Contrast Processing in Infrared Flash Thermography

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2016-01-01

    The paper presents further development of the normalized contrast processing used in the flash infrared thermography method. Methods of computing normalized image (pixel intensity) contrast and normalized temperature contrast are provided, as are methods of converting image contrast to temperature contrast and vice versa. Normalized contrast processing in flash thermography is useful in quantitative analysis of flash thermography data, including flaw characterization and comparison of experimental results with simulation. Computation of normalized temperature contrast involves a flash thermography data acquisition set-up with a high-reflectivity foil and high-emissivity tape such that the foil, tape and test object are imaged simultaneously. Methods of assessing other quantitative parameters, such as the emissivity of the object, afterglow heat flux, reflection temperature change and surface temperature during flash thermography, are also provided. Temperature imaging and normalized temperature contrast processing provide certain advantages over normalized image contrast processing by reducing the effect of reflected energy in images and measurements, therefore providing better quantitative data. Examples of incorporating afterglow heat flux and reflection temperature evolution in flash thermography simulation are also discussed.

  3. SIFT optimization and automation for matching images from multiple temporal sources

    NASA Astrophysics Data System (ADS)

    Castillo-Carrión, Sebastián; Guerrero-Ginel, José-Emilio

    2017-05-01

    Scale Invariant Feature Transform (SIFT) was applied to extract tie-points from multiple source images. Although SIFT is reported to perform reliably under widely different radiometric and geometric conditions, using the default input parameters resulted in too few points being found. We found that the best solution was to focus on large features, as these are more robust and less prone to scene changes over time; this constitutes a first approach to automating processes in mapping applications such as geometric correction, orthophoto creation and 3D model generation. The optimization of five key SIFT parameters is proposed as a way of increasing the number of correct matches; the performance of SIFT is explored across different images and parameter values, finding optimal values that are corroborated using different validation imagery. The results show that the optimization model improves the performance of SIFT in correlating multitemporal images captured from different sources.
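
    OpenCV's SIFT implementation happens to expose five tunable parameters, which makes it convenient for illustrating this kind of tuning; the values and file names below are illustrative, not the optimal values reported by the authors:

    ```python
    import cv2

    # OpenCV's five tunable SIFT parameters; values here are illustrative.
    sift = cv2.SIFT_create(
        nfeatures=0,             # keep all detected features
        nOctaveLayers=3,         # scale-space layers per octave
        contrastThreshold=0.02,  # lower -> more (weaker) keypoints
        edgeThreshold=15,        # higher -> keep more edge-like keypoints
        sigma=1.6,               # Gaussian blur of the base image
    )

    img1 = cv2.imread("epoch1.tif", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("epoch2.tif", cv2.IMREAD_GRAYSCALE)
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Ratio-test matching to keep only distinctive correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.75 * n.distance]
    print(len(kp1), len(kp2), len(good))
    ```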

  4. Parameter Estimation and Image Reconstruction of Rotating Targets with Vibrating Interference in the Terahertz Band

    NASA Astrophysics Data System (ADS)

    Yang, Qi; Deng, Bin; Wang, Hongqiang; Qin, Yuliang

    2017-07-01

    Rotation is one of the typical micro-motions of radar targets. In many cases, rotation of the targets is accompanied by vibrating interference, which significantly affects parameter estimation and imaging, especially in the terahertz band. In this paper, we propose a parameter estimation method and an image reconstruction method based on the inverse Radon transform, time-frequency analysis, and its inverse. The method can separate and estimate the rotating Doppler and the vibrating Doppler simultaneously and can obtain high-quality reconstructed images after vibration compensation. In addition, a 322-GHz radar system and a 25-GHz commercial radar are introduced, and experiments on rotating corner reflectors are carried out. The results of the simulations and experiments verify the validity of the methods, laying a foundation for the practical processing of terahertz radar data.

  5. Image-based information, communication, and retrieval

    NASA Technical Reports Server (NTRS)

    Bryant, N. A.; Zobrist, A. L.

    1980-01-01

    IBIS/VICAR system combines video image processing and information management. Flexible programs require user to supply only parameters specific to particular application. Special-purpose input/output routines transfer image data with reduced memory requirements. New application programs are easily incorporated. Program is written in FORTRAN IV, Assembler, and OS JCL for batch execution and has been implemented on IBM 360.

  6. PICASSO: an end-to-end image simulation tool for space and airborne imaging systems

    NASA Astrophysics Data System (ADS)

    Cota, Steve A.; Bell, Jabin T.; Boucher, Richard H.; Dutton, Tracy E.; Florio, Chris J.; Franz, Geoffrey A.; Grycewicz, Thomas J.; Kalman, Linda S.; Keller, Robert A.; Lomheim, Terrence S.; Paulson, Diane B.; Willkinson, Timothy S.

    2008-08-01

    The design of any modern imaging system is the end result of many trade studies, each seeking to optimize image quality within real world constraints such as cost, schedule and overall risk. Image chain analysis - the prediction of image quality from fundamental design parameters - is an important part of this design process. At The Aerospace Corporation we have been using a variety of image chain analysis tools for many years, the Parameterized Image Chain Analysis & Simulation SOftware (PICASSO) among them. In this paper we describe our PICASSO tool, showing how, starting with a high quality input image and hypothetical design descriptions representative of the current state of the art in commercial imaging satellites, PICASSO can generate standard metrics of image quality in support of the decision processes of designers and program managers alike.

  7. PICASSO: an end-to-end image simulation tool for space and airborne imaging systems

    NASA Astrophysics Data System (ADS)

    Cota, Stephen A.; Bell, Jabin T.; Boucher, Richard H.; Dutton, Tracy E.; Florio, Christopher J.; Franz, Geoffrey A.; Grycewicz, Thomas J.; Kalman, Linda S.; Keller, Robert A.; Lomheim, Terrence S.; Paulson, Diane B.; Wilkinson, Timothy S.

    2010-06-01

    The design of any modern imaging system is the end result of many trade studies, each seeking to optimize image quality within real world constraints such as cost, schedule and overall risk. Image chain analysis - the prediction of image quality from fundamental design parameters - is an important part of this design process. At The Aerospace Corporation we have been using a variety of image chain analysis tools for many years, the Parameterized Image Chain Analysis & Simulation SOftware (PICASSO) among them. In this paper we describe our PICASSO tool, showing how, starting with a high quality input image and hypothetical design descriptions representative of the current state of the art in commercial imaging satellites, PICASSO can generate standard metrics of image quality in support of the decision processes of designers and program managers alike.

  8. Review on the Celestial Sphere Positioning of FITS Format Image Based on WCS and Research on General Visualization

    NASA Astrophysics Data System (ADS)

    Song, W. M.; Fan, D. W.; Su, L. Y.; Cui, C. Z.

    2017-11-01

    Calculating the coordinate parameters recorded as key/value pairs in the FITS (Flexible Image Transport System) header is the key to determining a FITS image's position in the celestial coordinate system; researching a general process for calculating these parameters is therefore of great significance. By combining the CCD-related parameters of an astronomical telescope (such as field of view, focal length, and the celestial coordinates of the optical axis), an astronomical image recognition algorithm, and WCS (World Coordinate System) theory, the parameters can be calculated effectively. The CCD parameters determine the scope of the star catalogue, so they can be used to build a reference star catalogue for the celestial region corresponding to the astronomical image; star pattern recognition completes the matching between the astronomical image and the reference catalogue and yields a table relating the CCD plane coordinates of a number of stars to their celestial coordinates; according to the different projections from the sphere to the plane, WCS builds the transfer functions between these two coordinate systems, so that the astronomical position of each image pixel can be determined from the table obtained before. FITS is a mainstream data format for the transmission and analysis of scientific data, but FITS images can only be viewed, edited, and analyzed in professional astronomy software, which limits their use in popular science education in astronomy. The realization of a general image visualization method is therefore significant. First, the FITS file is converted to a PNG or JPEG image. The coordinate parameters in the FITS header are converted to metadata in the form of AVM (Astronomy Visualization Metadata), and the metadata is then added to the PNG or JPEG header. This method meets amateur astronomers' general needs of viewing and analyzing astronomical images on non-astronomical software platforms. The overall design flow is realized in a Java program and tested with SExtractor, WorldWide Telescope, picture viewers, and other software.
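
    The pixel-to-sky mapping that the WCS keywords enable can be illustrated with astropy (the abstract's own implementation is a Java program; the file name here is a placeholder):

    ```python
    from astropy.io import fits
    from astropy.wcs import WCS

    # Read the WCS keywords from a FITS header and map pixels to the sky.
    with fits.open("image.fits") as hdul:
        header = hdul[0].header
        w = WCS(header)

    # Celestial coordinates (e.g. RA/Dec) of pixel (100, 200), 0-indexed.
    sky = w.pixel_to_world(100, 200)
    print(sky)

    # Inverse mapping: pixel position of a given sky coordinate.
    x, y = w.world_to_pixel(sky)
    print(x, y)
    ```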

  9. Volta phase plate data collection facilitates image processing and cryo-EM structure determination.

    PubMed

    von Loeffelholz, Ottilie; Papai, Gabor; Danev, Radostin; Myasnikov, Alexander G; Natchiar, S Kundhavai; Hazemann, Isabelle; Ménétret, Jean-François; Klaholz, Bruno P

    2018-06-01

    A current bottleneck in structure determination of macromolecular complexes by cryo electron microscopy (cryo-EM) is the large amount of data needed to obtain high-resolution 3D reconstructions, including through sorting into different conformations and compositions with advanced image processing. Additionally, it may be difficult to visualize small ligands that bind at sub-stoichiometric levels. Volta phase plates (VPP) introduce a phase shift in the contrast transfer and drastically increase the contrast of the recorded low-dose cryo-EM images while preserving high-frequency information. Here we present a comparative study to address the behavior of different data sets during image processing and quantify important parameters during structure refinement. Automated data collection was done from the same human ribosome sample either as a conventional defocus-range dataset, or with a Volta phase plate close to focus (cfVPP), or with a small defocus (dfVPP). The analysis of image processing parameters shows that dfVPP data behave more robustly during cryo-EM structure refinement because particle alignments, Euler angle assignments and 2D & 3D classifications behave more stably and converge faster. In particular, fewer particle images are required to reach the same resolution in the 3D reconstructions. Finally, we find that defocus-range data collection is also applicable to VPP. This study shows that data processing and cryo-EM map interpretation, including atomic model refinement, are facilitated significantly by performing VPP cryo-EM, which will have an important impact on structural biology.

  10. Image processing analysis of nuclear track parameters for CR-39 detector irradiated by thermal neutron

    NASA Astrophysics Data System (ADS)

    Al-Jobouri, Hussain A.; Rajab, Mustafa Y.

    2016-03-01

    A CR-39 detector covered with a boric acid (H3BO3) pellet was irradiated by thermal neutrons from a (241Am-9Be) source with an activity of 12 Ci and a neutron flux of 10^5 n·cm⁻²·s⁻¹. The irradiation times TD for the detector were 4 h, 8 h, 16 h and 24 h. The detector was chemically etched in 6.25 N sodium hydroxide (NaOH) for 45 min at 60 °C. Images of the CR-39 detector after chemical etching were taken with a digital camera attached to an optical microscope, and MATLAB software version 7.0 was used for image processing. Analysis of the image processing outputs revealed the following relationships: (a) the irradiation time TD has a linear relationship with the following nuclear track parameters: i) total track number NT; ii) maximum track number MRD (relative to track diameter DT) in the response region from 2.5 µm to 4 µm; iii) maximum track number MD (independent of track diameter DT); (b) the irradiation time TD has a logarithmic relationship with the maximum track number MA (independent of track area AT). The image processing technique, principally the track diameter DT, can be taken into account for the classification of α-particle emitters, in addition to the contribution of these techniques to the preparation of nano-filters and nano-membranes in nanotechnology.
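
    A rough Python stand-in for the MATLAB processing described above, assuming etched tracks appear as dark pits on a brighter background; the threshold choice and file name are illustrative:

    ```python
    import numpy as np
    from skimage import io, filters, morphology, measure

    def track_parameters(path, min_area=10):
        """Count etched tracks and measure their diameters in a detector image.

        Threshold, clean up, label connected components, then tabulate
        per-track geometry (track count NT and equivalent diameters DT).
        """
        gray = io.imread(path, as_gray=True)
        binary = gray < filters.threshold_otsu(gray)       # tracks are dark pits
        binary = morphology.remove_small_objects(binary, min_area)
        labels = measure.label(binary)
        props = measure.regionprops(labels)
        diameters = np.array([p.equivalent_diameter for p in props])
        return len(props), diameters

    n_tracks, d = track_parameters("cr39.png")
    print(f"NT = {n_tracks}, mean DT = {d.mean():.2f} px")
    ```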

  11. Phenological Parameters Estimation Tool

    NASA Technical Reports Server (NTRS)

    McKellip, Rodney D.; Ross, Kenton W.; Spruce, Joseph P.; Smoot, James C.; Ryan, Robert E.; Gasser, Gerald E.; Prados, Donald L.; Vaughan, Ronald D.

    2010-01-01

    The Phenological Parameters Estimation Tool (PPET) is a set of algorithms implemented in MATLAB that estimates key vegetative phenological parameters. For a given year, the PPET software package takes in temporally processed vegetation index data (3D spatio-temporal arrays) generated by the time series product tool (TSPT) and outputs spatial grids (2D arrays) of vegetation phenological parameters. As a precursor to PPET, the TSPT uses quality information for each pixel of each date to remove bad or suspect data, and then interpolates and digitally fills data voids in the time series to produce a continuous, smoothed vegetation index product. During processing, the TSPT displays NDVI (Normalized Difference Vegetation Index) time series plots and images from the temporally processed pixels. Both the TSPT and PPET currently use moderate resolution imaging spectroradiometer (MODIS) satellite multispectral data as a default, but each software package is modifiable and could be used with any high-temporal-rate remote sensing data collection system that is capable of producing vegetation indices. Raw MODIS data from the Aqua and Terra satellites is processed using the TSPT to generate a filtered time series data product. The PPET then uses the TSPT output to generate phenological parameters for desired locations. PPET output data tiles are mosaicked into a Conterminous United States (CONUS) data layer using ERDAS IMAGINE, or an equivalent software package. Mosaics of the vegetation phenology data products are then reprojected to the desired map projection using ERDAS IMAGINE.
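
    PPET's exact algorithms are not given in the abstract, but a minimal sketch of extracting phenological parameters from a temporally processed NDVI series might look like this; the half-amplitude crossing rule is one common convention, not necessarily PPET's:

    ```python
    import numpy as np

    def phenology_parameters(ndvi, doy, frac=0.5):
        """Estimate simple phenological parameters from a smoothed NDVI series.

        ndvi : 1-D array of temporally processed NDVI values for one pixel.
        doy  : matching day-of-year for each observation.
        Start/end of season are taken where NDVI crosses a fraction `frac`
        of the seasonal amplitude.
        """
        base, peak = ndvi.min(), ndvi.max()
        thresh = base + frac * (peak - base)
        above = ndvi >= thresh
        sos = doy[np.argmax(above)]                         # first day above
        eos = doy[len(above) - 1 - np.argmax(above[::-1])]  # last day above
        pos = doy[np.argmax(ndvi)]                          # peak of season
        return {"SOS": sos, "EOS": eos, "POS": pos, "amplitude": peak - base}

    doy = np.arange(1, 366, 16)                             # 16-day composites
    ndvi = 0.25 + 0.45 * np.exp(-((doy - 200) / 60.0) ** 2) # synthetic series
    print(phenology_parameters(ndvi, doy))
    ```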

  12. Global rotational motion and displacement estimation of digital image stabilization based on the oblique vectors matching algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Fei; Hui, Mei; Zhao, Yue-jin

    2009-08-01

    An image block matching algorithm based on the motion vectors of correlative pixels in the oblique direction is presented for digital image stabilization. Digital image stabilization is a new generation of image stabilization technique that obtains the information on relative motion among frames of dynamic image sequences by means of digital image processing. In this method the matching parameters are calculated from the vectors projected in the oblique direction. The matching parameters based on these vectors simultaneously contain the information of vectors in the transverse and vertical directions in the image blocks, so better matching information can be obtained after performing the correlation operation in the oblique direction. An iterative weighted least-squares method is used to eliminate block matching errors, with weights related to the pixels' rotational angle. The center of rotation and the global motion estimate of the shaking image can be obtained by weighted least squares from the estimates of each block, chosen evenly from the image; the shaking image can then be stabilized using the center of rotation and the global motion estimate. The algorithm can also run in real time by applying simulated annealing to the block matching search. An image processing system based on a DSP was used to test this algorithm. The core processor in the DSP system is a TMS320C6416 from TI, and a CCD camera with a definition of 720×576 pixels was chosen as the input video signal source. Experimental results show that the algorithm can be performed in a real-time processing system with accurate matching precision.
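
    A simplified stand-in for the weighted least-squares step: given block centers and their block-matching motion vectors, a small global rotation plus translation can be fitted while outlier blocks are down-weighted iteratively (the weighting scheme here is generic, not the paper's):

    ```python
    import numpy as np

    def global_motion(centers, vectors, iters=3):
        """Weighted least-squares fit of a small rotation + translation from
        block-matching motion vectors.

        centers : (N, 2) block center coordinates.
        vectors : (N, 2) measured motion vector of each block.
        Model (small-angle): v_i ~= theta * J @ p_i + t, J a 90-deg rotation.
        """
        p = np.asarray(centers, float)
        v = np.asarray(vectors, float)
        Jp = np.stack([-p[:, 1], p[:, 0]], axis=1)     # J @ p_i for every block
        w = np.ones(len(p))
        for _ in range(iters):
            # Design matrix rows: [Jp_x 1 0] and [Jp_y 0 1] per block, stacked.
            A = np.zeros((2 * len(p), 3))
            A[0::2, 0], A[0::2, 1] = Jp[:, 0], 1.0
            A[1::2, 0], A[1::2, 2] = Jp[:, 1], 1.0
            b = v.reshape(-1)
            W = np.repeat(w, 2)
            x, *_ = np.linalg.lstsq(A * W[:, None], b * W, rcond=None)
            theta, t = x[0], x[1:]
            resid = np.linalg.norm(v - (theta * Jp + t), axis=1)
            w = 1.0 / (1.0 + resid**2)                 # down-weight outliers
        return theta, t
    ```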

  13. Automated optical testing of LWIR objective lenses using focal plane array sensors

    NASA Astrophysics Data System (ADS)

    Winters, Daniel; Erichsen, Patrik; Domagalski, Christian; Peter, Frank; Heinisch, Josef; Dumitrescu, Eugen

    2012-10-01

    The image quality of today's state-of-the-art IR objective lenses is constantly improving, while at the same time the market for thermography and vision grows strongly. Because of increasing demands on the quality of IR optics and increasing production volumes, the standards for image quality testing increase and tests need to be performed in shorter time. Most high-precision MTF testing equipment for the IR spectral bands in use today relies on the scanning slit method, which scans a 1D detector over a pattern in the image generated by the lens under test, followed by image analysis to extract performance parameters. The disadvantages of this approach are that it is relatively slow, it requires highly trained operators to align the sample, and the number of parameters that can be extracted is limited. In this paper we present lessons learned from the R&D process of using focal plane array (FPA) sensors for testing long-wave IR (LWIR, 8-12 µm) optics. Factors that need to be taken into account when switching from scanning slit to FPAs include the thermal background from the environment, the low scene contrast in the LWIR, the need for advanced image processing algorithms to pre-process camera images for analysis, and camera artifacts. Finally, we discuss two measurement systems for LWIR lens characterization that we recently developed for different target applications: 1) a fully automated system suitable for production testing and metrology that uses uncooled microbolometer cameras to automatically measure MTF (on-axis and at several off-axis positions) and parameters like EFL, FFL, autofocus curves, image plane tilt, etc. for LWIR objectives with an EFL between 1 and 12 mm, with a measurement cycle time per sample typically between 6 and 8 s; and 2) a high-precision research-grade system, again using an uncooled LWIR camera as detector, that is very simple to align and operate. A wide range of lens parameters (MTF, EFL, astigmatism, distortion, etc.) can be easily and accurately measured with this system.

  14. Computer assisted analysis of auroral images obtained from high altitude polar satellites

    NASA Technical Reports Server (NTRS)

    Samadani, Ramin; Flynn, Michael

    1993-01-01

    Automatic techniques that allow the extraction of physically significant parameters from auroral images were developed. This allows the processing of a much larger number of images than is currently possible with manual techniques. Our techniques were applied to diverse auroral image datasets. These results were made available to geophysicists at NASA and at universities in the form of a software system that performs the analysis. After some feedback from users, an upgraded system was transferred to NASA and to two universities. The feasibility of user-trained search and retrieval of large amounts of data using our automatically derived parameter indices was demonstrated. Techniques based on classification and regression trees (CART) were developed and applied to broaden the types of images to which the automated search and retrieval may be applied. Our techniques were tested with DE-1 auroral images.

  15. Angle-domain common-image gathers from anisotropic Gaussian beam migration and its application to anisotropy-induced imaging errors analysis

    NASA Astrophysics Data System (ADS)

    Han, Jianguang; Wang, Yun; Yu, Changqing; Chen, Peng

    2017-02-01

    An approach for extracting angle-domain common-image gathers (ADCIGs) from anisotropic Gaussian beam prestack depth migration (GB-PSDM) is presented in this paper. The propagation angle is calculated in the process of migration using the real-valued traveltime information of the Gaussian beam. Based on the above, we further investigate the effects of anisotropy on GB-PSDM, where the corresponding ADCIGs are extracted to assess the quality of migration images. The test results of the VTI syncline model and the TTI thrust sheet model show that the anisotropic parameters ε, δ, and tilt angle θ have a great influence on the accuracy of the migrated image in anisotropic media, and ignoring any one of them will cause obvious imaging errors. The anisotropic GB-PSDM with the true anisotropic parameters can obtain more accurate seismic images of subsurface structures in anisotropic media.

  16. syris: a flexible and efficient framework for X-ray imaging experiments simulation.

    PubMed

    Faragó, Tomáš; Mikulík, Petr; Ershov, Alexey; Vogelgesang, Matthias; Hänschke, Daniel; Baumbach, Tilo

    2017-11-01

    An open-source framework for conducting a broad range of virtual X-ray imaging experiments, syris, is presented. The simulated wavefield created by a source propagates through an arbitrary number of objects until it reaches a detector. The objects in the light path and the source are time-dependent, which enables simulations of dynamic experiments, e.g. four-dimensional time-resolved tomography and laminography. The high-level interface of syris is written in Python and its modularity makes the framework very flexible. The computationally demanding parts behind this interface are implemented in OpenCL, which enables fast calculations on modern graphics processing units. The combination of flexibility and speed opens new possibilities for studying novel imaging methods and for systematic searches for optimal combinations of measurement conditions and data processing parameters, which can help to increase the success rate and efficiency of valuable synchrotron beam time. To demonstrate the capabilities of the framework, various experiments have been simulated and compared with real data. To show the use case of measurement and data processing parameter optimization based on simulation, a virtual counterpart of a high-speed radiography experiment was created and the simulated data were used to select a suitable motion estimation algorithm; one of its parameters was optimized in order to achieve the best motion estimation accuracy when applied to the real data. syris was also used to simulate tomographic data sets under various imaging conditions that impact the tomographic reconstruction accuracy, and it is shown how the accuracy may guide the selection of imaging conditions for particular use cases.

  17. Radiation levels and image quality in patients undergoing chest X-ray examinations

    NASA Astrophysics Data System (ADS)

    de Oliveira, Paulo Márcio Campos; do Carmo Santana, Priscila; de Sousa Lacerda, Marco Aurélio; da Silva, Teógenes Augusto

    2017-11-01

    Patient dose monitoring for different radiographic procedures has been used as a parameter to evaluate the performance of radiology services; skin entrance absorbed dose values for each type of examination have been internationally established and recommended with the aim of patient protection. In this work, a methodology for dose evaluation was applied to three diagnostic services: one with conventional film processing and two with digital computed radiography processing techniques. The x-ray beam parameters were selected and "doses" (specifically the entrance surface and incident air kerma) were evaluated based on images approved under the European criteria during postero-anterior (PA) and lateral (LAT) incidences. Data were collected from 200 patients, covering 200 PA and 100 LAT incidences. Results showed that the dose distributions in the three diagnostic services were very different; the best relation between dose and image quality was found in the institution with chemical film processing. This work contributed to disseminating the radiation protection culture by emphasizing the need for continuous dose reduction without losing diagnostic image quality.

  18. Normalized Polarization Ratios for the Analysis of Cell Polarity

    PubMed Central

    Shimoni, Raz; Pham, Kim; Yassin, Mohammed; Ludford-Menting, Mandy J.; Gu, Min; Russell, Sarah M.

    2014-01-01

    The quantification and analysis of molecular localization in living cells is increasingly important for elucidating biological pathways, and new methods are rapidly emerging. The quantification of cell polarity has generated much interest recently, and ratiometric analysis of fluorescence microscopy images provides one means to quantify cell polarity. However, the detection of fluorescence, and hence the ratiometric measurement, is likely to be sensitive to acquisition settings and image processing parameters. Using imaging of EGFP-expressing cells and computer simulations of variations in fluorescence ratios, we characterized the dependence of ratiometric measurements on processing parameters. This analysis showed that image settings alter polarization measurements, and that clustered localization is more susceptible to artifacts than homogeneous localization. To correct for such inconsistencies, we developed and validated a method for choosing the most appropriate analysis settings, and for incorporating internal controls to ensure the fidelity of polarity measurements. This approach is applicable to testing polarity in all cells where the axis of polarity is known. PMID:24963926

  19. Landsat-8 Operational Land Imager (OLI) radiometric performance on-orbit

    USGS Publications Warehouse

    Morfitt, Ron; Barsi, Julia A.; Levy, Raviv; Markham, Brian L.; Micijevic, Esad; Ong, Lawrence; Scaramuzza, Pat; Vanderwerff, Kelly

    2015-01-01

    Expectations of the Operational Land Imager (OLI) radiometric performance onboard Landsat-8 have been met or exceeded. The calibration activities that occurred prior to launch provided calibration parameters that enabled ground processing to produce imagery that met most requirements when data were transmitted to the ground. Since launch, calibration updates have improved the image quality even more, so that all requirements are met. These updates range from detector gain coefficients to reduce striping and banding to alignment parameters to improve the geometric accuracy. This paper concentrates on the on-orbit radiometric performance of the OLI, excepting the radiometric calibration performance. Topics discussed in this paper include: signal-to-noise ratios that are an order of magnitude higher than previous Landsat missions; radiometric uniformity that shows little residual banding and striping, and continues to improve; a dynamic range that limits saturation to extremely high radiance levels; extremely stable detectors; slight nonlinearity that is corrected in ground processing; detectors that are stable and 100% operable; and few image artifacts.

  20. Efficient Smart CMOS Camera Based on FPGAs Oriented to Embedded Image Processing

    PubMed Central

    Bravo, Ignacio; Baliñas, Javier; Gardel, Alfredo; Lázaro, José L.; Espinosa, Felipe; García, Jorge

    2011-01-01

    This article describes an image processing system based on an intelligent ad-hoc camera, whose two principal elements are a high-speed 1.2-megapixel Complementary Metal Oxide Semiconductor (CMOS) sensor and a Field Programmable Gate Array (FPGA). The latter is used to control the various sensor parameter configurations and, where desired, to receive and process the images captured by the CMOS sensor. The flexibility and versatility offered by the new FPGA families make it possible to incorporate microprocessors into these reconfigurable devices, and these are normally used for highly sequential tasks unsuitable for parallelization in hardware. For the present study, we used a Xilinx XC4VFX12 FPGA, which contains an internal PowerPC (PPC) microprocessor running a standalone system that manages the FPGA image processing hardware and endows the system with multiple software options for processing the images captured by the CMOS sensor. The system also incorporates an Ethernet channel for sending processed and unprocessed images from the FPGA to a remote node. Consequently, it is possible to visualize and configure system operation and captured and/or processed images remotely. PMID:22163739

  1. Automatic tool alignment in a backscatter X-ray scanning system

    DOEpatents

    Garretson, Justin; Hobart, Clinton G.; Gladwell, Thomas S.; Monda, Mark J.

    2015-11-17

    Technologies pertaining to backscatter x-ray scanning systems are described herein. The backscatter x-ray scanning system includes an x-ray source, which directs collimated x-rays along a plurality of output vectors towards a target. A detector detects diffusely reflected x-rays subsequent to respective collimated x-rays impacting the target, and outputs signals indicative of parameters of the detected x-rays. An image processing system generates an x-ray image based upon parameters of the detected x-rays, wherein each pixel in the image corresponds to a respective output vector. A user selects a particular portion of the image, and a medical device is positioned such that its directional axis is coincident with the output vector corresponding to at least one pixel in the portion of the image.

  2. Automatic tool alignment in a backscatter x-ray scanning system

    DOEpatents

    Garretson, Justin; Hobart, Clinton G.; Gladwell, Thomas S.; Monda, Mark J.

    2015-06-16

    Technologies pertaining to backscatter x-ray scanning systems are described herein. The backscatter x-ray scanning system includes an x-ray source, which directs collimated x-rays along a plurality of output vectors towards a target. A detector detects diffusely reflected x-rays subsequent to respective collimated x-rays impacting the target, and outputs signals indicative of parameters of the detected x-rays. An image processing system generates an x-ray image based upon parameters of the detected x-rays, wherein each pixel in the image corresponds to a respective output vector. A user selects a particular portion of the image, and a tool is positioned such that its directional axis is coincident with the output vector corresponding to at least one pixel in the portion of the image.

  3. Investigation of skin structures based on infrared wave parameter indirect microscopic imaging

    NASA Astrophysics Data System (ADS)

    Zhao, Jun; Liu, Xuefeng; Xiong, Jichuan; Zhou, Lijuan

    2017-02-01

    Detailed imaging and analysis of skin structures are becoming increasingly important in modern healthcare and clinical diagnosis. Nanometer-resolution imaging techniques such as SEM and AFM can damage the sample and cannot measure the whole skin structure from the very surface through the epidermis and dermis to the subcutaneous layer. Conventional optical microscopy has the highest imaging efficiency, flexibility in on-site applications and the lowest cost in manufacturing and usage, but its image resolution is too low to be accepted for biomedical analysis. Infrared parameter indirect microscopic imaging (PIMI) uses an infrared laser as the light source due to its high transmission in skin. The polarization of the optical wave through the skin sample is modulated while the variation of the optical field is observed at the imaging plane, and the intensity variation curve of each pixel is fitted to extract the near-field polarization parameters that form the indirect images. During this through-skin light modulation and image retrieval process, the curve fitting removes the blurring scattering from neighboring pixels and keeps only the field variations related to local skin structures. By using infrared PIMI, we can break the diffraction limit and bring wide-field optical image resolution to sub-200 nm, while taking advantage of the high transmission of infrared waves in skin structures.
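
    The per-pixel curve fitting can be sketched as a linear least-squares fit of a second-harmonic sinusoid to the intensity recorded at each modulation angle; this is a generic formulation of the PIMI idea, not the authors' code:

    ```python
    import numpy as np

    def fit_polarization_params(stack, angles):
        """Per-pixel fit of I(phi) = a0 + a1*cos(2*phi) + a2*sin(2*phi).

        stack  : (N, H, W) intensities recorded at N polarization angles.
        angles : (N,) modulation angles in radians.
        Returns mean intensity, modulation amplitude and orientation maps.
        """
        N, H, W = stack.shape
        B = np.stack([np.ones(N), np.cos(2 * angles), np.sin(2 * angles)], axis=1)
        coeffs, *_ = np.linalg.lstsq(B, stack.reshape(N, -1), rcond=None)
        a0, a1, a2 = (c.reshape(H, W) for c in coeffs)
        amplitude = np.hypot(a1, a2)            # strength of polarization response
        orientation = 0.5 * np.arctan2(a2, a1)  # phase of the modulation curve
        return a0, amplitude, orientation
    ```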

  4. A Real-Time Image Acquisition And Processing System For A RISC-Based Microcomputer

    NASA Astrophysics Data System (ADS)

    Luckman, Adrian J.; Allinson, Nigel M.

    1989-03-01

    A low-cost image acquisition and processing system has been developed for the Acorn Archimedes microcomputer. Using a Reduced Instruction Set Computer (RISC) architecture, the ARM (Acorn Risc Machine) processor provides instruction speeds suitable for image processing applications. The associated improvement in data transfer rate has allowed real-time video image acquisition without the need for frame-store memory external to the microcomputer. The system comprises real-time video digitising hardware which interfaces directly to the Archimedes memory, and software to provide an integrated image acquisition and processing environment. The hardware can digitise a video signal at up to 640 samples per video line with programmable parameters such as sampling rate and gain. Software support includes a work environment for image capture and processing with pixel, neighbourhood and global operators. A friendly user interface is provided with the help of the Archimedes Operating System WIMP (Windows, Icons, Mouse and Pointer) Manager. Windows provide a convenient way of handling images on the screen, and program control is directed mostly by pop-up menus.

  5. Panorama parking assistant system with improved particle swarm optimization method

    NASA Astrophysics Data System (ADS)

    Cheng, Ruzhong; Zhao, Yong; Li, Zhichao; Jiang, Weigang; Wang, Xin'an; Xu, Yong

    2013-10-01

    A panorama parking assistant system (PPAS) for the automotive aftermarket together with a practical improved particle swarm optimization method (IPSO) are proposed in this paper. In the PPAS system, four fisheye cameras are installed in the vehicle with different views, and four channels of video frames captured by the cameras are processed as a 360-deg top-view image around the vehicle. Besides the embedded design of PPAS, the key problem for image distortion correction and mosaicking is the efficiency of parameter optimization in the process of camera calibration. In order to address this problem, an IPSO method is proposed. Compared with other parameter optimization methods, the proposed method allows a certain range of dynamic change for the intrinsic and extrinsic parameters, and can exploit only one reference image to complete all of the optimization; therefore, the efficiency of the whole camera calibration is increased. The PPAS is commercially available, and the IPSO method is a highly practical way to increase the efficiency of the installation and the calibration of PPAS in automobile 4S shops.
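
    The abstract does not detail the IPSO modifications, but the underlying particle swarm optimization loop, with the bounded parameter ranges the paper emphasizes, can be sketched as follows; the cost function (e.g. a reprojection error over the single reference image) is supplied by the caller:

    ```python
    import numpy as np

    def pso(cost, lo, hi, n=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
        """Minimal particle swarm optimizer for camera-parameter fitting.

        cost : function mapping a parameter vector to a scalar error; lo/hi
        bound each parameter, mirroring the constrained dynamic range the
        paper allows for the intrinsic and extrinsic parameters.
        """
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(lo, float), np.asarray(hi, float)
        x = rng.uniform(lo, hi, (n, lo.size))          # particle positions
        v = np.zeros_like(x)                           # particle velocities
        pbest, pcost = x.copy(), np.array([cost(p) for p in x])
        g = pbest[pcost.argmin()]                      # global best position
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)                 # keep parameters in range
            c = np.array([cost(p) for p in x])
            better = c < pcost
            pbest[better], pcost[better] = x[better], c[better]
            g = pbest[pcost.argmin()]
        return g, pcost.min()
    ```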

  6. Gaussian process inference for estimating pharmacokinetic parameters of dynamic contrast-enhanced MR images.

    PubMed

    Wang, Shijun; Liu, Peter; Turkbey, Baris; Choyke, Peter; Pinto, Peter; Summers, Ronald M

    2012-01-01

    In this paper, we propose a new pharmacokinetic model for parameter estimation of dynamic contrast-enhanced (DCE) MRI by using Gaussian process inference. Our model is based on the Tofts dual-compartment model for the description of tracer kinetics and the observed time series from DCE-MRI is treated as a Gaussian stochastic process. The parameter estimation is done through a maximum likelihood approach and we propose a variant of the coordinate descent method to solve this likelihood maximization problem. The new model was shown to outperform a baseline method on simulated data. Parametric maps generated on prostate DCE data with the new model also provided better enhancement of tumors, lower intensity on false positives, and better boundary delineation when compared with the baseline method. New statistical parameter maps from the process model were also found to be informative, particularly when paired with the PK parameter maps.
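
    For reference, the standard Tofts dual-compartment relation that such estimators fit expresses the tissue concentration C_t as a convolution of the arterial input function C_p:

    ```latex
    C_t(t) \;=\; K^{\mathrm{trans}} \int_0^t C_p(\tau)\, e^{-k_{ep}\,(t-\tau)}\, \mathrm{d}\tau,
    \qquad k_{ep} = \frac{K^{\mathrm{trans}}}{v_e},
    ```

    where K^trans is the transfer constant, v_e the extravascular extracellular volume fraction, and k_ep the efflux rate constant; the paper's contribution is to treat the observed time series as a Gaussian stochastic process around this model rather than to change the model itself.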

  7. Research on assessment and improvement method of remote sensing image reconstruction

    NASA Astrophysics Data System (ADS)

    Sun, Li; Hua, Nian; Yu, Yanbo; Zhao, Zhanping

    2018-01-01

    Remote sensing image quality assessment and improvement is an important part of image processing. Generally, the use of compressive sampling theory in a remote sensing imaging system can compress images while sampling, which improves efficiency. In this paper, a method based on two-dimensional principal component analysis (2DPCA) is proposed to reconstruct the remote sensing image and improve the quality of the compressed image; the reconstruction retains the useful information of the image while restraining noise. Then, the factors influencing remote sensing image quality are analyzed, and parameters for quantitative evaluation are introduced. On this basis, the quality of the reconstructed images is evaluated and the influence of the different factors on the reconstruction is analyzed, providing meaningful reference data for enhancing the quality of remote sensing images. The experimental results show that the evaluation results fit human visual perception, and the proposed method has good application value in the field of remote sensing image processing.

  8. Development of image analysis software for quantification of viable cells in microchips.

    PubMed

    Georg, Maximilian; Fernández-Cabada, Tamara; Bourguignon, Natalia; Karp, Paola; Peñaherrera, Ana B; Helguera, Gustavo; Lerner, Betiana; Pérez, Maximiliano S; Mertelsmann, Roland

    2018-01-01

    Over the past few years, image analysis has emerged as a powerful tool for analyzing various cell biology parameters in an unprecedented and highly specific manner. The amount of data that is generated requires automated methods for the processing and analysis of all the resulting information. The software packages available so far are suitable for processing fluorescence and phase contrast images, but often do not provide good results from transmission light microscopy images, due to the intrinsic variability of the image acquisition technique itself (adjustment of brightness/contrast, for instance) and the variability between image acquisitions introduced by operators and equipment. This contribution presents an image processing software package, Python-based image analysis for cell growth (PIACG), that is able to calculate, in a highly efficient way, the total area of the well occupied by cells of fusiform and rounded morphology in response to different concentrations of fetal bovine serum in microfluidic chips, from transmission light microscopy images.
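
    A simplified sketch of the kind of measurement PIACG reports, the fraction of the well occupied by cells, using edge-strength thresholding on a transmission light image; this is not the package's actual pipeline, and the file name is a placeholder:

    ```python
    import numpy as np
    from skimage import io, filters, morphology

    def occupied_area_fraction(path, min_size=50):
        """Fraction of the well area occupied by cells in a transmission
        light image, based on edge-strength thresholding.
        """
        gray = io.imread(path, as_gray=True)
        edges = filters.sobel(gray)                   # cells show strong texture
        mask = edges > filters.threshold_otsu(edges)
        mask = morphology.closing(mask, morphology.disk(3))
        mask = morphology.remove_small_objects(mask, min_size)
        return mask.mean()                            # occupied pixels / total

    print(f"occupied fraction = {occupied_area_fraction('chip_well.png'):.3f}")
    ```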

  9. Applications of Panoramic Images: from 720° Panorama to Interior 3d Models of Augmented Reality

    NASA Astrophysics Data System (ADS)

    Lee, I.-C.; Tsai, F.

    2015-05-01

    A series of panoramic images is usually used to generate a 720° panorama image. Although panoramic images are typically used for establishing tour guiding systems, in this research we demonstrate the potential of using panoramic images acquired from multiple sites to create not only 720° panoramas, but also three-dimensional (3D) point clouds and 3D indoor models. Since 3D modeling is one of the goals of this research, the locations of the panoramic sites needed to be carefully planned in order to maintain a robust result for close-range photogrammetry. After the images are acquired, they are processed into 720° panoramas, which can be used directly in panorama guiding systems or other applications. In addition to these straightforward applications, interior orientation parameters can also be estimated while generating the 720° panoramas: focal length, principal point, and lens radial distortion. The panoramic images can then be processed with close-range photogrammetry procedures to extract the exterior orientation parameters and generate 3D point clouds. In this research, VisualSFM, a structure-from-motion software package, is used to estimate the exterior orientation, and the CMVS toolkit is used to generate 3D point clouds. Next, the 3D point clouds are used as references to create building interior models. Trimble SketchUp was used to build the model, and the 3D point cloud was used to determine the locations of building objects via a plane-finding procedure. In the texturing process, the panorama images are used as the data source for creating model textures. This 3D indoor model was used as an Augmented Reality model, replacing the guide map or floor plan commonly used in an on-line touring guide system. The 3D indoor model generating procedure has been utilized in two research projects: a cultural heritage site at Kinmen, and the Taipei Main Station pedestrian zone guidance and navigation system. The results presented in this paper demonstrate the potential of using panoramic images to generate 3D point clouds and 3D models; however, it is currently a manual and labor-intensive process, and research is being carried out to increase the degree of automation of these procedures.

  10. Inverse determination of the penalty parameter in penalized weighted least-squares algorithm for noise reduction of low-dose CBCT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jing; Guan, Huaiqun; Solberg, Timothy

    2011-07-15

    Purpose: A statistical projection restoration algorithm based on the penalized weighted least-squares (PWLS) criterion can substantially improve the image quality of low-dose CBCT images. The performance of PWLS is largely dependent on the choice of the penalty parameter. Previously, the penalty parameter was chosen empirically by trial and error. In this work, the authors developed an inverse technique to calculate the penalty parameter in PWLS for noise suppression of low-dose CBCT in image guided radiotherapy (IGRT). Methods: In IGRT, a daily CBCT is acquired for the same patient during a treatment course. In this work, the authors acquired the CBCT with a high-mAs protocol for the first session and then a lower-mAs protocol for the subsequent sessions. The high-mAs projections served as the goal (ideal) toward which the low-mAs projections were to be smoothed by minimizing the PWLS objective function. The penalty parameter was determined through an inverse calculation of the derivative of the objective function incorporating both the high- and low-mAs projections. The parameter obtained can then be used for PWLS to smooth the noise in low-dose projections. CBCT projections for a CatPhan 600 and an anthropomorphic head phantom, as well as for a brain patient, were used to evaluate the performance of the proposed technique. Results: The penalty parameter in PWLS was obtained for each CBCT projection using the proposed strategy. The noise in the low-dose CBCT images reconstructed from the smoothed projections was greatly suppressed. Image quality in PWLS-processed low-dose CBCT was comparable to the corresponding high-dose CBCT. Conclusions: A technique was proposed to estimate the penalty parameter for the PWLS algorithm. It provides an objective and efficient way to obtain the penalty parameter for image restoration algorithms that require predefined smoothing parameters.
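
    For context, the PWLS objective referred to above generically takes the form (notation assumed here, not copied from the paper):

    ```latex
    \Phi(q) \;=\; (\hat{y}-q)^{\mathrm{T}}\,\Sigma^{-1}\,(\hat{y}-q) \;+\; \beta\, R(q),
    ```

    where ŷ is the measured projection data, q the smoothed projections being solved for, Σ a diagonal matrix of projection variances, R a smoothness penalty, and β the penalty parameter whose inverse determination is the subject of the paper.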

  11. Fuzzy C-mean clustering on kinetic parameter estimation with generalized linear least square algorithm in SPECT

    NASA Astrophysics Data System (ADS)

    Choi, Hon-Chit; Wen, Lingfeng; Eberl, Stefan; Feng, Dagan

    2006-03-01

    Dynamic Single Photon Emission Computed Tomography (SPECT) has the potential to quantitatively estimate physiological parameters by fitting compartment models to the tracer kinetics. The generalized linear least square method (GLLS) is an efficient method to estimate unbiased kinetic parameters and parametric images. However, due to the low sensitivity of SPECT, noisy data can cause voxel-wise parameter estimation by GLLS to fail. Fuzzy C-Means (FCM) clustering and a modified FCM, which also utilizes information from the immediate neighboring voxels, are proposed to improve the voxel-wise parameter estimation of GLLS. Monte Carlo simulations were performed to generate dynamic SPECT data with different noise levels, which were processed by general and modified FCM clustering. Parametric images were estimated by Logan and Yokoi graphical analysis and by GLLS. The influx rate (KI) and volume of distribution (Vd) were estimated for the cerebellum, thalamus and frontal cortex. Our results show that (1) FCM reduces the bias and improves the reliability of parameter estimates for noisy data, (2) GLLS provides estimates of micro parameters (K1-k4) as well as macro parameters, such as the volume of distribution (Vd) and binding potential (BPI & BPII), and (3) FCM clustering incorporating neighboring voxel information does not improve the parameter estimates, but reduces noise in the parametric images. These findings indicate that pre-segmentation with traditional FCM clustering is desirable for generating voxel-wise parametric images with GLLS from dynamic SPECT data.
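
    A minimal plain FCM iteration (the paper's modified variant additionally mixes in neighboring-voxel information) might look like this, with per-voxel time-activity curves as feature vectors:

    ```python
    import numpy as np

    def fuzzy_c_means(X, c=3, m=2.0, iters=100, tol=1e-5, seed=0):
        """Minimal fuzzy C-means: returns cluster centers and memberships.

        X : (N, D) samples, e.g. per-voxel time-activity curves from dynamic
        SPECT; m > 1 is the fuzziness exponent.
        """
        rng = np.random.default_rng(seed)
        U = rng.random((len(X), c))
        U /= U.sum(axis=1, keepdims=True)              # memberships sum to 1
        for _ in range(iters):
            Um = U ** m
            centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
            d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
            Unew = 1.0 / (d ** (2 / (m - 1)))          # standard FCM update
            Unew /= Unew.sum(axis=1, keepdims=True)
            if np.abs(Unew - U).max() < tol:
                U = Unew
                break
            U = Unew
        return centers, U
    ```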

  12. Optimization of a hardware implementation for pulse coupled neural networks for image applications

    NASA Astrophysics Data System (ADS)

    Gimeno Sarciada, Jesús; Lamela Rivera, Horacio; Warde, Cardinal

    2010-04-01

    Pulse Coupled Neural Networks (PCNNs) are a very useful tool for image processing and visual applications, since they have the advantage of being invariant to image changes such as rotation, scale, or certain distortions. Among other characteristics, the PCNN changes a given image input into a temporal representation which can easily be analyzed later for pattern recognition. The structure of a PCNN, though, makes it necessary to determine all of its parameters very carefully in order for it to function optimally, so that the responses to the kinds of inputs it will be subjected to are clearly discriminated, allowing for easy and fast post-processing that yields useful results. This tweaking of the system is a taxing process. In this paper we analyze and compare two methods for modeling PCNNs. A purely mathematical model is programmed and a similar circuit model is also designed. Both are then used to determine the optimal values of the several parameters of a PCNN: gain, threshold, and time constants for feed-in, threshold and linking, leading to an optimal design for image recognition. The results are compared for usefulness, accuracy and speed, as well as the performance and time requirements for fast and easy design, thus providing a tool for future ease of management of a PCNN for different tasks.
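    For reference, a minimal numpy sketch of the usual discrete PCNN neuron update, exposing the parameters the paper tunes (gain, threshold, time constants, linking strength); the coupling kernel and all constants here are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_step(S, F, L, Theta, Y, aF=0.1, aL=0.3, aT=0.2,
              VF=0.5, VL=0.2, VT=20.0, beta=0.3):
    """One iteration of a pulse coupled neural network over image S."""
    K = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])          # local coupling kernel (assumed)
    fire = convolve(Y.astype(float), K, mode="constant")
    F = np.exp(-aF) * F + VF * fire + S      # feeding input
    L = np.exp(-aL) * L + VL * fire          # linking input
    U = F * (1.0 + beta * L)                 # internal activity
    Y = (U > Theta).astype(np.uint8)         # pulse output
    Theta = np.exp(-aT) * Theta + VT * Y     # dynamic threshold
    return F, L, Theta, Y
```

    Starting from zero F, L, Y and a large Theta, iterating this step and recording Y.sum() per iteration yields the temporal signature the paper uses for recognition.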

  13. On the fallacy of quantitative segmentation for T1-weighted MRI

    NASA Astrophysics Data System (ADS)

    Plassard, Andrew J.; Harrigan, Robert L.; Newton, Allen T.; Rane, Swati; Pallavaram, Srivatsan; D'Haese, Pierre F.; Dawant, Benoit M.; Claassen, Daniel O.; Landman, Bennett A.

    2016-03-01

    T1-weighted magnetic resonance imaging (MRI) generates contrasts with primary sensitivity to local T1 properties (with lesser T2 and PD contributions). The observed signal intensity is determined by these local properties and the sequence parameters of the acquisition. In common practice, a range of acceptable parameters is used to ensure "similar" contrast across scanners used for any particular study (e.g., the ADNI standard MPRAGE). However, different studies may use different ranges of parameters and report the derived data as simply "T1-weighted". Physics and imaging authors pay strong heed to the specifics of the imaging sequences, but image processing authors have historically been more lax. Herein, we consider three T1-weighted sequences acquired with the same underlying protocol (MPRAGE) and vendor (Philips), but with "normal study-to-study variation" in parameters. We show that the gray matter/white matter/cerebrospinal fluid contrast is subtly but systematically different between these images and yields systematically different measurements of brain volume. The problem derives from visually apparent boundary shifts, which would also be seen by a human rater. We present and evaluate two solutions to produce consistent segmentation results across imaging protocols. First, we propose to acquire multiple sequences on a subset of the data and use the multi-modal imaging as atlases to segment target images acquired with any of the available sequences. Second (if additional imaging is not available), we propose to synthesize atlases of the target imaging sequence and use the synthesized atlases in place of atlas imaging data. Both approaches significantly improve the consistency of target labeling.

  14. Time-reversal imaging for classification of submerged elastic targets via Gibbs sampling and the Relevance Vector Machine.

    PubMed

    Dasgupta, Nilanjan; Carin, Lawrence

    2005-04-01

    Time-reversal imaging (TRI) is analogous to matched-field processing, although TRI is typically very wideband and is appropriate for subsequent target classification (in addition to localization). Time-reversal techniques, as applied to acoustic target classification, are highly sensitive to channel mismatch. Hence, it is crucial to estimate the channel parameters before time-reversal imaging is performed. The channel-parameter statistics are estimated here by applying a geoacoustic inversion technique based on Gibbs sampling. The maximum a posteriori (MAP) estimate of the channel parameters is then used to perform time-reversal imaging. Time-reversal implementation requires a fast forward model, implemented here by a normal-mode framework. In addition to imaging, the extraction of features from the time-reversed images is explored, with these applied to subsequent target classification. The classification of time-reversed signatures is performed by the relevance vector machine (RVM). The efficacy of the technique is analyzed on simulated in-channel data generated by a free-field finite element method (FEM) code in conjunction with a channel propagation model, wherein the final classification performance is demonstrated to be relatively insensitive to the associated channel parameters. The underlying theory of Gibbs sampling and TRI is presented along with the feature extraction and target classification via the RVM.

  15. Spatio-temporal diffusion of dynamic PET images

    NASA Astrophysics Data System (ADS)

    Tauber, C.; Stute, S.; Chau, M.; Spiteri, P.; Chalon, S.; Guilloteau, D.; Buvat, I.

    2011-10-01

    Positron emission tomography (PET) images are corrupted by noise. This is especially true in dynamic PET imaging, where short frames are required to capture the peak of activity concentration after the radiotracer injection. High noise results in a possible bias in quantification, as the compartmental models used to estimate the kinetic parameters are sensitive to noise. This paper describes a new post-reconstruction filter to increase the signal-to-noise ratio in dynamic PET imaging. It consists of a spatio-temporal robust diffusion of the 4D image based on the time activity curve (TAC) in each voxel. It reduces the noise in homogeneous areas while preserving the distinct kinetics in regions of interest corresponding to different underlying physiological processes. Neither anatomical priors nor the kinetic model are required. We propose an automatic selection of the scale parameter involved in the diffusion process, based on a robust statistical analysis of the distances between TACs. The method is evaluated using Monte Carlo simulations of brain activity distributions. We demonstrate the usefulness of the method and its superior performance over two other post-reconstruction spatial and temporal filters. Our simulations suggest that the proposed method can be used to significantly increase the signal-to-noise ratio in dynamic PET imaging.
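    A simplified numpy sketch of the idea: Perona-Malik-style diffusion of a 4D image where conduction between neighboring voxels depends on the distance between their TACs, with the scale parameter set automatically from a robust statistic of those distances (the MAD-based choice and the boundary handling here are assumptions; the paper's exact robust analysis may differ).

```python
import numpy as np

def tac_diffusion_step(img, dt=0.2):
    """One diffusion step of img with shape (T, Z, Y, X); TAC = img[:, z, y, x].

    Uses periodic boundaries (np.roll) purely for brevity.
    """
    out = img.copy()
    for axis in (1, 2, 3):                    # the three spatial axes
        fwd = np.roll(img, -1, axis=axis) - img           # TAC differences to neighbor
        d = np.sqrt((fwd ** 2).sum(axis=0))               # distance between neighbor TACs
        K = 1.4826 * np.median(np.abs(d - np.median(d)))  # robust scale (MAD, assumed)
        g = np.exp(-(d / (K + 1e-12)) ** 2)               # conduction coefficient
        flux = g[None] * fwd
        out += dt * (flux - np.roll(flux, 1, axis=axis))  # divergence of the flux
    return out
```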

  16. Geometric correction method for 3d in-line X-ray phase contrast image reconstruction

    PubMed Central

    2014-01-01

    Background: Mechanical imperfection or misalignment of X-ray phase contrast imaging (XPCI) components causes projection data to be misplaced, and thus results in blurred reconstructed computed tomography (CT) slice images or edge artifacts. The features of the biological microstructures under investigation are thereby destroyed unexpectedly, and the spatial resolution of the XPCI image is decreased. This makes data correction an essential pre-processing step for CT reconstruction in XPCI. Methods: To remove unexpected blurs and edge artifacts, a mathematical model for in-line XPCI is built in this paper by considering the primary geometric parameters, which include a rotation angle and a shift. Optimal geometric parameters are obtained by solving a maximization problem. An iterative approach is employed to solve the maximization problem using a two-step scheme, which performs a composite geometric transformation and then follows with a linear regression process. After applying the geometric transformation with optimal parameters to the projection data, the standard filtered back-projection algorithm is used to reconstruct CT slice images. Results: Numerical experiments were carried out on both synthetic and real in-line XPCI datasets. Experimental results demonstrate that the proposed method improves CT image quality by removing both blurring and edge artifacts at the same time, compared to existing correction methods. Conclusions: The method proposed in this paper provides an effective projection data correction scheme and significantly improves image quality by removing both blurring and edge artifacts at the same time for in-line XPCI. It is easy to implement and can also be extended to other XPCI techniques. PMID:25069768

  17. Quantitative optical scanning tests of complex microcircuits

    NASA Technical Reports Server (NTRS)

    Erickson, J. J.

    1980-01-01

    An approach for the development of the optical scanner as a screening inspection instrument for microcircuits involves comparing the quantitative differences in photoresponse images and then correlating them with electrical parameter differences in test devices. The existing optical scanner was modified so that the photoresponse data could be recorded and subsequently digitized. A method was devised for applying digital image processing techniques to the digitized photoresponse data in order to quantitatively compare the data. Electrical tests were performed and photoresponse images were recorded before and after life test intervals on two groups of test devices. Correlations were made between differences or changes in the photoresponse images and those in the electrical parameters of the test devices.

  18. Analysis of signal-dependent sensor noise on JPEG 2000-compressed Sentinel-2 multi-spectral images

    NASA Astrophysics Data System (ADS)

    Uss, M.; Vozel, B.; Lukin, V.; Chehdi, K.

    2017-10-01

    The processing chain of Sentinel-2 MultiSpectral Instrument (MSI) data involves filtering and compression stages that modify MSI sensor noise. As a result, the noise in the Sentinel-2 Level-1C data distributed to users is processed noise. We demonstrate that the processed noise variance model is bivariate: noise variance depends on image intensity (caused by the signal-dependency of photon counting detectors) and on signal-to-noise ratio (SNR; caused by filtering/compression). To provide information on the processed noise parameters, which is missing from the Sentinel-2 metadata, we propose to use a blind noise parameter estimation approach. Existing methods are restricted to univariate noise models. Therefore, we propose an extension of the existing vcNI+fBm blind noise parameter estimation method to a multivariate noise model, mvcNI+fBm, and apply it to each band of Sentinel-2A data. The obtained results clearly demonstrate that noise variance is affected by filtering/compression for SNR less than about 15. Processed noise variance is reduced by a factor of 2 to 5 in homogeneous areas as compared to the noise variance at high SNR values. Estimates of the noise variance model parameters are provided for each Sentinel-2A band. The Sentinel-2A MSI Level-1C noise models obtained in this paper could be useful for end users and researchers working in a variety of remote sensing applications.

  19. MOPEX: a software package for astronomical image processing and visualization

    NASA Astrophysics Data System (ADS)

    Makovoz, David; Roby, Trey; Khan, Iffat; Booth, Hartley

    2006-06-01

    We present MOPEX - a software package for astronomical image processing and display. The package is a combination of command-line driven image processing software written in C/C++ with a Java-based GUI. The main image processing capabilities include creating mosaic images, image registration, background matching, and point source extraction, as well as a number of minor image processing tasks. The combination of the image processing and display capabilities allows for a much more intuitive and efficient way of performing image processing. The GUI allows the control of image processing and display to be closely intertwined. Parameter setting, validation, and specific processing options are entered by the user through a set of intuitive dialog boxes. Visualization feeds back into further processing by providing prompt feedback on the processing results. The GUI also allows for further analysis by accessing and displaying data from existing image and catalog servers using a virtual observatory approach. Even though it was originally designed for the Spitzer Space Telescope mission, many of its functions are of general usefulness and can be used for working with existing astronomical data and for new missions. The software used in the package has undergone intensive testing and benefited greatly from effective software reuse. The visualization part has been used for observation planning for both the Spitzer and Herschel Space Telescopes as part of the tool Spot. The visualization capabilities of Spot have been enhanced and integrated with the image processing functionality of the command-line driven MOPEX. The image processing software is used in the Spitzer automated pipeline processing, which has been in operation for nearly 3 years. The image processing capabilities have also been tested in off-line processing by numerous astronomers at various institutions around the world. The package is multi-platform and includes automatic update capabilities. The software package has been developed by a small group of software developers and scientists at the Spitzer Science Center. It is available for distribution at the Spitzer Science Center web page.

  20. Sequential weighted Wiener estimation for extraction of key tissue parameters in color imaging: a phantom study

    NASA Astrophysics Data System (ADS)

    Chen, Shuo; Lin, Xiaoqian; Zhu, Caigang; Liu, Quan

    2014-12-01

    Key tissue parameters, e.g., total hemoglobin concentration and tissue oxygenation, are important biomarkers in clinical diagnosis for various diseases. Although point measurement techniques based on diffuse reflectance spectroscopy can accurately recover these tissue parameters, they are not suitable for the examination of a large tissue region due to slow data acquisition. Previous imaging studies have shown that hemoglobin concentration and oxygenation can be estimated from color measurements with the assumption of known scattering properties, which is impractical in clinical applications. To overcome this limitation and speed up image processing, we propose a method of sequential weighted Wiener estimation (WE) to quickly extract key tissue parameters, including total hemoglobin concentration (CtHb), hemoglobin oxygenation (StO2), scatterer density (α), and scattering power (β), from wide-band color measurements. This method takes advantage of the fact that each parameter is sensitive to the color measurements in a different way and attempts to maximize the contribution of those color measurements likely to generate correct results in WE. The method was evaluated on skin phantoms with varying CtHb, StO2, and scattering properties. The results demonstrate excellent agreement between the estimated tissue parameters and the corresponding reference values. Compared with traditional WE, the sequential weighted WE shows significant improvement in estimation accuracy. This method could be used to monitor tissue parameters in an imaging setup in real time.
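    A small numpy sketch of the core linear (Wiener) estimation step: given training pairs of tissue parameters x and color measurements y, the estimator is the matrix W = C_xy C_yy^{-1} applied to mean-centered measurements. The sequential, per-parameter reweighting of color channels described above is omitted here as a simplification.

```python
import numpy as np

def wiener_matrix(X, Y):
    """X: (n, p) tissue parameters, Y: (n, c) color measurements (training)."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Cxy = Xc.T @ Yc / len(X)            # cross-covariance (p x c)
    Cyy = Yc.T @ Yc / len(Y)            # measurement covariance (c x c)
    return Cxy @ np.linalg.inv(Cyy)

def wiener_estimate(W, y, x_mean, y_mean):
    """Estimate the parameter vector for one new color measurement y."""
    return x_mean + W @ (y - y_mean)
```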

  1. Quantitative assessment of the impact of biomedical image acquisition on the results obtained from image analysis and processing.

    PubMed

    Koprowski, Robert

    2014-07-04

    Dedicated, automatic algorithms for image analysis and processing are becoming more and more common in medical diagnosis. When creating dedicated algorithms, many factors must be taken into consideration. They are associated with selecting the appropriate algorithm parameters and taking into account the impact of data acquisition on the results obtained. An important feature of algorithms is the possibility of their use in other medical units by other operators. This problem, namely operator's (acquisition) impact on the results obtained from image analysis and processing, has been shown on a few examples. The analysed images were obtained from a variety of medical devices such as thermal imaging, tomography devices and those working in visible light. The objects of imaging were cellular elements, the anterior segment and fundus of the eye, postural defects and others. In total, almost 200'000 images coming from 8 different medical units were analysed. All image analysis algorithms were implemented in C and Matlab. For various algorithms and methods of medical imaging, the impact of image acquisition on the results obtained is different. There are different levels of algorithm sensitivity to changes in the parameters, for example: (1) for microscope settings and the brightness assessment of cellular elements there is a difference of 8%; (2) for the thyroid ultrasound images there is a difference in marking the thyroid lobe area which results in a brightness assessment difference of 2%. The method of image acquisition in image analysis and processing also affects: (3) the accuracy of determining the temperature in the characteristic areas on the patient's back for the thermal method - error of 31%; (4) the accuracy of finding characteristic points in photogrammetric images when evaluating postural defects - error of 11%; (5) the accuracy of performing ablative and non-ablative treatments in cosmetology - error of 18% for the nose, 10% for the cheeks, and 7% for the forehead. Similarly, when: (7) measuring the anterior eye chamber - there is an error of 20%; (8) measuring the tooth enamel thickness - error of 15%; (9) evaluating the mechanical properties of the cornea during pressure measurement - error of 47%. The paper presents vital, selected issues occurring when assessing the accuracy of designed automatic algorithms for image analysis and processing in bioengineering. The impact of acquisition of images on the problems arising in their analysis has been shown on selected examples. It has also been indicated to which elements of image analysis and processing special attention should be paid in their design.

  2. Fast automated analysis of strong gravitational lenses with convolutional neural networks.

    PubMed

    Hezaveh, Yashar D; Levasseur, Laurence Perreault; Marshall, Philip J

    2017-08-30

    Quantifying image distortions caused by strong gravitational lensing-the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures-and estimating the corresponding matter distribution of these structures (the 'gravitational lens') has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. Here we report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the 'singular isothermal ellipsoid' density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.

  3. Advanced Imaging Optics Utilizing Wavefront Coding.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scrymgeour, David; Boye, Robert; Adelsberger, Kathleen

    2015-06-01

    Image processing offers the potential to simplify an optical system by shifting some of the imaging burden from lenses to the more cost-effective electronics. Wavefront coding using a cubic phase plate combined with image processing can extend the system's depth of focus, reducing many of the focus-related aberrations as well as material-related chromatic aberrations. However, the optimal design process and physical limitations of wavefront coding systems with respect to first-order optical parameters and noise are not well documented. We examined the image quality of simulated and experimental wavefront coded images before and after reconstruction in the presence of noise. Challenges in the implementation of cubic phase in an optical system are discussed. In particular, we found that limitations must be placed on system noise, aperture, field of view and bandwidth to develop a robust wavefront coded system.
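    A short Fourier-optics sketch of how a cubic phase plate behaves: the pupil gets a cubic phase term, and the incoherent PSF (here with an added defocus term to probe focus insensitivity) is the squared magnitude of the pupil's Fourier transform. The strength alpha and the grid are illustrative assumptions, not the paper's design values.

```python
import numpy as np

def cubic_phase_psf(n=256, alpha=20.0, defocus=0.0):
    """PSF of a wavefront-coded pupil: phase = alpha*(x^3+y^3) + defocus*(x^2+y^2)."""
    x = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(x, x)
    aperture = (X**2 + Y**2) <= 1.0                      # circular pupil
    phase = alpha * (X**3 + Y**3) + defocus * (X**2 + Y**2)
    pupil = aperture * np.exp(1j * 2.0 * np.pi * phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2
    return psf / psf.sum()
```

    Sweeping the defocus term shows that the cubic-phase PSF changes little with focus, which is what allows a single deconvolution kernel to restore images across an extended depth of field.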

  4. A theory of fine structure image models with an application to detection and classification of dementia.

    PubMed

    O'Neill, William; Penn, Richard; Werner, Michael; Thomas, Justin

    2015-06-01

    Estimation of stochastic process models from data is a common application of time series analysis methods. Such system identification processes are often cast as hypothesis testing exercises whose intent is to estimate model parameters and test them for statistical significance. Ordinary least squares (OLS) regression and the Levenberg-Marquardt algorithm (LMA) have proven invaluable computational tools for models described by non-homogeneous, linear, stationary, ordinary differential equations. In this paper we extend stochastic model identification to linear, stationary, partial differential equations in two independent variables (2D) and show that OLS and LMA apply equally well to these systems. The method employs an original nonparametric statistic as a test for the significance of estimated parameters. We show that gray-scale and color images are special cases of 2D systems satisfying a particular autoregressive partial difference equation which estimates an analogous partial differential equation. Several applications to medical image modeling and classification illustrate the method by correctly classifying demented and normal OLS models of axial magnetic resonance brain scans according to subject Mini Mental State Exam (MMSE) scores. Comparison with 13 image classifiers from the literature indicates our classifier is at least 14 times faster than any of them and has a classification accuracy better than all but one. Our modeling method applies to any linear, stationary, partial differential equation and is readily extended to 3D whole-organ systems. Further, in addition to being a robust image classifier, estimated image models offer insights into which parameters carry the most diagnostic image information and thereby suggest finer divisions could be made within a class. Image models can be estimated in milliseconds, which translates to whole-organ models in seconds; such runtimes could make real-time medicine and surgery modeling possible.
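    A minimal numpy sketch of the special case the paper exploits: fitting a 2D autoregressive image model by ordinary least squares, where each pixel is regressed on a small causal neighborhood and the fitted coefficients serve as the model/feature vector. The particular 3-neighbor stencil is an assumption for illustration.

```python
import numpy as np

def fit_2d_ar(img):
    """OLS fit of u[i,j] ~ a*u[i-1,j] + b*u[i,j-1] + c*u[i-1,j-1] + d."""
    u = img.astype(float)
    y = u[1:, 1:].ravel()
    A = np.column_stack([
        u[:-1, 1:].ravel(),    # vertical neighbor
        u[1:, :-1].ravel(),    # horizontal neighbor
        u[:-1, :-1].ravel(),   # diagonal neighbor
        np.ones(y.size),       # intercept
    ])
    coef, residuals, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef                # per-image model parameters / classifier features
```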

  5. Development of 2D deconvolution method to repair blurred MTSAT-1R visible imagery

    NASA Astrophysics Data System (ADS)

    Khlopenkov, Konstantin V.; Doelling, David R.; Okuyama, Arata

    2014-09-01

    Spatial cross-talk has been discovered in the visible channel data of the Multi-functional Transport Satellite (MTSAT)-1R. The slight image blurring is attributed to an imperfection in the mirror surface caused either by flawed polishing or a dust contaminant. An image processing methodology is described that employs a two-dimensional deconvolution routine to recover the original undistorted MTSAT-1R data counts. The methodology assumes that the dispersed portion of the signal is small and distributed randomly around the optical axis, which allows the image blurring to be described by a point spread function (PSF) based on the Gaussian profile. The PSF is described by 4 parameters, which are solved with a maximum likelihood estimator that uses coincident, collocated MTSAT-2 images as truth. A subpixel image matching technique is used to align the MTSAT-2 pixels into the MTSAT-1R projection and to correct for navigation errors and cloud displacement due to the time and viewing geometry differences between the two satellite observations. An optimal set of PSF parameters is derived by an iterative routine based on the 4-dimensional Powell's conjugate direction method, which minimizes the difference between the PSF-corrected MTSAT-1R and collocated MTSAT-2 images. This iterative approach is computationally intensive and was optimized analytically as well as by coding in assembly language incorporating parallel processing. The PSF parameters were found to be consistent over the 5 days of available daytime coincident MTSAT-1R and MTSAT-2 images, and can easily be applied to the MTSAT-1R imager pixel-level counts to restore the original quality of the entire MTSAT-1R record.
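    A compact sketch of this kind of estimation loop, using scipy's Powell minimizer: blur the reference (truth) image with a PSF parameterized by the unknowns and minimize the mismatch against the distorted image. The 4-parameter model below (dispersed amplitude, Gaussian width, and two offsets) is an illustrative stand-in for the paper's exact parameterization.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift
from scipy.optimize import minimize

def fit_psf(distorted, truth):
    """Find PSF parameters so that blurring `truth` best matches `distorted`."""
    def cost(p):
        amp, sigma, dx, dy = p
        # direct signal plus a small dispersed (blurred, offset) component
        blurred = (1.0 - amp) * truth + amp * gaussian_filter(
            shift(truth, (dy, dx), order=1), abs(sigma))
        return float(((blurred - distorted) ** 2).mean())

    res = minimize(cost, x0=[0.1, 2.0, 0.0, 0.0], method="Powell")
    return res.x
```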

  6. Textural analyses of carbon fiber materials by 2D-FFT of complex images obtained by high frequency eddy current imaging (HF-ECI)

    NASA Astrophysics Data System (ADS)

    Schulze, Martin H.; Heuer, Henning

    2012-04-01

    Carbon fiber based materials are used in many lightweight applications in aeronautical, automotive, machine and civil engineering. With the increasing automation of the production process of CFRP laminates, a manual optical inspection of each resin transfer molding (RTM) layer is not practicable. Due to the limitation to surface inspection, the quality parameters of multilayer 3-dimensional materials cannot be observed by optical systems. Imaging eddy-current (EC) NDT is the only suitable inspection method for non-resin materials in the textile state that allows an inspection of surface and hidden layers in parallel. The HF-ECI method has the capability to measure layer displacements (misaligned angle orientations) and gap sizes in a multilayer carbon fiber structure. The EC technique uses the variation of the electrical conductivity of carbon-based materials to obtain material properties. Besides the determination of textural parameters like layer orientation and gap sizes between rovings, the method can also detect foreign polymer particles and fuzzy balls, and visualize undulations. For all of these typical parameters, an imaging classification process chain based on a high-resolution directional EC-imaging device named EddyCus® MPECS and a 2D-FFT with adapted preprocessing algorithms is developed.
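    A small numpy sketch of one building block of such a chain: extracting a dominant fiber/layer orientation from the 2D FFT of an image patch. The angle of maximal spectral energy is perpendicular to the dominant stripe direction in the image; the windowing and angular binning choices are assumptions.

```python
import numpy as np

def dominant_orientation(patch, n_bins=180):
    """Return the dominant orientation (degrees, 0-180) of a 2D image patch."""
    win = np.hanning(patch.shape[0])[:, None] * np.hanning(patch.shape[1])[None, :]
    spec = np.abs(np.fft.fftshift(np.fft.fft2(patch * win))) ** 2
    cy, cx = np.array(spec.shape) // 2
    Y, X = np.indices(spec.shape)
    angles = np.degrees(np.arctan2(Y - cy, X - cx)) % 180.0
    r = np.hypot(Y - cy, X - cx)
    mask = r > 2                                      # suppress the DC peak
    hist = np.zeros(n_bins)
    idx = (angles[mask] / 180.0 * n_bins).astype(int) % n_bins
    np.add.at(hist, idx, spec[mask])                  # energy per angle bin
    theta_spec = hist.argmax() * 180.0 / n_bins       # angle of spectral energy
    return (theta_spec + 90.0) % 180.0                # stripes are perpendicular to it
```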

  7. Calibration of imaging parameters for space-borne airglow photography using city light positions

    NASA Astrophysics Data System (ADS)

    Hozumi, Yuta; Saito, Akinori; Ejiri, Mitsumu K.

    2016-09-01

    A new method for calibrating imaging parameters of photographs taken from the International Space Station (ISS) is presented in this report. Airglow in the mesosphere and the F-region ionosphere was captured on the limb of the Earth with a digital single-lens reflex camera from the ISS by astronauts. To utilize the photographs as scientific data, imaging parameters, such as the angle of view, exact position, and orientation of the camera, should be determined because they are not measured at the time of imaging. A new calibration method using city light positions shown in the photographs was developed to determine these imaging parameters with an accuracy suitable for airglow study. Applying the pinhole camera model, the apparent city light positions on the photograph are matched with the actual city light locations on Earth, which are derived from the global nighttime stable light map data obtained by the Defense Meteorological Satellite Program satellite. The correct imaging parameters are determined in an iterative process by matching the apparent positions on the image with the actual city light locations. We applied this calibration method to photographs taken on August 26, 2014, and confirmed that the result is correct. The precision of the calibration was evaluated by comparing the results from six different photographs with the same imaging parameters. The precisions in determining the camera position and orientation are estimated to be ±2.2 km and ±0.08°, respectively. The 0.08° difference in the orientation yields a 2.9-km difference at a tangential point of 90 km in altitude. The airglow structures in the photographs were mapped to geographical points using the calibrated imaging parameters and compared with a simultaneous observation by the Visible and near-Infrared Spectral Imager of the Ionosphere, Mesosphere, Upper Atmosphere, and Plasmasphere mapping mission installed on the ISS. The comparison shows good agreement and supports the validity of the calibration. This calibration technique makes it possible to utilize nighttime photographs taken from low-Earth-orbit satellites as a reference for airglow and aurora structures.
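    A sketch of the kind of residual such a calibration minimizes: project known city-light positions through a pinhole model with a trial attitude and focal length, and measure the pixel mismatch against the detected positions; an optimizer (e.g., scipy.optimize.minimize) then iterates on the parameters. All names and the parameterization are illustrative assumptions; in practice the city positions would come from the DMSP stable light map, expressed relative to the camera.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def reprojection_error(params, city_xyz_rel, detected_px):
    """params = [yaw, pitch, roll (deg), fx] -- toy pinhole parameterization.

    city_xyz_rel: (n, 3) city positions relative to the camera, in a
    reference frame before applying the camera attitude.
    detected_px: (n, 2) apparent city-light positions on the photograph.
    """
    yaw, pitch, roll, fx = params
    R = Rotation.from_euler("zyx", [yaw, pitch, roll], degrees=True)
    p = R.apply(city_xyz_rel)                    # rotate into the camera frame
    uv = fx * p[:, :2] / p[:, 2:3]               # pinhole projection
    return float(np.sqrt(((uv - detected_px) ** 2).sum(axis=1)).mean())
```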

  8. Relationships between digital signal processing and control and estimation theory

    NASA Technical Reports Server (NTRS)

    Willsky, A. S.

    1978-01-01

    Research areas associated with digital signal processing and control and estimation theory are identified. Particular attention is given to image processing, system identification problems (parameter identification, linear prediction, least squares, Kalman filtering), stability analyses (the use of the Liapunov theory, frequency domain criteria, passivity), and multiparameter systems, distributed processes, and random fields.

  9. Internet (WWW) based system of ultrasonic image processing tools for remote image analysis.

    PubMed

    Zeng, Hong; Fei, Ding-Yu; Fu, Cai-Ting; Kraft, Kenneth A

    2003-07-01

    Ultrasonic Doppler color imaging can provide anatomic information and simultaneously render flow information within blood vessels for diagnostic purposes. Many researchers are currently developing ultrasound image processing algorithms in order to provide physicians with accurate clinical parameters from the images. Because researchers use a variety of computer languages and work on different computer platforms to implement their algorithms, it is difficult for other researchers and physicians to access those programs. A system has been developed using World Wide Web (WWW) technologies and HTTP communication protocols to publish our ultrasonic Angle Independent Doppler Color Image (AIDCI) processing algorithm and several general measurement tools on the Internet, where authorized researchers and physicians can easily access the program using web browsers to carry out remote analysis of their local ultrasonic images or images provided from the database. In order to overcome potential incompatibility between programs and users' computer platforms, ActiveX technology was used in this project. The technique developed may also be used for other research fields.

  10. On-line monitoring of fluid bed granulation by photometric imaging.

    PubMed

    Soppela, Ira; Antikainen, Osmo; Sandler, Niklas; Yliruusi, Jouko

    2014-11-01

    This paper introduces and discusses a photometric surface imaging approach for on-line monitoring of fluid bed granulation. Five granule batches consisting of paracetamol and varying amounts of lactose and microcrystalline cellulose were manufactured with an instrumented fluid bed granulator. Photometric images and NIR spectra were continuously captured on-line, and particle size information was extracted from them. Key process parameters were also recorded. The images provided direct real-time information on the growth, attrition and packing behaviour of the batches. Moreover, decreasing image brightness in the drying phase was found to indicate granule drying. The changes observed in the image data were also linked to the moisture and temperature profiles of the processes. Combined with complementary process analytical tools, photometric imaging opens up possibilities for improved real-time evaluation of fluid bed granulation. Furthermore, images can give valuable insight into the behaviour of excipients or formulations during product development. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. THELMA: a mobile app for crowdsourcing environmental data

    NASA Astrophysics Data System (ADS)

    Hintz, Kenneth J.; Hintz, Christopher J.; Almomen, Faris; Adounvo, Christian; D'Amato, Michael

    2014-06-01

    The collection of environmental light pollution data related to sea turtle nesting sites is a laborious and time consuming effort entailing the use of several pieces of measurement equipment, their transportation and calibration, the manual logging of results in the field, and subsequent transfer of the data to a computer for post-collection analysis. Serendipitously, the current generation of mobile smart phones (e.g., iPhone® 5) contains the requisite measurement capability, namely location data in aided GPS coordinates, magnetic compass heading, and elevation at the time an image is taken, image parameter data, and the image itself. The Turtle Habitat Environmental Light Measurement App (THELMA) is a mobile phone app whose graphical user interface (GUI) guides an untrained user through the image acquisition process in order to capture 360° of images with pointing guidance. It subsequently uploads the user-tagged images, all of the associated image parameters, and position, azimuth, elevation metadata to a central internet repository. Provision is also made for the capture of calibration images and the review of images before upload. THELMA allows for inexpensive, highly-efficient, worldwide crowdsourcing of calibratable beachfront lighting/light pollution data collected by untrained volunteers. This data can be later processed, analyzed, and used by scientists conducting sea turtle conservation in order to identify beach locations with hazardous levels of light pollution that may alter sea turtle behavior and necessitate human intervention after hatchling emergence.

  12. Optimization and Simulation of SLM Process for High Density H13 Tool Steel Parts

    NASA Astrophysics Data System (ADS)

    Laakso, Petri; Riipinen, Tuomas; Laukkanen, Anssi; Andersson, Tom; Jokinen, Antero; Revuelta, Alejandro; Ruusuvuori, Kimmo

    This paper demonstrates the successful printing and optimization of processing parameters of high-strength H13 tool steel by Selective Laser Melting (SLM). A D-Optimal Design of Experiments (DOE) approach is used for parameter optimization of laser power, scanning speed and hatch width. With 50 test samples (1 × 1 × 1 cm) we establish parameter windows for these three parameters in relation to part density. The calculated numerical model is found to be in good agreement with the density data obtained from the samples using image analysis. A thermomechanical finite element simulation model is constructed of the SLM process and validated by comparing the calculated densities retrieved from the model with the experimentally determined densities. With the simulation tool one can explore the effect of different parameters on density before making any printed samples. Establishing a parameter window provides the user with freedom for parameter selection, such as choosing parameters that result in the fastest print speed.

  13. Stochastic simulation by image quilting of process-based geological models

    NASA Astrophysics Data System (ADS)

    Hoffimann, Júlio; Scheidt, Céline; Barfod, Adrian; Caers, Jef

    2017-09-01

    Process-based modeling offers a way to represent realistic geological heterogeneity in subsurface models. The main limitation lies in conditioning such models to data. Multiple-point geostatistics can use these process-based models as training images and address the data conditioning problem. In this work, we further develop image quilting as a method for 3D stochastic simulation capable of mimicking the realism of process-based geological models with minimal modeling effort (i.e. parameter tuning) and at the same time condition them to a variety of data. In particular, we develop a new probabilistic data aggregation method for image quilting that bypasses traditional ad-hoc weighting of auxiliary variables. In addition, we propose a novel criterion for template design in image quilting that generalizes the entropy plot for continuous training images. The criterion is based on the new concept of voxel reuse, a stochastic and quilting-aware function of the training image. We compare our proposed method with other established simulation methods on a set of process-based training images of varying complexity, including a real-case example of stochastic simulation of the buried-valley groundwater system in Denmark.

  14. Camera sensor arrangement for crop/weed detection accuracy in agronomic images.

    PubMed

    Romeo, Juan; Guerrero, José Miguel; Montalvo, Martín; Emmi, Luis; Guijarro, María; Gonzalez-de-Santos, Pablo; Pajares, Gonzalo

    2013-04-02

    In Precision Agriculture, images coming from camera-based sensors are commonly used for weed identification and crop line detection, either to apply specific treatments or for vehicle guidance purposes. Accuracy of identification and detection is an important issue to be addressed in image processing. There are two main types of parameters affecting the accuracy of the images, namely: (a) extrinsic, related to the sensor's positioning in the tractor; (b) intrinsic, related to the sensor specifications, such as CCD resolution, focal length or iris aperture, among others. Moreover, in agricultural applications, the uncontrolled illumination, existing in outdoor environments, is also an important factor affecting the image accuracy. This paper is exclusively focused on two main issues, always with the goal to achieve the highest image accuracy in Precision Agriculture applications, making the following two main contributions: (a) camera sensor arrangement, to adjust extrinsic parameters and (b) design of strategies for controlling the adverse illumination effects.

  15. Image Segmentation Method Using Fuzzy C Mean Clustering Based on Multi-Objective Optimization

    NASA Astrophysics Data System (ADS)

    Chen, Jinlin; Yang, Chunzhi; Xu, Guangkui; Ning, Li

    2018-04-01

    Image segmentation is not only one of the hottest topics in digital image processing, but also an important part of computer vision applications. As one kind of image segmentation algorithm, fuzzy C-means clustering is an effective and concise segmentation algorithm. However, the drawback of FCM is that it is sensitive to image noise. To solve this problem, this paper designs a novel fuzzy C-means clustering algorithm based on multi-objective optimization. We add a parameter λ to the fuzzy distance measurement formula to improve the multi-objective optimization; the parameter λ adjusts the weights of the pixel local information. In the algorithm, the local correlation of neighboring pixels is added to the improved multi-objective mathematical model to optimize the clustering centers. Two different experimental results show that the novel fuzzy C-means approach performs efficiently, in both segmentation quality and computational time, when segmenting images corrupted by different types of noise.

  16. Solar corona during the total solar eclipse of 2009. (Czech Title: Sluneční koróna během zatmění Slunce v roce 2009)

    NASA Astrophysics Data System (ADS)

    Marková, E.; Bělík, M.; Křivský, L.; Druckmüller, M.

    2010-12-01

    This work is focused on primary processing of the solar eclipse observations of July 22, 2009. As part of the "Shadow-tracking expedition" project, several expeditions were organized to observe the phenomenon. Unfortunately, bad weather conditions prevented a successful observation in the China region. Pre-processing was carried out on images taken at Enewetak Atoll in the Marshall Islands. From the isophote evolution a corona flattening was found, and from the processed fine-structure images a parameter called "source area radius", used mainly for calculations in models of coronal magnetic fields, was determined. Both of these parameters supplement the data obtained during previous eclipses, and the first conclusions on the state of the corona during the eclipse are deduced.

  17. 40 CFR 63.11094 - What are my recordkeeping requirements?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...)(1) of this section is an exact duplicate image of the original paper record with certifying... section is an exact duplicate image of the original paper record with certifying signatures. (ii) The... approval to use a vapor processing system or monitor an operating parameter other than those specified in...

  18. 40 CFR 63.11094 - What are my recordkeeping requirements?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...)(1) of this section is an exact duplicate image of the original paper record with certifying... section is an exact duplicate image of the original paper record with certifying signatures. (ii) The... approval to use a vapor processing system or monitor an operating parameter other than those specified in...

  19. 40 CFR 63.11094 - What are my recordkeeping requirements?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...)(1) of this section is an exact duplicate image of the original paper record with certifying... section is an exact duplicate image of the original paper record with certifying signatures. (ii) The... approval to use a vapor processing system or monitor an operating parameter other than those specified in...

  20. 40 CFR 63.11094 - What are my recordkeeping requirements?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...)(1) of this section is an exact duplicate image of the original paper record with certifying... section is an exact duplicate image of the original paper record with certifying signatures. (ii) The... approval to use a vapor processing system or monitor an operating parameter other than those specified in...

  1. A research on radiation calibration of high dynamic range based on the dual channel CMOS

    NASA Astrophysics Data System (ADS)

    Ma, Kai; Shi, Zhan; Pan, Xiaodong; Wang, Yongsheng; Wang, Jianghua

    2017-10-01

    The dual-channel complementary metal-oxide semiconductor (CMOS) sensor can produce a high dynamic range (HDR) image by extending the gray-level range through fusion of the high-gain and low-gain channel images of the same frame. The dual-channel fusion process uses the radiation response coefficients of each pixel in the two channels to calculate the gray level of that pixel in the HDR image. Because the radiation response coefficients play a crucial role in image fusion, an effective method is needed to acquire these parameters. This article investigates radiation calibration for high dynamic range imaging based on the dual-channel CMOS and designs an experiment to calibrate the radiation response coefficients of the sensor used. Finally, the calibrated response parameters are applied to the dual-channel CMOS, verifying the correctness and feasibility of the method presented in this paper.
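    A toy numpy sketch of dual-gain HDR fusion along these lines: each channel's counts are mapped back to radiance through its calibrated linear response, and the high-gain value is preferred except where it saturates. The linear response model and the saturation threshold are assumptions, not values from the paper.

```python
import numpy as np

def fuse_dual_gain(high, low, gain_h, gain_l, offset_h=0.0, offset_l=0.0,
                   sat=4000):
    """Fuse same-frame high/low gain images into one radiance (HDR) image.

    Assumes calibrated linear responses: counts = gain * radiance + offset.
    """
    rad_h = (high.astype(float) - offset_h) / gain_h
    rad_l = (low.astype(float) - offset_l) / gain_l
    # use the low-noise high-gain channel wherever it is not saturated
    return np.where(high < sat, rad_h, rad_l)
```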

  2. Implicit multiplane 3D camera calibration matrices for stereo image processing

    NASA Astrophysics Data System (ADS)

    McKee, James W.; Burgett, Sherrie J.

    1997-12-01

    By implicit camera calibration, we mean the process of calibrating cameras without explicitly computing their physical parameters. We introduce a new implicit model based on a generalized mapping between an image plane and multiple, parallel calibration planes (usually between four and seven planes). This paper presents a method of computing a relationship between a point on a three-dimensional (3D) object and its corresponding two-dimensional (2D) coordinate in a camera image. This relationship is expanded to form a mapping of points in 3D space to points in image (camera) space and vice versa that requires only matrix multiplication operations. This paper presents the rationale behind the selection of the forms of four matrices and the algorithms to calculate the parameters for the matrices. Two of the matrices are used to map 3D points in object space to 2D points on the CCD camera image plane. The other two matrices are used to map 2D points on the image plane to points on user-defined planes in 3D object space. The mappings include compensation for lens distortion and measurement errors. The number of parameters used can be increased, in a straightforward fashion, to calculate and use as many parameters as needed to obtain a user-desired accuracy. Previous methods of camera calibration use a fixed number of parameters, which can limit the obtainable accuracy, and most require the solution of nonlinear equations. The procedure presented can be used to calibrate a single camera to make 2D measurements or to calibrate stereo cameras to make 3D measurements. Positional accuracy of better than 3 parts in 10,000 has been achieved. The algorithms in this paper were developed and are implemented in MATLAB® (registered trademark of The MathWorks, Inc.). We have developed a system to analyze the path of optical fiber during high speed payout (unwinding) of optical fiber off a bobbin. This requires recording and analyzing high speed (5 microsecond exposure time), synchronous, stereo images of the optical fiber during payout. A 3D equation for the fiber at an instant in time is calculated from the corresponding pair of stereo images as follows. In each image, about 20 points along the 2D projection of the fiber are located. Each of these 'fiber points' in one image is mapped to its projection line in 3D space. Each projection line is mapped into another line in the second image. The intersection of each mapped projection line and a curve fitted to the fiber points of the second image (the fiber projection in the second image) is calculated. Each intersection point is mapped back to the 3D space. A 3D fiber coordinate is formed from the intersection, in 3D space, of a mapped intersection point with its corresponding projection line. The 3D equation for the fiber is computed from this ordered list of 3D coordinates. This process requires a method of accurately mapping 2D (image space) to 3D (object space) and vice versa.

  3. CUDA-based acceleration and BPN-assisted automation of bilateral filtering for brain MR image restoration.

    PubMed

    Chang, Herng-Hua; Chang, Yu-Ning

    2017-04-01

    Bilateral filters have been substantially exploited in numerous magnetic resonance (MR) image restoration applications for decades. Due to the deficiency of a theoretical basis for the filter parameter setting, empirical manipulation with fixed values and noise variance-related adjustments has generally been employed. The outcome of these strategies is usually sensitive to the variation of the brain structures, and not all three parameter values are optimal. This article investigates the optimal setting of the bilateral filter, from which an accelerated and automated restoration framework is developed. To reduce the computational burden of the bilateral filter, parallel computing with the graphics processing unit (GPU) architecture is first introduced. The NVIDIA Tesla K40c GPU with the compute unified device architecture (CUDA) functionality is specifically utilized to emphasize thread usages and memory resources. To correlate the filter parameters with image characteristics for automation, optimal image texture features are subsequently acquired based on the sequential forward floating selection (SFFS) scheme. Subsequently, the selected features are introduced into the back propagation network (BPN) model for filter parameter estimation. Finally, the k-fold cross validation method is adopted to evaluate the accuracy of the proposed filter parameter prediction framework. A wide variety of T1-weighted brain MR images with various scenarios of noise levels and anatomic structures were utilized to train and validate this new parameter decision system with CUDA-based bilateral filtering. For a common brain MR image volume of 256 × 256 × 256 pixels, the speed-up gain reached 284. Six optimal texture features were acquired and associated with the BPN to establish a "high accuracy" parameter prediction system, which achieved a mean absolute percentage error (MAPE) of 5.6%. Automatic restoration results on 2460 brain MR images received an average relative error in terms of peak signal-to-noise ratio (PSNR) of less than 0.1%. In comparison with many state-of-the-art filters, the proposed automation framework with CUDA-based bilateral filtering provided more favorable results both quantitatively and qualitatively. Possessing unique characteristics and demonstrating exceptional performances, the proposed CUDA-based bilateral filter adequately removed random noise in multifarious brain MR images for further study in neurosciences and radiological sciences. It requires no prior knowledge of the noise variance and automatically restores MR images while preserving fine details. The strategy of exploiting CUDA to accelerate the computation and incorporating texture features into the BPN to completely automate the bilateral filtering process is achievable and validated, from which the best performance is reached. © 2017 American Association of Physicists in Medicine.
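    For reference, a small plain-numpy version of the bilateral filter being accelerated and tuned above, showing the parameters such a prediction system would set: the window radius, the spatial sigma, and the range (intensity) sigma.

```python
import numpy as np

def bilateral_filter(img, radius=3, sigma_s=2.0, sigma_r=20.0):
    """Edge-preserving smoothing of a 2D image (CPU reference implementation)."""
    img = img.astype(float)
    pad = np.pad(img, radius, mode="reflect")
    acc = np.zeros_like(img)
    norm = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            w_s = np.exp(-(dx * dx + dy * dy) / (2.0 * sigma_s ** 2))
            shifted = pad[radius + dy: radius + dy + img.shape[0],
                          radius + dx: radius + dx + img.shape[1]]
            # combined spatial and range (intensity) weights
            w = w_s * np.exp(-((shifted - img) ** 2) / (2.0 * sigma_r ** 2))
            acc += w * shifted
            norm += w
    return acc / norm
```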

  4. Image based performance analysis of thermal imagers

    NASA Astrophysics Data System (ADS)

    Wegner, D.; Repasi, E.

    2016-05-01

    Due to advances in technology, modern thermal imagers resemble sophisticated image processing systems in functionality. Advanced signal and image processing tools enclosed in the camera body extend the basic image capturing capability of thermal cameras. This is done to enhance the display presentation of the captured scene or of specific scene details. Usually, the implemented methods are proprietary company expertise, distributed without extensive documentation. This makes the comparison of thermal imagers, especially those from different companies, a difficult task (or at least a very time-consuming/expensive task, e.g., requiring the execution of a field trial and/or an observer trial). For example, a thermal camera equipped with turbulence mitigation capability stands for such a closed system. The Fraunhofer IOSB has started to build up a system for testing thermal imagers by image based methods in the lab environment. This will extend our capability of measuring the classical IR-system parameters (e.g. MTF, MTDP, etc.) in the lab. The system is set up around the IR-scene projector, which is necessary for the thermal display (projection) of an image sequence to the IR camera under test. The same set of thermal test sequences can be presented to every unit under test. For turbulence mitigation tests, this could be, e.g., the same turbulence sequence. During system tests, gradual variation of input parameters (e.g. thermal contrast) can be applied. First ideas on test scene selection and on how to assemble an imaging suite (a set of image sequences) for the analysis of imaging thermal systems containing such black boxes in the image forming path are discussed.

  5. The Pedestrian Detection Method Using an Extension Background Subtraction about the Driving Safety Support Systems

    NASA Astrophysics Data System (ADS)

    Muranaka, Noriaki; Date, Kei; Tokumaru, Masataka; Imanishi, Shigeru

    In recent years, traffic accidents have occurred frequently with the explosion of traffic density. We therefore believe a safe and comfortable transportation system that protects pedestrians, the most vulnerable road users, is necessary. First, the pedestrian (the crossing person) is detected and recognized by image processing. Next, all drivers turning right or left are informed of the pedestrian's presence by sound, image, and so on. By prompting drivers to drive safely in this way, accidents involving pedestrians can be reduced. In this paper, we use a background subtraction method to detect moving objects. In background subtraction, the method of updating the background is important; conventionally, the threshold values for the subtraction processing and the background update were identical. That is, the mixing rate of the input image and the background image in the background update was a fixed value, and fine tuning in response to environmental changes such as weather was difficult. Therefore, we propose a background update method in which estimation errors are difficult to amplify. We experiment with and compare five conditions: sunshine, cloud, evening, rain, and changing sunlight; night is excluded. This technique can set the threshold values for the subtraction processing and the background update separately, to suit environmental conditions such as the weather, so the mixing rate of the input image and the background image in the background update can be tuned freely. Because parameter settings suited to the environmental conditions are important for minimizing the error rate, we also examine how to set the parameters.
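    A minimal numpy sketch of running-average background subtraction with the decoupling described above: one threshold decides foreground, while a separate mixing rate (plus a selective-update rule) controls how the background absorbs the input, so the two can be tuned independently per weather condition. All constants are illustrative.

```python
import numpy as np

def bg_subtract_step(frame, background, t_detect=25.0, alpha=0.02):
    """One frame of background subtraction with a selective background update."""
    frame = frame.astype(float)
    diff = np.abs(frame - background)
    foreground = diff > t_detect                     # detection threshold
    # update the background only where no foreground was detected,
    # using a mixing rate chosen independently of t_detect
    background = np.where(foreground, background,
                          (1.0 - alpha) * background + alpha * frame)
    return foreground, background
```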

  6. Reduced exposure using asymmetric cone beam processing for wide area detector cardiac CT

    PubMed Central

    Bedayat, Arash; Kumamaru, Kanako; Powers, Sara L.; Signorelli, Jason; Steigner, Michael L.; Steveson, Chloe; Soga, Shigeyoshi; Adams, Kimberly; Mitsouras, Dimitrios; Clouse, Melvin; Mather, Richard T.

    2011-01-01

    The purpose of this study was to estimate dose reduction after implementation of asymmetrical cone beam processing, using exposure differences measured in a water phantom and a small cohort of clinical coronary CTA patients. Two separate 320 × 0.5 mm detector row scans of a water phantom used identical cardiac acquisition parameters before and after software modifications from symmetric to asymmetric cone beam acquisition and processing. Exposure was measured at the phantom surface with Optically Stimulated Luminescence (OSL) dosimeters at 12 equally spaced angular locations. To assess image quality, mean HU and standard deviation (SD) for both approaches were compared using ROI measurements obtained at five points (the center plus four peripheral locations) in the water phantom. Retrospective evaluation of 64 patients (37 symmetric; 27 asymmetric acquisition) included clinical data, scanning parameters, quantitative plus qualitative image assessment, and estimated radiation dose. In the water phantom, the asymmetric cone beam processing reduces exposure by approximately 20% with no change in image quality. The clinical coronary CTA patient groups had comparable demographics. The estimated dose reduction after implementation of the asymmetric approach was roughly 24%, with no significant difference between the symmetric and asymmetric approach with respect to objective measures of image quality or subjective assessment using a four-point scale. When compared to a symmetric approach, the decreased exposure, subsequent lower patient radiation dose, and similar image quality from asymmetric cone beam processing support its routine clinical use. PMID:21336552

  7. Reduced exposure using asymmetric cone beam processing for wide area detector cardiac CT.

    PubMed

    Bedayat, Arash; Rybicki, Frank J; Kumamaru, Kanako; Powers, Sara L; Signorelli, Jason; Steigner, Michael L; Steveson, Chloe; Soga, Shigeyoshi; Adams, Kimberly; Mitsouras, Dimitrios; Clouse, Melvin; Mather, Richard T

    2012-02-01

    The purpose of this study was to estimate dose reduction after implementation of asymmetrical cone beam processing, using exposure differences measured in a water phantom and a small cohort of clinical coronary CTA patients. Two separate 320 × 0.5 mm detector row scans of a water phantom used identical cardiac acquisition parameters before and after software modifications from symmetric to asymmetric cone beam acquisition and processing. Exposure was measured at the phantom surface with Optically Stimulated Luminescence (OSL) dosimeters at 12 equally spaced angular locations. To assess image quality, mean HU and standard deviation (SD) for both approaches were compared using ROI measurements obtained at five points (the center plus four peripheral locations) in the water phantom. Retrospective evaluation of 64 patients (37 symmetric; 27 asymmetric acquisition) included clinical data, scanning parameters, quantitative plus qualitative image assessment, and estimated radiation dose. In the water phantom, the asymmetric cone beam processing reduces exposure by approximately 20% with no change in image quality. The clinical coronary CTA patient groups had comparable demographics. The estimated dose reduction after implementation of the asymmetric approach was roughly 24%, with no significant difference between the symmetric and asymmetric approach with respect to objective measures of image quality or subjective assessment using a four-point scale. When compared to a symmetric approach, the decreased exposure, subsequent lower patient radiation dose, and similar image quality from asymmetric cone beam processing support its routine clinical use.

  8. Depth-resolved analytical model and correction algorithm for photothermal optical coherence tomography

    PubMed Central

    Lapierre-Landry, Maryse; Tucker-Schwartz, Jason M.; Skala, Melissa C.

    2016-01-01

    Photothermal OCT (PT-OCT) is an emerging molecular imaging technique that occupies a spatial imaging regime between microscopy and whole body imaging. PT-OCT would benefit from a theoretical model to optimize imaging parameters and test image processing algorithms. We propose the first analytical PT-OCT model to replicate an experimental A-scan in homogeneous and layered samples. We also propose the PT-CLEAN algorithm to reduce phase-accumulation and shadowing, two artifacts found in PT-OCT images, and demonstrate it on phantoms and in vivo mouse tumors. PMID:27446693

  9. Automatic Segmenting Structures in MRI's Based on Texture Analysis and Fuzzy Logic

    NASA Astrophysics Data System (ADS)

    Kaur, Mandeep; Rattan, Munish; Singh, Pushpinder

    2017-12-01

    The purpose of this paper is to present a variational method for geometric contours that keeps the level set function close to a signed distance function, thereby removing the need for an expensive re-initialization procedure. The level set method is applied to magnetic resonance images (MRI) to track irregularities, since medical imaging plays a substantial part in the treatment, therapy, and diagnosis of various organs, tumors, and abnormalities, and favors the patient with speedier, more decisive disease control and fewer side effects. The geometrical shape, the tumor's size, and abnormal tissue growth can be calculated by segmenting the image. Fully automatic segmentation in medical imaging remains a great challenge for researchers. Here, different images are processed by optimizing level set segmentation based on texture analysis. Traditionally, this optimization was manual for every image, with each parameter selected one after another. By applying fuzzy logic, the segmentation parameters are correlated with texture features, making the process automatic and more effective. No manual parameter initialization is required, and the system behaves like an intelligent one: it segments different MRI images without tuning the level set parameters and gives optimized results for all of them.

  10. Semi-automation of Doppler Spectrum Image Analysis for Grading Aortic Valve Stenosis Severity.

    PubMed

    Niakšu, O; Balčiunaitė, G; Kizlaitis, R J; Treigys, P

    2016-01-01

    Doppler echocardiography analysis has become a gold standard in the modern diagnosis of heart diseases. In this paper, we propose a set of techniques for semi-automated parameter extraction for grading aortic valve stenosis severity. The main objective of the study is to create echocardiography image processing techniques that minimize clinicians' manual image processing work and reduce human error rates. Aortic valve and left ventricular outflow tract spectrogram images have been processed and analyzed. A novel method was developed to trace systoles and to extract diagnostically relevant features. The results of the introduced method were compared to the findings of the participating cardiologists. The experimental results showed that the accuracy of the proposed method is comparable to that of manual measurement performed by medical professionals. Linear regression analysis of the calculated parameters against the measurements manually obtained by the cardiologists yielded strongly correlated values: R2 of 0.99 for both peak systolic velocity and mean pressure gradient (mean differences of 0.02 m/s and 4.09 mmHg, respectively), and R2 of 0.89 for aortic valve area (mean difference between the two methods of 0.19 mm). The introduced Doppler echocardiography image processing method can be used as computer-aided assistance in aortic valve stenosis diagnostics. In our future work, we intend to improve the precision of left ventricular outflow tract spectrogram measurements and apply data mining methods to propose a clinical decision support system for diagnosing aortic valve stenosis.

  11. Robust gaze-steering of an active vision system against errors in the estimated parameters

    NASA Astrophysics Data System (ADS)

    Han, Youngmo

    2015-01-01

    Gaze-steering is often used to broaden the viewing range of an active vision system. Gaze-steering procedures are usually based on estimated parameters such as image position, image velocity, depth and camera calibration parameters. However, there may be uncertainties in these estimated parameters because of measurement noise and estimation errors. In this case, robust gaze-steering cannot be guaranteed. To compensate for such problems, this paper proposes a gaze-steering method based on a linear matrix inequality (LMI). In this method, we first propose a proportional derivative (PD) control scheme on the unit sphere that does not use depth parameters. This proposed PD control scheme can avoid uncertainties in the estimated depth and camera calibration parameters, as well as inconveniences in their estimation process, including the use of auxiliary feature points and highly non-linear computation. Furthermore, the control gain of the proposed PD control scheme on the unit sphere is designed using LMI such that the designed control is robust in the presence of uncertainties in the other estimated parameters, such as image position and velocity. Simulation results demonstrate that the proposed method provides a better compensation for uncertainties in the estimated parameters than the contemporary linear method and steers the gaze of the camera more steadily over time than the contemporary non-linear method.

  12. Optical sectioning microscopy using two-frame structured illumination and Hilbert-Huang data processing

    NASA Astrophysics Data System (ADS)

    Trusiak, M.; Patorski, K.; Tkaczyk, T.

    2014-12-01

    We propose a fast, simple, and experimentally robust method for reconstructing background-rejected, optically sectioned microscopic images using a two-shot structured illumination approach. The proposed data demodulation technique requires two grid-illumination images mutually phase shifted by π (half a grid period), but the precise phase displacement value is not critical. Upon subtraction of the two frames, an input pattern with increased grid modulation is computed. The proposed demodulation procedure comprises: (1) two-dimensional data processing based on the enhanced fast empirical mode decomposition (EFEMD) method for object spatial frequency selection (noise reduction and bias term removal), and (2) calculating the high-contrast, optically sectioned image using the two-dimensional spiral Hilbert transform (HS). The effectiveness of the proposed algorithm is compared with results obtained for the same input data using conventional structured-illumination (SIM) and HiLo microscopy methods. The input data were collected while studying highly scattering tissue samples in reflectance mode. In comparison with the conventional three-frame SIM technique, we need one frame less, and no stringent requirement on the exact phase shift between recorded frames is imposed. The HiLo algorithm's outcome is strongly dependent on a set of parameters chosen manually by the operator (cut-off frequencies for low-pass and high-pass filtering, and the η parameter value for the optically sectioned image reconstruction), whereas the proposed method is parameter-free. Moreover, the very short processing time required to demodulate the input pattern makes the proposed method well suited for real-time in-vivo studies. The current implementation completes full processing in 0.25 s on a mid-range PC (Intel i7 2.1 GHz processor and 8 GB RAM). A simple modification that extracts only the first two BIMFs with a fixed filter window size reduces the computing time to 0.11 s (8 frames/s).
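
    The central demodulation step lends itself to a compact sketch. The following is a minimal NumPy illustration only, not the authors' implementation: the EFEMD pre-filtering stage is omitted, the spiral phase filter is applied directly to the difference of the two π-shifted frames, and the function name and the quadrature-modulus reconstruction are assumptions.

    ```python
    import numpy as np

    def spiral_hilbert_section(frame_0, frame_pi):
        """Two-frame optical sectioning sketch: the difference image is
        demodulated with a 2-D spiral (vortex) Hilbert transform."""
        # Subtracting the pi-shifted frames doubles the grid modulation
        # and removes the bias (out-of-focus) term.
        diff = frame_0.astype(float) - frame_pi.astype(float)
        ny, nx = diff.shape
        fy = np.fft.fftfreq(ny)[:, None]
        fx = np.fft.fftfreq(nx)[None, :]
        # Spiral phase function exp(i*phi) in the frequency domain (0 at DC).
        radius = np.hypot(fx, fy)
        spiral = np.where(radius > 0,
                          (fx + 1j * fy) / np.where(radius == 0, 1, radius), 0)
        quadrature = np.fft.ifft2(spiral * np.fft.fft2(diff))
        # Optically sectioned image: modulus of the (diff, HS{diff}) pair.
        return np.hypot(diff, np.abs(quadrature))
    ```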

  13. Local wavelet transform: a cost-efficient custom processor for space image compression

    NASA Astrophysics Data System (ADS)

    Masschelein, Bart; Bormans, Jan G.; Lafruit, Gauthier

    2002-11-01

    Thanks to its intrinsic scalability features, the wavelet transform has become increasingly popular as a decorrelator in image compression applications. Throughput, memory requirements, and complexity are important parameters when developing hardware image compression modules. An implementation of the classical, global wavelet transform requires large memory sizes and implies a large latency between the availability of the input image and the production of minimal data entities for entropy coding. Image tiling methods, as proposed by JPEG2000, reduce the memory sizes and the latency, but inevitably introduce image artefacts. The Local Wavelet Transform (LWT), presented in this paper, is a low-complexity wavelet transform architecture using block-based processing that yields the same transformed images as those obtained by the global wavelet transform. The architecture minimizes the processing latency with a limited amount of memory. Moreover, as the LWT is an instruction-based custom processor, it can be programmed for specific tasks, such as push-broom processing of infinite-length satellite images. The features of the LWT make it appropriate for use in space image compression, where high throughput, low memory sizes, low complexity, low power, and push-broom processing are important requirements.

  14. Genetics algorithm optimization of DWT-DCT based image Watermarking

    NASA Astrophysics Data System (ADS)

    Budiman, Gelar; Novamizanti, Ledya; Iwut, Iwan

    2017-01-01

    Data hiding in image content is essential for establishing ownership of an image. The two-dimensional discrete wavelet transform (DWT) and discrete cosine transform (DCT) are proposed as the transform methods in this paper. First, the host image in RGB color space is converted to a selected color space, and the layer in which the watermark is embedded can also be selected. Next, a 2D-DWT transforms the selected layer, yielding four subbands, of which only one is selected. A block-based 2D-DCT then transforms the selected subband. The binary watermark is embedded in the AC coefficients of each block after zigzag ordering and range-based coefficient selection: a delta parameter applied within each range represents the embedded bit, with +delta representing bit "1" and -delta representing bit "0". Several parameters are optimized by a genetic algorithm (GA): the selected color space, layer, DWT subband, block size, embedding range, and delta. Simulation results show that the GA is able to determine parameters that achieve optimum imperceptibility and robustness, whether or not the watermarked image is attacked. The DWT stage in DCT-based image watermarking, optimized by the GA, improves watermarking performance. Under five attacks (JPEG 50%, resize 50%, histogram equalization, salt-and-pepper noise, and additive noise with variance 0.01), the robustness of the proposed method reaches perfect watermark quality with BER = 0, and the watermarked image quality is also about 5 dB higher in PSNR than with the previous method.
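
    A minimal sketch of the embedding step, assuming PyWavelets and SciPy are available; the GA search over color space, layer, subband, block size, embedding range, and delta is omitted, and a single fixed mid-band AC coefficient stands in for the zigzag/range-based selection described above.

    ```python
    import numpy as np
    import pywt
    from scipy.fftpack import dct, idct

    def embed_bits(layer, bits, delta=8.0, block=8):
        """Embed one bit per DCT block of the DWT approximation subband by
        shifting one AC coefficient by +delta (bit 1) or -delta (bit 0)."""
        LL, detail = pywt.dwt2(layer.astype(float), 'haar')
        h, w = LL.shape
        k = 0
        for r in range(0, h - block + 1, block):
            for c in range(0, w - block + 1, block):
                if k >= len(bits):
                    break
                B = dct(dct(LL[r:r+block, c:c+block].T, norm='ortho').T, norm='ortho')
                B[2, 3] += delta if bits[k] else -delta  # coefficient choice is illustrative
                LL[r:r+block, c:c+block] = idct(idct(B.T, norm='ortho').T, norm='ortho')
                k += 1
        return pywt.idwt2((LL, detail), 'haar')
    ```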

  15. imzML: Imaging Mass Spectrometry Markup Language: A common data format for mass spectrometry imaging.

    PubMed

    Römpp, Andreas; Schramm, Thorsten; Hester, Alfons; Klinkert, Ivo; Both, Jean-Pierre; Heeren, Ron M A; Stöckli, Markus; Spengler, Bernhard

    2011-01-01

    Imaging mass spectrometry is the method of scanning a sample of interest and generating an "image" of the intensity distribution of a specific analyte. The data sets consist of a large number of mass spectra which are usually acquired with identical settings. Existing data formats are not sufficient to describe an MS imaging experiment completely. The data format imzML was developed to allow the flexible and efficient exchange of MS imaging data between different instruments and data analysis software. For this purpose, the MS imaging data is divided into two separate files. The mass spectral data is stored in a binary file to ensure efficient storage. All metadata (e.g., instrumental parameters, sample details) are stored in an XML file which is based on the standard data format mzML developed by HUPO-PSI. The original mzML controlled vocabulary was extended to include specific parameters of imaging mass spectrometry (such as x/y position and spatial resolution). The two files (XML and binary) are connected by offset values in the XML file and are unambiguously linked by a universally unique identifier. The resulting datasets are comparable in size to the raw data, and the separate metadata file allows flexible handling of large datasets. Several imaging MS software tools already support imzML. This allows choosing from a (growing) number of processing tools: one is no longer limited to proprietary software, but is able to use the processing software best suited for a specific question or application. On the other hand, measurements from different instruments can be compared within one software application using identical settings for data processing. All necessary information for evaluating and implementing imzML can be found at http://www.imzML.org .

  16. Performance assessment of multi-frequency processing of ICU chest images for enhanced visualization of tubes and catheters

    NASA Astrophysics Data System (ADS)

    Wang, Xiaohui; Couwenhoven, Mary E.; Foos, David H.; Doran, James; Yankelevitz, David F.; Henschke, Claudia I.

    2008-03-01

    An image-processing method has been developed to improve the visibility of tube and catheter features in portable chest x-ray (CXR) images captured in the intensive care unit (ICU). The method is based on a multi-frequency approach, wherein the input image is decomposed into different spatial frequency bands, and those bands that contain the tube and catheter signals are individually enhanced by nonlinear boosting functions. Using a random sampling strategy, 50 cases were retrospectively selected for the study from a large database of portable CXR images that had been collected from multiple institutions over a two-year period. All images used in the study were captured using photo-stimulable, storage phosphor computed radiography (CR) systems. Each image was processed in two ways: with default image processing parameters such as those used in clinical settings (control), and separately with the new tube and catheter enhancement algorithm (test). Three board-certified radiologists participated in a reader study to assess differences in both detection-confidence performance and diagnostic efficiency between the control and test images. Images were evaluated on a diagnostic-quality, 3-megapixel monochrome monitor. Two scenarios were studied: a baseline scenario representative of today's workflow (a single control image presented with window/level adjustments enabled) vs. the test scenario (a control/test image pair presented with a toggle enabled and window/level settings disabled). The radiologists were asked to read the images in each scenario as they normally would for clinical diagnosis. Trend analysis indicates that the test scenario offers improved reading efficiency while providing as good or better detection capability compared to the baseline scenario.

  17. Event-based image recognition applied in tennis training assistance

    NASA Astrophysics Data System (ADS)

    Wawrzyniak, Zbigniew M.; Kowalski, Adam

    2016-09-01

    This paper presents a concept for a real-time system for individual tennis training assistance. The system is intended to provide the user (player) with information on stroke accuracy as well as other training quality parameters such as the velocity and rotation of the ball during its flight. The method is based on image processing combined with an exploratory analysis of events and their description by movement parameters. A concept for further development into a complete system that could assist a tennis player during individual training is also presented.

  18. A data processing method based on tracking light spot for the laser differential confocal component parameters measurement system

    NASA Astrophysics Data System (ADS)

    Shao, Rongjun; Qiu, Lirong; Yang, Jiamiao; Zhao, Weiqian; Zhang, Xin

    2013-12-01

    We have proposed a component-parameter measuring method based on differential confocal focusing theory. In order to improve the positioning precision of the laser differential confocal component parameters measurement system (LDDCPMS), this paper provides a data processing method based on tracking the light spot. To reduce the error caused by the light spot moving while the axial intensity signal is collected, an image centroiding algorithm is used to find and track the center of the Airy disk in the images collected by the laser differential confocal system. To weaken the influence of higher-harmonic noise during the measurement, a Gaussian filter is used to process the axial intensity signal. Finally, the zero point corresponding to the focus of the objective in the differential confocal system is obtained by linear fitting of the differential confocal axial intensity data. Preliminary experiments indicate that the method based on tracking the light spot can accurately collect the axial intensity response signal of the virtual pinhole and improve the anti-interference ability of the system. Thus it improves the system's positioning accuracy.
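
    The spot tracking and zero-point extraction can be illustrated with a short NumPy/SciPy sketch. It is a simplified stand-in for the authors' method: the virtual-pinhole window size, the Gaussian sigma, and the width of the local linear fit are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import center_of_mass, gaussian_filter1d

    def axial_response(frames, window=15):
        """Track the Airy-disk centroid frame by frame and integrate the
        intensity through a virtual pinhole centered on it."""
        half = window // 2
        cy, cx = center_of_mass(frames[0])            # initial spot estimate
        intensities = []
        for f in frames:
            r0, c0 = int(round(cy)), int(round(cx))
            roi = f[r0-half:r0+half+1, c0-half:c0+half+1]
            dy, dx = center_of_mass(roi)              # re-center on the moving spot
            cy, cx = r0 - half + dy, c0 - half + dx
            intensities.append(roi.sum())             # virtual-pinhole signal
        return np.asarray(intensities)

    def find_zero(z, signal_a, signal_b):
        """Differential confocal zero point: smooth the difference of the two
        axial responses, then locate its zero crossing with a linear fit."""
        d = gaussian_filter1d(signal_a - signal_b, sigma=3)  # suppress harmonic noise
        i = int(np.argmin(np.abs(d)))                        # sample nearest the crossing
        s = slice(max(i - 3, 0), i + 4)
        slope, intercept = np.polyfit(z[s], d[s], 1)         # local linear fit
        return -intercept / slope
    ```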

  19. Mobile-based text recognition from water quality devices

    NASA Astrophysics Data System (ADS)

    Dhakal, Shanti; Rahnemoonfar, Maryam

    2015-03-01

    Measuring the water quality of bays, estuaries, and gulfs is a complicated and time-consuming process. The YSI Sonde is an instrument used to measure water quality parameters such as pH, temperature, salinity, and dissolved oxygen. The instrument is taken to water bodies by boat, and researchers note down the parameters shown on its display monitor. In this project, a mobile application was developed for the Android platform that allows a user to take a picture of the YSI Sonde monitor, extract the text from the image, and store it in a file on the phone. The image captured by the application is first processed to remove perspective distortion. The probabilistic Hough line transform is used to identify lines in the image, and the corners of the image are then obtained by determining the intersections of the detected horizontal and vertical lines. The image is warped using the perspective transformation matrix, obtained from the corner points of the source image and the destination image, hence removing the perspective distortion. A mathematical morphology operation, black-hat, is used to correct the shading of the image. The image is binarized using Otsu's binarization technique and is then passed to Optical Character Recognition (OCR) software for character recognition. The extracted information is stored in a file on the phone and can be retrieved later for analysis. The algorithm was tested on 60 different images of the YSI Sonde with different perspective features and shading. Experimental results, in comparison to ground-truth results, demonstrate the effectiveness of the proposed method.
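
    The rectification and binarization steps map directly onto standard OpenCV calls. The sketch below assumes the four display corners have already been found from the Hough-line intersections; the target size and structuring-element size are illustrative.

    ```python
    import cv2
    import numpy as np

    def rectify_and_binarize(img, corners):
        """Perspective correction, black-hat shading correction, and Otsu
        binarization, producing an image ready for an OCR engine."""
        w, h = 640, 480                                   # target size (illustrative)
        dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
        M = cv2.getPerspectiveTransform(np.float32(corners), dst)
        warped = cv2.warpPerspective(img, M, (w, h))      # remove perspective distortion
        gray = cv2.cvtColor(warped, cv2.COLOR_BGR2GRAY)   # assumes a BGR color input
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (31, 31))
        flat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)  # correct shading
        _, binary = cv2.threshold(flat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        return binary
    ```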

  20. A light field microscope imaging spectrometer based on the microlens array

    NASA Astrophysics Data System (ADS)

    Yao, Yu-jia; Xu, Feng; Xia, Yin-xiang

    2017-10-01

    A new light field spectrometry microscope imaging system, composed of a microscope objective, a microlens array, and a spectrometry system, is designed in this paper. 5-D information (4-D light field and 1-D spectrum) of the sample can be captured by the snapshot system in only one exposure, avoiding the motion blur and aberration caused by the scanning process of traditional imaging spectrometry. The microscope objective is used as the front group, while the microlens array serves as the rear group. The optical design of the system was simulated in Zemax, and the parameter-matching condition between the microscope objective and the microlens array is discussed in detail during the simulation process. The result simulated in the image plane is analyzed and discussed.

  1. Optical Fourier diffractometry applied to degraded bone structure recognition

    NASA Astrophysics Data System (ADS)

    Galas, Jacek; Godwod, Krzysztof; Szawdyn, Jacek; Sawicki, Andrzej

    1993-09-01

    Image processing and recognition methods are useful in many fields. This paper presents a hybrid optical-digital method applied to the recognition of pathological changes in bones affected by metabolic bone diseases. The trabecular bone structure, recorded by x-ray on photographic film, is analyzed in a new type of computer-controlled diffractometer. The set of image parameters extracted from the diffractogram is evaluated by statistical analysis. Synthetic image descriptors in discriminant space, constructed on the basis of three training groups of images (control, osteoporosis, and osteomalacia groups) by discriminant analysis, allow us to recognize bone samples with degraded bone structure and to identify the disease. About 89% of the images were classified correctly. After an optimization process, this method will be verified in medical investigations.

  2. Quantitative analysis of phosphoinositide 3-kinase (PI3K) signaling using live-cell total internal reflection fluorescence (TIRF) microscopy.

    PubMed

    Johnson, Heath E; Haugh, Jason M

    2013-12-02

    This unit focuses on the use of total internal reflection fluorescence (TIRF) microscopy and image analysis methods to study the dynamics of signal transduction mediated by class I phosphoinositide 3-kinases (PI3Ks) in mammalian cells. The first four protocols cover live-cell imaging experiments, image acquisition parameters, and basic image processing and segmentation. These methods are generally applicable to live-cell TIRF experiments. The remaining protocols outline more advanced image analysis methods, which were developed in our laboratory for the purpose of characterizing the spatiotemporal dynamics of PI3K signaling. These methods may be extended to analyze other cellular processes monitored using fluorescent biosensors. Copyright © 2013 John Wiley & Sons, Inc.

  3. Post-image acquisition processing approaches for coherent backscatter validation

    NASA Astrophysics Data System (ADS)

    Smith, Christopher A.; Belichki, Sara B.; Coffaro, Joseph T.; Panich, Michael G.; Andrews, Larry C.; Phillips, Ronald L.

    2014-10-01

    Utilizing a retro-reflector at a target point, the reflected irradiance of a laser beam traveling back toward the transmitting point contains a peak of intensity known as the enhanced backscatter (EBS) phenomenon. EBS depends on the strength regime of the turbulence occurring within the atmosphere as the beam propagates across and back. In order to capture and analyze this phenomenon so that it may be compared to theory, an imaging system is integrated into the optical setup. With proper imaging established, we are able to implement various post-image acquisition techniques to help determine the detection and positioning of EBS, which can then be validated against theory by inspection of certain dependent meteorological parameters such as the refractive index structure parameter Cn2 and wind speed.

  4. Digital radiography: optimization of image quality and dose using multi-frequency software.

    PubMed

    Precht, H; Gerke, O; Rosendahl, K; Tingberg, A; Waaler, D

    2012-09-01

    New developments in the processing of digital radiographs (DR), including multi-frequency processing (MFP), allow optimization of image quality and radiation dose. This is particularly promising in children, as they are believed to be more sensitive to ionizing radiation than adults. The aim was to examine whether the use of MFP software reduces the radiation dose without compromising quality at DR of the femur in 5-year-old-equivalent anthropomorphic and technical phantoms. A total of 110 images of an anthropomorphic phantom were acquired on a DR system (Canon DR with CXDI-50 C detector and MLT[S] software) and analyzed by three pediatric radiologists using Visual Grading Analysis. In addition, 3,500 images taken of a technical contrast-detail phantom (CDRAD 2.0) provided an objective image-quality assessment. Optimal image quality was maintained at a dose reduction of 61% with MLT(S)-optimized images. Even for images of diagnostic quality, MLT(S) provided a dose reduction of 88% as compared to the reference image. The software's impact on image quality was found to be significant for dose (mAs), dynamic range dark region, and frequency band. By optimizing image processing parameters, a significant dose reduction is possible without significant loss of image quality.

  5. Information theoretic analysis of edge detection in visual communication

    NASA Astrophysics Data System (ADS)

    Jiang, Bo; Rahman, Zia-ur

    2010-08-01

    Generally, the designs of digital image processing algorithms and image gathering devices remain separate. Consequently, the performance of digital image processing algorithms is evaluated without taking into account the artifacts introduced by the image gathering process. However, experiments show that the image gathering process profoundly impacts the performance of digital image processing and the quality of the resulting images. Huck et al. proposed a definitive theoretical analysis of visual communication channels, in which the different parts, such as image gathering, processing, and display, are assessed in an integrated manner using Shannon's information theory. In this paper, we perform an end-to-end information-theoretic system analysis to assess edge detection methods. We evaluate the performance of the different algorithms as a function of the characteristics of the scene and the parameters, such as sampling and additive noise, that define the image gathering system. An edge detection algorithm is regarded as having high performance only if the information rate from the scene to the edge approaches the maximum possible. This goal can be achieved only by jointly optimizing all processes. People generally use subjective judgment to compare different edge detection methods; there has been no common tool for evaluating the performance of the different algorithms and guiding the selection of the best algorithm for a given system or scene. Our information-theoretic assessment provides this new tool, allowing us to compare the different edge detection operators in a common environment.

  6. Software for X-Ray Images Calculation of Hydrogen Compression Device in Megabar Pressure Range

    NASA Astrophysics Data System (ADS)

    Egorov, Nikolay; Bykov, Alexander; Pavlov, Valery

    2007-06-01

    Software for x-ray image simulation is described. The software is part of an x-ray method used to investigate the equation of state of hydrogen in a megabar pressure range. A graphical interface clearly and simply allows users to input the data for the x-ray image calculation (properties of the studied device, parameters of the x-ray radiation source, parameters of the x-ray radiation recorder, and the experiment geometry), to represent the calculation results, and to transmit them efficiently to other software for processing. The calculation time is minimized, which makes it possible to perform calculations interactively. The software is written in the MATLAB system.

  7. Fruit fly optimization based least square support vector regression for blind image restoration

    NASA Astrophysics Data System (ADS)

    Zhang, Jiao; Wang, Rui; Li, Junshan; Yang, Yawei

    2014-11-01

    The goal of image restoration is to reconstruct the original scene from a degraded observation. It is a critical and challenging task in image processing. Classical restoration requires explicit knowledge of the point spread function (PSF) and a description of the noise as priors. However, this is not practical for much real image processing, and the recovery then becomes a blind image restoration problem. Since blind deconvolution is an ill-posed problem, many blind restoration methods must make additional assumptions to construct restrictions. Because of differences in the PSF and noise energy, blurred images can be quite different, and it is difficult to achieve a good balance between proper assumptions and high restoration quality in blind deconvolution. Recently, machine learning techniques have been applied to blind image restoration. Least square support vector regression (LSSVR) has proven to offer strong potential in estimation and forecasting problems. Therefore, this paper proposes an LSSVR-based image restoration method. However, selecting the optimal parameters for a support vector machine is essential to the training result. As a novel meta-heuristic algorithm, the fruit fly optimization algorithm (FOA) can be used to handle optimization problems and has the advantage of fast convergence to the global optimal solution. In the proposed method, the training samples are created by mapping a neighborhood in the degraded image to the central pixel in the original image. The mapping between the degraded image and the original image is learned by training the LSSVR, and the two parameters of the LSSVR are optimized through FOA. The fitness function of FOA is calculated from the restoration error function. With the acquired mapping, the degraded image can be recovered. Experimental results show that the proposed method obtains a satisfactory restoration effect. Compared with BP neural network regression, the SVR method, and the Lucy-Richardson algorithm, it speeds up restoration and performs better. Both objective and subjective restoration performances are studied in the comparison experiments.
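
    The neighborhood-to-center-pixel training scheme can be sketched with scikit-learn, with epsilon-SVR standing in for LSSVR; the FOA search is replaced by fixed (C, gamma) values, so the hyperparameters, patch size, and sample count below are placeholders.

    ```python
    import numpy as np
    from sklearn.svm import SVR

    def train_restorer(degraded, original, patch=5, n_samples=2000, C=10.0, gamma=0.1):
        """Learn the mapping from a degraded-image neighborhood to the
        corresponding original center pixel."""
        half = patch // 2
        h, w = degraded.shape
        rng = np.random.default_rng(0)
        rows = rng.integers(half, h - half, n_samples)
        cols = rng.integers(half, w - half, n_samples)
        # Each training sample: a flattened patch around (r, c) in the degraded
        # image, with the original pixel at (r, c) as the regression target.
        X = np.stack([degraded[r-half:r+half+1, c-half:c+half+1].ravel()
                      for r, c in zip(rows, cols)])
        y = original[rows, cols]
        return SVR(C=C, gamma=gamma).fit(X, y)
    ```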

  8. Low-dose cone-beam CT via raw counts domain low-signal correction schemes: Performance assessment and task-based parameter optimization (Part II. Task-based parameter optimization).

    PubMed

    Gomez-Cardona, Daniel; Hayes, John W; Zhang, Ran; Li, Ke; Cruz-Bastida, Juan Pablo; Chen, Guang-Hong

    2018-05-01

    Different low-signal correction (LSC) methods have been shown to efficiently reduce noise streaks and noise level in CT to provide acceptable images at low radiation dose levels. These methods usually result in CT images with highly shift-variant and anisotropic spatial resolution and noise, which makes the parameter optimization process highly nontrivial. The purpose of this work was to develop a local task-based parameter optimization framework for LSC methods. Two well-known LSC methods, the adaptive trimmed mean (ATM) filter and the anisotropic diffusion (AD) filter, were used as examples to demonstrate how to use the task-based framework to optimize filter parameter selection. Two parameters, denoted by the set P, for each LSC method were included in the optimization problem. For the ATM filter, these parameters are the low- and high-signal threshold levels p_l and p_h; for the AD filter, the parameters are the exponents δ and γ in the brightness gradient function. The detectability index d' under the non-prewhitening (NPW) mathematical observer model was selected as the metric for parameter optimization. The optimization problem was formulated as an unconstrained optimization problem that consisted of maximizing an objective function d'_{ij}(P), where i and j correspond to the i-th imaging task and the j-th spatial location, respectively. Since there is no explicit mathematical function to describe the dependence of d' on the set of parameters P for each LSC method, the optimization problem was solved via an experimentally measured d' map over a densely sampled parameter space. In this work, three high-contrast, high-frequency discrimination imaging tasks were defined to explore the parameter space of each of the LSC methods: a vertical bar pattern (task I), a horizontal bar pattern (task II), and a multidirectional feature (task III). Two spatial locations were considered for the analysis: a posterior region-of-interest (ROI) located within the noise streaks region and an anterior ROI located further from the noise streaks region. Optimal results derived from the task-based detectability index metric were compared to other operating points in the parameter space with different noise and spatial resolution trade-offs. The optimal operating points determined through the d' metric depended on the interplay between the major spatial frequency components of each imaging task and the highly shift-variant and anisotropic noise and spatial resolution properties associated with each operating point in the LSC parameter space. This interplay influenced imaging performance the most when the major spatial frequency component of a given imaging task coincided with the direction of spatial resolution loss or with the dominant noise spatial frequency component; this was the case for imaging task II. The performance of imaging tasks I and III was influenced by this interplay on a smaller scale than that of imaging task II, since the major frequency component of task I was perpendicular to that of imaging task II, and because imaging task III did not have a strong directional dependence. For both LSC methods, there was a strong dependence of the overall d' magnitude and the shape of the contours on the spatial location within the phantom, particularly for imaging tasks II and III. The d' value obtained at the optimal operating point for each spatial location and imaging task was similar when comparing the LSC methods studied in this work.
A local task-based detectability framework to optimize the selection of parameters for LSC methods was developed. The framework takes into account the potential shift-variant and anisotropic spatial resolution and noise properties to maximize the imaging performance of the CT system. Optimal parameters for a given LSC method depend strongly on the spatial location within the image object. © 2018 American Association of Physicists in Medicine.

  9. A magnetic resonance imaging study on the articulatory and acoustic speech parameters of Malay vowels

    PubMed Central

    2014-01-01

    The phonetic properties of six Malay vowels are investigated using magnetic resonance imaging (MRI) to visualize the vocal tract in order to obtain dynamic articulatory parameters during speech production. To resolve image blurring due to the tongue movement during the scanning process, a method based on active contour extraction is used to track tongue contours. The proposed method efficiently tracks tongue contours despite the partial blurring of MRI images. Consequently, the articulatory parameters are effectively measured as tongue movement is observed, and the specific shape of the tongue and its position for all six uttered Malay vowels are determined. Speech rehabilitation procedures demand a visually perceivable prototype of speech articulation. To investigate the validity of the measured articulatory parameters against the acoustic theory of speech production, an acoustic analysis of the vowels uttered by the subjects was performed. When the acoustic and articulatory parameters of the uttered speech were examined, a correlation between formant frequencies and articulatory parameters was observed. The experiments showed a positive correlation between the constriction location of the tongue body and the first formant frequency, as well as a negative correlation between the constriction location of the tongue tip and the second formant frequency. The results demonstrate that the proposed method is an effective tool for the dynamic study of speech production. PMID:25060583

  10. A magnetic resonance imaging study on the articulatory and acoustic speech parameters of Malay vowels.

    PubMed

    Zourmand, Alireza; Mirhassani, Seyed Mostafa; Ting, Hua-Nong; Bux, Shaik Ismail; Ng, Kwan Hoong; Bilgen, Mehmet; Jalaludin, Mohd Amin

    2014-07-25

    The phonetic properties of six Malay vowels are investigated using magnetic resonance imaging (MRI) to visualize the vocal tract in order to obtain dynamic articulatory parameters during speech production. To resolve image blurring due to the tongue movement during the scanning process, a method based on active contour extraction is used to track tongue contours. The proposed method efficiently tracks tongue contours despite the partial blurring of MRI images. Consequently, the articulatory parameters are effectively measured as tongue movement is observed, and the specific shape of the tongue and its position for all six uttered Malay vowels are determined. Speech rehabilitation procedures demand a visually perceivable prototype of speech articulation. To investigate the validity of the measured articulatory parameters against the acoustic theory of speech production, an acoustic analysis of the vowels uttered by the subjects was performed. When the acoustic and articulatory parameters of the uttered speech were examined, a correlation between formant frequencies and articulatory parameters was observed. The experiments showed a positive correlation between the constriction location of the tongue body and the first formant frequency, as well as a negative correlation between the constriction location of the tongue tip and the second formant frequency. The results demonstrate that the proposed method is an effective tool for the dynamic study of speech production.

  11. Segmentation of magnetic resonance images using fuzzy algorithms for learning vector quantization.

    PubMed

    Karayiannis, N B; Pai, P I

    1999-02-01

    This paper evaluates a segmentation technique for magnetic resonance (MR) images of the brain based on fuzzy algorithms for learning vector quantization (FALVQ). These algorithms perform vector quantization by updating all prototypes of a competitive network through an unsupervised learning process. Segmentation of MR images is formulated as an unsupervised vector quantization process, where the local values of different relaxation parameters form the feature vectors which are represented by a relatively small set of prototypes. The experiments evaluate a variety of FALVQ algorithms in terms of their ability to identify different tissues and discriminate between normal tissues and abnormalities.

  12. Initial Navigation Alignment of Optical Instruments on GOES-R

    NASA Technical Reports Server (NTRS)

    Isaacson, Peter J.; DeLuccia, Frank J.; Reth, Alan D.; Igli, David A.; Carter, Delano R.

    2016-01-01

    Post-launch alignment errors for the Advanced Baseline Imager (ABI) and Geospatial Lightning Mapper (GLM) on GOES-R may be too large for the image navigation and registration (INR) processing algorithms to function without an initial adjustment to calibration parameters. We present an approach that leverages a combination of user-selected image-to-image tie points and image correlation algorithms to estimate this initial launch-induced offset and calculate adjustments to the Line of Sight Motion Compensation (LMC) parameters. We also present an approach to generate synthetic test images, to which shifts and rotations of known magnitude are applied. Results of applying the initial alignment tools to a subset of these synthetic test images are presented. The results for both ABI and GLM are within the specifications established for these tools, and indicate that application of these tools during the post-launch test (PLT) phase of GOES-R operations will enable the automated INR algorithms for both instruments to function as intended.
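
    The correlation component of such an initial alignment can be illustrated with phase correlation from scikit-image. This is only a sketch of measuring a sub-pixel translation between a reference scene and a shifted image, not the GOES-R toolchain itself.

    ```python
    from skimage.registration import phase_cross_correlation

    def initial_offset(reference, shifted, upsample=10):
        """Estimate the (row, col) translation between two images by phase
        correlation with sub-pixel refinement via an upsampled DFT."""
        shift, error, _ = phase_cross_correlation(reference, shifted,
                                                  upsample_factor=upsample)
        return shift  # offset that a calibration-parameter update would absorb
    ```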

  13. Crystal surface analysis using matrix textural features classified by a probabilistic neural network

    NASA Astrophysics Data System (ADS)

    Sawyer, Curry R.; Quach, Viet; Nason, Donald; van den Berg, Lodewijk

    1991-12-01

    A system is under development in which surface quality of a growing bulk mercuric iodide crystal is monitored by video camera at regular intervals for early detection of growth irregularities. Mercuric iodide single crystals are employed in radiation detectors. A microcomputer system is used for image capture and processing. The digitized image is divided into multiple overlapping sub-images and features are extracted from each sub-image based on statistical measures of the gray tone distribution, according to the method of Haralick. Twenty parameters are derived from each sub-image and presented to a probabilistic neural network (PNN) for classification. This number of parameters was found to be optimal for the system. The PNN is a hierarchical, feed-forward network that can be rapidly reconfigured as additional training data become available. Training data is gathered by reviewing digital images of many crystals during their growth cycle and compiling two sets of images, those with and without irregularities.
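
    A sketch of Haralick-style feature extraction with scikit-image follows; with five co-occurrence properties over four directions it happens to yield 20 values per sub-image, matching the record's parameter count, though the exact 20 parameters used in the original system may differ.

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def texture_features(subimage, levels=64):
        """Gray-level co-occurrence statistics for one sub-image, computed
        over four directions at unit distance."""
        q = (subimage / subimage.max() * (levels - 1)).astype(np.uint8)  # quantize
        glcm = graycomatrix(q, distances=[1],
                            angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                            levels=levels, symmetric=True, normed=True)
        props = ['contrast', 'homogeneity', 'energy', 'correlation', 'dissimilarity']
        return np.concatenate([graycoprops(glcm, p).ravel() for p in props])
    ```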

  14. Optimization and evaluation of metal injection molding by using X-ray tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Shidi; Zhang, Ruijie; Qu, Xuanhui, E-mail: quxh@ustb.edu.cn

    2015-06-15

    6061 aluminum alloy and 316L stainless steel green bodies were obtained by using different injection parameters (injection pressure, speed, and temperature). After the injection process, the green bodies were scanned by X-ray tomography. The projection and reconstruction images show the different kinds of defects produced by improper injection parameters. Then, 3D rendering of the Al alloy green bodies was used to demonstrate the spatial morphology of the serious defects. Based on the scanned and calculated results, it is convenient to obtain the proper injection parameters for the Al alloy. The reasons for defect formation are then discussed. During mold filling, the serious defects mainly formed in the case of low injection temperature and high injection speed. According to the gray value distribution of the projection image, a threshold gray value was obtained to evaluate whether the quality of a green body meets the desired standard. The proper injection parameters for 316L stainless steel can be obtained efficiently by using the method of analyzing the Al alloy injection. - Highlights: • Different types of defects in green bodies were scanned by using X-ray tomography. • Reasons for the defect formation were discussed. • Optimization of the injection parameters can be simplified greatly by means of X-ray tomography. • An evaluation standard for the injection process can be obtained by using the gray value distribution of the projection image.

  15. Motor unit action potential conduction velocity estimated from surface electromyographic signals using image processing techniques.

    PubMed

    Soares, Fabiano Araujo; Carvalho, João Luiz Azevedo; Miosso, Cristiano Jacques; de Andrade, Marcelino Monteiro; da Rocha, Adson Ferreira

    2015-09-17

    In surface electromyography (surface EMG, or S-EMG), conduction velocity (CV) refers to the velocity at which the motor unit action potentials (MUAPs) propagate along the muscle fibers during contractions. The CV is related to the type and diameter of the muscle fibers, ion concentration, pH, and firing rate of the motor units (MUs). The CV can be used in the evaluation of the contractile properties of MUs and of muscle fatigue. The most popular methods for CV estimation are those based on maximum likelihood estimation (MLE). This work proposes an algorithm for estimating CV from S-EMG signals using digital image processing techniques. The proposed approach is demonstrated and evaluated using both simulated and experimentally acquired multichannel S-EMG signals. We show that the proposed algorithm is as precise and accurate as the MLE method in typical conditions of noise and CV. The proposed method is not susceptible to errors associated with MUAP propagation direction or inadequate initialization parameters, which are common with the MLE algorithm. Image processing-based approaches may be useful in S-EMG analysis for extracting different physiological parameters from multichannel S-EMG signals. Other new methods based on image processing could also be developed to help solve other tasks in EMG analysis, such as estimating the CV of individual MUs, localizing and tracking innervation zones, and studying MU recruitment strategies.

  16. An independent software system for the analysis of dynamic MR images.

    PubMed

    Torheim, G; Lombardi, M; Rinck, P A

    1997-01-01

    A computer system for the manual, semi-automatic, and automatic analysis of dynamic MR images was to be developed on UNIX and personal computer platforms. The system was to offer an integrated and standardized way of performing both image processing and analysis that was independent of the MR unit used. The system consists of modules that are easily adaptable to special needs. Data from MR units or other diagnostic imaging equipment in techniques such as CT, ultrasonography, or nuclear medicine can be processed through the ACR-NEMA/DICOM standard file formats. A full set of functions is available, among them cine-loop visual analysis, and generation of time-intensity curves. Parameters such as cross-correlation coefficients, area under the curve, peak/maximum intensity, wash-in and wash-out slopes, time to peak, and relative signal intensity/contrast enhancement can be calculated. Other parameters can be extracted by fitting functions like the gamma-variate function. Region-of-interest data and parametric values can easily be exported. The system has been successfully tested in animal and patient examinations.
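
    The curve-derived parameters listed above are straightforward to compute once a time-intensity curve has been generated; a minimal NumPy sketch, assuming a sampled curve s over times t with the baseline taken as the first sample:

    ```python
    import numpy as np

    def curve_parameters(t, s):
        """Basic time-intensity curve descriptors: peak, time to peak, area
        under the curve, wash-in/wash-out slopes, relative enhancement."""
        i_peak = int(np.argmax(s))
        ds = np.diff(s) / np.diff(t)            # pointwise slopes
        return {
            'peak': s[i_peak],
            'time_to_peak': t[i_peak],
            'auc': np.trapz(s, t),
            'wash_in': ds[:i_peak].max() if i_peak > 0 else 0.0,          # steepest rise
            'wash_out': ds[i_peak:].min() if i_peak < len(ds) else 0.0,   # steepest fall
            'relative_enhancement': (s[i_peak] - s[0]) / s[0] if s[0] else np.inf,
        }
    ```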

  17. Excitation-resolved multispectral method for imaging pharmacokinetic parameters in dynamic fluorescent molecular tomography

    NASA Astrophysics Data System (ADS)

    Chen, Maomao; Zhou, Yuan; Su, Han; Zhang, Dong; Luo, Jianwen

    2017-04-01

    Imaging of the pharmacokinetic parameters in dynamic fluorescence molecular tomography (DFMT) can provide three-dimensional metabolic information for biological studies and drug development. However, owing to the ill-posed nature of the FMT inverse problem, the relatively low quality of the parametric images makes it difficult to investigate the different metabolic processes of the fluorescent targets with small distances. An excitation-resolved multispectral DFMT method is proposed; it is based on the fact that the fluorescent targets with different concentrations show different variations in the excitation spectral domain and can be considered independent signal sources. With an independent component analysis method, the spatial locations of different fluorescent targets can be decomposed, and the fluorescent yields of the targets at different time points can be recovered. Therefore, the metabolic process of each component can be independently investigated. Simulations and phantom experiments are carried out to evaluate the performance of the proposed method. The results demonstrated that the proposed excitation-resolved multispectral method can effectively improve the reconstruction accuracy of the parametric images in DFMT.
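
    The unmixing idea, with targets treated as independent sources along the excitation-spectral dimension, can be sketched with scikit-learn's FastICA. The reshaping convention and component count are assumptions, and reconstructing the yield images from raw DFMT measurements is outside the scope of this sketch.

    ```python
    from sklearn.decomposition import FastICA

    def unmix_targets(yield_images, n_targets=2):
        """Separate fluorescent targets whose yields vary independently
        across excitation wavelengths.

        yield_images: array of shape (n_spectra, ny, nx), one reconstructed
        fluorescent-yield image per excitation wavelength."""
        n_spectra, ny, nx = yield_images.shape
        X = yield_images.reshape(n_spectra, -1).T      # pixels x spectral samples
        ica = FastICA(n_components=n_targets, random_state=0)
        sources = ica.fit_transform(X)                 # pixels x targets
        return sources.T.reshape(n_targets, ny, nx)    # one spatial map per target
    ```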

  18. Hemodynamic changes in a rat parietal cortex after endothelin-1-induced middle cerebral artery occlusion monitored by optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Liu, Jian; Ma, Yushu; Dou, Shidan; Wang, Yi; La, Dongsheng; Liu, Jianghong; Ma, Zhenhe

    2016-07-01

    A blockage of a cortical branch of the middle cerebral artery (MCA) seriously affects the blood supply of the cerebral cortex. Real-time monitoring of MCA hemodynamic parameters is critical for therapy and rehabilitation. Optical coherence tomography (OCT) is a powerful imaging modality that can produce not only structural images but also functional information on the tissue. We use OCT to detect hemodynamic changes after MCA branch occlusion. We injected a selected dose of endothelin-1 (ET-1) at a depth of 1 mm near the MCA and let the blood vessels undergo first occlusion and then slow reperfusion, as realistically as possible, to simulate local cerebral ischemia. During this period, we used optical microangiography and Doppler OCT to obtain multiple hemodynamic MCA parameters. The trends in these parameters from before to after ET-1 injection clearly reflect the dynamic behavior of the MCA. These results illuminate the mechanism of the cerebral ischemia-reperfusion process after a transient MCA occlusion and confirm that OCT can be used to monitor hemodynamic parameters.

  19. Analysis and Implementation of Methodologies for the Monitoring of Changes in Eye Fundus Images

    NASA Astrophysics Data System (ADS)

    Gelroth, A.; Rodríguez, D.; Salvatelli, A.; Drozdowicz, B.; Bizai, G.

    2011-12-01

    We present a support system for detecting changes in fundus images of the same patient taken at different time intervals. This process is useful for monitoring pathologies that last for long periods of time, as ophthalmologic pathologies usually do. We propose a pipeline of preprocessing, processing, and postprocessing applied to a set of images, selected from a public database, that show pathological progression. A test interface was developed to select the images to be compared, apply the different methods developed, and display the results. We measured the system performance in terms of sensitivity, specificity, and computation time. We obtained good results: higher than 84% for the first two parameters, and processing times lower than 3 seconds for 512x512-pixel images. For the specific case of detecting changes associated with bleeding, the system responds with sensitivity and specificity over 98%.

  20. Systems Biology-Driven Hypotheses Tested In Vivo: The Need to Advancing Molecular Imaging Tools.

    PubMed

    Verma, Garima; Palombo, Alessandro; Grigioni, Mauro; La Monaca, Morena; D'Avenio, Giuseppe

    2018-01-01

    Processing and interpretation of biological images may provide invaluable insights into complex, living systems because images capture the overall dynamics as a "whole." Therefore, "extraction" of key quantitative morphological parameters could be, at least in principle, helpful in building a reliable systems biology approach to understanding living objects. Molecular imaging tools for systems biology models have attained widespread usage in modern experimental laboratories. Here, we provide an overview of advances in computational technology and the different instrumentation focused on molecular image processing and analysis. Quantitative data analysis through various open source software packages and algorithmic protocols provides a novel approach for modeling an experimental research program. Besides this, we also highlight predicted future trends in methods for automatically analyzing biological data. Such tools will be very useful for understanding the detailed biological and mathematical representations underlying in-silico systems biology modeling.

  1. a Geographic Data Gathering System for Image Geolocalization Refining

    NASA Astrophysics Data System (ADS)

    Semaan, B.; Servières, M.; Moreau, G.; Chebaro, B.

    2017-09-01

    Image geolocalization has become an important research field during the last decade. The field is divided into two main branches. The first is coarse image geolocalization, used to find out which country, region, or city an image belongs to. The second is refined image localization for uses that require more accuracy, such as augmented reality and three-dimensional environment reconstruction from images. In this paper we present a processing chain that gathers geographic data from several sources in order to deliver a geolocalization better than the GPS tag of an image, together with precise camera pose parameters. To do so, we use multiple types of data. Some of this information is visible in the image and is extracted using image processing; other data can be extracted from image file headers or from related information on online image sharing platforms. Extracted information elements will not be expressive enough if they remain disconnected. We show that grouping these information elements helps find the best geolocalization of the image.

  2. Medical image integrity control and forensics based on watermarking--approximating local modifications and identifying global image alterations.

    PubMed

    Huang, H; Coatrieux, G; Shu, H Z; Luo, L M; Roux, Ch

    2011-01-01

    In this paper we present a medical image integrity verification system that not only allows detecting and approximating malevolent local image alterations (e.g. removal or addition of findings) but is also capable of identifying the nature of global image processing applied to the image (e.g. lossy compression, filtering …). For that purpose, we propose an image signature derived from the geometric moments of pixel blocks. Such a signature is computed over regions of interest of the image and then watermarked into regions of non-interest. Image integrity analysis is conducted by comparing embedded and recomputed signatures. Local modifications, if any, are approximated through the determination of the parameters of the nearest generalized 2D Gaussian. Image moments are taken as image features and serve as inputs to a classifier trained to discriminate the type of global image processing. Experimental results with both local and global modifications illustrate the overall performance of our approach.

  3. Identification and restoration in 3D fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Dieterlen, Alain; Xu, Chengqi; Haeberle, Olivier; Hueber, Nicolas; Malfara, R.; Colicchio, B.; Jacquey, Serge

    2004-06-01

    3-D optical fluorescence microscopy has now become an efficient tool for the volumetric investigation of living biological samples. The 3-D data can be acquired by optical sectioning microscopy, which is performed by axial stepping of the object versus the objective. For any instrument, each recorded image can be described by a convolution equation between the original object and the point spread function (PSF) of the acquisition system. To assess performance and ensure data reproducibility, as for any 3-D quantitative analysis, system identification is mandatory. The PSF describes the properties of the image acquisition system; it can be computed or acquired experimentally. Statistical tools and Zernike moments are shown to be appropriate and complementary for describing a 3-D system PSF and for quantifying the variation of the PSF as a function of the optical parameters. Some critical experimental parameters can be identified with these tools. This helps biologists define an acquisition protocol that optimizes the use of the system. Reduction of out-of-focus light is the central task of 3-D microscopy; it is carried out computationally by a deconvolution process. Pre-filtering the images improves the stability of the deconvolution results, making them less dependent on the regularization parameter; this helps biologists use the restoration process.
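
    As an illustration of deconvolution with pre-filtering, here is a minimal sketch using scikit-image's Richardson-Lucy routine, one common restoration choice rather than necessarily the authors' own; the pre-filter sigma, the iteration count, and the keyword name num_iter (recent scikit-image versions) are assumptions.

    ```python
    from scipy.ndimage import gaussian_filter
    from skimage import restoration

    def restore_stack(stack, psf, iterations=30):
        """Deconvolve a 3-D optical-sectioning stack with a measured PSF,
        after light Gaussian pre-filtering to stabilize the iteration."""
        prefiltered = gaussian_filter(stack.astype(float), sigma=0.5)
        return restoration.richardson_lucy(prefiltered, psf, num_iter=iterations)
    ```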

  4. Imaging Study of Multi-Crystalline Silicon Wafers Throughout the Manufacturing Process: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnston, S.; Yan, F.; Zaunbracher, K.

    2011-07-01

    Imaging techniques are applied to multi-crystalline silicon bricks, wafers at various process steps, and finished solar cells. Photoluminescence (PL) imaging is used to characterize defects and material quality on bricks and wafers. Defect regions within the wafers are influenced by brick position within an ingot and height within the brick. The defect areas in as-cut wafers are compared to imaging results from reverse-bias electroluminescence and dark lock-in thermography and to cell parameters of near-neighbor finished cells. Defect areas are also characterized by defect band emissions. The defect areas measured by these techniques on as-cut wafers are shown to correlate with finished cell performance.

  5. Automatic macroscopic characterization of diesel sprays by means of a new image processing algorithm

    NASA Astrophysics Data System (ADS)

    Rubio-Gómez, Guillermo; Martínez-Martínez, S.; Rua-Mojica, Luis F.; Gómez-Gordo, Pablo; de la Garza, Oscar A.

    2018-05-01

    A novel algorithm is proposed for the automatic segmentation of diesel spray images and the calculation of their macroscopic parameters. The algorithm automatically detects each spray present in an image and can therefore work, without modification, with diesel injectors having different numbers of nozzle holes. The main characteristic of the algorithm is that it splits each spray into three regions and segments each one with an individually calculated binarization threshold. Each threshold level is calculated from the analysis of a representative luminosity profile of the corresponding region. This approach makes the algorithm robust to irregular light distribution along a single spray and between different sprays in an image. Once the sprays are segmented, the macroscopic parameters of each one are calculated. The algorithm is tested on two sets of diesel spray images taken under normal and irregular illumination setups.
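
    The following NumPy sketch illustrates the per-region thresholding idea under stated assumptions: the spray axis runs along the image rows, the image is background-subtracted, and each region's threshold is a fixed fraction of the peak of its column-averaged luminosity profile (the fraction rule is an assumption, not the authors' exact criterion).

      import numpy as np

      def segment_spray(img, fraction=0.15):
          """Segment one spray by splitting it axially into three regions and
          binarizing each with its own threshold (sketch, not the exact rule).
          img: 2D float array, background-subtracted, spray axis along rows."""
          h = img.shape[0]
          bounds = [0, h // 3, 2 * h // 3, h]
          mask = np.zeros_like(img, dtype=bool)
          for a, b in zip(bounds[:-1], bounds[1:]):
              region = img[a:b]
              # Representative luminosity profile: mean intensity per column.
              profile = region.mean(axis=0)
              # Threshold as a fraction of the profile peak (assumed rule).
              mask[a:b] = region > fraction * profile.max()
          return mask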

  6. Combining High Spatial Resolution Optical and LIDAR Data for Object-Based Image Classification

    NASA Astrophysics Data System (ADS)

    Li, R.; Zhang, T.; Geng, R.; Wang, L.

    2018-04-01

    To classify high spatial resolution images more accurately, a hierarchical rule-based object-based classification framework was developed for a high-resolution image combined with airborne Light Detection and Ranging (LiDAR) data. The eCognition software is employed for the whole process. First, the FBSP (Fuzzy-based Segmentation Parameter) optimizer is used to obtain the optimal scale parameters for different land cover types. Second, using the segmented regions as basic units, classification rules for the various land cover types are established according to the spectral, morphological, and texture features extracted from the optical images, together with the height feature from the LiDAR data. Third, the object classification results are evaluated using the confusion matrix, overall accuracy, and Kappa coefficient. The results show that combining an aerial image with airborne LiDAR data yields higher classification accuracy.

  7. Parameter Estimation in Atmospheric Data Sets

    NASA Technical Reports Server (NTRS)

    Wenig, Mark; Colarco, Peter

    2004-01-01

    In this study the structure tensor technique is used to estimate dynamical parameters in atmospheric data sets. The structure tensor is a common tool for estimating motion in image sequences, and it can be extended to estimate other dynamical parameters such as diffusion constants or exponential decay rates. A general mathematical framework was developed for the direct estimation, from image sequences, of the physical parameters that govern the underlying processes. The estimation technique can be adapted to the specific physical problem under investigation, so it can be used in a variety of applications in trace gas, aerosol, and cloud remote sensing. As a test scenario, this technique will be applied to modeled dust data: vertically integrated dust concentrations are used to derive wind information, and the results can be compared to the wind vector fields that served as input to the model. Based on this analysis, a method to compute atmospheric data parameter fields will be presented.
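
    A minimal sketch of structure-tensor motion estimation, assuming a brightness-constant image sequence: the eigenvector belonging to the smallest eigenvalue of the spatio-temporal structure tensor gives the local space-time orientation, from which velocity follows. Smoothing scale and gradient scheme are assumptions.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def structure_tensor_flow(seq, sigma=2.0):
          """Dense velocity field from an image sequence seq (t, y, x).
          For each pixel, the eigenvector e = (ex, ey, et) of the smallest
          eigenvalue of J = <g g^T>, g = (Ix, Iy, It), satisfies the brightness
          constancy constraint, so (vx, vy) = (ex/et, ey/et)."""
          It, Iy, Ix = np.gradient(seq.astype(float))
          grads = [Ix, Iy, It]
          J = np.empty(seq.shape[1:] + (3, 3))
          mid = seq.shape[0] // 2
          for i in range(3):
              for j in range(3):
                  # Locally averaged gradient products, taken at the middle frame.
                  J[..., i, j] = gaussian_filter(grads[i] * grads[j], sigma)[mid]
          w, v = np.linalg.eigh(J)          # eigenvalues in ascending order
          e = v[..., :, 0]                  # eigenvector of smallest eigenvalue
          et = np.where(np.abs(e[..., 2]) < 1e-6, np.nan, e[..., 2])
          return e[..., 0] / et, e[..., 1] / et   # vx, vy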

  8. Capturing Revolute Motion and Revolute Joint Parameters with Optical Tracking

    NASA Astrophysics Data System (ADS)

    Antonya, C.

    2017-12-01

    Optical tracking of users and of various technical systems is becoming more and more popular. It consists of analysing sequences of recorded images using video capture devices and image processing algorithms. The returned data consist mainly of point clouds, coordinates of markers, or coordinates of points of interest. These data can be used to retrieve information related to the geometry of the objects, but also to extract parameters for the analytical model of the system, useful in a variety of computer-aided engineering simulations. The parameter identification of joints deals with the extraction of physical parameters (mainly geometric parameters) for the purpose of constructing accurate kinematic and dynamic models. The input data are the time series of the markers' positions. The least-squares method was used to fit the data to different geometric shapes (ellipse, circle, plane) and to obtain the position and orientation of revolute joints.
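
    As an illustration of the least-squares fitting step, the sketch below uses the algebraic (Kasa) circle fit to recover a revolute joint's center and radius from a marker trajectory; the paper's exact formulation and the example data are assumptions.

      import numpy as np

      def fit_circle(points):
          """Least-squares (Kasa) circle fit to a marker trajectory.
          points: (N, 2) array of marker positions lying roughly on a circle.
          Solves x^2 + y^2 + a*x + b*y + c = 0 in the least-squares sense."""
          x, y = points[:, 0], points[:, 1]
          A = np.column_stack([x, y, np.ones_like(x)])
          rhs = -(x**2 + y**2)
          (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
          cx, cy = -a / 2, -b / 2
          return (cx, cy), np.sqrt(cx**2 + cy**2 - c)

      # Example: noisy samples of a joint rotating about (3, -1) with radius 2.
      t = np.linspace(0, 2 * np.pi, 100)
      pts = np.column_stack([3 + 2 * np.cos(t), -1 + 2 * np.sin(t)])
      pts += np.random.normal(scale=0.01, size=pts.shape)
      print(fit_circle(pts))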

  9. Investigation of antenna pattern constraints for passive geosynchronous microwave imaging radiometers

    NASA Technical Reports Server (NTRS)

    Gasiewski, A. J.; Skofronick, G. M.

    1992-01-01

    Progress by investigators at Georgia Tech in defining the requirements for large space antennas for passive microwave Earth imaging systems is reviewed. In order to determine the antenna constraints (e.g., the aperture size, illumination taper, and gain uncertainty limits) necessary for the retrieval of geophysical parameters (e.g., rain rate) with adequate spatial resolution and accuracy, a numerical simulation of the passive microwave observation and retrieval process is being developed. Due to the small spatial scale of precipitation and the nonlinear relationships between precipitation parameters (e.g., rain rate, water density profile) and observed brightness temperatures, the retrieval of precipitation parameters is of primary interest in the simulation studies. Major components of the simulation are described, as well as progress and plans for completion. The overall goal of providing quantitative assessments of the accuracy of candidate geosynchronous and low-Earth orbiting imaging systems will continue under a separate grant.

  10. A curve fitting method for extrinsic camera calibration from a single image of a cylindrical object

    NASA Astrophysics Data System (ADS)

    Winkler, A. W.; Zagar, B. G.

    2013-08-01

    An important step in the process of optical steel coil quality assurance is to measure the width and radius proportions of steel coils as well as the relative position and orientation of the camera. This work attempts to estimate these extrinsic parameters from single images by using the cylindrical coil itself as the calibration target. To this end, an adaptive least-squares algorithm is applied to fit parametrized curves to the true coil outline detected in the acquired image. The employed model allows for strictly separating the intrinsic and the extrinsic parameters, so the intrinsic camera parameters can be calibrated beforehand using available calibration software. Furthermore, a way to segment the true coil outline in the acquired images is motivated. The proposed optimization method yields highly accurate results and can be generalized to measure other solids that cannot be characterized by the identification of simple geometric primitives.

  11. Application of Oversampling to obtain the MTF of Digital Radiology Equipment.

    NASA Astrophysics Data System (ADS)

    Narváez, M.; Graffigna, J. P.; Gómez, M. E.; Romo, R.

    2016-04-01

    Within the objectives of the project Medical Image Processing for Quality Assessment of X-Ray Imaging, the present research work is aimed at developing a phantom X-ray image and its associated processing algorithms in order to evaluate the image quality rendered by digital X-ray equipment. These tools are used to measure various image parameters, among which spatial resolution is a fundamental property that can be characterized by the modulation transfer function (MTF) of an imaging system [1]. After a thorough literature survey on imaging quality control of digital X-ray film in Argentine and international publications, it was decided to adopt for this work the norm IEC 62220-1:2003, which recommends using an image edge as the testing method. In order to obtain the characterizing MTF, a protocol was designed to unify the conditions under which the images are acquired for later evaluation. The protocol implied acquiring a radiographic image by means of a specific reference technique, i.e. referred to voltage, current, time, focus-to-plate (/film?) distance, or other reference parameters, and interpreting the image through a computed radiography or direct digital radiology system. The contribution of this work stems from the fact that, even though the traditional way of evaluating X-ray film image quality has relied mostly on subjective methods, it provides an objective evaluation tool for the images obtained with a given piece of equipment, followed by a contrastive analysis with the renderings from other X-ray imaging sets. Once the images were obtained, specific calculations were carried out. Finally, we present the results obtained on different equipment.

  12. Automated measurements of metabolic tumor volume and metabolic parameters in lung PET/CT imaging

    NASA Astrophysics Data System (ADS)

    Orologas, F.; Saitis, P.; Kallergi, M.

    2017-11-01

    Patients with lung tumors or inflammatory lung disease could greatly benefit, in terms of treatment and follow-up, from quantitative PET/CT imaging, namely measurements of metabolic tumor volume (MTV), standardized uptake values (SUVs), and total lesion glycolysis (TLG). The purpose of this study was the development of an unsupervised or partially supervised algorithm using standard image processing tools for measuring MTV, SUV, and TLG from lung PET/CT scans. Automated metabolic lesion volume and metabolic parameter measurements were achieved through a 5-step algorithm: (i) segmentation of the lung areas on the CT slices, (ii) registration of the CT-segmented lung regions onto the PET images to define the anatomical boundaries of the lungs in the functional data, (iii) segmentation of the regions of interest (ROIs) on the PET images based on adaptive thresholding and clinical criteria, (iv) estimation of the number of pixels and pixel intensities in the PET slices of the segmented ROIs, and (v) estimation of MTV, SUVs, and TLG from the previous step and DICOM header data. Whole-body PET/CT scans of patients with sarcoidosis were used for training and testing the algorithm. Lung area segmentation on the CT slices was better achieved with semi-supervised techniques, which reduced false positive detections significantly. Lung segmentation results agreed with the lung volumes published in the literature, while the agreement between experts and the algorithm in the segmentation of the lesions was around 88%. Segmentation results depended on the image resolution selected for processing. The clinical parameters SUV (mean, max, or peak) and TLG estimated from the segmented ROIs and DICOM header data provided a way to correlate imaging data with clinical and demographic data. In conclusion, automated MTV, SUV, and TLG measurements offer powerful analysis tools in PET/CT imaging of the lungs. Custom-made algorithms are often a better approach than the manufacturer's general analysis software, at much lower cost. Relatively simple processing techniques can lead to customized, unsupervised or partially supervised methods that successfully perform the desired analysis and adapt to the specific disease requirements.
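
    Step (v) relies on standard SUV arithmetic; a hedged sketch of the body-weight SUV with decay correction is given below. The function name, units, and the F-18 half-life default are assumptions, and real code would pull dose, weight, and timing from the DICOM header.

      import numpy as np

      def suv_bw(activity_bqml, weight_kg, injected_dose_bq,
                 delay_s, half_life_s=6586.2):
          """Body-weight SUV from an activity-concentration image (sketch).
          activity_bqml: voxel values in Bq/mL (after scanner calibration);
          delay_s: time from injection to scan start; half_life_s: F-18 default.
          SUV = C / (decay-corrected dose / body weight), with 1 g ~ 1 mL."""
          dose_at_scan = injected_dose_bq * 2.0 ** (-delay_s / half_life_s)
          return activity_bqml / (dose_at_scan / (weight_kg * 1000.0))

      roi = np.array([3500.0, 4200.0, 5100.0])   # Bq/mL, hypothetical ROI voxels
      suv = suv_bw(roi, weight_kg=70, injected_dose_bq=370e6, delay_s=3600)
      print(suv.mean(), suv.max())               # SUVmean, SUVmax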

  13. Fast automated analysis of strong gravitational lenses with convolutional neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hezaveh, Yashar D.; Levasseur, Laurence Perreault; Marshall, Philip J.

    Quantifying image distortions caused by strong gravitational lensing—the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures—and estimating the corresponding matter distribution of these structures (the ‘gravitational lens’) has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. We report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the ‘singular isothermal ellipsoid’ density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.

  14. Fast automated analysis of strong gravitational lenses with convolutional neural networks

    DOE PAGES

    Hezaveh, Yashar D.; Levasseur, Laurence Perreault; Marshall, Philip J.

    2017-08-30

    Quantifying image distortions caused by strong gravitational lensing—the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures—and estimating the corresponding matter distribution of these structures (the ‘gravitational lens’) has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. We report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the ‘singular isothermal ellipsoid’ density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.

  15. Fast automated analysis of strong gravitational lenses with convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Hezaveh, Yashar D.; Levasseur, Laurence Perreault; Marshall, Philip J.

    2017-08-01

    Quantifying image distortions caused by strong gravitational lensing—the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures—and estimating the corresponding matter distribution of these structures (the ‘gravitational lens’) has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. Here we report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the ‘singular isothermal ellipsoid’ density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.

  16. Architecture of the parallel hierarchical network for fast image recognition

    NASA Astrophysics Data System (ADS)

    Timchenko, Leonid; Wójcik, Waldemar; Kokriatskaia, Natalia; Kutaev, Yuriy; Ivasyuk, Igor; Kotyra, Andrzej; Smailova, Saule

    2016-09-01

    Multistage integration of visual information in the brain allows humans to respond quickly to the most significant stimuli while maintaining the ability to recognize small details in an image. Implementing this principle in technical systems can lead to more efficient processing procedures. The multistage approach to image processing includes the main types of cortical multistage convergence. The input images are mapped into a flexible hierarchy that reflects the complexity of the image data, and the procedures of temporal image decomposition and hierarchy formation are described by mathematical expressions. The multistage system highlights spatial regularities, which are passed through a number of transformational levels to generate a coded representation of the image that encapsulates structure at different hierarchical levels. At each processing stage a single output result is computed, allowing a quick response of the system. The result is presented as an activity pattern, which can be compared with previously computed patterns on the basis of the closest match. The idea of the forecasting method is as follows: in the results synchronization block, network-processed data arrive at a database, where a sample of the most correlated data is drawn using service parameters of the parallel-hierarchical network.

  17. Realizing parameterless automatic classification of remote sensing imagery using ontology engineering and cyberinfrastructure techniques

    NASA Astrophysics Data System (ADS)

    Sun, Ziheng; Fang, Hui; Di, Liping; Yue, Peng

    2016-09-01

    Fully automatic image classification requiring no input parameter values has long been an unattainable dream for remote sensing experts, who usually spend hours tuning the input parameters of classification algorithms in order to obtain the best results. With the rapid development of knowledge engineering and cyberinfrastructure, many data processing and knowledge reasoning capabilities have become accessible, shareable, and interoperable online. Building on these recent improvements, this paper presents an idea of parameterless automatic classification which only requires an image and automatically outputs a labeled vector; no parameters or operations are needed from end consumers. An approach is proposed to realize the idea. It adopts an ontology database to store the experience of tuning values for classifiers, a sample database to record training samples of image segments, geoprocessing Web services as functionality blocks for the basic classification steps, and workflow technology to turn the overall image classification into a fully automatic process. A Web-based prototype system named PACS (Parameterless Automatic Classification System) was implemented, and a number of images were fed into it for evaluation. The results show that the approach can automatically classify remote sensing images with fairly good average accuracy, and they indicate that the classified results will be more accurate if the two databases are of higher quality. Once the databases accumulate as much experience and as many samples as a human expert has, the approach should obtain results of similar quality to those of a human expert. Since the approach is fully automatic and parameterless, it can not only relieve remote sensing workers from heavy and time-consuming parameter tuning, but also significantly shorten the waiting time for consumers and make it easier for them to engage in image classification activities. Currently, the approach is used only on high-resolution optical three-band remote sensing imagery. The feasibility of using the approach on other kinds of remote sensing images, or of involving additional bands in classification, will be studied in future work.

  18. Small-Animal Imaging Using Diffuse Fluorescence Tomography.

    PubMed

    Davis, Scott C; Tichauer, Kenneth M

    2016-01-01

    Diffuse fluorescence tomography (DFT) has been developed to image the spatial distribution of fluorescence-tagged tracers in living tissue. This capability facilitates the recovery of any number of functional parameters, including enzymatic activity, receptor density, blood flow, and gene expression. However, deploying DFT effectively is complex and often requires years of know-how, especially for newer multimodal systems that combine DFT with conventional imaging systems. In this chapter, we step through the process of MRI-DFT imaging of a receptor-targeted tracer in small animals.

  19. TH-E-17A-07: Improved Cine Four-Dimensional Computed Tomography (4D CT) Acquisition and Processing Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castillo, S; Castillo, R; Castillo, E

    2014-06-15

    Purpose: Artifacts arising from the 4D CT acquisition and post-processing methods add systematic uncertainty to the treatment planning process. We propose an alternate cine 4D CT acquisition and post-processing method to consistently reduce artifacts, and explore patient parameters indicative of image quality. Methods: In an IRB-approved protocol, 18 patients with primary thoracic malignancies received a standard cine 4D CT acquisition followed by an oversampling 4D CT that doubled the number of images acquired. A second cohort of 10 patients received the clinical 4D CT plus 3 oversampling scans for intra-fraction reproducibility. The clinical acquisitions were processed by the standard phase sorting method. The oversampling acquisitions were processed using Dijkstra's algorithm to optimize an artifact metric over the available image data. Image quality was evaluated with a one-way mixed ANOVA model using a correlation-based artifact metric calculated from the final 4D CT image sets. Spearman correlations and a linear mixed model tested the association between breathing parameters, patient characteristics, and image quality. Results: The oversampling 4D CT scans reduced artifact presence significantly, by 27% and 28% for the first and second cohorts respectively. From cohort 2, the inter-replicate deviation for the oversampling method was within approximately 13% of the cross-scan average at the 0.05 significance level. Artifact presence for both clinical and oversampling methods was significantly correlated with breathing period (ρ=0.407, p-value<0.032 clinical; ρ=0.296, p-value<0.041 oversampling). Artifact presence in the oversampling method was significantly correlated with the amount of data acquired (ρ=-0.335, p-value<0.02), indicating decreased artifact presence with increased breathing cycles per scan location. Conclusion: The 4D CT oversampling acquisition with optimized sorting reduced artifact presence significantly and reproducibly compared to the phase-sorted clinical acquisition.

  20. Coregistration refinement of hyperspectral images and DSM: An object-based approach using spectral information

    NASA Astrophysics Data System (ADS)

    Avbelj, Janja; Iwaszczuk, Dorota; Müller, Rupert; Reinartz, Peter; Stilla, Uwe

    2015-02-01

    For image fusion in remote sensing applications, the georeferencing accuracy obtained from position, attitude, and camera calibration measurements can be insufficient, so image processing techniques should be employed for precise coregistration of images. In this article a method for multimodal object-based coregistration refinement between hyperspectral images (HSI) and digital surface models (DSM) is presented. The method is divided into three parts: object outline detection in HSI and DSM, matching, and determination of transformation parameters. The novelty of our proposed coregistration refinement method is the use of material properties and height information of urban objects from HSI and DSM, respectively. We refer to urban objects as objects which are typical in urban environments, and we focus on buildings by describing them with 2D outlines. Furthermore, the geometric accuracy of these detected building outlines is taken into account in the matching step and in the determination of transformation parameters; a stochastic model is introduced to compute optimal transformation parameters. The feasibility of the method is shown by testing it on two aerial HSI of different spatial and spectral resolution, and two DSM of different spatial resolution. The evaluation is carried out by comparing the accuracies of the transformation parameters to reference parameters, determined by considering object outlines at much higher resolution, and also by computing the correctness and the quality rate of the extracted outlines before and after coregistration refinement. Results indicate that using outlines of objects instead of only line segments is advantageous for coregistration of HSI and DSM. The extraction of building outlines, in comparison to line cue extraction, provides a larger number of assigned lines between the images and is more robust to outliers, i.e. false matches.

  1. A virtual clinical trial comparing static versus dynamic PET imaging in measuring response to breast cancer therapy

    NASA Astrophysics Data System (ADS)

    Wangerin, Kristen A.; Muzi, Mark; Peterson, Lanell M.; Linden, Hannah M.; Novakova, Alena; Mankoff, David A.; E Kinahan, Paul

    2017-05-01

    We developed a method to evaluate variations in the PET imaging process in order to characterize the relative ability of static and dynamic metrics to measure breast cancer response to therapy in a clinical trial setting. We performed a virtual clinical trial by generating 540 independent and identically distributed PET imaging study realizations for each of 22 original dynamic fluorodeoxyglucose (18F-FDG) breast cancer patient studies pre- and post-therapy. Each noise realization accounted for known sources of uncertainty in the imaging process, such as biological variability and SUV uptake time. Four definitions of SUV were analyzed: SUVmax, SUVmean, SUVpeak, and SUV50%. We performed a ROC analysis on the resulting SUV and kinetic parameter uncertainty distributions to assess the impact of the variability on the measurement capabilities of each metric. The kinetic macro-parameter K_i showed more variability than SUV (mean CV: K_i = 17%, SUV = 13%), but the K_i pre- and post-therapy distributions also showed increased separation compared to the SUV pre- and post-therapy distributions (mean normalized difference: K_i = 0.54, SUV = 0.27). For the patients who did not show perfect separation between the pre- and post-therapy parameter uncertainty distributions (ROC AUC < 1), dynamic imaging outperformed SUV in distinguishing metabolic change in response to therapy, ranging from 12 to 14 of 16 patients over all SUV definitions and uptake time scenarios (p < 0.05). For the patient cohort in this study, which comprises non-high-grade ER+ tumors, K_i outperformed SUV in a ROC analysis of the parameter uncertainty distributions pre- and post-therapy. This methodology can be applied to different scenarios and can inform the design of clinical trials using PET imaging.

  2. Initial Navigation Alignment of Optical Instruments on GOES-R

    NASA Astrophysics Data System (ADS)

    Isaacson, P.; DeLuccia, F.; Reth, A. D.; Igli, D. A.; Carter, D.

    2016-12-01

    The GOES-R satellite is the first in NOAA's next-generation series of geostationary weather satellites. In addition to a number of space weather sensors, it will carry two principal optical earth-observing instruments, the Advanced Baseline Imager (ABI) and the Geostationary Lightning Mapper (GLM). During launch, currently scheduled for November of 2016, the alignment of these optical instruments is anticipated to shift from that measured during pre-launch characterization. While both instruments have image navigation and registration (INR) processing algorithms to enable automated geolocation of the collected data, the launch-derived misalignment may be too large for these approaches to function without an initial adjustment to calibration parameters. The parameters that may require adjustment are for Line of Sight Motion Compensation (LMC), and the adjustments will be estimated on orbit during the post-launch test (PLT) phase. We have developed approaches to estimate the initial alignment errors for both ABI and GLM image products. Our approaches involve comparison of ABI and GLM images collected during PLT to a set of reference ("truth") images using custom image processing tools and other software (the INR Performance Assessment Tool Set, or "IPATS") being developed for other INR assessments of ABI and GLM data. IPATS is based on image correlation approaches to determine offsets between input and reference images, and these offsets are the fundamental input to our estimate of the initial alignment errors. Initial testing of our alignment algorithms on proxy datasets lends high confidence that their application will determine the initial alignment errors to within sufficient accuracy to enable the operational INR processing approaches to proceed in a nominal fashion. We will report on the algorithms, implementation approach, and status of these initial alignment tools being developed for the GOES-R ABI and GLM instruments.

  3. Image search engine with selective filtering and feature-element-based classification

    NASA Astrophysics Data System (ADS)

    Li, Qing; Zhang, Yujin; Dai, Shengyang

    2001-12-01

    With the growth of the Internet and of storage capability in recent years, images have become a widespread information format on the World Wide Web. However, it has become increasingly hard to search for images of interest, and an effective image search engine for the WWW needs to be developed. In this paper we propose, for the image search engine we developed for the WWW, a selective filtering process and a novel approach to image classification based on feature elements. First, a selective filtering process embedded in a general web crawler filters out meaningless images in GIF format, using two easily obtained parameters. Our classification approach then extracts feature elements from images instead of feature vectors. Compared with feature vectors, feature elements can better capture the visual meaning of an image according to the subjective perception of human beings. Unlike traditional image classification methods, our approach based on feature elements does not calculate the distance between two vectors in a feature space; instead, it tries to find associations between feature elements and the class attribute of the image. Experiments are presented to show the efficiency of the proposed approach.

  4. Automated reconstruction of standing posture panoramas from multi-sector long limb x-ray images

    NASA Astrophysics Data System (ADS)

    Miller, Linzey; Trier, Caroline; Ben-Zikri, Yehuda K.; Linte, Cristian A.

    2016-03-01

    Due to the digital X-ray imaging system's limited field of view, several individual sector images are required to capture the posture of an individual in standing position. These images are then "stitched together" to reconstruct the standing posture. We have created an image processing application that automates the stitching, thereby minimizing user input, optimizing workflow, and reducing human error. The application begins by pre-processing the input images: removing artifacts, filtering out isolated noisy regions, and amplifying a seamless bone edge. The resulting binary images are then registered using a rigid-body intensity-based registration algorithm, and the identified registration transformations are used to map the original sector images into the panorama image. Our method relies primarily on the anatomical content of the images to generate the panoramas, as opposed to external markers employed to aid the alignment process. Current results show robust edge detection prior to registration, and we have tested our approach on 26 patient datasets by comparing the automatically stitched panoramas to manually stitched panoramas in terms of registration parameters, target registration error of homologous markers, and the homogeneity of the digitally subtracted automatically and manually stitched images.
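
    As a simplified stand-in for the rigid-body intensity-based registration described above, the sketch below estimates the translation between two overlapping sector images by phase correlation (translation only; the rotational component of the rigid model is omitted).

      import numpy as np

      def phase_correlation_shift(fixed, moving):
          """Estimate the integer (dy, dx) translation aligning `moving` to
          `fixed` via phase correlation (a translation-only simplification)."""
          F = np.fft.fft2(fixed)
          M = np.fft.fft2(moving)
          cross = F * np.conj(M)
          cross /= np.maximum(np.abs(cross), 1e-12)  # normalized cross-power spectrum
          corr = np.fft.ifft2(cross).real
          peak = np.unravel_index(np.argmax(corr), corr.shape)
          # Map wrap-around peak indices to signed shifts.
          return tuple(p - s if p > s // 2 else p
                       for p, s in zip(peak, corr.shape))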

  5. Some new classification methods for hyperspectral remote sensing

    NASA Astrophysics Data System (ADS)

    Du, Pei-jun; Chen, Yun-hao; Jones, Simon; Ferwerda, Jelle G.; Chen, Zhi-jun; Zhang, Hua-peng; Tan, Kun; Yin, Zuo-xia

    2006-10-01

    Hyperspectral remote sensing (HRS) is one of the most significant recent achievements of Earth observation technology, and classification is the most commonly employed processing methodology. In this paper three new hyperspectral RS image classification methods are analyzed: object-oriented HRS image classification, HRS image classification based on information fusion, and HRS image classification by back-propagation neural network (BPNN). An OMIS HRS image is used as the example data. Object-oriented techniques have gained popularity for RS image classification in recent years. In this method, image segmentation is first used to extract regions from the pixel information based on homogeneity criteria; then spectral parameters such as the mean vector, texture, and NDVI, and spatial/shape parameters such as aspect ratio, convexity, solidity, roundness, and orientation are calculated for each region; finally, the image is classified using the region feature vectors and suitable classifiers such as an artificial neural network (ANN). This demonstrates that object-oriented methods can improve classification accuracy, since they utilize information and features from both the pixel and its neighborhood, and the processing unit is a polygon (in which all pixels are homogeneous and belong to the same class). HRS image classification based on information fusion initially divides all bands of the image into different groups and extracts features from every group according to the properties of each group. Three levels of information fusion, data-level, feature-level, and decision-level fusion, are applied to HRS image classification. An artificial neural network can perform well in RS image classification; in order to promote the use of ANNs for HRS image classification, the back-propagation neural network (BPNN), the most commonly used neural network, is applied.

  6. A Method to Measure and Estimate Normalized Contrast in Infrared Flash Thermography

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2016-01-01

    The paper presents further developments in the normalized contrast processing used in the flash infrared thermography method. Methods of computing normalized image (pixel intensity) contrast and normalized temperature contrast are provided, as are methods of converting image contrast to temperature contrast and vice versa. Normalized contrast processing in flash thermography is useful in the quantitative analysis of flash thermography data, including flaw characterization and comparison of experimental results with simulation. Computation of normalized temperature contrast involves a flash thermography data acquisition set-up with a high-reflectivity foil and high-emissivity tape, such that the foil, tape, and test object are imaged simultaneously. Methods of assessing other quantitative parameters, such as the emissivity of the object, afterglow heat flux, reflection temperature change, and surface temperature during flash thermography, are also provided. Temperature imaging and normalized temperature contrast processing provide certain advantages over normalized image contrast processing by reducing the effect of reflected energy in images and measurements, thereby providing better quantitative data. Examples of incorporating afterglow heat flux and reflection temperature evolution in flash thermography simulation are also discussed.
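
    For orientation, one common form of normalized temperature contrast uses the temperature over a flaw, T_def, the temperature over a nearby sound (reference) region, T_ref, and a pre-flash frame t_0; this is an illustrative definition, and the paper's exact normalization may differ:

      C_n(t) = \frac{T_{\mathrm{def}}(t) - T_{\mathrm{def}}(t_0)}{T_{\mathrm{ref}}(t) - T_{\mathrm{ref}}(t_0)}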

  7. Flame analysis using image processing techniques

    NASA Astrophysics Data System (ADS)

    Her Jie, Albert Chang; Zamli, Ahmad Faizal Ahmad; Zulazlan Shah Zulkifli, Ahmad; Yee, Joanne Lim Mun; Lim, Mooktzeng

    2018-04-01

    This paper presents image processing techniques combined with fuzzy logic and a neural network approach to perform flame analysis. Flame diagnostics are important in industry for extracting relevant information from flame images. Experiments are carried out in a model industrial burner with different flow rates. Flame features such as luminous and spectral parameters are extracted using image processing and the fast Fourier transform (FFT); flame images are acquired using a FLIR infrared camera. Non-linearities such as thermoacoustic oscillations and background noise affect the stability of the flame, and flame velocity is one of the important characteristics that determine stability. In this paper, an image processing method is proposed to determine flame velocity. The power spectral density (PSD) is a good tool for vibration analysis, from which flame stability can be approximated; however, a more intelligent diagnostic system is needed to determine flame stability automatically. Flame features at different flow rates are compared and analyzed, and the selected flame features are used as inputs to the proposed fuzzy inference system to determine flame stability. A neural network is used to test the performance of the fuzzy inference system.
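
    The PSD step can be illustrated with Welch's method from SciPy. Everything concrete here (frame rate, the 12 Hz test oscillation, segment length) is a made-up placeholder, with the flame luminosity taken as one scalar per frame of the image sequence.

      import numpy as np
      from scipy.signal import welch

      fs = 200.0                               # camera frame rate in Hz (assumed)
      t = np.arange(0, 10, 1 / fs)
      # Synthetic luminosity trace: a 12 Hz oscillation buried in noise.
      luminosity = 1.0 + 0.2 * np.sin(2 * np.pi * 12 * t) \
                   + 0.05 * np.random.randn(t.size)

      # Welch-averaged PSD; a dominant narrow peak suggests coherent
      # thermoacoustic oscillation, a broad spectrum a more stable flame.
      f, psd = welch(luminosity, fs=fs, nperseg=512)
      print(f[np.argmax(psd[1:]) + 1])         # dominant oscillation frequency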

  8. Bio-inspired color image enhancement

    NASA Astrophysics Data System (ADS)

    Meylan, Laurence; Susstrunk, Sabine

    2004-06-01

    Capturing and rendering an image that fulfills the observer's expectations is a difficult task. This is because the signal reaching the eye is processed by a complex mechanism before forming a percept, whereas a capturing device only retains the physical value of light intensities. It is especially difficult to render complex scenes with highly varying luminances. For example, a picture taken inside a room where objects are visible through the windows will not be rendered correctly by a global technique: either details in the dim room will be hidden in shadow, or the objects viewed through the window will be too bright. The image has to be treated locally to resemble more closely what the observer remembers. The purpose of this work is to develop a technique for rendering images based on human local adaptation, taking inspiration from the Retinex model of color vision. This model determines the perceived color given the spatial relationships of the captured signals, and it has been used as a computational model for image rendering. In this article, we propose a new Retinex-inspired solution based on a single filter applied to the luminance channel. All parameters are image-dependent, so the process requires no parameter tuning, which makes the method more flexible than existing ones. The presented results show that our method suitably enhances high dynamic range images.
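
    A minimal Retinex-inspired sketch, assuming float RGB in [0, 1]: the luminance channel is divided by a Gaussian surround whose scale is tied to the image size, echoing the image-dependent parameters mentioned above. The specific filter and constants are assumptions, not the paper's mask.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def local_adaptation(rgb, surround_frac=0.05, eps=1e-6):
          """Center/surround luminance rendering (sketch): compress global
          dynamic range while preserving local contrast. rgb: float in [0, 1]."""
          lum = rgb.mean(axis=2) + eps
          sigma = surround_frac * max(lum.shape)     # image-dependent scale
          surround = gaussian_filter(lum, sigma) + eps
          new_lum = lum / surround                   # center/surround ratio
          new_lum /= new_lum.max()
          # Rescale each pixel's color by the luminance change.
          return np.clip(rgb * (new_lum / lum)[..., None], 0, 1)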

  9. A fast and efficient segmentation scheme for cell microscopic image.

    PubMed

    Lebrun, G; Charrier, C; Lezoray, O; Meurie, C; Cardot, H

    2007-04-27

    Microscopic cellular image segmentation schemes must be efficient for reliable analysis and fast enough to process huge quantities of images. Recent studies have focused on improving segmentation quality; several segmentation schemes have good quality, but their processing time is too expensive to deal with a great number of images per day. For segmentation schemes based on pixel classification, the classifier design is crucial, since it accounts for most of the processing time necessary to segment an image. The main contribution of this work concerns how to reduce the complexity of the decision functions produced by support vector machines (SVM) while preserving the recognition rate. Vector quantization is used to reduce the inherent redundancy present in huge pixel databases (i.e. images with expert pixel segmentation), and hybrid color space design is used to improve both the data set size reduction rate and the recognition rate. A new decision function quality criterion is defined to select a good trade-off between the recognition rate and the processing time of the pixel decision function. The first results of this study show that fast and efficient pixel classification with SVM is possible; moreover, posterior class pixel probabilities are easy to compute with Platt's method. A new segmentation scheme using probabilistic pixel classification has therefore been developed. It has several free parameters whose automatic selection must be dealt with, but existing criteria for evaluating segmentation quality are not well adapted to cell segmentation, especially when comparison with an expert pixel segmentation must be achieved. Another important contribution of this paper is therefore the definition of a new quality criterion for the evaluation of cell segmentation. The results presented here show that selecting the free parameters of the segmentation scheme by optimizing the new cell segmentation quality criterion produces efficient cell segmentations.

  10. Identification of optimal mask size parameter for noise filtering in 99mTc-methylene diphosphonate bone scintigraphy images.

    PubMed

    Pandey, Anil K; Bisht, Chandan S; Sharma, Param D; ArunRaj, Sreedharan Thankarajan; Taywade, Sameer; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh

    2017-11-01

    99mTc-methylene diphosphonate (99mTc-MDP) bone scintigraphy images have a limited number of counts per pixel. A noise filtering method based on the local statistics of the image produces better results than a linear filter; however, the mask size has a significant effect on image quality. In this study, we identified the optimal mask size that yields a good smooth bone scan image. Forty-four bone scan images were processed using mask sizes of 3, 5, 7, 9, 11, 13, and 15 pixels. The input and processed images were reviewed in two steps. In the first step, the images were inspected, and the mask sizes that produced images with significant loss of clinical detail in comparison with the input image were excluded. In the second step, the image quality of the 40 remaining sets of images (each set comprising the input image and its corresponding processed images with 3, 5, and 7-pixel masks) was assessed by two nuclear medicine physicians, who selected one good smooth image from each set. Image quality was also assessed quantitatively with a line profile. Fisher's exact test was used to test for statistically significant differences between the image quality obtained with the 5 and 7-pixel masks at a 5% cut-off, and a statistically significant difference was found (P=0.00528). The optimal mask size for the Lee filter was identified as 7×7 pixels, which yielded 99mTc-MDP bone scan images with the highest acceptable smoothness.
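
    The local-statistics filter studied above belongs to the family of the Lee filter; a hedged NumPy sketch with a square mask follows. The global noise-variance estimate is an assumed simplification.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def lee_filter(img, size=7, noise_var=None):
          """Local-statistics (Lee) noise filter for a counts-limited image.
          `size` is the mask size; 7 pixels was found optimal above. Each output
          pixel is mean + k * (pixel - mean), with k = var / (var + noise)."""
          img = img.astype(float)
          mean = uniform_filter(img, size)
          sq_mean = uniform_filter(img * img, size)
          var = np.maximum(sq_mean - mean**2, 0)
          if noise_var is None:
              noise_var = np.mean(var)   # crude global noise estimate (assumed)
          k = var / (var + noise_var)
          return mean + k * (img - mean)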

  11. Evaluation of width and width uniformity of near-field electrospinning printed micro and sub-micrometer lines based on optical image processing

    NASA Astrophysics Data System (ADS)

    Zhao, Libo; Xia, Yong; Hebibul, Rahman; Wang, Jiuhong; Zhou, Xiangyang; Hu, Yingjie; Li, Zhikang; Luo, Guoxi; Zhao, Yulong; Jiang, Zhuangde

    2018-03-01

    This paper presents an experimental study using image processing to investigate the width and width uniformity of sub-micrometer polyethylene oxide (PEO) lines fabricated by the near-field electrospinning (NFES) technique. An adaptive thresholding method was developed to determine the optimal gray values for accurately extracting the profiles of printed lines from the original optical images, and it proved feasible. The proposed thresholding method takes advantage of statistical properties of the image and eliminates halo-induced errors. The triangular method and the relative standard deviation (RSD) were introduced to calculate line width and width uniformity, respectively. Based on these image processing methods, the effects of process parameters, including substrate speed (v), applied voltage (U), nozzle-to-collector distance (H), and syringe pump flow rate (Q), on the width and width uniformity of printed lines are discussed. The results are helpful for promoting the NFES technique for fabricating high-resolution micro and sub-micrometer lines, and for optical image processing at the sub-micrometer level.
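
    Width and RSD extraction can be sketched as below, assuming a roughly vertical printed line and a binary mask produced by the adaptive thresholding step; the pixel size and orientation handling are assumptions.

      import numpy as np

      def width_and_uniformity(mask, pixel_size_um):
          """Per-row width of a printed line from its binary mask, plus the
          relative standard deviation used above as the uniformity measure."""
          widths = mask.sum(axis=1) * pixel_size_um   # pixels per row -> um
          widths = widths[widths > 0]                 # keep rows crossing the line
          rsd = 100.0 * widths.std() / widths.mean()  # RSD in percent
          return widths.mean(), rsd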

  12. Automation of image data processing (Polish Title: Automatyzacja procesu przetwarzania danych obrazowych)

    NASA Astrophysics Data System (ADS)

    Preuss, R.

    2014-12-01

    This article discusses the current capabilities for automated processing of image data, using the PhotoScan software by Agisoft as an example. At present, image data obtained by various registration systems (metric and non-metric cameras) placed on airplanes, satellites, or, more often, on UAVs, are used to create photogrammetric products. Multiple registrations of an object or land area (capturing large groups of photos) are usually performed in order to eliminate obscured areas and to raise the final accuracy of the photogrammetric product. Because of this, the geometry of the resulting image blocks is far from the typical configuration of images. For fast image georeferencing, automatic image matching algorithms are currently applied; they can create a model of a block in a local coordinate system or, using initial exterior orientation and measured control points, provide image georeferencing in an external reference frame. In the case of non-metric images, it is also possible to carry out a self-calibration process at this stage. Image matching algorithms are also used to generate dense point clouds reconstructing the spatial shape of the object (area). In subsequent processing steps it is possible to obtain typical photogrammetric products such as an orthomosaic, DSM or DTM, and a photorealistic solid model of an object. All of the aforementioned processing steps are implemented in a single program, in contrast to standard commercial software, which divides the steps into dedicated modules. Image processing leading to final georeferenced products can be fully automated, including sequential implementation of the processing steps with predetermined control parameters. The paper presents the practical results of fully automatic generation of orthomosaics, both for images obtained by a metric Vexcel camera and for a block of images acquired by a non-metric UAV system.

  13. Image design and replication for image-plane disk-type multiplex holograms

    NASA Astrophysics Data System (ADS)

    Chen, Chih-Hung; Cheng, Yih-Shyang

    2017-09-01

    The fabrication methods and parameter design for both real-image generation and virtual-image display in image-plane disk-type multiplex holography are introduced in this paper. A theoretical model of a disk-type hologram is also presented and is then used in our two-step holographic processes, including the production of a non-image-plane master hologram and optical replication using a single-beam copying system for the production of duplicated holograms. Experimental results are also presented to verify the possibility of mass production using the one-shot holographic display technology described in this study.

  14. High-contrast imaging in the cloud with klipReduce and Findr

    NASA Astrophysics Data System (ADS)

    Haug-Baltzell, Asher; Males, Jared R.; Morzinski, Katie M.; Wu, Ya-Lin; Merchant, Nirav; Lyons, Eric; Close, Laird M.

    2016-08-01

    Astronomical data sets are growing ever larger, and the area of high contrast imaging of exoplanets is no exception. With the advent of fast, low-noise detectors operating at 10 to 1000 Hz, huge numbers of images can be taken during a single hours-long observation. High frame rates offer several advantages, such as improved registration, frame selection, and improved speckle calibration. However, advanced image processing algorithms are computationally challenging to apply. Here we describe a parallelized, cloud-based data reduction system developed for the Magellan Adaptive Optics VisAO camera, which is capable of rapidly exploring tens of thousands of parameter sets affecting the Karhunen-Loève image processing (KLIP) algorithm to produce high-quality direct images of exoplanets. We demonstrate these capabilities with a visible wavelength high contrast data set of a hydrogen-accreting brown dwarf companion.
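
    The core of KLIP can be sketched in a few lines of NumPy: build a Karhunen-Loeve basis from reference frames, project the science frame onto the first modes, and subtract. This is a generic sketch, not klipReduce itself; the frame shapes and mode count are assumptions.

      import numpy as np

      def klip_subtract(target, refs, n_modes=10):
          """KLIP-style PSF subtraction (minimal sketch).
          target: flattened science frame (npix,); refs: (N, npix) references.
          The number of retained modes is one of the many parameters the
          pipeline above explores in parallel."""
          refs = refs - refs.mean(axis=1, keepdims=True)
          cov = refs @ refs.T
          w, v = np.linalg.eigh(cov)
          order = np.argsort(w)[::-1][:n_modes]
          # KL basis images, made orthonormal over pixels.
          basis = v[:, order].T @ refs
          basis /= np.linalg.norm(basis, axis=1, keepdims=True)
          t = target - target.mean()
          model = (basis @ t) @ basis        # projection onto the KL modes
          return t - model                   # PSF-subtracted residual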

  15. Image Segmentation, Registration, Compression, and Matching

    NASA Technical Reports Server (NTRS)

    Yadegar, Jacob; Wei, Hai; Yadegar, Joseph; Ray, Nilanjan; Zabuawala, Sakina

    2011-01-01

    A novel computational framework was developed for 2D affine-invariant matching exploiting a parameter space. Named the affine invariant parameter space (AIPS), the technique can be applied to many image-processing and computer-vision problems, including image registration, template matching, and object tracking from image sequences. The AIPS is formed by the parameters in an affine combination of a set of feature points in the image plane. In cases where the entire image can be assumed to have undergone a single affine transformation, the new AIPS match metric and matching framework become very effective (compared with the state-of-the-art methods at the time of this reporting). No knowledge about scaling or any other transformation parameters needs to be known a priori to apply the AIPS framework. An automated suite of software tools has been created to provide accurate image segmentation (for data cleaning) and high-quality 2D image and 3D surface registration (for fusing multi-resolution terrain, image, and map data). These tools are capable of supporting existing GIS toolkits already in the marketplace and can also be used in a stand-alone fashion. The toolkit applies novel algorithmic approaches for image segmentation, feature extraction, and registration of 2D imagery and 3D surface data, supporting first-pass, batched, fully automatic feature extraction (for segmentation) and registration. A hierarchical and adaptive approach is taken to achieve automatic feature extraction, segmentation, and registration. Surface registration is the process of aligning two (or more) data sets to a common coordinate system, during which the transformation between their different coordinate systems is determined. Also developed here is a novel volumetric surface modeling and compression technique that provides both quality-guaranteed mesh surface approximations and compaction of the model sizes by efficiently coding the geometry and connectivity/topology components of the generated models. The highly efficient triangular mesh compression compacts the connectivity information at a rate of 1.5-4 bits per vertex (on average for triangle meshes), while reducing the 3D geometry by 40-50 percent. Finally, taking into consideration the characteristics of 3D terrain data, and using the innovative regularized binary decomposition mesh modeling, a multistage, pattern-driven modeling and compression technique has been developed to provide an effective framework for compressing digital elevation model (DEM) surfaces, high-resolution aerial imagery, and other types of NASA data.

  16. Effect of Voltage and Flow Rate Electrospinning Parameters on Polyacrylonitrile Electrospun Fibers

    NASA Astrophysics Data System (ADS)

    Bakar, S. S. S.; Fong, K. C.; Eleyas, A.; Nazeri, M. F. M.

    2018-03-01

    Electrospinning is currently a well-known and widely used technique for forming polymer nanofibers. In this paper, polyacrylonitrile (PAN) nanofibers were prepared at a concentration of 10 wt% with varied processing parameters, and the effect of these parameters on the properties of the PAN fibers, in terms of fiber diameter and electrical conductivity, is presented. Voltages of 10, 15, and 20 kV with a PAN flow rate of 1 were applied, and the electrospun PAN fibers then underwent pyrolysis at 800°C for 30 minutes. The resultant PAN nanofibers were analysed by SEM, XRD, and a four-point probe test after the pyrolysis process. SEM images show a continuous, uniform, and smooth-surfaced fibrous structure of the electrospun PAN fibers with an average diameter of 1.81 μm. The fiber morphology is controlled by manipulating the processing parameters of the electrospinning process. The results showed that the resistance of the electrospun PAN fibers decreases as the applied voltage and flow rate increase.

  17. A Statistical Representation of Pyrotechnic Igniter Output

    NASA Astrophysics Data System (ADS)

    Guo, Shuyue; Cooper, Marcia

    2017-06-01

    The output of simplified pyrotechnic igniters for research investigations is statistically characterized by monitoring the post-ignition external flow field with Schlieren imaging. Unique to this work is a detailed quantification of all measurable manufacturing parameters (e.g., bridgewire length, charge cavity dimensions, powder bed density) and the associated shock-motion variability in the tested igniters. To demonstrate the experimental precision of the recorded Schlieren images and of the developed image processing methodologies, commercial exploding bridgewires with different wire parameters were tested. Finally, a statistically significant population of manufactured igniters was tested within the Schlieren arrangement, resulting in a characterization of the nominal output. Comparisons between the variances measured throughout the manufacturing processes and the calculated output variance provide insight into the critical device phenomena that dominate performance. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's NNSA under contract DE-AC04-94AL85000.

  18. [Assessment of skin aging grading based on computer vision].

    PubMed

    Li, Lingyu; Xue, Jinxia; He, Xiangqian; Zhang, Sheng; Fan, Chu

    2017-06-01

    Skin aging is the most intuitive and obvious sign of the human aging process. Qualitative and quantitative determination of skin aging is of particular importance for the evaluation of human aging and of anti-aging treatment effects. To address the subjectivity of conventional skin aging grading methods, a self-organizing map (SOM) network was used to explore an automatic method for skin aging grading. First, ventral forearm skin images were obtained with a portable digital microscope, and two texture parameters, i.e., the mean width of skin furrows and the number of intersections, were extracted by an image processing algorithm. The values of the texture parameters were then taken as inputs to train the SOM network. The experimental results showed that the network achieved an overall accuracy of 80.8% compared with the grading results of human graders. The designed method is rapid and objective, and it can be used for quantitative analysis of skin images and automatic assessment of skin aging grade.
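
    A self-contained toy SOM over the two texture features can be sketched as follows; the node count, learning-rate and neighborhood schedules, and the random feature data are assumptions, not the paper's settings.

      import numpy as np

      def train_som(data, n_nodes=4, iters=2000, lr0=0.5, sigma0=1.5):
          """Minimal 1-D self-organizing map over two texture features
          (mean furrow width, number of intersections); each node ends up
          representing roughly one aging grade."""
          rng = np.random.default_rng(0)
          w = data[rng.choice(len(data), n_nodes)]        # init from samples
          for t in range(iters):
              x = data[rng.integers(len(data))]
              bmu = np.argmin(((w - x) ** 2).sum(axis=1)) # best-matching unit
              lr = lr0 * np.exp(-t / iters)
              sigma = sigma0 * np.exp(-t / iters)
              d = np.arange(n_nodes) - bmu                # grid distance to BMU
              h = np.exp(-(d ** 2) / (2 * sigma ** 2))    # neighborhood weights
              w += lr * h[:, None] * (x - w)
          return w

      features = np.random.rand(200, 2)                   # placeholder features
      nodes = train_som(features)
      grade = np.argmin(((nodes - features[0]) ** 2).sum(axis=1))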

  19. Hazard mitigation with cloud model based rainfall and convective data

    NASA Astrophysics Data System (ADS)

    Gernowo, R.; Adi, K.; Yulianto, T.; Seniyatis, S.; Yatunnisa, A. A.

    2018-05-01

    Heavy rain in Semarang on 15 January 2013 caused flooding. The event is related to the dynamics of weather parameters, especially the convection process, clouds and rainfall. Weather conditions were analyzed with the Weather Research and Forecasting (WRF) model, and several weather parameters show significant results; their fluctuations indicate strong convection producing convective cloud (Cumulonimbus). Nesting with 2 domains in the WRF model gives good output for representing the general weather conditions. The 6-12 hour difference between the observed cloud cover rate and the model output is caused by the spin-up of the processing. Satellite images from MTSAT (Multifunctional Transport Satellite) are used as verification data for the WRF results. The white color in the satellite image is Coldest Dark Grey (CDG), which indicates cloud tops. These images confirm that the WRF output is good enough to analyze the conditions in Semarang when the event happened.

  20. Improved inference in Bayesian segmentation using Monte Carlo sampling: application to hippocampal subfield volumetry.

    PubMed

    Iglesias, Juan Eugenio; Sabuncu, Mert Rory; Van Leemput, Koen

    2013-10-01

    Many segmentation algorithms in medical image analysis use Bayesian modeling to augment local image appearance with prior anatomical knowledge. Such methods often contain a large number of free parameters that are first estimated and then kept fixed during the actual segmentation process. However, a faithful Bayesian analysis would marginalize over such parameters, accounting for their uncertainty by considering all possible values they may take. Here we propose to incorporate this uncertainty into Bayesian segmentation methods in order to improve the inference process. In particular, we approximate the required marginalization over model parameters using computationally efficient Markov chain Monte Carlo techniques. We illustrate the proposed approach using a recently developed Bayesian method for the segmentation of hippocampal subfields in brain MRI scans, showing a significant improvement in an Alzheimer's disease classification task. As an additional benefit, the technique also allows one to compute informative "error bars" on the volume estimates of individual structures. Copyright © 2013 Elsevier B.V. All rights reserved.

  1. Improved Inference in Bayesian Segmentation Using Monte Carlo Sampling: Application to Hippocampal Subfield Volumetry

    PubMed Central

    Iglesias, Juan Eugenio; Sabuncu, Mert Rory; Leemput, Koen Van

    2013-01-01

    Many segmentation algorithms in medical image analysis use Bayesian modeling to augment local image appearance with prior anatomical knowledge. Such methods often contain a large number of free parameters that are first estimated and then kept fixed during the actual segmentation process. However, a faithful Bayesian analysis would marginalize over such parameters, accounting for their uncertainty by considering all possible values they may take. Here we propose to incorporate this uncertainty into Bayesian segmentation methods in order to improve the inference process. In particular, we approximate the required marginalization over model parameters using computationally efficient Markov chain Monte Carlo techniques. We illustrate the proposed approach using a recently developed Bayesian method for the segmentation of hippocampal subfields in brain MRI scans, showing a significant improvement in an Alzheimer’s disease classification task. As an additional benefit, the technique also allows one to compute informative “error bars” on the volume estimates of individual structures. PMID:23773521
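
    The core idea, stripped of the hippocampal model, is to replace a point estimate of the model parameters with an average over posterior samples. The toy sketch below marginalizes a single scalar parameter of a Gaussian model with a Metropolis sampler; the model, prior and prediction function are stand-ins, not the paper's segmentation method.

    ```python
    # Toy sketch: instead of fixing a parameter theta at a point estimate,
    # draw MCMC samples from p(theta | data) and average the per-sample
    # predictions. 1D Gaussian model, flat prior; purely illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(loc=2.0, scale=1.0, size=50)

    def log_posterior(theta):
        # Flat prior; Gaussian likelihood with known unit variance
        return -0.5 * np.sum((data - theta) ** 2)

    # Metropolis sampler over theta
    samples, theta = [], 0.0
    for _ in range(5000):
        prop = theta + rng.normal(scale=0.3)
        if np.log(rng.random()) < log_posterior(prop) - log_posterior(theta):
            theta = prop
        samples.append(theta)
    samples = np.array(samples[1000:])  # discard burn-in

    def predict(theta):
        # Stand-in for "predict with parameters theta"; here a probability
        return 1.0 / (1.0 + np.exp(-(theta - 1.5)))

    # Marginalized prediction: Monte Carlo average over parameter samples
    print("fixed-theta prediction:", predict(samples.mean()))
    print("marginalized prediction:", predict(samples).mean())
    ```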

  2. Computer-assisted image processing to detect spores from the fungus Pandora neoaphidis.

    PubMed

    Korsnes, Reinert; Westrum, Karin; Fløistad, Erling; Klingen, Ingeborg

    2016-01-01

    This contribution demonstrates an example of experimental automatic image analysis to detect spores prepared on microscope slides derived from trapping. The application is to monitor aerial spore counts of the entomopathogenic fungus Pandora neoaphidis, which may serve as a biological control agent for aphids. Automatic detection of such spores can therefore play a role in plant protection. The present approach to such detection is a modification of traditional manual microscopy of prepared slides, where autonomous image recording precedes computerised image analysis. The purpose of the present image analysis is to support human visual inspection of imagery data, not to replace it. The workflow has three components:
    • Preparation of slides for microscopy.
    • Image recording.
    • Computerised image processing, where the initial part is, as usual, segmentation depending on the actual data product, followed by identification of blobs, calculation of the principal axes of the blobs, symmetry operations and projection onto a three-parameter egg-shape space (the blob stage is sketched below).
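
    A minimal sketch of the blob stage, assuming scikit-image: label a segmented binary mask and read off each blob's principal axes, which the moments-based region properties provide directly. The toy mask stands in for a real segmented micrograph.

    ```python
    # Label a binary segmentation and compute each blob's principal axes
    # from second-order image moments (scikit-image regionprops).
    import numpy as np
    from skimage.measure import label, regionprops

    # Toy binary mask standing in for a segmented microscope image
    mask = np.zeros((64, 64), dtype=bool)
    mask[10:30, 20:28] = True   # an elongated, roughly egg-shaped blob
    mask[40:50, 40:50] = True   # a rounder blob

    labels = label(mask)
    for blob in regionprops(labels):
        # Orientation and axis lengths derive from the central moments
        print(f"blob {blob.label}: area={blob.area}, "
              f"major={blob.major_axis_length:.1f}, "
              f"minor={blob.minor_axis_length:.1f}, "
              f"orientation={blob.orientation:.2f} rad")
    ```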

  3. A New Test Method of Circuit Breaker Spring Telescopic Characteristics Based Image Processing

    NASA Astrophysics Data System (ADS)

    Huang, Huimin; Wang, Feifeng; Lu, Yufeng; Xia, Xiaofei; Su, Yi

    2018-06-01

    This paper applies computer vision technology to the fatigue condition monitoring of springs: a new test method for the telescopic characteristics of the circuit breaker operating mechanism spring is proposed, based on image processing. A high-speed camera captures image sequences of the spring movement while the high-voltage circuit breaker operates. An image-matching method is then used to obtain the deformation-time and speed-time curves, from which the spring expansion and deformation parameters are extracted; these lay the foundation for subsequent spring force analysis and matching-state evaluation. Simulation tests at the experimental site show that this image analysis method avoids the complex installation problems of traditional mechanical sensors and supports online monitoring and status assessment of the circuit breaker spring.

  4. Image analysis of multiple moving wood pieces in real time

    NASA Astrophysics Data System (ADS)

    Wang, Weixing

    2006-02-01

    This paper presents algorithms for image processing and image analysis of wood piece materials. The algorithms were designed for automatic detection of wood pieces on a moving conveyor belt or a truck. When the wood objects are moving, the hard task is to trace their contours in an optimal way. To make the algorithms work efficiently in the plant, a flexible online system was designed and developed, consisting mainly of image acquisition, image processing, object delineation and analysis. A number of newly developed algorithms can delineate wood objects with high accuracy and high speed, and in the wood piece analysis part, each wood piece can be characterized by a number of visual parameters which can also be used for constructing experimental models directly in the system.

  5. Image processing analysis of nuclear track parameters for CR-39 detector irradiated by thermal neutron

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Al-Jobouri, Hussain A., E-mail: hahmed54@gmail.com; Rajab, Mustafa Y., E-mail: mostafaheete@gmail.com

    A CR-39 detector covered with a boric acid (H₃BO₃) pellet was irradiated by thermal neutrons from a (²⁴¹Am-⁹Be) source with an activity of 12 Ci and a neutron flux of 10⁵ n·cm⁻²·s⁻¹. The irradiation times T_D were 4 h, 8 h, 16 h and 24 h. The detector was chemically etched in sodium hydroxide (NaOH, 6.25 N) for 45 min at 60°C. Images of the CR-39 detector after etching were taken with a digital camera connected to an optical microscope, and MATLAB version 7.0 was used for the image processing. Analysis of the image processing outputs revealed the following relationships: (a) the irradiation time T_D has a linear relationship with the following nuclear track parameters: (i) total track number N_T; (ii) maximum track number MRD (relative to track diameter D_T) in the response region from 2.5 μm to 4 μm; (iii) maximum track number M_D (independent of track diameter D_T); (b) the irradiation time T_D has a logarithmic relationship with the maximum track number M_A (independent of track area A_T). The image processing technique, principally the track diameter D_T, can be taken into account for the classification of α-particle emitters, and these techniques can also contribute to the preparation of nano-filters and nano-membranes in nanotechnology fields.

  6. Spectral Analysis and Experimental Modeling of Ice Accretion Roughness

    NASA Technical Reports Server (NTRS)

    Orr, D. J.; Breuer, K. S.; Torres, B. E.; Hansman, R. J., Jr.

    1996-01-01

    A self-consistent scheme for relating wind tunnel ice accretion roughness to the resulting enhancement of heat transfer is described. First, a spectral technique for quantitative analysis of early ice roughness images is reviewed. The image processing scheme uses a spectral estimation technique (SET) which extracts physically descriptive parameters by comparing scan lines from the experimentally obtained accretion images to a prescribed test function. Analysis using this technique in both the streamwise and spanwise directions of data from the NASA Lewis Icing Research Tunnel (IRT) is presented. An experimental technique is then presented for constructing physical roughness models suitable for wind tunnel testing that match the SET parameters extracted from the IRT images. The icing castings and modeled roughness are tested for enhancement of boundary layer heat transfer using infrared techniques in a "dry" wind tunnel.
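
    As a generic illustration of this kind of scan-line spectral analysis (not the specific SET of the paper), the sketch below computes the power spectrum of a synthetic 1D roughness profile and extracts its dominant wavelength; the sample spacing and profile are invented.

    ```python
    # Power spectrum of a single roughness scan line; the dominant
    # wavelength is one of the physically descriptive quantities a
    # spectral technique can report. Synthetic profile, assumed spacing.
    import numpy as np

    dx = 0.05                      # assumed sample spacing along the scan, mm
    x = np.arange(0, 10, dx)
    rng = np.random.default_rng(1)
    profile = (0.2 * np.sin(2 * np.pi * x / 1.5)
               + 0.05 * rng.standard_normal(x.size))

    profile = profile - profile.mean()           # remove the DC component
    spectrum = np.abs(np.fft.rfft(profile)) ** 2
    freqs = np.fft.rfftfreq(profile.size, d=dx)  # cycles per mm

    # Dominant roughness wavelength (skip the zero-frequency bin)
    k = 1 + np.argmax(spectrum[1:])
    print(f"dominant wavelength ~ {1.0 / freqs[k]:.2f} mm")
    ```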

  7. Adaptive striping watershed segmentation method for processing microscopic images of overlapping irregular-shaped and multicentre particles.

    PubMed

    Xiao, X; Bai, B; Xu, N; Wu, K

    2015-04-01

    Oversegmentation is a major drawback of the morphological watershed algorithm. Here, we study and reveal that the oversegmentation is not only because of the irregular shapes of the particle images, which people are familiar with, but also because of some particles, such as ellipses, with more than one centre. A new parameter, the striping level, is introduced and the criterion for striping parameter is built to help find the right markers prior to segmentation. An adaptive striping watershed algorithm is established by applying a procedure, called the marker searching algorithm, to find the markers, which can effectively suppress the oversegmentation. The effectiveness of the proposed method is validated by analysing some typical particle images including the images of gold nanorod ensembles. © 2014 The Authors Journal of Microscopy © 2014 Royal Microscopical Society.
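
    A minimal sketch of marker-controlled watershed, assuming scikit-image and SciPy: the markers here are seeded from distance-transform peaks, a common generic choice, rather than the paper's striping-level criterion, but it shows how supplying the right markers suppresses oversegmentation of overlapping particles.

    ```python
    # Marker-controlled watershed on two overlapping disks: flooding starts
    # only from the supplied markers, so stray minima cannot seed regions.
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    # Toy binary image: two overlapping disks
    yy, xx = np.mgrid[0:80, 0:80]
    binary = ((xx - 30) ** 2 + (yy - 40) ** 2 < 15 ** 2) | \
             ((xx - 50) ** 2 + (yy - 40) ** 2 < 15 ** 2)

    distance = ndi.distance_transform_edt(binary)
    coords = peak_local_max(distance, min_distance=10, labels=binary)
    mask = np.zeros(distance.shape, dtype=bool)
    mask[tuple(coords.T)] = True
    markers, _ = ndi.label(mask)

    labels = watershed(-distance, markers, mask=binary)
    print("segments found:", labels.max())  # 2, despite the overlap
    ```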

  8. Scanning electron microscopy combined with image processing technique: Analysis of microstructure, texture and tenderness in Semitendinosus and Gluteus Medius bovine muscles.

    PubMed

    Pieniazek, Facundo; Messina, Valeria

    2016-11-01

    In this study the effects of freeze drying on the microstructure, texture, and tenderness of Semitendinosus and Gluteus Medius bovine muscles were analyzed by applying Scanning Electron Microscopy combined with image analysis. Samples were imaged by Scanning Electron Microscopy at different magnifications (250, 500, and 1,000×). Texture parameters were measured with a texture analyzer and by image analysis, and tenderness by Warner-Bratzler shear force. Significant differences (p < 0.05) were obtained for image and instrumental texture features, and linear correlations were found between them. Image texture features calculated from the Gray Level Co-occurrence Matrix (homogeneity, contrast, entropy, correlation and energy) at 1,000× in both muscles had high correlations with instrumental features (chewiness, hardness, cohesiveness, and springiness). Tenderness showed a positive correlation in both muscles with the image features energy and homogeneity. Combining Scanning Electron Microscopy with image analysis can be a useful tool to analyze quality parameters in meat. SCANNING 38:727-734, 2016. © 2016 Wiley Periodicals, Inc.
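
    A minimal sketch of those GLCM features, assuming a recent scikit-image (where the functions are named graycomatrix/graycoprops); the input image is synthetic, and entropy, which graycoprops does not provide, is computed directly from the normalized matrix.

    ```python
    # Gray Level Co-occurrence Matrix texture features (scikit-image).
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    rng = np.random.default_rng(0)
    img = (rng.random((128, 128)) * 255).astype(np.uint8)

    # Co-occurrence matrix for 1-pixel horizontal offsets, normalized
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)

    for prop in ("homogeneity", "contrast", "correlation", "energy"):
        print(prop, graycoprops(glcm, prop)[0, 0])

    # Entropy is not in graycoprops; compute it from the matrix directly
    p = glcm[:, :, 0, 0]
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    print("entropy", entropy)
    ```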

  9. A novel ship CFAR detection algorithm based on adaptive parameter enhancement and wake-aided detection in SAR images

    NASA Astrophysics Data System (ADS)

    Meng, Siqi; Ren, Kan; Lu, Dongming; Gu, Guohua; Chen, Qian; Lu, Guojun

    2018-03-01

    Synthetic aperture radar (SAR) is an indispensable and useful method for marine monitoring. With the increase in SAR sensors, high resolution images can be acquired that contain more target structure information, such as finer spatial detail. This paper presents a novel adaptive parameter transform (APT) domain constant false alarm rate (CFAR) detector to highlight targets. The whole method is based on the APT domain value. Firstly, the image is mapped into the new transform domain. Secondly, false candidate target pixels are screened out by the CFAR detector to highlight the target ships. Thirdly, the ship pixels are replaced by homogeneous sea pixels, and the enhanced image is processed by the Niblack algorithm to obtain a binary wake image. Finally, the normalized Hough transform (NHT) is used to detect wakes in the binary image as verification of the presence of the ships. Experiments on real SAR images validate that the proposed transform does enhance the target structure and improve the contrast of the image. The algorithm performs well in ship and ship-wake detection.
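
    For illustration, the sketch below implements a plain cell-averaging CFAR on image intensities with SciPy; the paper's detector operates on APT-domain values, so this is only the generic sliding-window thresholding idea, with invented window sizes and scale factor.

    ```python
    # Cell-averaging CFAR: flag pixels exceeding a multiple of the local
    # background mean, estimated over a window minus a central guard region
    # so the target itself does not inflate the estimate.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def ca_cfar(img, win=21, guard=5, scale=3.0):
        big = uniform_filter(img, size=win) * win**2      # window sums
        small = uniform_filter(img, size=guard) * guard**2
        background = (big - small) / (win**2 - guard**2)
        return img > scale * background

    rng = np.random.default_rng(0)
    sea = rng.exponential(scale=1.0, size=(200, 200))  # speckle-like clutter
    sea[100:104, 100:104] += 25.0                      # a bright "ship"
    detections = ca_cfar(sea)
    print("detected pixels:", int(detections.sum()))
    ```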

  10. A novel scatter-matrix eigenvalues-based total variation (SMETV) regularization for medical image restoration

    NASA Astrophysics Data System (ADS)

    Huang, Zhenghua; Zhang, Tianxu; Deng, Lihua; Fang, Hao; Li, Qian

    2015-12-01

    Total variation (TV) regularization has proven a popular and effective model for image restoration because of its edge-preserving ability. However, as TV favors a piece-wise constant solution, the processing results in the flat regions of the image easily produce "staircase effects", and the amplitude of the edges is underestimated; the underlying cause of the problem is that the regularization parameter cannot change with the spatial local information of the image. In this paper, we propose a novel scatter-matrix eigenvalues-based TV (SMETV) regularization with an image blind restoration algorithm for deblurring medical images. The spatial information in different image regions is incorporated into the regularization by using an edge indicator, called the difference eigenvalue, to distinguish edges from flat areas. The proposed algorithm can effectively reduce the noise in flat regions as well as preserve edge and detailed information. Moreover, it becomes more robust to changes of the regularization parameter. Extensive experiments demonstrate that the proposed approach produces results superior to most methods in both visual image quality and quantitative measures.
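
    The global-weight dilemma is easy to reproduce with an off-the-shelf TV denoiser; the sketch below uses scikit-image's Chambolle implementation (not SMETV) on a synthetic step edge: a small weight leaves noise in the flat areas, a large one erodes the edge amplitude.

    ```python
    # One global TV weight cannot serve flat regions and edges at once.
    import numpy as np
    from skimage.restoration import denoise_tv_chambolle

    rng = np.random.default_rng(0)
    step = np.zeros((64, 64))
    step[:, 32:] = 1.0                                  # one sharp edge
    noisy = step + 0.2 * rng.standard_normal(step.shape)

    for weight in (0.05, 0.3):
        out = denoise_tv_chambolle(noisy, weight=weight)
        flat_noise = out[:, :20].std()                  # residual noise
        edge_height = out[:, 40:].mean() - out[:, :20].mean()
        print(f"weight={weight}: flat std={flat_noise:.3f}, "
              f"edge amplitude={edge_height:.3f}")
    ```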

  11. Detection of nuclei in 4D Nomarski DIC microscope images of early Caenorhabditis elegans embryos using local image entropy and object tracking

    PubMed Central

    Hamahashi, Shugo; Onami, Shuichi; Kitano, Hiroaki

    2005-01-01

    Background The ability to detect nuclei in embryos is essential for studying the development of multicellular organisms. A system of automated nuclear detection has already been tested on a set of four-dimensional (4D) Nomarski differential interference contrast (DIC) microscope images of Caenorhabditis elegans embryos. However, the system needed laborious hand-tuning of its parameters every time a new image set was used. It could not detect nuclei in the process of cell division, and could detect nuclei only from the two- to eight-cell stages. Results We developed a system that automates the detection of nuclei in a set of 4D DIC microscope images of C. elegans embryos. Local image entropy is used to produce regions of the images that have the image texture of the nucleus. From these regions, those that actually detect nuclei are manually selected at the first and last time points of the image set, and an object-tracking algorithm then selects regions that detect nuclei in between the first and last time points. The use of local image entropy makes the system applicable to multiple image sets without the need to change its parameter values. The use of an object-tracking algorithm enables the system to detect nuclei in the process of cell division. The system detected nuclei with high sensitivity and specificity from the one- to 24-cell stages. Conclusion A combination of local image entropy and an object-tracking algorithm enabled highly objective and productive detection of nuclei in a set of 4D DIC microscope images of C. elegans embryos. The system will facilitate genomic and computational analyses of C. elegans embryos. PMID:15910690
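
    A minimal sketch of the local-entropy cue, assuming scikit-image's rank filters: textured regions such as nuclei yield high local entropy while smooth background yields low values, so a threshold on the entropy map proposes candidate regions. The image and threshold here are synthetic stand-ins.

    ```python
    # Local image entropy separates textured patches from smooth background.
    import numpy as np
    from skimage.filters.rank import entropy
    from skimage.morphology import disk

    rng = np.random.default_rng(0)
    img = np.full((100, 100), 128, dtype=np.uint8)       # smooth background
    img[30:60, 30:60] = rng.integers(0, 256, (30, 30))   # textured patch

    ent = entropy(img, disk(5))      # entropy over a 5-px-radius neighborhood
    region = ent > 0.5 * ent.max()   # simple threshold on the entropy map
    print("high-entropy pixels:", int(region.sum()))
    ```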

  12. Linear Space-Variant Image Restoration of Photon-Limited Images

    DTIC Science & Technology

    1978-03-01

    levels of performance of the wavefront sensor. The parameter ^ represents the residual rms wavefront error (measurement noise plus fitting error)...known to be optimum only when the signal and noise are uncorrelated stationary random processes and when the noise statistics are gaussian. In the...regime of photon-limited imaging, the noise is non-gaussian and signal-dependent, and it is therefore reasonable to assume that some form of linear

  13. Ultrasonic Shear Wave Elasticity Imaging (SWEI) Sequencing and Data Processing Using a Verasonics Research Scanner

    PubMed Central

    Deng, Yufeng; Rouze, Ned C.; Palmeri, Mark L.; Nightingale, Kathryn R.

    2017-01-01

    Ultrasound elasticity imaging has been developed over the last decade to estimate tissue stiffness. Shear wave elasticity imaging (SWEI) quantifies tissue stiffness by measuring the speed of propagating shear waves following acoustic radiation force excitation. This work presents the sequencing and data processing protocols of SWEI using a Verasonics system. The selection of the sequence parameters in a Verasonics programming script is discussed in detail. The data processing pipeline to calculate group shear wave speed (SWS), including tissue motion estimation, data filtering, and SWS estimation is demonstrated. In addition, the procedures for calibration of beam position, scanner timing, and transducer face heating are provided to avoid SWS measurement bias and transducer damage. PMID:28092508

  14. PAT: From Western solid dosage forms to Chinese materia medica preparations using NIR-CI.

    PubMed

    Zhou, Luwei; Xu, Manfei; Wu, Zhisheng; Shi, Xinyuan; Qiao, Yanjiang

    2016-01-01

    Near-infrared chemical imaging (NIR-CI) is an emerging technology that combines traditional near-infrared spectroscopy with chemical imaging. Therefore, NIR-CI can extract spectral information from pharmaceutical products and simultaneously visualize the spatial distribution of chemical components. The rapid and non-destructive features of NIR-CI make it an attractive process analytical technology (PAT) for identifying and monitoring critical control parameters during the pharmaceutical manufacturing process. This review mainly focuses on the pharmaceutical applications of NIR-CI in each unit operation during the manufacturing processes, from the Western solid dosage forms to the Chinese materia medica preparations. Finally, future applications of chemical imaging in the pharmaceutical industry are discussed. Copyright © 2015 John Wiley & Sons, Ltd.

  15. Quantitative imaging of mammalian transcriptional dynamics: from single cells to whole embryos.

    PubMed

    Zhao, Ziqing W; White, Melanie D; Bissiere, Stephanie; Levi, Valeria; Plachta, Nicolas

    2016-12-23

    Probing dynamic processes occurring within the cell nucleus at the quantitative level has long been a challenge in mammalian biology. Advances in bio-imaging techniques over the past decade have enabled us to directly visualize nuclear processes in situ with unprecedented spatial and temporal resolution and single-molecule sensitivity. Here, using transcription as our primary focus, we survey recent imaging studies that specifically emphasize the quantitative understanding of nuclear dynamics in both time and space. These analyses not only inform on previously hidden physical parameters and mechanistic details, but also reveal a hierarchical organizational landscape for coordinating a wide range of transcriptional processes shared by mammalian systems of varying complexity, from single cells to whole embryos.

  16. High resolution magnetic resonance imaging of the calcaneus: age-related changes in trabecular structure and comparison with dual X-ray absorptiometry measurements

    NASA Technical Reports Server (NTRS)

    Ouyang, X.; Selby, K.; Lang, P.; Engelke, K.; Klifa, C.; Fan, B.; Zucconi, F.; Hottya, G.; Chen, M.; Majumdar, S.

    1997-01-01

    A high-resolution magnetic resonance imaging (MRI) protocol, together with specialized image processing techniques, was applied to the quantitative measurement of age-related changes in calcaneal trabecular structure. The reproducibility of the technique was assessed and the annual rates of change for several trabecular structure parameters were measured. The MR-derived trabecular parameters were compared with calcaneal bone mineral density (BMD), measured by dual X-ray absorptiometry (DXA) in the same subjects. Sagittal MR images were acquired at 1.5 T in 23 healthy women (mean age: 49.3 +/- 16.6 [SD]), using a three-dimensional gradient echo sequence. Image analysis procedures included internal gray-scale calibration, bone and marrow segmentation, and run-length methods. Three trabecular structure parameters, apparent bone volume (ABV/TV), intercept thickness (I.Th), and intercept separation (I.Sp) were calculated from the MR images. The short- and long-term precision errors (mean %CV) of these measured parameters were in the ranges 1-2% and 3-6%, respectively. Linear regression of the trabecular structure parameters vs. age showed significant correlation: ABV/TV (r2 = 33.7%, P < 0.0037), I.Th (r2 = 26.6%, P < 0.0118), I.Sp (r2 = 28.9%, P < 0.0081). These trends with age were also expressed as annual rates of change: ABV/TV (-0.52%/year), I.Th (-0.33%/year), and I.Sp (0.59%/year). Linear regression analysis also showed significant correlation between the MR-derived trabecular structure parameters and calcaneal BMD values. Although a larger group of subjects is needed to better define the age-related changes in trabecular structure parameters and their relation to BMD, these preliminary results demonstrate that high-resolution MRI may potentially be useful for the quantitative assessment of trabecular structure.

  17. CR softcopy display presets based on optimum visualization of specific findings

    NASA Astrophysics Data System (ADS)

    Andriole, Katherine P.; Gould, Robert G.; Webb, W. R.

    1999-07-01

    The purpose of this research is to assess the utility of presets for computed radiography (CR) softcopy display based not on window/level settings but on image processing optimized for the visualization of specific findings, pathologies, etc. Clinical chest images are acquired using an Agfa ADC 70 CR scanner and transferred over the PACS network to an image processing station capable of multiscale contrast equalization. The optimal image processing settings per finding are developed in conjunction with a thoracic radiologist by manipulating the parameters of the multiscale image contrast amplification algorithm. Softcopy display of images processed with finding-specific settings is compared with the standard default presentation for fifty cases of each category. The comparison is scored on a five-point scale, with plus one and two denoting that the standard presentation is preferred over the finding-specific preset, minus one and two denoting that the finding-specific preset is preferred over the standard presentation, and zero denoting no difference. Presets have been developed for pneumothorax, and clinical cases are currently being collected in preparation for formal clinical trials. Subjective assessments indicate a preference for the optimized-preset presentation over the standard default, particularly among inexperienced radiology residents and referring clinicians.

  18. Automatic Registration of GF4 Pms: a High Resolution Multi-Spectral Sensor on Board a Satellite on Geostationary Orbit

    NASA Astrophysics Data System (ADS)

    Gao, M.; Li, J.

    2018-04-01

    Geometric correction is an important preprocessing step in the application of GF4 PMS imagery. Geometric correction based on manual selection of geometric control points is time-consuming and laborious; the more common method, based on a reference image, is automatic image registration. This method involves several steps and parameters, and for the multi-spectral sensor GF4 PMS it is necessary to identify their best combination. This study mainly focuses on the following issues: the necessity of Rational Polynomial Coefficients (RPC) correction before automatic registration, the choice of base band in the automatic registration, and the configuration of GF4 PMS spatial resolution.

  19. Deficiencies of the cryptography based on multiple-parameter fractional Fourier transform.

    PubMed

    Ran, Qiwen; Zhang, Haiying; Zhang, Jin; Tan, Liying; Ma, Jing

    2009-06-01

    Methods of image encryption based on the fractional Fourier transform have an incipient security flaw. We show that, for several reasons, these schemes have the deficiency that one group of encryption keys admits many groups of keys that decrypt the encrypted image correctly. In some schemes many factors cause the deficiencies, as in the encryption scheme based on the multiple-parameter fractional Fourier transform [Opt. Lett. 33, 581 (2008)]. A modified method is proposed to avoid all the deficiencies. Security and reliability are greatly improved without increasing the complexity of the encryption process. (c) 2009 Optical Society of America.

  20. Basic research and data analysis for the earth and ocean physics applications program and for the National Geodetic Satellite program

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Data acquisition using single-image and seven-image data processing is used to provide a precise and accurate geometric description of the earth's surface. Transformation parameters and network distortions are determined. Sea slope along the continental boundaries of the U.S. and earth rotation are examined, along with a close-grid geodynamic satellite system. Data are derived for a mathematical description of the earth's gravitational field; time variations are determined for the geometry of the ocean surface, the solid earth, the gravity field, and other geophysical parameters.

  1. In-situ quality monitoring during laser brazing

    NASA Astrophysics Data System (ADS)

    Ungers, Michael; Fecker, Daniel; Frank, Sascha; Donst, Dmitri; Märgner, Volker; Abels, Peter; Kaierle, Stefan

    Laser brazing of zinc-coated steel is a widely established manufacturing process in the automotive sector, where high quality requirements must be fulfilled. The strength, impermeability and surface appearance of the joint are particularly important for judging its quality, and an on-line quality control system is highly desired by industry. This paper presents recent work on the development of such a system, which consists of two cameras operating in different spectral ranges. For the evaluation of the system, seam imperfections were created artificially during experiments. Finally, image processing algorithms for monitoring process parameters based on the captured images are presented.

  2. Readout models for BaFBr0.85I0.15:Eu image plates

    NASA Astrophysics Data System (ADS)

    Stoeckl, M.; Solodov, A. A.

    2018-06-01

    The linearity of the photostimulated luminescence process makes repeated image-plate scanning a viable technique for extracting a wider dynamic range. To obtain a response estimate, two semi-empirical models for the readout fading of an image plate are introduced; they relate the depth distribution of activated photostimulated luminescence centers within an image plate to the recorded signal. Model parameters are estimated from image-plate scan series with BAS-MS image plates and the Typhoon FLA 7000 scanner for the hard x-ray image-plate diagnostic, over a collection of experiments providing x-ray energy spectra whose approximate shape is a double exponential.

  3. 3D Human cartilage surface characterization by optical coherence tomography.

    PubMed

    Brill, Nicolai; Riedel, Jörn; Schmitt, Robert; Tingart, Markus; Truhn, Daniel; Pufe, Thomas; Jahr, Holger; Nebelung, Sven

    2015-10-07

    Early diagnosis and treatment of cartilage degeneration is of high clinical interest. Loss of surface integrity is considered one of the earliest and most reliable signs of degeneration, but cannot currently be evaluated objectively. Optical Coherence Tomography (OCT) is an arthroscopically available light-based non-destructive real-time imaging technology that allows imaging at micrometre resolutions to millimetre depths. As OCT-based surface evaluation standards remain to be defined, the present study investigated the diagnostic potential of 3D surface profile parameters in the comprehensive evaluation of cartilage degeneration. To this end, 45 cartilage samples of different degenerative grades were obtained from total knee replacements (2 males, 10 females; mean age 63.8 years), cut to standard size and imaged using a spectral-domain OCT device (Thorlabs, Germany). 3D OCT datasets of 8  ×  8, 4  ×  4 and 1  ×  1 mm (width  ×  length) were obtained and pre-processed (image adjustments, morphological filtering). Subsequent automated surface identification algorithms were used to obtain the 3D primary profiles, which were then filtered and processed using established algorithms employing ISO standards. The 3D surface profile thus obtained was used to calculate a set of 21 3D surface profile parameters, i.e. height (e.g. Sa), functional (e.g. Sk), hybrid (e.g. Sdq) and segmentation-related parameters (e.g. Spd). Samples underwent reference histological assessment according to the Degenerative Joint Disease classification. Statistical analyses included calculation of Spearman's rho and assessment of inter-group differences using the Kruskal Wallis test. Overall, the majority of 3D surface profile parameters revealed significant degeneration-dependent differences and correlations with the exception of severe end-stage degeneration and were of distinct diagnostic value in the assessment of surface integrity. None of the 3D surface profile parameters investigated were capable of reliably differentiating healthy from early-degenerative cartilage, while scan area sizes considerably affected parameter values. In conclusion, cartilage surface integrity may be adequately assessed by 3D surface profile parameters, which should be used in combination for the comprehensive and thorough evaluation and overall improved diagnostic performance. OCT- and image-based surface assessment could become a valuable adjunct tool to standard arthroscopy.

  4. 3D Human cartilage surface characterization by optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Brill, Nicolai; Riedel, Jörn; Schmitt, Robert; Tingart, Markus; Truhn, Daniel; Pufe, Thomas; Jahr, Holger; Nebelung, Sven

    2015-10-01

    Early diagnosis and treatment of cartilage degeneration is of high clinical interest. Loss of surface integrity is considered one of the earliest and most reliable signs of degeneration, but cannot currently be evaluated objectively. Optical Coherence Tomography (OCT) is an arthroscopically available light-based non-destructive real-time imaging technology that allows imaging at micrometre resolutions to millimetre depths. As OCT-based surface evaluation standards remain to be defined, the present study investigated the diagnostic potential of 3D surface profile parameters in the comprehensive evaluation of cartilage degeneration. To this end, 45 cartilage samples of different degenerative grades were obtained from total knee replacements (2 males, 10 females; mean age 63.8 years), cut to standard size and imaged using a spectral-domain OCT device (Thorlabs, Germany). 3D OCT datasets of 8  ×  8, 4  ×  4 and 1  ×  1 mm (width  ×  length) were obtained and pre-processed (image adjustments, morphological filtering). Subsequent automated surface identification algorithms were used to obtain the 3D primary profiles, which were then filtered and processed using established algorithms employing ISO standards. The 3D surface profile thus obtained was used to calculate a set of 21 3D surface profile parameters, i.e. height (e.g. Sa), functional (e.g. Sk), hybrid (e.g. Sdq) and segmentation-related parameters (e.g. Spd). Samples underwent reference histological assessment according to the Degenerative Joint Disease classification. Statistical analyses included calculation of Spearman’s rho and assessment of inter-group differences using the Kruskal Wallis test. Overall, the majority of 3D surface profile parameters revealed significant degeneration-dependent differences and correlations with the exception of severe end-stage degeneration and were of distinct diagnostic value in the assessment of surface integrity. None of the 3D surface profile parameters investigated were capable of reliably differentiating healthy from early-degenerative cartilage, while scan area sizes considerably affected parameter values. In conclusion, cartilage surface integrity may be adequately assessed by 3D surface profile parameters, which should be used in combination for the comprehensive and thorough evaluation and overall improved diagnostic performance. OCT- and image-based surface assessment could become a valuable adjunct tool to standard arthroscopy.
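
    Two of the height parameters named above are simple to state concretely. The sketch below computes Sa and Sq from a synthetic height map after least-squares plane leveling, analogous to the primary-profile filtering described; it is a generic ISO 25178-style calculation, not the paper's full pipeline.

    ```python
    # Height parameters Sa (arithmetical mean height) and Sq (RMS height)
    # from a leveled height map; synthetic data, not OCT.
    import numpy as np

    rng = np.random.default_rng(0)
    z = 0.01 * rng.standard_normal((256, 256))   # height map, mm

    # Level the surface by subtracting a least-squares plane z = a*x + b*y + c
    yy, xx = np.mgrid[0:z.shape[0], 0:z.shape[1]]
    A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(z.size)])
    coef, *_ = np.linalg.lstsq(A, z.ravel(), rcond=None)
    residual = z - (A @ coef).reshape(z.shape)

    Sa = np.mean(np.abs(residual))
    Sq = np.sqrt(np.mean(residual**2))
    print(f"Sa = {Sa:.4f} mm, Sq = {Sq:.4f} mm")
    ```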

  5. A method to calibrate channel friction and bathymetry parameters of a Sub-Grid hydraulic model using SAR flood images

    NASA Astrophysics Data System (ADS)

    Wood, M.; Neal, J. C.; Hostache, R.; Corato, G.; Chini, M.; Giustarini, L.; Matgen, P.; Wagener, T.; Bates, P. D.

    2015-12-01

    Synthetic Aperture Radar (SAR) satellites are capable of all-weather day and night observations that can discriminate between land and smooth open water surfaces over large scales. Because of this there has been much interest in the use of SAR satellite data to improve our understanding of water processes, in particular for fluvial flood inundation mechanisms. Past studies prove that integrating SAR derived data with hydraulic models can improve simulations of flooding. However while much of this work focusses on improving model channel roughness values or inflows in ungauged catchments, improvement of model bathymetry is often overlooked. The provision of good bathymetric data is critical to the performance of hydraulic models but there are only a small number of ways to obtain bathymetry information where no direct measurements exist. Spatially distributed river depths are also rarely available. We present a methodology for calibration of model average channel depth and roughness parameters concurrently using SAR images of flood extent and a Sub-Grid model utilising hydraulic geometry concepts. The methodology uses real data from the European Space Agency's archive of ENVISAT[1] Wide Swath Mode images of the River Severn between Worcester and Tewkesbury during flood peaks between 2007 and 2010. Historic ENVISAT WSM images are currently free and easy to access from archive but the methodology can be applied with any available SAR data. The approach makes use of the SAR image processing algorithm of Giustarini[2] et al. (2013) to generate binary flood maps. A unique feature of the calibration methodology is to also use parameter 'identifiability' to locate the parameters with higher accuracy from a pre-assigned range (adopting the DYNIA method proposed by Wagener[3] et al., 2003). [1] https://gpod.eo.esa.int/services/ [2] Giustarini. 2013. 'A Change Detection Approach to Flood Mapping in Urban Areas Using TerraSAR-X'. IEEE Transactions on Geoscience and Remote Sensing, vol. 51, no. 4. [3] Wagener. 2003. 'Towards reduced uncertainty in conceptual rainfall-runoff modelling: Dynamic identifiability analysis'. Hydrol. Process. 17, 455-476.

  6. METEOSAT studies of clouds and radiation budget

    NASA Technical Reports Server (NTRS)

    Saunders, R. W.

    1982-01-01

    Radiation budget studies of the atmosphere/surface system from Meteosat, cloud parameter determination from space, and sea surface temperature measurements from TIROS-N data are all described. This work was carried out on the interactive planetary image processing system (IPIPS), which allows interactive manipulation of the image data in addition to the conventional computational tasks. The current hardware configuration of IPIPS is shown. The I(2)S is the principal interactive display, allowing interaction via a trackball, four buttons under program control, or a touch tablet. Simple image processing operations, such as contrast enhancement, pseudocoloring, histogram equalization, and multispectral combinations, can all be executed at the push of a button.

  7. Fisher information theory for parameter estimation in single molecule microscopy: tutorial

    PubMed Central

    Chao, Jerry; Ward, E. Sally; Ober, Raimund J.

    2016-01-01

    Estimation of a parameter of interest from image data represents a task that is commonly carried out in single molecule microscopy data analysis. The determination of the positional coordinates of a molecule from its image, for example, forms the basis of standard applications such as single molecule tracking and localization-based superresolution image reconstruction. Assuming that the estimator used recovers, on average, the true value of the parameter, its accuracy, or standard deviation, is then at best equal to the square root of the Cramér-Rao lower bound. The Cramér-Rao lower bound can therefore be used as a benchmark in the evaluation of the accuracy of an estimator. Additionally, as its value can be computed and assessed for different experimental settings, it is useful as an experimental design tool. This tutorial demonstrates a mathematical framework that has been specifically developed to calculate the Cramér-Rao lower bound for estimation problems in single molecule microscopy and, more broadly, fluorescence microscopy. The material includes a presentation of the photon detection process that underlies all image data, various image data models that describe images acquired with different detector types, and Fisher information expressions that are necessary for the calculation of the lower bound. Throughout the tutorial, examples involving concrete estimation problems are used to illustrate the effects of various factors on the accuracy of parameter estimation, and more generally, to demonstrate the flexibility of the mathematical framework. PMID:27409706
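
    As a concrete instance of the framework, the sketch below evaluates the Fisher information for the position of a one-dimensional Gaussian spot imaged onto pixels with Poisson noise, for which I(x0) = Σ_k (∂μ_k/∂x0)²/μ_k, and compares the resulting lower bound with the familiar s/√N localization limit for an ideal Gaussian PSF. The PSF width and photon count are assumed values.

    ```python
    # Cramer-Rao lower bound for localizing a 1D Gaussian spot from Poisson
    # pixel counts: I = sum_k (d mu_k / d x0)^2 / mu_k, CRLB = 1 / I.
    import numpy as np

    psf_sigma = 1.0          # PSF width in pixels (assumed)
    n_photons = 1000.0       # expected detected photons (assumed)
    pixels = np.arange(-10, 11, dtype=float)

    def mu(x0):
        # Expected photon count per pixel for a spot centered at x0
        p = np.exp(-0.5 * ((pixels - x0) / psf_sigma) ** 2)
        return n_photons * p / p.sum()

    # Numerical derivative of the pixel means with respect to the position
    h = 1e-4
    dmu = (mu(h) - mu(-h)) / (2 * h)
    fisher = np.sum(dmu**2 / mu(0.0))
    print(f"CRLB std: {np.sqrt(1.0 / fisher):.4f} px")
    print(f"ideal s/sqrt(N): {psf_sigma / np.sqrt(n_photons):.4f} px")
    ```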

  8. Optimization of digital image processing to determine quantum dots' height and density from atomic force microscopy.

    PubMed

    Ruiz, J E; Paciornik, S; Pinto, L D; Ptak, F; Pires, M P; Souza, P L

    2018-01-01

    An optimized method of digital image processing to interpret quantum dots' height measurements obtained by atomic force microscopy is presented. The method was developed by combining well-known digital image processing techniques and particle recognition algorithms. The properties of quantum dot structures strongly depend on, among other features, the dots' height, and its determination is sensitive to small variations in the digital image processing parameters, which can generate misleading results. Comparing the results obtained with two image processing techniques - a conventional method and the new method proposed herein - against data obtained by determining the height of quantum dots one by one within a fixed area showed that the optimized method leads to more accurate results. Moreover, the log-normal distribution, which is often used to represent natural processes, shows a better fit to the quantum dots' height histogram obtained with the proposed method. Finally, the quantum dots' heights obtained were used to calculate the predicted photoluminescence peak energies, which were compared with the experimental data. Again, a better match was observed when using the proposed method to evaluate the quantum dots' height. Copyright © 2017 Elsevier B.V. All rights reserved.
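
    The log-normal check mentioned above is straightforward with SciPy; the sketch below fits a two-parameter log-normal to synthetic dot heights (location fixed at zero) and runs a Kolmogorov-Smirnov test against the fitted distribution. The sample values are invented.

    ```python
    # Fit a log-normal to quantum-dot heights and test the goodness of fit.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    heights_nm = rng.lognormal(mean=np.log(6.0), sigma=0.25, size=500)

    # Fix the location at zero so the fit is a pure two-parameter log-normal
    shape, loc, scale = stats.lognorm.fit(heights_nm, floc=0)
    print(f"median height ~ {scale:.2f} nm, sigma(log) ~ {shape:.3f}")

    # Kolmogorov-Smirnov test against the fitted distribution
    ks = stats.kstest(heights_nm, 'lognorm', args=(shape, loc, scale))
    print(f"KS statistic = {ks.statistic:.3f}, p = {ks.pvalue:.3f}")
    ```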

  9. A chaotic cryptosystem for images based on Henon and Arnold cat map.

    PubMed

    Soleymani, Ali; Nordin, Md Jan; Sundararajan, Elankovan

    2014-01-01

    The rapid evolution of imaging and communication technologies has transformed images into a widespread data type. Different types of data, such as personal medical information, official correspondence, or governmental and military documents, are saved and transmitted in the form of images over public networks. Hence, a fast and secure cryptosystem is needed for high-resolution images. In this paper, a novel encryption scheme is presented for securing images based on Arnold cat and Henon chaotic maps. The scheme uses Arnold cat map for bit- and pixel-level permutations on plain and secret images, while Henon map creates secret images and specific parameters for the permutations. Both the encryption and decryption processes are explained, formulated, and graphically presented. The results of security analysis of five different images demonstrate the strength of the proposed cryptosystem against statistical, brute force and differential attacks. The evaluated running time for both encryption and decryption processes guarantee that the cryptosystem can work effectively in real-time applications.

  10. Functional Imaging Biomarkers: Potential to Guide an Individualised Approach to Radiotherapy.

    PubMed

    Prestwich, R J D; Vaidyanathan, S; Scarsbrook, A F

    2015-10-01

    The identification of robust prognostic and predictive biomarkers would transform the ability to implement an individualised approach to radiotherapy. In this regard, there has been a surge of interest in the use of functional imaging to assess key underlying biological processes within tumours and their response to therapy. Importantly, functional imaging biomarkers hold the potential to evaluate tumour heterogeneity/biology both spatially and temporally. An ever-increasing range of functional imaging techniques is now available primarily involving positron emission tomography and magnetic resonance imaging. Small-scale studies across multiple tumour types have consistently been able to correlate changes in functional imaging parameters during radiotherapy with disease outcomes. Considerable challenges remain before the implementation of functional imaging biomarkers into routine clinical practice, including the inherent temporal variability of biological processes within tumours, reproducibility of imaging, determination of optimal imaging technique/combinations, timing during treatment and design of appropriate validation studies. Copyright © 2015 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  11. A Chaotic Cryptosystem for Images Based on Henon and Arnold Cat Map

    PubMed Central

    Sundararajan, Elankovan

    2014-01-01

    The rapid evolution of imaging and communication technologies has transformed images into a widespread data type. Different types of data, such as personal medical information, official correspondence, or governmental and military documents, are saved and transmitted in the form of images over public networks. Hence, a fast and secure cryptosystem is needed for high-resolution images. In this paper, a novel encryption scheme is presented for securing images based on Arnold cat and Henon chaotic maps. The scheme uses Arnold cat map for bit- and pixel-level permutations on plain and secret images, while Henon map creates secret images and specific parameters for the permutations. Both the encryption and decryption processes are explained, formulated, and graphically presented. The results of security analysis of five different images demonstrate the strength of the proposed cryptosystem against statistical, brute force and differential attacks. The evaluated running time for both encryption and decryption processes guarantee that the cryptosystem can work effectively in real-time applications. PMID:25258724
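
    To make the permutation stage concrete, the sketch below applies an Arnold-cat-style coordinate transform, (x, y) → ((x + y) mod N, (x + 2y) mod N), as an index lookup on a square image; the map is invertible and periodic, which is what makes decryption possible. This is a generic illustration, not the paper's full bit- and pixel-level scheme with Henon-generated parameters.

    ```python
    # Arnold-cat-style pixel permutation on an N x N image.
    import numpy as np

    def arnold_cat(img, iterations=1):
        # Each pass reads the input at the cat-mapped coordinates
        # ((x + y) mod N, (x + 2y) mod N) -- an invertible permutation.
        n = img.shape[0]
        assert img.shape[0] == img.shape[1], "cat map needs a square image"
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        out = img
        for _ in range(iterations):
            out = out[(x + y) % n, (x + 2 * y) % n]
        return out

    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
    scrambled = arnold_cat(img, iterations=5)

    # A permutation rearranges pixels without changing their values
    print(np.array_equal(np.sort(img, axis=None),
                         np.sort(scrambled, axis=None)))
    ```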

  12. Formation of the image on the receiver of thermal radiation

    NASA Astrophysics Data System (ADS)

    Akimenko, Tatiana A.

    2018-04-01

    Forming the thermal picture of an observed scene, with verification of the quality of the resulting thermal images, is one of the important stages of the technological process that determine the quality of a thermal imaging observation system. In this article we propose a model for the formation of the thermal picture of a scene which takes into account the features of the object of observation as the source of the signal, and the transmission of the signal through the physical elements of the thermal imaging system, which process it at the optical, photoelectronic and electronic stages and so determine the final parameters of the signal and its compliance with the requirements for thermal information and measurement systems.

  13. Some Experience Using SEN2COR

    NASA Astrophysics Data System (ADS)

    Pflug, Bringfried; Bieniarz, Jakub; Debaecker, Vincent; Louis, Jérôme; Müller-Wilms, Uwe

    2016-04-01

    ESA has developed and launched the Sentinel-2A optical imaging mission that delivers optical data products designed to feed downstream services mainly related to land monitoring, emergency management and security. Many of these applications require accurate correction of satellite images for atmospheric effects to ensure the highest quality of scientific exploitation of Sentinel-2 data. Therefore the atmospheric correction processor Sen2Cor was developed by TPZ V on behalf of ESA. TPZ F and DLR have teamed up in order to provide the calibration and validation of the Level-2A processor Sen2Cor. Level-2A processing is applied to Top-Of-Atmosphere (TOA) Level-1C ortho-image reflectance products. Level-2A main output is the Bottom-Of-Atmosphere (BOA) corrected reflectance product. Additional outputs are an Aerosol Optical Thickness (AOT) map, a Water Vapour (WV) map and a Scene Classification (SC) map with Quality Indicators for cloud and snow probabilities. The poster will present some processing examples of Sen2Cor applied to Sentinel-2A data together with first performance investigations. Different situations will be covered like processing with and without DEM (Digital Elevation Model). Sen2Cor processing is controlled by several configuration parameters. Some examples will be presented demonstrating the influence of different settings of some parameters.

  14. A theory of fine structure image models with an application to detection and classification of dementia

    PubMed Central

    Penn, Richard; Werner, Michael; Thomas, Justin

    2015-01-01

    Background Estimation of stochastic process models from data is a common application of time series analysis methods. Such system identification processes are often cast as hypothesis testing exercises whose intent is to estimate model parameters and test them for statistical significance. Ordinary least squares (OLS) regression and the Levenberg-Marquardt algorithm (LMA) have proven invaluable computational tools for models being described by non-homogeneous, linear, stationary, ordinary differential equations. Methods In this paper we extend stochastic model identification to linear, stationary, partial differential equations in two independent variables (2D) and show that OLS and LMA apply equally well to these systems. The method employs an original nonparametric statistic as a test for the significance of estimated parameters. Results We show gray scale and color images are special cases of 2D systems satisfying a particular autoregressive partial difference equation which estimates an analogous partial differential equation. Several applications to medical image modeling and classification illustrate the method by correctly classifying demented and normal OLS models of axial magnetic resonance brain scans according to subject Mini Mental State Exam (MMSE) scores. Comparison with 13 image classifiers from the literature indicates our classifier is at least 14 times faster than any of them and has a classification accuracy better than all but one. Conclusions Our modeling method applies to any linear, stationary, partial differential equation and the method is readily extended to 3D whole-organ systems. Further, in addition to being a robust image classifier, estimated image models offer insights into which parameters carry the most diagnostic image information and thereby suggest finer divisions could be made within a class. Image models can be estimated in milliseconds which translate to whole-organ models in seconds; such runtimes could make real-time medicine and surgery modeling possible. PMID:26029638
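
    The claim that an image satisfies an autoregressive partial difference equation can be illustrated with ordinary least squares in a few lines; the sketch below fits a three-neighbor causal stencil to a synthetic smooth field. The stencil and data are illustrative, not the paper's model.

    ```python
    # OLS fit of a causal 2D autoregressive model:
    # x[i,j] ~ a*x[i-1,j] + b*x[i,j-1] + c*x[i-1,j-1] + noise
    import numpy as np

    rng = np.random.default_rng(0)
    img = rng.standard_normal((64, 64)).cumsum(axis=0).cumsum(axis=1)

    target = img[1:, 1:].ravel()
    design = np.column_stack([
        img[:-1, 1:].ravel(),   # up neighbor
        img[1:, :-1].ravel(),   # left neighbor
        img[:-1, :-1].ravel(),  # diagonal neighbor
    ])
    coef, res, *_ = np.linalg.lstsq(design, target, rcond=None)
    print("AR coefficients (a, b, c):", np.round(coef, 3))
    ```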

  15. An algorithm for automatic parameter adjustment for brain extraction in BrainSuite

    NASA Astrophysics Data System (ADS)

    Rajagopal, Gautham; Joshi, Anand A.; Leahy, Richard M.

    2017-02-01

    Brain extraction (classification of brain and non-brain tissue) in MRI brain images is a crucial pre-processing step for imaging-based anatomical studies of the human brain. Several automated methods and software tools are available for performing this task, but differences in MR image parameters (pulse sequence, resolution) and instrument- and subject-dependent noise and artefacts affect the performance of these automated methods. We describe and evaluate a method that automatically adapts the default parameters of the Brain Surface Extraction (BSE) algorithm to optimize a cost function chosen to reflect accurate brain extraction. BSE uses a combination of anisotropic filtering, Marr-Hildreth edge detection, and binary morphology for brain extraction. Our algorithm automatically adapts four parameters associated with these steps to maximize the brain surface area to volume ratio. We evaluate the method on a total of 109 brain volumes with ground truth brain masks generated by an expert user. A quantitative evaluation of the performance of the proposed algorithm showed an improvement in the mean (s.d.) Dice coefficient from 0.8969 (0.0376) for default parameters to 0.9509 (0.0504) for the optimized case. These results indicate that automatic parameter optimization can result in significant improvements in the definition of the brain mask.
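
    The adaptation loop itself is simple to sketch. Below, a toy "extraction" (smooth, threshold, erode) stands in for the real BSE pipeline, and a small grid search keeps the parameter setting that maximizes the surface-area-to-volume ratio of the mask, mirroring the cost function described; none of the parameter values come from BrainSuite.

    ```python
    # Hedged sketch: grid-search parameters of a stand-in extraction and
    # score each candidate mask by surface-area-to-volume ratio.
    import itertools
    import numpy as np
    from scipy import ndimage as ndi

    def toy_extraction(volume, sigma, threshold, erosions):
        # Stand-in for BSE's filter / edge-detect / morphology stages
        mask = ndi.gaussian_filter(volume, sigma) > threshold
        for _ in range(erosions):
            mask = ndi.binary_erosion(mask)
        return mask

    def surface_to_volume(mask):
        # Voxel faces exposed to background, per unit of mask volume
        faces = sum(np.abs(np.diff(mask.astype(np.int8), axis=a)).sum()
                    for a in range(mask.ndim))
        return faces / max(int(mask.sum()), 1)

    rng = np.random.default_rng(0)
    volume = ndi.gaussian_filter(rng.random((40, 40, 40)), 3)  # toy "MRI"

    best = max(itertools.product((0.5, 1.0), (0.45, 0.5), (0, 1)),
               key=lambda p: surface_to_volume(toy_extraction(volume, *p)))
    print("selected (sigma, threshold, erosions):", best)
    ```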

  16. SU-E-J-42: Motion Adaptive Image Filter for Low Dose X-Ray Fluoroscopy in the Real-Time Tumor-Tracking Radiotherapy System.

    PubMed

    Miyamoto, N; Ishikawa, M; Sutherland, K; Suzuki, R; Matsuura, T; Takao, S; Toramatsu, C; Nihongi, H; Shimizu, S; Onimaru, R; Umegaki, K; Shirato, H

    2012-06-01

    In the real-time tumor-tracking radiotherapy system, fiducial markers are detected by X-ray fluoroscopy. The fluoroscopic parameters should be set as low as possible in order to reduce unnecessary imaging dose; however, the fiducial markers may then not be recognized because of statistical noise in low dose imaging. Image processing is envisioned as a solution to improve image quality and maintain tracking accuracy. In this study, a recursive image filter adapted to target motion is proposed. A fluoroscopy system was used for the experiment, with a spherical gold marker as the fiducial marker. About 450 fluoroscopic images of the marker were recorded, and the images were shifted sequentially to mimic respiratory motion of the marker. The tube voltage, current and exposure duration were fixed at 65 kV, 50 mA and 2.5 msec as the low dose imaging condition; the tube current for high dose imaging was 100 mA. A pattern recognition score (PRS) ranging from 0 to 100 and the image registration error were investigated by performing template pattern matching on each sequential image, and the results with and without image processing were compared. In low dose imaging, the image registration error and the PRS without image processing were 2.15±1.21 pixels and 46.67±6.40, respectively; with image processing they were 1.48±0.82 pixels and 67.80±4.51, respectively. There was no significant difference in the image registration error or the PRS between low dose imaging with image processing and high dose imaging without image processing. The results showed that the recursive filter is effective for maintaining marker tracking stability and accuracy in low dose fluoroscopy. © 2012 American Association of Physicists in Medicine.
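
    A sketch of a motion-adaptive recursive (temporal IIR) filter of this general kind is shown below: it averages heavily where the scene is static to suppress quantum noise, and follows the new frame where motion is detected so the marker is not smeared. The blending rule and thresholds are illustrative, not the authors' exact filter.

    ```python
    # Motion-adaptive recursive temporal filter for a noisy frame sequence.
    import numpy as np

    def adaptive_recursive_filter(frames, alpha_static=0.2, motion_thresh=12.0):
        acc = frames[0].astype(np.float64)
        out = [acc.copy()]
        for frame in frames[1:]:
            frame = frame.astype(np.float64)
            moving = np.abs(frame - acc) > motion_thresh   # per-pixel motion
            alpha = np.where(moving, 1.0, alpha_static)    # trust new frame
            acc = alpha * frame + (1.0 - alpha) * acc      # recursive blend
            out.append(acc.copy())
        return out

    rng = np.random.default_rng(0)
    frames = [np.full((64, 64), 100.0) + rng.normal(0, 8, (64, 64))
              for _ in range(20)]
    filtered = adaptive_recursive_filter(frames)
    print("noise std before:", frames[-1].std().round(2),
          "after:", filtered[-1].std().round(2))
    ```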

  17. Image Motion Detection And Estimation: The Modified Spatio-Temporal Gradient Scheme

    NASA Astrophysics Data System (ADS)

    Hsin, Cheng-Ho; Inigo, Rafael M.

    1990-03-01

    The detection and estimation of motion are generally involved in computing a velocity field of time-varying images. A completely new modified spatio-temporal gradient scheme to determine motion is proposed, derived by using gradient methods and properties of biological vision. A set of general constraints is proposed to derive motion constraint equations; the constraints are that the second directional derivatives of image intensity at an edge point in the smoothed image will be constant at times t and t+L. This scheme basically has two stages: spatio-temporal filtering, and velocity estimation. Initially, image sequences are processed by a set of oriented spatio-temporal filters which are designed using a Gaussian derivative model. The velocity is then estimated for these filtered image sequences based on the gradient approach. From a computational standpoint, this scheme offers at least three advantages over current methods. The greatest advantage of the modified spatio-temporal gradient scheme over the traditional ones is that an infinite number of motion constraint equations are derived instead of only one; it therefore solves the aperture problem without requiring any additional assumptions and is simply a local process. The second advantage is that, because of the spatio-temporal filtering, the direct computation of image gradients (discrete derivatives) is avoided, so the error in gradient measurement is reduced significantly. The third advantage is that, during the processing of the motion detection and estimation algorithm, image features (edges) are produced concurrently with motion information. The reliable range of detected velocity is determined by the parameters of the oriented spatio-temporal filters. Knowing the velocity sensitivity of a single motion detection channel, a multiple-channel mechanism for estimating image velocity, seldom addressed by other motion schemes in machine vision, can be constructed by appropriately choosing and combining different sets of parameters. By applying this mechanism, a great range of velocities can be detected. The scheme has been tested on both synthetic and real images, and the results of the simulations are very satisfactory.

  18. Evaluation of sequential images for photogrammetric point determination

    NASA Astrophysics Data System (ADS)

    Kowalczyk, M.

    2011-12-01

    Close range photogrammetry encounters many problems with the reconstruction of the three-dimensional shape of objects. The relative orientation parameters of the photos usually play the key role in solving this problem. Automation of the process is difficult because of the complexity of the recorded scene and the configuration of camera positions, which usually makes automatic joining of the photos into one set impossible. The application of a camcorder is widely proposed in the literature to support the creation of 3D models. The main advantages of this tool are the large number of recorded images and camera positions; the exterior orientation barely changes between two neighboring frames. These features of a film sequence make it possible to create models with basic algorithms that work faster and more robustly than with separately taken photos. The first part of this paper presents the results of experiments determining the interior orientation parameters of several sets of frames presenting a three-dimensional test field. This section describes the calibration repeatability of film frames taken with a camcorder, which matters for the stability of the camera's interior geometric parameters. A parametric model of systematic errors was applied for correcting the images. Afterwards, a short film of the same test field was taken for the determination of a group of check points, to verify the camera's applicability to measurement tasks. Finally, some results of experiments comparing the determination of recorded object points in 3D space are presented. In common digital photogrammetry, where separate photos are used, the first levels of the image pyramids are connected with feature-based matching. This complicated process creates many contingencies that can produce false detections of image similarities. In the case of a digital film camera, authors avoid this dangerous step and go straight to area-based matching, exploiting the high degree of similarity of two corresponding film frames. A first approximation for establishing connections between photos comes from the whole-image distance. This image distance method can work with more than just the two dimensions of the translation vector; scale and angles are also used to improve image matching. This operation creates more similar-looking frames in which corresponding characteristic points lie close to each other. The procedure searching for pairs of points then works faster and more accurately, because the analyzed areas can be reduced. Another proposed solution uses an image created by adding the differences between particular frames; it gives rougher results but works much faster than standard matching.

  19. Photofragment image analysis using the Onion-Peeling Algorithm

    NASA Astrophysics Data System (ADS)

    Manzhos, Sergei; Loock, Hans-Peter

    2003-07-01

    With the growing popularity of the velocity map imaging technique, a need for the analysis of photoion and photoelectron images arose. Here, a computer program is presented that allows for the analysis of cylindrically symmetric images. It permits the inversion of the projection of the 3D charged particle distribution using the onion-peeling algorithm. Further analysis includes the determination of radial and angular distributions, from which velocity distributions and spatial anisotropy parameters are obtained. Identification and quantification of the different photolysis channels is therefore straightforward. In addition, the program features geometry correction, centering, and multi-Gaussian fitting routines, as well as a user-friendly graphical interface and the possibility of generating synthetic images using either the fitted or user-defined parameters. Program summary: Title of program: Glass Onion. Catalogue identifier: ADRY. Program Summary URL: http://cpc.cs.qub.ac.uk/summaries/ADRY. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Licensing provisions: none. Computer: IBM PC. Operating system under which the program has been tested: Windows 98, Windows 2000, Windows NT. Programming language used: Delphi 4.0. Memory required to execute with typical data: 18 Mwords. No. of bits in a word: 32. No. of bytes in distributed program, including test data, etc.: 9 911 434. Distribution format: zip file. Keywords: photofragment image, onion peeling, anisotropy parameters. Nature of physical problem: information about velocity and angular distributions of photofragments is the basis on which the analysis of the photolysis process rests. Reconstructing the three-dimensional distribution from the photofragment image is the first step, with further processing involving angular and radial integration of the inverted image to obtain velocity and angular distributions. Provisions have to be made to correct for slight distortions of the image and to verify the accuracy of the analysis process. Method of solution: the "onion peeling" algorithm described by Helm [Rev. Sci. Instrum. 67 (6) (1996)] is used to perform the image reconstruction. Angular integration with a subsequent multi-Gaussian fit supplies information about the velocity distribution of the photofragments, whereas radial integration with subsequent expansion of the angular distributions over Legendre polynomials gives the spatial anisotropy parameters. Fitting algorithms have been developed to centre the image and to correct for image distortion. Restrictions on the complexity of the problem: the maximum image size (1280×1280) and resolution (16 bit) are restricted by available memory and can be changed in the source code. Initial centre coordinates within 5 pixels may be required for the correction and centering algorithms to converge. Peaks on the velocity profile separated by less than the peak width may not be deconvolved. In the charged particle image reconstruction, it is assumed that the kinetic energy released in the dissociation process is small compared to the energy acquired in the electric field. For the fitting parameters to be physically meaningful, cylindrical symmetry of the image has to be assumed, but the actual inversion algorithm is stable to distortions of such symmetry in experimental images. Typical running time: the analysis procedure can be divided into three parts: inversion, fitting, and geometry correction. The inversion time grows approximately as R³, where R is the radius of the region of interest: for R = 200 pixels it is less than a minute, for R = 400 pixels less than 6 min on a 400 MHz IBM personal computer. The time for the velocity fitting procedure to converge depends strongly on the number of peaks in the velocity profile and the convergence criterion. It ranges between less than a second for simple curves and a few minutes for profiles with up to twenty peaks. The time taken for the image correction scales as R² and depends on the curve profile. It is on the order of a few minutes for images with R = 500 pixels. Unusual features of the program: our centering and image correction algorithm is based on Fourier analysis of the radial distribution to ensure the sharpest velocity profile and is insensitive to an uneven intensity distribution. An angular averaging option exists to stabilize the inversion algorithm without losing resolution.
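
    The Glass Onion program itself is Delphi, but the peeling step can be sketched compactly. The Python illustration below inverts one half-row of a cylindrically symmetric projection from the outermost ring inward, under the assumption of unit-width rings; this is an illustrative discretization, not the published code.

    ```python
    import numpy as np

    def onion_peel(projection):
        """Invert an Abel-type projection of a cylindrically symmetric
        distribution by peeling rings from the outside in.

        projection: 1D array p[i], i = 0..N-1, sampled at x = i pixels
        (one half of a symmetrized image row). Returns s[j], the source
        density in the ring j <= r < j + 1."""
        n = len(projection)
        r = np.arange(n + 1, dtype=float)   # ring boundaries
        x = np.arange(n, dtype=float)       # column positions
        # w[i, j] = chord length of column i through ring j (j >= i)
        w = np.zeros((n, n))
        for i in range(n):
            for j in range(i, n):
                outer = np.sqrt(max(r[j + 1] ** 2 - x[i] ** 2, 0.0))
                inner = np.sqrt(max(r[j] ** 2 - x[i] ** 2, 0.0))
                w[i, j] = 2.0 * (outer - inner)
        s = np.zeros(n)
        # Back-substitution: the outermost column sees only the outermost ring
        for j in range(n - 1, -1, -1):
            residual = projection[j] - w[j, j + 1:] @ s[j + 1:]
            s[j] = residual / w[j, j] if w[j, j] > 0 else 0.0
        return s
    ```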

  20. Chemometrics.

    ERIC Educational Resources Information Center

    Delaney, Michael F.

    1984-01-01

    This literature review on chemometrics (covering December 1981 to December 1983) is organized under these headings: personal supermicrocomputers; education and books; statistics; modeling and parameter estimation; resolution; calibration; signal processing; image analysis; factor analysis; pattern recognition; optimization; artificial…

  1. Single-chip microcomputer for image processing in the photonic measuring system

    NASA Astrophysics Data System (ADS)

    Smoleva, Olga S.; Ljul, Natalia Y.

    2002-04-01

    A non-contact measuring system has been designed for rail-track parameter control on the Moscow Metro. It detects several significant parameters: rail-track width, rail-track height, gage, rail-slums, crosslevel, pickets, and car speed. The system consists of three subsystems: a non-contact system for rail-track width, height, and gage inspection; a non-contact system for rail-slums inspection; and a subsystem for crosslevel, speed, and picket detection. Data from the subsystems are transferred to a pre-processing unit. To process the data received from the subsystems, the single-chip signal processor ADSP-2185 is used, since it provides the required processing speed. After the data are processed, they are sent to a PC, which processes them further and outputs them in readable form.

  2. Experimental application of simulation tools for evaluating UAV video change detection

    NASA Astrophysics Data System (ADS)

    Saur, Günter; Bartelsen, Jan

    2015-10-01

    Change detection is one of the most important tasks when unmanned aerial vehicles (UAV) are used for video reconnaissance and surveillance. In this paper, we address changes on a short time scale, i.e. observations taken within time distances of a few hours. Each observation is a short video sequence corresponding to the near-nadir overflight of the UAV above the area of interest, and the relevant changes are e.g. recently added or removed objects. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples of non-relevant changes are changeable objects like trees and compression or transmission artifacts. To enable the usage of automatic change detection within an interactive workflow of a UAV video exploitation system, an evaluation and assessment procedure has to be performed. Large video data sets which contain many relevant objects with varying scene background and varying influence parameters (e.g. image quality, sensor and flight parameters), including image metadata and ground truth data, are necessary for a comprehensive evaluation. Since the acquisition of real video data is limited by cost and time constraints, from our point of view the generation of synthetic data by simulation tools has to be considered. In this paper, the processing chain of Saur et al. (2014) [1] and the interactive workflow for video change detection are described. We have selected the commercial simulation environment Virtual Battle Space 3 (VBS3) to generate synthetic data. For an experimental setup, an example scenario "road monitoring" has been defined, and several video clips have been produced with varying flight and sensor parameters and varying objects in the scene. Image registration and change mask extraction, both components of the processing chain, are applied to corresponding frames of different video clips. For the selected examples, the images could be registered, the modelled changes could be extracted, and the artifacts of the image rendering considered as noise (slight differences of heading angles, disparity of vegetation, 3D parallax) could be suppressed. We conclude that these image data can be considered realistic enough to serve as evaluation data for the selected processing components. Future work will extend the evaluation to other influence parameters and may include the human operator for mission planning and sensor control.
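
    As a hedged sketch of the two processing-chain components named above, registration and change mask extraction, the snippet below registers one grayscale frame onto another and thresholds the difference. The ORB/homography/threshold choices and parameter values are illustrative stand-ins; the actual chain is described in Saur et al. (2014).

    ```python
    import cv2
    import numpy as np

    def change_mask(frame_ref, frame_new, diff_thresh=40):
        """Register frame_new onto frame_ref (feature-based homography),
        then extract a binary change mask by thresholded differencing."""
        orb = cv2.ORB_create(2000)
        k1, d1 = orb.detectAndCompute(frame_ref, None)
        k2, d2 = orb.detectAndCompute(frame_new, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
        src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        h, w = frame_ref.shape[:2]
        aligned = cv2.warpPerspective(frame_new, H, (w, h))
        diff = cv2.absdiff(frame_ref, aligned)
        _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
        # Morphological opening suppresses small registration/compression noise
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        return mask
    ```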

  3. Automated Texture Analysis and Determination of Fibre Orientation of Heart Tissue: A Morphometric Study.

    PubMed

    Zach, Bernhard; Hofer, Ernst; Asslaber, Martin; Ahammer, Helmut

    2016-01-01

    The human heart has a heterogeneous structure, which is characterized by different cell types and their spatial configurations. The physical structure, especially the fibre orientation and the interstitial fibrosis, determines the electrical excitation and consequently the contractility in macroscopic as well as microscopic areas. Modern image processing methods and parameters can be used to describe image content and image texture. In most cases the description of the texture is not satisfactory, because the fibre orientation detected with common algorithms is biased by elements such as fibrocytes or endothelial nuclei. The goal of this work is to determine whether cardiac tissue can be analysed and classified on a microscopic level by automated image processing methods, with a focus on accurate detection of the fibre orientation. Quantitative parameters for identifying textures of different complexity or pathological attributes inside the heart were determined. The focus was set on the detection of the fibre orientation, which was calculated on the basis of the cardiomyocytes' nuclei. It turned out that the orientation of these nuclei corresponded with high precision to the fibre orientation in the image plane. Additionally, these nuclei also indicated the inclination of the fibre very well.

  4. Image processing via level set curvature flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malladi, R.; Sethian, J.A.

    We present a controlled image smoothing and enhancement method based on a curvature flow interpretation of the geometric heat equation. Compared to existing techniques, the model has several distinct advantages. (i) It contains just one enhancement parameter. (ii) The scheme naturally inherits a stopping criterion from the image; continued application of the scheme produces no further change. (iii) The method is one of the fastest possible schemes based on a curvature-controlled approach. 15 ref., 6 figs.
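
    For concreteness, a minimal numpy sketch of one explicit iteration of the curvature flow I_t = κ|∇I|, with κ the curvature of the intensity level sets; the time step is an illustrative assumption. Consistent with the stopping behaviour described above, the update vanishes wherever level sets become straight.

    ```python
    import numpy as np

    def curvature_flow_step(I, dt=0.1, eps=1e-8):
        """One explicit step of level set curvature flow I_t = kappa * |grad I|.

        kappa * |grad I| = (Ixx*Iy^2 - 2*Ix*Iy*Ixy + Iyy*Ix^2) / (Ix^2 + Iy^2),
        written out with central differences via np.gradient."""
        Iy, Ix = np.gradient(I)
        Iyy, Iyx = np.gradient(Iy)
        Ixy, Ixx = np.gradient(Ix)
        num = Ixx * Iy**2 - 2.0 * Ix * Iy * Ixy + Iyy * Ix**2
        grad2 = Ix**2 + Iy**2
        return I + dt * num / (grad2 + eps)  # eps guards flat regions
    ```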

  5. Machine recognition of navel orange worm damage in x-ray images of pistachio nuts

    NASA Astrophysics Data System (ADS)

    Keagy, Pamela M.; Parvin, Bahram; Schatzki, Thomas F.

    1995-01-01

    Insect infestation increases the probability of aflatoxin contamination in pistachio nuts. A non-destructive test is currently not available to determine the insect content of pistachio nuts. This paper uses film X-ray images of various types of pistachio nuts to assess the feasibility of machine recognition of insect-infested nuts. Histogram parameters of four derived images are used in discriminant functions to select insect-infested nuts from specific processing streams.

  6. Aligning HST Images to Gaia: A Faster Mosaicking Workflow

    NASA Astrophysics Data System (ADS)

    Bajaj, V.

    2017-11-01

    We present a fully programmatic workflow for aligning HST images using the high-quality astrometry provided by Gaia Data Release 1. Code provided in a Jupyter Notebook works through this procedure, including parsing the data to determine the query area parameters, querying Gaia for the coordinate catalog, and using that catalog as the reference catalog for TweakReg. This workflow greatly simplifies the normally time-consuming process of aligning HST images, especially those taken as part of mosaics.

  7. Strain localization parameters of AlCu4MgSi processed by high-energy electron beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lunev, A. G., E-mail: agl@ispms.ru; Nadezhkin, M. V., E-mail: mvn@ispms.ru; National Research Tomsk Polytechnic University, Tomsk, 634050

    2015-10-27

    The influence of electron beam surface treatment of AlCu4MgSi on the strain localization parameters and on the critical strain value of the Portevin–Le Chatelier effect has been considered. The strain localization parameters were measured using speckle imaging of specimens subjected to constant strain rate uniaxial tension at room temperature. The impact of the surface treatment on the Portevin–Le Chatelier effect has been investigated.

  8. Artifacts in magnetic spirals retrieved by transport of intensity equation (TIE)

    NASA Astrophysics Data System (ADS)

    Cui, J.; Yao, Y.; Shen, X.; Wang, Y. G.; Yu, R. C.

    2018-05-01

    The artifacts in magnetic structures reconstructed from Lorentz transmission electron microscopy (LTEM) images with the TIE method have been analyzed in detail. Processing of simulated images of Bloch and Néel spirals indicated that improper parameters in TIE may overamplify high-frequency information and induce false features in the retrieved images. Specimen tilting further complicates the analysis of the images, because the LTEM image contrast is not simply the result of the magnetization distribution within the specimen but the integral projection of the magnetic induction filling the entire space, including the specimen.
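
    For context, a hedged sketch of a Fourier-space TIE phase retrieval for a uniform-intensity wave; the regularization constant `alpha` is exactly the kind of parameter whose improper choice overamplifies high spatial frequencies, since it damps the 1/q² pole of the inverse Laplacian. All names and values here are illustrative, not the authors' code.

    ```python
    import numpy as np

    def tie_phase(dI_dz, I0, k, dx, alpha=1e-3):
        """Solve the transport of intensity equation for uniform intensity:
        -k * dI/dz = I0 * laplacian(phi), inverted in Fourier space.

        Too small an alpha lets 1/q^2 blow up low-SNR high frequencies,
        producing the false fine features discussed above."""
        ny, nx = dI_dz.shape
        qy = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
        qx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
        q2 = qx[None, :] ** 2 + qy[:, None] ** 2
        rhs = -k * dI_dz / I0
        phi_hat = -np.fft.fft2(rhs) / (q2 + alpha)  # regularized 1/(-q^2)
        return np.real(np.fft.ifft2(phi_hat))
    ```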

  9. Wavelet-based statistical classification of skin images acquired with reflectance confocal microscopy

    PubMed Central

    Halimi, Abdelghafour; Batatia, Hadj; Le Digabel, Jimmy; Josse, Gwendal; Tourneret, Jean Yves

    2017-01-01

    Detecting skin lentigo in reflectance confocal microscopy images is an important and challenging problem. This imaging modality has not yet been widely investigated for this problem, and only a few automatic processing techniques exist. They are mostly based on machine learning approaches and rely on numerous classical image features that lead to high computational costs given the very large resolution of these images. This paper presents a detection method with very low computational complexity that is able to identify the skin depth at which the lentigo can be detected. The proposed method performs a multiresolution decomposition of the image obtained at each skin depth. The distribution of image pixels at a given depth can be approximated accurately by a generalized Gaussian distribution whose parameters depend on the decomposition scale, resulting in a very low-dimensional parameter space. SVM classifiers are then investigated to classify the scale parameter of this distribution, allowing real-time detection of lentigo. The method is applied to 45 healthy and lentigo patients from a clinical study, where a sensitivity of 81.4% and a specificity of 83.3% are achieved. Our results show that lentigo is identifiable at depths between 50μm and 60μm, corresponding to the average location of the dermoepidermal junction. This result is in agreement with clinical practice, which characterizes lentigo by assessing the disorganization of the dermoepidermal junction. PMID:29296480
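
    A hedged sketch of the pipeline as described: multiresolution decomposition, a generalized Gaussian fit per subband, and an SVM on the fitted parameters. The library choices (PyWavelets, scipy's `gennorm`, scikit-learn) and the wavelet settings are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np
    import pywt
    from scipy.stats import gennorm
    from sklearn.svm import SVC

    def ggd_features(image, wavelet="db4", levels=3):
        """Fit a generalized Gaussian to the detail coefficients of each
        wavelet subband; the fitted shape and scale parameters form a
        very low-dimensional feature vector."""
        coeffs = pywt.wavedec2(image, wavelet, level=levels)
        feats = []
        for detail in coeffs[1:]:        # skip the approximation band
            for band in detail:          # (cH, cV, cD)
                beta, loc, scale = gennorm.fit(band.ravel(), floc=0.0)
                feats.extend([beta, scale])
        return np.array(feats)

    # Train/test on per-depth images labelled healthy (0) / lentigo (1):
    # X = np.stack([ggd_features(img) for img in images])
    # clf = SVC(kernel="rbf").fit(X, labels)
    ```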

  10. Discriminative feature representation: an effective postprocessing solution to low dose CT imaging

    NASA Astrophysics Data System (ADS)

    Chen, Yang; Liu, Jin; Hu, Yining; Yang, Jian; Shi, Luyao; Shu, Huazhong; Gui, Zhiguo; Coatrieux, Gouenou; Luo, Limin

    2017-03-01

    This paper proposes a concise and effective approach termed discriminative feature representation (DFR) for low dose computerized tomography (LDCT) image processing, currently a challenging problem in the medical imaging field. The DFR method models LDCT images as the superposition of desirable high dose CT (HDCT) 3D features and undesirable noise-artifact 3D features (the combined noise and artifact features induced by low dose scan protocols), and the decomposed HDCT features are used to provide the processed LDCT images with higher quality. The target HDCT features are solved via the DFR algorithm using a featured dictionary composed of atoms representing HDCT features and noise-artifact features. In this study, the featured dictionary is efficiently built using physical phantom images collected from the same CT scanner as the target clinical LDCT images. The proposed DFR method also has good robustness in parameter setting for different CT scanner types. The DFR method can be directly applied to process DICOM-formatted LDCT images and has good applicability to current CT systems. Comparative experiments with abdominal LDCT data validate the good performance of the proposed approach. This research was supported by the National Natural Science Foundation under grants (81370040, 81530060), the Fundamental Research Funds for the Central Universities, and the Qing Lan Project in Jiangsu Province.

  11. Secure Display of Space-Exploration Images

    NASA Technical Reports Server (NTRS)

    Cheng, Cecilia; Thornhill, Gillian; McAuley, Michael

    2006-01-01

    Java EDR Display Interface (JEDI) is software for either local display or secure Internet distribution, to authorized clients, of image data acquired from cameras aboard spacecraft engaged in exploration of remote planets. (EDR signifies experimental data record, which, in effect, signifies image data.) Processed at NASA's Multimission Image Processing Laboratory (MIPL), the data can come from either near-real-time processing streams or stored files. JEDI uses the Java Advanced Imaging application program interface, plus input/output packages that are parts of the Video Image Communication and Retrieval software of the MIPL, to display images. JEDI can be run as either a standalone application program or within a Web browser as a servlet with an applet front end. In either operating mode, JEDI communicates using the HTTP(S) protocol(s). In the Web-browser case, the user must provide a password to gain access. For each user and/or image data type, there is a configuration file, called a "personality file," containing parameters that control the layout of the displays and the information to be included in them. Once JEDI has accepted the user's password, it processes the requested EDR (provided that the user is authorized to receive that specific EDR) to create a display according to the user's personality file.

  12. Radial line method for rear-view mirror distortion detection

    NASA Astrophysics Data System (ADS)

    Rahmah, Fitri; Kusumawardhani, Apriani; Setijono, Heru; Hatta, Agus M.; Irwansyah

    2015-01-01

    An image of an object can be distorted due to a defect in a mirror. The rear-view mirror is an important component for vehicle safety, and one of its standard parameters is a distortion factor. This paper presents a radial line method for distortion detection in rear-view mirrors. The rear-view mirror was tested for distortion using a system consisting of a webcam sensor and an image-processing unit. In the image-processing unit, the captured image from the webcam was pre-processed using smoothing and sharpening techniques, and then a radial line method was used to determine the distortion factor. It was successfully demonstrated that the radial line method can be used to determine the distortion factor. This detection system is useful for implementation in, for example, Indonesia's automotive component industry, where manual inspection is still used.

  13. A neural network ActiveX based integrated image processing environment.

    PubMed

    Ciuca, I; Jitaru, E; Alaicescu, M; Moisil, I

    2000-01-01

    The paper outlines an integrated image processing environment that uses neural network ActiveX technology for object recognition and classification. The image processing environment, which is Windows based, encapsulates a Multiple-Document Interface (MDI) and is menu driven. Object (shape) parameter extraction focuses on features that are invariant under translation, rotation and scale transformations. The neural network models that can be incorporated as ActiveX components into the environment allow both clustering and classification of objects from the analysed image. Mapping neural networks perform an input sensitivity analysis on the extracted feature measurements and thus facilitate the removal of irrelevant features and improvements in the degree of generalisation. The program has been used to evaluate the dimensions of the hydrocephalus in a study calculating the Evans index and the angle of the frontal horns of the ventricular system modifications.

  14. Judging The Effectiveness Of Wool Combing By The Entropy Of The Images Of Wool Slivers

    NASA Astrophysics Data System (ADS)

    Rodrigues, F. Carvalho; Carvalho, Fernando D.; Peixoto, J. Pinto; Silva, M. Santos

    1989-04-01

    In general it can be said that the textile industry endeavours to render a bunch of fibres chaotically distributed in space into an ordered spatial distribution. This aim is independent of the nature of the fibres: reaching higher-order states in the spatial distribution of the fibres dictates different industrial processes depending on whether the fibres are wool, cotton or man-made, but the overall effect is centred on obtaining, at every step of any of the processes, a more ordered state in the spatial distribution of the fibres. Thinking about textile processes as a method of getting order out of chaos, the concept of entropy appears to be the most appropriate parameter for judging the effectiveness of a step in the chain of an industrial process to produce a regular textile. In fact, entropy is the hidden parameter not only for the textile industry but also for the non-woven and paper industries. In these industries the state of order is linked with the spatial distribution of fibres, and obtaining an image of a spatial distribution is an easy matter. Computing the image entropy from the grey level distribution requires only the use of the Shannon formula. In this paper, to illustrate the usefulness of the image entropy concept for textiles, the evolution of the entropy of wool slivers along the combing process is matched against the state of parallelization of the fibres along the seven steps as measured by the existing method. The advantages of the entropy method over the previous method based on diffraction are also demonstrated.
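
    The entropy computation referred to is the standard Shannon formula applied to the grey-level histogram; a minimal sketch:

    ```python
    import numpy as np

    def image_entropy(gray, bins=256):
        """Shannon entropy (bits) of the grey-level distribution:
        H = -sum p_i * log2(p_i) over the normalized histogram."""
        hist, _ = np.histogram(gray, bins=bins, range=(0, bins))
        p = hist.astype(float) / hist.sum()
        p = p[p > 0]                     # 0 * log 0 = 0 by convention
        return float(-np.sum(p * np.log2(p)))
    ```

    Applied to successive sliver images along the combing line, a falling H tracks the increasing parallelization of the fibres.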

  15. Skin age testing criteria: characterization of human skin structures by 500 MHz MRI multiple contrast and image processing.

    PubMed

    Sharma, Rakesh

    2010-07-21

    Ex vivo magnetic resonance microimaging (MRM) image characteristics are reported for human skin samples in different age groups. Excised human skin samples were imaged using a custom coil placed inside a 500 MHz NMR imager for high-resolution microimaging. Skin MRI images were processed for characterization of different skin structures. Contiguous cross-sectional T1-weighted 3D spin echo MRI, T2-weighted 3D spin echo MRI and proton density images were compared with skin histopathology and NMR peaks. In all skin specimens, epidermis and dermis thickening and hair follicle size were measured using MRM. Optimized TE and TR parameters and multicontrast enhancement generated better MRI visibility of different skin components. Within high MR signal regions near the custom coil, MRI images with short echo time were comparable with digitized histological sections for skin structures of the epidermis, dermis and hair follicles in 6 (67%) of the nine specimens. Skin percentage tissue composition; measurements of the epidermis, dermis, sebaceous gland and hair follicle size; and skin NMR peaks were signatures of skin type. The image processing determined the dimensionality of skin tissue components and skin typing. Ex vivo MRI images and histopathology may be used to measure skin structure, and skin NMR peaks combined with image processing may be a tool for determining skin typing and skin composition.

  16. Skin age testing criteria: characterization of human skin structures by 500 MHz MRI multiple contrast and image processing

    NASA Astrophysics Data System (ADS)

    Sharma, Rakesh

    2010-07-01

    Ex vivo magnetic resonance microimaging (MRM) image characteristics are reported for human skin samples in different age groups. Excised human skin samples were imaged using a custom coil placed inside a 500 MHz NMR imager for high-resolution microimaging. Skin MRI images were processed for characterization of different skin structures. Contiguous cross-sectional T1-weighted 3D spin echo MRI, T2-weighted 3D spin echo MRI and proton density images were compared with skin histopathology and NMR peaks. In all skin specimens, epidermis and dermis thickening and hair follicle size were measured using MRM. Optimized TE and TR parameters and multicontrast enhancement generated better MRI visibility of different skin components. Within high MR signal regions near the custom coil, MRI images with short echo time were comparable with digitized histological sections for skin structures of the epidermis, dermis and hair follicles in 6 (67%) of the nine specimens. Skin percentage tissue composition; measurements of the epidermis, dermis, sebaceous gland and hair follicle size; and skin NMR peaks were signatures of skin type. The image processing determined the dimensionality of skin tissue components and skin typing. Ex vivo MRI images and histopathology may be used to measure skin structure, and skin NMR peaks combined with image processing may be a tool for determining skin typing and skin composition.

  17. Studies of soundings and imagings measurements from geostationary satellites

    NASA Technical Reports Server (NTRS)

    Suomi, V. E.

    1973-01-01

    Soundings and imaging measurements from geostationary satellites are presented. The subjects discussed are: (1) meteorological data processing techniques, (2) sun glitter, (3) cloud growth rate study and satellite stability characteristics, and (4) high resolution optics. The use of a perturbation technique to obtain the motion of sensors aboard a satellite is described. Measurement conditions and measurement errors are also considered. Several performance evaluation parameters are proposed.

  18. Infrared thermography quantitative image processing

    NASA Astrophysics Data System (ADS)

    Skouroliakou, A.; Kalatzis, I.; Kalyvas, N.; Grivas, TB

    2017-11-01

    Infrared thermography is an imaging technique that has the ability to provide a map of the temperature distribution of an object's surface. It is considered for a wide range of applications in medicine as well as in non-destructive testing procedures. One of its promising medical applications is in orthopaedics and diseases of the musculoskeletal system, where the temperature distribution of the body's surface can contribute to the diagnosis and follow-up of certain disorders. Although the thermographic image can give a fairly good visual estimation of distribution homogeneity and of temperature pattern differences between two symmetric body parts, it is important to extract a quantitative measurement characterizing temperature. Certain approaches use the temperature of enantiomorphic anatomical points, or parameters extracted from a Region of Interest (ROI). A number of indices have been developed by researchers to that end. In this study, a quantitative approach to thermographic image processing is attempted, based on extracting different indices for symmetric ROIs on thermograms of the lower back area of scoliotic patients. The indices are based on first-order statistical parameters describing the temperature distribution. Analysis and comparison of these indices result in an evaluation of the temperature distribution pattern of the back trunk expected in subjects who are healthy with regard to spinal problems.
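
    As a hedged illustration of such indices, the sketch below computes first-order statistics for two symmetric ROIs plus a simple left-right asymmetry value; the exact indices used in the study are not reproduced here.

    ```python
    import numpy as np
    from scipy.stats import skew, kurtosis

    def roi_indices(t_left, t_right):
        """First-order statistical indices for two symmetric thermogram
        ROIs (arrays of temperatures), plus an asymmetry index dT."""
        feats = {}
        for name, roi in (("left", t_left), ("right", t_right)):
            roi = np.asarray(roi, dtype=float).ravel()
            feats[name] = {
                "mean": float(np.mean(roi)),
                "std": float(np.std(roi)),
                "skewness": float(skew(roi)),
                "kurtosis": float(kurtosis(roi)),
            }
        # Healthy backs tend toward left-right symmetry, so a large dT
        # flags an abnormal temperature pattern
        feats["dT"] = abs(feats["left"]["mean"] - feats["right"]["mean"])
        return feats
    ```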

  19. Data filtering with support vector machines in geometric camera calibration.

    PubMed

    Ergun, B; Kavzoglu, T; Colkesen, I; Sahin, C

    2010-02-01

    The use of non-metric digital cameras in close-range photogrammetric applications and machine vision has become a popular research agenda. Being an essential component of photogrammetric evaluation, camera calibration is a crucial stage for non-metric cameras. Therefore, accurate camera calibration and orientation procedures have become prerequisites for the extraction of precise and reliable 3D metric information from images. The lack of accurate inner orientation parameters can lead to unreliable results in the photogrammetric process. A camera can be well defined by its principal distance, principal point offset and lens distortion parameters. Different camera models have been formulated and used in close-range photogrammetry, but generally sensor orientation and calibration are performed with a perspective geometrical model by means of the bundle adjustment. In this study, support vector machines (SVMs) with a radial basis function kernel are employed to model the distortions measured for an Olympus E10 camera system with an Olympus aspherical zoom lens, which are later used in the geometric calibration process. The intent is to introduce an alternative approach for the on-the-job photogrammetric calibration stage. Experimental results for the DSLR camera with three focal length settings (9, 18 and 36 mm) were estimated using bundle adjustment with additional parameters, and analyses were conducted based on object point discrepancies and standard errors. Results show the robustness of the SVM approach for the correction of image coordinates by modelling the total distortion during the on-the-job calibration process using a limited number of images.
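
    A hedged sketch of the idea, using scikit-learn's RBF-kernel support vector regression as a stand-in for the authors' SVM implementation: one regressor per image coordinate learns the distortion field from measured residuals, and the learned field then corrects new image coordinates. The hyperparameter values and data layout are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.svm import SVR

    # xy: (n, 2) measured image coordinates; dx, dy: (n,) distortion
    # corrections at those points (e.g. residuals from a bundle adjustment)
    def fit_distortion(xy, dx, dy, C=10.0, gamma="scale", epsilon=1e-3):
        """Fit one RBF-kernel SVR per coordinate to the distortion field."""
        svr_x = SVR(kernel="rbf", C=C, gamma=gamma, epsilon=epsilon).fit(xy, dx)
        svr_y = SVR(kernel="rbf", C=C, gamma=gamma, epsilon=epsilon).fit(xy, dy)
        return svr_x, svr_y

    def correct(xy, svr_x, svr_y):
        """Apply the learned correction to observed image coordinates."""
        return xy + np.column_stack([svr_x.predict(xy), svr_y.predict(xy)])
    ```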

  20. Visual improvement for bad handwriting based on Monte-Carlo method

    NASA Astrophysics Data System (ADS)

    Shi, Cao; Xiao, Jianguo; Xu, Canhui; Jia, Wenhua

    2014-03-01

    A visual improvement algorithm based on Monte Carlo simulation is proposed in this paper to enhance the visual effect of bad handwriting. The improvement process uses a well-designed typeface to optimize the bad handwriting image. A series of linear operators for image transformation is defined to transform the typeface image to approach the handwriting image, and the specific parameters of the linear operators are estimated by the Monte Carlo method. Visual improvement experiments illustrate that the proposed algorithm can effectively enhance the visual effect of a handwriting image while maintaining the original handwriting features, such as tilt, stroke order and drawing direction. The proposed visual improvement algorithm has great potential for application on tablet computers and the mobile Internet to improve the user experience of handwriting.

  1. Correlation pattern recognition: optimal parameters for quality standards control of chocolate marshmallow candy

    NASA Astrophysics Data System (ADS)

    Flores, Jorge L.; García-Torales, G.; Ponce Ávila, Cristina

    2006-08-01

    This paper describes an in situ image recognition system designed to inspect the quality standards of chocolate pops during their production. The essence of the recognition system is the localization of events (i.e., defects) in the input images that affect the quality standards of the pops. To this end, processing modules based on correlation filtering and image segmentation are employed with the objective of measuring the quality standards. We designed the correlation filter and defined a set of features from the correlation plane. The desired values for these parameters are obtained by exploiting information about objects to be rejected, in order to find the optimal discrimination capability of the system. Based on this set of features, a pop can be correctly classified. The efficacy of the system has been tested thoroughly under laboratory conditions using at least 50 images containing 3 different types of possible defects.
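
    As a hedged illustration of the correlation stage, the sketch below computes a Fourier-domain matched-filter correlation plane and one common plane feature (peak-to-correlation energy); the authors' actual filter design and feature set are not reproduced here.

    ```python
    import numpy as np

    def correlation_plane(image, reference):
        """Cross-correlate a grayscale input with a reference template in
        the Fourier domain (a classical matched filter) and return the
        correlation plane plus a peak-sharpness feature."""
        F = np.fft.fft2(image)
        H = np.conj(np.fft.fft2(reference, s=image.shape))  # matched filter
        plane = np.real(np.fft.ifft2(F * H))
        peak = plane.max()
        # Peak-to-correlation energy: sharp, isolated peaks suggest a
        # defect-free (accepted) object; diffuse planes flag rejects
        pce = peak ** 2 / np.sum(plane ** 2)
        return plane, peak, pce
    ```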

  2. Label-free tomographic reconstruction of optically thick structures using GLIM (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Kandel, Mikhail E.; Kouzehgarani, Ghazal N.; Ngyuen, Tan H.; Gillette, Martha U.; Popescu, Gabriel

    2017-02-01

    Although the contrast generated in transmitted light microscopy is due to the elastic scattering of light, multiple scattering scrambles the image and reduces overall visibility. To image both thin and thick samples, we turn to gradient light interference microscopy (GLIM) to simultaneously measure morphological parameters such as cell mass, volume, and surface area as they change through time. Because GLIM combines multiple intensity images corresponding to controlled phase offsets between laterally sheared beams, incoherent contributions from multiple scattering are implicitly cancelled during the phase reconstruction procedure. As the interfering beams traverse nearly identical paths, they remain comparable in power and interfere with optimal contrast. This key property lets us obtain tomographic parameters from wide-field z-scans after simple numerical processing. Here we show our results on reconstructing tomograms of bovine embryos, characterizing the time-lapse growth of HeLa cells in 3D, and preliminary results on imaging much larger specimens such as brain slices.

  3. Estimation of Dynamical Parameters in Atmospheric Data Sets

    NASA Technical Reports Server (NTRS)

    Wenig, Mark O.

    2004-01-01

    In this study a new technique is used to derive dynamical parameters from atmospheric data sets. This technique, called the structure tensor technique, can be used to estimate dynamical parameters such as motion, source strengths, diffusion constants or exponential decay rates. A general mathematical framework was developed for the direct estimation, from image sequences, of the physical parameters that govern the underlying processes. This estimation technique can be adapted to the specific physical problem under investigation, so it can be used in a variety of applications in trace gas, aerosol, and cloud remote sensing. The fundamental algorithm will be extended to the analysis of multi-channel (e.g. multiple trace gas) image sequences and to provide solutions to the extended aperture problem. Sensitivity studies have been performed to determine the usability of this technique for data sets with different resolutions in time and space and different dimensions.
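
    A minimal sketch of the structure tensor idea for the motion case, assuming a (t, y, x) image block: the spatio-temporal gradient outer products are averaged into a 3×3 tensor whose smallest-eigenvalue eigenvector encodes the velocity. The block-wise averaging is an illustrative windowing choice.

    ```python
    import numpy as np

    def structure_tensor_motion(block):
        """Estimate constant 2D motion in a (t, y, x) image block.

        Builds the spatio-temporal structure tensor J = <g g^T> with
        g = (Ix, Iy, It); for a translating brightness pattern the
        eigenvector of the smallest eigenvalue is proportional to (u, v, 1)."""
        It, Iy, Ix = np.gradient(np.asarray(block, dtype=float))
        comps = (Ix, Iy, It)
        J = np.array([[np.mean(a * b) for b in comps] for a in comps])
        w, V = np.linalg.eigh(J)            # eigenvalues in ascending order
        e = V[:, 0]                         # smallest-eigenvalue eigenvector
        if abs(e[2]) < 1e-12:
            return None                     # aperture problem: ill-posed
        return e[0] / e[2], e[1] / e[2]     # (u, v) in pixels per frame
    ```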

  4. Information granules in image histogram analysis.

    PubMed

    Wieclawek, Wojciech

    2018-04-01

    A concept of granular computing employed in intensity-based image enhancement is discussed. First, a weighted granular computing idea is introduced. Then, the implementation of this term in the image processing area is presented. Finally, multidimensional granular histogram analysis is introduced. The proposed approach is dedicated to digital images, especially medical images acquired by Computed Tomography (CT). Like the histogram equalization approach, this method is based on image histogram analysis. Yet, unlike the histogram equalization technique, it works on a selected range of the pixel intensity and is controlled by two parameters. Performance is tested on anonymized clinical CT series. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Automatic measurement of images on astrometric plates

    NASA Astrophysics Data System (ADS)

    Ortiz Gil, A.; Lopez Garcia, A.; Martinez Gonzalez, J. M.; Yershov, V.

    1994-04-01

    We present some results on the process of automatic detection and measurement of objects in overlapped fields of astrometric plates. The main steps of our algorithm are the following: determination of the scale and tilt between the charge coupled device (CCD) and microscope coordinate systems, and estimation of the signal-to-noise ratio in each field; image identification and improvement of its position and size; final image centering; image selection and storage. Several parameters allow the use of variable criteria for image identification, characterization and selection. Problems related to faint images and crowded fields will be approached with special techniques (morphological filters, histogram properties and fitting models).

  6. Non-invasive breast biopsy method using GD-DTPA contrast enhanced MRI series and F-18-FDG PET/CT dynamic image series

    NASA Astrophysics Data System (ADS)

    Magri, Alphonso William

    This study was undertaken to develop a nonsurgical breast biopsy from Gd-DTPA contrast-enhanced magnetic resonance (CE-MR) images and F-18-FDG PET/CT dynamic image series. A five-step process was developed to accomplish this. (1) Dynamic PET series were nonrigidly registered to the initial frame using a finite element method (FEM) based registration that requires fiducial skin markers to sample the displacement field between image frames. A commercial FEM package (ANSYS) was used for meshing and FEM calculations. Dynamic PET image series registrations were evaluated using the similarity measurements SAVD and NCC. (2) Dynamic CE-MR series were nonrigidly registered to the initial frame using two registration methods: a multi-resolution free-form deformation (FFD) registration driven by normalized mutual information, and an FEM-based registration method. Dynamic CE-MR image series registrations were evaluated using similarity measurements, localization measurements, and qualitative comparison of motion artifacts. FFD registration was found to be superior to FEM-based registration. (3) Nonlinear curve fitting was performed for each voxel of the PET/CT volume of activity versus time, based on a realistic two-compartment Patlak model. Three parameters of this model were fitted; two of them describe the activity levels in the blood and in the cellular compartment, while the third characterizes the washout rate of F-18-FDG from the cellular compartment. (4) Nonlinear curve fitting was performed for each voxel of the MR volume of signal intensity versus time, based on a realistic two-compartment Brix model. Three parameters of this model were fitted: the rate of Gd exiting the compartment representing the extracellular space of a lesion; the rate of Gd exiting a blood compartment; and a parameter that characterizes the strength of signal intensities. Curve fitting for both the PET/CT and MR series was accomplished by applying the Levenberg-Marquardt nonlinear regression algorithm. The best-fit parameters were used to create 3D parametric images. Compartmental modeling was evaluated on the ability of parameter values to differentiate between tissue types; this evaluation, applied to registered and unregistered image series, found that registration improved the results. (5) PET and MR parametric images were registered through FEM- and FFD-based registration. Parametric image registration was evaluated using similarity measurements, target registration error, and qualitative comparison. Comparing FFD- and FEM-based registration results showed that the FEM method is superior. This five-step process constitutes a novel multifaceted approach to a nonsurgical breast biopsy that successfully executes each step. Comparison of this method to biopsy still needs to be done with a larger set of subject data.
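
    Steps (3) and (4) both reduce to voxelwise nonlinear least squares; the sketch below shows the mechanics with scipy's `curve_fit`, which uses Levenberg-Marquardt for unbounded problems. The three-parameter curve here is a generic uptake/washout stand-in, not the exact Patlak or Brix form.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def model(t, a_blood, a_cell, k_out):
        """Illustrative three-parameter uptake/washout curve: a blood term
        plus cellular accumulation with exponential washout rate k_out."""
        return a_blood * np.exp(-k_out * t) + a_cell * (1.0 - np.exp(-k_out * t))

    def fit_voxel(t, activity, p0=(1.0, 1.0, 0.1)):
        """Levenberg-Marquardt fit of one voxel's time-activity curve."""
        popt, _ = curve_fit(model, t, activity, p0=p0, maxfev=2000)
        return popt  # best-fit (a_blood, a_cell, k_out)

    # Mapping fit_voxel over every voxel of the registered series yields
    # 3D parametric images like those registered in step (5).
    ```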

  7. Evaluation of the effects of the seasonal variation of solar elevation angle and azimuth on the processes of digital filtering and thematic classification of relief units

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Novo, E. M. L. M.

    1983-01-01

    The effects of the seasonal variation of illumination on the digital processing of LANDSAT images are evaluated. Two sets of LANDSAT data referring to orbit 150 and row 28 were selected, with illumination parameters varying from 43 deg to 64 deg in azimuth and from 30 deg to 36 deg in solar elevation. The IMAGE-100 system permitted the digital processing of the LANDSAT data. The original images were transformed by means of digital filtering so as to enhance their spatial features. The resulting images were used to obtain an unsupervised classification of relief units. Topographic variables (declivity, altitude, relief range and slope length) were used to identify the true relief units existing on the ground. The LANDSAT overpass data show that digital processing is highly affected by illumination geometry, and there is no correspondence between relief units as defined by spectral features and those resulting from topographic features.

  8. The influence of process parameters on porosity formation in hybrid LASER-GMA welding of AA6082 aluminum alloy

    NASA Astrophysics Data System (ADS)

    Ascari, Alessandro; Fortunato, Alessandro; Orazi, Leonardo; Campana, Giampaolo

    2012-07-01

    This paper deals with an experimental campaign carried out on 8 mm thick AA6082 plates in order to investigate the role of process parameters in porosity formation during hybrid LASER-GMA welding. Bead-on-plate weldments were obtained on the above-mentioned aluminum alloy considering the variation of the following process parameters: GMAW current (120 and 180 A for short-arc mode, 90 and 130 A for pulsed-arc mode), arc transfer mode (short-arc and pulsed-arc) and mutual distance between the arc and LASER sources (0, 3 and 6 mm). Porosities occurring in the fused zone were observed by means of X-ray inspection and measured using image analysis software. To understand the possible correlation between process parameters and porosity formation, an analysis of variance statistical approach was exploited. The obtained results pointed out that the GMAW current has a significant effect on porosity formation, while the distance between the sources does not affect this aspect.

  9. USC orthogonal multiprocessor for image processing with neural networks

    NASA Astrophysics Data System (ADS)

    Hwang, Kai; Panda, Dhabaleswar K.; Haddadi, Navid

    1990-07-01

    This paper presents the architectural features and imaging applications of the Orthogonal MultiProcessor (OMP) system, which is under construction at the University of Southern California with research funding from NSF and assistance from several industrial partners. The prototype OMP is being built with 16 Intel i860 RISC microprocessors and 256 parallel memory modules using custom-designed spanning buses, which are 2-D interleaved and orthogonally accessed without conflicts. The 16-processor OMP prototype is targeted to achieve 430 MIPS and 600 Mflops, which have been verified by simulation experiments based on the design parameters used. The prototype OMP machine will be initially applied for image processing, computer vision, and neural network simulation applications. We summarize important vision and imaging algorithms that can be restructured with neural network models. These algorithms can efficiently run on the OMP hardware with linear speedup. The ultimate goal is to develop a high-performance Visual Computer (Viscom) for integrated low- and high-level image processing and vision tasks.

  10. Study on polarization image methods in turbid medium

    NASA Astrophysics Data System (ADS)

    Fu, Qiang; Mo, Chunhe; Liu, Boyu; Duan, Jin; Zhang, Su; Zhu, Yong

    2014-11-01

    In addition to traditional intensity information, polarization imaging detection technology provides multi-dimensional polarization information, which improves the probability of target detection and recognition. Fusing polarization images of targets in turbid media helps to obtain high-quality images. Using laser illumination at visible wavelengths, linearly polarized intensity images were acquired by rotating the angle of a polaroid, and the polarization parameters of targets were obtained for turbid media with concentrations ranging from 5% to 10%. Image fusion processing is introduced, and the main focus is on processing the acquired polarization images with different polarization image fusion methods; several fusion methods with superior performance for turbid media are discussed, and the processing results and data tables are given. Pixel-level, feature-level and decision-level fusion algorithms were then applied to fuse DOLP (degree of linear polarization) images. The results show that, as the polarization angle increases, the polarization images become increasingly blurred and their quality degrades, while the fused image shows clearly improved contrast over a single image. Finally, the reasons for the increase in image contrast are analysed.
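
    The polarization parameters mentioned are conventionally obtained from intensity images at four polarizer angles; a minimal sketch of the Stokes-based computation of DOLP (and the angle of polarization) under that standard four-angle assumption:

    ```python
    import numpy as np

    def linear_stokes(i0, i45, i90, i135):
        """Linear Stokes parameters from intensity images taken through a
        polarizer at 0, 45, 90 and 135 degrees, then the degree and angle
        of linear polarization (DOLP, AoP)."""
        s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
        s1 = i0 - i90
        s2 = i45 - i135
        dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)
        aop = 0.5 * np.arctan2(s2, s1)
        return s0, s1, s2, dolp, aop
    ```

    DOLP images computed this way are the inputs that the pixel-, feature- and decision-level fusion schemes above combine.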

  11. Improved and Robust Detection of Cell Nuclei from Four Dimensional Fluorescence Images

    PubMed Central

    Bashar, Md. Khayrul; Yamagata, Kazuo; Kobayashi, Tetsuya J.

    2014-01-01

    Segmentation-free direct methods are quite efficient for automated nuclei extraction from high dimensional images. A few such methods do exist, but most of them do not ensure algorithmic robustness to parameter and noise variations. In this research, we propose a method based on multiscale adaptive filtering for efficient and robust detection of nuclei centroids from four dimensional (4D) fluorescence images. A temporal feedback mechanism is employed between the enhancement and the initial detection steps of a typical direct method. We estimate the minimum and maximum nuclei diameters from the previous frame and feed them back as filter lengths for multiscale enhancement of the current frame. A radial intensity-gradient function is optimized at the positions of the initial centroids to estimate all nuclei diameters. This procedure continues for subsequent images in the sequence. The above mechanism thus ensures proper enhancement by automated estimation of major parameters, which brings robustness and safeguards the system against additive noise and the effects of wrong parameters. Later, the method and its single-scale variant are simplified for further reduction of parameters. The proposed method is then extended for nuclei volume segmentation: the same optimization technique is applied to the final centroid positions of the enhanced image, and the estimated diameters are projected onto the binary candidate regions to segment nuclei volumes. Our method is finally integrated with a simple sequential tracking approach to establish nuclear trajectories in the 4D space. Experimental evaluations with five image sequences (each having 271 3D sequential images) corresponding to five different mouse embryos show promising performance of our methods in terms of nuclear detection, segmentation, and tracking. A detailed analysis with a sub-sequence of 101 3D images from one embryo reveals that the proposed method can improve the nuclei detection accuracy by 9% over the previous methods, which used inappropriately large parameter values. Results also confirm that the proposed method and its variants achieve high detection accuracies (about 98% mean F-measure) irrespective of large variations of filter parameters and noise levels. PMID:25020042

  12. Relationships between digital signal processing and control and estimation theory

    NASA Technical Reports Server (NTRS)

    Willsky, A. S.

    1978-01-01

    Research directions in the fields of digital signal processing and modern control and estimation theory are discussed. Stability theory, linear prediction and parameter identification, system synthesis and implementation, two-dimensional filtering, decentralized control and estimation, and image processing are considered in order to uncover some of the basic similarities and differences in the goals, techniques, and philosophy of the disciplines.

  13. Use of scatterometry for resist process control

    NASA Astrophysics Data System (ADS)

    Bishop, Kenneth P.; Milner, Lisa-Michelle; Naqvi, S. Sohail H.; McNeil, John R.; Draper, B. L.

    1992-06-01

    The formation of resist lines having submicron critical dimensions (CDs) is a complex multistep process, requiring precise control of each processing step. Optimization of parameters for each processing step may be accomplished through theoretical modeling techniques and/or the use of send-ahead wafers followed by scanning electron microscope measurements. Once the optimum parameters for a process have been selected (e.g., time duration and temperature for the post-exposure bake process), no in-situ CD measurements are made. In this paper we describe the use of scatterometry to provide this essential metrology capability. It involves focusing a laser beam on a periodic grating and predicting the shape of the grating lines from a measurement of the scattered power in the diffraction orders. The inverse prediction of lineshape from a measurement of the scattered power is based on a vector diffraction analysis used in conjunction with photolithography simulation tools to provide an accurate scatter model for latent image gratings. This diffraction technique has previously been applied to observing latent image grating formation as exposure takes place. We have broadened the scope of the application and consider the problem of determining optimal focus.

  14. The influence of software filtering in digital mammography image quality

    NASA Astrophysics Data System (ADS)

    Michail, C.; Spyropoulou, V.; Kalyvas, N.; Valais, I.; Dimitropoulos, N.; Fountos, G.; Kandarakis, I.; Panayiotakis, G.

    2009-05-01

    Breast cancer is one of the most frequently diagnosed cancers among women. Several techniques have been developed to help in the early detection of breast cancer, such as conventional and digital x-ray mammography, positron and single-photon emission mammography, etc. A key advantage of digital mammography is that images can be manipulated as simple computer image files. Thus, non-dedicated, commercially available image manipulation software can be employed to process and store the images. The image processing tools of Photoshop (CS2) incorporate digital filters which may be used to reduce image noise, enhance contrast and increase spatial resolution. However, improving one image quality parameter may result in the degradation of another. The aim of this work was to investigate the influence of three sharpening filters, named hereafter sharpen, sharpen more and sharpen edges, on image resolution and noise. Image resolution was assessed by means of the Modulation Transfer Function (MTF). In conclusion, it was found that the correct use of commercial non-dedicated software on digital mammograms may improve some aspects of image quality.

  15. Client/server approach to image capturing

    NASA Astrophysics Data System (ADS)

    Tuijn, Chris; Stokes, Earle

    1998-01-01

    The diversity of the digital image capturing devices on the market today is quite astonishing, ranging from low-cost CCD scanners to digital cameras (for both action and stand-still scenes), mid-end CCD scanners for desktop publishing and pre-press applications, and high-end CCD flatbed scanners and drum scanners with photomultiplier technology. Each device and market segment has its own specific needs, which explains the diversity of the associated scanner applications. What all those applications have in common is the need to communicate with a particular device to import the digital images; after the import, additional image processing might be needed as well as color management operations. Although the specific requirements for all of these applications might differ considerably, a number of image capturing and color management facilities as well as other services are needed which can be shared. In this paper, we propose a client/server architecture for scanning and image editing applications which can be used as a common component for all these applications. One of the principal components of the scan server is the input capturing module. The specification of the input jobs is based on a generic input device model. Through this model we abstract away the specific scanner parameters and define the scan job definitions by a number of absolute parameters. As a result, scan job definitions are less dependent on a particular scanner and have a more universal meaning. In this context, we also elaborate on the interaction between the generic parameters and the color characterization (i.e., the ICC profile). Other topics that are covered are the scheduling and parallel processing capabilities of the server, the image processing facilities, the interaction with the ICC engine, the communication facilities (both in-memory and over the network) and the different client architectures (stand-alone applications, TWAIN servers, plug-ins, OLE or Apple-event driven applications). This paper is structured as follows. In the introduction, we further motivate the need for a scan-server-based architecture. In the second section, we give a brief architectural overview of the scan server and the other components it is connected to. The third section presents the generic model for input devices as well as the image processing model; the fourth section describes the different shapes the scanning applications (or modules) can take. In the last section, we briefly summarize the presented material and point out trends for future development.

  16. Comparison of numerical simulations to experiments for atomization in a jet nebulizer.

    PubMed

    Lelong, Nicolas; Vecellio, Laurent; Sommer de Gélicourt, Yann; Tanguy, Christian; Diot, Patrice; Junqua-Moullet, Alexandra

    2013-01-01

    The development of jet nebulizers for medical purposes is an important challenge of aerosol therapy. The performance of a nebulizer is characterized by its output rate of droplets with a diameter under 5 µm, but the optimization of this parameter through experiments has reached a plateau. The purpose of this study is to design a numerical model simulating the nebulization process and to compare it with experimental data. Such a model could provide a better understanding of the atomization process and of the parameters influencing the nebulizer output. A model based on the Updraft nebulizer (Hudson) was designed with ANSYS Workbench. Boundary conditions were set with experimental data, then transient 3D calculations were run on a 4 µm mesh with ANSYS Fluent. Two air flow rates (2 L/min and 8 L/min, the limits of the operating range) were considered to account for different turbulence regimes. Numerical and experimental results were compared according to phenomenology and droplet size. The behavior of the liquid was compared to images acquired through shadowgraphy with a CCD camera. Three experimental methods, laser diffractometry, phase Doppler anemometry (PDA) and shadowgraphy, were used to characterize the droplet size distributions. Camera images showed patterns similar to the numerical results. Droplet sizes obtained numerically are overestimated relative to PDA and diffractometry, which only consider spherical droplets. However, at both flow rates, size distributions extracted from numerical image processing were similar to distributions obtained from shadowgraphy image processing. The simulation thus provides a good understanding and prediction of the phenomena involved in the fragmentation of droplets over 10 µm. The laws of dynamics apply to droplets down to 1 µm, so we can assume the continuity of the distribution and extrapolate the results for droplets between 1 and 10 µm. This model could therefore help predict nebulizer output for defined geometrical and physical parameters.

  17. Comparison of Numerical Simulations to Experiments for Atomization in a Jet Nebulizer

    PubMed Central

    Lelong, Nicolas; Vecellio, Laurent; Sommer de Gélicourt, Yann; Tanguy, Christian; Diot, Patrice; Junqua-Moullet, Alexandra

    2013-01-01

    The development of jet nebulizers for medical purposes is an important challenge of aerosol therapy. The performance of a nebulizer is characterized by its output rate of droplets with a diameter under 5 µm, but the optimization of this parameter through experiments has reached a plateau. The purpose of this study is to design a numerical model simulating the nebulization process and to compare it with experimental data. Such a model could provide a better understanding of the atomization process and of the parameters influencing the nebulizer output. A model based on the Updraft nebulizer (Hudson) was designed with ANSYS Workbench. Boundary conditions were set with experimental data, then transient 3D calculations were run on a 4 µm mesh with ANSYS Fluent. Two air flow rates (2 L/min and 8 L/min, the limits of the operating range) were considered to account for different turbulence regimes. Numerical and experimental results were compared according to phenomenology and droplet size. The behavior of the liquid was compared to images acquired through shadowgraphy with a CCD camera. Three experimental methods, laser diffractometry, phase Doppler anemometry (PDA) and shadowgraphy, were used to characterize the droplet size distributions. Camera images showed patterns similar to the numerical results. Droplet sizes obtained numerically are overestimated relative to PDA and diffractometry, which only consider spherical droplets. However, at both flow rates, size distributions extracted from numerical image processing were similar to distributions obtained from shadowgraphy image processing. The simulation thus provides a good understanding and prediction of the phenomena involved in the fragmentation of droplets over 10 µm. The laws of dynamics apply to droplets down to 1 µm, so we can assume the continuity of the distribution and extrapolate the results for droplets between 1 and 10 µm. This model could therefore help predict nebulizer output for defined geometrical and physical parameters. PMID:24244334

  18. Accelerated dynamic EPR imaging using fast acquisition and compressive recovery

    NASA Astrophysics Data System (ADS)

    Ahmad, Rizwan; Samouilov, Alexandre; Zweier, Jay L.

    2016-12-01

    Electron paramagnetic resonance (EPR) allows quantitative imaging of tissue redox status, which provides important information about ischemic syndromes, cancer and other pathologies. For continuous wave EPR imaging, however, poor signal-to-noise ratio and low acquisition efficiency limit its ability to image dynamic processes in vivo, including tissue redox, where conditions can change rapidly. Here, we present a data acquisition and processing framework that couples fast acquisition with compressive sensing-inspired image recovery to enable EPR-based redox imaging with high spatial and temporal resolutions. The fast acquisition (FA) allows collecting more, albeit noisier, projections in a given scan time. The composite regularization based processing method, called spatio-temporal adaptive recovery (STAR), not only exploits sparsity in multiple representations of the spatio-temporal image but also adaptively adjusts the regularization strength for each representation based on its inherent level of sparsity. As a result, STAR adjusts to the disparity in the level of sparsity across multiple representations, without introducing any tuning parameter. Our simulation and phantom imaging studies indicate that a combination of fast acquisition and STAR (FASTAR) enables high-fidelity recovery of volumetric image series, with each volumetric image requiring less than 10 s of scan time. In addition to image fidelity, the time constants derived from FASTAR also closely match the ground truth even when a small number of projections are used for recovery. This development will enhance the capability of EPR to study fast dynamic processes that cannot be investigated using existing EPR imaging techniques.

  19. Rendering of HDR content on LDR displays: an objective approach

    NASA Astrophysics Data System (ADS)

    Krasula, Lukáš; Narwaria, Manish; Fliegel, Karel; Le Callet, Patrick

    2015-09-01

    Dynamic range compression (or tone mapping) of HDR content is an essential step towards rendering it on traditional LDR displays in a meaningful way. This is, however, non-trivial, and one of the reasons is that tone mapping operators (TMOs) usually need content-specific parameters to achieve the said goal. While subjective TMO parameter adjustment is the most accurate, it may not be easily deployable in many practical applications. Its subjective nature can also influence the comparison of different operators. Thus, there is a need for objective TMO parameter selection to automate the rendering process. To that end, we investigate a new objective method for TMO parameter optimization. Our method is based on the quantification of contrast reversal and naturalness. As an important advantage, it does not require any prior knowledge about the input HDR image and works independently of the TMO used. Experimental results using a variety of HDR images and several popular TMOs demonstrate the value of our method in comparison to default TMO parameter settings.

  20. Exploiting Auto-Collimation for Real-Time Onboard Monitoring of Space Optical Camera Geometric Parameters

    NASA Astrophysics Data System (ADS)

    Liu, W.; Wang, H.; Liu, D.; Miu, Y.

    2018-05-01

    Precise geometric parameters are essential to ensure the positioning accuracy of space optical cameras. However, state-of-the-art on-orbit calibration methods inevitably suffer from long update cycles and poor timeliness. To this end, in this paper we exploit the optical auto-collimation principle and propose a real-time onboard calibration scheme for monitoring key geometric parameters. Specifically, in the proposed scheme, auto-collimation devices are first designed by installing collimated light sources, area-array CCDs, and prisms inside the satellite payload system. Using these devices, changes in the geometric parameters are converted into changes in the spot image positions. The variation of the geometric parameters can then be derived by extracting and processing the spot images. An experimental platform is then set up to verify the feasibility and analyze the precision of the proposed scheme. The experimental results demonstrate that it is feasible to apply the optical auto-collimation principle for real-time onboard monitoring.

  1. A technique for processing of planetary images with heterogeneous characteristics for estimating geodetic parameters of celestial bodies with the example of Ganymede

    NASA Astrophysics Data System (ADS)

    Zubarev, A. E.; Nadezhdina, I. E.; Brusnikin, E. S.; Karachevtseva, I. P.; Oberst, J.

    2016-09-01

    A new technique is developed for generating coordinate control point networks based on photogrammetric processing of heterogeneous planetary images (obtained at different times and scales, with different illumination, or with oblique views). The technique is verified by processing the heterogeneous remote sensing data of Ganymede obtained by the Voyager-1, Voyager-2 and Galileo spacecraft. Using this technique, the first 3D control point network for Ganymede is formed: the error of the altitude coordinates obtained as a result of adjustment is less than 5 km. The new control point network makes it possible to obtain basic geodetic parameters of the body (axis sizes) and to estimate forced librations. On the basis of the control point network, digital terrain models (DTMs) with different resolutions are generated and used for mapping the surface of Ganymede with different levels of detail (Zubarev et al., 2015b).

  2. Rock surface roughness measurement using CSI technique and analysis of surface characterization by qualitative and quantitative results

    NASA Astrophysics Data System (ADS)

    Mukhtar, Husneni; Montgomery, Paul; Gianto; Susanto, K.

    2016-01-01

    In order to develop image processing methods widely applicable in geo-processing and analysis, we introduce an alternative technique for the characterization of rock samples. The technique that we have used for characterizing inhomogeneous surfaces is based on Coherence Scanning Interferometry (CSI). An optical probe is first used to scan over the depth of the surface roughness of the sample. Then, to analyse the measured fringe data, we use the Five Sample Adaptive method to obtain quantitative results of the surface shape. To analyse the surface roughness parameters, Hmm and Rq, a new window resizing analysis technique is employed. The results of the morphology and surface roughness analysis show micron- and nano-scale information which is characteristic of each rock type and its history. These could be used for mineral identification and for studies of rock movement on different surfaces. Image processing is thus used to define the physical parameters of the rock surface.

  3. Design and implementation of a cloud based lithography illumination pupil processing application

    NASA Astrophysics Data System (ADS)

    Zhang, Youbao; Ma, Xinghua; Zhu, Jing; Zhang, Fang; Huang, Huijie

    2017-02-01

    Pupil parameters are important for evaluating the quality of a lithography illumination system. In this paper, a cloud based full-featured pupil processing application is implemented. A web browser is used for the UI (User Interface), the websocket protocol and the JSON format are used for the communication between the client and the server, and the computing part is implemented on the server side, where the application integrates a variety of high quality professional libraries, such as the image processing libraries libvips and ImageMagick and the automatic reporting system LaTeX. The cloud based framework takes advantage of the server's superior computing power and rich software collection, and the program can run anywhere there is a modern browser thanks to its web UI design. Compared to the traditional software operation model (purchased, licensed, shipped, downloaded, installed, maintained, and upgraded), the new cloud based approach, which requires no installation and is easy to use and maintain, opens up a new way. Cloud based applications may well be the future of software development.

  4. Calibration of stereo rigs based on the backward projection process

    NASA Astrophysics Data System (ADS)

    Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui; Zhao, Zixin

    2016-08-01

    High-accuracy 3D measurement based on a binocular vision system is heavily dependent on the accurate calibration of the two rigidly-fixed cameras. In most traditional calibration methods, stereo parameters are iteratively optimized through the forward imaging process (FIP). However, the results can only guarantee minimal 2D pixel errors, not minimal 3D reconstruction errors. To address this problem, a simple method to calibrate a stereo rig based on the backward projection process (BPP) is proposed. The position of a spatial point can be determined separately from each camera using planar constraints provided by the planar pattern target. Then, combined with pre-defined spatial points, the intrinsic and extrinsic parameters of the stereo rig can be optimized by minimizing the total 3D errors of both left and right cameras. An extensive performance study of the method in the presence of image noise and lens distortion is presented. Experiments conducted on synthetic and real data demonstrate the accuracy and robustness of the proposed method.
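
    As a rough illustration of the backward-projection idea, the sketch below (Python; the names triangulate and residuals_3d are invented for illustration, and synthetic data stands in for a real calibration target) optimizes two projection matrices by minimizing the total 3D reconstruction error rather than the 2D reprojection error; the paper's actual parametrization and planar constraints are more elaborate.

      import numpy as np
      from scipy.optimize import least_squares

      def triangulate(P1, P2, x1, x2):
          # Linear (DLT) triangulation of one point from two pixel observations.
          A = np.vstack([x1[0] * P1[2] - P1[0],
                         x1[1] * P1[2] - P1[1],
                         x2[0] * P2[2] - P2[0],
                         x2[1] * P2[2] - P2[1]])
          X = np.linalg.svd(A)[2][-1]
          return X[:3] / X[3]

      def residuals_3d(params, pts1, pts2, world):
          # Total 3D error of both cameras: the quantity the BPP approach
          # minimizes, rather than the 2D pixel error of the FIP approach.
          P1, P2 = params[:12].reshape(3, 4), params[12:].reshape(3, 4)
          est = np.array([triangulate(P1, P2, a, b) for a, b in zip(pts1, pts2)])
          return (est - world).ravel()

      # Synthetic demo: two known projection matrices and a random point cloud.
      rng = np.random.default_rng(0)
      P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
      P2 = np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])
      world = rng.uniform([-1, -1, 4], [1, 1, 6], size=(20, 3))
      homog = np.hstack([world, np.ones((20, 1))])
      proj1 = homog @ P1.T
      proj2 = homog @ P2.T
      pts1 = proj1[:, :2] / proj1[:, 2:]
      pts2 = proj2[:, :2] / proj2[:, 2:]
      x0 = np.concatenate([P1.ravel(), P2.ravel()]) + 1e-3 * rng.standard_normal(24)
      res = least_squares(residuals_3d, x0, args=(pts1, pts2, world))
      print("final 3D RMS error:", np.sqrt(np.mean(res.fun ** 2)))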

  5. Automatic parameter selection for feature-based multi-sensor image registration

    NASA Astrophysics Data System (ADS)

    DelMarco, Stephen; Tom, Victor; Webb, Helen; Chao, Alan

    2006-05-01

    Accurate image registration is critical for applications such as precision targeting, geo-location, change-detection, surveillance, and remote sensing. However, the increasing volume of image data is exceeding the current capacity of human analysts to perform manual registration. This image data glut necessitates the development of automated approaches to image registration, including algorithm parameter value selection. Proper parameter value selection is crucial to the success of registration techniques. The appropriate algorithm parameters can be highly scene and sensor dependent. Therefore, robust algorithm parameter value selection approaches are a critical component of an end-to-end image registration algorithm. In previous work, we developed a general framework for multisensor image registration which includes feature-based registration approaches. In this work we examine the problem of automated parameter selection. We apply the automated parameter selection approach of Yitzhaky and Peli to select parameters for feature-based registration of multisensor image data. The approach consists of generating multiple feature-detected images by sweeping over parameter combinations and using these images to generate estimated ground truth. The feature-detected images are compared to the estimated ground truth images to generate ROC points associated with each parameter combination. We develop a strategy for selecting the optimal parameter set by choosing the parameter combination corresponding to the optimal ROC point. We present numerical results showing the effectiveness of the approach using registration of collected SAR data to reference EO data.
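
    A minimal sketch of this selection loop follows (Python; the toy threshold detector and the distance-to-corner rule for picking the optimal ROC point are illustrative assumptions, not the paper's exact procedure):

      import itertools
      import numpy as np

      def roc_point(detected, truth):
          # detected / truth: boolean feature maps of identical shape.
          tp = np.sum(detected & truth)
          fn = np.sum(~detected & truth)
          fp = np.sum(detected & ~truth)
          tn = np.sum(~detected & ~truth)
          return fp / (fp + tn), tp / (tp + fn)   # (FPR, TPR)

      def select_parameters(image, detector, grid, truth):
          # Sweep all parameter combinations and keep the one whose ROC point
          # lies closest to the ideal corner (FPR=0, TPR=1).
          best, best_dist = None, np.inf
          for combo in itertools.product(*grid.values()):
              params = dict(zip(grid.keys(), combo))
              fpr, tpr = roc_point(detector(image, **params), truth)
              dist = np.hypot(fpr, 1.0 - tpr)
              if dist < best_dist:
                  best, best_dist = params, dist
          return best

      # Toy demo: a threshold "feature detector"; 'truth' stands in for the
      # ground truth estimated from the ensemble of feature-detected images.
      rng = np.random.default_rng(1)
      img = rng.random((64, 64))
      truth = img > 0.7
      detector = lambda im, t: im > t
      print(select_parameters(img, detector, {"t": list(np.linspace(0.1, 0.9, 9))}, truth))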

  6. Mueller coherency matrix method for contrast image in tissue polarimetry

    NASA Astrophysics Data System (ADS)

    Arce-Diego, J. L.; Fanjul-Vélez, F.; Samperio-García, D.; Pereda-Cubián, D.

    2007-07-01

    In this work, we propose the use of the Mueller Coherency matrix of biological tissues in order to increase the information from tissue images and so their contrast. This method involves different Mueller Coherency matrix based parameters, like the eigenvalues analysis, the entropy factor calculation, polarization components crosstalks, linear and circular polarization degrees, hermiticity or the Quaternions analysis in case depolarisation properties of tissue are sufficiently low. All these parameters make information appear clearer and so increase image contrast, so pathologies like cancer could be detected in a sooner stage of development. The election will depend on the concrete pathological process under study. This Mueller Coherency matrix method can be applied to a single tissue point, or it can be combined with a tomographic technique, so as to obtain a 3D representation of polarization contrast parameters in pathological tissues. The application of this analysis to concrete diseases can lead to tissue burn depth estimation or cancer early detection.

  7. Image-Processing Techniques for the Creation of Presentation-Quality Astronomical Images

    NASA Astrophysics Data System (ADS)

    Rector, Travis A.; Levay, Zoltan G.; Frattare, Lisa M.; English, Jayanne; Pu'uohau-Pummill, Kirk

    2007-02-01

    The quality of modern astronomical data and the agility of current image-processing software enable the visualization of data in a way that exceeds the traditional definition of an astronomical image. Two developments in particular have led to a fundamental change in how astronomical images can be assembled. First, the availability of high-quality multiwavelength and narrowband data allows for images that do not correspond to the wavelength sensitivity of the human eye, thereby introducing ambiguity in the usage and interpretation of color. Second, many image-processing software packages now use a layering metaphor that allows any number of astronomical data sets to be combined into a color image. With this technique, images with as many as eight data sets have been produced. Each data set is intensity-scaled and colorized independently, creating an immense parameter space that can be used to assemble the image. Since such images are intended for data visualization, scaling and color schemes must be chosen that best illustrate the science. A practical guide is presented on how to use the layering metaphor to generate publication-ready astronomical images from as many data sets as desired. A methodology is also given on how to use intensity scaling, color, and composition to create contrasts in an image that highlight the scientific detail. Examples of image creation are discussed.
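
    The layering metaphor can be sketched in a few lines of Python (a minimal illustration with random arrays standing in for real data; the percentile stretch and screen blend are common choices, not necessarily those used by the authors): each data set is intensity-scaled and colorized independently, then the layers are combined.

      import numpy as np

      def stretch(data, lo=0.5, hi=99.5):
          # Independent intensity scaling of one data set (percentile clip to [0, 1]).
          a, b = np.percentile(data, [lo, hi])
          return np.clip((data - a) / (b - a), 0.0, 1.0)

      def layer_compose(datasets, colors):
          # Colorize each scaled data set and combine with a "screen"-style
          # blend, one layer per data set, as in the layering metaphor.
          rgb = np.zeros(datasets[0].shape + (3,))
          for data, color in zip(datasets, colors):
              layer = stretch(data)[..., None] * np.asarray(color)
              rgb = 1.0 - (1.0 - rgb) * (1.0 - layer)   # screen blend
          return rgb

      # e.g. three narrowband exposures mapped to reddish, greenish, bluish hues:
      rng = np.random.default_rng(2)
      bands = [rng.random((128, 128)) for _ in range(3)]
      image = layer_compose(bands, [(1, 0.2, 0.1), (0.1, 1, 0.2), (0.2, 0.3, 1)])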

  8. A New Color Image Encryption Scheme Using CML and a Fractional-Order Chaotic System

    PubMed Central

    Wu, Xiangjun; Li, Yang; Kurths, Jürgen

    2015-01-01

    The chaos-based image cryptosystems have been widely investigated in recent years to provide real-time encryption and transmission. In this paper, a novel color image encryption algorithm using coupled-map lattices (CML) and a fractional-order chaotic system is proposed to enhance the security and robustness of encryption algorithms with a permutation-diffusion structure. To make the encryption procedure more confusing and complex, an image division-shuffling process is put forward, where the plain-image is first divided into four sub-images, and then the positions of the pixels in the whole image are shuffled. In order to generate the initial conditions and parameters of the two chaotic systems, a 280-bit long external secret key is employed. Key space analysis, various statistical analyses, information entropy analysis, differential analysis and key sensitivity analysis are carried out to test the security of the new image encryption algorithm. The cryptosystem speed is analyzed and tested as well. Experimental results confirm that, in comparison to other image encryption schemes, the new algorithm has higher security and is fast enough for practical image encryption. Moreover, an extensive tolerance analysis of some common image processing operations, such as noise adding, cropping, JPEG compression, rotation, brightening and darkening, has been performed on the proposed image encryption technique. The corresponding results reveal that the proposed image encryption method has good robustness against these image processing operations and geometric attacks. PMID:25826602
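
    A simplified sketch of the division-shuffling stage follows (Python; a keyed pseudo-random permutation stands in for the chaotic CML/fractional-order sequences, and the diffusion stage is omitted):

      import numpy as np

      def divide_shuffle(img, key):
          # Split the plain-image into four sub-images, then permute the pixel
          # positions of the whole image. A keyed PRNG stands in here for the
          # chaotic sequences used in the paper.
          h, w = img.shape
          h2, w2 = h // 2, w // 2
          blocks = [img[:h2, :w2], img[:h2, w2:], img[h2:, :w2], img[h2:, w2:]]
          flat = np.concatenate([b.ravel() for b in blocks])
          perm = np.random.default_rng(key).permutation(flat.size)
          return flat[perm].reshape(h, w), perm

      def unshuffle(shuffled, perm, shape):
          # Invert the permutation, then reassemble the four quadrants.
          h, w = shape
          h2, w2 = h // 2, w // 2
          flat = np.empty(h * w, dtype=shuffled.dtype)
          flat[perm] = shuffled.ravel()
          out = np.empty(shape, dtype=shuffled.dtype)
          q = h2 * w2
          out[:h2, :w2] = flat[0:q].reshape(h2, w2)
          out[:h2, w2:] = flat[q:2*q].reshape(h2, w2)
          out[h2:, :w2] = flat[2*q:3*q].reshape(h2, w2)
          out[h2:, w2:] = flat[3*q:].reshape(h2, w2)
          return out

      img = np.arange(64, dtype=np.uint8).reshape(8, 8)
      enc, perm = divide_shuffle(img, key=280)   # key derived from the secret key
      assert np.array_equal(unshuffle(enc, perm, img.shape), img)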

  9. Process for combining multiple passes of interferometric SAR data

    DOEpatents

    Bickel, Douglas L.; Yocky, David A.; Hensley, Jr., William H.

    2000-11-21

    Interferometric synthetic aperture radar (IFSAR) is a promising technology for a wide variety of military and civilian elevation modeling requirements. IFSAR extends traditional two dimensional SAR processing to three dimensions by utilizing the phase difference between two SAR images taken from different elevation positions to determine an angle of arrival for each pixel in the scene. This angle, together with the two-dimensional location information in the traditional SAR image, can be transformed into geographic coordinates if the position and motion parameters of the antennas are known accurately.

  10. Error analysis of filtering operations in pixel-duplicated images of diabetic retinopathy

    NASA Astrophysics Data System (ADS)

    Mehrubeoglu, Mehrube; McLauchlan, Lifford

    2010-08-01

    In this paper, diabetic retinopathy is chosen as a sample target image to demonstrate the effectiveness of image enlargement through pixel duplication in identifying regions of interest. Pixel duplication is presented as a simpler alternative to data interpolation techniques for detecting small structures in the images. A comparative analysis is performed on different image processing schemes applied to both original and pixel-duplicated images. Structures of interest are detected, and classification parameters are optimized for minimum false positive detection in the original and enlarged retinal pictures. The error analysis demonstrates the advantages as well as the shortcomings of pixel duplication in image enhancement when spatial averaging operations (smoothing filters) are also applied.
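
    Pixel duplication itself is nearly a one-liner; a minimal sketch in Python with NumPy:

      import numpy as np

      def pixel_duplicate(img, factor=2):
          # Enlarge by duplicating each pixel: a simpler alternative to
          # interpolation that introduces no new intensity values.
          return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

      img = np.array([[1, 2], [3, 4]], dtype=np.uint8)
      print(pixel_duplicate(img))
      # [[1 1 2 2]
      #  [1 1 2 2]
      #  [3 3 4 4]
      #  [3 3 4 4]]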

  11. Reconstruction of pulse noisy images via stochastic resonance

    PubMed Central

    Han, Jing; Liu, Hongjun; Sun, Qibing; Huang, Nan

    2015-01-01

    We investigate a practical technology for reconstructing nanosecond pulse noisy images via stochastic resonance, which is based on the modulation instability. A theoretical model of this method for optical pulse signal is built to effectively recover the pulse image. The nanosecond noise-hidden images grow at the expense of noise during the stochastic resonance process in a photorefractive medium. The properties of output images are mainly determined by the input signal-to-noise intensity ratio, the applied voltage across the medium, and the correlation length of noise background. A high cross-correlation gain is obtained by optimizing these parameters. This provides a potential method for detecting low-level or hidden pulse images in various imaging applications. PMID:26067911

  12. Dispersal of Volcanic Ash on Mars: Ash Grain Shape Analysis

    NASA Astrophysics Data System (ADS)

    Langdalen, Z.; Fagents, S. A.; Fitch, E. P.

    2017-12-01

    Many ash dispersal models use spheres as ash-grain analogs in drag calculations. These simplifications introduce inaccuracies in the treatment of drag coefficients, leading to inaccurate settling velocities and dispersal predictions. Therefore, we are investigating the use of a range of shape parameters, calculated using grain dimensions, to derive a better representation of grain shape and effective grain cross-sectional area. Specifically, our goal is to apply our results to the modeling of ash deposition to investigate the proposed volcanic origin of certain fine-grained deposits on Mars. Therefore, we are documenting the dimensions and shapes of ash grains from terrestrial subplinian to plinian deposits, in eight size divisions from 2 mm to 16 μm, employing a high resolution optical microscope. The optical image capture protocol provides an accurate ash grain outline by taking multiple images at different focus heights prior to combining them into a composite image. Image composite mosaics are then processed through ImageJ, a robust scientific measurement software package, to calculate a range of dimensionless shape parameters. Since ash grains rotate as they fall, drag forces act on a changing cross-sectional area. Therefore, we capture images and calculate shape parameters of each grain positioned in three orthogonal orientations. We find that the difference between maximum and minimum aspect ratios of the three orientations of a given grain best quantifies the degree of elongation of that grain. However, the average aspect ratio calculated for each grain provides a good representation of relative differences among grains. We also find that convexity provides the best representation of surface irregularity. For both shape parameters, natural ash grains display notably different shape parameter values than sphere analogs. Therefore, Mars ash dispersal modeling that incorporates shape parameters will provide more realistic predictions of deposit extents because volcanic ash-grain morphologies differ substantially from simplified geometric shapes.
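
    A rough equivalent of these ImageJ measurements can be sketched with scikit-image (assumptions: solidity is used as a proxy for the convexity measure mentioned in the abstract, and a toy rectangular mask stands in for a real grain outline):

      import numpy as np
      from skimage.measure import label, regionprops

      def shape_parameters(mask):
          # Dimensionless shape parameters for each grain outline in a binary
          # mask, analogous to the ImageJ measurements described above.
          params = []
          for region in regionprops(label(mask)):
              aspect = region.minor_axis_length / region.major_axis_length
              solidity = region.solidity   # area / convex hull area
              params.append({"aspect_ratio": aspect, "convexity": solidity})
          return params

      def elongation(aspect_ratios):
          # The max-min difference of the aspect ratios measured in three
          # orthogonal orientations quantifies a grain's elongation.
          return max(aspect_ratios) - min(aspect_ratios)

      mask = np.zeros((40, 40), dtype=bool)
      mask[10:30, 15:22] = True   # a toy elongated "grain"
      print(shape_parameters(mask), elongation([0.35, 0.5, 0.9]))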

  13. Total focusing method with correlation processing of antenna array signals

    NASA Astrophysics Data System (ADS)

    Kozhemyak, O. A.; Bortalevich, S. I.; Loginov, E. L.; Shinyakov, Y. A.; Sukhorukov, M. P.

    2018-03-01

    The article proposes a method of preliminary correlation processing of a complete set of antenna array signals used in the image reconstruction algorithm. The results of experimental studies of 3D reconstruction of various reflectors with and without correlation processing are presented. The software ‘IDealSystem3D’ by IDeal-Technologies was used for the experiments. Copper wires of different diameters located in a water bath were used as reflectors. The use of correlation processing makes it possible to obtain a more accurate reconstruction of the image of the reflectors and to increase the signal-to-noise ratio. The experimental results were processed using an original program, which allows varying the parameters of the antenna array and the sampling frequency.

  14. Level 2 Ancillary Products and Datasets Algorithm Theoretical Basis

    NASA Technical Reports Server (NTRS)

    Diner, D.; Abdou, W.; Gordon, H.; Kahn, R.; Knyazikhin, Y.; Martonchik, J.; McDonald, D.; McMuldroch, S.; Myneni, R.; West, R.

    1999-01-01

    This Algorithm Theoretical Basis (ATB) document describes the algorithms used to generate the parameters of certain ancillary products and datasets used during Level 2 processing of Multi-angle Imaging SpectroRadiometer (MISR) data.

  15. Sharpening Ejecta Patterns: Investigating Spectral Fidelity After Controlled Intensity-Hue-Saturation Image Fusion of LROC Images of Fresh Craters

    NASA Astrophysics Data System (ADS)

    Awumah, A.; Mahanti, P.; Robinson, M. S.

    2017-12-01

    Image fusion is often used in Earth-based remote sensing applications to merge spatial details from a high-resolution panchromatic (Pan) image with the color information from a lower-resolution multi-spectral (MS) image, resulting in a high-resolution multi-spectral image (HRMS). Previously, the performance of six well-known image fusion methods was compared using Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) and Wide Angle Camera (WAC) images (1). Results showed the Intensity-Hue-Saturation (IHS) method provided the best spatial performance, but deteriorated the spectral content. In general, there was a trade-off between spatial enhancement and spectral fidelity in the fusion process; the more spatial detail from the Pan fused with the MS image, the more spectrally distorted the final HRMS. In this work, we control the amount of spatial detail fused (from the LROC NAC images to WAC images) using a controlled IHS method (2) to investigate the spatial variation in spectral distortion on fresh crater ejecta. In the controlled IHS method (2), the percentage of the Pan component merged with the MS is varied. The percent of spatial detail from the Pan used is determined by a variable whose value may be varied from 1 (no Pan utilized) to infinity (entire Pan utilized). An HRMS color composite image (red = 415 nm, green = 321/415 nm, blue = 321/360 nm (3)) was used to assess performance (via visual inspection and metric-based evaluations) at each tested value of the control parameter (1 to 10, in 0.01 increments; beyond 10 the spectral distortion saturates) within three regions: crater interiors, ejecta blankets, and the background material surrounding the craters. Increasing the control parameter introduced increased spatial sharpness and spectral distortion in all regions, but to varying degrees. Crater interiors suffered the most color distortion, while ejecta experienced less. The controlled IHS method is therefore desirable for resolution enhancement of fresh crater ejecta; larger values of the control parameter may be used to sharpen MS images of ejecta patterns with less impact on color distortion than in the uncontrolled IHS fusion process. References: (1) Prasun et al. (2016) ISPRS. (2) Choi, Myungjin (2006) IEEE. (3) Denevi et al. (2014) JGR.

  16. Feature evaluation of complex hysteresis smoothing and its practical applications to noisy SEM images.

    PubMed

    Suzuki, Kazuhiko; Oho, Eisaku

    2013-01-01

    The quality of a scanning electron microscopy (SEM) image is strongly influenced by noise, which is a fundamental drawback of the SEM instrument. Complex hysteresis smoothing (CHS) was previously developed for noise removal from SEM images. The noise removal is performed by monitoring and properly processing the amplitude of the SEM signal. At present, CHS is not widely used, although it has several advantages for SEM; for example, the resolution of an image processed by CHS is essentially equal to that of the original image. In order to find wide application for the CHS method in microscopy, the characteristics of CHS, which until now have not been well clarified, are evaluated correctly. As an application of the results of this evaluation, the cursor width (CW), which is the sole processing parameter of CHS, is determined more properly using the standard deviation of the noise Nσ. In addition, the disadvantage that CHS cannot remove noise of excessively large amplitude is mitigated by a certain postprocessing step. CHS is successfully applicable to SEM images with various noise amplitudes. © Wiley Periodicals, Inc.
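
    The core of hysteresis smoothing can be sketched as follows (a simplified 1D version in Python; the actual CHS algorithm and its postprocessing are more involved). The output stays put while the signal wanders within a dead band of width CW, so noise below the cursor width is removed without degrading resolution:

      import numpy as np

      def hysteresis_smooth(signal, cw):
          # One-pass hysteresis smoothing: the output is constant while the
          # input stays inside a dead band of width cw (the cursor width),
          # suppressing noise of amplitude below cw without blurring edges.
          out = np.empty_like(signal, dtype=float)
          y = float(signal[0])
          for i, x in enumerate(signal):
              if x > y + cw / 2:        # signal escaped above the cursor
                  y = x - cw / 2
              elif x < y - cw / 2:      # signal escaped below the cursor
                  y = x + cw / 2
              out[i] = y
          return out

      # Following the paper's result, cw can be tied to the noise standard
      # deviation, e.g. cw = k * sigma_noise for some factor k.
      rng = np.random.default_rng(3)
      line = np.repeat([10.0, 40.0], 100) + rng.normal(0, 2.0, 200)
      smoothed = hysteresis_smooth(line, cw=6 * 2.0)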

  17. An Adaptive Image Enhancement Technique by Combining Cuckoo Search and Particle Swarm Optimization Algorithm

    PubMed Central

    Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei

    2015-01-01

    Image enhancement is an important procedure in image processing and analysis. This paper presents a new technique using a modified measure and a blend of cuckoo search and particle swarm optimization (CS-PSO) to enhance low contrast images adaptively. Contrast enhancement is obtained by a global transformation of the input intensities; it employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality that considers three factors: threshold, entropy value, and the gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with other existing techniques such as linear contrast stretching, histogram equalization, and evolutionary computing based image enhancement methods like the backtracking search algorithm, the differential search algorithm, the genetic algorithm, and particle swarm optimization, in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper. PMID:25784928

  18. An adaptive image enhancement technique by combining cuckoo search and particle swarm optimization algorithm.

    PubMed

    Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei

    2015-01-01

    Image enhancement is an important procedure in image processing and analysis. This paper presents a new technique using a modified measure and a blend of cuckoo search and particle swarm optimization (CS-PSO) to enhance low contrast images adaptively. Contrast enhancement is obtained by a global transformation of the input intensities; it employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality that considers three factors: threshold, entropy value, and the gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with other existing techniques such as linear contrast stretching, histogram equalization, and evolutionary computing based image enhancement methods like the backtracking search algorithm, the differential search algorithm, the genetic algorithm, and particle swarm optimization, in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper.
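
    The global transformation stage can be sketched as below (Python; the entropy-only fitness and the tiny grid search are stand-ins for the full three-factor criterion and the CS-PSO optimizer):

      import numpy as np
      from scipy.special import betainc

      def beta_transform(img, a, b):
          # Global intensity transformation through the regularized incomplete
          # Beta function; (a, b) are the parameters the CS-PSO search tunes.
          x = (img - img.min()) / (np.ptp(img) + 1e-12)
          return betainc(a, b, x)

      def fitness(img):
          # Stand-in fitness: entropy only; the paper's criterion also uses a
          # threshold factor and the gray-level probability density.
          hist, _ = np.histogram(img, bins=256, range=(0.0, 1.0))
          p = hist[hist > 0] / hist.sum()
          return -(p * np.log2(p)).sum()

      rng = np.random.default_rng(4)
      low_contrast = 0.4 + 0.2 * rng.random((64, 64))
      best = max(((a, b) for a in (1.5, 2.0, 3.0) for b in (1.5, 2.0, 3.0)),
                 key=lambda ab: fitness(beta_transform(low_contrast, *ab)))
      print("best (a, b):", best)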

  19. Analysis of Non Local Image Denoising Methods

    NASA Astrophysics Data System (ADS)

    Pardo, Álvaro

    Image denoising is probably one of the most studied problems in the image processing community. Recently a new paradigm of non local denoising was introduced. The Non Local Means method proposed by Buades, Morel and Coll attracted the attention of other researchers, who proposed improvements and modifications to their proposal. In this work we analyze these methods, trying to understand their properties while connecting them to segmentation based on spectral graph properties. We also propose some improvements to automatically estimate the parameters used in these methods.
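
    For reference, a minimal (and deliberately slow) Non Local Means implementation in Python; the patch-similarity weights computed here are exactly the affinities that link these methods to spectral graph based segmentation:

      import numpy as np

      def nlm_denoise(img, patch=3, search=7, h=0.1):
          # Each pixel becomes a weighted average of pixels in a search window
          # whose surrounding patches look similar; h controls the weight decay.
          pad = patch // 2
          padded = np.pad(img, pad, mode="reflect")
          out = np.zeros_like(img, dtype=float)
          rows, cols = img.shape
          s = search // 2
          for i in range(rows):
              for j in range(cols):
                  p_ref = padded[i:i + patch, j:j + patch]
                  wsum, acc = 0.0, 0.0
                  for u in range(max(0, i - s), min(rows, i + s + 1)):
                      for v in range(max(0, j - s), min(cols, j + s + 1)):
                          p = padded[u:u + patch, v:v + patch]
                          w = np.exp(-np.sum((p_ref - p) ** 2) / h ** 2)
                          wsum += w
                          acc += w * img[u, v]
                  out[i, j] = acc / wsum
          return out

      rng = np.random.default_rng(5)
      clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0
      noisy = clean + rng.normal(0, 0.1, clean.shape)
      denoised = nlm_denoise(noisy)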

  20. Machine recognition of navel orange worm damage in X-ray images of pistachio nuts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keagy, P.M.; Schatzki, T.F.; Parvin, B.

    Insect infestation increases the probability of aflatoxin contamination in pistachio nuts. A non-destructive test is currently not available to determine the insect content of pistachio nuts. This paper presents the use of film X-ray images of various types of pistachio nuts to assess the possibility of machine recognition of insect infested nuts. Histogram parameters of four derived images are used in discriminant functions to select insect infested nuts from specific processing streams.

  1. Variable-temperature Fourier transform near-infrared imaging spectroscopy of the deuterium/hydrogen exchange in liquid D₂O.

    PubMed

    Unger, Miriam; Ozaki, Yukihiro; Siesler, Heinz W

    2014-01-01

    In the present publication, the deuterium/hydrogen (D/H) exchange of liquid D2O exposed to water vapor of the surrounding atmosphere has been studied by variable-temperature Fourier transform near-infrared (FT-NIR) imaging spectroscopy. Apart from the visualization of the exchange process in the time-resolved FT-NIR images, kinetic parameters and the activation energy for this D/H exchange reaction have been derived from the Arrhenius plot of the variable-temperature spectroscopic data.
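
    The Arrhenius analysis itself reduces to a linear fit of ln k against 1/T; a minimal sketch in Python with made-up rate constants (not the paper's data):

      import numpy as np

      # Arrhenius law: k = A * exp(-Ea / (R * T)), so ln k is linear in 1/T
      # and the slope gives -Ea/R. The values below are illustrative only.
      R = 8.314                                        # J mol^-1 K^-1
      T = np.array([288.0, 298.0, 308.0, 318.0])       # temperatures, K
      k = np.array([1.2e-4, 3.1e-4, 7.4e-4, 1.6e-3])   # rate constants, s^-1

      slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
      Ea = -slope * R
      print(f"activation energy: {Ea / 1000:.1f} kJ/mol, ln A = {intercept:.2f}")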

  2. Statistical model for speckle pattern optimization.

    PubMed

    Su, Yong; Zhang, Qingchuan; Gao, Zeren

    2017-11-27

    Image registration is the key technique of optical metrologies such as digital image correlation (DIC), particle image velocimetry (PIV), and speckle metrology. Its performance depends critically on the quality of the image pattern, and thus pattern optimization attracts extensive attention. In this article, a statistical model is built to optimize speckle patterns that are composed of randomly positioned speckles. It is found that the process of speckle pattern generation is essentially a filtered Poisson process. The dependence of measurement errors (including systematic errors, random errors, and overall errors) upon the speckle pattern generation parameters is characterized analytically. By minimizing the errors, formulas for the optimal speckle radius are presented. Although the primary motivation comes from the field of DIC, we believe that scholars in other optical measurement communities, such as PIV and speckle metrology, will benefit from these discussions.
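
    The filtered Poisson process view of speckle generation can be sketched directly (Python; the Gaussian kernel and the parameter values are illustrative choices):

      import numpy as np

      def speckle_pattern(size, density, radius, rng=None):
          # Speckle centers form a 2D Poisson point process; each center is
          # filtered by (here) a Gaussian kernel of the given radius.
          rng = rng or np.random.default_rng()
          n = rng.poisson(density * size * size)
          xs, ys = rng.uniform(0, size, n), rng.uniform(0, size, n)
          yy, xx = np.mgrid[0:size, 0:size]
          img = np.zeros((size, size))
          for x, y in zip(xs, ys):
              img += np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2 * radius ** 2))
          return np.clip(img, 0.0, 1.0)

      pattern = speckle_pattern(size=128, density=0.02, radius=2.5,
                                rng=np.random.default_rng(9))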

  3. Design of biometrics identification system on palm vein using infrared light

    NASA Astrophysics Data System (ADS)

    Syafiq, Muhammad; Nasution, Aulia M. T.

    2016-11-01

    Images obtained with LEDs at wavelengths of 740 nm and 810 nm showed that the contrast gradient of the vein pattern is low and the palm pattern is still present, meaning that 740 nm and 810 nm are less suitable for detecting blood vessels in the palm of the hand. At a wavelength of 940 nm, the vein pattern is clearly visible and the palm pattern is mostly gone. Pre-processing is then performed using smoothing (a Gaussian filter and a median filter) and contrast stretching. Image segmentation is done by extracting the ROI area from which the feature information is obtained. The identification of image features is performed using the MSE (Mean Square Error) method and LBP (Local Binary Pattern). A database consisting of 5 different palm vein patterns is used for testing the tool in the identification process. All of the above processing is done on a Raspberry Pi device. The obtained MSE parameter is 0.025 and the LBP feature scores are less than 10⁻³ for an image to be matched.
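
    The pre-processing and matching stages could be sketched with OpenCV as follows (a minimal sketch; the file name and the template database are hypothetical placeholders, and the LBP stage is omitted):

      import cv2
      import numpy as np

      def preprocess_palm(gray):
          # Smoothing (Gaussian + median filters) followed by contrast
          # stretching, mirroring the pre-processing stage described above.
          smooth = cv2.GaussianBlur(gray, (5, 5), 0)
          smooth = cv2.medianBlur(smooth, 5)
          return cv2.normalize(smooth, None, 0, 255, cv2.NORM_MINMAX)

      def mse(a, b):
          # Mean Square Error between a query ROI and a database template.
          return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

      # Hypothetical usage; "palm_940nm.png" and `database` are placeholders:
      # roi = preprocess_palm(cv2.imread("palm_940nm.png", cv2.IMREAD_GRAYSCALE))
      # scores = {name: mse(roi, tmpl) for name, tmpl in database.items()}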

  4. Overview of CMOS process and design options for image sensor dedicated to space applications

    NASA Astrophysics Data System (ADS)

    Martin-Gonthier, P.; Magnan, P.; Corbiere, F.

    2005-10-01

    With the growth of huge volume markets (mobile phones, digital cameras, ...), CMOS technologies for image sensors have improved significantly. New process flows have appeared in order to optimize parameters such as quantum efficiency, dark current, and conversion gain. Space applications can of course benefit from these improvements. To illustrate this evolution, this paper reports results from three technologies that have been evaluated with test vehicles composed of several sub-arrays designed with space applications as the target. These three technologies are standard, improved, and sensor-optimized CMOS processes in the 0.35 μm generation. Measurements focus on quantum efficiency, dark current, conversion gain and noise. Other measurements, such as the Modulation Transfer Function (MTF) and crosstalk, are described in [1]. The results have been compared and three categories of CMOS process for image sensors have been identified. Radiation tolerance has also been studied for the improved CMOS process, with the imager hardened by design. Results at 4, 15, 25 and 50 krad demonstrate good ionizing-dose radiation tolerance when specific hardening techniques are applied.

  5. The application of time series models to cloud field morphology analysis

    NASA Technical Reports Server (NTRS)

    Chin, Roland T.; Jau, Jack Y. C.; Weinman, James A.

    1987-01-01

    A modeling method for the quantitative description of remotely sensed cloud field images is presented. A two-dimensional texture modeling scheme based on one-dimensional time series procedures is adopted for this purpose. The time series procedure used is the seasonal autoregressive, moving average (ARMA) process of Box and Jenkins. Cloud field properties such as directionality, clustering and cloud coverage can be retrieved by this method. It has been demonstrated that a cloud field image can be quantitatively defined by a small set of parameters and that synthesized surrogates can be reconstructed from these model parameters. This method enables cloud climatology to be studied quantitatively.
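
    The principle, a texture defined by a small parameter set, can be illustrated with a much simpler two-dimensional autoregressive synthesis (Python; the paper's seasonal ARMA formulation is considerably richer than this sketch):

      import numpy as np

      def ar_texture(shape, a, b, sigma=1.0, rng=None):
          # Simplified 2D autoregressive synthesis: each pixel depends on its
          # upper and left neighbors plus white noise, so the whole texture is
          # defined by the small parameter set (a, b, sigma).
          rng = rng or np.random.default_rng()
          img = np.zeros(shape)
          noise = rng.normal(0, sigma, shape)
          for i in range(1, shape[0]):
              for j in range(1, shape[1]):
                  img[i, j] = a * img[i - 1, j] + b * img[i, j - 1] + noise[i, j]
          return img

      # a + b close to 1 yields strongly correlated, cloud-like structure.
      cloudlike = ar_texture((128, 128), a=0.49, b=0.49,
                             rng=np.random.default_rng(10))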

  6. Detecting jaundice by using digital image processing

    NASA Astrophysics Data System (ADS)

    Castro-Ramos, J.; Toxqui-Quitl, C.; Villa Manriquez, F.; Orozco-Guillen, E.; Padilla-Vivanco, A.; Sánchez-Escobar, JJ.

    2014-03-01

    When strong jaundice is present, babies or adults are usually subjected to clinical exams such as serum bilirubin tests, which can be traumatic for patients. Jaundice often occurs in liver diseases such as hepatitis or liver cancer. In order to avoid additional trauma, we propose to detect jaundice (icterus) in newborns or adults using a pain-free method. By acquiring digital color images of the palms, soles and forehead, we analyze RGB attributes and diffuse reflectance spectra as parameters to characterize patients with or without jaundice, and we correlate these parameters with bilirubin levels. By applying a support vector machine we distinguish between healthy and sick patients.
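
    The classification step could look like the following sketch with scikit-learn (the mean-RGB features and class statistics are invented toy data, not patient measurements):

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import train_test_split

      # Toy stand-in data: mean RGB attributes per patient region; real
      # features would come from palm/sole/forehead images and spectra.
      rng = np.random.default_rng(6)
      healthy = rng.normal([180, 140, 120], 10, size=(50, 3))
      jaundiced = rng.normal([190, 170, 90], 10, size=(50, 3))   # yellow shift
      X = np.vstack([healthy, jaundiced])
      y = np.r_[np.zeros(50), np.ones(50)]

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      clf = SVC(kernel="rbf").fit(X_tr, y_tr)
      print("held-out accuracy:", clf.score(X_te, y_te))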

  7. Slide Set: Reproducible image analysis and batch processing with ImageJ.

    PubMed

    Nanes, Benjamin A

    2015-11-01

    Most imaging studies in the biological sciences rely on analyses that are relatively simple. However, manual repetition of analysis tasks across multiple regions in many images can complicate even the simplest analysis, making record keeping difficult, increasing the potential for error, and limiting reproducibility. While fully automated solutions are necessary for very large data sets, they are sometimes impractical for the small- and medium-sized data sets common in biology. Here we present the Slide Set plugin for ImageJ, which provides a framework for reproducible image analysis and batch processing. Slide Set organizes data into tables, associating image files with regions of interest and other relevant information. Analysis commands are automatically repeated over each image in the data set, and multiple commands can be chained together for more complex analysis tasks. All analysis parameters are saved, ensuring transparency and reproducibility. Slide Set includes a variety of built-in analysis commands and can be easily extended to automate other ImageJ plugins, reducing the manual repetition of image analysis without the set-up effort or programming expertise required for a fully automated solution.

  8. cisTEM, user-friendly software for single-particle image processing.

    PubMed

    Grant, Timothy; Rohou, Alexis; Grigorieff, Nikolaus

    2018-03-07

    We have developed new open-source software called cisTEM (computational imaging system for transmission electron microscopy) for the processing of data for high-resolution electron cryo-microscopy and single-particle averaging. cisTEM features a graphical user interface that is used to submit jobs, monitor their progress, and display results. It implements a full processing pipeline including movie processing, image defocus determination, automatic particle picking, 2D classification, ab-initio 3D map generation from random parameters, 3D classification, and high-resolution refinement and reconstruction. Some of these steps implement newly-developed algorithms; others were adapted from previously published algorithms. The software is optimized to enable processing of typical datasets (2000 micrographs, 200k - 300k particles) on a high-end, CPU-based workstation in half a day or less, comparable to GPU-accelerated processing. Jobs can also be scheduled on large computer clusters using flexible run profiles that can be adapted for most computing environments. cisTEM is available for download from cistem.org. © 2018, Grant et al.

  9. cisTEM, user-friendly software for single-particle image processing

    PubMed Central

    2018-01-01

    We have developed new open-source software called cisTEM (computational imaging system for transmission electron microscopy) for the processing of data for high-resolution electron cryo-microscopy and single-particle averaging. cisTEM features a graphical user interface that is used to submit jobs, monitor their progress, and display results. It implements a full processing pipeline including movie processing, image defocus determination, automatic particle picking, 2D classification, ab-initio 3D map generation from random parameters, 3D classification, and high-resolution refinement and reconstruction. Some of these steps implement newly-developed algorithms; others were adapted from previously published algorithms. The software is optimized to enable processing of typical datasets (2000 micrographs, 200 k – 300 k particles) on a high-end, CPU-based workstation in half a day or less, comparable to GPU-accelerated processing. Jobs can also be scheduled on large computer clusters using flexible run profiles that can be adapted for most computing environments. cisTEM is available for download from cistem.org. PMID:29513216

  10. A comparison of visual statistics for the image enhancement of FORESITE aerial images with those of major image classes

    NASA Astrophysics Data System (ADS)

    Jobson, Daniel J.; Rahman, Zia-ur; Woodell, Glenn A.; Hines, Glenn D.

    2006-05-01

    Aerial images from the Follow-On Radar, Enhanced and Synthetic Vision Systems Integration Technology Evaluation (FORESITE) flight tests with the NASA Langley Research Center's research Boeing 757 were acquired during severe haze and haze/mixed clouds visibility conditions. These images were enhanced using the Visual Servo (VS) process that makes use of the Multiscale Retinex. The images were then quantified with the visual quality metrics used internally within the VS. One of these metrics, the Visual Contrast Measure, has been computed for hundreds of FORESITE images and for major classes of imaging: terrestrial (consumer), orbital Earth observations, orbital Mars surface imaging, NOAA aerial photographs, and underwater imaging. The metric quantifies both the degree of visual impairment of the original, un-enhanced images and the degree of visibility improvement achieved by the enhancement process. The large aggregate data set exhibits trends relating to the degree of atmospheric visibility attenuation and its impact on the limits of enhancement performance for the various image classes. Overall, the results support the idea that in most cases that do not involve extreme reduction in visibility, large gains in visual contrast are routinely achieved by VS processing. Additionally, for very poor visibility imaging, lesser, but still substantial, gains in visual contrast are also routinely achieved. Further, the data suggest that these visual quality metrics can be used as external standalone metrics for establishing performance parameters.

  11. A Comparison of Visual Statistics for the Image Enhancement of FORESITE Aerial Images with Those of Major Image Classes

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J.; Rahman, Zia-ur; Woodell, Glenn A.; Hines, Glenn D.

    2006-01-01

    Aerial images from the Follow-On Radar, Enhanced and Synthetic Vision Systems Integration Technology Evaluation (FORESITE) flight tests with the NASA Langley Research Center's research Boeing 757 were acquired during severe haze and haze/mixed clouds visibility conditions. These images were enhanced using the Visual Servo (VS) process that makes use of the Multiscale Retinex. The images were then quantified with the visual quality metrics used internally within the VS. One of these metrics, the Visual Contrast Measure, has been computed for hundreds of FORESITE images and for major classes of imaging: terrestrial (consumer), orbital Earth observations, orbital Mars surface imaging, NOAA aerial photographs, and underwater imaging. The metric quantifies both the degree of visual impairment of the original, un-enhanced images and the degree of visibility improvement achieved by the enhancement process. The large aggregate data set exhibits trends relating to the degree of atmospheric visibility attenuation and its impact on the limits of enhancement performance for the various image classes. Overall, the results support the idea that in most cases that do not involve extreme reduction in visibility, large gains in visual contrast are routinely achieved by VS processing. Additionally, for very poor visibility imaging, lesser, but still substantial, gains in visual contrast are also routinely achieved. Further, the data suggest that these visual quality metrics can be used as external standalone metrics for establishing performance parameters.

  12. Cryo-imaging of fluorescently labeled single cells in a mouse

    NASA Astrophysics Data System (ADS)

    Steyer, Grant J.; Roy, Debashish; Salvado, Olivier; Stone, Meredith E.; Wilson, David L.

    2009-02-01

    We developed a cryo-imaging system to provide single-cell detection of fluorescently labeled cells in mouse, with particular applicability to stem cells and metastatic cancer. The Case cryo-imaging system consists of a fluorescence microscope, robotic imaging positioner, customized cryostat, PC-based control system, and visualization/analysis software. The system alternates between sectioning (10-40 μm) and imaging, collecting color brightfield and fluorescent blockface image volumes >60 GB. In mouse experiments, we imaged quantum-dot labeled stem cells, GFP-labeled cancer and stem cells, and cell-size fluorescent microspheres. To remove subsurface fluorescence, we used a simplified model of light-tissue interaction whereby the next image was scaled, blurred, and subtracted from the current image. We estimated the scaling and blurring parameters by minimizing the entropy of the subtracted images. Tissue-specific attenuation parameters μT were found [heart (267 ± 47.6 μm), liver (218 ± 27.1 μm), brain (161 ± 27.4 μm)] to be within the range of estimates in the literature. "Next image" processing removed subsurface fluorescence equally well across multiple tissues (brain, kidney, liver, adipose tissue, etc.), and analysis of 200 microsphere images in the brain gave a 97 ± 2% reduction of subsurface fluorescence. Fluorescent signals were determined to arise from single cells based upon geometric and integrated intensity measurements. Next image processing greatly improved axial resolution, enabled high quality 3D volume renderings, and improved enumeration of single cells with connected component analysis by up to 24%. Analysis of image volumes identified metastatic cancer sites, found homing of stem cells to injury sites, and showed microsphere distribution correlated with blood flow patterns. In summary, our cryo-imaging system provides extreme (>60 GB), micron-scale, fluorescence and bright field image data, and the preprocessing, analysis, and visualization techniques described here improve axial resolution, reduce subsurface fluorescence by 97%, and enable single cell detection and counting. High quality 3D volume renderings enable us to evaluate cell distribution patterns. Applications include the myriad of biomedical experiments using fluorescent reporter genes and exogenous fluorophore labeling of cells, in areas such as stem cell regenerative medicine, cancer, and tissue engineering.
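
    The "next image" processing step can be sketched as a small optimization (Python; the Gaussian blur model, the Nelder-Mead optimizer and the toy data are illustrative assumptions consistent with the description above):

      import numpy as np
      from scipy.ndimage import gaussian_filter
      from scipy.optimize import minimize

      def entropy(img, bins=64):
          hist, _ = np.histogram(img, bins=bins)
          p = hist[hist > 0] / hist.sum()
          return -(p * np.log2(p)).sum()

      def remove_subsurface(current, nxt):
          # Scale and blur the next section's image and subtract it from the
          # current one, choosing (scale, blur) that minimize the entropy of
          # the subtracted image.
          def cost(params):
              scale, sigma = params
              return entropy(current - scale * gaussian_filter(nxt, abs(sigma)))
          scale, sigma = minimize(cost, x0=[0.5, 2.0], method="Nelder-Mead").x
          return current - scale * gaussian_filter(nxt, abs(sigma))

      # Toy demo: a bright "cell" one section below leaks into the current image.
      rng = np.random.default_rng(7)
      nxt = np.zeros((64, 64)); nxt[30:34, 30:34] = 1.0
      current = 0.4 * gaussian_filter(nxt, 3.0) + 0.01 * rng.random((64, 64))
      cleaned = remove_subsurface(current, nxt)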

  13. V2S: Voice to Sign Language Translation System for Malaysian Deaf People

    NASA Astrophysics Data System (ADS)

    Mean Foong, Oi; Low, Tang Jung; La, Wai Wan

    The process of learning and understanding sign language may be cumbersome to some; therefore, this paper proposes a solution to this problem by providing a voice (English language) to sign language translation system using speech and image processing techniques. Speech processing, which includes speech recognition, is the study of recognizing the words being spoken regardless of who the speaker is. This project uses template-based recognition as the main approach, in which the V2S system first needs to be trained with speech patterns based on a generic spectral parameter set. These spectral parameter sets are then stored as templates in a database. The system performs the recognition process by matching the parameter set of the input speech with the stored templates, and finally displays the sign language in video format. Empirical results show that the system has an 80.3% recognition rate.

  14. Image processing techniques for noise removal, enhancement and segmentation of cartilage OCT images

    NASA Astrophysics Data System (ADS)

    Rogowska, Jadwiga; Brezinski, Mark E.

    2002-02-01

    Osteoarthritis, whose hallmark is the progressive loss of joint cartilage, is a major cause of morbidity worldwide. Recently, optical coherence tomography (OCT) has demonstrated considerable promise for the assessment of articular cartilage. Among the most important parameters to be assessed is cartilage width. However, detection of the bone cartilage interface is critical for the assessment of cartilage width. At present, the quantitative evaluations of cartilage thickness are being done using manual tracing of cartilage-bone borders. Since data is being obtained near video rate with OCT, automated identification of the bone-cartilage interface is critical. In order to automate the process of boundary detection on OCT images, there is a need for developing new image processing techniques. In this paper we describe the image processing techniques for speckle removal, image enhancement and segmentation of cartilage OCT images. In particular, this paper focuses on rabbit cartilage since this is an important animal model for testing both chondroprotective agents and cartilage repair techniques. In this study, a variety of techniques were examined. Ultimately, by combining an adaptive filtering technique with edge detection (vertical gradient, Sobel edge detection), cartilage edges can be detected. The procedure requires several steps and can be automated. Once the cartilage edges are outlined, the cartilage thickness can be measured.
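
    A bare-bones version of the despeckle-then-edge-detect pipeline follows (Python; a median filter stands in for the adaptive filter described above, and the percentile threshold is an arbitrary choice):

      import numpy as np
      from scipy.ndimage import median_filter, sobel

      def cartilage_edges(oct_image):
          # Speckle reduction followed by a vertical-gradient (Sobel) edge map.
          despeckled = median_filter(oct_image, size=3)
          grad = np.abs(sobel(despeckled, axis=0))   # gradient along depth
          return grad > np.percentile(grad, 95)      # arbitrary threshold

      # Surface and bone-cartilage interface rows can then be read off per
      # column as the first and last edge pixels, giving cartilage thickness.
      rng = np.random.default_rng(8)
      img = np.zeros((100, 60)); img[20:60, :] = 1.0   # toy cartilage band
      edges = cartilage_edges(img + 0.1 * rng.random(img.shape))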

  15. Volumetric breast density measurement: sensitivity analysis of a relative physics approach

    PubMed Central

    Lau, Susie; Abdul Aziz, Yang Faridah

    2016-01-01

    Objective: To investigate the sensitivity and robustness of a volumetric breast density (VBD) measurement system to errors in the imaging physics parameters including compressed breast thickness (CBT), tube voltage (kVp), filter thickness, tube current-exposure time product (mAs), detector gain, detector offset and image noise. Methods: 3317 raw digital mammograms were processed with Volpara® (Matakina Technology Ltd, Wellington, New Zealand) to obtain fibroglandular tissue volume (FGV), breast volume (BV) and VBD. Errors in parameters including CBT, kVp, filter thickness and mAs were simulated by varying them in the Digital Imaging and Communications in Medicine (DICOM) tags of the images up to ±10% of the original values. Errors in detector gain and offset were simulated by varying them in the Volpara configuration file up to ±10% from their default values. For image noise, Gaussian noise was generated and introduced into the original images. Results: Errors in filter thickness, mAs, detector gain and offset had limited effects on FGV, BV and VBD. Significant effects in VBD were observed when CBT, kVp, detector offset and image noise were varied (p < 0.0001). Maximum shifts in the mean (1.2%) and median (1.1%) VBD of the study population occurred when CBT was varied. Conclusion: Volpara was robust to expected clinical variations, with errors in most investigated parameters giving limited changes in results, although extreme variations in CBT and kVp could lead to greater errors. Advances in knowledge: Despite Volpara's robustness, rigorous quality control is essential to keep the parameter errors within reasonable bounds. Volpara appears robust within those bounds, albeit for more advanced applications such as tracking density change over time, it remains to be seen how accurate the measures need to be. PMID:27452264

  16. Volumetric breast density measurement: sensitivity analysis of a relative physics approach.

    PubMed

    Lau, Susie; Ng, Kwan Hoong; Abdul Aziz, Yang Faridah

    2016-10-01

    To investigate the sensitivity and robustness of a volumetric breast density (VBD) measurement system to errors in the imaging physics parameters including compressed breast thickness (CBT), tube voltage (kVp), filter thickness, tube current-exposure time product (mAs), detector gain, detector offset and image noise. 3317 raw digital mammograms were processed with Volpara® (Matakina Technology Ltd, Wellington, New Zealand) to obtain fibroglandular tissue volume (FGV), breast volume (BV) and VBD. Errors in parameters including CBT, kVp, filter thickness and mAs were simulated by varying them in the Digital Imaging and Communications in Medicine (DICOM) tags of the images up to ±10% of the original values. Errors in detector gain and offset were simulated by varying them in the Volpara configuration file up to ±10% from their default values. For image noise, Gaussian noise was generated and introduced into the original images. Errors in filter thickness, mAs, detector gain and offset had limited effects on FGV, BV and VBD. Significant effects in VBD were observed when CBT, kVp, detector offset and image noise were varied (p < 0.0001). Maximum shifts in the mean (1.2%) and median (1.1%) VBD of the study population occurred when CBT was varied. Volpara was robust to expected clinical variations, with errors in most investigated parameters giving limited changes in results, although extreme variations in CBT and kVp could lead to greater errors. Despite Volpara's robustness, rigorous quality control is essential to keep the parameter errors within reasonable bounds. Volpara appears robust within those bounds, albeit for more advanced applications such as tracking density change over time, it remains to be seen how accurate the measures need to be.

  17. The effect of image sharpness on quantitative eye movement data and on image quality evaluation while viewing natural images

    NASA Astrophysics Data System (ADS)

    Vuori, Tero; Olkkonen, Maria

    2006-01-01

    The aim of this study is to test both customer image quality ratings (subjective image quality) and physical measurement of user behavior (eye movement tracking) to find customer satisfaction differences between imaging technologies. A methodological aim is to find out whether eye movements could be used quantitatively in image quality preference studies. In general, we want to map objective or physically measurable image quality to subjective evaluations and eye movement data. We conducted a series of image quality tests in which the test subjects evaluated image quality while we recorded their eye movements. Results show that eye movement parameters change consistently according to the instructions given to the user and according to physical image quality; e.g., saccade duration increased with increasing blur. The results indicate that eye movement tracking could be used to differentiate the image quality evaluation strategies that users adopt. The results also show that eye movements would help in mapping between technological and subjective image quality. Furthermore, these results give some empirical support to top-down perception processes in image quality perception and evaluation by showing differences between perceptual processes in situations where the cognitive task varies.

  18. Liquid Jets in Crossflow at Elevated Temperatures and Pressures

    NASA Astrophysics Data System (ADS)

    Amighi, Amirreza

    An experimental study on the characterization of liquid jets injected into subsonic air crossflows is conducted. The aim of the study is to relate the droplet size and other attributes of the spray, such as breakup length, position, plume width, and breakup time, to flow parameters, including jet and air velocities, pressure and temperature, as well as to non-dimensional variables. Furthermore, multiple expressions are defined that summarize the general behavior of the spray. For this purpose, an experimental setup is developed that can withstand high temperatures and pressures to simulate conditions close to those experienced inside gas turbine engines. Images are captured using a laser based shadowgraphy system similar to a 2D PIV system. Image processing is used extensively to measure droplet size and the boundaries of the spray. In total, 209 different conditions are tested and over 72,000 images are captured and processed. The crossflow air temperatures are 25°C, 200°C, and 300°C; the absolute crossflow air pressures are 2.1, 3.8, and 5.2 bar. Various liquid and gas velocities are tested for each given temperature and pressure in order to study the breakup mechanisms and regimes. The effects of dimensional and non-dimensional variables on droplet size are presented in detail. Several correlations for the mean droplet size, generated in this process, are presented. In addition, the influence of non-dimensional variables on the breakup length, time, plume area, angle, width and mean jet surface thickness is discussed, and individual correlations are provided for each parameter. The influence of each individual parameter on the droplet sizes is discussed for a better understanding of the fragmentation process. Finally, new correlations for the centerline, windward and leeward trajectories are presented and compared to previously reported correlations.

  19. Design of Experiments to Study the Impact of Process Parameters on Droplet Size and Development of Non-Invasive Imaging Techniques in Tablet Coating

    PubMed Central

    Dennison, Thomas J.; Smith, Julian; Hofmann, Michael P.; Bland, Charlotte E.; Badhan, Raj K.; Al-Khattawi, Ali; Mohammed, Afzal R.

    2016-01-01

    Atomisation of an aqueous solution for tablet film coating is a complex process with multiple factors determining droplet formation and properties. The importance of droplet size for an efficient process and a high quality final product has been noted in the literature, with smaller droplets reported to produce smoother, more homogenous coatings whilst simultaneously avoiding the risk of damage through over-wetting of the tablet core. In this work the effect of droplet size on tablet film coat characteristics was investigated using X-ray microcomputed tomography (XμCT) and confocal laser scanning microscopy (CLSM). A quality by design approach utilising design of experiments (DOE) was used to optimise the conditions necessary for production of droplets at a small (20 μm) and large (70 μm) droplet size. Droplet size distribution was measured using real-time laser diffraction and the volume median diameter taken as a response. DOE yielded information on the relationship that three critical process parameters (pump rate, atomisation pressure and coating-polymer concentration) had upon droplet size. The model generated was robust, scoring highly for model fit (R2 = 0.977), predictability (Q2 = 0.837), validity and reproducibility. Modelling confirmed that all parameters had either a linear or quadratic effect on droplet size and revealed an interaction between pump rate and atomisation pressure. Fluidised bed coating of tablet cores was performed with either small or large droplets followed by CLSM and XμCT imaging. Addition of commonly used contrast materials to the coating solution improved visualisation of the coating by XμCT, showing the coat as a discrete section of the overall tablet. Imaging provided qualitative and quantitative evidence revealing that smaller droplets formed thinner, more uniform and less porous film coats. PMID:27548263

  20. Image Processing Of Images From Peripheral-Artery Digital Subtraction Angiography (DSA) Studies

    NASA Astrophysics Data System (ADS)

    Wilson, David L.; Tarbox, Lawrence R.; Cist, David B.; Faul, David D.

    1988-06-01

    A system is being developed to test the possibility of performing peripheral digital subtraction angiography (DSA) with a single contrast injection using a moving gantry system. Given the repositioning errors that occur between the mask and contrast-containing images, factors affecting the success of subtractions following image registration have been investigated theoretically and experimentally. For a 1 mm gantry displacement, parallax and geometric image distortion (pin-cushion) both give subtraction errors following registration that are approximately 25% of the error resulting from no registration. Image processing techniques improve the subtractions. The geometric distortion effect is reduced using a piece-wise, 8-parameter unwarping method. Plots of image similarity measures versus pixel shift are well behaved and well fit by a parabola, leading to the development of an iterative, automatic registration algorithm that uses parabolic prediction of the new minimum. The registration algorithm converges quickly (less than 1 second on a MicroVAX) and is relatively immune to the region of interest (ROI) selected.
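
    The parabolic-prediction idea is compact enough to sketch. A minimal one-dimensional version in Python/NumPy, assuming a sum-of-squared-differences similarity measure and integer column shifts; the paper's actual similarity measure and 2D search are not reproduced here:

      import numpy as np

      def ssd(a, b):
          """Sum-of-squared-differences similarity between two images."""
          return float(np.sum((a - b) ** 2))

      def register_shift_1d(mask, contrast, x0=0.0, step=2.0, iters=10):
          """Iteratively predict the similarity minimum via a parabola fit."""
          x = x0
          for _ in range(iters):
              xs = [x - step, x, x + step]
              ys = [ssd(np.roll(contrast, int(round(s)), axis=1), mask)
                    for s in xs]
              denom = ys[0] - 2 * ys[1] + ys[2]
              if abs(denom) < 1e-12:
                  break  # similarity is locally flat; stop
              # Vertex of the parabola through the three sampled points.
              x = x + step * (ys[0] - ys[2]) / (2 * denom)
              step *= 0.5  # shrink the bracket as the estimate converges
          return x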

  1. High speed multiphoton imaging

    NASA Astrophysics Data System (ADS)

    Li, Yongxiao; Brustle, Anne; Gautam, Vini; Cockburn, Ian; Gillespie, Cathy; Gaus, Katharina; Lee, Woei Ming

    2016-12-01

    Intravital multiphoton microscopy has emerged as a powerful technique to visualize cellular processes in vivo. Real-time processes revealed through live imaging provide many opportunities to capture cellular activities in living animals. The typical parameters that determine the performance of multiphoton microscopy are speed, field of view, 3D imaging capability and imaging depth; many of these are important for acquiring data in vivo. Here, we provide a full exposition of a flexible polygon-mirror-based high-speed laser scanning multiphoton imaging system, built around a PCI-6110 card (National Instruments) and a high-speed analog frame grabber card (Matrox Solios eA/XA), which allows rapid adjustment of frame rates, e.g. from 5 Hz to 50 Hz at 512 × 512 pixels. Furthermore, a motion correction algorithm is used to mitigate motion artifacts. Customized control software, called Pscan 1.0, was developed for the system. This is followed by calibration of the imaging performance of the system and a series of quantitative in vitro and in vivo imaging experiments in neuronal tissues and mice.

  2. Multimodal imaging of ischemic wounds

    NASA Astrophysics Data System (ADS)

    Zhang, Shiwu; Gnyawali, Surya; Huang, Jiwei; Liu, Peng; Gordillo, Gayle; Sen, Chandan K.; Xu, Ronald

    2012-12-01

    The wound healing process involves the reparative phases of inflammation, proliferation, and remodeling. Interrupting any of these phases may result in chronically unhealed wounds, amputation, or even patient death. Quantitative assessment of wound tissue ischemia, perfusion, and inflammation provides critical information for appropriate detection, staging, and treatment of chronic wounds. However, no method is available for noninvasive, simultaneous, and quantitative imaging of these tissue parameters. We integrated hyperspectral, laser speckle, and thermographic imaging modalities into a single setup for multimodal assessment of tissue oxygenation, perfusion, and inflammation characteristics. Advanced algorithms were developed for accurate reconstruction of wound oxygenation and appropriate co-registration between the different imaging modalities. The multimodal wound imaging system was validated in an ongoing clinical trial approved by the OSU IRB. In the trial, a wound of 3 mm in diameter was introduced on a healthy subject's lower extremity and the healing process was serially monitored by the multimodal imaging setup. Our experiments demonstrated the clinical usability of multimodal wound imaging.

  3. Ultra-high-speed variable focus optics for novel applications in advanced imaging

    NASA Astrophysics Data System (ADS)

    Kang, S.; Dotsenko, E.; Amrhein, D.; Theriault, C.; Arnold, C. B.

    2018-02-01

    With the advancement of ultra-fast manufacturing technologies, high speed imaging with high 3D resolution has become increasingly important. Here we show the use of an ultra-high-speed variable focus optical element, the TAG Lens, to enable new ways to acquire 3D information from an object. The TAG Lens uses sound to adjust the index of refraction profile in a liquid and can thereby achieve focal scanning rates greater than 100 kHz. By combining the lens with a high-speed pulsed LED and a high-speed camera, we can exploit this phenomenon to achieve high-resolution imaging through large depths. By combining the image acquisition with digital image processing, we can extract relevant parameters such as tilt and angle information from objects in the image. Due to the high speeds at which images can be collected and processed, we believe this technique can be used as an efficient method of industrial inspection and metrology for high-throughput applications.

  4. Super-resolution for scanning light stimulation systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bitzer, L. A.; Neumann, K.; Benson, N., E-mail: niels.benson@uni-due.de

    Super-resolution (SR) is a technique used in digital image processing to overcome the resolution limitation of imaging systems. In this process, a single high-resolution image is reconstructed from multiple low-resolution images. SR is commonly used for CCD and CMOS (Complementary Metal-Oxide-Semiconductor) sensor images, as well as for medical applications, e.g., magnetic resonance imaging. Here, we demonstrate that super-resolution can be applied with scanning light stimulation (LS) systems, which are commonly used to obtain space-resolved electro-optical parameters of a sample. For our purposes, the Projection Onto Convex Sets (POCS) method was chosen and modified to suit the needs of LS systems. To demonstrate the SR adaptation, an Optical Beam Induced Current (OBIC) LS system was used. The POCS algorithm was optimized by means of OBIC short circuit current measurements on a multicrystalline solar cell, resulting in a mean square error reduction of up to 61% and improved image quality.
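
    A minimal POCS-style super-resolution loop, for illustration only: each low-resolution frame defines a convex constraint set (block averages of the suitably shifted high-resolution estimate must equal the observed pixels), and the estimate is cyclically projected onto each set. Integer shifts and a plain block-average camera model are simplifying assumptions, not the modified algorithm of the paper:

      import numpy as np

      def pocs_sr(lr_images, shifts, factor=2, iters=20):
          """Reconstruct a high-res image from shifted low-res frames."""
          h, w = lr_images[0].shape
          hr = np.zeros((h * factor, w * factor))
          for _ in range(iters):
              for lr, (dy, dx) in zip(lr_images, shifts):
                  shifted = np.roll(hr, (-dy, -dx), axis=(0, 1))
                  # Block means of the current HR estimate.
                  blocks = shifted.reshape(h, factor, w, factor).mean(axis=(1, 3))
                  # Project: spread the residual uniformly over each block.
                  resid = np.repeat(np.repeat(lr - blocks, factor, axis=0),
                                    factor, axis=1)
                  hr = np.roll(shifted + resid, (dy, dx), axis=(0, 1))
          return hr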

  5. Experiment research on infrared targets signature in mid and long IR spectral bands

    NASA Astrophysics Data System (ADS)

    Wang, Chensheng; Hong, Pu; Lei, Bo; Yue, Song; Zhang, Zhijie; Ren, Tingting

    2013-09-01

    Since the infrared imaging system plays a significant role in military self-defense and fire control systems, the radiation signature of IR targets has become an important topic in IR imaging application technology. IR target signatures can be applied to target identification, especially for small and dim targets, as well as to target IR thermal design. To research and analyze target IR signatures systematically, a practical experimental campaign was carried out under different backgrounds and conditions. An infrared radiation acquisition system based on an MWIR cooled thermal imager and an LWIR cooled thermal imager was developed to capture the digital infrared images. Furthermore, some instruments were introduced to provide other parameters. From the original image data and the related parameters in a certain scene, the IR signature of the target scene of interest can be calculated. Different backgrounds and targets were measured with this approach, and a comparative experimental analysis is presented in this paper as an example. This practical experiment has demonstrated the validity of the research work, and it is useful for detection performance evaluation and further target identification research.

  6. Smart Contrast Agents for Magnetic Resonance Imaging.

    PubMed

    Bonnet, Célia S; Tóth, Éva

    2016-01-01

    By visualizing bioactive molecules or biological parameters in vivo, molecular imaging is searching for information at the molecular level in living organisms. In addition to contributing to earlier and more personalized diagnosis in medicine, it also helps understand and rationalize the molecular factors underlying physiological and pathological processes. In magnetic resonance imaging (MRI), complexes of paramagnetic metal ions, mostly lanthanides, are commonly used to enhance the intrinsic image contrast. They rely either on the relaxation effect of these metal chelates (T1 agents), or on the phenomenon of paramagnetic chemical exchange saturation transfer (PARACEST agents). In both cases, responsive molecular magnetic resonance imaging probes can be designed to report on various biomarkers of biological interest. In this context, we review recent work in the literature and from our group on responsive T1 and PARACEST MRI agents for the detection of biogenic metal ions (such as calcium or zinc), enzymatic activities, or neurotransmitter release. These examples illustrate the general strategies that can be applied to create molecular imaging agents with an MRI detectable response to biologically relevant parameters.

  7. Contrast in Terahertz Images of Archival Documents—Part II: Influence of Topographic Features

    NASA Astrophysics Data System (ADS)

    Bardon, Tiphaine; May, Robert K.; Taday, Philip F.; Strlič, Matija

    2017-04-01

    We investigate the potential of terahertz time-domain imaging in reflection mode to reveal archival information in documents in a non-invasive way. In particular, this study explores the parameters and signal processing tools that can be used to produce well-contrasted terahertz images of topographic features commonly found in archival documents, such as indentations left by a writing tool, as well as sieve lines. While the amplitude of the waveforms at a specific time delay can provide the most contrasted and legible images of topographic features on flat paper or parchment sheets, this parameter may not be suitable for documents that have a highly irregular surface, such as water- or fire-damaged documents. For analysis of such documents, cross-correlation of the time-domain signals can instead yield images with good contrast. Analysis of the frequency-domain representation of terahertz waveforms can also provide well-contrasted images of topographic features, with improved spatial resolution when utilising high-frequency content. Finally, we point out some of the limitations of these means of analysis for extracting information relating to topographic features of interest from documents.

  8. The impact of robustness of deformable image registration on contour propagation and dose accumulation for head and neck adaptive radiotherapy.

    PubMed

    Zhang, Lian; Wang, Zhi; Shi, Chengyu; Long, Tengfei; Xu, X George

    2018-05-30

    Deformable image registration (DIR) is the key process for contour propagation and dose accumulation in adaptive radiation therapy (ART). However, ART currently suffers from a limited understanding of the "robustness" of DIR-based contour propagation and of the subsequent dose variations caused by the algorithm itself and its presetting parameters. The purpose of this research is to evaluate the DIR-caused variations in contour propagation and dose accumulation during ART using the RayStation treatment planning system. Ten head and neck cancer patients were selected for retrospective studies. Contours were drawn by a single radiation oncologist and new treatment plans were generated on the weekly CT scans for all patients. For each DIR process, four deformation vector fields (DVFs) were generated to propagate contours and accumulate weekly dose by the following algorithms: (a) ANACONDA with simple presetting parameters, (b) ANACONDA with detailed presetting parameters, (c) MORFEUS with simple presetting parameters, and (d) MORFEUS with detailed presetting parameters. The geometric evaluation considered the DICE coefficient and Hausdorff distance. The dosimetric evaluation included D95, Dmax, Dmean, Dmin, and the Homogeneity Index. For the geometric evaluation, the DICE coefficient variations of the GTV were found to be 0.78 ± 0.11, 0.96 ± 0.02, 0.64 ± 0.15, and 0.91 ± 0.03 for simple ANACONDA, detailed ANACONDA, simple MORFEUS, and detailed MORFEUS, respectively. For the dosimetric evaluation, the corresponding Homogeneity Index variations were found to be 0.137 ± 0.115, 0.006 ± 0.032, 0.197 ± 0.096, and 0.006 ± 0.033, respectively. Consistent geometric and dosimetric variations were also observed for both large and small organs. Overall, the results demonstrated that contour propagation and dose accumulation in clinical ART were influenced by the DIR algorithm, and to a greater extent by the presetting parameters. A quality assurance procedure should be established for the proper use of a commercial DIR for adaptive radiation therapy. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
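
    The two geometric metrics used here are standard and straightforward to compute. A sketch with NumPy/SciPy, assuming the propagated and reference contours are available as boolean masks:

      import numpy as np
      from scipy.spatial.distance import directed_hausdorff

      def dice(a, b):
          """DICE coefficient between two boolean masks."""
          a, b = a.astype(bool), b.astype(bool)
          return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

      def hausdorff(a, b):
          """Symmetric Hausdorff distance between two masks (voxel units)."""
          pa, pb = np.argwhere(a), np.argwhere(b)
          return max(directed_hausdorff(pa, pb)[0],
                     directed_hausdorff(pb, pa)[0])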

  9. Learning Photogrammetry with Interactive Software Tool PhoX

    NASA Astrophysics Data System (ADS)

    Luhmann, T.

    2016-06-01

    Photogrammetry is a complex topic in high-level university teaching, especially in the fields of geodesy, geoinformatics and metrology where high-quality results are demanded. In addition, more and more black-box solutions for 3D image processing and point cloud generation are available that generate nice results easily, e.g. by structure-from-motion approaches. Within this context, the classical approach to teaching photogrammetry (e.g. focusing on aerial stereophotogrammetry) has to be reformed in order to educate students and professionals in new topics and provide them with more information about what happens behind the scenes. For around 20 years, photogrammetry courses at the Jade University of Applied Sciences in Oldenburg, Germany, have included the use of digital photogrammetry software that provides individual exercises, deep analysis of calculation results and a wide range of visualization tools for almost all standard tasks in photogrammetry. In recent years the software package PhoX has been developed as part of a new didactic concept in photogrammetry and related subjects. It also serves as an analysis tool in recent research projects. PhoX consists of a project-oriented data structure for images, image data, measured points and features, and 3D objects. It provides almost all basic photogrammetric measurement tools, image processing, calculation methods, graphical analysis functions, simulations and much more. Students use the program to conduct predefined exercises in which they have the opportunity to analyse results at a high level of detail. This includes the analysis of statistical quality parameters but also the meaning of transformation parameters, rotation matrices, calibration and orientation data. As one specific advantage, PhoX allows for the interactive modification of single parameters and a direct view of the resulting effect in image or object space.

  10. Multifractal geometry in analysis and processing of digital retinal photographs for early diagnosis of human diabetic macular edema.

    PubMed

    Tălu, Stefan

    2013-07-01

    The purpose of this paper is to determine a quantitative assessment of the human retinal vascular network architecture for patients with diabetic macular edema (DME). Multifractal geometry and lacunarity parameters are used in this study. A set of 10 segmented and skeletonized human retinal images, corresponding to both normal (five images) and DME (five images) states of the retina, from the DRIVE database was analyzed using the ImageJ software. Statistical analyses were performed using Microsoft Office Excel 2003 and GraphPad InStat software. The human retinal vascular network architecture has a multifractal geometry. The average of the generalized dimensions (Dq) for q = 0, 1, 2 of the normal images (segmented versions) is similar to that of the DME cases (segmented versions). The average of the generalized dimensions (Dq) for q = 0, 1 of the normal images (skeletonized versions) is slightly greater than that of the DME cases (skeletonized versions). However, the average of D2 for the normal images (skeletonized versions) is similar to that of the DME images. The average lacunarity parameter, Λ, for the normal images (segmented and skeletonized versions) is slightly lower than the corresponding values for the DME images (segmented and skeletonized versions). Multifractal and lacunarity analysis provides a non-invasive, complementary predictive tool for the early diagnosis of patients with DME.
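
    For readers unfamiliar with the quantities involved, the generalized dimensions Dq and the gliding-box lacunarity can be estimated with a short box-counting routine. A NumPy sketch for binary images; the study itself used ImageJ, and q = 1 requires the separate entropy limit not shown here:

      import numpy as np
      from numpy.lib.stride_tricks import sliding_window_view

      def generalized_dimension(img, q, sizes=(2, 4, 8, 16, 32)):
          """Box-counting estimate of D_q for a binary image (q != 1)."""
          log_s, log_m = [], []
          for s in sizes:
              h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
              boxes = img[:h, :w].reshape(h // s, s, w // s, s).sum(axis=(1, 3))
              p = boxes / boxes.sum()          # box occupancy probabilities
              p = p[p > 0]
              log_s.append(np.log(s))
              log_m.append(np.log(np.sum(p ** q)) / (q - 1))
          slope, _ = np.polyfit(log_s, log_m, 1)   # D_q is the slope
          return slope

      def lacunarity(img, r):
          """Gliding-box lacunarity: E[M^2] / E[M]^2 for box size r."""
          m = sliding_window_view(img, (r, r)).sum(axis=(2, 3)).ravel()
          return m.var() / m.mean() ** 2 + 1.0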

  11. Robust low-dose dynamic cerebral perfusion CT image restoration via coupled dictionary learning scheme.

    PubMed

    Tian, Xiumei; Zeng, Dong; Zhang, Shanli; Huang, Jing; Zhang, Hua; He, Ji; Lu, Lijun; Xi, Weiwen; Ma, Jianhua; Bian, Zhaoying

    2016-11-22

    Dynamic cerebral perfusion x-ray computed tomography (PCT) imaging has been advocated to quantitatively and qualitatively assess hemodynamic parameters in the diagnosis of acute stroke or chronic cerebrovascular diseases. However, the associated radiation dose is a significant concern to patients due to its dynamic scan protocol. To address this issue, in this paper we propose an image restoration method that utilizes a coupled dictionary learning (CDL) scheme to yield clinically acceptable PCT images with low-dose data acquisition. Specifically, in the present CDL scheme, the 2D background information from the average of the baseline time frames of low-dose unenhanced CT images and the 3D enhancement information from normal-dose sequential cerebral PCT images are exploited to train the dictionary atoms respectively. After obtaining the two trained dictionaries, we couple them to represent the desired PCT images as a spatio-temporal prior in the objective function construction. Finally, the low-dose dynamic cerebral PCT images are restored using general dictionary-learning-based image processing. To obtain a robust solution, the objective function is solved using a modified dictionary-learning-based image restoration algorithm. The experimental results on clinical data show that the present method can yield more accurate kinetic enhanced details and diagnostic hemodynamic parameter maps than the state-of-the-art methods.

  12. SPECT System Optimization Against A Discrete Parameter Space

    PubMed Central

    Meng, L. J.; Li, N.

    2013-01-01

    In this paper, we present an analytical approach for optimizing the design of a static SPECT system, or optimizing the sampling strategy of a variable/adaptive SPECT imaging hardware, against an arbitrarily given set of system parameters. This approach has three key aspects. First, it is designed to operate over a discretized system parameter space. Second, we introduce the artificial concept of a virtual detector as the basic building block of an imaging system. With a SPECT system described as a collection of virtual detectors, one can convert the task of system optimization into the process of finding the optimum imaging time distribution (ITD) across all virtual detectors. Third, the optimization problem (finding the optimum ITD) can be solved with a block-iterative approach or other non-linear optimization algorithms. In essence, the resultant optimum ITD provides a quantitative measure of the relative importance (or effectiveness) of the virtual detectors and helps to identify the system configuration or sampling strategy that leads to optimum imaging performance. Although we use SPECT imaging as a platform to demonstrate the system optimization strategy, this development also provides a useful framework for system optimization problems in other modalities, such as positron emission tomography (PET) and X-ray computed tomography (CT) [1, 2]. PMID:23587609

  13. Automatic estimation of elasticity parameters in breast tissue

    NASA Astrophysics Data System (ADS)

    Skerl, Katrin; Cochran, Sandy; Evans, Andrew

    2014-03-01

    Shear wave elastography (SWE), a novel ultrasound imaging technique, can provide unique information about cancerous tissue. To estimate elasticity parameters, a region of interest (ROI) is manually positioned over the stiffest part of the shear wave image (SWI). The aim of this work is to estimate the elasticity parameters, i.e. mean elasticity, maximal elasticity and standard deviation, fully automatically. Ultrasonic SWIs of a breast elastography phantom and of breast tissue in vivo were acquired using the Aixplorer system (SuperSonic Imagine, Aix-en-Provence, France). First, the SWI within the ultrasonic B-mode image was detected using MATLAB, and then the elasticity values were extracted. The ROI was automatically positioned over the stiffest part of the SWI and the elasticity parameters were calculated. Finally, all values were saved in a spreadsheet that also contains the patient's study ID. This spreadsheet is readily available to physicians and clinical staff for further evaluation, thereby increasing efficiency. The algorithm simplifies handling, especially for the performance and evaluation of clinical trials. The SWE processing method gives physicians easy access to the elasticity parameters of examinations from their own and other institutions. This reduces clinical time and effort and simplifies the evaluation of data in clinical trials. Furthermore, reproducibility will be improved.

  14. TU-FG-209-04: Testing of Digital Image Receptors Using AAPM TG-150’s Draft Recommendations - Investigating the Impact of Different Processing Parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finley, C; Dave, J

    Purpose: To evaluate implementation of AAPM TG-150’s draft recommendations via a parameter study for testing the performance of digital image receptors. Methods: Flat field images were acquired from 9 calibrated digital image receptors associated with 9 new portable digital radiography systems (Carestream Health, Inc.) based on the draft recommendations and manufacturer-specified calibration conditions (sets of 4 images at input detector air kerma ranging from 1 to 25 µGy). The effects of the exposure response function (linearized and logarithmic), ‘Presentation Intent Type’ (‘For Processing’ and ‘For Presentation’), detector orientation with respect to the anode-cathode axis (4 orientations; 90° rotations per iteration), different ROI sizes (5×5–40×40 mm²) and elimination of varying widths of image border (0 mm, i.e., without boundary elimination, to 150 mm) on signal, noise, signal-to-noise ratio (SNR) and the associated nonuniformities were evaluated. Images were analyzed in Matlab and quantities were compared using ANOVA. Results: Signal, noise and SNR values averaged over the 9 systems with the default parameter values in the draft recommendations were 4837.2±139.4, 19.7±0.9 and 246.4±10.1 (mean ± standard deviation), respectively (at input detector air kerma: 12.5 µGy). Signal, noise and SNR showed a characteristic dependency on the exposure response function and on ‘Presentation Intent Type’. These values were not affected by ROI size or detector orientation, but analysis showed that eliminating the edge pixels along the boundary was required for the noise parameter (coefficient of variation range for noise: 72%–106% and 3%–4% without and with boundary elimination, respectively). Local and global nonuniformities showed a similar dependence on the need for boundary elimination. Interestingly, the computed non-uniformities agreed with manufacturer-reported values except for noise non-uniformities in two units; artifacts were seen in images from these two units, highlighting the importance of independent evaluations. Conclusion: The effect of different parameters on the performance characterization of digital image receptors was evaluated based on TG-150’s draft recommendations.

  15. Pilot Study for OCT Guided Design and Fit of a Prosthetic Device for Treatment of Corneal Disease.

    PubMed

    Le, Hong-Gam T; Tang, Maolong; Ridges, Ryan; Huang, David; Jacobs, Deborah S

    2012-01-01

    Purpose. To assess optical coherence tomography (OCT) for guiding design and fit of a prosthetic device for corneal disease. Methods. A prototype time domain OCT scanner was used to image the anterior segment of patients fitted with large diameter (18.5-20 mm) prosthetic devices for corneal disease. OCT images were processed and analyzed to characterize corneal diameter, corneal sagittal height, scleral sagittal height, scleral toricity, and alignment of device. Within-subject variance of OCT-measured parameters was evaluated. OCT-measured parameters were compared with device parameters for each eye fitted. OCT image correspondence with ocular alignment and clinical fit was assessed. Results. Six eyes in 5 patients were studied. OCT measurement of corneal diameter (coefficient of variation, CV = 0.76%), cornea sagittal height (CV = 2.06%), and scleral sagittal height (CV = 3.39%) is highly repeatable within each subject. OCT image-derived measurements reveal strong correlation between corneal sagittal height and device corneal height (r = 0.975) and modest correlation between scleral and on-eye device toricity (r = 0.581). Qualitative assessment of a fitted device on OCT montages reveals correspondence with slit lamp images and clinical assessment of fit. Conclusions. OCT imaging of the anterior segment is suitable for custom design and fit of large diameter (18.5-20 mm) prosthetic devices used in the treatment of corneal disease.

  16. Iterative image reconstruction that includes a total variation regularization for radial MRI.

    PubMed

    Kojima, Shinya; Shinohara, Hiroyuki; Hashimoto, Takeyuki; Hirata, Masami; Ueno, Eiko

    2015-07-01

    This paper presents an iterative image reconstruction method for radial encodings in MRI based on a total variation (TV) regularization. The algebraic reconstruction method combined with total variation regularization (ART_TV) is implemented with a regularization parameter specifying the weight of the TV term in the optimization process. We used numerical simulations of a Shepp-Logan phantom, as well as experimental imaging of a phantom that included a rectangular-wave chart, to evaluate the performance of ART_TV, and to compare it with that of the Fourier transform (FT) method. The trade-off between spatial resolution and signal-to-noise ratio (SNR) was investigated for different values of the regularization parameter by experiments on a phantom and a commercially available MRI system. ART_TV was inferior to the FT with respect to the evaluation of the modulation transfer function (MTF), especially at high frequencies; however, it outperformed the FT with regard to the SNR. In accordance with the results of SNR measurement, visual impression suggested that the image quality of ART_TV was better than that of the FT for reconstruction of a noisy image of a kiwi fruit. In conclusion, ART_TV provides radial MRI with improved image quality for low-SNR data; however, the regularization parameter in ART_TV is a critical factor for obtaining improvement over the FT.
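
    A minimal sketch of the ART_TV idea, interleaving Kaczmarz (ART) sweeps with a descent step on the total-variation term; it treats the acquisition as a generic real linear model rather than complex radial k-space, and the relaxation and TV weights are illustrative, not the paper's values:

      import numpy as np

      def tv_gradient(x, eps=1e-8):
          """Smoothed subgradient of the isotropic TV of a 2D image."""
          gx = np.diff(x, axis=1, append=x[:, -1:])
          gy = np.diff(x, axis=0, append=x[-1:, :])
          norm = np.sqrt(gx ** 2 + gy ** 2 + eps)
          px, py = gx / norm, gy / norm
          div = (np.diff(px, axis=1, prepend=px[:, :1])
                 + np.diff(py, axis=0, prepend=py[:1, :]))
          return -div

      def art_tv(A, b, shape, n_iter=50, relax=0.2, tv_weight=0.05):
          """ART sweeps over the rows of A, each followed by a TV step."""
          x = np.zeros(A.shape[1])
          row_norms = (A ** 2).sum(axis=1)
          for _ in range(n_iter):
              for i in range(A.shape[0]):      # one Kaczmarz sweep
                  if row_norms[i] > 0:
                      x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
              img = x.reshape(shape)
              img = img - tv_weight * tv_gradient(img)  # regularization step
              x = img.ravel()
          return x.reshape(shape)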

  17. Scheimpflug image-based changes in anterior segment parameters during accommodation induced by short-term reading.

    PubMed

    Lipecz, Agnes; Tsorbatzoglou, Alexis; Hassan, Ziad; Berta, Andras; Modis, Laszlo; Nemeth, Gabor

    2017-05-11

    To analyze the effect of accommodation, induced by short-term reading, on anterior segment data (corneal and anterior chamber parameters) in a healthy, nonpresbyopic adult group. Images of both eyes of nonpresbyopic volunteers were captured with a Scheimpflug device (Pentacam HR) in a nonaccommodative state. Fifteen minutes of reading followed; further accommodation was then achieved through fixation of the built-in target of the Pentacam HR, and new images were captured by the device. Anterior segment parameters were observed and the differences were analyzed. Fifty-two healthy eyes of 26 subjects (age range 20.04-28.58 years) were analyzed. No significant differences were observed in the keratometric values before and after the accommodative task (p = 0.35). A statistically significant difference was measured in the 5.0-mm-diameter and the 7.0-mm-diameter corneal volume (p = 0.01 and p = 0.03) between accommodation states. Corneal aberrometric data did not change significantly during short-term accommodation. Significant differences were observed between the nonaccommodative and accommodative states of the eyes for all measured anterior chamber parameters. Among the parameters of the cornea, only corneal volume changed during the short-term accommodation process, showing some fine changes of the cornea with accommodation in young, emmetropic patients. The position of the pupil and the anterior chamber parameters were observed to change with accommodation as captured by a Scheimpflug device.

  18. Counting pollen grains using readily available, free image processing and analysis software.

    PubMed

    Costa, Clayton M; Yang, Suann

    2009-10-01

    Although many methods exist for quantifying the number of pollen grains in a sample, there are few standard methods that are user-friendly, inexpensive and reliable. The present contribution describes a new method of counting pollen using readily available, free image processing and analysis software. Pollen was collected from anthers of two species, Carduus acanthoides and C. nutans (Asteraceae), then illuminated on slides and digitally photographed through a stereomicroscope. Using ImageJ (NIH), these digital images were processed to remove noise and sharpen individual pollen grains, then analysed to obtain a reliable total count of the number of grains present in the image. A macro was developed to analyse multiple images together. To assess the accuracy and consistency of pollen counting by ImageJ analysis, counts were compared with those made by the human eye. Image analysis produced pollen counts in 60 s or less per image, considerably faster than counting with the human eye (5-68 min). In addition, counts produced with the ImageJ procedure were similar to those obtained by eye. Because count parameters are adjustable, this image analysis protocol may be used for many other plant species. Thus, the method provides a quick, inexpensive and reliable solution to counting pollen from digital images, not only reducing the chance of error but also substantially lowering labour requirements.
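
    The same pipeline (denoise, sharpen, threshold, count) translates readily to other free tools. A Python/scikit-image sketch as an analogue of the ImageJ procedure rather than the authors' macro; the minimum grain area is an illustrative parameter:

      import numpy as np
      from scipy import ndimage as ndi
      from skimage import filters, measure

      def count_pollen(gray, min_area=20):
          """Denoise, sharpen, threshold and count grains in a gray image."""
          smooth = ndi.median_filter(gray, size=3)        # remove noise
          sharp = filters.unsharp_mask(smooth, radius=2, amount=1.0)
          binary = sharp > filters.threshold_otsu(sharp)  # global threshold
          labels = measure.label(binary)
          # Reject specks smaller than a plausible grain (area in pixels).
          return sum(1 for r in measure.regionprops(labels)
                     if r.area >= min_area)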

  19. Image texture segmentation using a neural network

    NASA Astrophysics Data System (ADS)

    Sayeh, Mohammed R.; Athinarayanan, Ragu; Dhali, Pushpuak

    1992-09-01

    In this paper we use a neural network called the Lyapunov associative memory (LYAM) system to segment image texture into different categories or clusters. The LYAM system is constructed by a set of ordinary differential equations which are simulated on a digital computer. The clustering can be achieved by using a single tuning parameter in the simplest model. Pattern classes are represented by the stable equilibrium states of the system. Design of the system is based on synthesizing two local energy functions, namely, the learning and recall energy functions. Before the implementation of the segmentation process, a Gauss-Markov random field (GMRF) model is applied to the raw image. This application suitably reduces the image data and prepares the texture information for the neural network process. We give a simple image example illustrating the capability of the technique. The GMRF-generated features are also used for a clustering, based on the Euclidean distance.

  20. FibrilJ: ImageJ plugin for fibrils' diameter and persistence length determination

    NASA Astrophysics Data System (ADS)

    Sokolov, P. A.; Belousov, M. V.; Bondarev, S. A.; Zhouravleva, G. A.; Kasyanenko, N. A.

    2017-05-01

    The application of microscopy to evaluate the morphology and size of filamentous proteins and amyloids requires new and creative approaches to simplify and automate the image processing. The estimation of mean values of fibril diameter, length and bending stiffness from micrographs is a major challenge. For this purpose we developed an open-source FibrilJ plugin for the ImageJ/FiJi program. It automatically recognizes fibrils on the surface of a mica, silicon, gold or formvar film and analyzes them to calculate the distributions of fibril diameters, lengths and persistence lengths. The plugin has been validated by processing TEM images of fibrils formed by the Sup35NM yeast protein and artificially created images of rod-shaped objects with predefined parameters. Novel data obtained by SEM for Sup35NM protein fibrils immobilized on silicon and gold substrates are also presented and analyzed.

  1. Bioinspired Polarization Imaging Sensors: From Circuits and Optics to Signal Processing Algorithms and Biomedical Applications

    PubMed Central

    York, Timothy; Powell, Samuel B.; Gao, Shengkui; Kahan, Lindsey; Charanya, Tauseef; Saha, Debajit; Roberts, Nicholas W.; Cronin, Thomas W.; Marshall, Justin; Achilefu, Samuel; Lake, Spencer P.; Raman, Baranidharan; Gruev, Viktor

    2015-01-01

    In this paper, we present recent work on bioinspired polarization imaging sensors and their applications in biomedicine. In particular, we focus on three different aspects of these sensors. First, we describe the electro–optical challenges in realizing a bioinspired polarization imager, and in particular, we provide a detailed description of a recent low-power complementary metal–oxide–semiconductor (CMOS) polarization imager. Second, we focus on signal processing algorithms tailored for this new class of bioinspired polarization imaging sensors, such as calibration and interpolation. Third, the emergence of these sensors has enabled rapid progress in characterizing polarization signals and environmental parameters in nature, as well as several biomedical areas, such as label-free optical neural recording, dynamic tissue strength analysis, and early diagnosis of flat cancerous lesions in a murine colorectal tumor model. We highlight results obtained from these three areas and discuss future applications for these sensors. PMID:26538682

  2. Integrative image segmentation optimization and machine learning approach for high quality land-use and land-cover mapping using multisource remote sensing data

    NASA Astrophysics Data System (ADS)

    Gibril, Mohamed Barakat A.; Idrees, Mohammed Oludare; Yao, Kouame; Shafri, Helmi Zulhaidi Mohd

    2018-01-01

    The growing use of optimization for geographic object-based image analysis and the possibility of deriving a wide range of information about the image in textual form make machine learning (data mining) a versatile tool for information extraction from multiple data sources. This paper presents an application of data mining for land-cover classification by fusing SPOT-6, RADARSAT-2, and derived datasets. First, the images and other derived indices (normalized difference vegetation index, normalized difference water index, and soil adjusted vegetation index) were combined and subjected to a segmentation process with optimal segmentation parameters obtained using a combination of spatial and Taguchi statistical optimization. The image objects, which carry all the attributes of the input datasets, were extracted and related to the target land-cover classes through data mining algorithms (decision tree) for classification. To evaluate the performance, the result was compared with two nonparametric classifiers: support vector machine (SVM) and random forest (RF). Furthermore, the decision tree classification result was evaluated against six unoptimized trials segmented using arbitrary parameter combinations. The results show that the optimized process produces better land-use/land-cover classification, with an overall classification accuracy of 91.79%, compared with 87.25% and 88.69% for SVM and RF, respectively, while the results of the six unoptimized classifications yield overall accuracies between 84.44% and 88.08%. The higher accuracy of the optimized data mining classification approach compared to the unoptimized results indicates that the optimization process has a significant impact on classification quality.

  3. Deep learning classifier with optical coherence tomography images for early dental caries detection

    NASA Astrophysics Data System (ADS)

    Karimian, Nima; Salehi, Hassan S.; Mahdian, Mina; Alnajjar, Hisham; Tadinada, Aditya

    2018-02-01

    Dental caries is a microbial disease that results in localized dissolution of the mineral content of dental tissue. Despite a considerable decline in the incidence of dental caries, it remains a major health problem in many societies. Early detection of incipient lesions at the initial stages of demineralization allows the implementation of non-surgical preventive approaches to reverse the demineralization process. In this paper, we present a novel approach combining deep convolutional neural networks (CNNs) and the optical coherence tomography (OCT) imaging modality for classification of human oral tissues to detect early dental caries. OCT images of oral tissues with various densities were input to a CNN classifier to determine variations in tissue densities resembling the demineralization process. The CNN automatically learns a hierarchy of increasingly complex features and a related classifier directly from training data sets. The initial CNN layer parameters were randomly selected. The training set is split into minibatches, with 10 OCT images per batch. Given a batch of training patches, the CNN employs two convolutional and pooling layers to extract features and then classifies each patch based on the probabilities from the SoftMax classification layer (output layer). Afterward, the CNN calculates the error between the classification result and the reference label, and then utilizes the backpropagation process to fine-tune all the layer parameters to minimize this error using a batch gradient descent algorithm. We validated our proposed technique on ex vivo OCT images of human oral tissues (enamel, cortical bone, trabecular bone, muscular tissue, and fatty tissue), which attested to the effectiveness of our proposed method.
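
    The architecture described (two convolution/pooling blocks feeding a SoftMax output, trained on minibatches of 10 with batch gradient descent) can be sketched in a few lines. A PyTorch sketch for illustration; the filter counts, kernel sizes, input size and class count are assumptions, not the paper's values:

      import torch
      from torch import nn

      class CariesCNN(nn.Module):
          """Two conv+pool blocks and a linear SoftMax output layer."""
          def __init__(self, n_classes=5):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(1, 8, 5), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(8, 16, 5), nn.ReLU(), nn.MaxPool2d(2))
              self.classifier = nn.Linear(16 * 13 * 13, n_classes)  # 64x64 in

          def forward(self, x):
              # Logits; the softmax is applied inside CrossEntropyLoss.
              return self.classifier(self.features(x).flatten(1))

      model = CariesCNN()
      optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
      loss_fn = nn.CrossEntropyLoss()
      # One illustrative step on a random minibatch of 10 OCT patches.
      images, labels = torch.randn(10, 1, 64, 64), torch.randint(0, 5, (10,))
      loss = loss_fn(model(images), labels)
      optimizer.zero_grad()
      loss.backward()
      optimizer.step()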

  4. Self-calibration of a noisy multiple-sensor system with genetic algorithms

    NASA Astrophysics Data System (ADS)

    Brooks, Richard R.; Iyengar, S. Sitharama; Chen, Jianhua

    1996-01-01

    This paper explores an image processing application of optimization techniques which entails interpreting noisy sensor data. The application is a generalization of image correlation; we attempt to find the optimal congruence which matches two overlapping gray-scale images corrupted with noise. Both tabu search and genetic algorithms are used to find the parameters which match the two images. A genetic algorithm approach using an elitist reproduction scheme is found to provide significantly superior results. The presentation includes a graphical depiction of the paths taken by tabu search and genetic algorithms when trying to find the best possible match between two corrupted images.
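
    A toy version of the elitist genetic algorithm conveys the idea. This sketch searches only integer translations (the paper optimizes a full congruence) and uses negative sum-of-squared differences as fitness:

      import numpy as np

      rng = np.random.default_rng(0)

      def fitness(shift, img_a, img_b):
          """Negative SSD between img_a and img_b shifted by (dy, dx)."""
          moved = np.roll(img_b, (int(shift[0]), int(shift[1])), axis=(0, 1))
          return -float(np.sum((img_a - moved) ** 2))

      def ga_match(img_a, img_b, pop=30, gens=40, span=20, elite=2):
          """Elitist GA over integer translations relating two noisy images."""
          popn = rng.integers(-span, span + 1, size=(pop, 2))
          for _ in range(gens):
              scores = np.array([fitness(p, img_a, img_b) for p in popn])
              popn = popn[np.argsort(scores)[::-1]]         # best first
              nxt = [popn[i].copy() for i in range(elite)]  # elitism
              while len(nxt) < pop:
                  a, b = popn[rng.integers(0, pop // 2, size=2)]
                  child = np.where(rng.random(2) < 0.5, a, b)  # crossover
                  if rng.random() < 0.3:                       # mutation
                      child[rng.integers(0, 2)] += rng.integers(-2, 3)
                  nxt.append(np.clip(child, -span, span))
              popn = np.array(nxt)
          return popn[0]   # best candidate retained by elitism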

  5. Impact of Altering Various Image Parameters on Human Epidermal Growth Factor Receptor 2 Image Analysis Data Quality.

    PubMed

    Pantanowitz, Liron; Liu, Chi; Huang, Yue; Guo, Huazhang; Rohde, Gustavo K

    2017-01-01

    The quality of data obtained from image analysis can be directly affected by several preanalytical (e.g., staining, image acquisition), analytical (e.g., algorithm, region of interest [ROI]), and postanalytical (e.g., computer processing) variables. Whole-slide scanners generate digital images that may vary depending on the type of scanner and device settings. Our goal was to evaluate the impact of altering brightness, contrast, compression, and blurring on image analysis data quality. Slides from 55 patients with invasive breast carcinoma were digitized to include a spectrum of human epidermal growth factor receptor 2 (HER2) scores analyzed with Visiopharm (30 cases with score 0, 10 with 1+, 5 with 2+, and 10 with 3+). For all images, an ROI was selected and four parameters (brightness, contrast, JPEG2000 compression, out-of-focus blurring) then serially adjusted. HER2 scores were obtained for each altered image. HER2 scores decreased with increased illumination, higher compression ratios, and increased blurring. HER2 scores increased with greater contrast. Cases with HER2 score 0 were least affected by image adjustments. This experiment shows that variations in image brightness, contrast, compression, and blurring can have major influences on image analysis results. Such changes can result in under- or over-scoring with image algorithms. Standardization of image analysis is recommended to minimize the undesirable impact such variations may have on data output.
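
    Three of the four perturbations are simple pointwise or filtering operations on a normalized image. A sketch for a float image in [0, 1], assuming NumPy/SciPy; JPEG2000 compression requires a codec and is omitted, and the parameter ranges are left to the experimenter:

      import numpy as np
      from scipy import ndimage as ndi

      def alter_image(img, brightness=0.0, contrast=1.0, blur_sigma=0.0):
          """Apply brightness, contrast and blur perturbations."""
          out = np.clip((img - 0.5) * contrast + 0.5 + brightness, 0.0, 1.0)
          if blur_sigma > 0:
              out = ndi.gaussian_filter(out, sigma=blur_sigma)  # defocus
          return out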

  6. Subarray Processing for Projection-based RFI Mitigation in Radio Astronomical Interferometers

    NASA Astrophysics Data System (ADS)

    Burnett, Mitchell C.; Jeffs, Brian D.; Black, Richard A.; Warnick, Karl F.

    2018-04-01

    Radio Frequency Interference (RFI) is a major problem for observations in Radio Astronomy (RA). Adaptive spatial filtering techniques such as subspace projection are promising candidates for RFI mitigation; however, for radio interferometric imaging arrays, these have primarily been used in engineering demonstration experiments rather than mainstream scientific observations. This paper considers one reason that adoption of such algorithms is limited: RFI decorrelates across the interferometric array because of long baseline lengths. This occurs when the relative RFI time delay along a baseline is large compared to the frequency channel inverse bandwidth used in the processing chain. Maximum achievable excision of the RFI is limited by covariance matrix estimation error when identifying interference subspace parameters, and decorrelation of the RFI introduces errors that corrupt the subspace estimate, rendering subspace projection ineffective over the entire array. In this work, we present an algorithm that overcomes this challenge of decorrelation by applying subspace projection via subarray processing (SP-SAP). Each subarray is designed to have a set of elements with high mutual correlation in the interferer for better estimation of subspace parameters. In an RFI simulation scenario for the proposed ngVLA interferometric imaging array with 15 kHz channel bandwidth for correlator processing, we show that compared to the former approach of applying subspace projection on the full array, SP-SAP improves mitigation of the RFI on the order of 9 dB. An example of improved image synthesis and reduced RFI artifacts for a simulated image “phantom” using the SP-SAP algorithm is presented.

  7. Quantitative Analysis of Rat Dorsal Root Ganglion Neurons Cultured on Microelectrode Arrays Based on Fluorescence Microscopy Image Processing.

    PubMed

    Mari, João Fernando; Saito, José Hiroki; Neves, Amanda Ferreira; Lotufo, Celina Monteiro da Cruz; Destro-Filho, João-Batista; Nicoletti, Maria do Carmo

    2015-12-01

    Microelectrode Arrays (MEA) are devices for long-term electrophysiological recording of extracellular spontaneous or evoked activity of in vitro neuron cultures. This work proposes and develops a framework for quantitative and morphological analysis of neuron cultures on MEAs by processing their corresponding images, acquired by fluorescence microscopy. The neurons are segmented from the fluorescence channel images using a combination of segmentation by thresholding, the watershed transform, and object classification. The positions of the microelectrodes are obtained from the transmitted light channel images using the circular Hough transform. The proposed method was applied to images of dissociated cultures of rat dorsal root ganglion (DRG) neuronal cells. The morphological and topological quantitative analysis carried out produced information regarding the state of the culture, such as population count, neuron-to-neuron and neuron-to-microelectrode distances, soma morphologies, neuron sizes, and neuron and microelectrode spatial distributions. Most analyses of microscopy images taken from neuronal cultures on MEAs consider only simple qualitative features. The proposed framework also aims to standardize the image processing and to compute quantitatively useful measures for integrated image-signal studies and further computational simulations. As the results show, the implemented microelectrode identification method is robust, and so are the implemented neuron segmentation and classification methods (with a correct segmentation rate of up to 84%). The quantitative information retrieved by the method is highly relevant for assisting the integrated signal-image study of recorded electrophysiological signals as well as the physical aspects of the neuron culture on the MEA. Although the experiments deal with DRG cell images, cortical and hippocampal cell images could also be processed with small adjustments in the image processing parameter estimation.
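
    The thresholding-plus-watershed stage can be sketched compactly. The version below assumes scikit-image and SciPy; the object-classification step and the Hough-based electrode detection are omitted:

      import numpy as np
      from scipy import ndimage as ndi
      from skimage import feature, filters, measure, segmentation

      def segment_somas(fluo):
          """Threshold + distance-transform watershed to split somas."""
          binary = fluo > filters.threshold_otsu(fluo)
          dist = ndi.distance_transform_edt(binary)
          peaks = feature.peak_local_max(dist, min_distance=10, labels=binary)
          markers = np.zeros(fluo.shape, dtype=int)
          markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
          labels = segmentation.watershed(-dist, markers, mask=binary)
          return labels, measure.regionprops(labels)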

  8. Long-range non-contact imaging photoplethysmography: cardiac pulse wave sensing at a distance

    NASA Astrophysics Data System (ADS)

    Blackford, Ethan B.; Estepp, Justin R.; Piasecki, Alyssa M.; Bowers, Margaret A.; Klosterman, Samantha L.

    2016-03-01

    Non-contact, imaging photoplethysmography uses photo-optical sensors to measure variations in light absorption, caused by blood volume pulsations, to assess cardiopulmonary parameters including pulse rate, pulse rate variability, and respiration rate. Recently, researchers have studied the applications and methodology of imaging photoplethysmography. Basic research has examined some of the variables affecting data quality and accuracy of imaging photoplethysmography, including signal processing, imager parameters (e.g. frame rate and resolution), lighting conditions, subject motion, and subject skin tone. This technology may be beneficial for long-term or continuous monitoring where contact measurements may be harmful (e.g. skin sensitivities) or where imperceptible or unobtrusive measurements are desirable. Using previously validated signal processing methods, we examined the effects of imager-to-subject distance on one-minute, windowed estimates of pulse rate. High-resolution video of 22 stationary participants was collected using an enthusiast-grade, mirrorless, digital camera equipped with a fully-manual, super-telephoto lens at distances of 25, 50, and 100 meters, with simultaneous contact measurements of electrocardiography and fingertip photoplethysmography. By comparison, previous studies have usually been conducted with imager-to-subject distances of up to only a few meters. Mean absolute errors for one-minute, windowed, pulse rate estimates (compared to those derived from gold-standard electrocardiography) were 2.0, 4.1, and 10.9 beats per minute at distances of 25, 50, and 100 meters, respectively. Long-range imaging presents several unique challenges, among which are decreased observed light reflectance and smaller regions of interest. Nevertheless, these results demonstrate that accurate pulse rate measurements can be obtained over long imager-to-participant distances given these constraints.
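
    A minimal single-channel pulse-rate estimator illustrates the kind of processing involved; the study's validated processing chain is richer, and the cardiac band limits used here are illustrative:

      import numpy as np

      def pulse_rate_bpm(roi_means, fps):
          """Spectral-peak pulse rate from a mean-ROI intensity series."""
          x = np.asarray(roi_means, dtype=float)
          x = (x - x.mean()) * np.hanning(len(x))   # detrend and window
          freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
          power = np.abs(np.fft.rfft(x)) ** 2
          band = (freqs >= 0.7) & (freqs <= 4.0)    # 42-240 beats per minute
          return 60.0 * freqs[band][np.argmax(power[band])]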

  9. Image re-sampling detection through a novel interpolation kernel.

    PubMed

    Hilal, Alaa

    2018-06-01

    Image re-sampling, involved in re-size and rotation transformations, is an essential building block in typical digital image alterations. Fortunately, traces left by such processes are detectable, proving that the image has undergone a re-sampling transformation. Within this context, we present in this paper two original contributions. First, we propose a new re-sampling interpolation kernel. It depends on five independent parameters that control its amplitude, angular frequency, standard deviation, and duration. Then, we demonstrate its capacity to imitate the behavior of the most frequently used interpolation kernels in digital image re-sampling applications. Second, the proposed model is used to characterize and detect the correlation coefficients involved in re-sampling transformations. The process involved includes the minimization of an error function using the gradient method. The proposed method is assessed over a large database of 11,000 re-sampled images. Additionally, it is implemented within an algorithm in order to assess images that have undergone complex transformations. The obtained results demonstrate better performance and reduced processing time compared to a reference method, validating the suitability of the proposed approaches. Copyright © 2018 Elsevier B.V. All rights reserved.

  10. Invariant polarimetric contrast parameters of coherent light.

    PubMed

    Réfrégier, Philippe; Goudail, François

    2002-06-01

    Many applications use an active coherent illumination and analyze the variation of the polarization state of optical signals. However, as a result of the use of coherent light, these signals are generally strongly perturbed with speckle noise. This is the case, for example, for active polarimetric imaging systems that are useful for enhancing contrast between different elements in a scene. We propose a rigorous definition of the minimal set of parameters that characterize the difference between two coherent and partially polarized states. Indeed, two states of partially polarized light are a priori defined by eight parameters, for example, their two Stokes vectors. We demonstrate that the processing performance for such signal processing tasks as detection, localization, or segmentation of spatial or temporal polarization variations is uniquely determined by two scalar functions of these eight parameters. These two scalar functions are the invariant parameters that define the polarimetric contrast between two polarized states of coherent light. Different polarization configurations with the same invariant contrast parameters will necessarily lead to the same performance for a given task, which is a desirable quality for a rigorous contrast measure. The definition of these polarimetric contrast parameters simplifies the analysis and the specification of processing techniques for coherent polarimetric signals.

  11. Multipathing Via Three Parameter Common Image Gathers (CIGs) From Reverse Time Migration

    NASA Astrophysics Data System (ADS)

    Ostadhassan, M.; Zhang, X.

    2015-12-01

    A noteworthy problem for seismic exploration is the effect of multipathing (both wanted and unwanted) caused by complex subsurface structures. We show that reverse time migration (RTM), combined with a unified, systematic three-parameter framework that flexibly handles multipathing, can be accomplished by adding one more dimension (image time) to the angle domain common image gather (ADCIG) data. RTM is widely used to generate prestack depth migration images. When using the cross-correlation image condition in 2D prestack migration in RTM, the usual practice is to sum over all the migration time steps. Thus all possible wave types and paths automatically contribute to the resulting image, including destructive wave interferences, phase shifts, and other distortions. One reason is that multipath (prismatic wave) contributions are not properly sorted and mapped in the ADCIGs. Also, multipath arrivals usually have different instantaneous attributes (amplitude, phase and frequency), and if not separated, the amplitudes and phases in the final prestack image will not stack coherently across sources. A prismatic path satisfies an image time for its unique path; Cavalca and Lailly (2005) show that RTM images with multipaths can provide more complete target information in complex geology, as multipaths usually have different incident angles and amplitudes compared to primary reflections. If the image time slices within a cross-correlation common-source migration are saved for each image time, this three-parameter (incident angle, depth, image time) volume can be post-processed to generate separate, or composite, images of any desired subset of the migrated data. Images can be displayed for primary contributions, any combination of primary and multipath contributions (with or without artifacts), or various projections, including the conventional ADCIG (angle vs depth) plane. Examples show that signal from the true structure can be separated from artifacts caused by multiple arrivals when they have different image times. This improves the quality of the images and benefits migration velocity analysis (MVA) and amplitude variation with angle (AVA) inversion.

  12. Photogrammetric 3d Building Reconstruction from Thermal Images

    NASA Astrophysics Data System (ADS)

    Maset, E.; Fusiello, A.; Crosilla, F.; Toldo, R.; Zorzetto, D.

    2017-08-01

    This paper addresses the problem of 3D building reconstruction from thermal infrared (TIR) images. We show that a commercial Computer Vision software can be used to automatically orient sequences of TIR images taken from an Unmanned Aerial Vehicle (UAV) and to generate 3D point clouds, without requiring any GNSS/INS data about position and attitude of the images nor camera calibration parameters. Moreover, we propose a procedure based on Iterative Closest Point (ICP) algorithm to create a model that combines high resolution and geometric accuracy of RGB images with the thermal information deriving from TIR images. The process can be carried out entirely by the aforesaid software in a simple and efficient way.

  13. Physics-based approach to color image enhancement in poor visibility conditions.

    PubMed

    Tan, K K; Oakley, J P

    2001-10-01

    Degradation of images by the atmosphere is a familiar problem. For example, when terrain is imaged from a forward-looking airborne camera, the atmosphere degradation causes a loss in both contrast and color information. Enhancement of such images is a difficult task because of the complexity in restoring both the luminance and the chrominance while maintaining good color fidelity. One particular problem is the fact that the level of contrast loss depends strongly on wavelength. A novel method is presented for the enhancement of color images. This method is based on the underlying physics of the degradation process, and the parameters required for enhancement are estimated from the image itself.

  14. Qualification process of CR system and quantification of digital image quality

    NASA Astrophysics Data System (ADS)

    Garnier, P.; Hun, L.; Klein, J.; Lemerle, C.

    2013-01-01

    CEA Valduc uses several X-ray generators to carry out many inspections: void search, welding expertise, gap measurements, etc. Most of these inspections are carried out on silver-based plates. Several years ago, CEA/Valduc decided to qualify new devices such as digital plates and CCD/flat-panel detectors. On one hand, this technological orientation anticipates the assumed and eventual disappearance of silver-based plates; on the other hand, it keeps the laboratory's mastery of the technique up to date. The main improvement brought by digital plates is the continuous progress in measurement accuracy, especially with image data processing. It is now common to measure defect thickness or depth position within a part. In such applications, image data processing is used to obtain complementary information compared to scanned silver-based plates. The scanning procedure is harmful to measurements: it corrupts the resolution, adds numerical noise, and is time-consuming. Digital plates make it possible to suppress the scanning procedure and to increase resolution. It is nonetheless difficult to define a single criterion for the quality of digital images. A procedure has to be defined to estimate the quality of the digital data itself; the impact of the scanning device and the configuration parameters must also be taken into account. This presentation deals with the qualification process developed by CEA/Valduc for digital plates (DUR-NDT), based on the study of quantitative criteria chosen to define a direct numerical image quality that can be compared with scanned silver-based pictures and the classical optical density. The versatility of the X-ray parameters is also discussed (tube voltage, current, exposure time). The aim is to transfer the years-long experience of CEA/Valduc with silver-based plate inspection to these new digital plate supports. This is an industrial challenge.

  15. Assessment of diffusion tensor image quality across sites and vendors using the American College of Radiology head phantom.

    PubMed

    Wang, Zhiyue J; Seo, Youngseob; Babcock, Evelyn; Huang, Hao; Bluml, Stefan; Wisnowski, Jessica; Holshouser, Barbara; Panigrahy, Ashok; Shaw, Dennis W W; Altman, Nolan; McColl, Roderick W; Rollins, Nancy K

    2016-05-08

    The purpose of this study was to explore the feasibility of assessing the quality of diffusion tensor imaging (DTI) from multiple sites and vendors using the American College of Radiology (ACR) phantom. Participating sites (Siemens (n = 2), GE (n = 2), and Philips (n = 4)) reached consensus on parameters for DTI and used the widely available ACR phantom. Tensor data were processed at one site. B0 and eddy current distortions were assessed using grid line displacement on phantom Slice 5; signal-to-noise ratio (SNR) was measured at the center and periphery of the b = 0 image; fractional anisotropy (FA) and mean diffusivity (MD) were assessed using phantom Slice 7. Variations of acquisition parameters and deviations from specified sequence parameters were recorded. Nonlinear grid line distortion was higher with linear shimming and could be corrected using second-order shimming. Following image registration, eddy current distortion was consistently smaller than the acquisition voxel size. SNR was consistently higher in the image periphery than the center by a factor of 1.3-2.0. ROI-based FA ranged from 0.007 to 0.024. ROI-based MD ranged from 1.90 × 10⁻³ to 2.33 × 10⁻³ mm²/s (median = 2.04 × 10⁻³ mm²/s). Two sites had image void artifacts. The ACR phantom can be used to compare key quality measures of diffusion images acquired from multiple vendors at multiple sites.

  16. Qualitative and quantitative interpretation of SEM image using digital image processing.

    PubMed

    Saladra, Dawid; Kopernik, Magdalena

    2016-10-01

    The aim of this study is the improvement of qualitative and quantitative analysis of scanning electron microscope micrographs through the development of a computer program that enables automatic crack analysis of scanning electron microscopy (SEM) micrographs. Micromechanical tests of pneumatic ventricular assist devices result in a large number of micrographs; therefore, the analysis must be automatic. Tests of athrombogenic titanium nitride/gold coatings deposited on polymeric substrates (Bionate II) are performed, including microshear, microtension, and fatigue analysis. Anisotropic surface defects observed in the SEM micrographs require support for qualitative and quantitative interpretation. Improvement of the qualitative analysis of SEM images was achieved by a set of computational tools that includes binarization, simplified expanding, expanding, simple image statistic thresholding, the Laplacian 1 and Laplacian 2 filters, Otsu thresholding, and reverse binarization. Several modifications of known image processing techniques, and combinations of the selected techniques, were applied. The introduced quantitative analysis of digital SEM images enables computation of stereological parameters such as area, crack angle, crack length, and total crack length per unit area. This study also compares the functionality of the developed digital image processing program with existing applications. The described pre- and post-processing may be helpful in scanning electron microscopy and transmission electron microscopy surface investigations. © 2016 The Authors. Journal of Microscopy © 2016 Royal Microscopical Society.
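
    As an illustration of the kind of stereological quantification described, the hedged sketch below binarizes a micrograph and estimates total crack length per unit area; the Otsu threshold and pixel size are assumptions, not the authors' implementation.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize

# Hedged sketch of a crack-quantification pipeline in the spirit of the
# abstract: binarization followed by stereological measurement.
def crack_stats(img, pixel_size_um=0.1):
    """img: 2-D grayscale SEM micrograph as a float array."""
    # Binarize: cracks are assumed darker than the coating surface.
    mask = img < threshold_otsu(img)
    # Reduce cracks to one-pixel-wide centrelines.
    skel = skeletonize(mask)
    # Total crack length per unit area: the skeleton pixel count
    # approximates crack length in pixel units.
    length_um = skel.sum() * pixel_size_um
    area_um2 = img.size * pixel_size_um ** 2
    return {"crack_area_fraction": mask.mean(),
            "total_crack_length_per_area": length_um / area_um2}
```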

  17. In Vivo Fluorescence Resonance Energy Transfer Imaging for Targeted Anti-Cancer Drug Delivery Kinetics

    NASA Astrophysics Data System (ADS)

    Webb, Kevin; Gaind, Vaibhav; Tsai, Hsiaorho; Bentz, Brian; Chelvam, Venkatesh; Low, Philip

    2012-02-01

    We describe an approach for the evaluation of targeted anti-cancer drug delivery in vivo. The method emulates the drug release and activation process through acceptor release from a targeted donor-acceptor pair that exhibits fluorescence resonance energy transfer (FRET). In this case, folate targeting of the cancer cells is used; 40% of all human cancers, including ovarian, lung, breast, kidney, brain, and colon cancer, over-express folate receptors. We demonstrate the reconstruction of the spatially dependent FRET parameters in a mouse model and in tissue phantoms. The FRET parameterization is incorporated into a source term for a diffusion equation model of photon transport in tissue, in a variant of optical diffusion tomography (ODT) called FRET-ODT. In addition to the spatially dependent tissue parameters in the diffusion model (absorption and diffusion coefficients), the FRET parameters (donor-acceptor distance and yield) are imaged as a function of position. Modulated light measurements are made with various laser excitation positions and a gated camera. More generally, our method provides a new vehicle for studying disease at the molecular level by imaging FRET parameters in deep tissue, and allows the nanometer FRET ruler to be utilized in deep tissue.

  18. Image denoising based on noise detection

    NASA Astrophysics Data System (ADS)

    Jiang, Yuanxiang; Yuan, Rui; Sun, Yuqiu; Tian, Jinwen

    2018-03-01

    Because of noise points in images, any denoising operation will alter the original information of non-noise pixels. A noise detection algorithm based on fractional calculus is therefore proposed for denoising in this paper. First, the image is convolved to obtain directional gradient masks. Then, the mean gray level is calculated to obtain gradient detection maps. Next, the logical product of these maps is taken to acquire the noise-position image. Comparing the visual effect and evaluation parameters after processing, the experimental results show that denoising guided by noise detection outperforms traditional methods in both subjective and objective respects.
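
    A hedged sketch of this detect-then-filter idea follows; the integer-order gradient kernels stand in for the paper's fractional-calculus masks, and the median replacement step is an assumption.

```python
import numpy as np
from scipy.ndimage import convolve, median_filter

# Hedged sketch of detection-based denoising: flag pixels as noise only
# where several directional gradient maps all exceed their mean, then
# filter just those pixels and leave the rest untouched.
def detect_and_denoise(img):
    kernels = [
        np.array([[0, 0, 0], [-1, 0, 1], [0, 0, 0]], float),   # horizontal
        np.array([[0, -1, 0], [0, 0, 0], [0, 1, 0]], float),   # vertical
        np.array([[-1, 0, 0], [0, 0, 0], [0, 0, 1]], float),   # diagonal
        np.array([[0, 0, -1], [0, 0, 0], [1, 0, 0]], float),   # anti-diagonal
    ]
    grads = [np.abs(convolve(img, k)) for k in kernels]
    # Logical product: a pixel counts as noise only if every directional
    # gradient exceeds the mean of its detection map.
    noise = np.logical_and.reduce([g > g.mean() for g in grads])
    # Replace detected noise pixels with a local median.
    med = median_filter(img, size=3)
    return np.where(noise, med, img)
```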

  19. A new image encryption algorithm based on the fractional-order hyperchaotic Lorenz system

    NASA Astrophysics Data System (ADS)

    Wang, Zhen; Huang, Xia; Li, Yu-Xia; Song, Xiao-Na

    2013-01-01

    We propose a new image encryption algorithm based on the fractional-order hyperchaotic Lorenz system. In the process of generating the key stream, the system parameters and the derivative order are embedded in the proposed algorithm to enhance security. The algorithm is detailed in terms of security analyses, including correlation analysis, information entropy analysis, run statistic analysis, mean-variance gray value analysis, and key sensitivity analysis. The experimental results demonstrate that the proposed image encryption scheme has the advantages of a large key space and high security for practical image encryption.
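
    The sketch below illustrates the general shape of such a scheme: generate a keystream from a chaotic trajectory and XOR it with the image bytes. For simplicity it uses the classical integer-order Lorenz system, whereas the paper's security rests on a fractional-order hyperchaotic variant, so this is an illustration of the idea and not the proposed algorithm.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

def keystream(n, key=(1.0, 1.0, 1.0)):
    """Quantize a chaotic trajectory to n bytes; the initial condition
    `key` plays the role of the secret key (illustrative only)."""
    t_eval = np.linspace(0.0, 0.01 * n, n)
    sol = solve_ivp(lorenz, (0.0, t_eval[-1]), key, t_eval=t_eval)
    vals = (np.abs(sol.y[0]) * 1e4).astype(np.uint64)
    return (vals % 256).astype(np.uint8)

def encrypt(img_bytes, key=(1.0, 1.0, 1.0)):
    """img_bytes: flat uint8 array. XOR means decryption is identical:
    encrypt(encrypt(x)) == x for the same key."""
    ks = keystream(img_bytes.size, key)
    return img_bytes ^ ks
```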

  20. Three-dimensional real-time imaging of bi-phasic flow through porous media

    NASA Astrophysics Data System (ADS)

    Sharma, Prerna; Aswathi, P.; Sane, Anit; Ghosh, Shankar; Bhattacharya, S.

    2011-11-01

    We present a scanning laser-sheet video imaging technique to image bi-phasic flow in three-dimensional porous media in real time with pore-scale spatial resolution, i.e., 35 μm and 500 μm for directions parallel and perpendicular to the flow, respectively. The technique is illustrated for the case of viscous fingering. Using suitable image processing protocols, both the morphology and the movement of the two-fluid interface were quantitatively estimated. Furthermore, a macroscopic parameter, the displacement efficiency, obtained from a microscopic (pore-scale) analysis demonstrates the versatility and usefulness of the method.

  1. Quantifying Therapeutic and Diagnostic Efficacy in 2D Microvascular Images

    NASA Technical Reports Server (NTRS)

    Parsons-Wingerter, Patricia; Vickerman, Mary B.; Keith, Patricia A.

    2009-01-01

    VESGEN is a newly automated, user-interactive program that maps and quantifies the effects of vascular therapeutics and regulators on microvascular form and function. VESGEN analyzes two-dimensional, black and white vascular images by measuring important vessel morphology parameters. This software guides the user through each required step of the analysis process via a concise graphical user interface (GUI). Primary applications of the VESGEN code are 2D vascular images acquired as clinical diagnostic images of the human retina and as experimental studies of the effects of vascular regulators and therapeutics on vessel remodeling.

  2. Inverse problems-based maximum likelihood estimation of ground reflectivity for selected regions of interest from stripmap SAR data [Regularized maximum likelihood estimation of ground reflectivity from stripmap SAR data]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    West, R. Derek; Gunther, Jacob H.; Moon, Todd K.

    In this study, we derive a comprehensive forward model for the data collected by stripmap synthetic aperture radar (SAR) that is linear in the ground reflectivity parameters. It is also shown that if the noise model is additive, then the forward model fits into the linear statistical model framework, and the ground reflectivity parameters can be estimated by statistical methods. We derive the maximum likelihood (ML) estimates for the ground reflectivity parameters in the case of additive white Gaussian noise. Furthermore, we show that obtaining the ML estimates of the ground reflectivity requires two steps. The first step amounts to a cross-correlation of the data with a model of the data acquisition parameters, and it is shown that this step has essentially the same processing as the so-called convolution back-projection algorithm. The second step is a complete system inversion that is capable of mitigating the sidelobes of the spatially variant impulse responses remaining after the correlation processing. We also state the Cramer-Rao lower bound (CRLB) for the ML ground reflectivity estimates. We show that the CRLB is linked to the SAR system parameters, the flight path of the SAR sensor, and the image reconstruction grid. We demonstrate the ML image formation and the CRLB bound for synthetically generated data.
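
    The two-step estimate can be written compactly for a generic linear model y = Ax + n; the sketch below uses a random stand-in for the SAR forward operator, so the operator and sizes are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Hedged sketch of the two-step ML estimate for y = A x + n with white
# Gaussian noise, as outlined above. A is a random stand-in for the
# stripmap SAR forward model.
rng = np.random.default_rng(0)
m, n = 400, 100                       # data samples, reflectivity pixels
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
x_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = A @ x_true + 0.1 * (rng.standard_normal(m) + 1j * rng.standard_normal(m))

# Step 1: cross-correlate the data with the acquisition model
# (the back-projection-like step).
x_bp = A.conj().T @ y

# Step 2: full system inversion, which removes the spatially variant
# sidelobes left by step 1. x_ml solves (A^H A) x = A^H y.
x_ml = np.linalg.solve(A.conj().T @ A, x_bp)

# CRLB sketch for noise variance s2: diagonal of s2 * (A^H A)^{-1}
# (up to complex-noise conventions).
s2 = 0.1 ** 2
crlb = s2 * np.real(np.diag(np.linalg.inv(A.conj().T @ A)))
```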

  3. Inverse problems-based maximum likelihood estimation of ground reflectivity for selected regions of interest from stripmap SAR data [Regularized maximum likelihood estimation of ground reflectivity from stripmap SAR data]

    DOE PAGES

    West, R. Derek; Gunther, Jacob H.; Moon, Todd K.

    2016-12-01

    In this study, we derive a comprehensive forward model for the data collected by stripmap synthetic aperture radar (SAR) that is linear in the ground reflectivity parameters. It is also shown that if the noise model is additive, then the forward model fits into the linear statistical model framework, and the ground reflectivity parameters can be estimated by statistical methods. We derive the maximum likelihood (ML) estimates for the ground reflectivity parameters in the case of additive white Gaussian noise. Furthermore, we show that obtaining the ML estimates of the ground reflectivity requires two steps. The first step amounts to a cross-correlation of the data with a model of the data acquisition parameters, and it is shown that this step has essentially the same processing as the so-called convolution back-projection algorithm. The second step is a complete system inversion that is capable of mitigating the sidelobes of the spatially variant impulse responses remaining after the correlation processing. We also state the Cramer-Rao lower bound (CRLB) for the ML ground reflectivity estimates. We show that the CRLB is linked to the SAR system parameters, the flight path of the SAR sensor, and the image reconstruction grid. We demonstrate the ML image formation and the CRLB bound for synthetically generated data.

  4. Automatic rocks detection and classification on high resolution images of planetary surfaces

    NASA Astrophysics Data System (ADS)

    Aboudan, A.; Pacifici, A.; Murana, A.; Cannarsa, F.; Ori, G. G.; Dell'Arciprete, I.; Allemand, P.; Grandjean, P.; Portigliotti, S.; Marcer, A.; Lorenzoni, L.

    2013-12-01

    High-resolution images can be used to obtain rock locations and sizes on planetary surfaces. In particular, the rock size-frequency distribution is a key parameter for evaluating surface roughness, investigating the geologic processes that formed the surface, and assessing the hazards related to spacecraft landing. The manual search for rocks in high-resolution images (even for small areas) can be very labor-intensive, so an automatic or semi-automatic algorithm to identify rocks is mandatory to enable further processing, such as determining rock presence, size, height (by means of shadows), and spatial distribution over an area of interest. Accurate localization of rock and shadow contours is the key step in rock detection. An approach to contour detection based on morphological operators and statistical thresholding is presented in this work. The identified contours are then fitted using an appropriate geometric model of the rocks or shadows and used to estimate salient rock parameters (position, size, area, height). The performance of this approach has been evaluated both on images of a Martian analogue area of the Moroccan desert and on HiRISE images. Results have been compared with ground truth obtained by manual rock mapping and proved the effectiveness of the algorithm. The rock abundance and rock size-frequency distributions derived from selected HiRISE images have been compared with the results of similar analyses performed for the landing site certification of Mars landers (Viking, Pathfinder, MER, MSL) and with the available thermal data from IRTM and TES.
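
    One plausible instantiation of the contour detection and geometric fitting steps is sketched below using standard OpenCV operations; the kernel size, statistical threshold, and ellipse model are assumptions rather than the authors' exact pipeline.

```python
import cv2
import numpy as np

# Hedged sketch of rock detection via morphology, statistical
# thresholding, and geometric fitting, loosely following the approach
# described above.
def detect_rocks(gray):
    """gray: 8-bit grayscale orbital image patch."""
    # Morphological top-hat enhances small bright features (rocks) ...
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    tophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, kernel)
    # ... and a statistical threshold (mean + 2 sigma) selects outliers.
    thr = tophat.mean() + 2.0 * tophat.std()
    mask = (tophat > thr).astype(np.uint8)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    rocks = []
    for c in contours:
        if len(c) < 5:          # fitEllipse needs at least 5 points
            continue
        (cx, cy), (d1, d2), angle = cv2.fitEllipse(c)
        rocks.append({"position": (cx, cy),
                      "size": max(d1, d2),     # apparent diameter, px
                      "area": cv2.contourArea(c)})
    return rocks
```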

  5. CALIBRATED ULTRA FAST IMAGE SIMULATIONS FOR THE DARK ENERGY SURVEY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bruderer, Claudio; Chang, Chihway; Refregier, Alexandre

    2016-01-20

    Image simulations are becoming increasingly important in understanding the measurement process of the shapes of galaxies for weak lensing and the associated systematic effects. For this purpose we present the first implementation of the Monte Carlo Control Loops (MCCL), a coherent framework for studying systematic effects in weak lensing. It allows us to model and calibrate the shear measurement process using image simulations from the Ultra Fast Image Generator (UFig) and the image analysis software SExtractor. We apply this framework to a subset of the data taken during the Science Verification period (SV) of the Dark Energy Survey (DES). We calibrate the UFig simulations to be statistically consistent with one of the SV images, which covers ∼0.5 square degrees. We then perform tolerance analyses by perturbing six simulation parameters and study their impact on the shear measurement at the one-point level. This allows us to determine the relative importance of different parameters. For spatially constant systematic errors and point-spread function, the calibration of the simulation reaches the weak lensing precision needed for the DES SV survey area. Furthermore, we find a sensitivity of the shear measurement to the intrinsic ellipticity distribution, and an interplay between the magnitude-size and the pixel value diagnostics in constraining the noise model. This work is the first application of the MCCL framework to data and shows how it can be used to methodically study the impact of systematics on the cosmic shear measurement.

  6. Quantitative phase-digital holographic microscopy: a new imaging modality to identify original cellular biomarkers of diseases

    NASA Astrophysics Data System (ADS)

    Marquet, P.; Rothenfusser, K.; Rappaz, B.; Depeursinge, C.; Jourdain, P.; Magistretti, P. J.

    2016-03-01

    Quantitative phase microscopy (QPM) has recently emerged as a powerful label-free technique in the field of living cell imaging, allowing cell structure and dynamics to be measured non-invasively with nanometric axial sensitivity. Since the phase retardation of a light wave transmitted through the observed cells, namely the quantitative phase signal (QPS), is sensitive to both cellular thickness and the intracellular refractive index related to the cellular content, its accurate analysis allows various cell parameters to be derived and specific cell processes to be monitored, and is very likely to identify new cell biomarkers. Specifically, quantitative phase-digital holographic microscopy (QP-DHM), thanks to its numerical flexibility facilitating parallelization and automation, represents an appealing imaging modality both to identify original cellular biomarkers of diseases and to explore the underlying pathophysiological processes.

  7. Surface topography analysis and performance on post-CMP images (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Lee, Jusang; Bello, Abner F.; Kakita, Shinichiro; Pieniazek, Nicholas; Johnson, Timothy A.

    2017-03-01

    Surface topography on post-CMP processing can be measured with white-light interference microscopy to determine planarity. Results are used to avoid under- or over-polishing and to decrease dishing. The numerical output of the surface topography is the RMS (root-mean-square) height. Beyond RMS, the topography image is visually examined and not further quantified. Subjective comparisons of the height maps are used to determine optimum CMP process conditions. While visual comparison of height maps can detect excursions, it requires manual inspection of the images. In this work we describe methods of quantifying post-CMP surface topography characteristics that are used in other technical fields such as geography and facial recognition. The topography image is divided into small surface patches of 7x7 pixels. Each surface patch is fitted to an analytic surface equation, in this case a third-order polynomial, from which the gradient, directional derivatives, and other characteristics are calculated. Based on these characteristics, the surface patch is labeled as peak, ridge, flat, saddle, ravine, pit, or hillside. The count of each label, and thus the associated histogram, is then used as a quantified characteristic of the surface topography, and could be used as a parameter for SPC (statistical process control) charting. In addition, the gradient for each surface patch is calculated, so the average, maximum, and other characteristics of the gradient distribution can be used for SPC. Repeatability measurements indicate high confidence: individual labels can show less than 2% relative standard deviation. When the histogram is considered, an associated chi-squared value can be defined with which to compare other measurements. The chi-squared value of the histogram is a very sensitive and quantifiable parameter for determining within-wafer and wafer-to-wafer topography non-uniformity. As for the gradient histogram distribution, the chi-squared value can again be calculated and used as yet another quantifiable parameter for SPC. In this work we measured the post-Cu-CMP topography of a die designed for 14nm technology. A region of interest (ROI) known to be indicative of the CMP processing was chosen for the topography analysis. The ROI, of size 1800 x 2500 pixels where each pixel represents 2 um, was measured repeatably. We show the sensitivity based on these measurements and the comparison between center and edge die measurements. The topography measurements and surface patch analysis were applied to hundreds of images representing the periodic process qualification runs required to control and verify CMP performance and tool matching. The analysis is shown to be sensitive to process conditions that vary in polishing time, type of slurry, CMP tool manufacturer, and CMP pad lifetime. Keywords: CMP, topography, image processing, metrology, interference microscopy, surface processing.

  8. Image processing for IMRT QA dosimetry.

    PubMed

    Zaini, Mehran R; Forest, Gary J; Loshek, David D

    2005-01-01

    We have automated the determination of the placement location of the dosimetry ion chamber within intensity-modulated radiotherapy (IMRT) fields, as part of streamlining the entire IMRT quality assurance process. This paper describes the mathematical image-processing techniques used to arrive at the appropriate measurement locations within the planar dose maps of the IMRT fields. A specific spot within the found region is identified based on its flatness, radiation magnitude, location, area, and the avoidance of the interleaf spaces. The techniques used include applying a Laplacian, dilation, erosion, region identification, and measurement point selection based on three parameters: the size of the erosion operator, the gradient, and the importance of the area of a region versus its magnitude. These three parameters are adjustable by the user; however, the first requires tweaking on extremely rare occasions, the gradient requires rare adjustment, and the last needs occasional fine-tuning. This algorithm has been tested in over 50 cases. In about 5% of cases, the algorithm does not find a measurement point, owing to extremely steep and narrow regions within the fluence maps. In such cases, our code allows manual selection of a point, although this too is difficult, since the fluence map does not lend itself to an appropriate measurement point selection.

  9. Dual-energy computed tomography in patients with cutaneous malignant melanoma: Comparison of noise-optimized and traditional virtual monoenergetic imaging.

    PubMed

    Martin, Simon S; Wichmann, Julian L; Weyer, Hendrik; Albrecht, Moritz H; D'Angelo, Tommaso; Leithner, Doris; Lenga, Lukas; Booz, Christian; Scholtz, Jan-Erik; Bodelle, Boris; Vogl, Thomas J; Hammerstingl, Renate

    2017-10-01

    The aim of this study was to investigate the impact of noise-optimized virtual monoenergetic imaging (VMI+) reconstructions on quantitative and qualitative image parameters in patients with cutaneous malignant melanoma at thoracoabdominal dual-energy computed tomography (DECT). Seventy-six patients (48 men; 66.6 ± 13.8 years) with metastatic cutaneous malignant melanoma underwent DECT of the thorax and abdomen. Images were post-processed with standard linear blending (M_0.6), traditional virtual monoenergetic (VMI), and VMI+ techniques. VMI and VMI+ images were reconstructed in 10-keV intervals from 40 to 100 keV. Attenuation measurements were performed in cutaneous melanoma lesions, as well as in regional lymph node, subcutaneous, and in-transit metastases, to calculate objective signal-to-noise (SNR) and contrast-to-noise (CNR) ratios. Five-point scales were used to evaluate overall image quality and lesion delineation by three radiologists with different levels of experience. The objective indices SNR and CNR were highest in the 40-keV VMI+ series (5.6 ± 2.6 and 12.4 ± 3.4), significantly superior to all other reconstructions (all P < 0.001). Qualitative image parameters showed the highest values for 50-keV and 60-keV VMI+ reconstructions (median 5, respectively; P ≤ 0.019) regarding overall image quality. Moreover, qualitative assessment of lesion delineation peaked at 40-keV VMI+ (median 5) and 50-keV VMI+ (median 4; P = 0.055), significantly superior to all other reconstructions (all P < 0.001). Low-keV noise-optimized VMI+ reconstructions substantially improve quantitative and qualitative image parameters, as well as subjective lesion delineation, compared to standard image reconstruction and traditional VMI in patients with cutaneous malignant melanoma at thoracoabdominal DECT. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. The impact of the condenser on cytogenetic image quality in digital microscope system.

    PubMed

    Ren, Liqiang; Li, Zheng; Li, Yuhua; Zheng, Bin; Li, Shibo; Chen, Xiaodong; Liu, Hong

    2013-01-01

    Optimizing the operational parameters of a digital microscope system is an important technique for acquiring high-quality cytogenetic images and facilitating the karyotyping process, so that the efficiency and accuracy of diagnosis can be improved. This study investigated the impact of the condenser on cytogenetic image quality and system working performance using a prototype digital microscope image scanning system. Both theoretical analysis and experimental validation, through objective evaluation of a resolution test chart and subjective observation of large numbers of specimens, were conducted. The results show that optimal image quality and a large depth of field (DOF) are simultaneously obtained when the numerical aperture of the condenser is set to 60%-70% of that of the corresponding objective. Under this condition, more analyzable chromosomes and more diagnostic information are obtained. As a result, the system shows higher working stability and fewer restrictions on the implementation of algorithms such as autofocusing, especially when the system is designed to achieve high-throughput continuous image scanning. Although the above quantitative results were obtained using a specific prototype system under the experimental conditions reported in this paper, the presented evaluation methodologies can provide valuable guidelines for optimizing operational parameters in cytogenetic imaging using high-throughput continuous scanning microscopes in clinical practice.

  11. On the use of water phantom images to calibrate and correct eddy current induced artefacts in MR diffusion tensor imaging.

    PubMed

    Bastin, M E; Armitage, P A

    2000-07-01

    The accurate determination of absolute measures of diffusion anisotropy in vivo using single-shot, echo-planar imaging techniques requires the acquisition of a set of high signal-to-noise ratio, diffusion-weighted images that are free from eddy current induced image distortions. Such geometric distortions can be characterized and corrected in brain imaging data using magnification (M), translation (T), and shear (S) distortion parameters derived from separate water phantom calibration experiments. Here we examine the practicalities of using separate phantom calibration data to correct high b-value diffusion tensor imaging data by investigating the stability of these distortion parameters, and hence the eddy currents, with time. It is found that M, T, and S vary only slowly with time (i.e., on the order of weeks), so that calibration scans need not be performed after every patient examination. This not only minimises the scan time required to collect the calibration data, but also the computational time needed to characterize these eddy current induced distortions. Examples of how measurements of diffusion anisotropy are improved using this post-processing scheme are also presented.

  12. Accelerated dynamic EPR imaging using fast acquisition and compressive recovery.

    PubMed

    Ahmad, Rizwan; Samouilov, Alexandre; Zweier, Jay L

    2016-12-01

    Electron paramagnetic resonance (EPR) allows quantitative imaging of tissue redox status, which provides important information about ischemic syndromes, cancer and other pathologies. For continuous wave EPR imaging, however, poor signal-to-noise ratio and low acquisition efficiency limit its ability to image dynamic processes in vivo, including tissue redox, where conditions can change rapidly. Here, we present a data acquisition and processing framework that couples fast acquisition with compressive sensing-inspired image recovery to enable EPR-based redox imaging with high spatial and temporal resolutions. The fast acquisition (FA) allows collecting more, albeit noisier, projections in a given scan time. The composite regularization based processing method, called spatio-temporal adaptive recovery (STAR), not only exploits sparsity in multiple representations of the spatio-temporal image but also adaptively adjusts the regularization strength for each representation based on its inherent level of sparsity. As a result, STAR adjusts to the disparity in the level of sparsity across multiple representations, without introducing any tuning parameter. Our simulation and phantom imaging studies indicate that a combination of fast acquisition and STAR (FASTAR) enables high-fidelity recovery of volumetric image series, with each volumetric image requiring less than 10 s of scan time. In addition to image fidelity, the time constants derived from FASTAR also match the ground truth closely, even when a small number of projections is used for recovery. This development will enhance the capability of EPR to study fast dynamic processes that cannot be investigated using existing EPR imaging techniques. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Sentinel-2: State of the Image Quality Calibration at the End of the Commissioning

    NASA Astrophysics Data System (ADS)

    Tremas, Thierry; Lonjou, Vincent; Lacherade, Sophie; Gaudel-Vacaresse, Angelique; Languille, Florie

    2016-08-01

    This article summarizes the activity of CNES during the in-orbit calibration phase of Sentinel-2A, as well as the transfer of production of GIPP (Ground Image Processing Parameters) from CNES to ESRIN. The state of the main calibration parameters and performances, a few months before the PDGS is declared fully operational, is listed and explained. In radiometry, special attention is paid to the absolute calibration using the on-board diffuser, and to vicarious calibration methods using instrumented or statistically well-characterized sites and inter-comparisons with other sensors. Regarding geometry, the presentation focuses on the performance of absolute location with and without reference points. The requirements for multi-band and multi-temporal registration are set out. Finally, the construction and the future role of the GRI (Ground Reference Images) are explained.

  14. ISLE (Image and Signal Processing LISP Environment) reference manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sherwood, R.J.; Searfus, R.M.

    1990-01-01

    ISLE is a rapid prototyping system for performing image and signal processing. It is designed to meet the needs of a person developing image and signal processing algorithms in a research environment. The image and signal processing modules in ISLE form a very capable package in themselves. They also provide a rich environment for quickly and easily integrating user-written software modules into the package. ISLE is well suited to applications in which a processing algorithm must be developed in an interactive manner. It is straightforward to develop an algorithm, load it into ISLE, apply the algorithm to an image or signal, display the results, then modify the algorithm and repeat the develop-load-apply-display cycle. ISLE consists of a collection of image and signal processing modules integrated into a cohesive package through a standard command interpreter. The ISLE developers elected to concentrate their effort on developing image and signal processing software rather than developing a command interpreter. A COMMON LISP interpreter was selected for the command interpreter because it already has the features desired in a command interpreter, it supports dynamic loading of modules for customization purposes, it supports run-time parameter and argument type checking, it is very well documented, and it is a commercially supported product. This manual is intended as a reference manual for the ISLE functions. The functions are grouped into a number of categories and briefly discussed in the Function Summary chapter. Full descriptions of the functions and all their arguments are given in the Function Descriptions chapter. 6 refs.

  15. Dynamics of Female Pelvic Floor Function Using Urodynamics, Ultrasound and Magnetic Resonance Imaging (MRI)

    PubMed Central

    Constantinou, Christos E.

    2009-01-01

    In this review, the diagnostic potential of evaluating female pelvic floor muscle (PFM) function using magnetic resonance and ultrasound imaging, in the context of urodynamic observations, is considered in terms of determining the mechanisms of urinary continence. A new approach is used to consider the dynamics of PFM activity by introducing new parameters derived from imaging. Novel image processing techniques are applied to illustrate the static anatomy and dynamic PFM function of stress-incontinent women pre- and post-operatively, as compared to asymptomatic subjects. Function was evaluated from the dynamics of organ displacement produced during voluntary and reflex activation. Technical innovations include the use of ultrasound analysis of the movement of structures during maneuvers associated with external stimuli. This approach is enabled by the development of criteria and new parameters that define the kinematics of PFM function. Principal among these parameters are displacement, velocity, acceleration, and the trajectory of pelvic floor landmarks. To accomplish this objective, movement detection methods, including motion tracking and segmentation algorithms, were developed to derive new parameters of trajectory, displacement, velocity, acceleration, and strain of pelvic structures during different maneuvers. Results highlight the importance of timing the movement and deformation of fast and stressful maneuvers, which are important for understanding the neuromuscular control and function of the PFM. Furthermore, observations suggest that the timing of responses is a significant factor separating continent from incontinent subjects. PMID:19303690

  16. Non-invasive quality evaluation of confluent cells by image-based orientation heterogeneity analysis.

    PubMed

    Sasaki, Kei; Sasaki, Hiroto; Takahashi, Atsuki; Kang, Siu; Yuasa, Tetsuya; Kato, Ryuji

    2016-02-01

    In recent years, cell and tissue therapies in regenerative medicine have advanced rapidly towards commercialization. However, conventional invasive cell quality assessment is incompatible with direct evaluation of the cells produced for such therapies, especially in the case of regenerative medicine products. Our group has demonstrated the potential of quantitative assessment of cell quality, using information obtained from cell images, for non-invasive real-time evaluation of regenerative medicine products. However, images of cells in the confluent state are often difficult to evaluate, because accurate recognition of cells is technically difficult and the morphological features of confluent cells are non-characteristic. To overcome these challenges, we developed a new image-processing algorithm, heterogeneity of orientation (H-Orient) processing, to describe the heterogeneous density of cells in the confluent state. In this algorithm, we introduced a Hessian calculation that converts pixel intensity data to orientation data and a statistical profiling calculation that evaluates the heterogeneity of orientations within an image, generating novel parameters that yield a quantitative profile of the image. Using such parameters, we tested the algorithm's performance in discriminating different qualities of cellular images with three types of clinically important cell quality check (QC) models: a remaining lifespan check (QC1), a manipulation error check (QC2), and a differentiation potential check (QC3). Our results show that our orientation analysis algorithm could predict with high accuracy the outcomes of all types of cellular quality checks (>84% average accuracy with cross-validation). Copyright © 2015 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
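
    A hedged sketch of the orientation-profiling idea follows: Hessian components give a local orientation per pixel, and a histogram statistic summarizes heterogeneity. The entropy measure and parameter values are assumptions, not the exact H-Orient profile set.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hedged sketch in the spirit of H-Orient: convert pixel intensities to
# local orientations via the Hessian, then profile the heterogeneity of
# the orientation distribution.
def orientation_heterogeneity(img, sigma=2.0, bins=18):
    # Second derivatives (Hessian components) via Gaussian filtering;
    # order=(0, 2) differentiates twice along the x (column) axis.
    Hxx = gaussian_filter(img, sigma, order=(0, 2))
    Hyy = gaussian_filter(img, sigma, order=(2, 0))
    Hxy = gaussian_filter(img, sigma, order=(1, 1))
    # Orientation of the dominant local structure from the Hessian:
    # theta = 0.5 * atan2(2*Hxy, Hxx - Hyy), in [-pi/2, pi/2).
    theta = 0.5 * np.arctan2(2.0 * Hxy, Hxx - Hyy)
    hist, _ = np.histogram(theta, bins=bins, range=(-np.pi / 2, np.pi / 2))
    p = hist / hist.sum()
    # Shannon entropy of the orientation histogram: higher values mean
    # more heterogeneous (less aligned) local structure.
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())
```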

  17. Elasticity Imaging of Polymeric Media

    PubMed Central

    Sridhar, Mallika; Liu, Jie; Insana, Michael F.

    2009-01-01

    Viscoelastic properties of soft tissues and hydropolymers depend on the strength of molecular bonding forces connecting the polymer matrix and surrounding fluids. The basis for diagnostic imaging is that disease processes alter molecular-scale bonding in ways that vary the measurable stiffness and viscosity of the tissues. This paper reviews linear viscoelastic theory as applied to gelatin hydrogels for the purpose of formulating approaches to molecular-scale interpretation of elasticity imaging in soft biological tissues. Comparing measurements acquired under different geometries, we investigate the limitations of viscoelastic parameters acquired under various imaging conditions. Quasistatic (step-and-hold and low-frequency harmonic) stimuli applied to gels during creep and stress relaxation experiments in confined and unconfined geometries reveal continuous, bimodal distributions of respondance times. Within the linear range of responses, gelatin will behave more like a solid or a fluid depending on the stimulus magnitude. Gelatin can be described statistically from a few parameters of low-order rheological models that form the basis of viscoelastic imaging. Unbiased estimates of imaging parameters are obtained only if creep data are acquired for longer than twice the highest retardance time constant and any steady-state viscous response has been eliminated. Elastic strain and retardance time images are found to provide the best combination of contrast and signal strength in gelatin. Retardance times indicate average behavior of fast (1–10 s) fluid flows and slow (50–400 s) matrix restructuring in response to the mechanical stimulus. Insofar as gelatin mimics other polymers, such as soft biological tissues, elasticity imaging can provide unique insights into complex structural and biochemical features of connective tissues affected by disease. PMID:17408331

  18. The fast and accurate 3D-face scanning technology based on laser triangle sensors

    NASA Astrophysics Data System (ADS)

    Wang, Jinjiang; Chang, Tianyu; Ge, Baozhen; Tian, Qingguo; Chen, Yang; Kong, Bin

    2013-08-01

    A laser triangulation scanning method and the structure of a 3D-face measurement system are introduced. In the presented system, a line laser source was selected as the optical probe so that one line is scanned at a time. A CCD image sensor was used to capture the image of the laser line modulated by the human face. The system parameters were obtained by calibration: the lens parameters of the imaging part were calibrated with a machine vision method, and the triangulation structure parameters were calibrated with finely arranged parallel wires. The CCD imaging part and the line laser indicator were mounted on a linear motor carriage, which scans the laser line from the top of the head to the neck. Because the nose protrudes and the eyes are recessed, one CCD image sensor cannot obtain a complete image of the laser line. In this system, two CCD image sensors were therefore placed symmetrically on the two sides of the laser indicator; in effect, this structure comprises two laser triangulation measurement units. Another novel design is that three laser indicators were arranged to reduce the scanning time, since it is difficult for a person to remain still for a long time. The 3D data were calculated after scanning, and further data processing includes 3D coordinate refinement, mesh calculation, and surface display. Experiments show that this system has a simple structure, high scanning speed, and good accuracy. The scanning range covers the whole head of an adult; the typical resolution is 0.5 mm.
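
    The depth recovery underlying such a system follows the standard laser triangulation geometry; the sketch below derives depth from the imaged line offset under an assumed baseline, focal length, and laser angle (illustrative values, not the system's).

```python
import numpy as np

# Hedged sketch of depth recovery by laser triangulation, the principle
# behind the scanner described above. All parameter values are
# illustrative assumptions.
def depth_from_offset(u_px, f_mm=16.0, baseline_mm=120.0,
                      laser_angle_deg=30.0, pixel_mm=0.005):
    """u_px: horizontal offset (pixels) of the imaged laser line from
    the optical axis; returns depth along the camera axis in mm."""
    alpha = np.deg2rad(laser_angle_deg)
    u = u_px * pixel_mm                    # offset on the sensor, mm
    # Camera ray: x = u * z / f. Laser plane from a source offset by the
    # baseline, tilted by alpha toward the axis: x = baseline - z*tan(alpha).
    # Intersecting the two gives the depth:
    return baseline_mm / (u / f_mm + np.tan(alpha))
```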

  19. Design and implementation of a biomedical image database (BDIM).

    PubMed

    Aubry, F; Badaoui, S; Kaplan, H; Di Paola, R

    1988-01-01

    We developed a biomedical image database (BDIM) that provides a standardized representation of value arrays, such as images and curves, and of their associated parameters, independently of their acquisition mode, to make their transmission and processing easier. It includes three kinds of user-oriented interactions. The network concept was kept as a constraint so the BDIM could be incorporated in a distributed structure, and we maintained compatibility with the ACR/NEMA communication protocol. The management of arrays and their associated parameters involves two distinct bases of objects, linked together via a gateway. The first manages arrays according to their storage mode: long-term storage on (optionally on-line) mass storage devices and, for consultation, partial copies of long-term stored arrays on hard disk. The second manages the associated parameters and the gateway by means of the relational DBMS ORACLE. Parameters are grouped into relations, some of which agree with the groups defined by the ACR/NEMA. The other relations describe objects resulting from processing the initial objects. These new objects are not described by the ACR/NEMA, but they can be inserted as shadow groups of the ACR/NEMA description. The relations describing the storage, together with the pathnames, constitute the gateway. ORACLE distributed tools and the two-level storage technique will allow the integration of the BDIM into a distributed structure. The query and array retrieval module (for single arrays or sequences) has access to the relations via a level that includes a dictionary managed by ORACLE. This dictionary translates ACR/NEMA objects into objects that can be handled by the DBMS.

  20. Hybrid vision activities at NASA Johnson Space Center

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.

    1990-01-01

    NASA's Johnson Space Center in Houston, Texas, is active in several aspects of hybrid image processing. (The term hybrid image processing refers to a system that combines digital and photonic processing). The major thrusts are autonomous space operations such as planetary landing, servicing, and rendezvous and docking. By processing images in non-Cartesian geometries to achieve shift invariance to canonical distortions, researchers use certain aspects of the human visual system for machine vision. That technology flow is bidirectional; researchers are investigating the possible utility of video-rate coordinate transformations for human low-vision patients. Man-in-the-loop teleoperations are also supported by the use of video-rate image-coordinate transformations, as researchers plan to use bandwidth compression tailored to the varying spatial acuity of the human operator. Technological elements being developed in the program include upgraded spatial light modulators, real-time coordinate transformations in video imagery, synthetic filters that robustly allow estimation of object pose parameters, convolutionally blurred filters that have continuously selectable invariance to such image changes as magnification and rotation, and optimization of optical correlation done with spatial light modulators that have limited range and couple both phase and amplitude in their response.

  1. Research on Synthetic Aperture Radar Processing for the Spaceborne Sliding Spotlight Mode.

    PubMed

    Shen, Shijian; Nie, Xin; Zhang, Xinggan

    2018-02-03

    Gaofen-3 (GF-3) is China's first C-band multi-polarization synthetic aperture radar (SAR) satellite, and it also provides the sliding spotlight mode for the first time. Sliding spotlight is a novel mode that realizes imaging with not only high resolution but also wide swath. Several key technologies for the sliding spotlight mode in high-resolution spaceborne SAR are investigated in this paper, mainly including the imaging parameters, the methods of velocity estimation and ambiguity elimination, and the imaging algorithms. Based on the chosen Convolution BackProjection (CBP) and Polar Format Algorithm (PFA) imaging algorithms, a fast implementation method of CBP and a modified PFA method suitable for the sliding spotlight mode are proposed, and the processing flows are derived in detail. Finally, the algorithms are validated by simulations and measured data.

  2. Parameter Estimation for the Blind Restoration of Blurred Imagery.

    DTIC Science & Technology

    1986-09-01

    [Garbled scan of a DTIC report; only fragments of the table of contents and notation survive: Noise Process; Restoration Methods: Inverse Filter, Wiener Filter; Table 2, Restored Pictures and Noise Variances; notation g(x,y), the degraded image; G(u,v), its discrete Fourier transform; n(x,y), the noise; N(u,v), the discrete Fourier transform of n.]
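
    Of the restoration methods named in the surviving fragments, the Wiener filter admits a compact sketch in the record's own notation; the blur kernel handling and the noise-to-signal ratio K below are assumptions.

```python
import numpy as np

# Hedged sketch of Wiener-filter restoration. With blur spectrum H and
# assumed noise-to-signal power ratio K, the restored spectrum is
# F_hat = conj(H) / (|H|^2 + K) * G, where G is the spectrum of the
# degraded image g(x,y).
def wiener_restore(g, h, K=0.01):
    """g: degraded image; h: blur kernel zero-padded to g's shape and
    centred at index [0, 0]; K: assumed noise-to-signal power ratio."""
    G = np.fft.fft2(g)
    H = np.fft.fft2(h)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + K) * G
    return np.real(np.fft.ifft2(F_hat))
```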

  3. Advanced image processing approach for ET estimation with remote sensing data of varying spectral, spatial and temporal resolutions

    Treesearch

    Sudhanshu Panda; Devendra Amatya; Young Kim; Ge Sun

    2016-01-01

    Evapotranspiration (ET) is one of the most important hydrologic parameters for vegetation growth, carbon sequestration, and other associated biodiversity study and analysis. Plant stomatal conductance, leaf area index, canopy temperature, soil moisture, and wind speed values generally correlate well with ET. It is difficult to estimate these hydrologic parameters of...

  4. Predicting tool life in turning operations using neural networks and image processing

    NASA Astrophysics Data System (ADS)

    Mikołajczyk, T.; Nowicki, K.; Bustillo, A.; Yu Pimenov, D.

    2018-05-01

    A two-step method is presented for the automatic prediction of tool life in turning operations. First, experimental data are collected for three cutting edges under the same constant processing conditions. In these experiments, the tool-wear parameter VB is measured with conventional methods, and the same parameter is estimated using Neural Wear, a customized software package that combines flank-wear image recognition and Artificial Neural Networks (ANNs). Second, an ANN model of tool life is trained with the data collected from the first two cutting edges, and the resulting model is evaluated on two different subsets for the third cutting edge: the first subset is obtained from direct measurement of tool wear, and the second is obtained from the Neural Wear software that estimates tool wear from edge images. Although the fully automated solution (Neural Wear software for tool-wear recognition plus the ANN model of tool-life prediction) presented a slightly higher error than the direct measurements, it was within the same range and can meet all industrial requirements. These results confirm that the combination of image recognition software and ANN modelling could potentially be developed into a useful industrial tool for low-cost estimation of tool life in turning operations.

  5. Uav-Based Automatic Tree Growth Measurement for Biomass Estimation

    NASA Astrophysics Data System (ADS)

    Karpina, M.; Jarząbek-Rychard, M.; Tymków, P.; Borkowski, A.

    2016-06-01

    Manual in-situ measurements of geometric tree parameters for biomass volume estimation are time-consuming and economically ineffective. Photogrammetric techniques can be deployed to automate the measurement procedure. The purpose of the presented work is automatic tree growth estimation based on Unmanned Aerial Vehicle (UAV) imagery. The experiment was conducted on an agricultural test field with Scots pine canopies. The data were collected using a Leica Aibotix X6V2 platform equipped with a Nikon D800 camera. Reference geometric parameters of selected sample plants were measured manually each week, and the in-situ measurements were synchronized with the UAV data acquisition; this correlation aimed at investigating the optimal flight conditions and parameter settings for image acquisition. The collected images were processed with a state-of-the-art tool, resulting in dense 3D point clouds. An algorithm was developed to estimate geometric tree parameters from the 3D points, as sketched below. Stem positions and tree tops are identified automatically in a cross-section, followed by the calculation of tree heights. The automatically derived height values are compared to the reference measurements performed manually. The comparison allows for evaluation of the automatic growth estimation process. The accuracy achieved using UAV photogrammetry for tree height estimation is about 5 cm.
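
    The height extraction step can be sketched as below, given detected stem positions and a dense point cloud; the cylindrical crown search and percentile ground model are simplified assumptions, not the paper's algorithm.

```python
import numpy as np

# Hedged sketch of tree-height estimation from a UAV-derived point
# cloud, in the spirit of the pipeline above.
def tree_heights(points, stem_xy, radius=0.5):
    """points: (N, 3) array (x, y, z) from dense matching;
    stem_xy: (M, 2) detected stem positions; returns heights in m."""
    # Crude ground level: the lowest percentile of all z values.
    ground_z = np.percentile(points[:, 2], 1)
    heights = []
    for sx, sy in stem_xy:
        # Points within `radius` metres of the stem in plan view.
        d = np.hypot(points[:, 0] - sx, points[:, 1] - sy)
        crown = points[d < radius]
        # Tree top = highest point above ground within the cylinder.
        heights.append(crown[:, 2].max() - ground_z if len(crown) else np.nan)
    return np.array(heights)
```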

  6. Physical reconstruction of packed beds and their morphological analysis: core-shell packings as an example.

    PubMed

    Bruns, Stefan; Tallarek, Ulrich

    2011-04-08

    We report a fast, nondestructive, and quantitative approach to characterizing the morphology of packed beds of fine particles by three-dimensional reconstruction from confocal laser scanning microscopy images, demonstrated here for a 100 μm i.d. fused-silica capillary packed with 2.6 μm core-shell particles. The presented method is generally applicable to silica-based capillary columns, monolithic or particulate, and comprises column pretreatment, image acquisition, image processing, and statistical analysis of the image data. It defines a unique platform for fundamental comparisons of particulate and monolithic supports using the statistical measures derived from their reconstructions. The resulting morphological data are column cross-sectional porosity profiles and chord length distributions from the interparticle macropore space, which are a descriptor of local density and can be characterized by a simplified k-gamma distribution. This distribution function provides a parameter of location and a parameter of dispersion, which can be correlated to individual chromatographic band-broadening processes (i.e., to transchannel and short-range interchannel contributions to eddy dispersion, respectively). Together with the transcolumn porosity profile, the presented approach allows the packing microstructure to be analyzed and quantified from pore to column scale, and it therefore holds great promise for comparative studies of packing conditions and particle properties, particularly for characterizing and minimizing the packing-process-specific heterogeneities in the final bed structure. Copyright © 2011 Elsevier B.V. All rights reserved.

  7. Magnetic Resonance Fingerprinting - a promising new approach to obtain standardized imaging biomarkers from MRI.

    PubMed

    2015-04-01

    Current routine MRI examinations rely on the acquisition of qualitative images whose contrast is "weighted" by a mixture of (magnetic) tissue properties. Recently, a novel approach was introduced, namely MR Fingerprinting (MRF), with a completely different approach to data acquisition, post-processing, and visualization. Instead of using a repeated, serial acquisition of data for the characterization of individual parameters of interest, MRF uses a pseudo-randomized acquisition that causes the signals from different tissues to have a unique signal evolution, or 'fingerprint', that is simultaneously a function of the multiple material properties under investigation. The processing after acquisition involves a pattern-recognition algorithm to match the fingerprints to a predefined dictionary of predicted signal evolutions. These can then be translated into quantitative maps of the magnetic parameters of interest. MRF could theoretically be applied to most traditional qualitative MRI methods, replacing them with the acquisition of truly quantitative tissue measures. MRF is thereby expected to be much more accurate and reproducible than traditional MRI and should improve multi-center studies and significantly reduce reader bias when diagnostic imaging is performed. Key Points: • MR fingerprinting (MRF) is a new approach to data acquisition, post-processing and visualization. • MRF provides highly accurate quantitative maps of T1, T2, proton density, and diffusion. • MRF may offer multiparametric imaging with high reproducibility, and high potential for multicenter/multivendor studies.
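
    The pattern-recognition matching step has a simple linear-algebra core, sketched below with a random dictionary standing in for Bloch-simulated signal evolutions; the entry count, timepoints, and parameter ranges are assumptions.

```python
import numpy as np

# Hedged sketch of MRF dictionary matching: each measured signal
# evolution is matched to the closest dictionary entry by normalized
# inner product, and the winning entry's tissue parameters (e.g. T1,
# T2) are assigned to that voxel.
rng = np.random.default_rng(1)
n_entries, n_timepoints, n_voxels = 5000, 1000, 256
dictionary = rng.standard_normal((n_entries, n_timepoints))
params = rng.uniform([100, 10], [3000, 300], size=(n_entries, 2))  # (T1, T2) ms

signals = rng.standard_normal((n_voxels, n_timepoints))  # measured evolutions

# Normalize rows so the inner product measures pattern similarity.
D = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
S = signals / np.linalg.norm(signals, axis=1, keepdims=True)

best = np.argmax(np.abs(S @ D.T), axis=1)    # best-matching entry per voxel
voxel_params = params[best]                  # quantitative T1/T2 estimates
```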

  8. Image segmentation using hidden Markov Gauss mixture models.

    PubMed

    Pyun, Kyungsuk; Lim, Johan; Won, Chee Sun; Gray, Robert M

    2007-07-01

    Image segmentation is an important tool in image processing and can serve as an efficient front end to sophisticated algorithms, thereby simplifying subsequent processing. We develop a multiclass image segmentation method using hidden Markov Gauss mixture models (HMGMMs) and provide examples of segmentation of aerial images and textures. HMGMMs incorporate supervised learning, fitting the observation probability distribution given each class by a Gauss mixture estimated using vector quantization with a minimum discrimination information (MDI) distortion. We formulate the image segmentation problem using a maximum a posteriori criterion and find the hidden states that maximize the posterior density given the observation. We estimate both the hidden Markov parameters and the hidden states using a stochastic expectation-maximization algorithm. Our results demonstrate that HMGMMs provide better classification in terms of Bayes risk and spatial homogeneity of the classified objects than do several popular methods, including classification and regression trees, learning vector quantization, causal hidden Markov models (HMMs), and multiresolution HMMs. The computational load of HMGMMs is similar to that of causal HMMs.

  9. Optimizing parameter choice for FSL-Brain Extraction Tool (BET) on 3D T1 images in multiple sclerosis.

    PubMed

    Popescu, V; Battaglini, M; Hoogstrate, W S; Verfaillie, S C J; Sluimer, I C; van Schijndel, R A; van Dijk, B W; Cover, K S; Knol, D L; Jenkinson, M; Barkhof, F; de Stefano, N; Vrenken, H

    2012-07-16

    Brain atrophy studies often use FSL-BET (Brain Extraction Tool) as the first step of image processing. Default BET does not always give satisfactory results on 3DT1 MR images, which negatively impacts atrophy measurements. Finding the right alternative BET settings can be a difficult and time-consuming task, which can introduce unwanted variability. Our aim was to systematically analyze the performance of BET in images of MS patients by varying its parameter and option combinations, and quantitatively comparing its results to a manual gold standard. Images from 159 MS patients were selected from different MAGNIMS consortium centers, covering 16 different 3DT1 acquisition protocols at 1.5 T or 3 T. Before running BET, one of three pre-processing pipelines was applied: (1) no pre-processing, (2) removal of neck slices, or (3) additional N3 inhomogeneity correction. Then BET was applied, systematically varying the fractional intensity threshold (the "f" parameter) and using either one of the main BET options ("B" - bias field correction and neck cleanup, "R" - robust brain center estimation, or "S" - eye and optic nerve cleanup) or none. For comparison, intracranial cavity masks were manually created for all image volumes. FSL-FAST (FMRIB's Automated Segmentation Tool) tissue-type segmentation was run on all BET output images and on the image volumes masked with the manual intracranial cavity masks (thus creating the gold-standard tissue masks). The resulting brain tissue masks were quantitatively compared to the gold standard using the Dice overlap coefficient (DOC). Normalized brain volumes (NBV) were calculated with SIENAX. NBV values obtained using BET settings other than default for SIENAX were compared to gold-standard NBV with the paired t-test. The parameter/preprocessing/option combinations resulted in 20,988 BET runs. The median DOC for default BET (f=0.5, g=0) was 0.913 (range 0.321-0.977) across all 159 native scans. For all acquisition protocols, brain extraction was substantially improved by lower values of "f" than the default. Using native images, optimum BET performance was observed for f=0.2 with option "B", giving median DOC=0.979 (range 0.867-0.994). Using neck removal before BET, optimum BET performance was observed for f=0.1 with option "B", giving median DOC=0.983 (range 0.844-0.996). Using these BET options for SIENAX instead of the default, the NBV values obtained from images after neck removal with f=0.1 and option "B" did not differ statistically from the NBV values obtained with the gold standard. Although default BET performs reasonably well on most 3DT1 images of MS patients, the performance can be improved substantially. The removal of the neck slices, either externally or within BET, has a marked positive effect on brain extraction quality. BET option "B" with f=0.1 after removal of the neck slices seems to work best for all acquisition protocols. Copyright © 2012 Elsevier Inc. All rights reserved.

  10. Color matrix display simulation based upon luminance and chromatic contrast sensitivity of early vision

    NASA Technical Reports Server (NTRS)

    Martin, Russel A.; Ahumada, Albert J., Jr.; Larimer, James O.

    1992-01-01

    This paper describes the design and operation of a new simulation model for color matrix display development. It models the physical structure, the signal processing, and the visual perception of static displays, allowing display design parameters to be optimized through image quality measures. The model is simple, implemented in the Mathematica computer language, and highly modular. Signal processing modules operate on the original image. The hardware modules describe the backlights and filters, the pixel shape, and the tiling of the pixels over the display. Small regions of the displayed image can be visualized on a CRT. The visual perception modules assume static foveal images. The image is converted into cone catches and then into luminance, red-green, and blue-yellow images. A Haar transform pyramid separates the three images into spatial frequency and direction-specific channels. The channels are scaled by weights taken from human contrast sensitivity measurements of chromatic and luminance mechanisms at similar frequencies and orientations. Each channel provides a detectability measure. These measures allow the comparison of images displayed on prospective devices and, thereby, the optimization of display designs.
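
    The channel-decomposition idea can be sketched as follows: a Haar pyramid splits an image into orientation-specific bands, each scaled by a contrast-sensitivity weight. The weight values below are placeholders rather than the paper's measured sensitivities, a single weight per orientation is used where the paper weights each frequency band separately, and image dimensions are assumed divisible by 2**n_levels.

        # One-level 2D Haar decomposition plus per-orientation channel weighting.
        import numpy as np

        def haar_level(img):
            """One 2D Haar step: returns (low, horizontal, vertical, diagonal) bands."""
            a = (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4
            h = (img[0::2, 0::2] + img[0::2, 1::2] - img[1::2, 0::2] - img[1::2, 1::2]) / 4
            v = (img[0::2, 0::2] - img[0::2, 1::2] + img[1::2, 0::2] - img[1::2, 1::2]) / 4
            d = (img[0::2, 0::2] - img[0::2, 1::2] - img[1::2, 0::2] + img[1::2, 1::2]) / 4
            return a, h, v, d

        def weighted_channels(img, weights=(1.0, 0.8, 0.8, 0.5), n_levels=3):
            """Build a Haar pyramid and scale each band by a per-orientation weight."""
            channels = []
            low = img
            for _ in range(n_levels):
                low, h, v, d = haar_level(low)
                channels.extend([weights[1] * h, weights[2] * v, weights[3] * d])
            channels.append(weights[0] * low)
            return channels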

  11. Cryo-EM Structure Determination Using Segmented Helical Image Reconstruction.

    PubMed

    Fromm, S A; Sachse, C

    2016-01-01

    Treating helices as single-particle-like segments followed by helical image reconstruction has become the method of choice for high-resolution structure determination of well-ordered helical viruses as well as flexible filaments. In this review, we illustrate how the combination of the latest hardware developments with optimized image processing routines has led to a series of near-atomic resolution structures of helical assemblies. Originally, the treatment of helices as a sequence of segments followed by Fourier-Bessel reconstruction revealed the potential to determine near-atomic resolution structures from helical specimens. In the meantime, real-space image processing of helices in a stack of single particles was developed; it enabled the structure determination of specimens that resisted classical Fourier helical reconstruction and also facilitated high-resolution structure determination. Despite the progress in real-space analysis, the combination of Fourier and real-space processing is still commonly used to better estimate the symmetry parameters, as the imposition of the correct helical symmetry is essential for high-resolution structure determination. Recent hardware advances, namely the introduction of direct electron detectors, have significantly enhanced image quality and, together with improved image processing procedures, have made segmented helical reconstruction a very productive cryo-EM structure determination method. © 2016 Elsevier Inc. All rights reserved.

  12. Development of a ground signal processor for digital synthetic array radar data

    NASA Technical Reports Server (NTRS)

    Griffin, C. R.; Estes, J. M.

    1981-01-01

    A modified APQ-102 sidelooking array radar (SLAR) in a B-57 aircraft test bed is used, together with other optical and infrared sensors, for remote sensing of Earth surface features for various users at NASA Johnson Space Center. The video from the radar is normally recorded on photographic film and subsequently processed photographically into high-resolution radar images. Using a high-speed sampling (digitizing) system, the two receiver channels of cross- and co-polarized video are recorded on wideband magnetic tape along with radar and platform parameters. These data are subsequently reformatted and processed into digital synthetic aperture radar images, with the image data available on magnetic tape for subsequent analysis by investigators. The system design and the results obtained are described.

  13. Registering parameters and granules of wave observations: IMAGE RPI success story

    NASA Astrophysics Data System (ADS)

    Galkin, I. A.; Charisi, A.; Fung, S. F.; Benson, R. F.; Reinisch, B. W.

    2015-12-01

    Modern metadata systems strive to help scientists locate data relevant to their research and then retrieve them quickly. Success of this mission depends on the organization and completeness of the metadata. Each relevant data resource has to be registered, the content of each has to be described, and each data file has to be accessible. Ultimately, data discoverability is about the practical ability to describe data content and location. Correspondingly, data registration has a "Parameter" level, at which content is specified by listing the available observed properties (parameters), and a "Granule" level, at which download links are given to data records (granules). Until recently, both parameter- and granule-level data registrations were easily accomplished at the NASA Virtual System Observatory by listing the provided parameters and building Granule documents with URLs to the data file locations, usually those at the NASA CDAWeb data warehouse. With the introduction of the Virtual Wave Observatory (VWO), however, the parameter/granule concept faced a scalability challenge. The wave phenomenon content is rich with descriptors of the wave generation, propagation, interaction with propagation media, and observation processes. Additionally, the wave phenomenon content varies from record to record, reflecting changes in the constituent processes, making it necessary to generate granule documents at sub-minute resolution. We will present the first success story of registering 234,178 records of IMAGE Radio Plasma Imager (RPI) plasmagram data and Level 2 derived data products in ESPAS (near-Earth Space Data Infrastructure for e-Science), using the VWO-inspired wave ontology. The granules are arranged in overlapping display and numerical data collections. Display data include (a) auto-prospected plasmagrams of potential interest, (b) interesting plasmagrams annotated by human analysts or software, and (c) spectacular plasmagrams annotated by analysts as publication-quality examples of RPI science. Numerical data products include plasmagram-derived records containing signatures of local and remote signal propagation, as well as field-aligned profiles of electron density in the plasmasphere. Registered granules of RPI observations are available in ESPAS for content-targeted search and retrieval.

  14. A method to investigate the diffusion properties of nuclear calcium.

    PubMed

    Queisser, Gillian; Wittum, Gabriel

    2011-10-01

    Modeling biophysical processes generally requires knowledge of the underlying biological parameters. The quality of simulation results is strongly influenced by the accuracy of these parameters, so identifying the values of the parameters included in a model is a major part of simulating biophysical processes. In many cases, secondary data can be gathered from experimental setups that are exploitable by mathematical inverse-modeling techniques. Here we describe a method for identifying the diffusion properties of calcium in the nuclei of rat hippocampal neurons. The method is based on a Gauss-Newton scheme for solving a least-squares minimization problem and was formulated so that it is readily implementable in the simulation platform uG. Making use of independently published space- and time-dependent calcium imaging data, generated from laser-assisted calcium uncaging experiments, we identified the diffusion properties of nuclear calcium and validated a previously published model that describes nuclear calcium dynamics as a diffusion process.
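
    A minimal sketch of the Gauss-Newton iteration for least-squares parameter identification, using a finite-difference Jacobian. The residual function (model output minus measured imaging data) and the starting guess are supplied by the user; nothing here reproduces the paper's uG implementation.

        # Gauss-Newton: iterate p <- p + step, where step solves J step = -r in the
        # least-squares sense and J is a finite-difference Jacobian of the residual.
        import numpy as np

        def gauss_newton(residual, p0, n_iter=20, eps=1e-6):
            p = np.asarray(p0, dtype=float)
            for _ in range(n_iter):
                r = residual(p)
                J = np.stack([(residual(p + eps * e) - r) / eps
                              for e in np.eye(p.size)], axis=1)
                step, *_ = np.linalg.lstsq(J, -r, rcond=None)
                p = p + step
                if np.linalg.norm(step) < 1e-10:   # converged
                    break
            return p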

  15. Heterogeneous Optimization Framework: Reproducible Preprocessing of Multi-Spectral Clinical MRI for Neuro-Oncology Imaging Research.

    PubMed

    Milchenko, Mikhail; Snyder, Abraham Z; LaMontagne, Pamela; Shimony, Joshua S; Benzinger, Tammie L; Fouke, Sarah Jost; Marcus, Daniel S

    2016-07-01

    Neuroimaging research often relies on clinically acquired magnetic resonance imaging (MRI) datasets that can originate from multiple institutions. Such datasets are characterized by high heterogeneity of modalities and variability of sequence parameters, which complicates the automation of image processing tasks such as spatial co-registration and physiological or functional image analysis. Given this heterogeneity, conventional processing workflows developed for research purposes are not optimal for clinical data. In this work, we describe an approach called the Heterogeneous Optimization Framework (HOF) for developing image analysis pipelines that can handle a high degree of clinical data non-uniformity. HOF provides a set of guidelines for configuration, algorithm development, deployment, interpretation of results, and quality control for such pipelines. At each step, we illustrate the HOF approach using the implementation of an automated pipeline for Multimodal Glioma Analysis (MGA) as an example. The MGA pipeline computes tissue diffusion characteristics from diffusion tensor imaging (DTI) acquisitions, hemodynamic characteristics using a perfusion model of dynamic susceptibility contrast (DSC) MRI, and spatial cross-modal co-registration of available anatomical, physiological, and derived patient images. Developing MGA within HOF enabled the processing of neuro-oncology MR imaging studies to be fully automated. MGA has been successfully used to analyze over 160 clinical tumor studies to date within several research projects. Introduction of the MGA pipeline improved image processing throughput and, most importantly, effectively produced co-registered datasets that were suitable for advanced analysis despite high heterogeneity in acquisition protocols.

  16. An in situ probe for on-line monitoring of cell density and viability on the basis of dark field microscopy in conjunction with image processing and supervised machine learning.

    PubMed

    Wei, Ning; You, Jia; Friehs, Karl; Flaschel, Erwin; Nattkemper, Tim Wilhelm

    2007-08-15

    Fermentation industries would benefit from on-line monitoring of important parameters describing cell growth, such as cell density and viability, during fermentation processes. For this purpose, an in situ probe has been developed that utilizes a dark field illumination unit to obtain high-contrast images with an integrated CCD camera. To test the probe, brewer's yeast Saccharomyces cerevisiae was chosen as the target microorganism. Images of the yeast cells in the bioreactors were captured, processed, and analyzed automatically by means of mechatronics, image processing, and machine learning. Two support vector machine based classifiers were used, first for separating cells from background and then for distinguishing live from dead cells. The evaluation of the in situ experiments showed strong correlation between results obtained by the probe and those obtained by widely accepted standard methods. Thus, the in situ probe has proved to be a feasible device for on-line monitoring of both cell density and viability with high accuracy and stability. (c) 2007 Wiley Periodicals, Inc.
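
    The two-stage classification idea might be sketched as below: one support vector machine separates cell regions from background, and a second separates live from dead cells. The feature extraction is a placeholder, and both classifiers are assumed to have been fitted beforehand on labeled patches; none of this is the probe's actual feature set.

        # Two cascaded SVM classifiers: cell vs. background, then live vs. dead.
        import numpy as np
        from sklearn.svm import SVC

        def extract_features(patches):
            """Placeholder feature extraction: intensity statistics per candidate region."""
            return np.array([[p.mean(), p.std()] for p in patches])

        # Both classifiers must be fitted on labeled training patches before use
        # (labels: 1 = cell / live, 0 = background / dead).
        cell_clf = SVC(kernel="rbf")
        viability_clf = SVC(kernel="rbf")

        def classify_patches(patches, cell_clf, viability_clf):
            X = extract_features(patches)
            is_cell = cell_clf.predict(X) == 1
            alive = np.zeros(len(patches), dtype=bool)
            alive[is_cell] = viability_clf.predict(X[is_cell]) == 1
            return is_cell, alive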

  17. Study of process parameter on mist lubrication of Titanium (Grade 5) alloy

    NASA Astrophysics Data System (ADS)

    Maity, Kalipada; Pradhan, Swastik

    2017-02-01

    This paper deals with the machinability of Ti-6Al-4V alloy under mist cooling lubrication using carbide inserts. The influence of the process parameters on the cutting forces, the evolution of tool wear, the surface finish of the workpiece, the material removal rate, and the chip reduction coefficient was investigated. Weighted principal component analysis coupled with grey relational analysis was applied to identify the optimum setting of the process parameters. The optimal setting was a cutting speed of 160 m/min, a feed of 0.16 mm/rev, and a depth of cut of 1.6 mm. The effects of cutting speed and depth of cut on the type of chip formation were observed; most of the chips formed were of the long tubular and long helical types. Image analyses of the segmented chips were performed to study the shape and size of the saw-tooth profile of serrated chips. It was found that by increasing the cutting speed from 95 m/min to 160 m/min, the free surface lamella of the chips increased and the saw-tooth segments became more clearly visible.
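
    The grey relational analysis step can be illustrated as follows: responses are normalized, grey relational coefficients are computed against the ideal sequence, and a weighted grade ranks the parameter settings. The weighted-PCA weighting that the paper couples with the analysis is omitted; equal weights and the distinguishing coefficient zeta=0.5 are conventional placeholders.

        # Grey relational grade for multi-response optimization over experimental runs.
        import numpy as np

        def grey_relational_grade(responses, larger_better, weights=None, zeta=0.5):
            """responses: (n_runs, n_criteria); larger_better: bool per criterion."""
            R = np.asarray(responses, dtype=float)
            lo, hi = R.min(axis=0), R.max(axis=0)
            span = np.where(hi > lo, hi - lo, 1.0)        # guard constant columns
            norm = np.where(larger_better, (R - lo) / span, (hi - R) / span)
            delta = 1.0 - norm                            # deviation from the ideal sequence
            coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
            w = np.full(R.shape[1], 1.0 / R.shape[1]) if weights is None else np.asarray(weights)
            return coeff @ w                              # one grade per run; highest wins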

  18. Deformation-induced speckle-pattern evolution and feasibility of correlational speckle tracking in optical coherence elastography.

    PubMed

    Zaitsev, Vladimir Y; Matveyev, Alexandr L; Matveev, Lev A; Gelikonov, Grigory V; Gelikonov, Valentin M; Vitkin, Alex

    2015-07-01

    The feasibility of speckle tracking in optical coherence tomography (OCT) based on digital image correlation (DIC) is discussed in the context of elastography problems. The specifics of applying DIC methods to OCT, compared to the processing of photographic images in mechanical-engineering applications, are emphasized and the main complications are pointed out. Analytical arguments are augmented by accurate numerical simulations of OCT speckle patterns. In contrast to DIC processing for displacement and strain estimation in photographic images, the accuracy of correlational speckle tracking in deformed OCT images is strongly affected by the coherent nature of speckles, for which strain-induced complications of speckle “blinking” and “boiling” are typical. The tracking accuracy is further compromised by the usually more pronounced pixelated structure of OCT scans compared with digital photographic images in classical DIC applications. Processing complex-valued OCT data (comprising both amplitude and phase) rather than intensity-only scans mitigates these deleterious effects to some degree. Criteria for the attainable speckle-tracking accuracy and its dependence on the key OCT system parameters are established.
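
    A bare-bones sketch of correlation-based speckle tracking on intensity scans: for a reference subwindow, the integer displacement maximizing the normalized cross-correlation is searched in the deformed scan. Window and search sizes are illustrative, and none of the paper's complex-valued (amplitude-plus-phase) processing is included.

        # Exhaustive integer-shift search maximizing normalized cross-correlation (NCC).
        import numpy as np

        def track_window(ref, deformed, y0, x0, win=16, search=8):
            """Return (dy, dx, ncc) of the best integer shift for one subwindow."""
            tpl = ref[y0:y0 + win, x0:x0 + win].astype(float)
            tpl = (tpl - tpl.mean()) / (tpl.std() + 1e-12)
            best, best_dy, best_dx = -np.inf, 0, 0
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    ys, xs = y0 + dy, x0 + dx
                    if ys < 0 or xs < 0 or ys + win > deformed.shape[0] or xs + win > deformed.shape[1]:
                        continue                      # candidate window falls outside the scan
                    cand = deformed[ys:ys + win, xs:xs + win].astype(float)
                    cand = (cand - cand.mean()) / (cand.std() + 1e-12)
                    ncc = float((tpl * cand).mean())
                    if ncc > best:
                        best, best_dy, best_dx = ncc, dy, dx
            return best_dy, best_dx, best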

  19. Real-Time On-Board Processing Validation of MSPI Ground Camera Images

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Werne, Thomas A.; Bekker, Dmitriy L.

    2010-01-01

    The Earth Sciences Decadal Survey identifies a multiangle, multispectral, high-accuracy polarization imager as one requirement for the Aerosol-Cloud-Ecosystem (ACE) mission. JPL has been developing a Multiangle SpectroPolarimetric Imager (MSPI) as a candidate to fill this need. A key technology development needed for MSPI is on-board signal processing to calculate polarimetry data as imaged by each of the 9 cameras forming the instrument. With funding from NASA's Advanced Information Systems Technology (AIST) Program, JPL is solving the real-time data processing requirements to demonstrate, for the first time, how signal data at 95 Mbytes/sec over 16 channels for each of the 9 multiangle cameras in the spaceborne instrument can be reduced on-board to 0.45 Mbytes/sec. This will produce the intensity and polarization data needed to characterize aerosol and cloud microphysical properties. Using a Xilinx Virtex-5 FPGA with embedded PowerPC440 processors, we have implemented a least-squares fitting algorithm that extracts intensity and polarimetric parameters in real time, thereby substantially reducing the image data volume for spacecraft downlink without loss of science information.
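
    The reduction principle can be illustrated with a generic linear least-squares fit: many raw samples per pixel are fitted to a few polarimetric parameters, and only the parameters are kept. The modulation model below (an intensity term plus cosine and sine terms) is a textbook stand-in, not the actual MSPI retardance model or the FPGA implementation.

        # Reduce a raw sample stream to three fitted polarimetric parameters.
        import numpy as np

        def fit_polarimetry(samples, phases):
            """Fit s_k ≈ I + Q*cos(phi_k) + U*sin(phi_k) by linear least squares;
            the three fitted parameters replace the raw sample stream."""
            A = np.column_stack([np.ones_like(phases), np.cos(phases), np.sin(phases)])
            (I, Q, U), *_ = np.linalg.lstsq(A, samples, rcond=None)
            return I, Q, U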

  20. Modeling the UO2 ex-AUC pellet process and predicting the fuel rod temperature distribution under steady-state operating condition

    NASA Astrophysics Data System (ADS)

    Hung, Nguyen Trong; Thuan, Le Ba; Thanh, Tran Chi; Nhuan, Hoang; Khoai, Do Van; Tung, Nguyen Van; Lee, Jin-Young; Jyothi, Rajesh Kumar

    2018-06-01

    Modeling the uranium dioxide pellet fabrication process from ammonium uranyl carbonate derived uranium dioxide powder (UO2 ex-AUC powder) and predicting the fuel rod temperature distribution are reported in this paper. Response surface methodology (RSM) and the FRAPCON-4.0 code were used to model the process and to predict the fuel rod temperature under steady-state operating conditions. The fuel rod design of the AP-1000 reactor, designed by Westinghouse Electric Corporation, with pellet fabrication parameters taken from this study, served as input data for the code. The predictions suggest a relationship between the fabrication parameters of the UO2 pellets and their temperature distribution in the nuclear reactor.
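
    The response-surface step can be sketched as an ordinary least-squares fit of a full second-order polynomial model to designed-experiment data. The factor count and data shapes are illustrative, not the paper's UO2 pellet dataset or its FRAPCON coupling.

        # Fit y = b0 + sum(bi*xi) + sum(bii*xi^2) + sum(bij*xi*xj) by least squares.
        import numpy as np
        from itertools import combinations

        def quadratic_design(X):
            """X: (n_runs, n_factors). Returns the full second-order design matrix."""
            n, k = X.shape
            cols = [np.ones(n)] + [X[:, i] for i in range(k)] \
                 + [X[:, i] ** 2 for i in range(k)] \
                 + [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
            return np.column_stack(cols)

        def fit_response_surface(X, y):
            A = quadratic_design(X)
            coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
            return coeffs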
