Sample records for image-derived input

  1. Comparison of the Diagnostic Accuracy of DSC- and Dynamic Contrast-Enhanced MRI in the Preoperative Grading of Astrocytomas.

    PubMed

    Nguyen, T B; Cron, G O; Perdrizet, K; Bezzina, K; Torres, C H; Chakraborty, S; Woulfe, J; Jansen, G H; Sinclair, J; Thornhill, R E; Foottit, C; Zanette, B; Cameron, I G

    2015-11-01

    Dynamic contrast-enhanced MR imaging parameters can be biased by poor measurement of the vascular input function. We have compared the diagnostic accuracy of dynamic contrast-enhanced MR imaging by using a phase-derived vascular input function and "bookend" T1 measurements with DSC MR imaging for preoperative grading of astrocytomas. This prospective study included 48 patients with a new pathologic diagnosis of an astrocytoma. Preoperative MR imaging was performed at 3T and included 2 injections of 5-mL gadobutrol for dynamic contrast-enhanced and DSC MR imaging. During dynamic contrast-enhanced MR imaging, both magnitude and phase images were acquired to estimate plasma volume and the volume transfer constant from a phase-derived vascular input function (Vp_Φ and K(trans)_Φ) and from a magnitude-derived vascular input function (Vp_SI and K(trans)_SI). From DSC MR imaging, corrected relative CBV was computed. Four ROIs were placed over the solid part of the tumor, and the highest value among the ROIs was recorded. A Mann-Whitney U test was used to test for differences between grades. Diagnostic accuracy was assessed by using receiver operating characteristic analysis. Vp_Φ and K(trans)_Φ values were lower for grade II compared with grade III astrocytomas (P < .05). Vp_SI and K(trans)_SI were not significantly different between grade II and grade III astrocytomas (P = .08–.15). Relative CBV and dynamic contrast-enhanced MR imaging parameters except for K(trans)_SI were lower for grade III compared with grade IV (P ≤ .05). In differentiating low- and high-grade astrocytomas, we found no statistically significant difference in diagnostic accuracy between relative CBV and dynamic contrast-enhanced MR imaging parameters. 
In the preoperative grading of astrocytomas, the diagnostic accuracy of dynamic contrast-enhanced MR imaging parameters is similar to that of relative CBV. © 2015 by American Journal of Neuroradiology.
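
    The grade comparison and ROC analysis this record describes can be sketched numerically. The toy example below uses synthetic values (not the study's measurements) to compute a Mann-Whitney U statistic for two hypothetical grade groups and convert it to a ROC AUC via the identity AUC = U / (n1 · n2):

```python
import numpy as np

# Synthetic perfusion-parameter values for two tumor grades (illustrative only)
rng = np.random.default_rng(3)
grade2 = rng.normal(1.0, 0.4, 15)                # e.g. a Vp-like parameter, grade II
grade3 = rng.normal(1.6, 0.5, 18)                # same parameter, grade III

# Mann-Whitney U: count of (grade3 > grade2) pairs, half-credit for ties
diff = grade3[:, None] - grade2[None, :]
u_stat = (diff > 0).sum() + 0.5 * (diff == 0).sum()

# The ROC AUC for discriminating grade III from grade II is U / (n1 * n2)
auc = u_stat / (len(grade2) * len(grade3))
```

Because the rank test and the ROC AUC are two views of the same pairwise comparison, reporting both (as the abstract does) is internally consistent.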

  2. Influence of speckle image reconstruction on photometric precision for large solar telescopes

    NASA Astrophysics Data System (ADS)

    Peck, C. L.; Wöger, F.; Marino, J.

    2017-11-01

    Context. High-resolution observations from large solar telescopes require adaptive optics (AO) systems to overcome image degradation caused by Earth's turbulent atmosphere. AO corrections are, however, only partial. Achieving near-diffraction-limited resolution over a large field of view typically requires post-facto techniques to recover the source image. Aims: This study aims to examine the expected photometric precision of amplitude-reconstructed solar images calibrated using models for the on-axis speckle transfer functions and input parameters derived from AO control data. We perform a sensitivity analysis of the photometric precision under variations in the model input parameters for high-resolution solar images consistent with four-meter-class solar telescopes. Methods: Using simulations of both atmospheric turbulence and partial compensation by an AO system, we computed the speckle transfer function under variations in the input parameters. We then convolved high-resolution numerical simulations of the solar photosphere with the simulated atmospheric transfer function, and subsequently deconvolved them with the model speckle transfer function to obtain a reconstructed image. To compute the resulting photometric precision, we compared the intensity of the original image with that of the reconstructed image. Results: The analysis demonstrates that high photometric precision can be obtained for speckle amplitude reconstruction using speckle transfer function models combined with AO-derived input parameters. Additionally, it shows that the reconstruction is most sensitive to the input parameter that characterizes the atmospheric distortion, and sub-2% photometric precision is readily obtained when that parameter is well estimated.
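
    The convolve-then-deconvolve experiment described in the Methods can be sketched in Fourier space. Everything below is synthetic: the Gaussian transfer functions and the 10% "seeing" error are illustrative assumptions, not the paper's turbulence models.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
true_img = 1.0 + 0.1 * rng.standard_normal((n, n))   # stand-in for granulation contrast

# Toy Gaussian "atmospheric" MTF and a slightly mis-estimated model STF
fx = np.fft.fftfreq(n)
FX, FY = np.meshgrid(fx, fx)
r2 = FX**2 + FY**2
atm_mtf = np.exp(-r2 / (2 * 0.05**2))
model_stf = np.exp(-r2 / (2 * 0.055**2))             # ~10% error in the seeing parameter

# Degrade with the "atmosphere", then deconvolve with the model STF
degraded = np.fft.ifft2(np.fft.fft2(true_img) * atm_mtf).real
recon = np.fft.ifft2(np.fft.fft2(degraded) / np.maximum(model_stf, 1e-3)).real

# Photometric precision: rms intensity error relative to the mean level
def precision(img):
    return np.sqrt(np.mean((img - true_img) ** 2)) / true_img.mean()

prec_degraded, prec_recon = precision(degraded), precision(recon)
```

Even with a mis-estimated input parameter, the deconvolved image is photometrically closer to the source than the degraded one, which is the qualitative behavior the sensitivity analysis quantifies.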

  3. Relationship between fatigue of generation II image intensifier and input illumination

    NASA Astrophysics Data System (ADS)

    Chen, Qingyou

    1995-09-01

    Fatigue in an image intensifier affects the imaging performance of the night vision system. In this paper, using the principle of Joule heating, we derive a mathematical formula for the heat generated in the semiconductor photocathode. We describe the relationships among the various parameters in the formula, and we discuss how excessive input illumination causes fatigue in Generation II image intensifiers.

  4. Image-derived and arterial blood sampled input functions for quantitative PET imaging of the angiotensin II subtype 1 receptor in the kidney

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, Tao; Tsui, Benjamin M. W.; Li, Xin

    Purpose: The radioligand {sup 11}C-KR31173 has been introduced for positron emission tomography (PET) imaging of the angiotensin II subtype 1 receptor in the kidney in vivo. To study the biokinetics of {sup 11}C-KR31173 with a compartmental model, the input function is needed. Collection and analysis of arterial blood samples are the established approach to obtain the input function, but they are not feasible in patients with renal diseases. The goal of this study was to develop a quantitative technique that can provide an accurate image-derived input function (ID-IF) to replace the conventional invasive arterial sampling, and to test the method in pigs with the goal of translation into human studies. Methods: The experimental animals were injected with [{sup 11}C]KR31173 and scanned up to 90 min with dynamic PET. Arterial blood samples were collected for the artery-derived input function (AD-IF) and used as a gold standard for the ID-IF. Before PET, magnetic resonance angiography of the kidneys was obtained to provide the anatomical information required for derivation of the recovery coefficients in the abdominal aorta, a requirement for partial volume correction of the ID-IF. Different image reconstruction methods, filtered back projection (FBP) and ordered subset expectation maximization (OS-EM), were investigated for the best trade-off between bias and variance of the ID-IF. The effects of kidney uptake on the quantitative accuracy of the ID-IF were also studied. Biological variables such as red blood cell binding and radioligand metabolism were also taken into consideration. A single blood sample was used for calibration in the later phase of the input function. Results: In the first 2 min after injection, the OS-EM based ID-IF was found to be biased, and the bias was found to be induced by the kidney uptake. No such bias was found with the FBP based image reconstruction method. 
    However, the OS-EM based image reconstruction was found to reduce variance in the subsequent phase of the ID-IF. The combined use of FBP and OS-EM resulted in reduced bias and noise. After performing all the necessary corrections, the areas under the curves (AUCs) of the ID-IF were close to those of the AD-IF (average AUC ratio = 1 ± 0.08) during the early phase. When applied in a two-tissue-compartmental kinetic model, the average difference between the estimated model parameters from the ID-IF and AD-IF was 10%, which was within the error of the estimation method. Conclusions: The bias of radioligand concentration in the aorta from the OS-EM image reconstruction is significantly affected by radioligand uptake in the adjacent kidney and cannot be neglected for quantitative evaluation. With careful calibrations and corrections, the ID-IF derived from quantitative dynamic PET images can be used as the input function of the compartmental model to quantify the renal kinetics of {sup 11}C-KR31173 in experimental animals, and the authors intend to evaluate this method in future human studies.
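
    The two corrections central to this record, a recovery-coefficient (partial-volume) correction of the aorta signal and a single late blood sample to calibrate the scale, can be illustrated with a toy curve. The recovery coefficient, sampling time, and curve shape below are assumptions for the sketch, not values from the study.

```python
import numpy as np

t = np.arange(0, 91.0)                      # minutes
true_aif = 100 * t * np.exp(-t / 2) + 5 * np.exp(-t / 40)   # synthetic arterial curve

recovery_coeff = 0.7                        # from vessel geometry (assumed value)
measured = recovery_coeff * true_aif        # PET underestimates the aorta signal (PVE)
pvc_aif = measured / recovery_coeff         # recovery-coefficient correction

# One late blood sample calibrates any residual scale error in the tail
i60 = np.searchsorted(t, 60)
id_if = pvc_aif * (true_aif[i60] / pvc_aif[i60])

# AUC agreement with the sampled curve, the metric the study reports
auc_ratio = id_if.sum() / true_aif.sum()
```

In this idealized case the corrections invert the degradation exactly; in practice reconstruction bias and noise make the AUC ratio scatter around 1, as the reported 1 ± 0.08 indicates.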

  5. Application of image-derived and venous input functions in major depression using [carbonyl-(11)C]WAY-100635.

    PubMed

    Hahn, Andreas; Nics, Lukas; Baldinger, Pia; Wadsak, Wolfgang; Savli, Markus; Kraus, Christoph; Birkfellner, Wolfgang; Ungersboeck, Johanna; Haeusler, Daniela; Mitterhauser, Markus; Karanikas, Georgios; Kasper, Siegfried; Frey, Richard; Lanzenberger, Rupert

    2013-04-01

    Image-derived input functions (IDIFs) represent a promising non-invasive alternative to arterial blood sampling for quantification in positron emission tomography (PET) studies. However, routine applications in patients and longitudinal designs are largely missing despite widespread attempts in healthy subjects. The aim of this study was to apply a previously validated approach to a clinical sample of patients with major depressive disorder (MDD) before and after electroconvulsive therapy (ECT). Eleven scans from 5 patients with venous blood sampling were obtained with the radioligand [carbonyl-(11)C]WAY-100635 at baseline, before and after 11.0±1.2 ECT sessions. IDIFs were defined by two different image reconstruction algorithms: (1) OSEM with subsequent partial volume correction (OSEM+PVC) and (2) reconstruction-based modelling of the point spread function (TrueX). Serotonin-1A receptor (5-HT1A) binding potentials (BPP, BPND) were quantified with a two-tissue compartment model (2TCM) and a reference region model (MRTM2). Compared to MRTM2, good agreement in 5-HT1A BPND was found when using input functions from OSEM+PVC (R(2)=0.82) but not TrueX (R(2)=0.57, p<0.001), which is further reflected by lower IDIF peaks for TrueX (p<0.001). Following ECT, decreased 5-HT1A BPND and BPP were found with the 2TCM using OSEM+PVC (23%-35%), except for one patient showing only subtle changes. In contrast, MRTM2 and IDIFs from TrueX gave unstable results for this patient, most probably due to a 2.4-fold underestimation of non-specific binding. Using image-derived and venous input functions defined by OSEM with subsequent PVC, we confirm previously reported decreases in 5-HT1A binding in MDD patients after ECT. In contrast to reference region modeling, quantification with image-derived input functions showed consistent results in a clinical setting due to accurate modeling of non-specific binding with OSEM+PVC. Copyright © 2013 Elsevier Inc. All rights reserved.

  6. Arterial input function derived from pairwise correlations between PET-image voxels.

    PubMed

    Schain, Martin; Benjaminsson, Simon; Varnäs, Katarina; Forsberg, Anton; Halldin, Christer; Lansner, Anders; Farde, Lars; Varrone, Andrea

    2013-07-01

    A metabolite-corrected arterial input function is a prerequisite for quantification of positron emission tomography (PET) data by compartmental analysis. This quantitative approach is also necessary for radioligands without suitable reference regions in the brain. The measurement is laborious and requires cannulation of a peripheral artery, a procedure that can be associated with patient discomfort and potential adverse events. A noninvasive procedure for obtaining the arterial input function is thus preferable. In this study, we present a novel method to obtain image-derived input functions (IDIFs). The method is based on calculation of the Pearson correlation coefficient between the time-activity curves of voxel pairs in the PET image to localize voxels displaying blood-like behavior. The method was evaluated using data obtained in human studies with the radioligands [(11)C]flumazenil and [(11)C]AZ10419369, and its performance was compared with that of three previously published methods. The distribution volumes (VT) obtained using IDIFs were compared with those obtained using traditional arterial measurements. Overall, the agreement in VT was good (∼3% difference) for input functions obtained using the pairwise correlation approach. This approach performed similarly to or even better than the other methods, and could be considered in applied clinical studies. Applications to other radioligands are needed for further verification.
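
    The core idea, use pairwise Pearson correlations between voxel time-activity curves (TACs) to find a coherent cluster of blood-like voxels and average them into an IDIF, can be sketched on synthetic data. The earliest-peak seed rule and the 0.9 threshold below are assumptions of this sketch, not the paper's exact selection procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.1, 60, 40)                       # minutes
blood = 50 * t * np.exp(-t / 1.5) + 2.0            # blood-like TAC: sharp early peak
tissue = 5 * (1 - np.exp(-t / 8))                  # tissue-like TAC: slow uptake

n_vox = 200
is_blood = np.arange(n_vox) < 20                   # 20 hidden blood-like voxels
tacs = np.where(is_blood[:, None], blood, tissue) \
       + 0.3 * rng.standard_normal((n_vox, len(t)))

corr = np.corrcoef(tacs)                           # Pearson r between all voxel pairs

# Seed on the earliest-peaking voxel, then keep every voxel whose TAC
# correlates strongly with it (threshold assumed for this sketch)
seed = int(np.argmin(np.argmax(tacs, axis=1)))
blood_like = corr[seed] > 0.9
idif = tacs[blood_like].mean(axis=0)               # image-derived input function
```

Averaging the selected TACs suppresses voxel noise, which is why the cluster-then-average step matters as much as the correlation itself.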

  7. Correlation of Tumor Immunohistochemistry with Dynamic Contrast-Enhanced and DSC-MRI Parameters in Patients with Gliomas.

    PubMed

    Nguyen, T B; Cron, G O; Bezzina, K; Perdrizet, K; Torres, C H; Chakraborty, S; Woulfe, J; Jansen, G H; Thornhill, R E; Zanette, B; Cameron, I G

    2016-12-01

    Tumor CBV is a prognostic and predictive marker for patients with gliomas. Tumor CBV can be measured noninvasively with different MR imaging techniques; however, it is not clear which of these techniques most closely reflects histologically measured tumor CBV. Our aim was to investigate the correlations between dynamic contrast-enhanced and DSC-MR imaging parameters and immunohistochemistry in patients with gliomas. Forty-three patients with a new diagnosis of glioma underwent a preoperative MR imaging examination with dynamic contrast-enhanced and DSC sequences. Unnormalized and normalized cerebral blood volume was obtained from DSC MR imaging. Two sets of plasma volume and volume transfer constant maps were obtained from dynamic contrast-enhanced MR imaging. Plasma volume obtained from the phase-derived vascular input function and bookend T1 mapping (Vp_Φ) and volume transfer constant obtained from the phase-derived vascular input function and bookend T1 mapping (Ktrans_Φ) were determined. Plasma volume obtained from the magnitude-derived vascular input function (Vp_SI) and volume transfer constant obtained from the magnitude-derived vascular input function (Ktrans_SI) were acquired, without T1 mapping. Using CD34 staining, we measured microvessel density and microvessel area within 3 representative areas of the resected tumor specimen. The Mann-Whitney U test was used to test for differences according to grade and degree of enhancement. The Spearman correlation was performed to determine the relationship between dynamic contrast-enhanced and DSC parameters and histopathologic measurements. Microvessel area, microvessel density, dynamic contrast-enhanced, and DSC-MR imaging parameters varied according to the grade and degree of enhancement (P < .05). A strong correlation was found between microvessel area and Vp_Φ and between microvessel area and unnormalized blood volume (rs ≥ 0.61). 
    A moderate correlation was found between microvessel area and normalized blood volume, microvessel area and Vp_SI, microvessel area and Ktrans_Φ, microvessel area and Ktrans_SI, microvessel density and Vp_Φ, microvessel density and unnormalized blood volume, and microvessel density and normalized blood volume (0.44 ≤ rs ≤ 0.57). A weaker correlation was found between microvessel density and Ktrans_Φ and between microvessel density and Ktrans_SI (rs ≤ 0.41). With dynamic contrast-enhanced MR imaging, use of a phase-derived vascular input function and bookend T1 mapping improves the correlation between immunohistochemistry and plasma volume, but not between immunohistochemistry and the volume transfer constant. With DSC-MR imaging, normalization of tumor CBV could decrease the correlation with microvessel area. © 2016 by American Journal of Neuroradiology.

  8. Sensing system for detection and control of deposition on pendant tubes in recovery and power boilers

    DOEpatents

    Kychakoff, George; Afromowitz, Martin A; Hugle, Richard E

    2005-06-21

    A system for detection and control of deposition on pendant tubes in recovery and power boilers includes one or more deposit-monitoring sensors operating in infrared regions at about 4 or 8.7 microns and directly producing images of the interior of the boiler. An image pre-processing circuit (95) captures the 2-D image formed by the video data input and includes a low-pass filter for noise filtering of the video input. An image segmentation module (105) separates the image of the recovery boiler interior into background, pendant tubes, and deposition. An image-understanding unit (115) matches the derived regions to a 3-D model of the boiler, derives a 3-D structure of the deposition on the pendant tubes, and provides the information about deposits to the plant distributed control system (130) for more efficient operation of the plant's pendant tube cleaning and operating systems.

  9. Shallow sea-floor reflectance and water depth derived by unmixing multispectral imagery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bierwirth, P.N.; Lee, T.J.; Burne, R.V.

    1993-03-01

    A major problem for mapping shallow-water zones by the analysis of remotely sensed data is that contrast effects due to water depth obscure and distort the spectral nature of the substrate. This paper outlines a new method which unmixes the exponential influence of depth in each pixel by employing a mathematical constraint. This leaves a multispectral residual which represents relative substrate reflectance. Inputs to the process are the raw multispectral data and water attenuation coefficients derived by the co-analysis of known bathymetry and remotely sensed data. Outputs are substrate-reflectance images corresponding to the input bands and a greyscale depth image. The method has been applied in the analysis of Landsat TM data at Hamelin Pool in Shark Bay, Western Australia. Algorithm-derived substrate reflectance images for Landsat TM bands 1, 2, and 3 combined in color represent the optimum enhancement for mapping or classifying substrate types. As a result, this color image successfully delineated features which were obscured in the raw data, such as the distributions of sea-grasses, microbial mats, and sandy areas.
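
    The exponential unmixing can be illustrated with a toy two-band version. The constraint used here, that each pixel's log-reflectances sum to zero, is a simplifying assumption for the sketch and not necessarily the paper's exact constraint; the attenuation coefficients and depths are invented.

```python
import numpy as np

k = np.array([0.10, 0.25])            # water attenuation coefficients per band (assumed)
true_z = np.array([1.0, 3.0, 6.0])    # depths (m) for three pixels
true_logR = np.array([[0.2, -0.2],    # substrate log-reflectance; each row sums to 0
                      [-0.4, 0.4],
                      [0.0, 0.0]])

# Forward model: observed log-radiance = log-reflectance - 2 * k * depth
x = true_logR - 2 * k * true_z[:, None]

# Unmixing: with sum(logR) == 0 per pixel, depth falls out of the band sum of x
z_hat = -x.sum(axis=1) / (2 * k.sum())

# The residual after removing the depth term is the relative substrate reflectance
logR_hat = x + 2 * k * z_hat[:, None]
```

The outputs mirror the abstract's: one reflectance residual per input band plus a depth image, recovered per pixel without knowing the substrate in advance.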

  10. Evaluating the Usefulness of High-Temporal Resolution Vegetation Indices to Identify Crop Types

    NASA Astrophysics Data System (ADS)

    Hilbert, K.; Lewis, D.; O'Hara, C. G.

    2006-12-01

    The National Aeronautics and Space Administration (NASA) and the United States Department of Agriculture (USDA) jointly sponsored research covering the 2004 to 2006 South American crop seasons that focused on developing methods for the USDA's Foreign Agricultural Service's (FAS) Production Estimates and Crop Assessment Division (PECAD) to identify crop types using MODIS-derived, hyper-temporal Normalized Difference Vegetation Index (NDVI) images. NDVI images were composited in 8-day intervals from daily NDVI images and aggregated to create a hyper-temporal NDVI layerstack. This NDVI layerstack was used as input to image classification algorithms. Research results indicated that creating high-temporal resolution NDVI composites from NASA's MODerate Resolution Imaging Spectroradiometer (MODIS) data products provides useful input to crop type classifications as well as potentially useful input for regional crop productivity modeling efforts. A current NASA-sponsored Rapid Prototyping Capability (RPC) experiment will assess the utility of simulated future Visible Infrared Imager/Radiometer Suite (VIIRS) imagery for conducting NDVI-derived land cover and specific crop type classifications. In the experiment, methods will be considered to refine current MODIS data streams, reduce the noise content of the MODIS data, and utilize the MODIS data as an input to the VIIRS simulation process. The effort is also being conducted in concert with an ISS project that will further evaluate, verify, and validate the usefulness of specific data products to provide remote sensing-derived input for the Sinclair Model, a semi-mechanistic model for estimating crop yield. The study area encompasses a large portion of the Pampas region of Argentina--a major world producer of crops such as corn, soybeans, and wheat, which makes it a competitor to the US. 
ITD partnered with researchers at the Center for Surveying Agricultural and Natural Resources (CREAN) of the National University of Cordoba, Argentina, and CREAN personnel collected and continue to collect field-level, GIS-based in situ information. Current efforts involve both developing and optimizing software tools for the necessary data processing. The software includes the Time Series Product Tool (TSPT), Leica's ERDAS Imagine, and Mississippi State University's Temporal Map Algebra computational tools.
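
    The compositing-and-layerstacking step can be sketched as follows. The abstract does not name the compositing rule, so maximum-value compositing (MVC, the common choice for suppressing cloud-depressed NDVI) is assumed here, and all imagery is synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
days, h, w = 32, 4, 4

# Synthetic daily NDVI with ~30% cloud contamination (clouds depress NDVI)
daily_ndvi = np.clip(0.5 + 0.2 * rng.standard_normal((days, h, w)), -1, 1)
cloudy = rng.random((days, h, w)) < 0.3
daily_ndvi[cloudy] -= 0.4

# 8-day maximum-value composite: per pixel, keep the highest (least cloudy) value
layerstack = daily_ndvi.reshape(days // 8, 8, h, w).max(axis=1)

# Per-pixel temporal profile: the feature vector fed to a crop-type classifier
features = layerstack.reshape(days // 8, -1).T     # shape (pixels, composites)
```

Stacking the composites turns each pixel into a short time series whose seasonal shape, rather than any single date, separates crop types.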

  11. Derivation of formulas for root-mean-square errors in location, orientation, and shape in triangulation solution of an elongated object in space

    NASA Technical Reports Server (NTRS)

    Long, S. A. T.

    1974-01-01

    Formulas are derived for the root-mean-square (rms) displacement, slope, and curvature errors in an azimuth-elevation image trace of an elongated object in space, as functions of the number and spacing of the input data points and the rms elevation error in the individual input data points from a single observation station. Also, formulas are derived for the total rms displacement, slope, and curvature error vectors in the triangulation solution of an elongated object in space due to the rms displacement, slope, and curvature errors, respectively, in the azimuth-elevation image traces from different observation stations. The total rms displacement, slope, and curvature error vectors provide useful measure numbers for determining the relative merits of two or more different triangulation procedures applicable to elongated objects in space.

  12. Sharpening of Hierarchical Visual Feature Representations of Blurred Images.

    PubMed

    Abdelhack, Mohamed; Kamitani, Yukiyasu

    2018-01-01

    The robustness of the visual system lies in its ability to perceive degraded images. This is achieved through interacting bottom-up, recurrent, and top-down pathways that process the visual input in concordance with stored prior information. The interaction mechanism by which they integrate visual input and prior information is still enigmatic. We present a new approach using deep neural network (DNN) representation to reveal the effects of such integration on degraded visual inputs. We transformed measured human brain activity resulting from viewing blurred images to the hierarchical representation space derived from a feedforward DNN. Transformed representations were found to veer toward the original nonblurred image and away from the blurred stimulus image. This indicated deblurring or sharpening in the neural representation, and possibly in our perception. We anticipate these results will help unravel the interplay mechanism between bottom-up, recurrent, and top-down pathways, leading to more comprehensive models of vision.

  13. Image-derived arterial input function for quantitative fluorescence imaging of receptor-drug binding in vivo

    PubMed Central

    Elliott, Jonathan T.; Samkoe, Kimberley S.; Davis, Scott C.; Gunn, Jason R.; Paulsen, Keith D.; Roberts, David W.; Pogue, Brian W.

    2017-01-01

    Receptor concentration imaging (RCI) with targeted-untargeted optical dye pairs has enabled in vivo immunohistochemistry analysis in preclinical subcutaneous tumors. Successful application of RCI to fluorescence guided resection (FGR), so that quantitative molecular imaging of tumor-specific receptors could be performed in situ, would have a high impact. However, assumptions of pharmacokinetics, permeability and retention, as well as the lack of a suitable reference region limit the potential for RCI in human neurosurgery. In this study, an arterial input graphic analysis (AIGA) method is presented which is enabled by independent component analysis (ICA). The percent difference in arterial concentration between the image-derived arterial input function (AIF_ICA) and that obtained by an invasive method (AIF_CAR) was 2.0 ± 2.7% during the first hour of circulation of a targeted-untargeted dye pair in mice. Estimates of distribution volume and receptor concentration in tumor bearing mice (n = 5) recovered using the AIGA technique did not differ significantly from values obtained using invasive AIF measurements (p = 0.12). The AIGA method, enabled by the subject-specific AIF_ICA, was also applied in a rat orthotopic model of U-251 glioblastoma to obtain the first reported receptor concentration and distribution volume maps during open craniotomy. PMID:26349671

  14. Image-derived input function with factor analysis and a-priori information.

    PubMed

    Simončič, Urban; Zanotti-Fregonara, Paolo

    2015-02-01

    Quantitative PET studies often require the cumbersome and invasive procedure of arterial cannulation to measure the input function. This study sought to minimize the number of necessary blood samples by developing a factor-analysis-based image-derived input function (IDIF) methodology for dynamic PET brain studies. IDIF estimation was performed as follows: (a) carotid and background regions were segmented manually on an early PET time frame; (b) blood-weighted and tissue-weighted time-activity curves (TACs) were extracted with factor analysis; (c) factor analysis results were denoised and scaled using the voxels with the highest blood signal; (d) using population data and one blood sample at 40 min, whole-blood TAC was estimated from postprocessed factor analysis results; and (e) the parent concentration was finally estimated by correcting the whole-blood curve with measured radiometabolite concentrations. The methodology was tested using data from 10 healthy individuals imaged with [(11)C](R)-rolipram. The accuracy of IDIFs was assessed against full arterial sampling by comparing the area under the curve of the input functions and by calculating the total distribution volume (VT). The shape of the image-derived whole-blood TAC matched the reference arterial curves well, and the whole-blood area under the curves were accurately estimated (mean error 1.0±4.3%). The relative Logan-V(T) error was -4.1±6.4%. Compartmental modeling and spectral analysis gave less accurate V(T) results compared with Logan. A factor-analysis-based IDIF for [(11)C](R)-rolipram brain PET studies that relies on a single blood sample and population data can be used for accurate quantification of Logan-V(T) values.
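
    The Logan graphical analysis used to compute VT in this record can be sketched with one-tissue-compartment toy kinetics. The input curve, rate constants, and the t* = 30 min linearization point are assumptions of the sketch, not the study's values.

```python
import numpy as np

t = np.linspace(0.01, 90, 400)                       # minutes
cp = 30 * t * np.exp(-t / 3) + 1.5 * np.exp(-t / 60) # synthetic plasma input

K1, k2 = 0.1, 0.05                                   # true VT = K1 / k2 = 2.0
dt = t[1] - t[0]
# One-tissue model: CT = K1 * (cp convolved with exp(-k2 t)), discrete approximation
ct = K1 * dt * np.convolve(cp, np.exp(-k2 * t))[: len(t)]

# Logan plot: int(CT)/CT vs int(Cp)/CT becomes linear with slope VT after t*
int_cp = np.cumsum(cp) * dt
int_ct = np.cumsum(ct) * dt
late = t > 30                                        # assumed linear segment
slope, intercept = np.polyfit(int_cp[late] / ct[late],
                              int_ct[late] / ct[late], 1)
vt_logan = slope
```

Because the slope depends on the input curve only through its running integral, Logan-VT tolerates modest IDIF shape errors better than compartmental fitting, consistent with the abstract's finding that Logan gave the most accurate VT.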

  15. Reconstruction of input functions from a dynamic PET image with sequential administration of 15O2 and C15O2 for noninvasive and ultra-rapid measurement of CBF, OEF, and CMRO2.

    PubMed

    Kudomi, Nobuyuki; Maeda, Yukito; Yamamoto, Hiroyuki; Yamamoto, Yuka; Hatakeyama, Tetsuhiro; Nishiyama, Yoshihiro

    2018-05-01

    CBF, OEF, and CMRO2 images can be quantitatively assessed using PET. Their calculation requires arterial input functions, which require an invasive procedure. The aim of the present study was to develop a noninvasive approach with image-derived input functions (IDIFs) using an image from an ultra-rapid 15O2 and C15O2 protocol. Our technique consists of using a formula to express the input in terms of a tissue curve and its rate constants. For multiple tissue curves, the rate constants were estimated so as to minimize the differences among the inputs derived from the multiple tissue curves. The estimated rates were used to express the inputs, and the mean of the estimated inputs was used as an IDIF. The method was tested in human subjects (n = 24). The estimated IDIFs reproduced the measured ones well. The differences in CBF, OEF, and CMRO2 values calculated by the two methods were small (<10%) relative to the invasive method, and the values showed tight correlations (r = 0.97). Simulations showed that errors associated with the assumed parameters were less than ∼10%. Our results demonstrate that IDIFs can be reconstructed from tissue curves, suggesting the possibility of using a noninvasive technique to assess CBF, OEF, and CMRO2.
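
    The idea of expressing the input through tissue curves can be sketched with a one-tissue model, where Ca(t) = (dCt/dt + k2·Ct)/K1: each tissue TAC "implies" an input, and the rate constants are chosen so the implied inputs agree. The one-tissue model, known K1 values (to fix the scale), and the grid search are assumptions of this sketch, not the paper's exact formulation.

```python
import numpy as np

t = np.linspace(0.02, 5, 250)                       # minutes
dt = t[1] - t[0]
ca = 40 * t * np.exp(-t / 0.4)                      # true arterial input (synthetic)

def tissue(K1, k2):
    # One-tissue model: Ct = K1 * (ca convolved with exp(-k2 t)), discretized
    return K1 * dt * np.convolve(ca, np.exp(-k2 * t))[: len(t)]

K1s, true_k2s = (0.8, 0.5), (0.3, 0.9)
cts = [tissue(K1, k2) for K1, k2 in zip(K1s, true_k2s)]

def implied_input(ct, K1, k2):
    # Invert the model: Ca = (dCt/dt + k2*Ct) / K1
    return (np.gradient(ct, dt) + k2 * ct) / K1

# Choose the two k2 values so the inputs implied by the two tissue curves agree
grid = np.round(np.arange(0.1, 1.5, 0.02), 2)
in0 = {a: implied_input(cts[0], K1s[0], a) for a in grid}
in1 = {b: implied_input(cts[1], K1s[1], b) for b in grid}
_, k2a, k2b = min((np.sum((in0[a] - in1[b]) ** 2), a, b)
                  for a in grid for b in grid)
idif = 0.5 * (in0[k2a] + in1[k2b])                  # mean of the estimated inputs
```

Agreement between independently implied inputs is what constrains the rate constants: a wrong k2 distorts each implied input in a region-specific way, so only near the true values do the curves coincide.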

  16. Estimation of arterial input by a noninvasive image derived method in brain H2 15O PET study: confirmation of arterial location using MR angiography

    NASA Astrophysics Data System (ADS)

    Muinul Islam, Muhammad; Tsujikawa, Tetsuya; Mori, Tetsuya; Kiyono, Yasushi; Okazawa, Hidehiko

    2017-06-01

    A noninvasive method to estimate the input function directly from H2 15O brain PET data for measurement of cerebral blood flow (CBF) was proposed in this study. The image-derived input function (IDIF) method extracted the time-activity curves (TAC) of the major cerebral arteries at the skull base from the dynamic PET data. The extracted primordial IDIF showed almost the same radioactivity as the arterial input function (AIF) from sampled blood in the later plateau phase, but significantly lower radioactivity in the initial arterial phase compared with that of the AIF-TAC. To correct the initial part of the IDIF, a dispersion function was applied, and two constants for the correction were determined by fitting with the individual AIF in 15 patients with unilateral arterial steno-occlusive lesions. The areas under the curves (AUC) from the two input functions showed good agreement, with a mean AUCIDIF/AUCAIF ratio of 0.92  ±  0.09. The final products of CBF and arterial-to-capillary vascular volume (V 0) obtained from the IDIF and AIF showed no difference, with high correlation coefficients.
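
    The dispersion problem and its correction can be sketched as follows. If the extracted curve is the true input convolved with a dispersion kernel (1/τ)·exp(-t/τ), the classic correction is AIF ≈ IDIF + τ·d(IDIF)/dt. The kernel form, τ, and the input shape below are assumptions for the sketch, not the study's fitted constants.

```python
import numpy as np

t = np.linspace(0, 120, 1201)                    # seconds
dt = t[1] - t[0]
aif = 80 * (t / 10) * np.exp(-t / 10)            # true arterial curve (synthetic)

# Dispersed image-derived curve: convolution with an exponential kernel
tau = 5.0                                        # dispersion time constant (assumed)
kernel = np.exp(-t / tau) / tau
idif = dt * np.convolve(aif, kernel)[: len(t)]

# Correction: for this kernel, AIF = IDIF + tau * d(IDIF)/dt exactly
corrected = idif + tau * np.gradient(idif, dt)

peak_ratio_raw = idif.max() / aif.max()          # dispersed peak is too low
peak_ratio_corrected = corrected.max() / aif.max()
```

The raw curve reproduces the later plateau but underestimates the initial peak, exactly the discrepancy the abstract reports; the derivative term restores the early phase.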

  17. Segmentation and learning in the quantitative analysis of microscopy images

    NASA Astrophysics Data System (ADS)

    Ruggiero, Christy; Ross, Amy; Porter, Reid

    2015-02-01

    In material science and bio-medical domains the quantity and quality of microscopy images is rapidly increasing and there is a great need to automatically detect, delineate and quantify particles, grains, cells, neurons and other functional "objects" within these images. These are challenging problems for image processing because of the variability in object appearance that inevitably arises in real world image acquisition and analysis. One of the most promising (and practical) ways to address these challenges is interactive image segmentation. These algorithms are designed to incorporate input from a human operator to tailor the segmentation method to the image at hand. Interactive image segmentation is now a key tool in a wide range of applications in microscopy and elsewhere. Historically, interactive image segmentation algorithms have tailored segmentation on an image-by-image basis, and information derived from operator input is not transferred between images. But recently there has been increasing interest to use machine learning in segmentation to provide interactive tools that accumulate and learn from the operator input over longer periods of time. These new learning algorithms reduce the need for operator input over time, and can potentially provide a more dynamic balance between customization and automation for different applications. This paper reviews the state of the art in this area, provides a unified view of these algorithms, and compares the segmentation performance of various design choices.
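
    The learning-from-operator-input idea can be illustrated with a deliberately minimal stand-in: a few "scribbled" labels train a classifier that segments the remaining pixels. A nearest-centroid rule on raw intensity replaces the richer features and learners real systems use; all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic 1-D "image": background vs object intensities
img = np.concatenate([rng.normal(0.2, 0.05, 500),
                      rng.normal(0.8, 0.05, 500)])
labels = np.r_[np.zeros(500), np.ones(500)].astype(int)

# Operator scribbles: only 5 labeled pixels per class
train_idx = np.r_[0:5, 500:505]
centroids = np.array([img[train_idx][labels[train_idx] == c].mean()
                      for c in (0, 1)])

# Segment every pixel by nearest class centroid
pred = np.argmin(np.abs(img[:, None] - centroids[None, :]), axis=1)
accuracy = (pred == labels).mean()
```

Accumulating scribbles across images, as the reviewed learning-based systems do, amounts to growing this labeled pool so later images need fewer operator strokes.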

  18. Simplifying [18F]GE-179 PET: are both arterial blood sampling and 90-min acquisitions essential?

    PubMed

    McGinnity, Colm J; Riaño Barros, Daniela A; Trigg, William; Brooks, David J; Hinz, Rainer; Duncan, John S; Koepp, Matthias J; Hammers, Alexander

    2018-06-11

    The NMDA receptor radiotracer [18F]GE-179 has been used with 90-min scans and arterial plasma input functions. We explored whether (1) arterial blood sampling is avoidable and (2) shorter scans are feasible. For 20 existing [18F]GE-179 datasets, we generated (1) standardised uptake values (SUVs) over eight intervals; (2) volume of distribution (VT) images using population-based input functions (PBIFs), scaled using one parent plasma sample; and (3) VT images using three shortened datasets, using the original parent plasma input functions (ppIFs). Correlations with the original ppIF-derived 90-min VTs increased for later interval SUVs (maximal ρ = 0.78; 80-90 min). They were strong for PBIF-derived VTs (ρ = 0.90), but the between-subject coefficient of variation increased. Correlations were very strong for the 60/70/80-min original ppIF-derived VTs (ρ = 0.97-1.00), which suffered regionally variant negative bias. Where arterial blood sampling is available, reduction of scan duration to 60 min is feasible, but with negative bias. The performance of SUVs was more consistent across participants than PBIF-derived VTs.

  19. Development of a novel 2D color map for interactive segmentation of histological images.

    PubMed

    Chaudry, Qaiser; Sharma, Yachna; Raza, Syed H; Wang, May D

    2012-05-01

    We present a color segmentation approach based on a two-dimensional color map derived from the input image. Pathologists stain tissue biopsies with various colored dyes to see the expression of biomarkers. In these images, because of color variation due to inconsistencies in experimental procedures and lighting conditions, the segmentation used to analyze biological features is usually ad hoc. Many algorithms, such as K-means, use a single metric to segment the image into different color classes and rarely provide users with powerful color control. Our 2D color map interactive segmentation technique, based on human color perception information and the color distribution of the input image, enables user control without noticeable delay. Our methodology works for different staining types and different types of cancer tissue images. Our method shows good accuracy with low response and computational time, making it feasible for user-interactive applications involving segmentation of histological images.
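    To make the idea of segmenting through a 2D color map concrete, here is a hypothetical numpy sketch: the two axes chosen (hue angle and intensity) and the rectangular user selection are illustrative assumptions, not the authors' actual map construction.

```python
import numpy as np

def rgb_to_hue_intensity(img):
    """Map an RGB image (H, W, 3; floats in [0, 1]) onto two perceptual axes:
    hue angle in degrees and intensity (mean channel value)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    hue = np.degrees(np.arctan2(np.sqrt(3.0) * (g - b), 2.0 * r - g - b)) % 360.0
    intensity = img.mean(axis=-1)
    return hue, intensity

def select_region(img, hue_range, intensity_range):
    """Binary mask of pixels whose (hue, intensity) coordinates fall inside a
    user-chosen rectangle of the 2D color map."""
    hue, inten = rgb_to_hue_intensity(img)
    return ((hue >= hue_range[0]) & (hue <= hue_range[1]) &
            (inten >= intensity_range[0]) & (inten <= intensity_range[1]))

img = np.zeros((2, 2, 3))
img[0, 0] = [0.8, 0.1, 0.1]              # one reddish pixel (hue near 0)
mask = select_region(img, (0, 30), (0.1, 1.0))
print(mask[0, 0], int(mask.sum()))       # True 1
```

In an interactive tool, the rectangle would be dragged by the user over a 2D histogram of the image's pixels; here it is passed in as two ranges.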

  20. Metabolic liver function measured in vivo by dynamic (18)F-FDGal PET/CT without arterial blood sampling.

    PubMed

    Horsager, Jacob; Munk, Ole Lajord; Sørensen, Michael

    2015-01-01

    Metabolic liver function can be measured by dynamic PET/CT with the radio-labelled galactose-analogue 2-[(18)F]fluoro-2-deoxy-D-galactose ((18)F-FDGal) in terms of hepatic systemic clearance of (18)F-FDGal (K, ml blood/ml liver tissue/min). The method requires arterial blood sampling from a radial artery (arterial input function), and the aim of this study was to develop a method for extracting an image-derived, non-invasive input function from a volume of interest (VOI). Dynamic (18)F-FDGal PET/CT data from 16 subjects without liver disease (healthy subjects) and 16 patients with liver cirrhosis were included in the study. Five different input VOIs were tested: four in the abdominal aorta and one in the left ventricle of the heart. Arterial input function from manual blood sampling was available for all subjects. K*-values were calculated using time-activity curves (TACs) from each VOI as input and compared to the K-value calculated using arterial blood samples as input. Each input VOI was tested on PET data reconstructed with and without resolution modelling. All five image-derived input VOIs yielded K*-values that correlated significantly with K calculated using arterial blood samples. Furthermore, TACs from two different VOIs yielded K*-values that did not statistically deviate from K calculated using arterial blood samples. A semicircle drawn in the posterior part of the abdominal aorta was the only VOI that was successful for both healthy subjects and patients as well as for PET data reconstructed with and without resolution modelling. Metabolic liver function using (18)F-FDGal PET/CT can be measured without arterial blood samples by using input data from a semicircle VOI drawn in the posterior part of the abdominal aorta.

  1. Speckle noise reduction in ultrasound images using a discrete wavelet transform-based image fusion technique.

    PubMed

    Choi, Hyun Ho; Lee, Ju Hwan; Kim, Sung Min; Park, Sung Yun

    2015-01-01

    Here, the speckle noise in ultrasound images is removed using an image fusion-based denoising method. To optimize denoising performance, each discrete wavelet transform (DWT) and filtering technique was analyzed, and their performances were compared in order to derive the optimal input conditions. To evaluate speckle noise removal, the image fusion algorithm was applied to the ultrasound images and comparatively analyzed against the original image without the algorithm. As a result, applying DWT and filtering techniques alone caused information loss, retained noise characteristics, and did not yield the best noise reduction performance. Conversely, the image fusion method applied under SRAD-original conditions preserved the key information in the original image while removing the speckle noise. Based on these characteristics, the SRAD-original input conditions gave the best denoising performance on the ultrasound images. The denoising technique proposed on the basis of these results was confirmed to have high potential for clinical application.
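    The DWT-based fusion step can be sketched with a single-level Haar transform: decompose two input images, average the approximation bands, keep the stronger detail coefficient, and invert. The max-absolute detail rule and averaged approximation are common DWT-fusion conventions, not the paper's exact pipeline.

```python
import numpy as np

def haar2(x):
    """Single-level 2D Haar DWT of an even-sized array."""
    a = (x[0::2] + x[1::2]) / 2.0              # row averages
    d = (x[0::2] - x[1::2]) / 2.0              # row differences
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def fuse(img1, img2):
    """Fuse two images: average approximations, keep the stronger detail."""
    c1, c2 = haar2(img1), haar2(img2)
    ll = (c1[0] + c2[0]) / 2.0
    det = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(c1[1:], c2[1:])]
    return ihaar2(ll, *det)
```

In the paper's setting, `img1` and `img2` would be two differently filtered versions of the same speckled ultrasound frame (e.g. SRAD output and the original).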

  2. Distinguishing plant population and variety with UAV-derived vegetation indices

    NASA Astrophysics Data System (ADS)

    Oakes, Joseph; Balota, Maria

    2017-05-01

    Variety selection and seeding rate are two important choices that a peanut grower must make. High-yielding varieties can increase profit with no additional input costs, while seeding rate largely determines the input cost a grower will incur from seed. The overall purpose of this study was to examine the effect that seeding rate has on different peanut varieties. With the advent of new UAV technology, we now have the possibility of using indices collected with a UAV to measure emergence, seeding rate and growth rate, and perhaps to make yield predictions. This information could enable growers to make management decisions early in the season when plant populations are low due to poor emergence, and could be a useful tool for estimating plant population and growth rate in order to help achieve desired crop stands. Red-green-blue (RGB) and near-infrared (NIR) images were collected from a UAV platform starting two weeks after planting and continuing weekly for the next six weeks. Ground NDVI was also collected each time aerial images were collected. Vegetation indices were derived from both the RGB and NIR images: greener area (GGA; the proportion of green pixels with a hue angle from 80° to 120°) and a* (the average red/green color of the image) were derived from the RGB images, while the Normalized Difference Vegetation Index (NDVI) was derived from the NIR images. Aerial indices were successful in distinguishing seeding rates and determining emergence during the first few weeks after planting, but not later in the season. At this point, however, these aerial indices are not an adequate predictor of yield in peanut.
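    The two indices named above are simple per-pixel computations. NDVI follows its standard definition from the NIR and red bands; the GGA sketch below applies the 80°-120° hue window stated in the abstract to a precomputed hue array (hue extraction itself is omitted).

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    A small epsilon guards against division by zero over bare soil/shadow."""
    nir, red = np.asarray(nir, dtype=float), np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + 1e-12)

def greener_area(hue_deg):
    """GGA: fraction of pixels whose hue angle lies between 80 and 120 degrees."""
    h = np.asarray(hue_deg)
    return float(np.mean((h >= 80) & (h <= 120)))

print(round(float(ndvi(0.6, 0.2)), 2))   # 0.5
print(greener_area([90, 100, 60, 130]))  # 0.5
```

Healthy vegetation reflects strongly in NIR and absorbs red, so dense canopies push NDVI toward 1, while soil stays near 0.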

  3. Image classification at low light levels

    NASA Astrophysics Data System (ADS)

    Wernick, Miles N.; Morris, G. Michael

    1986-12-01

    An imaging photon-counting detector is used to achieve automatic sorting of two image classes. The classification decision is formed on the basis of the cross correlation between a photon-limited input image and a reference function stored in computer memory. Expressions for the statistical parameters of the low-light-level correlation signal are given and are verified experimentally. To obtain a correlation-based system for two-class sorting, it is necessary to construct a reference function that produces useful information for class discrimination. An expression for such a reference function is derived using maximum-likelihood decision theory. Theoretically predicted results are used to compare on the basis of performance the maximum-likelihood reference function with Fukunaga-Koontz basis vectors and average filters. For each method, good class discrimination is found to result in milliseconds from a sparse sampling of the input image.
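    The correlation of a photon-limited image with a stored reference reduces to a sum of reference values at the photon addresses, since each detected photon is a unit impulse. The two-class decision via a difference reference below is an illustrative simplification of the maximum-likelihood reference function described in the abstract.

```python
import numpy as np

def correlate_photons(photon_xy, reference):
    """Correlation of a photon-limited image with a stored reference:
    each detected photon simply adds the reference value at its address."""
    xs, ys = photon_xy[:, 0], photon_xy[:, 1]
    return float(reference[xs, ys].sum())

def classify(photon_xy, ref_a_minus_b, threshold=0.0):
    """Two-class decision from the sign of the correlation with a
    difference reference (class A template minus class B template)."""
    return 'A' if correlate_photons(photon_xy, ref_a_minus_b) > threshold else 'B'

# toy 4x4 scene: class A is bright on the left half, class B on the right
ref = np.zeros((4, 4))
ref[:, :2], ref[:, 2:] = 1.0, -1.0
photons = np.array([[0, 0], [1, 1], [2, 0], [3, 3]])   # 3 left, 1 right
print(classify(photons, ref))   # A
```

Because only detected photon addresses enter the sum, the decision can be formed in milliseconds from a sparse sampling of the input image, as the abstract notes.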

  4. Cerebral blood flow with [15O]water PET studies using an image-derived input function and MR-defined carotid centerlines

    NASA Astrophysics Data System (ADS)

    Fung, Edward K.; Carson, Richard E.

    2013-03-01

    Full quantitative analysis of brain PET data requires knowledge of the arterial input function into the brain. Such data are normally acquired by arterial sampling with corrections for delay and dispersion to account for the distant sampling site. Several attempts have been made to extract an image-derived input function (IDIF) directly from the internal carotid arteries that supply the brain and are often visible in brain PET images. We have devised a method of delineating the internal carotids in co-registered magnetic resonance (MR) images using the level-set method and applying the segmentations to PET images using a novel centerline approach. Centerlines of the segmented carotids were modeled as cubic splines and re-registered in PET images summed over the early portion of the scan. Using information from the anatomical center of the vessel should minimize partial volume and spillover effects. Centerline time-activity curves were taken as the mean of the values for points along the centerline interpolated from neighboring voxels. A scale factor correction was derived from calculation of cerebral blood flow (CBF) using gold standard arterial blood measurements. We have applied the method to human subject data from multiple injections of [15O]water on the HRRT. The method was assessed by calculating the area under the curve (AUC) of the IDIF and the CBF, and comparing these to values computed using the gold standard arterial input curve. The average ratio of IDIF to arterial AUC (apparent recovery coefficient: aRC) across 9 subjects with multiple (n = 69) injections was 0.49 ± 0.09 at 0-30 s post tracer arrival, 0.45 ± 0.09 at 30-60 s, and 0.46 ± 0.09 at 60-90 s. Gray and white matter CBF values were 61.4 ± 11.0 and 15.6 ± 3.0 mL/min/100 g tissue using sampled blood data. Using IDIF centerlines scaled by the average aRC over each subject's injections, gray and white matter CBF values were 61.3 ± 13.5 and 15.5 ± 3.4 mL/min/100 g tissue.
Using global average aRC values, the means were unchanged, and intersubject variability was noticeably reduced. This MR-based centerline method with local re-registration to [15O]water PET yields a consistent IDIF over multiple injections in the same subject, thus permitting the absolute quantification of CBF without arterial input function measurements.
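    The aRC scaling step described above can be sketched as follows; the function names and the trapezoidal AUC over a time window are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def apparent_recovery_coefficient(t, idif, arterial, t0, t1):
    """aRC: ratio of IDIF AUC to arterial AUC over a time window [t0, t1],
    using trapezoidal integration on a common time grid."""
    m = (t >= t0) & (t <= t1)
    def auc(c):
        return float(np.sum((c[m][1:] + c[m][:-1]) / 2.0 * np.diff(t[m])))
    return auc(idif) / auc(arterial)

def scale_idif(idif, arc):
    """Correct the centerline IDIF for partial-volume/spillover loss by
    dividing by the apparent recovery coefficient."""
    return idif / arc
```

For example, an IDIF that recovers half the true arterial signal gives aRC = 0.5, and dividing by it restores the arterial curve's amplitude.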

  5. Quantification of regional myocardial blood flow estimation with three-dimensional dynamic rubidium-82 PET and modified spillover correction model.

    PubMed

    Katoh, Chietsugu; Yoshinaga, Keiichiro; Klein, Ran; Kasai, Katsuhiko; Tomiyama, Yuuki; Manabe, Osamu; Naya, Masanao; Sakakibara, Mamoru; Tsutsui, Hiroyuki; deKemp, Robert A; Tamaki, Nagara

    2012-08-01

    Myocardial blood flow (MBF) estimation with (82)Rubidium ((82)Rb) positron emission tomography (PET) is technically difficult because of the high spillover between regions of interest, especially due to the long positron range. We sought to develop a new algorithm to reduce the spillover in image-derived blood activity curves, using non-uniform weighted least-squares fitting. Fourteen volunteers underwent imaging with both 3-dimensional (3D) (82)Rb and (15)O-water PET at rest and during pharmacological stress. Whole left ventricular (LV) (82)Rb MBF was estimated using a one-compartment model, including a myocardium-to-blood spillover correction to estimate the corresponding blood input function Ca(t)(whole). Regional K1 values were calculated using this uniform global input function, which simplifies equations and enables robust estimation of MBF. To assess the robustness of the modified algorithm, inter-operator repeatability of 3D (82)Rb MBF was compared with a previously established method. Whole LV correlation of (82)Rb MBF with (15)O-water MBF was better (P < .01) with the modified spillover correction method (r = 0.92 vs r = 0.60). The modified method also yielded significantly improved inter-operator repeatability of regional MBF quantification (r = 0.89) versus the established method (r = 0.82) (P < .01). A uniform global input function can suppress LV spillover into the image-derived blood input function, resulting in improved precision for MBF quantification with 3D (82)Rb PET.
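    The one-compartment operational equation underlying such MBF estimation can be sketched numerically. The convolution form is the standard one-tissue model; the parameter names K1, k2 and the simple blood-volume spillover term Vb·Ca below are generic conventions, not the authors' exact dual spillover formulation.

```python
import numpy as np

def one_tissue_model(t, ca, k1, k2, v_blood=0.0):
    """One-compartment model: C_T(t) = K1 * (Ca ⊗ e^{-k2 t}) + Vb * Ca(t),
    evaluated with a simple rectangular convolution on a uniform time grid."""
    dt = t[1] - t[0]
    conv = np.convolve(ca, np.exp(-k2 * t))[:len(t)] * dt
    return k1 * conv + v_blood * ca

t = np.linspace(0, 10, 1001)        # minutes
ca = np.exp(-t)                     # toy input function
ct = one_tissue_model(t, ca, k1=1.0, k2=0.5)
```

Fitting K1 (and a flow-dependent extraction correction, for 82Rb) to measured myocardial time-activity curves with this forward model is what yields the regional MBF estimates.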

  6. Processing of Visual Imagery by an Adaptive Model of the Visual System: Its Performance and its Significance. Final Report, June 1969-March 1970.

    ERIC Educational Resources Information Center

    Tallman, Oliver H.

    A digital simulation of a model for the processing of visual images is derived from known aspects of the human visual system. The fundamental principle of computation suggested by a biological model is a transformation that distributes information contained in an input stimulus everywhere in a transform domain. Each sensory input contributes under…

  7. Diagnostic support for glaucoma using retinal images: a hybrid image analysis and data mining approach.

    PubMed

    Yu, Jin; Abidi, Syed Sibte Raza; Artes, Paul; McIntyre, Andy; Heywood, Malcolm

    2005-01-01

    The availability of modern imaging techniques such as confocal scanning laser tomography (CSLT) for capturing high-quality optic nerve images offers the potential for developing automatic and objective methods for diagnosing glaucoma. We present a hybrid approach that features the analysis of CSLT images using moment methods to derive abstract image-defining features. The features are then used to train classifiers for automatically distinguishing CSLT images of normal and glaucoma patients. As a first step, in this paper we present investigations into feature subset selection methods for reducing the relatively large input space produced by the moment methods. We use neural networks and support vector machines to determine a subset of moments that offers high classification accuracy. We demonstrate the efficacy of our methods in discriminating between healthy and glaucomatous optic disks based on shape information automatically derived from optic disk topography and reflectance images.

  8. Towards quantitative [18F]FDG-PET/MRI of the brain: Automated MR-driven calculation of an image-derived input function for the non-invasive determination of cerebral glucose metabolic rates.

    PubMed

    Sundar, Lalith Ks; Muzik, Otto; Rischka, Lucas; Hahn, Andreas; Rausch, Ivo; Lanzenberger, Rupert; Hienert, Marius; Klebermass, Eva-Maria; Füchsel, Frank-Günther; Hacker, Marcus; Pilz, Magdalena; Pataraia, Ekaterina; Traub-Weidinger, Tatjana; Beyer, Thomas

    2018-01-01

    Absolute quantification of PET brain imaging requires the measurement of an arterial input function (AIF), typically obtained invasively via arterial cannulation. We present an approach to automatically calculate an image-derived input function (IDIF) and cerebral metabolic rates of glucose (CMRGlc) from [18F]FDG PET data using an integrated PET/MRI system. Ten healthy controls underwent test-retest dynamic [18F]FDG-PET/MRI examinations. The imaging protocol consisted of a 60-min PET list-mode acquisition together with a time-of-flight MR angiography scan for segmenting the carotid arteries and intermittent MR navigators to monitor subject movement. AIFs were collected as the reference standard. Attenuation correction was performed using a separate low-dose CT scan. The percentage difference between the areas under the curve of the IDIF and the AIF was within ±5%. Similar test-retest variability was seen for the AIFs ((9 ± 8)%) and the IDIFs ((9 ± 7)%). The absolute percentage difference between CMRGlc values obtained from AIF and IDIF across all examinations and selected brain regions was 3.2% (interquartile range (2.4-4.3)%, maximum < 10%). Test-retest variability of the CMRGlc values was comparable between AIF (14%) and IDIF (17%). The proposed approach provides an IDIF that can be effectively used in lieu of the AIF.
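    CMRGlc from dynamic FDG data is commonly obtained via Patlak graphical analysis. As a hedged sketch (the Patlak method, the linear-segment start time and the lumped-constant value 0.65 are generic textbook choices, not necessarily those used in this study):

```python
import numpy as np

def patlak_ki(t, cp, ct, t_star=20.0):
    """Patlak graphical analysis: the slope of C_T/C_p versus ∫C_p dt / C_p
    over the late linear segment (t >= t_star) gives the influx rate Ki."""
    icp = np.concatenate(([0.0], np.cumsum((cp[1:] + cp[:-1]) / 2.0 * np.diff(t))))
    x, y = icp / cp, ct / cp
    m = t >= t_star
    slope, _ = np.polyfit(x[m], y[m], 1)
    return slope

def cmrglc(ki, plasma_glucose, lumped_constant=0.65):
    """Cerebral metabolic rate of glucose from the FDG influx constant Ki,
    plasma glucose concentration, and a lumped constant."""
    return ki * plasma_glucose / lumped_constant
```

Here `cp` is the (image-derived or arterial) plasma input and `ct` the tissue time-activity curve; Ki has units of 1/min, so CMRGlc inherits the units of the plasma glucose term.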

  9. Test-Retest Repeatability of Myocardial Blood Flow Measurements using Rubidium-82 Positron Emission Tomography

    NASA Astrophysics Data System (ADS)

    Efseaff, Matthew

    Rubidium-82 positron emission tomography (PET) imaging has been proposed for routine myocardial blood flow (MBF) quantification. Few studies have investigated the test-retest repeatability of this method. Same-day repeatability of rest MBF imaging was optimized with a highly automated analysis program using image-derived input functions and a dual spillover correction (SOC). The effects of heterogeneous tracer infusion profiles and subject hemodynamics on test-retest repeatability were investigated at rest and during hyperemic stress. Factors affecting rest MBF repeatability included gender, suspected coronary artery disease, and dual SOC (p < 0.001). The best repeatability coefficient for same-day rest MBF was 0.20 mL/min/g, using a six-minute scan-time, iterative reconstruction, dual SOC, resting rate-pressure-product (RPP) adjustment, and a left atrium image-derived input function. The serial study repeatabilities of the optimized protocol in subjects with homogeneous RPPs and tracer infusion profiles were 0.19 and 0.53 mL/min/g at rest and stress, respectively, and 0.95 for stress/rest myocardial flow reserve (MFR). Subjects with heterogeneous tracer infusion profiles and hemodynamic conditions had significantly less repeatable MBF measurements at rest, stress, and stress/rest flow reserve (p < 0.05).

  10. Inverse Diffusion Curves Using Shape Optimization.

    PubMed

    Zhao, Shuang; Durand, Fredo; Zheng, Changxi

    2018-07-01

    The inverse diffusion curve problem focuses on automatic creation of diffusion curve images that resemble user provided color fields. This problem is challenging since the 1D curves have a nonlinear and global impact on resulting color fields via a partial differential equation (PDE). We introduce a new approach complementary to previous methods by optimizing curve geometry. In particular, we propose a novel iterative algorithm based on the theory of shape derivatives. The resulting diffusion curves are clean and well-shaped, and the final image closely approximates the input. Our method provides a user-controlled parameter to regularize curve complexity, and generalizes to handle input color fields represented in a variety of formats.

  11. Reconstruction of an input function from a dynamic PET water image using multiple tissue curves

    NASA Astrophysics Data System (ADS)

    Kudomi, Nobuyuki; Maeda, Yukito; Yamamoto, Yuka; Nishiyama, Yoshihiro

    2016-08-01

    Quantification of cerebral blood flow (CBF) is important for the understanding of normal and pathologic brain physiology. When CBF is assessed using PET with H2(15)O or C(15)O2, its calculation requires an arterial input function, which generally requires invasive arterial blood sampling. The aim of the present study was to develop a new technique to reconstruct an image-derived input function (IDIF) from a dynamic H2(15)O PET image as a completely non-invasive approach. Our technique consists of a formula that expresses the input in terms of a tissue curve and its rate-constant parameters. For multiple tissue curves extracted from the dynamic image, the rate constants were estimated so as to minimize the sum of the differences between the inputs reproduced from the extracted tissue curves. The estimated rates were used to express the inputs, and the mean of the estimated inputs was used as the IDIF. The method was tested in human subjects (n = 29) and compared to the blood sampling method. Simulation studies were performed to examine the magnitude of potential biases in CBF and to optimize the number of tissue curves used for the input reconstruction. In the PET study, the estimated IDIFs reproduced the measured input functions well. The difference between the CBF values calculated with the two methods was small (around <8%), and the CBF values were tightly correlated (r = 0.97). The simulations showed that errors associated with the assumed parameters were <10%, and that the optimal number of tissue curves was around 500. Our results demonstrate that an IDIF can be reconstructed directly from tissue curves obtained through H2(15)O PET imaging. This suggests the possibility of a completely non-invasive technique to assess CBF in patho-physiological studies.
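    The core idea of expressing the input from a tissue curve and its rate constants can be sketched under a one-tissue compartment assumption; the model inversion below is illustrative, and the authors' actual formula may differ.

```python
import numpy as np

def input_from_tissue(t, ct, k1, k2):
    """Invert the one-tissue model C_T'(t) = K1*Ca(t) - k2*C_T(t) to express
    the input from a tissue curve: Ca(t) = (dC_T/dt + k2*C_T) / K1."""
    return (np.gradient(ct, t) + k2 * ct) / k1

def mean_idif(t, tissue_curves, rates):
    """Average the inputs reproduced from many tissue curves, as the paper
    does after estimating each curve's rate constants."""
    return np.mean([input_from_tissue(t, c, k1, k2)
                    for c, (k1, k2) in zip(tissue_curves, rates)], axis=0)
```

In the paper the rate constants are not known a priori; they are estimated by minimizing the disagreement between the inputs reproduced from the different tissue curves, after which averaging suppresses the remaining per-curve noise.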

  12. Sensing system for detection and control of deposition on pendant tubes in recovery and power boilers

    DOEpatents

    Kychakoff, George [Maple Valley, WA; Afromowitz, Martin A [Mercer Island, WA; Hogle, Richard E [Olympia, WA

    2008-10-14

    A system for detection and control of deposition on pendant tubes in recovery and power boilers includes one or more deposit-monitoring sensors operating in infrared regions of about 4 or 8.7 microns, either directly producing images of the interior of the boiler or feeding signals to a data processing system that provides information enabling the distributed control system by which the boilers are operated to run them more efficiently. The data processing system includes an image pre-processing circuit in which a 2-D image formed by the video data input is captured, and includes a low-pass filter for noise filtering of said video input. It also includes an image compensation system for array compensation, correcting for pixel variation, dead cells, etc., and for geometric distortion. An image segmentation module receives a cleaned image from the image pre-processing circuit and separates the image of the recovery boiler interior into background, pendant tubes, and deposition. It also performs thresholding/clustering on gray scale/texture, applies morphological transforms to smooth regions, and identifies regions by connected components. An image-understanding unit receives the segmented image from the image segmentation module and matches derived regions to a 3-D model of said boiler. It derives a 3-D structure of the deposition on pendant tubes in the boiler and provides the information about deposits to the plant distributed control system for more efficient operation of the plant pendant tube cleaning and operating systems.

  13. Population-based input function and image-derived input function for [¹¹C](R)-rolipram PET imaging: methodology, validation and application to the study of major depressive disorder.

    PubMed

    Zanotti-Fregonara, Paolo; Hines, Christina S; Zoghbi, Sami S; Liow, Jeih-San; Zhang, Yi; Pike, Victor W; Drevets, Wayne C; Mallinger, Alan G; Zarate, Carlos A; Fujita, Masahiro; Innis, Robert B

    2012-11-15

    Quantitative PET studies of neuroreceptor tracers typically require that arterial input function be measured. The aim of this study was to explore the use of a population-based input function (PBIF) and an image-derived input function (IDIF) for [(11)C](R)-rolipram kinetic analysis, with the goal of reducing - and possibly eliminating - the number of arterial blood samples needed to measure parent radioligand concentrations. A PBIF was first generated using [(11)C](R)-rolipram parent time-activity curves from 12 healthy volunteers (Group 1). Both invasive (blood samples) and non-invasive (body weight, body surface area, and lean body mass) scaling methods for PBIF were tested. The scaling method that gave the best estimate of the Logan-V(T) values was then used to determine the test-retest variability of PBIF in Group 1 and then prospectively applied to another population of 25 healthy subjects (Group 2), as well as to a population of 26 patients with major depressive disorder (Group 3). Results were also compared to those obtained with an image-derived input function (IDIF) from the internal carotid artery. In some subjects, we measured arteriovenous differences in [(11)C](R)-rolipram concentration to see whether venous samples could be used instead of arterial samples. Finally, we assessed the ability of IDIF and PBIF to discriminate depressed patients (MDD) and healthy subjects. Arterial blood-scaled PBIF gave better results than any non-invasive scaling technique. Excellent results were obtained when the blood-scaled PBIF was prospectively applied to the subjects in Group 2 (V(T) ratio 1.02±0.05; mean±SD) and Group 3 (V(T) ratio 1.03±0.04). Equally accurate results were obtained for two subpopulations of subjects drawn from Groups 2 and 3 who had very differently shaped (i.e. "flatter" or "steeper") input functions compared to PBIF (V(T) ratio 1.07±0.04 and 0.99±0.04, respectively). 
Results obtained via PBIF were equivalent to those obtained via IDIF (V(T) ratio 0.99±0.05 and 1.00±0.04 for healthy subjects and MDD patients, respectively). Retest variability of PBIF was equivalent to that obtained with full input function and IDIF (14.5%, 15.2%, and 14.1%, respectively). Due to [(11)C](R)-rolipram arteriovenous differences, venous samples could not be substituted for arterial samples. With both IDIF and PBIF, depressed patients had a 20% reduction in [(11)C](R)-rolipram binding as compared to control (two-way ANOVA: p=0.008 and 0.005, respectively). These results were almost equivalent to those obtained using 23 arterial samples. Although some arterial samples are still necessary, both PBIF and IDIF are accurate and precise alternatives to full arterial input function for [(11)C](R)-rolipram PET studies. Both techniques give accurate results with low variability, even for clinically different groups of subjects and those with very differently shaped input functions. Published by Elsevier Inc.
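    The blood-scaled PBIF approach described above can be sketched with a hypothetical helper that rescales a population curve so it passes through a single measured arterial sample; this illustrates the principle only, not the authors' implementation.

```python
import numpy as np

def scale_pbif(t_pbif, pbif, t_sample, measured_conc):
    """Scale a population-based input function so that it agrees with one
    arterial parent-concentration sample drawn at time t_sample."""
    predicted = np.interp(t_sample, t_pbif, pbif)
    return pbif * (measured_conc / predicted)
```

Because only the overall amplitude is adjusted, the method works well when subjects share the PBIF's shape; the abstract's "flatter" and "steeper" subgroups test exactly how much shape mismatch the approach tolerates.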

  14. Quantification of 18F-fluorocholine kinetics in patients with prostate cancer.

    PubMed

    Verwer, Eline E; Oprea-Lager, Daniela E; van den Eertwegh, Alfons J M; van Moorselaar, Reindert J A; Windhorst, Albert D; Schwarte, Lothar A; Hendrikse, N Harry; Schuit, Robert C; Hoekstra, Otto S; Lammertsma, Adriaan A; Boellaard, Ronald

    2015-03-01

    Choline kinase is upregulated in prostate cancer, resulting in increased (18)F-fluoromethylcholine uptake. This study used pharmacokinetic modeling to validate the use of simplified methods for quantification of (18)F-fluoromethylcholine uptake in a routine clinical setting. Forty-minute dynamic PET/CT scans were acquired after injection of 204 ± 9 MBq of (18)F-fluoromethylcholine, from 8 patients with histologically proven metastasized prostate cancer. Plasma input functions were obtained using continuous arterial blood-sampling as well as using image-derived methods. Manual arterial blood samples were used for calibration and correction for plasma-to-blood ratio and metabolites. Time-activity curves were derived from volumes of interest in all visually detectable lymph node metastases. (18)F-fluoromethylcholine kinetics were studied by nonlinear regression fitting of several single- and 2-tissue plasma input models to the time-activity curves. Model selection was based on the Akaike information criterion and measures of robustness. In addition, the performance of several simplified methods, such as standardized uptake value (SUV), was assessed. Best fits were obtained using an irreversible compartment model with blood volume parameter. Parent fractions were 0.12 ± 0.4 after 20 min, necessitating individual metabolite corrections. Correspondence between venous and arterial parent fractions was low as determined by the intraclass correlation coefficient (0.61). Results for image-derived input functions that were obtained from volumes of interest in blood-pool structures distant from tissues of high (18)F-fluoromethylcholine uptake yielded good correlation to those for the blood-sampling input functions (R(2) = 0.83). SUV showed poor correlation to parameters derived from full quantitative kinetic analysis (R(2) < 0.34). 
In contrast, lesion activity concentration normalized to the integral of the blood activity concentration over time (SUVAUC) showed good correlation (R(2) = 0.92 for metabolite-corrected plasma; 0.65 for whole-blood activity concentrations). SUV cannot be used to quantify (18)F-fluoromethylcholine uptake. A clinical compromise could be SUVAUC derived from 2 consecutive static PET scans, one centered on a large blood-pool structure during 0-30 min after injection to obtain the blood activity concentrations and the other a whole-body scan at 30 min after injection to obtain lymph node activity concentrations. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
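    The SUVAUC measure contrasted with SUV above can be sketched as follows; the body-weight SUV normalisation and the trapezoidal blood-curve integral are standard conventions assumed for illustration.

```python
import numpy as np

def suv(lesion_conc_kbq_ml, dose_mbq, weight_kg):
    """Standardised uptake value with body-weight normalisation."""
    return lesion_conc_kbq_ml / (dose_mbq * 1000.0 / (weight_kg * 1000.0))

def suv_auc(t, blood_conc, lesion_conc):
    """SUVAUC: lesion activity concentration normalised to the integral of
    the blood activity concentration over time (trapezoidal rule)."""
    auc = float(np.sum((blood_conc[1:] + blood_conc[:-1]) / 2.0 * np.diff(t)))
    return lesion_conc / auc
```

Normalising by the blood-curve integral rather than by injected dose per body weight is what lets SUVAUC track the kinetic influx parameters where plain SUV fails for this tracer.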

  15. Validation of GOES-9 Satellite-Derived Cloud Properties over the Tropical Western Pacific Region

    NASA Technical Reports Server (NTRS)

    Khaiyer, Mandana M.; Nordeen, Michele L.; Doeling, David R.; Chakrapani, Venkatasan; Minnis, Patrick; Smith, William L., Jr.

    2004-01-01

    Real-time processing of hourly GOES-9 images in the ARM TWP region began operationally in October 2003 and is continuing. The ARM sites provide an excellent source for validating this new satellite-derived cloud and radiation property dataset. Derived cloud amounts, heights, and broadband shortwave fluxes are compared with similar quantities derived from ground-based instrumentation. The results will provide guidance for estimating uncertainties in the GOES-9 products and for developing improvements in the retrieval methodologies and input.

  16. Determination of mango fruit from binary image using randomized Hough transform

    NASA Astrophysics Data System (ADS)

    Rizon, Mohamed; Najihah Yusri, Nurul Ain; Abdul Kadir, Mohd Fadzil; bin Mamat, Abd. Rasid; Abd Aziz, Azim Zaliha; Nanaa, Kutiba

    2015-12-01

    A method of detecting mango fruit from an RGB input image is proposed in this research. The input image is processed to obtain a binary image using texture analysis and morphological operations (dilation and erosion). The Randomized Hough Transform (RHT) method is then used to find the best ellipse fit to each binary region. By using texture analysis, the system can detect mango fruits that partially overlap each other and mango fruits that are partially occluded by leaves. The combination of texture analysis and morphological operators can isolate partially overlapped fruits and fruits that are partially occluded by leaves. The parameters derived from the RHT method were used to calculate the center of each ellipse, which acts as the gripping point for a fruit-picking robot. As a result, the detection rate was up to 95% for fruit that was partially overlapped or partially covered by leaves.

  17. An open tool for input function estimation and quantification of dynamic PET FDG brain scans.

    PubMed

    Bertrán, Martín; Martínez, Natalia; Carbajal, Guillermo; Fernández, Alicia; Gómez, Álvaro

    2016-08-01

    Positron emission tomography (PET) analysis of clinical studies is mostly restricted to qualitative evaluation. Quantitative analysis of PET studies is highly desirable to be able to compute an objective measurement of the process of interest in order to evaluate treatment response and/or compare patient data. But implementation of quantitative analysis generally requires the determination of the input function: the arterial blood or plasma activity which indicates how much tracer is available for uptake in the brain. The purpose of our work was to share with the community an open software tool that can assist in the estimation of this input function, and the derivation of a quantitative map from the dynamic PET study. Arterial blood sampling during the PET study is the gold standard method to get the input function, but is uncomfortable and risky for the patient so it is rarely used in routine studies. To overcome the lack of a direct input function, different alternatives have been devised and are available in the literature. These alternatives derive the input function from the PET image itself (image-derived input function) or from data gathered from previous similar studies (population-based input function). In this article, we present ongoing work that includes the development of a software tool that integrates several methods with novel strategies for the segmentation of blood pools and parameter estimation. The tool is available as an extension to the 3D Slicer software. Tests on phantoms were conducted in order to validate the implemented methods. We evaluated the segmentation algorithms over a range of acquisition conditions and vasculature size. Input function estimation algorithms were evaluated against ground truth of the phantoms, as well as on their impact over the final quantification map. End-to-end use of the tool yields quantification maps with [Formula: see text] relative error in the estimated influx versus ground truth on phantoms. 
The main contribution of this article is the development of an open-source, free to use tool that encapsulates several well-known methods for the estimation of the input function and the quantification of dynamic PET FDG studies. Some alternative strategies are also proposed and implemented in the tool for the segmentation of blood pools and parameter estimation. The tool was tested on phantoms with encouraging results that suggest that even bloodless estimators could provide a viable alternative to blood sampling for quantification using graphical analysis. The open tool is a promising opportunity for collaboration among investigators and further validation on real studies.
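    Graphical analysis of the kind the tool supports can be sketched compactly. The following is a minimal, hedged illustration of a Patlak plot for estimating the FDG influx constant Ki from a tissue time-activity curve and an input function; the curves, time units, and the 10-minute linear-phase cutoff are synthetic assumptions, not the tool's implementation.

```python
import numpy as np

def patlak_ki(t, cp, ct, t_star=10.0):
    """Patlak graphical analysis: regress ct/cp against int(cp)/cp
    for t >= t_star; the slope is the influx constant Ki."""
    # Cumulative trapezoidal integral of the plasma input function
    int_cp = np.concatenate([[0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))])
    mask = (t >= t_star) & (cp > 0)
    x = int_cp[mask] / cp[mask]
    y = ct[mask] / cp[mask]
    ki, v0 = np.polyfit(x, y, 1)  # slope = Ki, intercept = initial volume
    return ki, v0

# Synthetic example with a known Ki, so the estimate can be checked
t = np.linspace(0.1, 60.0, 200)          # minutes (assumed)
cp = 10.0 * np.exp(-0.1 * t) + 1.0       # toy plasma input function
true_ki = 0.05
int_cp = np.concatenate([[0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))])
ct = true_ki * int_cp + 0.2 * cp         # Patlak-consistent tissue curve

ki, v0 = patlak_ki(t, cp, ct)
```

    Because the synthetic tissue curve is exactly Patlak-linear, the regression recovers the assumed Ki; with real dynamic PET data the late-time linearity is only approximate.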

  18. Deep multi-spectral ensemble learning for electronic cleansing in dual-energy CT colonography

    NASA Astrophysics Data System (ADS)

    Tachibana, Rie; Näppi, Janne J.; Hironaka, Toru; Kim, Se Hyung; Yoshida, Hiroyuki

    2017-03-01

    We developed a novel electronic cleansing (EC) method for dual-energy CT colonography (DE-CTC) based on an ensemble of deep convolutional neural networks (DCNNs) and multi-spectral, multi-slice image patches. In the method, the ensemble DCNN is used to classify each voxel of a DE-CTC image volume into five classes: luminal air, soft tissue, tagged fecal materials, and the partial-volume boundaries between air and tagging and between soft tissue and tagging. Each DCNN acts as a voxel classifier, where an image patch centered at the voxel is presented as input to the network. An image patch has three channels that are mapped from a region of interest covering the image plane of the voxel and the two adjacent image planes. Six different types of spectral input image datasets were derived using two dual-energy CT images, two virtual monochromatic images, and two material images. An ensemble DCNN was constructed by use of a meta-classifier that combines the outputs of multiple DCNNs, each of which was trained with a different type of multi-spectral image patch. The electronically cleansed CTC images were calculated by removal of regions classified as other than soft tissue, followed by a colon surface reconstruction. For pilot evaluation, 359 volumes of interest (VOIs) representing sources of subtraction artifacts observed in current EC schemes were sampled from 30 clinical CTC cases. Preliminary results showed that the ensemble DCNN can yield high accuracy in labeling of the VOIs, indicating that deep learning of multi-spectral EC with multi-slice imaging could accurately remove residual fecal materials from CTC images without generating major EC artifacts.
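    The ensemble step alone can be sketched as follows. This is a hedged illustration, assuming each base network emits per-voxel class probabilities and that the meta-classifier is a simple weighted average; the paper's trained meta-classifier and the five-class labels are only mimicked here with toy data.

```python
import numpy as np

# Illustrative five-class labels from the description above
CLASSES = ["air", "soft_tissue", "tagging", "air_tag_boundary", "soft_tag_boundary"]

def ensemble_label(prob_maps, weights=None):
    """Combine per-voxel softmax outputs from several base classifiers.

    prob_maps: array-like of shape (n_models, n_voxels, n_classes).
    Returns the per-voxel class index after weighted averaging."""
    prob_maps = np.asarray(prob_maps, dtype=float)
    n_models = prob_maps.shape[0]
    w = np.full(n_models, 1.0 / n_models) if weights is None else np.asarray(weights, float)
    combined = np.tensordot(w, prob_maps, axes=1)   # weighted mean over models
    return combined.argmax(axis=-1)                 # per-voxel class decision

# Two toy "models" that disagree in confidence on the second voxel;
# averaging their probabilities resolves the label.
p1 = np.array([[0.7, 0.1, 0.1, 0.05, 0.05],
               [0.2, 0.6, 0.1, 0.05, 0.05]])
p2 = np.array([[0.6, 0.2, 0.1, 0.05, 0.05],
               [0.4, 0.3, 0.2, 0.05, 0.05]])
labels = ensemble_label([p1, p2])
```

    In the paper the combination is learned rather than fixed; a learned meta-classifier would replace the weighted mean with a model trained on the base networks' outputs.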

  19. Lung nodule malignancy classification using only radiologist-quantified image features as inputs to statistical learning algorithms: probing the Lung Image Database Consortium dataset with two statistical learning methods

    PubMed Central

    Hancock, Matthew C.; Magnan, Jerry F.

    2016-01-01

    Abstract. In the assessment of nodules in CT scans of the lungs, a number of image-derived features are diagnostically relevant. Currently, many of these features are defined only qualitatively, so they are difficult to quantify from first principles. Nevertheless, these features (through their qualitative definitions and interpretations thereof) are often quantified via a variety of mathematical methods for the purpose of computer-aided diagnosis (CAD). To determine the potential usefulness of quantified diagnostic image features as inputs to a CAD system, we investigate the predictive capability of statistical learning methods for classifying nodule malignancy. We utilize the Lung Image Database Consortium dataset and only employ the radiologist-assigned diagnostic feature values for the lung nodules therein, as well as our derived estimates of the diameter and volume of the nodules from the radiologists’ annotations. We calculate theoretical upper bounds on the classification accuracy that are achievable by an ideal classifier that only uses the radiologist-assigned feature values, and we obtain an accuracy of 85.74 (±1.14)%, which is, on average, 4.43% below the theoretical maximum of 90.17%. The corresponding area-under-the-curve (AUC) score is 0.932 (±0.012), which increases to 0.949 (±0.007) when diameter and volume features are included and has an accuracy of 88.08 (±1.11)%. Our results are comparable to those in the literature that use algorithmically derived image-based features, which supports our hypothesis that lung nodules can be classified as malignant or benign using only quantified, diagnostic image features, and indicates the competitiveness of this approach. We also analyze how the classification accuracy depends on specific features and feature subsets, and we rank the features according to their predictive power, statistically demonstrating the top four to be spiculation, lobulation, subtlety, and calcification. PMID:27990453

  20. Lung nodule malignancy classification using only radiologist-quantified image features as inputs to statistical learning algorithms: probing the Lung Image Database Consortium dataset with two statistical learning methods.

    PubMed

    Hancock, Matthew C; Magnan, Jerry F

    2016-10-01

    In the assessment of nodules in CT scans of the lungs, a number of image-derived features are diagnostically relevant. Currently, many of these features are defined only qualitatively, so they are difficult to quantify from first principles. Nevertheless, these features (through their qualitative definitions and interpretations thereof) are often quantified via a variety of mathematical methods for the purpose of computer-aided diagnosis (CAD). To determine the potential usefulness of quantified diagnostic image features as inputs to a CAD system, we investigate the predictive capability of statistical learning methods for classifying nodule malignancy. We utilize the Lung Image Database Consortium dataset and only employ the radiologist-assigned diagnostic feature values for the lung nodules therein, as well as our derived estimates of the diameter and volume of the nodules from the radiologists' annotations. We calculate theoretical upper bounds on the classification accuracy that are achievable by an ideal classifier that only uses the radiologist-assigned feature values, and we obtain an accuracy of 85.74 [Formula: see text], which is, on average, 4.43% below the theoretical maximum of 90.17%. The corresponding area-under-the-curve (AUC) score is 0.932 ([Formula: see text]), which increases to 0.949 ([Formula: see text]) when diameter and volume features are included and has an accuracy of 88.08 [Formula: see text]. Our results are comparable to those in the literature that use algorithmically derived image-based features, which supports our hypothesis that lung nodules can be classified as malignant or benign using only quantified, diagnostic image features, and indicates the competitiveness of this approach. 
We also analyze how the classification accuracy depends on specific features and feature subsets, and we rank the features according to their predictive power, statistically demonstrating the top four to be spiculation, lobulation, subtlety, and calcification.
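    The classification setting in the two records above — a handful of ordinal, radiologist-assigned features predicting a binary malignancy label — can be illustrated with a tiny logistic regression written in plain NumPy. The feature names and the synthetic, well-separated data below are assumptions for illustration; the study used the LIDC dataset and standard statistical learners.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_logreg(X, y, lr=0.1, steps=2000):
    """Logistic regression by batch gradient descent (bias via an extra column)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(Xb @ w)))   # sigmoid probabilities
        w -= lr * Xb.T @ (p - y) / len(y)     # gradient of the log-loss
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (1.0 / (1.0 + np.exp(-(Xb @ w))) > 0.5).astype(int)

# Synthetic "spiculation, lobulation, subtlety, calcification" scores:
# malignant nodules drawn with higher spiculation/lobulation on average.
n = 200
benign = rng.normal([1.5, 1.5, 3.0, 4.0], 0.5, size=(n, 4))
malig = rng.normal([4.0, 3.5, 3.5, 2.0], 0.5, size=(n, 4))
X = np.vstack([benign, malig])
y = np.concatenate([np.zeros(n), np.ones(n)])

w = fit_logreg(X, y)
acc = (predict(w, X) == y).mean()
```

    With cleanly separated synthetic classes the training accuracy is near perfect; the study's reported accuracies on real LIDC annotations are of course lower and were estimated with proper held-out evaluation.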

  1. Incorporating User Input in Template-Based Segmentation

    PubMed Central

    Vidal, Camille; Beggs, Dale; Younes, Laurent; Jain, Sanjay K.; Jedynak, Bruno

    2015-01-01

    We present a simple and elegant method to incorporate user input in a template-based segmentation method for diseased organs. The user provides a partial segmentation of the organ of interest, which is used to guide the template towards its target. The user also highlights some elements of the background that should be excluded from the final segmentation. We derive by likelihood maximization a registration algorithm from a simple statistical image model in which the user labels are modeled as Bernoulli random variables. The resulting registration algorithm minimizes the sum of square differences between the binary template and the user labels, while preventing the template from shrinking, and penalizing for the inclusion of background elements into the final segmentation. We assess the performance of the proposed algorithm on synthetic images in which the amount of user annotation is controlled. We demonstrate our algorithm on the segmentation of the lungs of Mycobacterium tuberculosis infected mice from μCT images. PMID:26146532
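    The core of the objective described above — matching a binary template to user labels by minimizing a sum of squared differences over labeled pixels only — can be shown with a 1-D toy. The exhaustive search over shifts stands in for the registration algorithm, and all arrays are illustrative assumptions.

```python
import numpy as np

def best_shift(template, labels, mask, shifts):
    """Return the shift minimizing SSD between the shifted binary template
    and the user labels, evaluated only at user-labeled positions."""
    costs = []
    for s in shifts:
        shifted = np.roll(template, s)
        costs.append(((shifted - labels)[mask] ** 2).sum())
    return shifts[int(np.argmin(costs))]

template = np.zeros(50)
template[10:20] = 1.0            # binary template of the organ
labels = np.zeros(50)
labels[15:25] = 1.0              # user's partial foreground labels
mask = np.zeros(50, dtype=bool)
mask[12:27] = True               # only these pixels were labeled by the user

shift = best_shift(template, labels, mask, range(-10, 11))
```

    Here a shift of 5 aligns the template exactly with the labeled foreground; the paper's algorithm replaces this brute-force search with likelihood-maximizing registration and adds the shrinkage and background-exclusion penalties.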

  2. Combining image-derived and venous input functions enables quantification of serotonin-1A receptors with [carbonyl-11C]WAY-100635 independent of arterial sampling.

    PubMed

    Hahn, Andreas; Nics, Lukas; Baldinger, Pia; Ungersböck, Johanna; Dolliner, Peter; Frey, Richard; Birkfellner, Wolfgang; Mitterhauser, Markus; Wadsak, Wolfgang; Karanikas, Georgios; Kasper, Siegfried; Lanzenberger, Rupert

    2012-08-01

    Image-derived input functions (IDIFs) represent a promising technique for simpler and less invasive quantification of PET studies as compared to arterial cannulation. However, a number of limitations complicate the routine use of IDIFs in clinical research protocols, and the full substitution of manual arterial samples by venous ones has hardly been evaluated. This study aims at a direct validation of IDIFs and venous data for the quantification of serotonin-1A receptor (5-HT(1A)) binding with [carbonyl-(11)C]WAY-100635 before and after hormone treatment. Fifteen PET measurements with arterial and venous blood sampling were obtained from 10 healthy women, 8 scans before and 7 after eight weeks of hormone replacement therapy. IDIFs were obtained automatically from cerebral blood vessels, corrected for partial volume effects, and combined with venous manual samples from 10 min onward (IDIF+VIF). Corrections for the plasma/whole-blood ratio and metabolites were done separately with arterial and venous samples. 5-HT(1A) receptor quantification was achieved with arterial input functions (AIF) and IDIF+VIF using a two-tissue compartment model. Comparison between arterial and venous manual blood samples yielded excellent reproducibility. Variability (VAR) was less than 10% for whole-blood activity (p>0.4) and below 2% for plasma to whole-blood ratios (p>0.4). Variability was slightly higher for parent fractions (VARmax=24% at 5 min, p<0.05 and VAR<13% after 20 min, p>0.1) but still within previously reported values. IDIFs after partial volume correction had peak values comparable to AIFs (mean difference Δ=-7.6 ± 16.9 kBq/ml, p>0.1), whereas AIFs exhibited a delay (Δ=4 ± 6.4s, p<0.05) and higher peak width (Δ=15.9 ± 5.2s, p<0.001). 
Linear regression analysis showed strong agreement for 5-HT(1A) binding as obtained with AIF and IDIF+VIF at baseline (R(2)=0.95), after treatment (R(2)=0.93) and when pooling all scans (R(2)=0.93), with slopes and intercepts in the range of 0.97 to 1.07 and -0.05 to 0.16, respectively. In addition to the region of interest analysis, the approach yielded virtually identical results for voxel-wise quantification as compared to the AIF. Despite the fast metabolism of the radioligand, manual arterial blood samples can be substituted by venous ones for parent fractions and plasma to whole-blood ratios. Moreover, the combination of image-derived and venous input functions provides a reliable quantification of 5-HT(1A) receptors. This holds true for 5-HT(1A) binding estimates before and after treatment for both regions of interest-based and voxel-wise modeling. Taken together, the approach provides less invasive receptor quantification by full independence of arterial cannulation. This offers great potential for the routine use in clinical research protocols and encourages further investigation for other radioligands with different kinetic characteristics. Copyright © 2012 Elsevier Inc. All rights reserved.
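    The IDIF+VIF combination described above amounts to a piecewise input function: image-derived for the early bolus phase, venous samples thereafter. A minimal sketch, in which the time units, the 10-minute switch point, and the toy curves are assumptions:

```python
import numpy as np

def combine_idif_vif(t, idif, t_ven, ven, t_switch=10.0):
    """Piecewise input function: IDIF before t_switch, interpolated
    venous manual samples from t_switch onward."""
    vif = np.interp(t, t_ven, ven)          # venous curve on the PET time grid
    return np.where(t < t_switch, idif, vif)

t = np.linspace(0.0, 90.0, 181)             # minutes, 0.5-min grid (assumed)
idif = 50.0 * t * np.exp(-t)                # toy image-derived bolus peak
t_ven = np.array([10.0, 20.0, 40.0, 60.0, 90.0])  # manual venous sample times
ven = np.array([4.0, 2.5, 1.5, 1.0, 0.7])          # activities (illustrative)

cin = combine_idif_vif(t, idif, t_ven, ven)
```

    In practice the two segments would also be scaled to agree at the junction; that calibration step is omitted from this sketch.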

  3. Statistical Hypothesis Testing using CNN Features for Synthesis of Adversarial Counterexamples to Human and Object Detection Vision Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raj, Sunny; Jha, Sumit Kumar; Pullum, Laura L.

    Validating the correctness of human detection vision systems is crucial for safety applications such as pedestrian collision avoidance in autonomous vehicles. The enormous space of possible inputs to such an intelligent system makes it difficult to design test cases for such systems. In this report, we present our tool MAYA that uses an error model derived from a convolutional neural network (CNN) to explore the space of images similar to a given input image, and then tests the correctness of a given human or object detection system on such perturbed images. We demonstrate the capability of our tool on the pre-trained Histogram-of-Oriented-Gradients (HOG) human detection algorithm implemented in the popular OpenCV toolset and the Caffe object detection system pre-trained on the ImageNet benchmark. Our tool may serve as a testing resource for the designers of intelligent human and object detection systems.

  4. Digital image processing of vascular angiograms

    NASA Technical Reports Server (NTRS)

    Selzer, R. H.; Blankenhorn, D. H.; Beckenbach, E. S.; Crawford, D. W.; Brooks, S. H.

    1975-01-01

    A computer image processing technique was developed to estimate the degree of atherosclerosis in the human femoral artery. With an angiographic film of the vessel as input, the computer was programmed to estimate vessel abnormality through a series of measurements, some derived primarily from the vessel edge information and others from optical density variations within the lumen shadow. These measurements were combined into an atherosclerosis index, which was found to correlate well with both visual and chemical estimates of atherosclerotic disease.

  5. Rotation invariant features for wear particle classification

    NASA Astrophysics Data System (ADS)

    Arof, Hamzah; Deravi, Farzin

    1997-09-01

    This paper investigates the ability of a set of rotation-invariant features to classify images of wear particles found in the used lubricating oil of machinery. The rotation-invariant attribute of the features derives from the property that the magnitudes of Fourier transform coefficients do not change under a circular shift of the input elements. By analyzing individual circular neighborhoods centered at every pixel in an image, local and global texture characteristics of the image can be described. A number of input sequences are formed by the intensities of pixels on concentric rings of various radii measured from the center of each neighborhood. Fourier transforming the sequences generates coefficients whose magnitudes are invariant to rotation. Rotation-invariant features extracted from these coefficients were utilized to classify wear particle images obtained from a number of different particles captured at different orientations. In an experiment involving images of 6 classes, the circular neighborhood features obtained a 91% recognition rate, which compares favorably with the 76% rate achieved by features of a 6 by 6 co-occurrence matrix.
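    The invariance property underlying these features is the Fourier shift theorem: rotating the image circularly shifts the ring sequence, and a circular shift changes only the phases of the Fourier coefficients, not their magnitudes. A minimal demonstration on one toy ring sequence (the ring length and data are assumptions):

```python
import numpy as np

def ring_fft_features(seq):
    """Rotation-invariant features of one circular ring sequence:
    magnitudes of its discrete Fourier coefficients."""
    return np.abs(np.fft.fft(seq))

rng = np.random.default_rng(1)
ring = rng.random(32)            # pixel intensities sampled on one ring
rotated = np.roll(ring, 7)       # image rotation = circular shift of the ring

f1 = ring_fft_features(ring)
f2 = ring_fft_features(rotated)  # identical magnitudes, different phases
```

    The full method concatenates such magnitude features over rings of several radii around every pixel; this sketch shows only the invariance of a single ring.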

  6. Feature extraction from high resolution satellite imagery as an input to the development and rapid update of a METRANS geographic information system (GIS).

    DOT National Transportation Integrated Search

    2011-06-01

    This report describes an accuracy assessment of extracted features derived from three : subsets of Quickbird pan-sharpened high resolution satellite image for the area of the : Port of Los Angeles, CA. Visual Learning Systems Feature Analyst and D...

  7. Energy Input Flux in the Global Quiet-Sun Corona

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mac Cormack, Cecilia; Vásquez, Alberto M.; López Fuentes, Marcelo

    We present first results of a novel technique that provides, for the first time, constraints on the energy input flux at the coronal base (r ∼ 1.025 R_⊙) of the quiet Sun at a global scale. By combining differential emission measure tomography of EUV images with global models of the coronal magnetic field, we estimate the energy input flux at the coronal base that is required to maintain thermodynamically stable structures. The technique is described in detail and first applied to data provided by the Extreme Ultraviolet Imager instrument, on board the Solar TErrestrial RElations Observatory mission, and the Atmospheric Imaging Assembly instrument, on board the Solar Dynamics Observatory mission, for two solar rotations with different levels of activity. Our analysis indicates that the typical energy input flux at the coronal base of magnetic loops in the quiet Sun is in the range ∼0.5–2.0 × 10^5 erg s^-1 cm^-2, depending on the structure size and level of activity. A large fraction of this energy input, or even its totality, could be accounted for by Alfvén waves, as shown by recent independent observational estimates derived from determinations of the non-thermal broadening of spectral lines at the coronal base of quiet-Sun regions. This new tomography product will be useful for the validation of coronal heating models in magnetohydrodynamic simulations of the global corona.

  8. Automated method for relating regional pulmonary structure and function: integration of dynamic multislice CT and thin-slice high-resolution CT

    NASA Astrophysics Data System (ADS)

    Tajik, Jehangir K.; Kugelmass, Steven D.; Hoffman, Eric A.

    1993-07-01

    We have developed a method utilizing x-ray CT for relating pulmonary perfusion to global and regional anatomy, allowing for detailed study of structure-to-function relationships. A thick-slice, high temporal resolution mode is used to follow a bolus of contrast agent for blood flow evaluation and is fused with a high spatial resolution, thin-slice mode to obtain structure-function detail. To aid analysis of blood flow, we have developed a software module for our image analysis package (VIDA) to produce the combined structure-function image. Color-coded images representing blood flow, mean transit time, regional tissue content, regional blood volume, regional air content, etc. are generated and embedded in the high resolution volume image. A text file containing these values along with a voxel's 3-D coordinates is also generated. User input can be minimized to identifying the location of the pulmonary artery, from which the input function to a blood flow model is derived. Any flow model utilizing one input and one output function can be easily added to a user-selectable list. We present examples from our physiologically based research findings to demonstrate the strengths of combining dynamic CT and HRCT, relative to other scanning modalities, in uniquely characterizing normal pulmonary physiology and pathophysiology.

  9. Dynamic PET of human liver inflammation: impact of kinetic modeling with optimization-derived dual-blood input function.

    PubMed

    Wang, Guobao; Corwin, Michael T; Olson, Kristin A; Badawi, Ramsey D; Sarkar, Souvik

    2018-05-30

    The hallmark of nonalcoholic steatohepatitis is hepatocellular inflammation and injury in the setting of hepatic steatosis. Recent work has indicated that dynamic 18F-FDG PET with kinetic modeling has the potential to assess hepatic inflammation noninvasively, whereas static FDG-PET did not show promise. Because the liver has dual blood supplies, kinetic modeling of dynamic liver PET data is challenging in human studies. The objective of this study is to evaluate and identify a dual-input kinetic modeling approach for dynamic FDG-PET of human liver inflammation. Fourteen patients with nonalcoholic fatty liver disease were included in the study. Each patient underwent a one-hour dynamic FDG-PET/CT scan and had a liver biopsy within six weeks. Three models were tested for kinetic analysis: the traditional two-tissue compartmental model with an image-derived single-blood input function (SBIF), a model with a population-based dual-blood input function (DBIF), and a modified model with an optimization-derived DBIF obtained through a joint estimation framework. The three models were compared using the Akaike information criterion (AIC), the F test, and a histopathologic inflammation reference. The results showed that the optimization-derived DBIF model improved the fitting of liver time-activity curves and achieved lower AIC values and higher F values than the SBIF and population-based DBIF models in all patients. The optimization-derived model significantly increased FDG K1 estimates by 101% and 27% as compared with the traditional SBIF and population-based DBIF. K1 by the optimization-derived model was significantly associated with histopathologic grades of liver inflammation, while the other two models did not provide statistical significance. In conclusion, modeling of the DBIF is critical for kinetic analysis of dynamic liver FDG-PET data in human studies. The optimization-derived DBIF model is more appropriate than the SBIF and population-based DBIF models for dynamic FDG-PET of liver inflammation. 
© 2018 Institute of Physics and Engineering in Medicine.
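    Model selection by AIC, as used above to rank the input-function models, can be illustrated with two toy candidate models fitted to the same curve. The least-squares Gaussian AIC formula below is standard; the synthetic data, the candidate models, and the grid search over the rate constant are assumptions of this sketch.

```python
import numpy as np

def aic(y, y_fit, k):
    """AIC for least-squares fits with Gaussian errors: n*ln(RSS/n) + 2k."""
    n = len(y)
    rss = np.sum((y - y_fit) ** 2)
    return n * np.log(rss / n) + 2 * k

def fit_exp(t, y, r):
    """Best amplitude for a*(1 - exp(-r*t)) by linear least squares, given r."""
    b = 1.0 - np.exp(-r * t)
    return ((y @ b) / (b @ b)) * b

t = np.linspace(0.1, 60.0, 120)
rng = np.random.default_rng(2)
y = 3.0 * (1.0 - np.exp(-0.2 * t)) + rng.normal(0.0, 0.05, t.size)

# Candidate 1: straight line (2 parameters)
fit1 = np.polyval(np.polyfit(t, y, 1), t)
# Candidate 2: saturating exponential (2 parameters; rate by coarse grid search)
rates = np.linspace(0.01, 1.0, 100)
best_r = min(rates, key=lambda r: np.sum((y - fit_exp(t, y, r)) ** 2))
fit2 = fit_exp(t, y, best_r)

aic1, aic2 = aic(y, fit1, 2), aic(y, fit2, 2)   # lower AIC = preferred model
```

    The exponential model fits the saturating data far better at equal parameter count, so its AIC is lower; the same logic, with compartmental models in place of these toys, drives the comparison in the study.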

  10. Application of snakes and dynamic programming optimisation technique in modeling of buildings in informal settlement areas

    NASA Astrophysics Data System (ADS)

    Rüther, Heinz; Martine, Hagai M.; Mtalo, E. G.

    This paper presents a novel approach to semiautomatic building extraction in informal settlement areas from aerial photographs. The proposed approach uses a strategy of delineating buildings by optimising the position of approximate building contours. Approximate building contours are derived automatically by locating elevation blobs in digital surface models. Building extraction is then effected by means of the snakes algorithm and the dynamic programming optimisation technique. With dynamic programming, the building contour optimisation problem is realized through a discrete multistage process and solved by the "time-delayed" algorithm developed in this work. The proposed building extraction approach is a semiautomatic process, with user-controlled operations linking fully automated subprocesses. Inputs into the proposed building extraction system are ortho-images and digital surface models, the latter being generated through image matching techniques. Buildings are modeled as "lumps" or elevation blobs, which are derived by altimetric thresholding of the digital surface models. Initial windows for building extraction are provided by projecting the elevation blobs' centre points onto an ortho-image. In the next step, approximate building contours are extracted from the ortho-image by region growing constrained by edges. The approximate building contours thus derived are inputs into the dynamic programming optimisation process, in which the final building contours are established. The proposed system was tested on two study areas: Marconi Beam in Cape Town, South Africa, and Manzese in Dar es Salaam, Tanzania. Sixty percent of the buildings in the study areas were extracted and verified, and it is concluded that the proposed approach contributes meaningfully to the extraction of buildings in moderately complex and crowded informal settlement areas.
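    The discrete multistage optimisation at the heart of such contour refinement can be sketched as a small dynamic program: at each stage (a point along the contour) choose one of several candidate positions, minimising image energy plus a smoothness penalty between consecutive stages. The energies and the simple absolute-difference smoothness term below are illustrative assumptions, not the paper's "time-delayed" formulation.

```python
import numpy as np

def dp_contour(energy, smooth=1.0):
    """Minimise sum_i energy[i, p_i] + smooth * |p_i - p_{i-1}|
    over position sequences p, by dynamic programming with backtracking."""
    n, m = energy.shape
    cost = energy[0].copy()
    back = np.zeros((n, m), dtype=int)
    pos = np.arange(m)
    for i in range(1, n):
        # trans[j, k]: cost of arriving at position j from position k
        trans = cost[None, :] + smooth * np.abs(pos[:, None] - pos[None, :])
        back[i] = trans.argmin(axis=1)
        cost = energy[i] + trans.min(axis=1)
    path = [int(cost.argmin())]
    for i in range(n - 1, 0, -1):
        path.append(int(back[i][path[-1]]))
    return path[::-1]

# A low-energy "edge" at position 2 at every stage, except one noisy stage
# that, taken alone, would pull the contour to position 0.
E = np.full((5, 5), 5.0)
E[:, 2] = 0.0
E[2] = [0.0, 5.0, 5.0, 5.0, 5.0]
path = dp_contour(E, smooth=2.0)
```

    With the smoothness weight shown, deviating to the outlier position costs more than tolerating one stage of higher image energy, so the optimal path stays at position 2 throughout.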

  11. Motion video compression system with neural network having winner-take-all function

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi (Inventor); Sheu, Bing J. (Inventor)

    1997-01-01

    A motion video data system includes a compression system comprising: an image compressor; an image decompressor, correlative to the image compressor, whose input is connected to the output of the image compressor; a feedback summing node with one input connected to the output of the image decompressor; a picture memory whose input is connected to the output of the feedback summing node; and apparatus for comparing an image stored in the picture memory with a received input image, deducing the pixels that differ between the stored and received images, retrieving from the picture memory a partial image containing only those pixels, and applying the partial image to another input of the feedback summing node, thereby producing an updated decompressed image at the output of the feedback summing node. A subtraction node has one input connected to receive the received image and another input connected to receive the partial image, so as to generate a difference image; the input of the image compressor receives this difference image, whereby a compressed difference image is produced at the output of the image compressor.
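    The closed-loop difference coding the claim describes can be modeled in a few lines: compress the difference between the incoming frame and the decompressed reference, then update the reference with the decompressed difference, so quantization error cannot accumulate across frames. Uniform quantization stands in for the patent's neural-network compressor; this is an illustrative assumption.

```python
import numpy as np

def quantize(x, step=8):
    """Stand-in "compress + decompress" stage: uniform quantization."""
    return np.round(x / step) * step

def encode_sequence(frames):
    """Closed-loop difference coding: the encoder tracks the decoder's state."""
    ref = np.zeros_like(frames[0], dtype=float)   # picture memory
    recon = []
    for f in frames:
        diff = f - ref                            # subtraction node
        dec = quantize(diff)                      # compressor + decompressor
        ref = ref + dec                           # feedback summing node
        recon.append(ref.copy())
    return recon

frames = [np.full((4, 4), v, dtype=float) for v in (0, 50, 52, 120)]
recon = encode_sequence(frames)
err = max(np.abs(r - f).max() for r, f in zip(recon, frames))
```

    Because the encoder's reference is the decoder's own reconstruction, the per-frame error stays bounded by half the quantization step instead of drifting, which is the point of the feedback loop in the claimed architecture.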

  12. Motion-gated acquisition for in vivo optical imaging

    PubMed Central

    Gioux, Sylvain; Ashitate, Yoshitomo; Hutteman, Merlijn; Frangioni, John V.

    2009-01-01

    Wide-field continuous wave fluorescence imaging, fluorescence lifetime imaging, frequency domain photon migration, and spatially modulated imaging have the potential to provide quantitative measurements in vivo. However, most of these techniques have not yet been successfully translated to the clinic due to challenging environmental constraints. In many circumstances, cardiac and respiratory motion greatly impair image quality and/or quantitative processing. To address this fundamental problem, we have developed a low-cost, field-programmable gate array–based, hardware-only gating device that delivers a phase-locked acquisition window of arbitrary delay and width that is derived from an unlimited number of pseudo-periodic and nonperiodic input signals. All device features can be controlled manually or via USB serial commands. The working range of the device spans the extremes of mouse electrocardiogram (1000 beats per minute) to human respiration (4 breaths per minute), with timing resolution ⩽0.06%, and jitter ⩽0.008%, of the input signal period. We demonstrate the performance of the gating device, including dramatic improvements in quantitative measurements, in vitro using a motion simulator and in vivo using near-infrared fluorescence angiography of beating pig heart. This gating device should help to enable the clinical translation of promising new optical imaging technologies. PMID:20059276
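    The phase-locked gating logic can be modeled in software: given trigger times from a pseudo-periodic signal (e.g. ECG R-waves), accept only samples falling within a window of fixed delay and width after the most recent trigger. The device implements this in FPGA hardware; the function, trigger times, and window parameters below are illustrative assumptions.

```python
import numpy as np

def gated(sample_times, triggers, delay, width):
    """Boolean mask: True where a sample falls within [delay, delay + width)
    after the most recent trigger preceding it."""
    triggers = np.asarray(triggers)
    sample_times = np.asarray(sample_times)
    idx = np.searchsorted(triggers, sample_times, side="right") - 1
    phase = sample_times - triggers[np.clip(idx, 0, None)]
    return (idx >= 0) & (phase >= delay) & (phase < delay + width)

triggers = np.arange(0.0, 10.0, 1.0)                  # 1 Hz "heartbeat"
samples = np.array([0.05, 0.25, 0.45, 1.30, 2.95])    # acquisition timestamps
mask = gated(samples, triggers, delay=0.2, width=0.3)
```

    Only samples whose phase relative to the latest trigger lies in [0.2, 0.5) s pass the gate; everything during the rejected part of the cardiac or respiratory cycle is discarded.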

  13. Gradient-based multiresolution image fusion.

    PubMed

    Petrović, Valdimir S; Xydeas, Costas S

    2004-02-01

    A novel approach to multiresolution signal-level image fusion is presented for accurately transferring visual information from any number of input image signals into a single fused image without loss of information or the introduction of distortion. The proposed system uses a "fuse-then-decompose" technique realized through a novel fusion/decomposition system architecture. In particular, information fusion is performed in a multiresolution gradient map representation domain of image signal information. At each resolution, input images are represented as gradient maps and combined to produce new, fused gradient maps. Fused gradient map signals are processed using gradient filters derived from high-pass quadrature mirror filters to yield a fused multiresolution pyramid representation. The fused output image is obtained by applying, on the fused pyramid, a reconstruction process analogous to that of the conventional discrete wavelet transform. This new gradient fusion significantly reduces the amount of distortion artefacts and the loss of contrast information usually observed in fused images obtained from conventional multiresolution fusion schemes. This is because fusion in the gradient map domain significantly improves the reliability of the feature selection and information fusion processes. Fusion performance is evaluated through informal visual inspection and subjective psychometric preference tests, as well as objective fusion performance measurements. Results clearly demonstrate the superiority of this new approach when compared to conventional fusion systems.
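    A single-level toy version of gradient-domain fusion: at each pixel keep the gradient of whichever input has the stronger edge, so the fused gradient field carries the salient structure of both inputs. This one-level, max-magnitude rule is an illustrative assumption, not the paper's multiresolution QMF pipeline.

```python
import numpy as np

def grads(img):
    """Return (gx, gy) gradient maps of a 2-D image."""
    gy, gx = np.gradient(img.astype(float))   # np.gradient: axis 0 first
    return gx, gy

def fuse_gradients(a, b):
    """Per-pixel selection: take the gradient of the stronger edge."""
    ax, ay = grads(a)
    bx, by = grads(b)
    take_a = np.hypot(ax, ay) >= np.hypot(bx, by)
    return np.where(take_a, ax, bx), np.where(take_a, ay, by)

# Two inputs, each carrying one edge the other lacks
a = np.zeros((16, 16))
a[:, 8:] = 1.0            # vertical edge
b = np.zeros((16, 16))
b[8:, :] = 1.0            # horizontal edge
fx, fy = fuse_gradients(a, b)
energy = np.hypot(fx, fy).sum()   # fused field carries both edges
```

    The fused gradient energy exceeds that of either input alone, reflecting that edge information from both sources survives; reconstructing an image from the fused field is the step the paper performs with its pyramid machinery.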

  14. Detection and quantification of large-vessel inflammation with 11C-(R)-PK11195 PET/CT.

    PubMed

    Lamare, Frederic; Hinz, Rainer; Gaemperli, Oliver; Pugliese, Francesca; Mason, Justin C; Spinks, Terence; Camici, Paolo G; Rimoldi, Ornella E

    2011-01-01

    We investigated whether PET/CT angiography using 11C-(R)-PK11195, a selective ligand for the translocator protein (18 kDa) expressed in activated macrophages, could allow imaging and quantification of arterial wall inflammation in patients with large-vessel vasculitis. Seven patients with systemic inflammatory disorders (3 symptomatic patients with clinical suspicion of active vasculitis and 4 asymptomatic patients) underwent PET with 11C-(R)-PK11195 and CT angiography to colocalize arterial wall uptake of 11C-(R)-PK11195. Tissue regions of interest were defined in bone marrow, lung parenchyma, and the wall of the ascending aorta, aortic arch, and descending aorta. Blood-derived and image-derived input functions (IFs) were generated. A reversible 1-tissue compartment model with 2 kinetic rate constants and a fractional blood volume term was used to fit the time-activity curves and calculate the total volume of distribution (VT). The correlation between VT and standardized uptake values was assessed. VT was significantly higher in symptomatic than in asymptomatic patients using both the image-derived total plasma IF (0.55±0.15 vs. 0.27±0.12, P=0.009) and the image-derived parent plasma IF (1.40±0.50 vs. 0.58±0.25, P=0.018). A good correlation was observed between VT and standardized uptake value (R=0.79; P=0.03). 11C-(R)-PK11195 imaging allows visualization of macrophage infiltration in inflamed arterial walls. Tracer uptake can be quantified with an image-derived IF, without the need for metabolite corrections, and evaluated semiquantitatively with standardized uptake values.

  15. Kinetic quantitation of cerebral PET-FDG studies without concurrent blood sampling: statistical recovery of the arterial input function.

    PubMed

    O'Sullivan, F; Kirrane, J; Muzi, M; O'Sullivan, J N; Spence, A M; Mankoff, D A; Krohn, K A

    2010-03-01

    Kinetic quantitation of dynamic positron emission tomography (PET) studies via compartmental modeling usually requires the time-course of the radio-tracer concentration in the arterial blood as an arterial input function (AIF). For human and animal imaging applications, significant practical difficulties are associated with direct arterial sampling, and as a result there is substantial interest in alternative methods that require no blood sampling at the time of the study. A fixed population template input function derived from prior experience with directly sampled arterial curves is one possibility. Image-based extraction, including the requisite adjustment for spillover and recovery, is another approach. The present work considers a hybrid statistical approach based on a penalty formulation in which the information derived from a priori studies is combined in a Bayesian manner with information contained in the sampled image data in order to obtain an input function estimate. The absolute scaling of the input is achieved by an empirical calibration equation involving the injected dose together with the subject's weight, height and gender. The technique is illustrated in the context of (18)F-Fluorodeoxyglucose (FDG) PET studies in humans. A collection of 79 arterially sampled FDG blood curves is used as a basis for a priori characterization of input function variability, including scaling characteristics. Data from a series of 12 dynamic cerebral FDG PET studies in normal subjects are used to evaluate the performance of the penalty-based AIF estimation technique. The focus of the evaluations is on quantitation of FDG kinetics over a set of 10 regional brain structures. As well as the new method, a fixed population template AIF and a direct AIF estimate based on segmentation are also considered. Kinetics analyses resulting from these three AIFs are compared with those resulting from arterially sampled AIFs. 
The proposed penalty-based AIF extraction method is found to achieve significant improvements over the fixed template and the segmentation methods. As well as achieving acceptable kinetic parameter accuracy, the quality of fit of the region of interest (ROI) time-course data based on the extracted AIF, matches results based on arterially sampled AIFs. In comparison, significant deviation in the estimation of FDG flux and degradation in ROI data fit are found with the template and segmentation methods. The proposed AIF extraction method is recommended for practical use.
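    As a rough illustration of the penalty idea above (not the paper's actual model), the sketch below fits a single scale factor applied to a population-template curve against noisy image-derived samples, with a quadratic penalty pulling the estimate toward the prior scale. The template shape, penalty weight, and all names are assumptions for the sketch.

```python
import numpy as np

# Hypothetical penalty-based (Bayesian) AIF estimate: the image data constrain
# a scale factor applied to a population-template curve, while a quadratic
# penalty pulls the estimate toward the prior scale of 1. The template shape
# and penalty weight are illustrative choices, not the paper's model.
t = np.linspace(0.1, 60.0, 120)                               # minutes
template = t * np.exp(-t / 3.0) + 0.05 * np.exp(-t / 40.0)    # prior AIF shape

rng = np.random.default_rng(0)
true_scale = 1.3
samples = true_scale * template + rng.normal(0.0, 0.01, t.size)  # noisy image data

lam = 1.0  # penalty weight encoding confidence in the prior
scale_hat = (samples @ template + lam * 1.0) / (template @ template + lam)
aif_estimate = scale_hat * template
```

    A larger `lam` would shrink the estimate further toward the population prior, which is the trade-off the penalty formulation controls.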

  16. Spectroscopic analysis and in vitro imaging applications of a pH responsive AIE sensor with a two-input inhibit function.

    PubMed

    Zhou, Zhan; Gu, Fenglong; Peng, Liang; Hu, Ying; Wang, Qianming

    2015-08-04

    A novel terpyridine derivative formed stable aggregates in aqueous media (DMSO/H2O = 1/99) with dramatically enhanced fluorescence compared to its organic solution. Moreover, the ultraviolet absorption spectra also demonstrated specific responses to the incorporation of water. The yellow emission at 557 nm changed to intense greenish luminescence only in the presence of protons, conforming to a molecular logic gate with a two-input INHIBIT function. This molecular-based material could permeate into live cells and remain undissociated in the cytoplasm. The new aggregation-induced emission (AIE) pH-type bio-probe permitted easy collection of yellow luminescence images on a fluorescence microscope. As designed, it displayed striking green emission in organelles at low internal pH. This feature enabled the self-assembled structure to serve a whole new function for pH detection within the field of cell imaging.
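    For readers unfamiliar with molecular logic notation, a two-input INHIBIT gate outputs 1 only when the first input is present and the second is absent (A AND NOT B). The sketch below is generic and does not reflect the paper's specific chemical input assignment.

```python
# Generic two-input INHIBIT gate: output = A AND NOT B.
# The mapping of chemical stimuli onto inputs A and B is not taken from the
# paper; this only illustrates the logic function named in the abstract.
def inhibit(a: int, b: int) -> int:
    return int(bool(a) and not bool(b))

truth_table = [(a, b, inhibit(a, b)) for a in (0, 1) for b in (0, 1)]
```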

  17. SAR image segmentation using skeleton-based fuzzy clustering

    NASA Astrophysics Data System (ADS)

    Cao, Yun Yi; Chen, Yan Qiu

    2003-06-01

    SAR image segmentation can be converted to a clustering problem in which pixels or small patches are grouped together based on local feature information. In this paper, we present a novel framework for segmentation. The segmentation goal is achieved by unsupervised clustering of characteristic descriptors extracted from local patches. The mixture model of the characteristic descriptor, which combines intensity and texture features, is investigated. The unsupervised algorithm is derived from the recently proposed Skeleton-Based Data Labeling method. Skeletons are constructed as prototypes of clusters to represent arbitrary latent structures in image data. Segmentation using Skeleton-Based Fuzzy Clustering is able to detect the types of surfaces that appear in SAR images automatically, without any user input.

  18. Global image analysis to determine suitability for text-based image personalization

    NASA Astrophysics Data System (ADS)

    Ding, Hengzhou; Bala, Raja; Fan, Zhigang; Bouman, Charles A.; Allebach, Jan P.

    2012-03-01

    Image personalization has lately become a topic of considerable interest. Images with variable elements such as text usually appear much more appealing to the recipients. In this paper, we describe a method to pre-analyze an image and automatically suggest to the user the most suitable regions within it for text-based personalization. The method is based on input gathered from experiments conducted with professional designers. It has been observed that regions that are spatially smooth and regions with existing text (e.g. signage, banners, etc.) are the best candidates for personalization. This gives rise to two sets of corresponding algorithms: one for identifying smooth areas, and one for locating text regions. Furthermore, based on the smooth and text regions found in the image, we derive an overall metric to rate the image in terms of its suitability for personalization (SFP).

  19. Image enhancement by non-linear extrapolation in frequency space

    NASA Technical Reports Server (NTRS)

    Anderson, Charles H. (Inventor); Greenspan, Hayit K. (Inventor)

    1998-01-01

    An input image is enhanced to include spatial frequency components having frequencies higher than those in the input image. To this end, an edge map is generated from the input image using a high band-pass filtering technique. An enhancing map is subsequently generated from the edge map, with the enhancing map having spatial frequencies exceeding an initial maximum spatial frequency of the input image. The enhancing map is generated by applying a non-linear operator to the edge map in a manner which preserves the phase transitions of the edges of the input image. The enhancing map is added to the input image to achieve a resulting image having spatial frequencies greater than those in the input image. Simplicity of computations and ease of implementation allow for image sharpening after enlargement and for real-time applications such as videophones, advanced definition television, zooming, and restoration of old motion pictures.
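    A minimal 1-D sketch of the general scheme described above, with an assumed high-pass kernel and a clipping non-linearity standing in for the patented filter design:

```python
import numpy as np

# 1-D sketch of the enhancement idea: extract a high-pass "edge map", apply a
# sign-preserving non-linear operator (clipping after gain), and add the result
# back to the input. The kernel and gain are illustrative, not the patent's.
signal = np.repeat([0.0, 1.0], 32)                  # an ideal step edge
kernel = np.array([-1.0, 2.0, -1.0])                # simple high-pass filter
edges = np.convolve(signal, kernel, mode="same")    # edge map

gain, clip = 4.0, 0.5
enhanced_map = np.clip(gain * edges, -clip, clip)   # non-linear, phase-preserving
result = signal + enhanced_map                      # overshoot sharpens the edge
```

    The clipped overshoot adds frequency content beyond the input's band while the sign of the edge response (its "phase") is preserved, which is the property the abstract emphasizes.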

  20. Comparison of spatiotemporal interpolators for 4D image reconstruction from 2D transesophageal ultrasound

    NASA Astrophysics Data System (ADS)

    Haak, Alexander; van Stralen, Marijn; van Burken, Gerard; Klein, Stefan; Pluim, Josien P. W.; de Jong, Nico; van der Steen, Antonius F. W.; Bosch, Johan G.

    2012-03-01

    For electrophysiology intervention monitoring, we intend to reconstruct 4D ultrasound (US) of structures in the beating heart from 2D transesophageal US by scanplane rotation. The image acquisition is continuous but unsynchronized to the heart rate, which results in a sparsely and irregularly sampled dataset, so a spatiotemporal interpolation method is desired. Previously, we showed the potential of normalized convolution (NC) for interpolating such datasets. We explored 4D interpolation by 3 different methods: NC, nearest neighbor (NN), and temporal binning followed by linear interpolation (LTB). The test datasets were derived by slicing three 4D echocardiography datasets at random rotation angles (θ, range: 0-180°) and random normalized cardiac phase (τ, range: 0-1). Four different distributions of rotated 2D images with 600, 900, 1350, and 1800 2D input images were created from all TEE sets. A 2D Gaussian kernel was used for NC, and optimal kernel sizes (σθ and στ) were found by performing an exhaustive search. The RMS gray value error (RMSE) of the reconstructed images was computed for all interpolation methods. The estimated optimal kernels were in the range of σθ = 3.24-3.69°/στ = 0.045-0.048, σθ = 2.79°/στ = 0.031-0.038, σθ = 2.34°/στ = 0.023-0.026, and σθ = 1.89°/στ = 0.021-0.023 for 600, 900, 1350, and 1800 input images, respectively. We showed that NC outperforms NN and LTB. For a small number of input images, the advantage of NC is more pronounced.
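    The normalized-convolution step can be sketched in 1-D as a Gaussian-weighted average whose weights are renormalized by their local sum (the "certainty"). The kernel width, the 1-D setting, and the synthetic signal below are assumptions; the paper uses a 2-D kernel over (θ, τ).

```python
import numpy as np

# Normalized convolution (NC) over irregular samples: each output point is a
# Gaussian-weighted average of nearby samples, divided by the local weight sum.
# The 1-D angle-only setting and kernel width are illustrative, not the study's.
rng = np.random.default_rng(1)
sample_pos = np.sort(rng.uniform(0.0, 180.0, 200))          # irregular angles (deg)
sample_val = np.sin(np.deg2rad(sample_pos) * 2.0)           # underlying signal

grid = np.linspace(0.0, 180.0, 91)                          # reconstruction grid
sigma = 3.0                                                 # kernel width (degrees)
d = grid[:, None] - sample_pos[None, :]
w = np.exp(-0.5 * (d / sigma) ** 2)
recon = (w @ sample_val) / w.sum(axis=1)                    # normalized convolution
```

    Dividing by the weight sum is what distinguishes NC from plain smoothing: regions with sparse samples are not dimmed, only averaged over a wider effective support.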

  1. Studies of auroral X-ray imaging from high altitude spacecraft

    NASA Technical Reports Server (NTRS)

    Mckenzie, D. L.; Mizera, P. F.; Rice, C. J.

    1980-01-01

    Results of a study of techniques for imaging the aurora from a high altitude satellite at X-ray wavelengths are summarized. The X-ray observations allow the straightforward derivation of the primary auroral X-ray spectrum and can be made at all local times, day and night. Five candidate imaging systems are identified: X-ray telescope, multiple pinhole camera, coded aperture, rastered collimator, and imaging collimator. Examples of each are specified, subject to common weight and size limits which allow them to be intercompared. The imaging ability of each system is tested using a wide variety of sample spectra which are based on previous satellite observations. The study shows that the pinhole camera and coded aperture are both good auroral imaging systems. The two collimated detectors are significantly less sensitive. The X-ray telescope provides better image quality than the other systems in almost all cases, but a limitation to energies below about 4 keV prevents this system from providing the spectral data essential to deriving electron spectra, energy input to the atmosphere, and atmospheric densities and conductivities. The orbit selection requires a tradeoff between spatial resolution and duty cycle.

  2. Noninvasive image derived heart input function for CMRglc measurements in small animal slow infusion FDG PET studies

    NASA Astrophysics Data System (ADS)

    Xiong, Guoming; Cumming, Paul; Todica, Andrei; Hacker, Marcus; Bartenstein, Peter; Böning, Guido

    2012-12-01

    Absolute quantitation of the cerebral metabolic rate for glucose (CMRglc) can be obtained in positron emission tomography (PET) studies when serial measurements of the arterial [18F]-fluoro-deoxyglucose (FDG) input are available. Since this is not always practical in PET studies of rodents, there has been considerable interest in defining an image-derived input function (IDIF) by placing a volume of interest (VOI) within the left ventricle of the heart. However, spill-in arising from trapping of FDG in the myocardium often leads to progressive contamination of the IDIF, which propagates to underestimation of the magnitude of CMRglc. We therefore developed a novel, non-invasive method for correcting the IDIF without scaling to a blood sample. To this end, we first obtained serial arterial samples and dynamic FDG-PET data of the head and heart in a group of eight anaesthetized rats. We fitted a bi-exponential function to the serial measurements of the IDIF, and then used the linear graphical Gjedde-Patlak method to describe the accumulation in myocardium. We next estimated the magnitude of myocardial spill-in reaching the left ventricle VOI by assuming a Gaussian point-spread function, and corrected the measured IDIF for this estimated spill-in. Finally, we calculated parametric maps of CMRglc using the corrected IDIF, and for the sake of comparison, relative to serial blood sampling from the femoral artery. The uncorrected IDIF resulted in 20% underestimation of the magnitude of CMRglc relative to the gold standard arterial input method. However, there was no bias with the corrected IDIF, which was robust to the variable extent of myocardial tracer uptake, such that there was a very high correlation between individual CMRglc measurements using the corrected IDIF and gold-standard arterial input results. Based on simulation, we furthermore find that electrocardiogram (ECG) gating is not necessary for IDIF quantitation using our approach.
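    The linear graphical Gjedde-Patlak step can be sketched as follows on synthetic data. For an irreversibly trapped tracer such as FDG, tissue activity obeys Ct(t)/Cp(t) = Ki · ∫Cp/Cp(t) + V0 at late times, so Ki is the slope of a straight line. The bi-exponential input and the constants below are illustrative, not the study's values.

```python
import numpy as np

# Illustrative Gjedde-Patlak fit on a synthetic irreversible-uptake curve.
# Ki (net uptake rate) is recovered as the slope of the late linear portion.
t = np.linspace(0.5, 60.0, 120)                         # minutes
Cp = 10.0 * np.exp(-t / 2.0) + 1.0 * np.exp(-t / 80.0)  # plasma input (bi-exponential)
int_Cp = np.cumsum(Cp) * (t[1] - t[0])                  # running integral of Cp

Ki_true, V0 = 0.03, 0.4
Ct = Ki_true * int_Cp + V0 * Cp                         # ideal irreversible tissue curve

x = int_Cp / Cp                                         # "Patlak time"
y = Ct / Cp
late = t > 20.0                                         # fit the linear late portion
Ki_hat, V0_hat = np.polyfit(x[late], y[late], 1)
```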

  3. Classification of Land Cover and Land Use Based on Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Yang, Chun; Rottensteiner, Franz; Heipke, Christian

    2018-04-01

    Land cover describes the physical material of the earth's surface, whereas land use describes the socio-economic function of a piece of land. Land use information is typically collected in geospatial databases. As such databases become outdated quickly, an automatic update process is required. This paper presents a new approach to determine land cover and to classify land use objects based on convolutional neural networks (CNN). The input data are aerial images and derived data such as digital surface models. Firstly, we apply a CNN to determine the land cover for each pixel of the input image. We compare different CNN structures, all of them based on an encoder-decoder structure for obtaining dense class predictions. Secondly, we propose a new CNN-based methodology for the prediction of the land use label of objects from a geospatial database. In this context, we present a strategy for generating image patches of identical size from the input data, which are classified by a CNN. Again, we compare different CNN architectures. Our experiments show that an overall accuracy of up to 85.7% and 77.4% can be achieved for land cover and land use, respectively. The land cover classification makes a positive contribution to the land use classification.

  4. Implicit kernel sparse shape representation: a sparse-neighbors-based object segmentation framework.

    PubMed

    Yao, Jincao; Yu, Huimin; Hu, Roland

    2017-01-01

    This paper introduces a new implicit-kernel-sparse-shape-representation-based object segmentation framework. Given an input object whose shape is similar to some of the elements in the training set, the proposed model can automatically find a cluster of implicit kernel sparse neighbors to approximately represent the input shape and guide the segmentation. A distance-constrained probabilistic definition together with a dualization energy term is developed to connect high-level shape representation and low-level image information. We theoretically prove that our model not only derives from two projected convex sets but is also equivalent to a sparse-reconstruction-error-based representation in the Hilbert space. Finally, a "wake-sleep"-based segmentation framework is applied to drive the evolutionary curve to recover the original shape of the object. We test our model on two public datasets. Numerical experiments on both synthetic images and real applications show the superior capabilities of the proposed framework.

  5. Noninvasive PK11195-PET Image Analysis Techniques Can Detect Abnormal Cerebral Microglial Activation in Parkinson's Disease.

    PubMed

    Kang, Yeona; Mozley, P David; Verma, Ajay; Schlyer, David; Henchcliffe, Claire; Gauthier, Susan A; Chiao, Ping C; He, Bin; Nikolopoulou, Anastasia; Logan, Jean; Sullivan, Jenna M; Pryor, Kane O; Hesterman, Jacob; Kothari, Paresh J; Vallabhajosula, Shankar

    2018-05-04

    Neuroinflammation has been implicated in the pathophysiology of Parkinson's disease (PD), which might be influenced by successful neuroprotective drugs. The uptake of [11C](R)-PK11195 (PK) is often considered to be a proxy for neuroinflammation, and can be quantified using the Logan graphical method with an image-derived blood input function, or the Logan reference tissue model using automated reference region extraction. The purposes of this study were (1) to assess whether these noninvasive image analysis methods can discriminate between patients with PD and healthy volunteers (HVs), and (2) to establish the effect size that would be required to distinguish true drug-induced changes from system variance in longitudinal trials. The sample consisted of 20 participants with PD and 19 HVs. Two independent teams analyzed the data to compare the volume of distribution calculated using image-derived input functions (IDIFs), and binding potentials calculated using the Logan reference region model. With all methods, the higher signal-to-background in patients resulted in lower variability and better repeatability than in controls. We were able to use noninvasive techniques showing significantly increased uptake of PK in multiple brain regions of participants with PD compared to HVs. Although not necessarily reflecting absolute values, these noninvasive image analysis methods can discriminate between PD patients and HVs. We see a difference of 24% in the substantia nigra between PD and HV with a repeatability coefficient of 13%, showing that it will be possible to estimate responses in longitudinal, within subject trials of novel neuroprotective drugs. © 2018 The Authors. Journal of Neuroimaging published by Wiley Periodicals, Inc. on behalf of American Society of Neuroimaging.
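    The Logan graphical method referred to above estimates the distribution volume VT as the late-time slope of a transformed plot: ∫Ct/Ct versus ∫Cp/Ct. A synthetic one-tissue-compartment sketch (the rate constants and input curve are illustrative, not study data):

```python
import numpy as np

# Logan graphical analysis on a simulated one-tissue compartment model.
# With K1/k2 = VT, the late-time slope of int(Ct)/Ct vs int(Cp)/Ct recovers VT.
dt = 0.25
t = np.arange(0.0, 90.0, dt)                              # minutes
Cp = 12.0 * np.exp(-t / 1.5) + 1.5 * np.exp(-t / 60.0)    # synthetic plasma input
K1, k2 = 0.1, 0.1                                         # VT = K1 / k2 = 1.0
Ct = K1 * np.convolve(Cp, np.exp(-k2 * t))[: t.size] * dt # tissue response

int_Ct = np.cumsum(Ct) * dt
int_Cp = np.cumsum(Cp) * dt
late = t > 40.0                                           # asymptotically linear part
VT_hat, intercept = np.polyfit(int_Cp[late] / Ct[late], int_Ct[late] / Ct[late], 1)
```

    For a reversible tracer the intercept approaches -1/k2; in practice the IDIF replaces the sampled Cp, which is exactly where the input-function quality discussed above matters.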

  6. Direct local building inundation depth determination in 3-D point clouds generated from user-generated flood images

    NASA Astrophysics Data System (ADS)

    Griesbaum, Luisa; Marx, Sabrina; Höfle, Bernhard

    2017-07-01

    In recent years, the number of people affected by flooding caused by extreme weather events has increased considerably. In order to provide support in disaster recovery or to develop mitigation plans, accurate flood information is necessary. Particularly pluvial urban floods, characterized by high temporal and spatial variations, are not well documented. This study proposes a new, low-cost approach to determining local flood elevation and inundation depth of buildings based on user-generated flood images. It first applies close-range digital photogrammetry to generate a geo-referenced 3-D point cloud. Second, based on estimated camera orientation parameters, the flood level captured in a single flood image is mapped to the previously derived point cloud. The local flood elevation and the building inundation depth can then be derived automatically from the point cloud. The proposed method is carried out once for each of 66 different flood images showing the same building façade. An overall accuracy of 0.05 m with an uncertainty of ±0.13 m for the derived flood elevation within the area of interest as well as an accuracy of 0.13 m ± 0.10 m for the determined building inundation depth is achieved. Our results demonstrate that the proposed method can provide reliable flood information on a local scale using user-generated flood images as input. The approach can thus allow inundation depth maps to be derived even in complex urban environments with relatively high accuracies.

  7. CT Image Sequence Analysis for Object Recognition - A Rule-Based 3-D Computer Vision System

    Treesearch

    Dongping Zhu; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman

    1991-01-01

    Research is now underway to create a vision system for hardwood log inspection using a knowledge-based approach. In this paper, we present a rule-based, 3-D vision system for locating and identifying wood defects using topological, geometric, and statistical attributes. A number of different features can be derived from the 3-D input scenes. These features and evidence...

  8. Evaluation of limited blood sampling population input approaches for kinetic quantification of [18F]fluorothymidine PET data.

    PubMed

    Contractor, Kaiyumars B; Kenny, Laura M; Coombes, Charles R; Turkheimer, Federico E; Aboagye, Eric O; Rosso, Lula

    2012-03-24

    Quantification of kinetic parameters of positron emission tomography (PET) imaging agents normally requires collecting arterial blood samples, which is inconvenient for patients and difficult to implement in routine clinical practice. The aim of this study was to investigate whether a population-based input function (POP-IF) reliant on only a few individual discrete samples allows accurate estimates of tumour proliferation using [18F]fluorothymidine (FLT). Thirty-six historical FLT-PET datasets with concurrent arterial sampling were available for this study. A population average of baseline-scan blood data was constructed using leave-one-out cross-validation for each scan and used in conjunction with individual blood samples. Three limited sampling protocols were investigated, including, respectively, only seven (POP-IF7), five (POP-IF5) and three (POP-IF3) discrete samples of the historical dataset. Additionally, using the three-point protocol, we derived a POP-IF3M, the only input function which was not corrected for the fraction of radiolabelled metabolites present in blood. The kinetic parameter for net FLT retention at steady state, Ki, was derived using the modified Patlak plot and compared with the original full arterial set for validation. Small percentage differences in the area under the curve between all the POP-IFs and the full arterial sampling IF were found over 60 min (4.2%-5.7%), while there were, as expected, larger differences in the peak position and peak height. A high correlation between Ki values calculated using the original arterial input function and all the population-derived IFs was observed (R2 = 0.85-0.98). The population-based input showed good intra-subject reproducibility of Ki values (R2 = 0.81-0.94) and good correlation (R2 = 0.60-0.85) with Ki-67. Input functions generated using these simplified protocols over a scan duration of 60 min estimate net PET-FLT retention with reasonable accuracy.

  9. Programmable remapper for image processing

    NASA Technical Reports Server (NTRS)

    Juday, Richard D. (Inventor); Sampsell, Jeffrey B. (Inventor)

    1991-01-01

    A video-rate coordinate remapper includes a memory for storing a plurality of transformations on look-up tables for remapping input images from one coordinate system to another. Such transformations are operator selectable. The remapper includes a collective processor by which certain input pixels of an input image are transformed to a portion of the output image in a many-to-one relationship. The remapper includes an interpolative processor by which the remaining input pixels of the input image are transformed to another portion of the output image in a one-to-many relationship. The invention includes certain specific transforms for creating output images useful for certain defects of visually impaired people. The invention also includes means for shifting input pixels and means for scrolling the output matrix.
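    The core of such a remapper is a look-up table: each output pixel stores the input coordinates it samples, so any stored transform reduces to a single gather per frame. A minimal sketch with a hypothetical 90-degree rotation as the stored transform (the patent's transforms for low-vision aids are more elaborate):

```python
import numpy as np

# Look-up-table coordinate remapper sketch: output pixel (r, c) samples input
# pixel (rows[r, c], cols[r, c]). The 90-degree CCW rotation LUT below is an
# illustrative transform, not one of the patent's vision-aid mappings.
h, w = 4, 6
image = np.arange(h * w, dtype=float).reshape(h, w)

# LUT for a 90-degree counter-clockwise rotation: output (r, c) <- input (c, w-1-r).
out_h, out_w = w, h
rows = np.fromfunction(lambda r, c: c, (out_h, out_w), dtype=int)
cols = np.fromfunction(lambda r, c: w - 1 - r, (out_h, out_w), dtype=int)
remapped = image[rows, cols]                          # one gather per frame
```

    Because the LUT is data, switching transforms at video rate is just switching which table the gather reads, which is the operator-selectable behavior the abstract describes.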

  10. Roles of Fog and Topography in Redwood Forest Hydrology

    NASA Astrophysics Data System (ADS)

    Francis, E. J.; Asner, G. P.

    2017-12-01

    Spatial variability of water in forests is a function of both climatic gradients that control water inputs and topo-edaphic variation that determines the flows of water belowground, as well as interactions of climate with topography. Coastal redwood forests are hydrologically unique because they are influenced by coastal low clouds, or fog, that is advected onto land by a strong coastal-to-inland temperature difference. Where fog intersects the land surface, annual water inputs from summer fog drip can be greater than that of winter rainfall. In this study, we take advantage of mapped spatial gradients in forest canopy water storage, topography, and fog cover in California to better understand the roles and interactions of fog and topography in the hydrology of redwood forests. We test a conceptual model of redwood forest hydrology with measurements of canopy water content derived from high-resolution airborne imaging spectroscopy, topographic variables derived from high-resolution LiDAR data, and fog cover maps derived from NASA MODIS data. Landscape-level results provide insight into hydrological processes within redwood forests, and cross-site analyses shed light on their generality.

  11. Effective Interpolation of Incomplete Satellite-Derived Leaf-Area Index Time Series for the Continental United States

    NASA Technical Reports Server (NTRS)

    Jasinski, Michael F.; Borak, Jordan S.

    2008-01-01

    Many earth science modeling applications employ continuous input data fields derived from satellite data. Environmental factors, sensor limitations and algorithmic constraints lead to data products of inherently variable quality. This necessitates interpolation of one form or another in order to produce high quality input fields free of missing data. The present research tests several interpolation techniques as applied to satellite-derived leaf area index, an important quantity in many global climate and ecological models. The study evaluates and applies a variety of interpolation techniques for the Moderate Resolution Imaging Spectroradiometer (MODIS) Leaf-Area Index Product over the time period 2001-2006 for a region containing the conterminous United States. Results indicate that the accuracy of an individual interpolation technique depends upon the underlying land cover. Spatial interpolation provides better results in forested areas, while temporal interpolation performs more effectively over non-forest cover types. Combination of spatial and temporal approaches offers superior interpolative capabilities to any single method, and in fact, generation of continuous data fields requires a hybrid approach such as this.

  12. Image compression system and method having optimized quantization tables

    NASA Technical Reports Server (NTRS)

    Ratnakar, Viresh (Inventor); Livny, Miron (Inventor)

    1998-01-01

    A digital image compression preprocessor for use in a discrete cosine transform-based digital image compression device is provided. The preprocessor includes a gathering mechanism for determining discrete cosine transform statistics from input digital image data. A computing mechanism is operatively coupled to the gathering mechanism to calculate an image distortion array and a rate of image compression array based upon the discrete cosine transform statistics for each possible quantization value. A dynamic programming mechanism is operatively coupled to the computing mechanism to optimize the rate of image compression array against the image distortion array such that a rate-distortion-optimal quantization table is derived. In addition, a discrete cosine transform-based digital image compression device and a discrete cosine transform-based digital image compression and decompression system are provided. Also provided are methods for generating a rate-distortion-optimal quantization table, performing discrete cosine transform-based digital image compression, and operating a discrete cosine transform-based digital image compression and decompression system.
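    A toy version of the rate-distortion trade-off behind such quantization table design: for one DCT band's coefficient statistics, each candidate step size q yields a distortion (mean squared error) and a rate proxy (entropy of the quantized symbols), and a Lagrangian cost D + λR selects a step. The Laplacian data model and λ are assumptions for the sketch; the patent uses dynamic programming over all 64 table entries.

```python
import numpy as np

# Rate-distortion scan over quantization step sizes for one DCT coefficient
# band. Laplacian-distributed coefficients and the Lagrangian weight are
# illustrative stand-ins for the patent's measured DCT statistics.
rng = np.random.default_rng(2)
coeffs = rng.laplace(0.0, 8.0, 10000)                 # one band's coefficients

def rate_distortion(q):
    symbols = np.round(coeffs / q)
    distortion = np.mean((coeffs - symbols * q) ** 2) # MSE for this step size
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    rate = -np.sum(p * np.log2(p))                    # entropy, bits/coefficient
    return distortion, rate

lam = 30.0                                            # Lagrangian trade-off weight
steps = np.arange(1, 65)
costs = [d + lam * r for d, r in (rate_distortion(q) for q in steps)]
best_q = steps[int(np.argmin(costs))]
```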

  13. Medical image integrity control and forensics based on watermarking--approximating local modifications and identifying global image alterations.

    PubMed

    Huang, H; Coatrieux, G; Shu, H Z; Luo, L M; Roux, Ch

    2011-01-01

    In this paper we present a medical image integrity verification system that not only allows detecting and approximating malevolent local image alterations (e.g. removal or addition of findings) but is also capable of identifying the nature of global image processing applied to the image (e.g. lossy compression, filtering …). For that purpose, we propose an image signature derived from the geometric moments of pixel blocks. Such a signature is computed over regions of interest of the image and then watermarked into regions of non-interest. Image integrity analysis is conducted by comparing embedded and recomputed signatures. Local modifications, if any, are approximated through the determination of the parameters of the nearest generalized 2D Gaussian. Image moments are taken as image features and serve as inputs to a classifier trained to discriminate the type of global image processing. Experimental results with both local and global modifications illustrate the overall performance of our approach.
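    The block-moment signature idea can be sketched directly: each block is summarized by a few low-order geometric moments m_pq = Σ x^p y^q I(x, y), and comparing stored and recomputed signatures flags a tampered block. Block size and the chosen moment orders below are illustrative, and no watermark embedding is shown.

```python
import numpy as np

# Block geometric-moment signature sketch: tampering with any pixel perturbs
# the moments of its block, so recomputed signatures differ there. The 8x8
# blocks and moment orders (p, q) are illustrative choices.
def block_moments(block):
    y, x = np.mgrid[: block.shape[0], : block.shape[1]]
    return np.array([(x ** p * y ** q * block).sum()
                     for p, q in [(0, 0), (1, 0), (0, 1), (1, 1)]])

rng = np.random.default_rng(3)
image = rng.random((16, 16))
signature = np.array([block_moments(image[r : r + 8, c : c + 8])
                      for r in (0, 8) for c in (0, 8)])

tampered = image.copy()
tampered[2, 3] += 0.5                                  # local malevolent change
recomputed = np.array([block_moments(tampered[r : r + 8, c : c + 8])
                       for r in (0, 8) for c in (0, 8)])
flags = np.any(np.abs(recomputed - signature) > 1e-9, axis=1)
```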

  14. Image scale measurement with correlation filters in a volume holographic optical correlator

    NASA Astrophysics Data System (ADS)

    Zheng, Tianxiang; Cao, Liangcai; He, Qingsheng; Jin, Guofan

    2013-08-01

    A search engine containing various target images or different parts of a large scene area is of great use for many applications, including object detection, biometric recognition, and image registration. The input image captured in real time is compared with all the template images in the search engine. A volume holographic correlator is one type of such search engine. It performs thousands of comparisons among the images at very high speed, with the correlation task accomplished mainly in optics. However, the input target image often exhibits scale variation relative to the template images, in which case the correlation values cannot properly reflect the similarity of the images. It is therefore essential to estimate and eliminate the scale variation of the input target image. Scale measurement can be performed in three domains: spatial, spectral, and time. Most methods dealing with the scale factor are based on the spatial or spectral domains. In this paper, a time-domain method, called the time-sequential scaled method, is proposed to measure the scale factor of the input image. The method utilizes the relationship between the scale variation and the correlation value of two images. It sends a few artificially scaled input images to be compared with the template images. The correlation value increases and decreases with increasing scale factor over the intervals of 0.8~1 and 1~1.2, respectively. The original scale of the input image can be measured by estimating the largest correlation value obtained by correlating the artificially scaled input images with the template images. The measurement range for the scale can be 0.8~4.8. Scale factors beyond 1.2 are measured by scaling the input image by factors of 1/2, 1/3 and 1/4, correlating the artificially scaled input image with the template images, and estimating the new corresponding scale factor inside 0.8~1.2.
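    The time-sequential idea can be sketched digitally: rescale the input by a few trial factors, correlate each rescaled copy against the template, and take the factor with the highest normalized correlation as the scale estimate. The 1-D signals and zoom-by-interpolation below stand in for the optical correlator; all names are illustrative.

```python
import numpy as np

# Time-sequential scale search sketch: the trial factor whose rescaled copy
# best correlates with the template is taken as the scale estimate. 1-D
# signals replace the optical correlator's images for illustration.
def rescale(sig, s):
    n = sig.size
    coords = np.arange(n) / s                          # zoom about the origin
    return np.interp(coords, np.arange(n), sig, left=0.0, right=0.0)

def ncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

n = 256
template = np.exp(-0.5 * ((np.arange(n) - 90) / 12.0) ** 2)  # template "image"
observed = rescale(template, 1.1)                      # input at unknown scale 1.1

trials = np.arange(0.8, 1.25, 0.05)
scores = [ncc(rescale(observed, 1.0 / s), template) for s in trials]
scale_hat = float(trials[int(np.argmax(scores))])
```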

  15. Combining MRI With PET for Partial Volume Correction Improves Image-Derived Input Functions in Mice

    NASA Astrophysics Data System (ADS)

    Evans, Eleanor; Buonincontri, Guido; Izquierdo, David; Methner, Carmen; Hawkes, Rob C.; Ansorge, Richard E.; Krieg, Thomas; Carpenter, T. Adrian; Sawiak, Stephen J.

    2015-06-01

    Accurate kinetic modelling using dynamic PET requires knowledge of the tracer concentration in plasma, known as the arterial input function (AIF). AIFs are usually determined by invasive blood sampling, but this is prohibitive in murine studies due to low total blood volumes. As a result of the low spatial resolution of PET, image-derived input functions (IDIFs) must be extracted from left ventricular blood pool (LVBP) ROIs of the mouse heart. This is challenging because of partial volume and spillover effects between the LVBP and myocardium, contaminating IDIFs with tissue signal. We have applied the geometric transfer matrix (GTM) method of partial volume correction (PVC) to 12 mice with myocardial infarction (MI) injected with 18F-FDG, of which 6 were treated with a drug which reduced infarction size [1]. We utilised high-resolution MRI to assist in segmenting mouse hearts into 5 classes: LVBP, infarcted myocardium, healthy myocardium, lungs/body and background. The signal contribution from these 5 classes was convolved with the point spread function (PSF) of the Cambridge split magnet PET scanner and a non-linear fit was performed on the 5 measured signal components. The corrected IDIF was taken as the fitted LVBP component. It was found that the GTM PVC method could recover an IDIF with less contamination from spillover than an IDIF extracted from PET data alone. More realistic values of Ki were achieved using GTM IDIFs, which were shown to be significantly different (p < 0.05) between the treated and untreated groups.
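    A toy 1-D version of the GTM idea: binary class masks (e.g. blood pool, myocardium, background) are blurred with the scanner PSF to form a mixing matrix, and solving the resulting least-squares problem recovers spillover-free class values. The geometry, PSF width, and activities below are illustrative assumptions.

```python
import numpy as np

# Geometric transfer matrix (GTM) partial volume correction sketch in 1-D:
# measured = PSF * (sum of class activities x masks), so blurred masks form a
# mixing matrix A and least squares recovers the true per-class activities.
def gaussian_blur(v, sigma):
    x = np.arange(-25, 26)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    k /= k.sum()
    return np.convolve(v, k, mode="same")

n = 200
masks = np.zeros((3, n))
masks[0, 80:100] = 1.0                                 # "blood pool" (LVBP)
masks[1, 60:80] = 1.0                                  # "myocardium", both walls
masks[1, 100:120] = 1.0
masks[2] = 1.0 - masks[0] - masks[1]                   # background

true_activity = np.array([5.0, 20.0, 1.0])             # hot myocardium spills in
ideal = true_activity @ masks
measured = gaussian_blur(ideal, 4.0)                   # PSF-degraded "PET image"

A = np.array([gaussian_blur(m, 4.0) for m in masks]).T # mixing matrix (n x 3)
recovered, *_ = np.linalg.lstsq(A, measured, rcond=None)
```

    The recovered blood-pool value is the spillover-corrected IDIF sample for that frame; repeating per time frame yields the corrected input curve.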

  16. Coherent active polarization control without loss

    NASA Astrophysics Data System (ADS)

    Ye, Yuqian; Hay, Darrick; Shi, Zhimin

    2017-11-01

    We propose a lossless active polarization control mechanism utilizing an anisotropic dielectric medium with two coherent inputs. Using scattering matrix analysis, we derive analytically the required optical properties of the anisotropic medium that can behave as a switchable polarizing beam splitter. We also show that such a designed anisotropic medium can produce linearly polarized light at any azimuthal direction through coherent control of two inputs with a specific polarization state. Furthermore, we present a straightforward design-on-demand procedure of a subwavelength-thick metastructure that can possess the desired optical anisotropy at a flexible working wavelength. Our lossless coherent polarization control technique may lead to fast, broadband and integrated polarization control elements for applications in imaging, spectroscopy, and telecommunication.

  17. Positron emission tomography/magnetic resonance hybrid scanner imaging of cerebral blood flow using 15O-water positron emission tomography and arterial spin labeling magnetic resonance imaging in newborn piglets

    PubMed Central

    Andersen, Julie B; Henning, William S; Lindberg, Ulrich; Ladefoged, Claes N; Højgaard, Liselotte; Greisen, Gorm; Law, Ian

    2015-01-01

    Abnormality in cerebral blood flow (CBF) distribution can lead to hypoxic–ischemic cerebral damage in newborn infants. The aim of the study was to investigate minimally invasive approaches to measure CBF by comparing simultaneous 15O-water positron emission tomography (PET) and single TI pulsed arterial spin labeling (ASL) magnetic resonance imaging (MR) on a hybrid PET/MR in seven newborn piglets. Positron emission tomography was performed with IV injections of 20 MBq and 100 MBq 15O-water to confirm CBF reliability at low activity. Cerebral blood flow was quantified using a one-tissue-compartment-model using two input functions: an arterial input function (AIF) or an image-derived input function (IDIF). The mean global CBF (95% CI) PET-AIF, PET-IDIF, and ASL at baseline were 27 (23; 32), 34 (31; 37), and 27 (22; 32) mL/100 g per minute, respectively. At acetazolamide stimulus, PET-AIF, PET-IDIF, and ASL were 64 (55; 74), 76 (70; 83) and 79 (67; 92) mL/100 g per minute, respectively. At baseline, differences between PET-AIF, PET-IDIF, and ASL were 22% (P<0.0001) and −0.7% (P=0.9). At acetazolamide, differences between PET-AIF, PET-IDIF, and ASL were 19% (P=0.001) and 24% (P=0.0003). In conclusion, PET-IDIF overestimated CBF. Injected activity of 20 MBq 15O-water had acceptable concordance with 100 MBq, without compromising image quality. Single TI ASL was questionable for regional CBF measurements. Global ASL CBF and PET CBF were congruent during baseline but not during hyperperfusion. PMID:26058699

  18. Optimal design and uncertainty quantification in blood flow simulations for congenital heart disease

    NASA Astrophysics Data System (ADS)

    Marsden, Alison

    2009-11-01

    Recent work has demonstrated substantial progress in capabilities for patient-specific cardiovascular flow simulations. Recent advances include increasingly complex geometries, physiological flow conditions, and fluid-structure interaction. However, inputs to these simulations, including medical image data, catheter-derived pressures, and material properties, can have significant uncertainties associated with them. For simulations to predict clinically useful and reliable output information, it is necessary to quantify the effects of input uncertainties on outputs of interest. In addition, blood flow simulation tools can now be efficiently coupled to shape optimization algorithms for surgery design applications, and these tools should incorporate uncertainty information. We present a unified framework to systematically and efficiently account for uncertainties in simulations using adaptive stochastic collocation. In addition, we present a framework for derivative-free optimization of cardiovascular geometries, and layer these tools to perform optimization under uncertainty. These methods are demonstrated using simulations and surgery optimization to improve hemodynamics in pediatric cardiology applications.
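
    Stochastic collocation propagates input uncertainty through a simulation by evaluating it at quadrature nodes. As a minimal, hedged illustration of the underlying idea (not the authors' adaptive framework), a 1-D Gauss-Hermite rule can propagate a Gaussian input through a black-box model:

```python
import numpy as np

def collocation_moments(model, mu, sigma, n_pts=5):
    """Estimate mean and variance of model(x) for x ~ N(mu, sigma^2)
    using Gauss-Hermite quadrature -- a 1-D stochastic collocation rule."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_pts)  # probabilists' rule
    w = weights / weights.sum()                                 # normalize weights
    samples = np.array([model(mu + sigma * z) for z in nodes])
    mean = np.dot(w, samples)
    var = np.dot(w, (samples - mean) ** 2)
    return mean, var

# Toy "simulation": pressure drop proportional to flow squared,
# with a hypothetical uncertain inflow Q ~ N(10, 1)
mean, var = collocation_moments(lambda q: 0.5 * q**2, mu=10.0, sigma=1.0)
```

    The toy model and the N(10, 1) inflow are hypothetical; with a 5-point rule the moments of this quadratic model are exact (mean 50.5, variance 100.5). Adaptive schemes refine the node set only where the output is sensitive.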

  19. Fusion of Imaging and Inertial Sensors for Navigation

    DTIC Science & Technology

    2006-09-01

    combat operations. The Global Positioning System (GPS) was fielded in the 1980s and first used for precision navigation and targeting in combat...equations [37]. Consider the homogeneous nonlinear differential equation ẋ(t) = f[x(t), u(t), t]; x(t0) = x0 (2.4). For a given input function, u0(t...differential equation is a time-varying probability density function. The Kalman filter derivation assumes Gaussian distributions for all random

  20. Two-dimensional phase unwrapping using robust derivative estimation and adaptive integration.

    PubMed

    Strand, Jarle; Taxt, Torfinn

    2002-01-01

    The adaptive integration (ADI) method for two-dimensional (2-D) phase unwrapping is presented. The method uses an algorithm for noise robust estimation of partial derivatives, followed by a noise robust adaptive integration process. The ADI method can easily unwrap phase images with moderate noise levels, and the resulting images are congruent modulo 2pi with the observed, wrapped, input images. In a quantitative evaluation, both the ADI and the BLS methods (Strand et al.) were better than the least-squares methods of Ghiglia and Romero (GR), and of Marroquin and Rivera (MRM). In a qualitative evaluation, the ADI, the BLS, and a conjugate gradient version of the MRM method (MRMCG), were all compared using a synthetic image with shear, using 115 magnetic resonance images, and using 22 fiber-optic interferometry images. For the synthetic image and the interferometry images, the ADI method gave consistently visually better results than the other methods. For the MR images, the MRMCG method was best, and the ADI method second best. The ADI method was less sensitive to the mask definition and the block size than the BLS method, and successfully unwrapped images with shears that were not marked in the masks. The computational requirements of the ADI method for images of nonrectangular objects were comparable to only two iterations of many least-squares-based methods (e.g., GR). We believe the ADI method provides a powerful addition to the ensemble of tools available for 2-D phase unwrapping.
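
    The estimate-derivatives-then-integrate idea behind ADI can be illustrated in one dimension (the actual method is two-dimensional, noise-robust, and adaptive; this minimal sketch is not the authors' algorithm):

```python
import numpy as np

def wrapped_gradient(phase):
    """Estimate the true phase derivative from wrapped data: take the
    difference of neighbouring samples and map it back into (-pi, pi]."""
    d = np.diff(phase)
    return np.angle(np.exp(1j * d))

def unwrap_1d(phase):
    """Integrate the wrapped derivative to recover a continuous phase
    (the 1-D analogue of derivative estimation + integration unwrapping)."""
    return phase[0] + np.concatenate(([0.0], np.cumsum(wrapped_gradient(phase))))

true = np.linspace(0.0, 6 * np.pi, 200)    # smooth ramp spanning 3 cycles
wrapped = np.angle(np.exp(1j * true))      # wrap into (-pi, pi]
recovered = unwrap_1d(wrapped)             # matches `true` exactly here
```

    Recovery is exact as long as the true phase changes by less than pi between samples; the 2-D problem is harder because noisy derivative estimates make the field non-integrable, which is what the robust estimation and adaptive integration address.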

  1. Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms

    NASA Technical Reports Server (NTRS)

    Kurdila, Andrew J.; Sharpley, Robert C.

    1999-01-01

    This paper presents a final report on Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms. The focus of this research is to derive and implement: 1) wavelet-based methodologies for the compression, transmission, decoding, and visualization of three-dimensional finite element geometry and simulation data in a network environment; 2) methodologies for interactive algorithm monitoring and tracking in computational mechanics; and 3) methodologies for interactive algorithm steering for the acceleration of large-scale finite element simulations. Also included in this report are appendices describing the derivation of wavelet-based Particle Image Velocimetry algorithms and reduced-order input-output models for nonlinear systems obtained by utilizing wavelet approximations.

  2. Validation of the Five-Phase Method for Simulating Complex Fenestration Systems with Radiance against Field Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geisler-Moroder, David; Lee, Eleanor S.; Ward, Gregory J.

    2016-08-29

    The Five-Phase Method (5-pm) for simulating complex fenestration systems with Radiance is validated against field measurements. The capability of the method to predict workplane illuminances, vertical sensor illuminances, and glare indices derived from captured and rendered high dynamic range (HDR) images is investigated. To be able to accurately represent the direct sun part of the daylight not only in sensor point simulations, but also in renderings of interior scenes, the 5-pm calculation procedure was extended. The validation shows that the 5-pm is superior to the Three-Phase Method for predicting horizontal and vertical illuminance sensor values as well as glare indices derived from rendered images. Even with input data from global and diffuse horizontal irradiance measurements only, daylight glare probability (DGP) values can be predicted within 10% error of measured values for most situations.

  3. BOREAS RSS-8 BIOME-BGC SSA Simulation of Annual Water and Carbon Fluxes

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Nickeson, Jaime (Editor); Kimball, John

    2000-01-01

    The BOREAS RSS-8 team performed research to evaluate the effect of seasonal weather and landcover heterogeneity on boreal forest regional water and carbon fluxes using a process-level ecosystem model, BIOME-BGC, coupled with remote sensing-derived parameter maps of key state variables. This data set contains derived maps of landcover type and crown and stem biomass as model inputs to determine annual evapotranspiration, gross primary production, autotrophic respiration, and net primary productivity within the BOREAS SSA-MSA, at a 30-m spatial resolution. Model runs were conducted over a 3-year period from 1994-1996; images are provided for each of those years. The data are stored in binary image format. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).

  4. SU-G-IeP3-11: On the Utility of Pixel Variance to Characterize Noise for Image Receptors of Digital Radiography Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finley, C; Dave, J

    Purpose: To characterize noise for image receptors of digital radiography systems based on pixel variance. Methods: Nine calibrated digital image receptors associated with nine new portable digital radiography systems (Carestream Health, Inc., Rochester, NY) were used in this study. For each image receptor, thirteen images were acquired with RQA5 beam conditions for input detector air kerma ranging from 0 to 110 µGy, and linearized ‘For Processing’ images were extracted. Mean pixel value (MPV), standard deviation (SD), and relative noise (SD/MPV) were obtained from each image using ROI sizes varying from 2.5×2.5 to 20×20 mm². Variance (SD²) was plotted as a function of input detector air kerma, and the coefficients of the quadratic fit were used to derive structured, quantum, and electronic noise coefficients. Relative noise was also fitted as a function of input detector air kerma to identify noise sources. The fitting functions used a least-squares approach. Results: The coefficient of variation values obtained using different ROI sizes were less than 1% for all the images. The structured, quantum, and electronic coefficients obtained from the quadratic fit of variance (r>0.97) were 0.43±0.10, 3.95±0.27, and 2.89±0.74 (mean ± standard deviation), respectively, indicating that overall the quantum noise was the dominant noise source. However, for one system the electronic noise coefficient (3.91) was greater than the quantum noise coefficient (3.56), indicating electronic noise to be dominant. Using relative noise values, the power parameter of the fitting equation (|r|>0.93) had a mean and standard deviation of 0.46±0.02. A value of 0.50 for this power parameter indicates quantum noise to be the dominant noise source, whereas deviations from 0.50 indicate the presence of other noise sources. Conclusion: Characterizing noise from pixel variance assists in identifying contributions from various noise sources that, eventually, may affect image quality. This approach may be integrated during periodic quality assessments of digital image receptors.
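
    The quadratic variance decomposition described above can be sketched with a least-squares fit; the detector data below are synthetic, hypothetical values, not the study's measurements:

```python
import numpy as np

# Hypothetical linearized detector data: pixel variance as a function of
# input detector air kerma K (uGy), following var = s*K^2 + q*K + e
kerma = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0, 110.0])
variance = 0.4 * kerma**2 + 4.0 * kerma + 3.0   # synthetic, noise-free

# Quadratic fit: the K^2, K, and constant coefficients separate the
# structured, quantum, and electronic noise contributions, respectively
s, q, e = np.polyfit(kerma, variance, deg=2)
```

    Comparing the fitted coefficients (here s = 0.4, q = 4.0, e = 3.0 by construction) is how the dominant noise source is judged: a quantum coefficient exceeding the others indicates quantum-limited operation.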

  5. apART: system for the acquisition, processing, archiving, and retrieval of digital images in an open, distributed imaging environment

    NASA Astrophysics Data System (ADS)

    Schneider, Uwe; Strack, Ruediger

    1992-04-01

    apART reflects the structure of an open, distributed environment. According to the general trend in the area of imaging, network-capable, general-purpose workstations with capabilities of open system image communication and image input are used. Several heterogeneous components like CCD cameras, slide scanners, and image archives can be accessed. The system is driven by an object-oriented user interface where devices (image sources and destinations), operators (derived from a commercial image processing library), and images (of different data types) are managed and presented uniformly to the user. Browsing mechanisms are used to traverse devices, operators, and images. An audit trail mechanism is offered to record interactive operations on low-resolution image derivatives. These operations are processed off-line on the original image. Thus, the processing of extremely high-resolution raster images is possible, and the performance of resolution-dependent operations is enhanced significantly during interaction. An object-oriented database system (APRIL), which can be browsed, is integrated into the system. Attribute retrieval is supported by the user interface. Other essential features of the system include: implementation on top of the X Window System (X11R4) and the OSF/Motif widget set; a SUN4 general-purpose workstation, including Ethernet, magneto-optical disc, etc., as the hardware platform for the user interface; complete graphical-interactive parametrization of all operators; support of different image interchange formats (GIF, TIFF, IIF, etc.); consideration of current IPI standard activities within ISO/IEC for further refinement and extensions.

  6. Forecasting tidal marsh elevation and habitat change through fusion of Earth observations and a process model

    USGS Publications Warehouse

    Byrd, Kristin B.; Windham-Myers, Lisamarie; Leeuw, Thomas; Downing, Bryan D.; Morris, James T.; Ferner, Matthew C.

    2016-01-01

    Reducing uncertainty in data inputs at relevant spatial scales can improve tidal marsh forecasting models, and their usefulness in coastal climate change adaptation decisions. The Marsh Equilibrium Model (MEM), a one-dimensional mechanistic elevation model, incorporates feedbacks of organic and inorganic inputs to project elevations under sea-level rise scenarios. We tested the feasibility of deriving two key MEM inputs—average annual suspended sediment concentration (SSC) and aboveground peak biomass—from remote sensing data in order to apply MEM across a broader geographic region. We analyzed the precision and representativeness (spatial distribution) of these remote sensing inputs to improve understanding of our study region, a brackish tidal marsh in San Francisco Bay, and to test the applicable spatial extent for coastal modeling. We compared biomass and SSC models derived from Landsat 8, DigitalGlobe WorldView-2, and hyperspectral airborne imagery. Landsat 8-derived inputs were evaluated in a MEM sensitivity analysis. Biomass models were comparable, although peak biomass from Landsat 8 best matched field-measured values. The Portable Remote Imaging Spectrometer SSC model was most accurate, although a Landsat 8 time series provided annual average SSC estimates. Landsat 8-measured peak biomass values were randomly distributed, and annual average SSC (30 mg/L) was well represented in the main channels (IQR: 29–32 mg/L), illustrating the suitability of these inputs across the model domain. Trend response surface analysis identified significant divergence between field- and remote sensing-based model runs at 60 yr due to model sensitivity at the marsh edge (80–140 cm NAVD88), although at 100 yr, elevation forecasts differed by less than 10 cm across 97% of the marsh surface (150–200 cm NAVD88).
Results demonstrate the utility of Landsat 8 for landscape-scale tidal marsh elevation projections due to its comparable performance with the other sensors, temporal frequency, and cost. Integration of remote sensing data with MEM should advance regional projections of marsh vegetation change by better parameterizing MEM inputs spatially. Improving information for coastal modeling will support planning for ecosystem services, including habitat, carbon storage, and flood protection.

  7. Imaging Faults in Carbonate Reservoir using Full Waveform Inversion and Reverse Time Migration of Walkaway VSP Data

    NASA Astrophysics Data System (ADS)

    Takam Takougang, E. M.; Bouzidi, Y.

    2016-12-01

    Multi-offset Vertical Seismic Profile (walkaway VSP) data were collected in an oil field located in a shallow-water environment dominated by carbonate rocks, offshore the United Arab Emirates. The purpose of the survey was to provide structural information on the reservoir, around and away from the borehole. Five parallel lines were collected using an air gun at 25 m shot interval and 4 m source depth. A typical recording tool with 20 receivers spaced every 15.1 m, located in a deviated borehole with an angle varying between 0 and 24 degrees from the vertical direction, was used to record the data. The recording tool was deployed at different depths for each line, from 521 m to 2742 m. Smaller offsets were used for shallow receivers and larger offsets for deeper receivers. The lines were merged to form the input dataset for waveform tomography. The total length of the combined lines was 9 km, containing 1344 shots and 100 receivers in the borehole located half-way down. Acoustic full waveform inversion was applied in the frequency domain to derive a high-resolution velocity model. The final velocity model derived after inversion using the frequencies 5-40 Hz showed good correlation with velocities estimated from vertical-incidence VSP and the sonic log, confirming the success of the inversion. The velocity model showed anomalously low values in areas that correlate with the known location of the hydrocarbon reservoir. Pre-stack depth reverse time migration was then applied using the final velocity model from waveform inversion and the up-going wavefield from the input data. The final estimated source signature from waveform inversion was used as the input source for reverse time migration. To save computational memory and time, every third shot was used during reverse time migration and the data were low-pass filtered to 30 Hz. Migration artifacts were attenuated using a second-order derivative filter. 
The final migration image shows good correlation with the waveform tomography velocity model and highlights a complex network of faults in the reservoir that could be useful in understanding fluid and hydrocarbon movements. This study shows that the combination of full waveform tomography and reverse time migration can provide high-resolution images that can enhance interpretation and characterization of oil reservoirs.

  8. Grafting polyethylenimine with quinoline derivatives for targeted imaging of intracellular Zn(2+) and logic gate operations.

    PubMed

    Pan, Yi; Shi, Yupeng; Chen, Junying; Wong, Chap-Mo; Zhang, Heng; Li, Mei-Jin; Li, Cheuk-Wing; Yi, Changqing

    2016-12-01

    In this study, a highly sensitive and selective fluorescent Zn(2+) probe which exhibited excellent biocompatibility, water solubility, and cell-membrane permeability was facilely synthesized in a single step by grafting polyethyleneimine (PEI) with quinoline derivatives. The primary amino groups in the branched PEI increase the water solubility and cell permeability of the probe PEIQ, while the quinoline derivatives specifically recognize Zn(2+) and reduce the potential cytotoxicity of PEI. Based on a fluorescence off-on mechanism, PEIQ demonstrated excellent sensing capability towards Zn(2+) in absolute aqueous solution, where a high sensitivity with a detection limit as low as 38.1 nM, and a high selectivity over competing metal ions and potentially interfering amino acids, were achieved. Inspired by these results, elementary logic operations (YES, NOT, and INHIBIT) have been constructed by employing PEIQ as the gate with Zn(2+) and EDTA as chemical inputs. Together with the low cytotoxicity and good cell permeability, the practical application of PEIQ in living cell imaging was satisfactorily demonstrated, emphasizing its wide applicability in fundamental biology research. Copyright © 2016. Published by Elsevier B.V.

  9. Automatic SAR/optical cross-matching for GCP monograph generation

    NASA Astrophysics Data System (ADS)

    Nutricato, Raffaele; Morea, Alberto; Nitti, Davide Oscar; La Mantia, Claudio; Agrimano, Luigi; Samarelli, Sergio; Chiaradia, Maria Teresa

    2016-10-01

    Ground Control Points (GCP), automatically extracted from Synthetic Aperture Radar (SAR) images through 3D stereo analysis, can be effectively exploited for automatic orthorectification of optical imagery if they can be robustly located in the basic optical images. The present study outlines a SAR/optical cross-matching procedure that allows a robust alignment of radar and optical images, and consequently derives automatically the corresponding sub-pixel position of the GCPs in the input optical image, expressed as fractional pixel/line image coordinates. The cross-matching is performed in two subsequent steps, in order to progressively improve precision. The first step is based on Mutual Information (MI) maximization between optical and SAR chips, while the second uses the Normalized Cross-Correlation as similarity metric. This work outlines the designed algorithmic solution and discusses the results derived over the urban area of Pisa (Italy), where more than ten COSMO-SkyMed Enhanced Spotlight stereo images with different beams and passes are available. The experimental analysis involves different satellite images, in order to evaluate the performance of the algorithm w.r.t. the optical spatial resolution. An assessment of the performance of the algorithm has been carried out, with errors computed by measuring the distance between the GCP pixel/line position in the optical image, automatically estimated by the tool, and the "true" position of the GCP, visually identified by an expert user in the optical images.
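
    The second (NCC) step of the cross-matching can be sketched as a brute-force integer-pixel search; this minimal example with synthetic chips is an illustration, not the authors' sub-pixel implementation:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def best_match(search, template):
    """Slide the template over the search chip and return the integer
    (row, col) offset with the highest NCC score."""
    th, tw = template.shape
    sh, sw = search.shape
    scores = np.full((sh - th + 1, sw - tw + 1), -np.inf)
    for r in range(scores.shape[0]):
        for c in range(scores.shape[1]):
            scores[r, c] = ncc(search[r:r + th, c:c + tw], template)
    return np.unravel_index(np.argmax(scores), scores.shape)

rng = np.random.default_rng(0)
search = rng.random((40, 40))            # synthetic optical chip
template = search[12:20, 25:33].copy()   # SAR-derived chip; true offset (12, 25)
offset = best_match(search, template)
```

    A sub-pixel refinement (e.g., fitting a paraboloid to the NCC surface around the peak) would follow the integer search in a full pipeline.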

  10. Fast template matching with polynomials.

    PubMed

    Omachi, Shinichiro; Omachi, Masako

    2007-08-01

    Template matching is widely used for many applications in image and signal processing. This paper proposes a novel template matching algorithm, called algebraic template matching. Given a template and an input image, algebraic template matching efficiently calculates similarities between the template and the partial images of the input image, for various widths and heights. The partial image most similar to the template image is detected from the input image for any location, width, and height. In the proposed algorithm, a polynomial that approximates the template image is used to match the input image instead of the template image itself. The proposed algorithm is especially effective when the width and height of the template image differ from those of the partial image to be matched. An algorithm using the Legendre polynomial is proposed for efficient approximation of the template image. This algorithm not only reduces computational costs, but also improves the quality of the approximated image. It is shown theoretically and experimentally that the computational cost of the proposed algorithm is much smaller than that of existing methods.
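
    The core idea, replacing the template by a low-order polynomial approximation, can be sketched in one dimension with a Legendre least-squares fit (the paper's algorithm is 2-D and handles varying template sizes; the profile and degree here are illustrative):

```python
import numpy as np

# Approximate a 1-D "template" profile by a low-order Legendre expansion;
# matching can then be carried out against the compact coefficient vector
# instead of the raw template samples.
x = np.linspace(-1.0, 1.0, 64)
template = np.exp(-4 * x**2)          # synthetic intensity profile

coeffs = np.polynomial.legendre.legfit(x, template, deg=10)
approx = np.polynomial.legendre.legval(x, coeffs)

rmse = np.sqrt(np.mean((approx - template) ** 2))
```

    The 64-sample template is compressed to 11 coefficients with small residual error, illustrating why a polynomial surrogate can cut the cost of repeated similarity evaluations.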

  11. Marine Mammal Habitat in Ecuador: Seasonal Abundance and Environmental Distribution

    DTIC Science & Technology

    2010-06-01

    derived macronutrients) is enhanced by iron inputs derived from the island platform. The confluence of the Equatorial Undercurrent and Peru Current...is initiated by the subsurface

  12. A SAR Observation and Numerical Study on Ocean Surface Imprints of Atmospheric Vortex Streets.

    PubMed

    Li, Xiaofeng; Zheng, Weizhong; Zou, Cheng-Zhi; Pichel, William G

    2008-05-21

    The sea surface imprints of an Atmospheric Vortex Street (AVS) off the Aleutian volcanic islands, Alaska, were observed in two RADARSAT-1 Synthetic Aperture Radar (SAR) images separated by about 11 hours. In both images, three pairs of distinctive vortices shedding in the lee of two volcanic mountains can be clearly seen. The length and width of the vortex street are about 60-70 km and 20 km, respectively. Although the AVSs in the two SAR images have similar shapes, the structure of vortices within the AVS is highly asymmetrical. The sea surface wind speed was estimated from the SAR images with wind direction input from the Navy NOGAPS model. In this paper we present a complete MM5 model simulation of the observed AVS. The surface wind simulated by the MM5 model is in good agreement with the SAR-derived wind. The vortex shedding period calculated from the model run is about 1 hour and 50 minutes. Other basic characteristics of the AVS, including the propagation speed of the vortices and the Strouhal and Reynolds numbers favorable for AVS generation, are also derived. The wind associated with the AVS modifies the cloud structure in the marine atmospheric boundary layer. The AVS cloud pattern is also observed in a MODIS visible band image taken between the two RADARSAT SAR images. An ENVISAT Advanced SAR image taken 4 hours after the second RADARSAT SAR image shows that the AVS had almost vanished.
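
    The Strouhal-number relation used to characterize shedding can be sketched directly; the island width and wind speed below are hypothetical placeholders, not values from the study:

```python
# Vortex shedding period from the Strouhal relation St = f * d / U,
# so the period T = 1 / f = d / (St * U). All values are illustrative.
St = 0.2          # typical Strouhal number for bluff-body shedding
d = 11_000.0      # obstacle (island) width in metres -- hypothetical
U = 10.0          # boundary-layer wind speed in m/s -- hypothetical

T_seconds = d / (St * U)
T_hours = T_seconds / 3600.0
```

    With these placeholder values the period comes out near 1.5 hours, the same order as the roughly 1 hour 50 minute shedding period reported from the MM5 run.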

  13. Novel Descattering Approach for Stereo Vision in Dense Suspended Scatterer Environments

    PubMed Central

    Nguyen, Chanh D. Tr.; Park, Jihyuk; Cho, Kyeong-Yong; Kim, Kyung-Soo; Kim, Soohyun

    2017-01-01

    In this paper, we propose a model-based scattering removal method for stereo vision for robot manipulation in indoor scattering media where the commonly used ranging sensors are unable to work. Stereo vision is an inherently ill-posed and challenging problem. It is even more difficult in the case of images of dense fog or dense steam scenes illuminated by active light sources. Images taken in such environments suffer attenuation of object radiance and scattering of the active light sources. To solve this problem, we first derive the imaging model for images taken in a dense scattering medium with a single active illumination close to the cameras. Based on this physical model, the non-uniform backscattering signal is efficiently removed. The descattered images are then utilized as the input images of stereo vision. The performance of the method is evaluated based on the quality of the depth map from stereo vision. We also demonstrate the effectiveness of the proposed method by carrying out the real robot manipulation task. PMID:28629139

  14. Integration of prior knowledge into dense image matching for video surveillance

    NASA Astrophysics Data System (ADS)

    Menze, M.; Heipke, C.

    2014-08-01

    Three-dimensional information from dense image matching is a valuable input for a broad range of vision applications. While reliable approaches exist for dedicated stereo setups they do not easily generalize to more challenging camera configurations. In the context of video surveillance the typically large spatial extent of the region of interest and repetitive structures in the scene render the application of dense image matching a challenging task. In this paper we present an approach that derives strong prior knowledge from a planar approximation of the scene. This information is integrated into a graph-cut based image matching framework that treats the assignment of optimal disparity values as a labelling task. Introducing the planar prior heavily reduces ambiguities together with the search space and increases computational efficiency. The results provide a proof of concept of the proposed approach. It allows the reconstruction of dense point clouds in more general surveillance camera setups with wider stereo baselines.

  15. An ice-motion tracking system at the Alaska SAR facility

    NASA Technical Reports Server (NTRS)

    Kwok, Ronald; Curlander, John C.; Pang, Shirley S.; Mcconnell, Ross

    1990-01-01

    An operational system for extracting ice-motion information from synthetic aperture radar (SAR) imagery is being developed as part of the Alaska SAR Facility. This geophysical processing system (GPS) will derive ice-motion information by automated analysis of image sequences acquired by radars on the European ERS-1, Japanese ERS-1, and Canadian RADARSAT remote sensing satellites. The algorithm consists of a novel combination of feature-based and area-based techniques for the tracking of ice floes that undergo translation and rotation between imaging passes. The system performs automatic selection of the image pairs for input to the matching routines using an ice-motion estimator. It is designed to have a daily throughput of ten image pairs. A description is given of the GPS system, including an overview of the ice-motion-tracking algorithm, the system architecture, and the ice-motion products that will be available for distribution to geophysical data users.

  16. Image-Based Reverse Engineering and Visual Prototyping of Woven Cloth.

    PubMed

    Schroder, Kai; Zinke, Arno; Klein, Reinhard

    2015-02-01

    Realistic visualization of cloth has many applications in computer graphics. An ongoing research problem is how to best represent and capture cloth models, specifically when considering computer aided design of cloth. Previous methods produce highly realistic images, however, they are either difficult to edit or require the measurement of large databases to capture all variations of a cloth sample. We propose a pipeline to reverse engineer cloth and estimate a parametrized cloth model from a single image. We introduce a geometric yarn model, integrating state-of-the-art textile research. We present an automatic analysis approach to estimate yarn paths, yarn widths, their variation and a weave pattern. Several examples demonstrate that we are able to model the appearance of the original cloth sample. Properties derived from the input image give a physically plausible basis that is fully editable using a few intuitive parameters.

  17. Virtual three-dimensional blackboard: three-dimensional finger tracking with a single camera

    NASA Astrophysics Data System (ADS)

    Wu, Andrew; Hassan-Shafique, Khurram; Shah, Mubarak; da Vitoria Lobo, N.

    2004-01-01

    We present a method for three-dimensional (3D) tracking of a human finger from a monocular sequence of images. To recover the third dimension from the two-dimensional images, we use the fact that the motion of the human arm is highly constrained owing to the dependencies between elbow and forearm and the physical constraints on joint angles. We use these anthropometric constraints to derive a 3D trajectory of a gesticulating arm. The system is fully automated and does not require human intervention. The system presented can be used as a visualization tool, as a user-input interface, or as part of some gesture-analysis system in which 3D information is important.

  18. Integrative image segmentation optimization and machine learning approach for high quality land-use and land-cover mapping using multisource remote sensing data

    NASA Astrophysics Data System (ADS)

    Gibril, Mohamed Barakat A.; Idrees, Mohammed Oludare; Yao, Kouame; Shafri, Helmi Zulhaidi Mohd

    2018-01-01

    The growing use of optimization for geographic object-based image analysis and the possibility to derive a wide range of information about the image in textual form make machine learning (data mining) a versatile tool for information extraction from multiple data sources. This paper presents an application of data mining for land-cover classification by fusing SPOT-6, RADARSAT-2, and derived datasets. First, the images and other derived indices (normalized difference vegetation index, normalized difference water index, and soil adjusted vegetation index) were combined and subjected to a segmentation process with optimal segmentation parameters obtained using a combination of spatial and Taguchi statistical optimization. The image objects, which carry all the attributes of the input datasets, were extracted and related to the target land-cover classes through a data mining algorithm (decision tree) for classification. To evaluate the performance, the result was compared with two nonparametric classifiers: support vector machine (SVM) and random forest (RF). Furthermore, the decision tree classification result was evaluated against six unoptimized trials segmented using arbitrary parameter combinations. The result shows that the optimized process produces better land-use/land-cover classification, with overall classification accuracies of 91.79% for the decision tree and 87.25% and 88.69% for SVM and RF, respectively, while the six unoptimized classifications yield overall accuracies between 84.44% and 88.08%. The higher accuracy of the optimized data mining classification approach compared to the unoptimized results indicates that the optimization process has a significant impact on classification quality.
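
    The three derived indices named above have standard band-ratio definitions, sketched here with toy reflectance values (the SAVI soil-brightness factor L = 0.5 is the common default):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index."""
    return (nir - red) / (nir + red + 1e-12)

def ndwi(green, nir):
    """Normalized difference water index (McFeeters formulation)."""
    return (green - nir) / (green + nir + 1e-12)

def savi(nir, red, L=0.5):
    """Soil adjusted vegetation index with soil-brightness factor L."""
    return (1 + L) * (nir - red) / (nir + red + L)

# Toy reflectance bands (values in [0, 1]) for two pixels
nir = np.array([0.5, 0.6])
red = np.array([0.1, 0.2])
green = np.array([0.2, 0.3])
v = ndvi(nir, red)
```

    In the fusion workflow such index layers are stacked with the optical and radar bands before segmentation, so each image object carries them as attributes for the decision tree.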

  19. Beating the odds: The Poisson distribution of all input cells during limiting dilution grossly underestimates whether a cell line is clonally-derived or not.

    PubMed

    Zhou, Yizhou; Shaw, David; Lam, Cynthia; Tsukuda, Joni; Yim, Mandy; Tang, Danming; Louie, Salina; Laird, Michael W; Snedecor, Brad; Misaghi, Shahram

    2017-09-23

    Establishing that a cell line was derived from a single cell progenitor and defined as clonally-derived for the production of clinical and commercial therapeutic protein drugs has been the subject of increased emphasis in cell line development (CLD). Several regulatory agencies have expressed that the prospective probability of clonality for CHO cell lines is assumed to follow the Poisson distribution based on the input cell count. The probability of obtaining monoclonal progenitors based on the Poisson distribution of all cells suggests that one round of limiting dilution may not be sufficient to assure the resulting cell lines are clonally-derived. We experimentally analyzed clonal derivatives originating from single cell cloning (SCC) via one round of limiting dilution, following our standard legacy cell line development practice. Two cell populations with stably integrated DNA spacers were mixed and subjected to SCC via limiting dilution. Cells were cultured in the presence of selection agent, screened, and ranked based on product titer. Post-SCC, the growing cell lines were screened by PCR analysis for the presence of identifying spacers. We observed that the percentage of nonclonal populations was below 9%, which is considerably lower than the determined probability based on the Poisson distribution of all cells. These results were further confirmed using fluorescence imaging of clonal derivatives originating from SCC via limiting dilution of mixed cell populations expressing GFP or RFP. Our results demonstrate that in the presence of selection agent, the Poisson distribution of all cells clearly underestimates the probability of obtaining clonally-derived cell lines. © 2017 American Institute of Chemical Engineers Biotechnol. Prog., 2017. © 2017 American Institute of Chemical Engineers.
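
    The naive Poisson-based clonality estimate that the abstract argues against can be sketched directly: condition on a well receiving at least one cell, assuming every seeded cell would grow out. The seeding density of 0.5 cells/well is an illustrative choice:

```python
import math

def p_monoclonal_given_growth(lam):
    """Probability that a well which yielded a cell line was seeded by
    exactly one cell, assuming the seeded count is Poisson(lam) and all
    seeded cells grow: P(N = 1 | N >= 1) = lam*e^-lam / (1 - e^-lam)."""
    return lam * math.exp(-lam) / (1.0 - math.exp(-lam))

# Limiting dilution at an average of 0.5 cells per well
p = p_monoclonal_given_growth(0.5)   # -> about 0.77
```

    Under this naive model roughly 23% of recovered lines would be nonclonal, whereas the spacer and GFP/RFP mixing experiments reported under 9%, consistent with the paper's point that selection and outgrowth losses make the all-cell Poisson estimate overly pessimistic.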

  20. Soil C dynamics under intensive oil palm plantations in poor tropical soils

    NASA Astrophysics Data System (ADS)

    Guillaume, Thomas; Ruegg, Johanna; Quezada, Juan Carlos; Buttler, Alexandre

    2017-04-01

    Oil palm cultivation mainly takes place on heavily-weathered tropical soils where nutrients are limiting factors for plant growth and microbial activity. Intensive fertilization and changes of C input by oil palms strongly affect soil C and nutrient dynamics, challenging long-term soil fertility. Oil palm plantation management offers unique opportunities to study soil C and nutrient interactions under field conditions because 1) plantations can be considered long-term litter manipulation experiments, since all aboveground C inputs are concentrated in frond pile areas, and 2) mineral fertilizers are only applied in specific areas, i.e. the weeded circle around the tree and the interrows, but not in harvest paths. Here, we determined impacts of mineral fertilizer and organic matter input on soil organic carbon dynamics and microbial activity in a mature oil palm plantation established on savanna grasslands. Rates of savanna-derived soil organic carbon (SOC) decomposition and oil palm-derived SOC net stabilization were determined using changes in the isotopic signature of C input following a shift from C4 (savanna) to C3 (oil palm) vegetation. Application of mineral fertilizer alone did not affect savanna-derived SOC decomposition or oil palm-derived SOC stabilization rates, but fertilization associated with higher C input led to an increase of oil palm-derived SOC stabilization rates, with about 50% of topsoil SOC derived from oil palm after 9 years. High carbon and nutrient inputs did not increase microbial biomass, but microorganisms were more active per unit of biomass and SOC. In conclusion, soil organic matter decomposition was limited by C rather than nutrients in the studied heavily-weathered soils. Fresh C and nutrient inputs did not lead to priming of old savanna-derived SOC but increased turnover and stabilization of new oil palm-derived SOC.
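
    The C4-to-C3 source partitioning described above is conventionally done with a two-endmember delta-13C mixing model. A sketch with illustrative endmember values (roughly -12 permil for C4 savanna and -28 permil for C3 oil palm; the study's measured signatures may differ):

```python
def c3_fraction(delta_sample: float,
                delta_c3: float = -28.0,
                delta_c4: float = -12.0) -> float:
    """Fraction of SOC derived from the C3 (oil palm) input, from a
    two-endmember d13C mixing model. Default endmember values are
    illustrative, not the study's measured signatures."""
    return (delta_sample - delta_c4) / (delta_c3 - delta_c4)

# A topsoil sample at -20 permil sits halfway between the endmembers,
# i.e. about 50% oil palm-derived SOC, as the abstract reports after 9 years.
f_oil_palm = c3_fraction(-20.0)
```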

  1. Fast computation of derivative based sensitivities of PSHA models via algorithmic differentiation

    NASA Astrophysics Data System (ADS)

    Leövey, Hernan; Molkenthin, Christian; Scherbaum, Frank; Griewank, Andreas; Kuehn, Nicolas; Stafford, Peter

    2015-04-01

    Probabilistic seismic hazard analysis (PSHA) is the preferred tool for estimation of potential ground-shaking hazard due to future earthquakes at a site of interest. A modern PSHA represents a complex framework which combines different models with possibly many inputs. Sensitivity analysis is a valuable tool for quantifying changes of a model output as inputs are perturbed, identifying critical input parameters and obtaining insight into the model behavior. Differential sensitivity analysis relies on calculating first-order partial derivatives of the model output with respect to its inputs. Moreover, derivative-based global sensitivity measures (Sobol' & Kucherenko '09) can be used in practice to detect non-essential inputs of the models, thus restricting the focus of attention to a possibly much smaller set of inputs. Nevertheless, obtaining first-order partial derivatives of complex models with traditional approaches can be very challenging, and the computational cost usually increases linearly with the number of inputs appearing in the models. In this study we show how Algorithmic Differentiation (AD) tools can be used in a complex framework such as PSHA to successfully estimate derivative-based sensitivities, as is the case in various other domains such as meteorology or aerodynamics, with no significant increase in the computational complexity required for the original computations. First, we demonstrate the feasibility of the AD methodology by comparing AD-derived sensitivities to analytically derived sensitivities for a basic case of PSHA using a simple ground-motion prediction equation. In a second step, we derive sensitivities via AD for a more complex PSHA study using a ground motion attenuation relation based on a stochastic method to simulate strong motion. The presented approach is general enough to accommodate more advanced PSHA studies of higher complexity.
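
    The core idea of AD can be illustrated with forward-mode dual numbers: each value carries the derivative with respect to one seeded input, so an exact first-order partial derivative falls out of a single evaluation at roughly the cost of the original computation. The attenuation form and coefficients below are invented for illustration, not an actual published ground-motion prediction equation:

```python
import math

class Dual:
    """Forward-mode AD value: val is the function value, dot the
    derivative with respect to the one input seeded with dot=1."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def _lift(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._lift(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __sub__(self, o):
        o = self._lift(o)
        return Dual(self.val - o.val, self.dot - o.dot)
    def __mul__(self, o):
        o = self._lift(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def dlog(x: Dual) -> Dual:
    """Natural log with derivative propagation (chain rule)."""
    return Dual(math.log(x.val), x.dot / x.val)

def ln_pga(m, r):
    """Toy attenuation form ln(PGA) = a + b*M - c*ln(R);
    coefficients are illustrative only."""
    return -1.0 + 0.9 * m - 1.2 * dlog(r)

# Sensitivity to magnitude at (M=6, R=20 km): seed dM = 1.
dln_dm = ln_pga(Dual(6.0, 1.0), Dual(20.0, 0.0)).dot   # 0.9, exact
```

    One forward pass is needed per seeded input; reverse-mode AD (as used by most production AD tools) instead gets all input sensitivities in a single backward sweep, which is what keeps the cost from growing with the number of model inputs.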

  2. Initial Navigation Alignment of Optical Instruments on GOES-R

    NASA Astrophysics Data System (ADS)

    Isaacson, P.; DeLuccia, F.; Reth, A. D.; Igli, D. A.; Carter, D.

    2016-12-01

    The GOES-R satellite is the first in NOAA's next-generation series of geostationary weather satellites. In addition to a number of space weather sensors, it will carry two principal optical earth-observing instruments, the Advanced Baseline Imager (ABI) and the Geostationary Lightning Mapper (GLM). During launch, currently scheduled for November of 2016, the alignment of these optical instruments is anticipated to shift from that measured during pre-launch characterization. While both instruments have image navigation and registration (INR) processing algorithms to enable automated geolocation of the collected data, the launch-derived misalignment may be too large for these approaches to function without an initial adjustment to calibration parameters. The parameters that may require adjustment are for Line of Sight Motion Compensation (LMC), and the adjustments will be estimated on orbit during the post-launch test (PLT) phase. We have developed approaches to estimate the initial alignment errors for both ABI and GLM image products. Our approaches involve comparison of ABI and GLM images collected during PLT to a set of reference ("truth") images using custom image processing tools and other software (the INR Performance Assessment Tool Set, or "IPATS") being developed for other INR assessments of ABI and GLM data. IPATS is based on image correlation approaches to determine offsets between input and reference images, and these offsets are the fundamental input to our estimate of the initial alignment errors. Initial testing of our alignment algorithms on proxy datasets lends high confidence that their application will determine the initial alignment errors to within sufficient accuracy to enable the operational INR processing approaches to proceed in a nominal fashion. We will report on the algorithms, implementation approach, and status of these initial alignment tools being developed for the GOES-R ABI and GLM instruments.
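
    The correlation step attributed to IPATS above can be sketched as a brute-force search for the integer-pixel shift that maximizes the product sum between an input chip and a reference ("truth") chip. The real tool set is certainly more sophisticated (normalization, windowing, sub-pixel interpolation); this shows only the underlying idea:

```python
def best_shift(ref, img, max_shift=2):
    """Return (dy, dx) such that the input chip's content appears
    shifted by (dy, dx) relative to the reference chip, found by
    maximizing the raw product sum over the overlapping region.
    Illustrative stand-in for an image-correlation offset estimator."""
    rows, cols = len(ref), len(ref[0])
    best, best_score = (0, 0), float("-inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = 0.0
            for y in range(rows):
                for x in range(cols):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < rows and 0 <= xx < cols:
                        score += ref[y][x] * img[yy][xx]
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best
```

    Offsets measured this way over many chips are the fundamental input from which an overall instrument alignment error (bias plus higher-order terms) would be estimated.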

  3. Design and Imaging of Ground-Based Multiple-Input Multiple-Output Synthetic Aperture Radar (MIMO SAR) with Non-Collinear Arrays.

    PubMed

    Hu, Cheng; Wang, Jingyang; Tian, Weiming; Zeng, Tao; Wang, Rui

    2017-03-15

    Multiple-Input Multiple-Output (MIMO) radar provides much more flexibility than the traditional radar thanks to its ability to realize far more observation channels than the actual number of transmit and receive (T/R) elements. In designing the MIMO imaging radar arrays, the commonly used virtual array theory generally assumes that all elements are on the same line. However, due to the physical size of the antennas and coupling effect between T/R elements, a certain height difference between T/R arrays is essential, which will result in the defocusing of edge points of the scene. On the other hand, the virtual array theory implies far-field approximation. Therefore, with a MIMO array designed by this theory, there will exist inevitable high grating lobes in the imaging results of near-field edge points of the scene. To tackle these problems, this paper derives the relationship between target's point spread function (PSF) and pattern of T/R arrays, by which the design criterion is presented for near-field imaging MIMO arrays. Firstly, the proper height between T/R arrays is designed to focus the near-field edge points well. Secondly, the far-field array is modified to suppress the grating lobes in the near-field area. Finally, the validity of the proposed methods is verified by two simulations and an experiment.
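
    The virtual array theory referred to above can be sketched in a few lines: each transmit/receive pair contributes one observation channel whose two-way phase center sits at the midpoint of the pair, so N_t transmitters and N_r receivers yield N_t * N_r virtual elements. Positions here are 1-D coordinates along the array line, and the construction assumes collinear elements and the far field, which are exactly the limitations the paper addresses:

```python
def virtual_array(tx_positions, rx_positions):
    """Virtual element phase-center positions of a MIMO array:
    the midpoint (t + r) / 2 of every transmit/receive pair.
    Assumes collinear T/R elements and far-field propagation."""
    return sorted((t + r) / 2.0
                  for t in tx_positions
                  for r in rx_positions)

# 2 transmitters + 3 receivers -> 6 observation channels from 5 elements.
channels = virtual_array([0.0, 6.0], [0.0, 1.0, 2.0])
```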

  4. Design and Imaging of Ground-Based Multiple-Input Multiple-Output Synthetic Aperture Radar (MIMO SAR) with Non-Collinear Arrays

    PubMed Central

    Hu, Cheng; Wang, Jingyang; Tian, Weiming; Zeng, Tao; Wang, Rui

    2017-01-01

    Multiple-Input Multiple-Output (MIMO) radar provides much more flexibility than the traditional radar thanks to its ability to realize far more observation channels than the actual number of transmit and receive (T/R) elements. In designing the MIMO imaging radar arrays, the commonly used virtual array theory generally assumes that all elements are on the same line. However, due to the physical size of the antennas and coupling effect between T/R elements, a certain height difference between T/R arrays is essential, which will result in the defocusing of edge points of the scene. On the other hand, the virtual array theory implies far-field approximation. Therefore, with a MIMO array designed by this theory, there will exist inevitable high grating lobes in the imaging results of near-field edge points of the scene. To tackle these problems, this paper derives the relationship between target’s point spread function (PSF) and pattern of T/R arrays, by which the design criterion is presented for near-field imaging MIMO arrays. Firstly, the proper height between T/R arrays is designed to focus the near-field edge points well. Secondly, the far-field array is modified to suppress the grating lobes in the near-field area. Finally, the validity of the proposed methods is verified by two simulations and an experiment. PMID:28294996

  5. Experimental Optoelectronic Associative Memory

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin

    1992-01-01

    Optoelectronic associative memory responds to an input image by displaying one of M remembered images. Which image to display is determined by optoelectronic analog computation of the resemblance between the input image and each remembered image. The approach does not rely on precomputation and storage of an outer-product synapse matrix, reducing the size of the memory needed to store and process images.
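
    Stripped of its optics, the resemblance computation described above reduces to scoring the input against each stored image and displaying the best match. A hypothetical electronic sketch (images flattened to vectors; the actual device performs this computation in analog optoelectronics, not code):

```python
def recall(input_image, memories):
    """Resemblance-based recall: score each remembered image by its
    inner product with the input and return the index of the winner.
    Digital stand-in for the device's analog computation."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    scores = [dot(input_image, m) for m in memories]
    return scores.index(max(scores))
```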

  6. Automatic image equalization and contrast enhancement using Gaussian mixture modeling.

    PubMed

    Celik, Turgay; Tjahjadi, Tardi

    2012-01-01

    In this paper, we propose an adaptive image equalization algorithm that automatically enhances the contrast in an input image. The algorithm uses the Gaussian mixture model to model the image gray-level distribution, and the intersection points of the Gaussian components in the model are used to partition the dynamic range of the image into input gray-level intervals. The contrast equalized image is generated by transforming the pixels' gray levels in each input interval to the appropriate output gray-level interval according to the dominant Gaussian component and the cumulative distribution function of the input interval. To take account of the hypothesis that homogeneous regions in the image represent homogeneous silences (or set of Gaussian components) in the image histogram, the Gaussian components with small variances are weighted with smaller values than the Gaussian components with larger variances, and the gray-level distribution is also used to weight the components in the mapping of the input interval to the output interval. Experimental results show that the proposed algorithm produces enhanced images that are better than or comparable to those of several state-of-the-art algorithms. Unlike the other algorithms, the proposed algorithm is free of parameter setting for a given dynamic range of the enhanced image and can be applied to a wide range of image types.
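
    Partitioning the dynamic range at the intersection points of neighboring Gaussian components reduces to solving the quadratic obtained by equating two weighted component densities. A sketch (the equal-variance case degenerates to a single linear crossing):

```python
import math

def gaussian_intersections(m1, s1, w1, m2, s2, w2):
    """Gray levels where two weighted Gaussian components w*N(x; m, s)
    are equal, from equating their log-densities. Returns 0, 1, or 2
    crossings; these cut the dynamic range into intervals."""
    a = 1.0 / (2 * s2 ** 2) - 1.0 / (2 * s1 ** 2)
    b = m1 / s1 ** 2 - m2 / s2 ** 2
    c = (m2 ** 2 / (2 * s2 ** 2) - m1 ** 2 / (2 * s1 ** 2)
         + math.log(w1 * s2 / (w2 * s1)))
    if abs(a) < 1e-12:            # equal variances: one linear crossing
        return [-c / b]
    disc = b * b - 4 * a * c
    if disc < 0:
        return []
    r = math.sqrt(disc)
    return sorted([(-b - r) / (2 * a), (-b + r) / (2 * a)])
```

    For two equally weighted, equal-variance components the crossing falls at the midpoint of the means, as expected by symmetry.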

  7. Characterizing the Siple Coast Ice Stream System using Satellite Images, Improved Topography, and Integrated Aerogeophysical Measurements

    NASA Technical Reports Server (NTRS)

    Scambos, Ted

    2003-01-01

    A technique for improving elevation maps of the polar ice sheets has been developed using AVHRR images. The technique is based on 'photoclinometry' or 'shape from shading', a technique used in the past for mapping planetary surfaces where little elevation information was available. The fundamental idea behind photoclinometry is using the brightness of imaged areas to infer their surface slope in the sun-illuminated direction. Our version of the method relies on a calibration of the images based on an existing lower-resolution digital elevation model (DEM), and then using the images to improve the input DEM resolution to the scale of the image data. Most current DEMs covering the ice sheets are based on radar altimetry data, and have an inherent resolution of 10 to 25 km at best - although the grid scale of the DEM is often finer. These DEMs are highly accurate (to less than 1 meter); but they report the mean elevation of a broad area, thus erasing smaller features of glaciological interest. AVHRR image data, when accurately geolocated and calibrated, provides surface slope measurements (based on the pixel brightness under known lighting conditions) every approximately 1.1 km. The limitations of the technique are noisiness in the image data, small variations in the albedo of the snow surface, and the integration technique used to create an elevation field from the image-derived slopes. Our study applied the technique to several ice sheet areas having some elevation data: Greenland, the Amery Ice Shelf, the Institute Ice Stream, and the Siple Coast. For the latter, the input data set was laser-altimetry data collected under NSF's SOAR Facility (Support Office for Aerogeophysical Research) over the onset area of the Siple Coast. Over the course of the grant, the technique was greatly improved and modified, significantly improving accuracy and reducing noise from the images.
Several publications resulted from the work, and a follow-on proposal to NASA has been submitted to apply the same method to MODIS data using ICESat and other elevation input information. This follow-on grant will explore two applications that are facilitated by the improved surface morphology characterizations of the ice sheets: accumulation and temperature variations near small undulations in the ice.
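
    Reduced to one dimension, the photoclinometry idea can be sketched as follows: assume Lambertian scattering, so pixel brightness b = A cos(i) with i the solar incidence angle on the tilted facet; recover each facet's tilt toward the sun relative to level ground; then integrate the slopes into a height profile. The Lambertian assumption and the normalization constant A are simplifications; the actual method calibrates the images against an input DEM:

```python
import math

def profile_heights(brightness, sun_elev_deg, albedo, pixel_size_m):
    """1-D photoclinometry sketch. Lambertian model: b = albedo*cos(i),
    with i the solar incidence angle on the facet. The facet tilt
    toward the sun is t = i_flat - i, and heights follow by
    integrating tan(t) along the sun-illuminated direction
    (positive = surface rising toward the sun)."""
    i_flat = math.radians(90.0 - sun_elev_deg)  # incidence on level ground
    z, heights = 0.0, [0.0]
    for b in brightness:
        i = math.acos(min(1.0, b / albedo))
        z += math.tan(i_flat - i) * pixel_size_m
        heights.append(z)
    return heights
```

    Pixels brighter than the level-ground value integrate upward, darker ones downward; the integration step is exactly where the noise sensitivity mentioned in the abstract enters, since slope errors accumulate along the profile.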

  8. Optical resonance imaging: An optical analog to MRI with sub-diffraction-limited capabilities.

    PubMed

    Allodi, Marco A; Dahlberg, Peter D; Mazuski, Richard J; Davis, Hunter C; Otto, John P; Engel, Gregory S

    2016-12-21

    We propose here optical resonance imaging (ORI), a direct optical analog to magnetic resonance imaging (MRI). The proposed pulse sequence for ORI maps space to time and recovers an image from a heterodyne-detected third-order nonlinear photon echo measurement. As opposed to traditional photon echo measurements, the third pulse in the ORI pulse sequence has significant pulse-front tilt that acts as a temporal gradient. This gradient couples space to time by stimulating the emission of a photon echo signal from different lateral spatial locations of a sample at different times, providing a widefield ultrafast microscopy. We circumvent the diffraction limit of the optics by mapping the lateral spatial coordinate of the sample with the emission time of the signal, which can be measured to high precision using interferometric heterodyne detection. This technique is thus an optical analog of MRI, where magnetic-field gradients are used to localize the spin-echo emission to a point below the diffraction limit of the radio-frequency wave used. We calculate the expected ORI signal using 15 fs pulses and 87° of pulse-front tilt, collected using f/2 optics, and find a two-point resolution of 275 nm using 800 nm light that satisfies the Rayleigh criterion. We also derive a general equation for resolution in optical resonance imaging that indicates that there is a possibility of superresolution imaging using this technique. The photon echo sequence also enables spectroscopic determination of the input and output energy. The technique thus correlates the input energy with the final position and energy of the exciton.

  9. Image based SAR product simulation for analysis

    NASA Technical Reports Server (NTRS)

    Domik, G.; Leberl, F.

    1987-01-01

    SAR product simulation serves to predict SAR image gray values for various flight paths. Input typically consists of a digital elevation model and backscatter curves. A new method of product simulation is described that also employs a real SAR input image, which can be denoted 'image-based simulation'. Different methods to perform this SAR prediction are presented, and their advantages and disadvantages discussed. Ascending and descending orbit images from NASA's SIR-B experiment were used for verification of the concept: input images from ascending orbits were converted into images from a descending orbit; the results are compared to the available real imagery to verify that the prediction technique produces meaningful image data.

  10. The 3D modeling of high numerical aperture imaging in thin films

    NASA Technical Reports Server (NTRS)

    Flagello, D. G.; Milster, Tom

    1992-01-01

    A modelling technique is described which is used to explore three dimensional (3D) image irradiance distributions formed by high numerical aperture (NA is greater than 0.5) lenses in homogeneous, linear films. This work uses a 3D modelling approach that is based on a plane-wave decomposition in the exit pupil. Each plane wave component is weighted by factors due to polarization, aberration, and input amplitude and phase terms. This is combined with a modified thin-film matrix technique to derive the total field amplitude at each point in a film by a coherent vector sum over all plane waves. Then the total irradiance is calculated. The model is used to show how asymmetries present in the polarized image change with the influence of a thin film through varying degrees of focus.

  11. Automatic recognition of ship types from infrared images using superstructure moment invariants

    NASA Astrophysics Data System (ADS)

    Li, Heng; Wang, Xinyu

    2007-11-01

    Automatic object recognition is an active area of interest for military and commercial applications. In this paper, a system addressing autonomous recognition of ship types in infrared images is proposed. Firstly, an approach to segmentation based on detection of salient features of the target, with subsequent shadow removal, is proposed as the basis of the subsequent object recognition. Considering that the differences between the shapes of various ships mainly lie in their superstructures, we then use superstructure moment functions invariant to translation, rotation and scale differences in input patterns, and develop a robust algorithm for obtaining the ship superstructure. Subsequently, a back-propagation neural network is used as a classifier in the recognition stage, and projection images of simulated three-dimensional ship models are used as the training sets. Our recognition model was implemented and experimentally validated using both simulated three-dimensional ship model images and real images derived from video of an AN/AAS-44V Forward Looking Infrared (FLIR) sensor.

  12. Applicability of common measures in multifocus image fusion comparison

    NASA Astrophysics Data System (ADS)

    Vajgl, Marek

    2017-11-01

    Image fusion is an image processing area aimed at the fusion of multiple input images to achieve an output image that is in some respect better than each of the inputs. In the case of "multifocus fusion", the input images capture the same scene but differ in focus distance; the aim is to obtain an image that is sharp in all its areas. There are several different approaches and methods used to solve this problem, but which one is best remains a common question. This work describes research covering the field of common measures, asking whether some of them can be used as quality measures for evaluating fusion results.

  13. Dual-input two-compartment pharmacokinetic model of dynamic contrast-enhanced magnetic resonance imaging in hepatocellular carcinoma.

    PubMed

    Yang, Jian-Feng; Zhao, Zhen-Hua; Zhang, Yu; Zhao, Li; Yang, Li-Ming; Zhang, Min-Ming; Wang, Bo-Yin; Wang, Ting; Lu, Bao-Chun

    2016-04-07

    To investigate the feasibility of a dual-input two-compartment tracer kinetic model for evaluating tumorous microvascular properties in advanced hepatocellular carcinoma (HCC). From January 2014 to April 2015, we prospectively measured and analyzed pharmacokinetic parameters [transfer constant (Ktrans), plasma flow (Fp), permeability surface area product (PS), efflux rate constant (kep), extravascular extracellular space volume ratio (ve), blood plasma volume ratio (vp), and hepatic perfusion index (HPI)] using dual-input two-compartment tracer kinetic models [a dual-input extended Tofts model and a dual-input 2-compartment exchange model (2CXM)] in 28 consecutive HCC patients. A well-known consensus that HCC is a hypervascular tumor supplied by the hepatic artery and the portal vein was used as a reference standard. A paired Student's t-test and a nonparametric paired Wilcoxon rank sum test were used to compare the equivalent pharmacokinetic parameters derived from the two models, and Pearson correlation analysis was also applied to observe the correlations among all equivalent parameters. The tumor size and pharmacokinetic parameters were tested by Pearson correlation analysis, while correlations among stage, tumor size and all pharmacokinetic parameters were assessed by Spearman correlation analysis. The Fp value was greater than the PS value (Fp = 1.07 mL/mL per minute, PS = 0.19 mL/mL per minute) in the dual-input 2CXM; HPI was 0.66 and 0.63 in the dual-input extended Tofts model and the dual-input 2CXM, respectively. There were no significant differences in the kep, vp, or HPI between the dual-input extended Tofts model and the dual-input 2CXM (P = 0.524, 0.569, and 0.622, respectively). All equivalent pharmacokinetic parameters, except for ve, were correlated in the two dual-input two-compartment pharmacokinetic models; both Fp and PS in the dual-input 2CXM were correlated with Ktrans derived from the dual-input extended Tofts model (P = 0.002, r = 0.566; P = 0.002, r = 0.570); kep, vp, and HPI between the two kinetic models were positively correlated (P = 0.001, r = 0.594; P = 0.0001, r = 0.686; P = 0.04, r = 0.391, respectively). In the dual-input extended Tofts model, ve was significantly less than that in the dual-input 2CXM (P = 0.004), and no significant correlation was seen between the two tracer kinetic models (P = 0.156, r = 0.276). Neither tumor size nor tumor stage was significantly correlated with any of the pharmacokinetic parameters obtained from the two models (P > 0.05). A dual-input two-compartment pharmacokinetic model (a dual-input extended Tofts model and a dual-input 2CXM) can be used in assessing the microvascular physiopathological properties before the treatment of advanced HCC. The dual-input extended Tofts model may be more stable in measuring the ve; however, the dual-input 2CXM may be more detailed and accurate in measuring microvascular permeability.
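
    The dual-input extended Tofts model can be sketched as a discrete convolution in which the plasma input is the HPI-weighted mix of the arterial and portal venous concentrations: Ct = vp*Cp + Ktrans * conv(Cp, exp(-kep*t)). The parameter values and the rectangle-rule discretization below are illustrative only:

```python
import math

def dual_input_tofts(ca, cpv, t, ktrans, kep, vp, hpi):
    """Discrete sketch of a dual-input extended Tofts model.
    ca/cpv: arterial and portal venous plasma concentrations sampled
    at times t (uniform spacing); hpi weights the two inputs.
    Returns the tissue concentration curve Ct."""
    dt = t[1] - t[0]
    cp = [hpi * a + (1.0 - hpi) * p for a, p in zip(ca, cpv)]
    ct = []
    for i, ti in enumerate(t):
        integral = sum(cp[j] * math.exp(-kep * (ti - t[j])) * dt
                       for j in range(i + 1))
        ct.append(vp * cp[i] + ktrans * integral)
    return ct
```

    Fitting such a forward model to measured DCE curves (given both input functions) is how Ktrans, kep, ve = Ktrans/kep, vp, and HPI are estimated in practice.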

  14. Energy-Containing Length Scale at the Base of a Coronal Hole: New Observational Findings

    NASA Astrophysics Data System (ADS)

    Abramenko, V.; Dosch, A.; Zank, G. P.; Yurchyshyn, V.; Goode, P. R.

    2012-12-01

    The dynamics of photospheric flux tubes is thought to be a key factor in the generation and propagation of MHD waves and magnetic stress into the corona. Recently, New Solar Telescope (NST, Big Bear Solar Observatory) imaging observations in helium I 10830 Å revealed ultrafine, hot magnetic loops reaching from the photosphere to the corona and originating from intense, compact magnetic field elements. One of the essential input parameters for running models of the fast solar wind is a characteristic energy-containing length scale, lambda, of the dynamical structures transverse to the mean magnetic field in a coronal hole (CH) at the base of the corona. We used NST time series of solar granulation motions to estimate the velocity fluctuations, as well as NST near-infrared magnetograms to derive the magnetic field fluctuations. The NST adaptive-optics-corrected, speckle-reconstructed images at 10-second cadence were input to the local correlation tracking (LCT) code to derive the squared transverse velocity patterns. We found that the characteristic length scale for the energy-carrying structures in the photosphere is about 300 km, which is two orders of magnitude lower than adopted in previous models. The influence of this result on coronal heating and fast solar wind modeling will be discussed. (Figure: correlation functions calculated from the squared velocities for three data sets: a coronal hole, quiet Sun, and an active region plage area.)
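
    One common way to extract an energy-containing length scale from such data is the 1/e crossing of the normalized correlation function of the transverse velocity field; the study's exact estimator may differ, so treat this as a generic sketch:

```python
import math

def correlation_length(corr, dx):
    """Distance at which a normalized correlation function (corr[0]=1)
    first drops to 1/e, linearly interpolated between samples spaced
    dx apart. One common convention for an energy-containing scale."""
    target = 1.0 / math.e
    for k in range(1, len(corr)):
        if corr[k] <= target:
            # linear interpolation between samples k-1 and k
            frac = (corr[k - 1] - target) / (corr[k - 1] - corr[k])
            return (k - 1 + frac) * dx
    return None   # never decorrelates within the sampled range
```

    With LCT velocity maps sampled at, say, 100 km, a curve decaying past 1/e between the first and second lags yields a scale of a few hundred km, the order of magnitude the abstract reports.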

  15. Comparison of CT-derived Ventilation Maps with Deposition Patterns of Inhaled Microspheres in Rats

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacob, Rick E.; Lamm, W. J.; Einstein, Daniel R.

    2015-04-01

    Purpose: Computer models for inhalation toxicology and drug-aerosol delivery studies rely on ventilation pattern inputs for predictions of particle deposition and vapor uptake. However, changes in lung mechanics due to disease can impact airflow dynamics and model results. It has been demonstrated that non-invasive, in vivo, 4DCT imaging (3D imaging at multiple time points in the breathing cycle) can be used to map heterogeneities in ventilation patterns under healthy and disease conditions. The purpose of this study was to validate ventilation patterns measured from CT imaging by exposing the same rats to an aerosol of fluorescent microspheres (FMS) and examining particle deposition patterns using cryomicrotome imaging. Materials and Methods: Six male Sprague-Dawley rats were intratracheally instilled with elastase to a single lobe to induce a heterogeneous disease. After four weeks, rats were imaged over the breathing cycle by CT and then immediately exposed to an aerosol of ~1µm FMS for ~5 minutes. After the exposure, the lungs were excised and prepared for cryomicrotome imaging, where a 3D image of FMS deposition was acquired using serial sectioning. Cryomicrotome images were spatially registered to match the live CT images to facilitate direct quantitative comparisons of FMS signal intensity with the CT-based ventilation maps. Results: Comparisons of fractional ventilation in contiguous, non-overlapping, 3D regions between CT-based ventilation maps and FMS images showed strong correlations in fractional ventilation (r=0.888, p<0.0001). Conclusion: We conclude that ventilation maps derived from CT imaging are predictive of the 1µm aerosol deposition used in ventilation-perfusion heterogeneity inhalation studies.
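
    The regional comparison reported above boils down to a Pearson correlation between fractional-ventilation values from the two modalities over matched 3D regions. A generic implementation of that statistic:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between paired regional values
    (e.g. CT-derived fractional ventilation vs FMS signal intensity
    in matched 3D regions). Generic implementation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)
```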

  16. Assessing stream bank condition using airborne LiDAR and high spatial resolution image data in temperate semirural areas in Victoria, Australia

    NASA Astrophysics Data System (ADS)

    Johansen, Kasper; Grove, James; Denham, Robert; Phinn, Stuart

    2013-01-01

    Stream bank condition is an important physical form indicator for streams related to the environmental condition of riparian corridors. This research developed and applied an approach for mapping bank condition from airborne light detection and ranging (LiDAR) and high-spatial resolution optical image data in a temperate forest/woodland/urban environment. Field observations of bank condition were related to LiDAR and optical image-derived variables, including bank slope, plant projective cover, bank-full width, valley confinement, bank height, bank top crenulation, and ground vegetation cover. Image-based variables, showing correlation with the field measurements of stream bank condition, were used as input to a cumulative logistic regression model to estimate and map bank condition. The highest correlation was achieved between field-assessed bank condition and image-derived average bank slope (R2=0.60, n=41), ground vegetation cover (R=0.43, n=41), bank width/height ratio (R=0.41, n=41), and valley confinement (producer's accuracy=100%, n=9). Cross-validation showed an average misclassification error of 0.95 on an ordinal scale from 0 to 4 using the developed model. This approach was developed to support the remotely sensed mapping of stream bank condition for 26,000 km of streams in Victoria, Australia, from 2010 to 2012.

  17. Wavelength feature mapping as a proxy to mineral chemistry for investigating geologic systems: An example from the Rodalquilar epithermal system

    NASA Astrophysics Data System (ADS)

    van der Meer, Freek; Kopačková, Veronika; Koucká, Lucie; van der Werff, Harald M. A.; van Ruitenbeek, Frank J. A.; Bakker, Wim H.

    2018-02-01

    The final product of a geologic remote sensing data analysis using multispectral and hyperspectral images is a mineral (abundance) map. Multispectral data, such as ASTER, Landsat, SPOT, and Sentinel-2, typically allow qualitative estimates of which minerals are present in a pixel, while hyperspectral data allow this to be quantified. As input to most image classification or spectral processing approaches, endmembers are required. An alternative approach to classification is to derive absorption feature characteristics, such as the wavelength position of the deepest absorption, the depth of the absorption, and the symmetry of the absorption feature, from hyperspectral data. Two approaches are presented, tested and compared in this paper: the 'Wavelength Mapper' and the 'QuanTools'. Although these algorithms use a different mathematical solution to derive absorption feature wavelength and depth, and use different image post-processing, the results are consistent, comparable and reproducible. The wavelength images can be directly linked to mineral type and abundance, but more importantly also to mineral chemical composition and subtle changes thereof. This in turn allows hyperspectral data to be interpreted in terms of mineral chemistry changes, which is a proxy for the pressure and temperature of mineral formation. We show the case of the Rodalquilar epithermal system of the southern Spanish Cabo de Gata volcanic area using HyMAP airborne hyperspectral images.
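
    Absorption-feature characteristics of the kind both tools derive (wavelength of deepest absorption, depth, symmetry) can be computed from a continuum-removed spectrum. The shoulder choice and symmetry definition below are one plausible convention, not necessarily what Wavelength Mapper or QuanTools implement:

```python
def absorption_feature(wl, refl):
    """Continuum-removed absorption-feature parameters for one feature:
    (wavelength of deepest absorption, depth, symmetry). Shoulders are
    taken as the first and last samples; symmetry is the fraction of
    feature area lying shortward of the minimum (0.5 = symmetric)."""
    # straight-line continuum between the two shoulders
    w0, w1 = wl[0], wl[-1]
    r0, r1 = refl[0], refl[-1]
    cont = [r0 + (r1 - r0) * (w - w0) / (w1 - w0) for w in wl]
    cr = [r / c for r, c in zip(refl, cont)]      # continuum-removed
    k = min(range(len(cr)), key=lambda i: cr[i])  # deepest absorption
    depth = 1.0 - cr[k]
    left = sum(1.0 - v for v in cr[:k]) + 0.5 * (1.0 - cr[k])
    total = sum(1.0 - v for v in cr)
    symmetry = left / total if total > 0 else 0.5
    return wl[k], depth, symmetry
```

    Shifts in the returned wavelength position across an image are what get interpreted as mineral-chemistry changes (e.g. white mica or chlorite composition), which is the proxy the paper exploits.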

  18. Optoelectronic associative memory

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin (Inventor)

    1993-01-01

    An associative optical memory including an input spatial light modulator (SLM) in the form of an edge enhanced liquid crystal light valve (LCLV) and a pair of memory SLM's in the form of liquid crystal televisions (LCTV's) forms a matrix array of an input image which is cross correlated with a matrix array of stored images. The correlation product is detected and nonlinearly amplified to illuminate a replica of the stored image array to select the stored image correlating with the input image. The LCLV is edge enhanced by reducing the bias frequency and voltage and rotating its orientation. The edge enhancement and nonlinearity of the photodetection improves the orthogonality of the stored image. The illumination of the replicate stored image provides a clean stored image, uncontaminated by the image comparison process.

  19. Assessing the Utility of Uav-Borne Hyperspectral Image and Photogrammetry Derived 3d Data for Wetland Species Distribution Quick Mapping

    NASA Astrophysics Data System (ADS)

    Li, Q. S.; Wong, F. K. K.; Fung, T.

    2017-08-01

    A lightweight unmanned aerial vehicle (UAV) loaded with novel sensors offers a low-cost and minimum-risk solution for data acquisition in complex environments. This study assessed the performance of UAV-based hyperspectral imagery and a digital surface model (DSM) derived from photogrammetric point clouds for classification of 13 species in a wetland area of Hong Kong. Multiple feature reduction methods and different classifiers were compared. The best result was obtained when transformed components from minimum noise fraction (MNF) and the DSM were combined in a support vector machine (SVM) classifier. Wavelength regions at the chlorophyll-absorption green peak, red, red edge, and oxygen absorption in the near infrared were identified for better species discrimination. In addition, the input of DSM data reduces overestimation of low plant species and misclassification due to the shadow effect and inter-species morphological variation. This study establishes a framework for quick survey and update of wetland environments using a UAV system. The findings indicate that the utility of UAV-borne hyperspectral imagery and derived tree height information provides a solid foundation for further research, such as biological invasion monitoring and bio-parameter modelling in wetlands.

  20. Integration of low level and ontology derived features for automatic weapon recognition and identification

    NASA Astrophysics Data System (ADS)

    Sirakov, Nikolay M.; Suh, Sang; Attardo, Salvatore

    2011-06-01

    This paper presents a further step in research toward the development of a quick and accurate weapon-identification methodology and system. A basic stage of this methodology is the automatic acquisition and updating of a weapons ontology as a source for deriving high-level weapons information. The present paper outlines the main ideas used to approach this goal. In the next stage, a clustering approach based on a hierarchy of concepts is suggested. An inherent slot of every node of the proposed ontology is a low-level feature vector (LLFV), which facilitates search through the ontology. Part of the LLFV is information about an object's parts. To partition an object, a new approach is presented that is capable of defining the object's concavities, used to mark the end points of weapon parts, which are considered as convexities. Further, an existing matching approach is optimized to determine whether an ontological object matches the objects from an input image. Objects from derived ontological clusters are considered for the matching process. Image resizing is studied and applied to decrease the runtime of the matching approach and to investigate its rotational and scaling invariance. A set of experiments is performed to validate the theoretical concepts.

  1. Towards automatic lithological classification from remote sensing data using support vector machines

    NASA Astrophysics Data System (ADS)

    Yu, Le; Porwal, Alok; Holden, Eun-Jung; Dentith, Michael

    2010-05-01

    Remote sensing data can be used effectively as a means to build geological knowledge for poorly mapped terrains. Spectral remote sensing data from space- and air-borne sensors have been widely used for geological mapping, especially in areas of high outcrop density in arid regions. However, spectral remote sensing information by itself cannot be used efficiently for a comprehensive lithological classification of an area because (1) the diagnostic spectral response of a rock within an image pixel is conditioned by several factors, including atmospheric effects, the spectral and spatial resolution of the image, sub-pixel-level heterogeneity in the chemical and mineralogical composition of the rock, and the presence of soil and vegetation cover; and (2) it provides only surface information and is therefore highly sensitive to noise due to weathering, soil cover, and vegetation. Consequently, for efficient lithological classification, spectral remote sensing data need to be supplemented with other remote sensing datasets that provide geomorphological and subsurface geological information, such as a digital elevation model (DEM) and aeromagnetic data. Each of these datasets contains significant information about geology that, in conjunction, can potentially be used for automated lithological classification using supervised machine-learning algorithms. In this study, the support vector machine (SVM), a kernel-based supervised learning method, was applied to automated lithological classification of a study area in northwestern India using remote sensing data, namely ASTER, DEM and aeromagnetic data. Several digital image processing techniques were used to produce derivative datasets that contained enhanced information relevant to lithological discrimination. 
A series of SVMs (trained using k-fold cross-validation with grid search) was tested using various combinations of input datasets selected from among 50 datasets, comprising the original 14 ASTER bands and 36 derivative datasets (14 principal component bands, 14 independent component bands, 3 band ratios, 3 DEM derivatives: slope, curvature and roughness; and 2 aeromagnetic derivatives: mean and variance of susceptibility) extracted from the ASTER, DEM and aeromagnetic data, in order to determine the optimal inputs that provide the highest classification accuracy. A combination of ASTER-derived independent components, principal components and band ratios, DEM-derived slope, curvature and roughness, and aeromagnetic-derived mean and variance of magnetic susceptibility provided the highest classification accuracy of 93.4% on independent test samples. A comparison of the classification results of the SVM with those of maximum likelihood (84.9%) and minimum distance (38.4%) classifiers clearly shows that the SVM algorithm returns much higher classification accuracy. The SVM method can therefore be used to produce quick and reliable geological maps from scarce geological information, which is still the case for many under-developed frontier regions of the world.
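The minimum-distance (nearest-centroid) baseline named above, together with the k-fold cross-validation scheme used for model selection, can be sketched in a few lines. This is a generic illustration, not the authors' code; the feature vectors and fold count below are invented for the example.

```python
import math
from collections import defaultdict

def train_centroids(X, y):
    """Compute per-class mean vectors (the 'minimum distance' model)."""
    sums, counts = defaultdict(lambda: None), defaultdict(int)
    for x, label in zip(X, y):
        if sums[label] is None:
            sums[label] = [0.0] * len(x)
        sums[label] = [s + v for s, v in zip(sums[label], x)]
        counts[label] += 1
    return {c: [s / counts[c] for s in sums[c]] for c in sums}

def predict(centroids, x):
    """Assign the class whose centroid is closest in Euclidean distance."""
    return min(centroids, key=lambda c: math.dist(centroids[c], x))

def kfold_accuracy(X, y, k=5):
    """Interleaved k-fold cross-validation accuracy (assumes non-empty folds)."""
    n, correct = len(X), 0
    for fold in range(k):
        test_idx = set(range(fold, n, k))
        train = [(X[i], y[i]) for i in range(n) if i not in test_idx]
        model = train_centroids([x for x, _ in train], [l for _, l in train])
        correct += sum(predict(model, X[i]) == y[i] for i in test_idx)
    return correct / n
```

In a grid search over SVM hyperparameters, the same `kfold_accuracy` loop would simply be repeated for each parameter combination, keeping the best-scoring one.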

  2. Attenuation correction for brain PET imaging using deep neural network based on dixon and ZTE MR images.

    PubMed

    Gong, Kuang; Yang, Jaewon; Kim, Kyungsang; El Fakhri, Georges; Seo, Youngho; Li, Quanzheng

    2018-05-23

    Positron emission tomography (PET) is a functional imaging modality widely used in neuroscience studies. To obtain meaningful quantitative results from PET images, attenuation correction is necessary during image reconstruction. For PET/MR hybrid systems, PET attenuation correction is challenging because magnetic resonance (MR) images do not reflect attenuation coefficients directly. To address this issue, we present deep neural network methods to derive continuous attenuation coefficients for brain PET imaging from MR images. With only Dixon MR images as the network input, the existing U-net structure was adopted, and analysis using forty patient data sets shows it is superior to other Dixon-based methods. When both Dixon and zero-echo-time (ZTE) images are available, we propose a modified U-net structure, named GroupU-net, that efficiently makes use of both Dixon and ZTE information through group convolution modules as the network goes deeper. Quantitative analysis based on fourteen real patient data sets demonstrates that both network approaches perform better than the standard methods, and that the proposed network structure further reduces the PET quantification error compared with the U-net structure. © 2018 Institute of Physics and Engineering in Medicine.

  3. Improving the accuracy in detection of clustered microcalcifications with a context-sensitive classification model.

    PubMed

    Wang, Juan; Nishikawa, Robert M; Yang, Yongyi

    2016-01-01

    In computer-aided detection of microcalcifications (MCs), the detection accuracy is often compromised by frequent occurrence of false positives (FPs), which can be attributed to a number of factors, including imaging noise, inhomogeneity in tissue background, linear structures, and artifacts in mammograms. In this study, the authors investigated a unified classification approach for combating the adverse effects of these heterogeneous factors for accurate MC detection. To accommodate FPs caused by different factors in a mammogram image, the authors developed a classification model to which the input features were adapted according to the image context at a detection location. For this purpose, the input features were defined in two groups, of which one group was derived from the image intensity pattern in a local neighborhood of a detection location, and the other group was used to characterize how an MC differs from its structural background. Owing to the distinctive effect of linear structures on the detector response, the authors introduced a dummy variable into the unified classifier model, which allowed the input features to be adapted according to the image context at a detection location (i.e., presence or absence of linear structures). To suppress the effect of inhomogeneity in tissue background, the input features were extracted from different domains aimed at enhancing MCs in a mammogram image. To demonstrate the flexibility of the proposed approach, the authors implemented the unified classifier model with two widely used machine learning algorithms, namely, a support vector machine (SVM) classifier and an Adaboost classifier. In the experiment, the proposed approach was tested for two representative MC detectors in the literature [difference-of-Gaussians (DoG) detector and SVM detector]. 
The detection performance was assessed using free-response receiver operating characteristic (FROC) analysis on a set of 141 screen-film mammogram (SFM) images (66 cases) and a set of 188 full-field digital mammogram (FFDM) images (95 cases). The FROC analysis results show that the proposed unified classification approach can significantly improve the detection accuracy of the two MC detectors on both SFM and FFDM images. Despite the difference in performance between the two detectors, the unified classifiers reduced the FP rate to a similar level in the output of both detectors. In particular, with the true-positive rate at 85%, the FP rate on SFM images for the DoG detector was reduced from 1.16 to 0.33 clusters/image (unified SVM) and 0.36 clusters/image (unified Adaboost), respectively; similarly, for the SVM detector, the FP rate was reduced from 0.45 clusters/image to 0.30 clusters/image (unified SVM) and 0.25 clusters/image (unified Adaboost), respectively. Similar FP reduction results were also achieved on FFDM images for the two MC detectors. The proposed unified classification approach can be effective for discriminating MCs from FPs caused by different factors (such as MC-like noise patterns and linear structures) in MC detection. The framework is general and applicable to further improving the detection accuracy of existing MC detectors.

  4. Comparison of active-set method deconvolution and matched-filtering for derivation of an ultrasound transit time spectrum.

    PubMed

    Wille, M-L; Zapf, M; Ruiter, N V; Gemmeke, H; Langton, C M

    2015-06-21

    The quality of ultrasound computed tomography imaging is primarily determined by the accuracy of ultrasound transit time measurement. A major problem in the analysis is the overlap of signals, making it difficult to detect the correct transit time. The current standard is to apply a matched-filtering approach to the input and output signals. This study compares the matched-filtering technique with active-set deconvolution to derive a transit time spectrum from a coded-excitation chirp signal and the measured output signal. The ultrasound wave travels along a direct and a reflected path to the receiver, resulting in an overlap in the recorded output signal. The matched-filtering and deconvolution techniques were applied to determine the transit times associated with the two signal paths. Both techniques were able to detect the two different transit times; while matched filtering has better accuracy (standard deviations of 0.13 μs versus 0.18 μs), deconvolution has a 3.5-times-improved side-lobe to main-lobe ratio. Higher side-lobe suppression is important to further improve image fidelity. These results suggest that a future combination of both techniques would provide improved signal detection and hence improved image fidelity.
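The matched-filtering step, cross-correlating the received signal with the transmitted template so that peaks mark candidate transit times, can be sketched as below. The short template and the two synthetic arrivals (standing in for the direct and reflected paths) are invented for the illustration; this is not the authors' implementation.

```python
def matched_filter(received, template):
    """Slide the transmitted template along the received signal and
    normalise by template energy; peaks mark candidate transit times."""
    n, m = len(received), len(template)
    energy = sum(t * t for t in template)
    return [
        sum(received[lag + i] * template[i] for i in range(m)) / energy
        for lag in range(n - m + 1)
    ]

# Toy signal: the template arrives via a direct path (lag 3) and a
# weaker reflected path (lag 8).
template = [1.0, -2.0, 1.5, -0.5]
received = [0.0] * 16
for lag, amp in [(3, 1.0), (8, 0.6)]:
    for i, t in enumerate(template):
        received[lag + i] += amp * t

corr = matched_filter(received, template)
peaks = sorted(range(len(corr)), key=lambda i: corr[i], reverse=True)[:2]
```

The two largest correlation peaks recover the transit times (sample lags) of the two paths; deconvolution would instead solve for the full transit time spectrum under a non-negativity constraint.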

  5. Predictive capabilities of statistical learning methods for lung nodule malignancy classification using diagnostic image features: an investigation using the Lung Image Database Consortium dataset

    NASA Astrophysics Data System (ADS)

    Hancock, Matthew C.; Magnan, Jerry F.

    2017-03-01

    To determine the potential usefulness of quantified diagnostic image features as inputs to a CAD system, we investigate the predictive capabilities of statistical learning methods for classifying nodule malignancy using the Lung Image Database Consortium (LIDC) dataset, employing only the radiologist-assigned diagnostic feature values for the lung nodules therein, together with our derived estimates of nodule diameter and volume from the radiologists' annotations. We calculate theoretical upper bounds on the classification accuracy achievable by an ideal classifier that uses only the radiologist-assigned feature values, and we obtain an accuracy of 85.74 (+/-1.14)%, which is, on average, 4.43% below the theoretical maximum of 90.17%. The corresponding area-under-the-curve (AUC) score is 0.932 (+/-0.012), which increases to 0.949 (+/-0.007) when the diameter and volume features are included, with the accuracy rising to 88.08 (+/-1.11)%. Our results are comparable to those in the literature that use algorithmically derived image-based features, which supports our hypothesis that lung nodules can be classified as malignant or benign using only quantified diagnostic image features, and indicates the competitiveness of this approach. We also analyze how the classification accuracy depends on specific features and feature subsets, and we rank the features according to their predictive power, statistically demonstrating the top four to be spiculation, lobulation, subtlety, and calcification.

  6. Estimation of crown closure from AVIRIS data using regression analysis

    NASA Technical Reports Server (NTRS)

    Staenz, K.; Williams, D. J.; Truchon, M.; Fritz, R.

    1993-01-01

    Crown closure is one of the input parameters used for forest growth and yield modelling. Preliminary work by Staenz et al. indicates that imaging spectrometer data acquired with sensors such as the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) have some potential for estimating crown closure on a stand level. The objectives of this paper are: (1) to establish a relationship between AVIRIS data and the crown closure derived from aerial photography of a forested test site within the Interior Douglas Fir biogeoclimatic zone in British Columbia, Canada; (2) to investigate the impact of atmospheric effects and the forest background on the correlation between AVIRIS data and crown closure estimates; and (3) to improve this relationship using multiple regression analysis.

  7. Multitask visual learning using genetic programming.

    PubMed

    Jaśkowski, Wojciech; Krawiec, Krzysztof; Wieloch, Bartosz

    2008-01-01

    We propose a multitask learning method of visual concepts within the genetic programming (GP) framework. Each GP individual is composed of several trees that process visual primitives derived from input images. Two trees solve two different visual tasks and are allowed to share knowledge with each other by commonly calling the remaining GP trees (subfunctions) included in the same individual. The performance of a particular tree is measured by its ability to reproduce the shapes contained in the training images. We apply this method to visual learning tasks of recognizing simple shapes and compare it to a reference method. The experimental verification demonstrates that such multitask learning often leads to performance improvements in one or both solved tasks, without extra computational effort.

  8. Modeling UV-B Effects on Primary Production Throughout the Southern Ocean Using Multi-Sensor Satellite Data

    NASA Technical Reports Server (NTRS)

    Lubin, Dan

    2001-01-01

    This study has used a combination of ocean color, backscattered ultraviolet, and passive microwave satellite data to investigate the impact of the springtime Antarctic ozone depletion on the base of the Antarctic marine food web - primary production by phytoplankton. Spectral ultraviolet (UV) radiation fields derived from the satellite data are propagated into the water column where they force physiologically-based numerical models of phytoplankton growth. This large-scale study has been divided into two components: (1) the use of Total Ozone Mapping Spectrometer (TOMS) and Special Sensor Microwave Imager (SSM/I) data in conjunction with radiative transfer theory to derive the surface spectral UV irradiance throughout the Southern Ocean; and (2) the merging of these UV irradiances with the climatology of chlorophyll derived from SeaWiFS data to specify the input data for the physiological models.

  9. A reference skeletal dosimetry model for an adult male radionuclide therapy patient based on three-dimensional imaging and paired-image radiation transport

    NASA Astrophysics Data System (ADS)

    Shah, Amish P.

    The need for improved patient-specificity of skeletal dose estimates is widely recognized in radionuclide therapy. Current clinical models for marrow dose are based on skeletal mass estimates from a variety of sources and linear chord-length distributions that do not account for particle escape into cortical bone. To predict marrow dose, these clinical models use a scheme that requires separate calculations of cumulated activity and radionuclide S values. Selection of an appropriate S value is generally limited to one of only three sources, all of which use as input the trabecular microstructure of an individual measured 25 years ago, and the tissue mass derived from different individuals measured 75 years ago. Our study proposed a new modeling approach to marrow dosimetry, the Paired Image Radiation Transport (PIRT) model, that properly accounts for both the trabecular microstructure and the cortical macrostructure of each skeletal site in a reference male radionuclide patient. The PIRT model, as applied within EGSnrc, requires two sets of input geometry: (1) an infinite voxel array of segmented microimages of the spongiosa acquired via microCT; and (2) a segmented ex-vivo CT image of the bone site macrostructure defining both the spongiosa (marrow, endosteum, and trabeculae) and the cortical bone cortex. Our study also proposed revising reference skeletal dosimetry models for the adult male cancer patient. Skeletal site-specific radionuclide S values were obtained for a 66-year-old male reference patient. The derivation of total skeletal S values was unique in that the necessary skeletal mass and electron dosimetry calculations were formulated from the same source bone site over the entire skeleton. 
We conclude that paired-image radiation-transport techniques provide an adoptable method by which the intricate, anisotropic trabecular microstructure of a skeletal site and the physical size and shape of the bone can be handled together, for improved compilation of reference radionuclide S values. We also conclude that this comprehensive model for the adult male cancer patient should be implemented for use in patient-specific calculations for radionuclide dosimetry of the skeleton.

  10. Personal identification based on blood vessels of retinal fundus images

    NASA Astrophysics Data System (ADS)

    Fukuta, Keisuke; Nakagawa, Toshiaki; Hayashi, Yoshinori; Hatanaka, Yuji; Hara, Takeshi; Fujita, Hiroshi

    2008-03-01

    Biometric techniques have been implemented in place of conventional identification methods such as passwords in computers, automatic teller machines (ATMs), and entrance and exit management systems. We propose a personal identification (PI) system using color retinal fundus images, which are unique to each individual. The proposed identification procedure is based on comparison of an input fundus image with reference fundus images in a database. In the first step, registration between the input image and the reference image is performed; this step includes translational and rotational movement. The PI is based on a measure of similarity between blood vessel images generated from the input and reference images. The similarity measure is defined as the cross-correlation coefficient calculated from the pixel values. When the similarity is greater than a predetermined threshold, the input image is identified, meaning that the input and reference images belong to the same person. Four hundred sixty-two fundus images, including forty-one same-person image pairs, were used for evaluation of the proposed technique. The false rejection rate and the false acceptance rate were 9.9×10⁻⁵% and 4.3×10⁻⁵%, respectively. The results indicate that the proposed method has higher performance than other biometrics except DNA. For practical public application, a device that can capture retinal fundus images easily is needed. The proposed method is applicable not only to PI but also to systems that warn about misfiling of fundus images in medical facilities.
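The core similarity test, a cross-correlation (Pearson) coefficient between registered input and reference vessel images compared against a threshold, can be sketched as below. Images are represented here as flat pixel lists, and the threshold value is an arbitrary placeholder, not the one used in the study; non-constant images are assumed.

```python
import math

def correlation_coefficient(a, b):
    """Pearson cross-correlation between two aligned vessel images,
    given as flat pixel lists (assumed non-constant)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def identify(input_img, reference_imgs, threshold=0.8):
    """Return the index of the best-matching reference if its similarity
    exceeds the threshold, else None (the input is rejected)."""
    scores = [correlation_coefficient(input_img, ref) for ref in reference_imgs]
    best = max(range(len(scores)), key=scores.__getitem__)
    return best if scores[best] >= threshold else None
```

A rejected input (best score below threshold) corresponds to an unidentified person; the trade-off between false rejection and false acceptance is set by the threshold.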

  11. Using LiDAR and quickbird data to model plant production and quantify uncertainties associated with wetland detection and land cover generalizations

    USGS Publications Warehouse

    Cook, B.D.; Bolstad, P.V.; Naesset, E.; Anderson, R. Scott; Garrigues, S.; Morisette, J.T.; Nickeson, J.; Davis, K.J.

    2009-01-01

    Spatiotemporal data from satellite remote sensing and surface meteorology networks have made it possible to continuously monitor global plant production, and to identify global trends associated with land cover/use and climate change. Gross primary production (GPP) and net primary production (NPP) are routinely derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard satellites Terra and Aqua, and estimates generally agree with independent measurements at validation sites across the globe. However, the accuracy of GPP and NPP estimates in some regions may be limited by the quality of model input variables and heterogeneity at fine spatial scales. We developed new methods for deriving model inputs (i.e., land cover, leaf area, and photosynthetically active radiation absorbed by plant canopies) from airborne laser altimetry (LiDAR) and Quickbird multispectral data at resolutions ranging from about 30 m to 1 km. In addition, LiDAR-derived biomass was used as a means for computing carbon-use efficiency. Spatial variables were used with temporal data from ground-based monitoring stations to compute a six-year GPP and NPP time series for a 3600 ha study site in the Great Lakes region of North America. Model results compared favorably with independent observations from a 400 m flux tower and a process-based ecosystem model (BIOME-BGC), but only after removing vapor pressure deficit as a constraint on photosynthesis from the MODIS global algorithm. Fine-resolution inputs captured more of the spatial variability, but estimates were similar to coarse-resolution data when integrated across the entire landscape. Failure to account for wetlands had little impact on landscape-scale estimates, because vegetation structure, composition, and conversion efficiencies were similar to upland plant communities. 
Plant productivity estimates were noticeably improved using LiDAR-derived variables, while uncertainties associated with land cover generalizations and wetlands in this largely forested landscape were considered less important.

  12. Using LIDAR and Quickbird Data to Model Plant Production and Quantify Uncertainties Associated with Wetland Detection and Land Cover Generalizations

    NASA Technical Reports Server (NTRS)

    Cook, Bruce D.; Bolstad, Paul V.; Naesset, Erik; Anderson, Ryan S.; Garrigues, Sebastian; Morisette, Jeffrey T.; Nickeson, Jaime; Davis, Kenneth J.

    2009-01-01

    Spatiotemporal data from satellite remote sensing and surface meteorology networks have made it possible to continuously monitor global plant production, and to identify global trends associated with land cover/use and climate change. Gross primary production (GPP) and net primary production (NPP) are routinely derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard satellites Terra and Aqua, and estimates generally agree with independent measurements at validation sites across the globe. However, the accuracy of GPP and NPP estimates in some regions may be limited by the quality of model input variables and heterogeneity at fine spatial scales. We developed new methods for deriving model inputs (i.e., land cover, leaf area, and photosynthetically active radiation absorbed by plant canopies) from airborne laser altimetry (LiDAR) and Quickbird multispectral data at resolutions ranging from about 30 m to 1 km. In addition, LiDAR-derived biomass was used as a means for computing carbon-use efficiency. Spatial variables were used with temporal data from ground-based monitoring stations to compute a six-year GPP and NPP time series for a 3600 ha study site in the Great Lakes region of North America. Model results compared favorably with independent observations from a 400 m flux tower and a process-based ecosystem model (BIOME-BGC), but only after removing vapor pressure deficit as a constraint on photosynthesis from the MODIS global algorithm. Fine-resolution inputs captured more of the spatial variability, but estimates were similar to coarse-resolution data when integrated across the entire landscape. Failure to account for wetlands had little impact on landscape-scale estimates, because vegetation structure, composition, and conversion efficiencies were similar to upland plant communities. Plant productivity estimates were noticeably improved using LiDAR-derived variables, while uncertainties associated with land cover generalizations and wetlands in this largely forested landscape were considered less important.

  13. Ultrasound breast imaging using frequency domain reverse time migration

    NASA Astrophysics Data System (ADS)

    Roy, O.; Zuberi, M. A. H.; Pratt, R. G.; Duric, N.

    2016-04-01

    Conventional ultrasonography reconstruction techniques, such as B-mode, are based on a simple wave propagation model derived from a high frequency approximation. Therefore, to minimize model mismatch, the central frequency of the input pulse is typically chosen between 3 and 15 megahertz. Despite the increase in theoretical resolution, operating at higher frequencies comes at the cost of lower signal-to-noise ratio. This ultimately degrades the image contrast and overall quality at higher imaging depths. To address this issue, we investigate a reflection imaging technique, known as reverse time migration, which uses a more accurate propagation model for reconstruction. We present preliminary simulation results as well as physical phantom image reconstructions obtained using data acquired with a breast imaging ultrasound tomography prototype. The original reconstructions are filtered to remove low-wavenumber artifacts that arise due to the inclusion of the direct arrivals. We demonstrate the advantage of using an accurate sound speed model in the reverse time migration process. We also explain how the increase in computational complexity can be mitigated using a frequency domain approach and a parallel computing platform.

  14. Restoration of STORM images from sparse subset of localizations (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Moiseev, Alexander A.; Gelikonov, Grigory V.; Gelikonov, Valentine M.

    2016-02-01

    To construct a Stochastic Optical Reconstruction Microscopy (STORM) image, one should collect a sufficient number of localized fluorophores to satisfy the Nyquist criterion. This requirement limits the time resolution of the method. In this work we propose a probabilistic approach to construct STORM images from a subset of localized fluorophores 3-4 times sparser than required by the Nyquist criterion. Using a set of STORM images constructed from a number of localizations sufficient for the Nyquist criterion, we derive a model that predicts, for every location, the probability of being occupied by a fluorophore at the end of a hypothetical acquisition, taking as input the distribution of already localized fluorophores in the proximity of that location. We show that a probability map obtained from a number of fluorophores 3-4 times smaller than required by the Nyquist criterion may be used as a super-resolution image itself. Thus we are able to construct a STORM image from a subset of localized fluorophores 3-4 times sparser than required by the Nyquist criterion, proportionally decreasing the STORM data acquisition time. This method may be used complementarily with other approaches designed to increase STORM time resolution.
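One plausible form of such a probability map, assumed here for illustration rather than taken from the authors' fitted model, is a Poisson-style occupancy estimate driven by the local density of the sparse localizations:

```python
import math

def occupancy_probability(localizations, grid, radius, rate):
    """For each grid location, estimate the probability that at least one
    fluorophore would be localized there by the end of a full acquisition,
    from the count of sparse-subset localizations within `radius`.
    The Poisson form 1 - exp(-rate * k) is an assumed model; `rate`
    scales sparse counts up to a full acquisition."""
    probs = []
    for g in grid:
        k = sum(1 for p in localizations if math.dist(p, g) <= radius)
        probs.append(1.0 - math.exp(-rate * k))  # P(occupied) under Poisson counts
    return probs
```

Locations near many early localizations approach probability 1 and render bright in the probability-map image; empty regions stay at 0, which is what lets a 3-4x sparser subset stand in for the full acquisition.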

  15. Analysis of MODIS snow cover time series over the alpine regions as input for hydrological modeling

    NASA Astrophysics Data System (ADS)

    Notarnicola, Claudia; Rastner, Philipp; Irsara, Luca; Moelg, Nico; Bertoldi, Giacomo; Dalla Chiesa, Stefano; Endrizzi, Stefano; Zebisch, Marc

    2010-05-01

    Snow extent and related physical properties are key parameters in hydrology, weather forecasting and hazard warning, as well as in climatological models. Satellite sensors offer a unique advantage in monitoring snow cover due to their temporal and spatial synoptic view. The Moderate Resolution Imaging Spectroradiometer (MODIS) from NASA is especially useful for this purpose due to its high observation frequency. However, in order to evaluate the role of snow in the water cycle of a catchment, such as runoff generation due to snowmelt, remote sensing data need to be assimilated into hydrological models. This study presents a comparison on a multi-temporal basis between snow cover data derived from (1) MODIS images, (2) LANDSAT images, and (3) predictions by the hydrological model GEOtop [1,3]. The test area is located in the catchment of the Matscher Valley (South Tyrol, Northern Italy). The snow cover maps derived from MODIS images are obtained using a newly developed algorithm taking into account the specific requirements of mountain regions, with a focus on the Alps [2]. This algorithm requires the standard MODIS products MOD09 and MOD02 as input data and generates snow cover maps at a spatial resolution of 250 m. The final output is a combination of MODIS AQUA and MODIS TERRA snow cover maps, thus reducing the presence of cloudy pixels and no-data values due to topography. Using these maps, daily time series from the winter season (November - May) 2002 to 2008/2009 have been created. In addition to the MODIS snow maps, some snow cover maps derived from LANDSAT images have been used; due to their high resolution (< 30 m), they serve as an evaluation tool. The snow cover maps are then compared with the hydrological GEOtop model outputs. The main objectives of this work are: (1) evaluation of the MODIS snow cover algorithm using LANDSAT data; (2) investigation of snow cover and snow cover duration for the area of interest in South Tyrol; (3) derivation and interpretation of the snow line for the seven winter seasons; and (4) an evaluation of the model outputs to determine the situations in which remotely sensed data can be used to improve the model prediction of snow coverage and related variables. References [1] Rigon R., Bertoldi G. and Over T.M. 2006. GEOtop: A Distributed Hydrological Model with Coupled Water and Energy Budgets, Journal of Hydrometeorology, 7: 371-388. [2] Rastner P., Irsara L., Schellenberger T., Della Chiesa S., Bertoldi G., Endrizzi S., Notarnicola C., Steurer C., Zebisch M. 2009. Monitoraggio del manto nevoso in aree alpine con dati MODIS multi-temporali e modelli idrologici [Snow cover monitoring in alpine areas with multi-temporal MODIS data and hydrological models], 13th ASITA National Conference, 1-4.12.2009, Bari, Italy. [3] Zanotti F., Endrizzi S., Bertoldi G. and Rigon R. 2004. The GEOtop snow module. Hydrological Processes, 18: 3667-3679. DOI:10.1002/hyp.5794.

  16. Image processing tool for automatic feature recognition and quantification

    DOEpatents

    Chen, Xing; Stoddard, Ryan J.

    2017-05-02

    A system for defining structures within an image is described. The system includes reading an input file, preprocessing the input file while preserving metadata such as scale information, and then detecting features of the input file. In one version, the detection first uses an edge detector, followed by identification of features using a Hough transform. The output of the process is the identified elements within the image.
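The Hough-transform stage named above can be sketched as a (theta, rho) voting accumulator over edge pixels, the textbook line-detection form rather than the patented implementation; the edge points below are invented for the example.

```python
import math

def hough_lines(edge_points, n_theta=180):
    """Vote each edge pixel into a (theta_index, rho) accumulator;
    the highest-voted cell gives the dominant straight line, using the
    normal form rho = x*cos(theta) + y*sin(theta)."""
    acc = {}
    for x, y in edge_points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(t, rho)] = acc.get((t, rho), 0) + 1
    (t_best, rho_best), votes = max(acc.items(), key=lambda kv: kv[1])
    return math.pi * t_best / n_theta, rho_best, votes

# Edge pixels lying on the vertical line x = 4, as an edge detector might emit
theta, rho, votes = hough_lines([(4, y) for y in range(10)])
```

Each collinear pixel votes for the same (theta, rho) cell, so the line survives even when individual edge pixels are missing, which is why the transform is robust for feature recognition.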

  17. Effect of random phase mask on input plane in photorefractive authentic memory with two-wave encryption method

    NASA Astrophysics Data System (ADS)

    Mita, Akifumi; Okamoto, Atsushi; Funakoshi, Hisatoshi

    2004-06-01

    We have proposed an all-optical authentic memory based on a two-wave encryption method. In the recording process, the image data are encrypted into white noise by random phase masks added to the input beam carrying the image data and to the reference beam. Only a reading beam with the phase-conjugate distribution of the reference beam can decrypt the encrypted data. If the encrypted data are read out with an incorrect phase distribution, the output data are transformed into white noise. Moreover, during readout, reconstructions of the encrypted data interfere destructively, resulting in zero intensity. Our memory therefore has the merit that unlawful accesses can be detected easily by measuring the output beam intensity. In our encryption method, the random phase mask on the input plane plays two important roles: transforming the input image into white noise, and preventing the white noise from being decrypted back to the input image by the blind deconvolution method. Without this mask, when unauthorized users observe the output beam with a CCD during readout with a plane wave, they obtain exactly the intensity distribution of the Fourier transform of the input image; the encrypted image can then be decrypted easily by blind deconvolution. With the mask, even if unauthorized users observe the output beam in the same way, the encrypted image cannot be decrypted, because the observed intensity distribution is dispersed at random by the mask. The robustness is thus increased by the mask. In this report, we compare the correlation coefficient between the output image and the input image, which represents the degree to which the output resembles white noise, with and without the mask. We show that the robustness of this encryption method is increased, as the correlation coefficient improves from 0.3 to 0.1 when the mask is used.
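    The role of the input-plane mask can be illustrated numerically with the classical double random phase encoding scheme as a stand-in for the holographic two-wave setup (not the authors' optical implementation): without the input mask the Fourier-plane intensity is simply the image's power spectrum, while the mask whitens it; decryption requires the conjugate masks.

```python
import numpy as np

rng = np.random.default_rng(1)
img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0       # simple test object

# Random phase masks on the input plane (m1) and the Fourier plane (m2)
m1 = np.exp(2j * np.pi * rng.random(img.shape))
m2 = np.exp(2j * np.pi * rng.random(img.shape))

# Fourier-plane intensity without / with the input-plane mask:
# the unmasked spectrum is strongly peaked (attackable by deconvolution),
# the masked one is whitened.
spec_plain = np.abs(np.fft.fft2(img)) ** 2
spec_masked = np.abs(np.fft.fft2(img * m1)) ** 2

# Encryption, and decryption with the conjugate masks
enc = np.fft.ifft2(np.fft.fft2(img * m1) * m2)
dec = np.conj(m1) * np.fft.ifft2(np.fft.fft2(enc) * np.conj(m2))
```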

  18. Automated image segmentation using support vector machines

    NASA Astrophysics Data System (ADS)

    Powell, Stephanie; Magnotta, Vincent A.; Andreasen, Nancy C.

    2007-03-01

    Neurodegenerative and neurodevelopmental diseases demonstrate problems associated with brain maturation and aging. Automated methods to delineate brain structures of interest are required to analyze the large amounts of imaging data being collected in several ongoing multi-center studies. We have previously reported on using artificial neural networks (ANN) to define subcortical brain structures, including the thalamus (0.88), caudate (0.85), and putamen (0.81). In this work, a priori probability information was generated using Thirion's demons registration algorithm. The input vector consisted of the a priori probability, spherical coordinates, and an iris of surrounding signal intensity values. We have applied the support vector machine (SVM) machine learning algorithm to automatically segment subcortical and cerebellar regions using the same input vector information. The SVM architecture was derived from the ANN framework. Training was completed using a radial-basis-function kernel with gamma equal to 5.5. Training was performed in approximately 10 minutes using 15,000 vectors collected from 15 training images. The resulting support vectors were applied to delineate 10 images not part of the training set. Relative overlap calculated for the subcortical structures was 0.87 for the thalamus, 0.84 for the caudate, 0.84 for the putamen, and 0.72 for the hippocampus. Relative overlap for the cerebellar lobes ranged from 0.76 to 0.86. The reliability of the SVM-based algorithm was similar to the inter-rater reliability between manual raters and can be achieved without rater intervention.
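    The radial-basis-function kernel with gamma = 5.5 can be sketched as below. Full SVM training requires solving a quadratic program, so as a hedged stand-in the example classifies a voxel's feature vector by mean kernel similarity to each class's training vectors; the feature vectors are invented, not the study's actual data.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=5.5):
    """RBF kernel matrix K[i, j] = exp(-gamma * ||x_i - y_j||^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Hypothetical 3-D feature vectors (a priori probability, coordinates, ...)
rng = np.random.default_rng(0)
structure = rng.normal(0.0, 0.1, (20, 3))        # class "structure"
background = rng.normal(1.0, 0.1, (20, 3))       # class "background"

def classify(x, gamma=5.5):
    """Assign x to the class with higher mean kernel similarity (SVM stand-in)."""
    ks = rbf_kernel(x[None, :], structure, gamma).mean()
    kb = rbf_kernel(x[None, :], background, gamma).mean()
    return "structure" if ks > kb else "background"
```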

  19. Input design for identification of aircraft stability and control derivatives

    NASA Technical Reports Server (NTRS)

    Gupta, N. K.; Hall, W. E., Jr.

    1975-01-01

    An approach for designing inputs to identify stability and control derivatives from flight test data is presented. This approach is based on finding inputs that provide the maximum possible accuracy of the derivative estimates. Two techniques of input specification are implemented for this objective: a time domain technique and a frequency domain technique. The time domain technique gives the control input time history and can be used for any allowable duration of test maneuver, including those where data lengths can only be of short duration. The frequency domain technique specifies the input frequency spectrum and is best applied to tests where extended data lengths, much longer than the time constants of the modes of interest, are possible. These techniques are used to design inputs to identify parameters in longitudinal and lateral linear models of conventional aircraft. The constraints of aircraft response limits, such as on structural loads, are realized indirectly through a total energy constraint on the input. Tests with simulated data and theoretical predictions show that the new approaches give input signals that can provide more accurate parameter estimates than conventional inputs of the same total energy. The results obtained indicate that the approach has been brought to the point where it should be used in flight tests for further evaluation.
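    The accuracy criterion behind such input design can be illustrated on a hypothetical scalar system: the Fisher information about a parameter, accumulated from the output sensitivities, bounds the estimate variance (Cramer-Rao), so inputs of equal energy can be compared by the information they yield. All constants here are invented; this is not the report's actual design procedure.

```python
import numpy as np

def info_about_a(u, a=-0.5, b=1.0, dt=0.05, sigma=0.01):
    """Fisher information about 'a' in x' = a*x + b*u, sampled each step.

    The sensitivity s = dx/da obeys s' = a*s + x (Euler discretization);
    Cramer-Rao gives var(a_hat) >= 1/J, so larger J means better accuracy.
    """
    x = s = 0.0
    J = 0.0
    for uk in u:
        s += dt * (a * s + x)          # sensitivity update (uses previous x)
        x += dt * (a * x + b * uk)     # state update
        J += (s / sigma) ** 2
    return J

n = 200
step = np.ones(n)                                     # step input
doublet = np.r_[np.ones(n // 2), -np.ones(n // 2)]    # doublet, same energy

J_step, J_doublet = info_about_a(step), info_about_a(doublet)
```

    Comparing `J_step` and `J_doublet` under the equal-energy constraint mirrors, in miniature, the trade-off the paper optimizes over whole input time histories.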

  20. Automated movement correction for dynamic PET/CT images: evaluation with phantom and patient data.

    PubMed

    Ye, Hu; Wong, Koon-Pong; Wardak, Mirwais; Dahlbom, Magnus; Kepe, Vladimir; Barrio, Jorge R; Nelson, Linda D; Small, Gary W; Huang, Sung-Cheng

    2014-01-01

    Head movement during dynamic brain PET/CT imaging results in a mismatch between the CT and dynamic PET images. It can cause artifacts in CT-based attenuation-corrected PET images, thus affecting both the qualitative and quantitative aspects of the dynamic PET images and the derived parametric images. In this study, we developed an automated retrospective image-based movement correction (MC) procedure. The MC method first registered the CT image to each dynamic PET frame, then re-reconstructed the PET frames with CT-based attenuation correction, and finally re-aligned all the PET frames to the same position. We evaluated the MC method's performance on the Hoffman phantom and on dynamic FDDNP and FDG PET/CT images of patients with neurodegenerative disease or with poor compliance. Dynamic FDDNP PET/CT images (65 min) were obtained from 12 patients and dynamic FDG PET/CT images (60 min) were obtained from 6 patients. Logan analysis with the cerebellum as the reference region was used to generate regional distribution volume ratios (DVR) for the FDDNP scans before and after MC. For the FDG studies, the image-derived input function was used to generate parametric images of the FDG uptake constant (Ki) before and after MC. The phantom study showed high registration accuracy between PET and CT and improved PET images after MC. In the patient studies, head movement was observed in all subjects, especially in late PET frames, with an average displacement of 6.92 mm. The z-direction translation (average maximum = 5.32 mm) and x-axis rotation (average maximum = 5.19 degrees) occurred most frequently. Image artifacts were significantly diminished after MC. There were significant differences (P<0.05) in the FDDNP DVR and FDG Ki values in the parietal and temporal regions after MC. In conclusion, MC applied to dynamic brain FDDNP and FDG PET/CT scans could improve the qualitative and quantitative aspects of images of both tracers.
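    The Logan reference-region analysis used to obtain DVR can be sketched as a late-time linear fit; the k2' term of the full operational equation is neglected here for simplicity, and the time-activity curves (TACs) are synthetic:

```python
import numpy as np

def logan_dvr(ct, cref, t, t_star_idx=10):
    """Reference-region Logan plot: the slope of the late-time fit gives DVR.

    y = int_0^t ct / ct(t),  x = int_0^t cref / ct(t)  (k2' term neglected).
    """
    int_ct = np.concatenate([[0], np.cumsum(np.diff(t) * (ct[1:] + ct[:-1]) / 2)])
    int_cref = np.concatenate([[0], np.cumsum(np.diff(t) * (cref[1:] + cref[:-1]) / 2)])
    y = int_ct[t_star_idx:] / ct[t_star_idx:]
    x = int_cref[t_star_idx:] / ct[t_star_idx:]
    slope, _ = np.polyfit(x, y, 1)
    return slope

t = np.linspace(0.1, 65, 60)          # minutes, matching a 65-min FDDNP study
cref = t * np.exp(-t / 20)            # hypothetical reference-region TAC
ct = 1.5 * cref                       # target TAC with DVR = 1.5 by construction
```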

  1. Automated Movement Correction for Dynamic PET/CT Images: Evaluation with Phantom and Patient Data

    PubMed Central

    Ye, Hu; Wong, Koon-Pong; Wardak, Mirwais; Dahlbom, Magnus; Kepe, Vladimir; Barrio, Jorge R.; Nelson, Linda D.; Small, Gary W.; Huang, Sung-Cheng

    2014-01-01

    Head movement during dynamic brain PET/CT imaging results in a mismatch between the CT and dynamic PET images. It can cause artifacts in CT-based attenuation-corrected PET images, thus affecting both the qualitative and quantitative aspects of the dynamic PET images and the derived parametric images. In this study, we developed an automated retrospective image-based movement correction (MC) procedure. The MC method first registered the CT image to each dynamic PET frame, then re-reconstructed the PET frames with CT-based attenuation correction, and finally re-aligned all the PET frames to the same position. We evaluated the MC method's performance on the Hoffman phantom and on dynamic FDDNP and FDG PET/CT images of patients with neurodegenerative disease or with poor compliance. Dynamic FDDNP PET/CT images (65 min) were obtained from 12 patients and dynamic FDG PET/CT images (60 min) were obtained from 6 patients. Logan analysis with the cerebellum as the reference region was used to generate regional distribution volume ratios (DVR) for the FDDNP scans before and after MC. For the FDG studies, the image-derived input function was used to generate parametric images of the FDG uptake constant (Ki) before and after MC. The phantom study showed high registration accuracy between PET and CT and improved PET images after MC. In the patient studies, head movement was observed in all subjects, especially in late PET frames, with an average displacement of 6.92 mm. The z-direction translation (average maximum = 5.32 mm) and x-axis rotation (average maximum = 5.19 degrees) occurred most frequently. Image artifacts were significantly diminished after MC. There were significant differences (P<0.05) in the FDDNP DVR and FDG Ki values in the parietal and temporal regions after MC. In conclusion, MC applied to dynamic brain FDDNP and FDG PET/CT scans could improve the qualitative and quantitative aspects of images of both tracers. PMID:25111700

  2. Spatially Pooled Contrast Responses Predict Neural and Perceptual Similarity of Naturalistic Image Categories

    PubMed Central

    Groen, Iris I. A.; Ghebreab, Sennay; Lamme, Victor A. F.; Scholte, H. Steven

    2012-01-01

    The visual world is complex and continuously changing. Yet, our brain transforms patterns of light falling on our retina into a coherent percept within a few hundred milliseconds. Possibly, low-level neural responses already carry substantial information to facilitate rapid characterization of the visual input. Here, we computationally estimated low-level contrast responses to computer-generated naturalistic images, and tested whether spatial pooling of these responses could predict image similarity at the neural and behavioral level. Using EEG, we show that statistics derived from pooled responses explain a large amount of variance between single-image evoked potentials (ERPs) in individual subjects. Dissimilarity analysis on multi-electrode ERPs demonstrated that large differences between images in pooled response statistics are predictive of more dissimilar patterns of evoked activity, whereas images with little difference in statistics give rise to highly similar evoked activity patterns. In a separate behavioral experiment, images with large differences in statistics were judged as different categories, whereas images with little differences were confused. These findings suggest that statistics derived from low-level contrast responses can be extracted in early visual processing and can be relevant for rapid judgment of visual similarity. We compared our results with two other, well-known contrast statistics: Fourier power spectra and higher-order properties of contrast distributions (skewness and kurtosis). Interestingly, whereas these statistics allow for accurate image categorization, they do not predict ERP response patterns or behavioral categorization confusions. These converging computational, neural and behavioral results suggest that statistics of pooled contrast responses contain information that corresponds with perceived visual similarity in a rapid, low-level categorization task. PMID:23093921

  3. Digital data from the Questa-San Luis and Santa Fe East helicopter magnetic surveys in Santa Fe and Taos Counties, New Mexico, and Costilla County, Colorado

    USGS Publications Warehouse

    Bankey, Viki; Grauch, V.J.S.; Drenth, B.J.; ,

    2006-01-01

    This report contains digital data, image files, and text files describing data formats and survey procedures for aeromagnetic data collected during high-resolution aeromagnetic surveys in southern Colorado and northern New Mexico in December, 2005. One survey covers the eastern edge of the San Luis basin, including the towns of Questa, New Mexico and San Luis, Colorado. A second survey covers the mountain front east of Santa Fe, New Mexico, including the town of Chimayo and portions of the Pueblos of Tesuque and Nambe. Several derivative products from these data are also presented as grids and images, including reduced-to-pole data and data continued to a reference surface. Images are presented in various formats and are intended to be used as input to geographic information systems, standard graphics software, or map plotting packages.

  4. Fusion of Local Statistical Parameters for Buried Underwater Mine Detection in Sonar Imaging

    NASA Astrophysics Data System (ADS)

    Maussang, F.; Rombaut, M.; Chanussot, J.; Hétet, A.; Amate, M.

    2008-12-01

    Detection of buried underwater objects, and especially mines, is currently a crucial strategic task. Images provided by sonar systems able to penetrate the sea floor, such as synthetic aperture sonars (SASs), are of great interest for the detection and classification of such objects. However, the signal-to-noise ratio is fairly low, and advanced information processing is required for correct and reliable detection of the echoes generated by the objects. The detection method proposed in this paper is based on a data-fusion architecture using belief theory. The input data of this architecture are local statistical characteristics extracted from SAS data, corresponding to the first-, second-, third-, and fourth-order statistical properties of the sonar images. The relevance of these parameters is derived from a statistical model of the sonar data. Numerical criteria are also proposed to estimate the detection performance and to validate the method.
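    The four local statistical parameters (first- to fourth-order) can be computed per pixel neighborhood as below; the example patch is invented, and the fusion architecture itself is not reproduced:

```python
import numpy as np

def local_stats(patch):
    """First- to fourth-order statistics of a pixel neighborhood."""
    x = np.asarray(patch, float).ravel()
    m = x.mean()                                     # first order: mean
    v = x.var()                                      # second order: variance
    sd = np.sqrt(v)
    skew = ((x - m) ** 3).mean() / sd ** 3           # third order: skewness
    kurt = ((x - m) ** 4).mean() / v ** 2 - 3.0      # fourth order: excess kurtosis
    return m, v, skew, kurt

m, v, skew, kurt = local_stats([1, 2, 3, 4, 5])
```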

  5. Phase-and-amplitude recovery from a single phase-contrast image using partially spatially coherent x-ray radiation

    NASA Astrophysics Data System (ADS)

    Beltran, Mario A.; Paganin, David M.; Pelliccia, Daniele

    2018-05-01

    A simple method of phase-and-amplitude extraction is derived that corrects for image blurring induced by partially spatially coherent incident illumination using only a single intensity image as input. The method is based on Fresnel diffraction theory for the case of high Fresnel number, merged with the space-frequency formalism used to describe partially coherent fields, and assumes the object under study is composed of a single material. A priori knowledge of the object's complex refractive index and information obtained by characterizing the spatial coherence of the source are required. The algorithm was applied to propagation-based phase-contrast data measured with a laboratory-based micro-focus x-ray source. The blurring due to the finite spatial extent of the source is embedded within the algorithm as a simple correction term to the so-called Paganin algorithm, and the method is numerically stable in the presence of noise.
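    The coherence correction term of this paper is not reproduced here, but the single-material (Paganin-type, TIE-Hom) filter it builds on can be sketched and validated self-consistently: forward-blur a contact image with the filter, then invert it. All material constants are invented.

```python
import numpy as np

def tie_hom_retrieve(I_z, dist, delta, mu, pixel):
    """Single-material phase retrieval (Paganin-type low-pass filter)."""
    ky = np.fft.fftfreq(I_z.shape[0], pixel) * 2 * np.pi
    kx = np.fft.fftfreq(I_z.shape[1], pixel) * 2 * np.pi
    k2 = ky[:, None] ** 2 + kx[None, :] ** 2
    filt = 1.0 + dist * delta / mu * k2
    I_contact = np.fft.ifft2(np.fft.fft2(I_z) / filt).real
    return -np.log(np.clip(I_contact, 1e-12, None)) / mu   # projected thickness

# Self-consistent check with hypothetical constants
delta, mu, dist, pixel = 1e-7, 50.0, 0.5, 1e-6
T = np.zeros((64, 64)); T[24:40, 24:40] = 1e-6             # 1 um thick block
I_contact = np.exp(-mu * T)
ky = np.fft.fftfreq(64, pixel) * 2 * np.pi
k2 = ky[:, None] ** 2 + ky[None, :] ** 2
I_z = np.fft.ifft2(np.fft.fft2(I_contact) * (1 + dist * delta / mu * k2)).real
T_rec = tie_hom_retrieve(I_z, dist, delta, mu, pixel)
```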

  6. Using aerial images for establishing a workflow for the quantification of water management measures

    NASA Astrophysics Data System (ADS)

    Leuschner, Annette; Merz, Christoph; van Gasselt, Stephan; Steidl, Jörg

    2017-04-01

    Quantified landscape characteristics, such as morphology, land use, or hydrological conditions, play an important role in hydrological investigations, as landscape parameters directly control the overall water balance. A powerful assimilation and geospatial analysis of remote sensing datasets, in combination with hydrological modeling, makes it possible to quantify landscape parameters and water balances efficiently. This study focuses on the development of a workflow to extract hydrologically relevant data from aerial image datasets and derived products in order to allow an effective parametrization of a hydrological model. Consistent and self-contained data sources are indispensable for achieving reasonable modeling results. To minimize uncertainties and inconsistencies, input parameters for modeling should, where possible, be extracted mainly from a single remote-sensing dataset. Here, aerial images have been chosen because their high spatial and spectral resolution permits the extraction of various model-relevant parameters, such as morphology, land use, or artificial drainage systems. The methodological repertoire for extracting environmental parameters ranges from analyses of digital terrain models to multispectral classification and segmentation of land-use distribution maps and to mapping of artificial drainage systems based on spectral and visual inspection. The workflow has been tested for a mesoscale catchment area that forms a characteristic hydrological system of a young moraine landscape in the state of Brandenburg, Germany. These datasets were used as input for multi-temporal hydrological modeling of water balances to detect and quantify anthropogenic and meteorological impacts. ArcSWAT, a GIS-implemented extension and graphical user input interface for the Soil and Water Assessment Tool (SWAT), was chosen. The results of this modeling approach provide the basis for anticipating future development of the hydrological system and for adapting water resource management decisions to system changes.

  7. A new method of building footprints detection using airborne laser scanning data and multispectral image

    NASA Astrophysics Data System (ADS)

    Luo, Yiping; Jiang, Ting; Gao, Shengli; Wang, Xin

    2010-10-01

    This paper presents a new approach for detecting building footprints that combines registered aerial imagery with multispectral bands and airborne laser scanning data, obtained synchronously by a Leica Geosystems ALS40 and an Applanix DACS-301 on the same platform. A two-step method for building detection is presented, consisting of selecting 'building' candidate points and then classifying those candidates. A digital surface model (DSM) derived from last-pulse laser scanning data was first filtered, and the laser points were classified into the classes 'ground' and 'building or tree' by a mathematical morphological filter. The 'ground' points were then resampled into a digital elevation model (DEM), and a normalized DSM (nDSM) was generated from the DEM and DSM. Candidate points were selected from the 'building or tree' points by height value and area thresholds in the nDSM. The candidate points were further classified into building points and tree points using the support vector machine (SVM) classification method. Two classification tests were carried out: one using features only from the laser scanning data, and one using combined features from both input data sources. The features included height, height finite difference, RGB band values, and so on. The RGB values of the points were acquired by matching the laser scanning data and the image using the collinearity equations. The features of the training points were presented as input data to the SVM classifier, and cross-validation was used to select the best classification parameters. The decision function was constructed from these parameters, and the class of each candidate point was determined by it. The results showed that the combined features from both input data sources were superior to features from the laser scanning data alone. An accuracy of more than 90% was achieved for buildings even with the first feature set (laser scanning data only).
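    The nDSM candidate-selection step can be sketched as follows; the grid values and the 2.5 m threshold are invented for illustration:

```python
import numpy as np

def building_candidates(dsm, dem, height_thresh=2.5):
    """Mask of 'building or tree' candidate cells: nDSM = DSM - DEM above a threshold."""
    ndsm = dsm - dem
    return ndsm > height_thresh

dem = np.zeros((5, 5))                        # flat terrain surface
dsm = np.zeros((5, 5)); dsm[1:4, 1:4] = 8.0   # an 8 m object on the surface
mask = building_candidates(dsm, dem)
```

    In the paper's pipeline an area threshold and the SVM classification follow this mask to separate buildings from trees.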

  8. 40 CFR 60.44c - Compliance and performance test methods and procedures for sulfur dioxide.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... = Fraction of the total heat input from fuel combustion derived from coal and oil, as determined by... total heat input from fuel combustion derived from coal and oil, as determined by applicable procedures... generating unit load during the 30-day period does not have to be the maximum design heat input capacity, but...

  9. Digital Data from the Great Sand Dunes and Poncha Springs Aeromagnetic Surveys, South-Central Colorado

    USGS Publications Warehouse

    Drenth, B.J.; Grauch, V.J.S.; Bankey, Viki; New Sense Geophysics, Ltd.

    2009-01-01

    This report contains digital data, image files, and text files describing data formats and survey procedures for two high-resolution aeromagnetic surveys in south-central Colorado: one in the eastern San Luis Valley, Alamosa and Saguache Counties, and the other in the southern Upper Arkansas Valley, Chaffee County. In the San Luis Valley, the Great Sand Dunes survey covers a large part of Great Sand Dunes National Park and Preserve and extends south along the mountain front to the foot of Mount Blanca. In the Upper Arkansas Valley, the Poncha Springs survey covers the town of Poncha Springs and vicinity. The digital files include grids, images, and flight-line data. Several derivative products from these data are also presented as grids and images, including two grids of reduced-to-pole aeromagnetic data and data continued to a reference surface. Images are presented in various formats and are intended to be used as input to geographic information systems, standard graphics software, or map plotting packages.

  10. Convolution Operation of Optical Information via Quantum Storage

    NASA Astrophysics Data System (ADS)

    Li, Zhixiang; Liu, Jianji; Fan, Hongming; Zhang, Guoquan

    2017-06-01

    We proposed a novel method to achieve optical convolution of two input images via quantum storage based on the electromagnetically induced transparency (EIT) effect. By placing an EIT medium in the confocal Fourier plane of the 4f imaging system, the optical convolution of the two input images can be achieved in the image plane.
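    The operation the 4f system performs is an instance of the convolution theorem: multiplying the two spectra in the Fourier plane convolves the images. A numerical sketch, checked against a direct (slow) circular convolution:

```python
import numpy as np

def conv2_circular(a, b):
    """Circular 2-D convolution via the Fourier plane: IFFT(FFT(a) * FFT(b))."""
    return np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)).real

def conv2_direct(a, b):
    """Reference: direct circular convolution (slow double loop)."""
    n, m = a.shape
    out = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            for k in range(n):
                for l in range(m):
                    out[i, j] += a[k, l] * b[(i - k) % n, (j - l) % m]
    return out

rng = np.random.default_rng(0)
a, b = rng.random((6, 6)), rng.random((6, 6))
```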

  11. Online image classification under monotonic decision boundary constraint

    NASA Astrophysics Data System (ADS)

    Lu, Cheng; Allebach, Jan; Wagner, Jerry; Pitta, Brandi; Larson, David; Guo, Yandong

    2015-01-01

    Image classification is a prerequisite for copy quality enhancement in an all-in-one (AIO) device, which comprises a printer and a scanner and can be used to scan, copy, and print. Different processing pipelines are provided in an AIO printer, each designed specifically for one type of input image to achieve optimal output image quality. A typical approach to this problem is to apply a Support Vector Machine (SVM) to classify the input image and feed it to the corresponding processing pipeline. Online SVM training can help users improve classification performance as input images accumulate. At the same time, we want to make a quick decision on the input image to speed up classification, which means the AIO device sometimes does not need to scan the entire image before making a final decision. These two constraints, online SVM training and quick decision, raise two questions: 1) what features are suitable for classification, and 2) how the decision boundary should be controlled in online SVM training. This paper discusses the compatibility of online SVM training and quick-decision capability.

  12. Flight data identification of six degree-of-freedom stability and control derivatives of a large crane type helicopter

    NASA Technical Reports Server (NTRS)

    Tomaine, R. L.

    1976-01-01

    Flight test data from a large 'crane' type helicopter were collected and processed for the purpose of identifying vehicle rigid body stability and control derivatives. The process consisted of using digital and Kalman filtering techniques for state estimation and Extended Kalman filtering for parameter identification, utilizing a least squares algorithm for initial derivative and variance estimates. Data were processed for indicated airspeeds from 0 m/sec to 152 m/sec. Pulse, doublet and step control inputs were investigated. Digital filter frequency did not have a major effect on the identification process, while the initial derivative estimates and the estimated variances had an appreciable effect on many derivative estimates. The major derivatives identified agreed fairly well with analytical predictions and engineering experience. Doublet control inputs provided better results than pulse or step inputs.

  13. The effect of input data transformations on object-based image analysis

    PubMed Central

    LIPPITT, CHRISTOPHER D.; COULTER, LLOYD L.; FREEMAN, MARY; LAMANTIA-BISHOP, JEFFREY; PANG, WYSON; STOW, DOUGLAS A.

    2011-01-01

    The effect of using spectral transform images as input data on segmentation quality and its potential effect on products generated by object-based image analysis are explored in the context of land cover classification in Accra, Ghana. Five image data transformations are compared to untransformed spectral bands in terms of their effect on segmentation quality and final product accuracy. The relationship between segmentation quality and product accuracy is also briefly explored. Results suggest that input data transformations can aid in the delineation of landscape objects by image segmentation, but the effect is idiosyncratic to the transformation and object of interest. PMID:21673829

  14. Measured Polarized Spectral Responsivity of JPSS J1 VIIRS Using the NIST T-SIRCUS

    NASA Technical Reports Server (NTRS)

    McIntire, Jeff; Young, James B.; Moyer, David; Waluschka, Eugene; Xiong, Xiaoxiong

    2015-01-01

    Recent pre-launch measurements performed on the Joint Polar Satellite System (JPSS) J1 Visible Infrared Imaging Radiometer Suite (VIIRS) using the National Institute of Standards and Technology (NIST) Traveling Spectral Irradiance and Radiance Responsivity Calibrations Using Uniform Sources (T-SIRCUS) monochromatic source have provided wavelength dependent polarization sensitivity for select spectral bands and viewing conditions. Measurements were made at a number of input linear polarization states (twelve in total) and initially at thirteen wavelengths across the bandpass (later expanded to seventeen for some cases). Using the source radiance information collected by an external monitor, a spectral responsivity function was constructed for each input linear polarization state. Additionally, an unpolarized spectral responsivity function was derived from these polarized measurements. An investigation of how the centroid, bandwidth, and detector responsivity vary with polarization state was weighted by two model input spectra to simulate both ground measurements as well as expected on-orbit conditions. These measurements will enhance our understanding of VIIRS polarization sensitivity, improve the design for future flight models, and provide valuable data to enhance product quality in the post-launch phase.
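    One common way to derive a polarization sensitivity from responses measured at a set of linear polarization states is to fit a two-cycle sinusoid over the polarization angle; this is a hedged sketch, not the actual T-SIRCUS analysis, and all numbers except the twelve-state geometry are invented.

```python
import numpy as np

def polarization_fit(theta, resp):
    """Fit R(theta) = a0 + a2*cos(2*theta) + b2*sin(2*theta) by least squares.

    Returns the polarization factor (2-theta modulation amplitude relative to
    the mean response) and the phase of the modulation.
    """
    A = np.column_stack([np.ones_like(theta), np.cos(2 * theta), np.sin(2 * theta)])
    a0, a2, b2 = np.linalg.lstsq(A, resp, rcond=None)[0]
    return np.hypot(a2, b2) / a0, 0.5 * np.arctan2(b2, a2)

# Twelve linear polarization states; synthetic 2% polarization sensitivity
theta = np.deg2rad(np.arange(12) * 15.0)
resp = 1.0 + 0.02 * np.cos(2 * theta - 0.6)
pf, phase = polarization_fit(theta, resp)
```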

  15. Approach for Input Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives

    NASA Technical Reports Server (NTRS)

    Putko, Michele M.; Taylor, Arthur C., III; Newman, Perry A.; Green, Lawrence L.

    2002-01-01

    An implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi-3D Euler CFD code is presented. Given uncertainties in statistically independent, random, normally distributed input variables, first- and second-order statistical moment procedures are performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, these moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
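    The first-order moment propagation described above can be sketched on a toy analytic function standing in for the CFD solver (the function, means, and standard deviations are invented), with a Monte Carlo check as in the paper:

```python
import numpy as np

def first_order_moments(f, grad, mu, sigma):
    """First-order approximate mean and variance for independent normal inputs.

    mean ~ f(mu),  var ~ sum_i (df/dx_i(mu) * sigma_i)^2
    """
    g = np.asarray(grad(mu))
    return f(mu), np.sum((g * np.asarray(sigma)) ** 2)

# Toy 'CFD output': f(x, y) = x^2 + 3y, with its analytic sensitivities
f = lambda v: v[0] ** 2 + 3 * v[1]
grad = lambda v: [2 * v[0], 3.0]

mu, sigma = np.array([2.0, 1.0]), np.array([0.1, 0.2])
m_approx, v_approx = first_order_moments(f, grad, mu, sigma)

# Monte Carlo comparison
rng = np.random.default_rng(0)
samples = rng.normal(mu, sigma, (200_000, 2))
mc = samples[:, 0] ** 2 + 3 * samples[:, 1]
```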

  16. A new interpretation and validation of variance based importance measures for models with correlated inputs

    NASA Astrophysics Data System (ADS)

    Hao, Wenrui; Lu, Zhenzhou; Li, Luyi

    2013-05-01

    In order to explore the contributions of correlated input variables to the variance of the output, a novel interpretation framework of importance measure indices is proposed for models with correlated inputs, comprising indices of the total correlated contribution and the total uncorrelated contribution. The proposed indices accurately describe the contributions of a correlated input to the variance of the output, and they can be viewed as a complement and correction of the interpretation of correlated-input contributions presented in "Estimation of global sensitivity indices for models with dependent variables, Computer Physics Communications, 183 (2012) 937-946". Both contain the independent contribution of an individual input. Taking the general quadratic polynomial as an illustration, the total correlated contribution and the independent contribution of an individual input are derived analytically, from which the components of both contributions, and their origins, can be clarified without ambiguity. In the special case where no square term is included in the quadratic polynomial model, the total correlated contribution of an input can be further decomposed into the variance contribution related to its correlation with the other inputs and the independent contribution of the input itself; likewise, the total uncorrelated contribution can be decomposed into an independent part due to interaction between the input and the others and an independent part due to the input itself. Numerical examples demonstrate that the derived analytical expressions of the variance-based importance measure are correct, and that this analytical clarification of correlated-input contributions is important for extending the theory and solutions for uncorrelated inputs to the correlated case.
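    For a linear model with correlated Gaussian inputs, one common variance decomposition (definitions in the literature vary, and the paper's indices may differ in detail) can be computed in closed form and checked by Monte Carlo:

```python
import numpy as np

# Linear model Y = a1*X1 + a2*X2 with correlated standard-normal inputs
a1, a2, rho = 1.0, 2.0, 0.5

var_y = a1**2 + a2**2 + 2 * a1 * a2 * rho
# Total (correlated) contribution of X1: Var(E[Y | X1]) = (a1 + a2*rho)^2
total_corr_1 = (a1 + a2 * rho) ** 2
# Total uncorrelated contribution of X1: Var(Y) - Var(E[Y | X2])
total_uncorr_1 = var_y - (a2 + a1 * rho) ** 2

# Monte Carlo check: for Gaussians, E[Y|X1] is the linear regression of Y on X1
rng = np.random.default_rng(0)
x1 = rng.normal(size=500_000)
x2 = rho * x1 + np.sqrt(1 - rho**2) * rng.normal(size=500_000)
y = a1 * x1 + a2 * x2
slope = np.polyfit(x1, y, 1)[0]
mc_total_corr_1 = slope**2 * x1.var()
```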

  17. A kernel adaptive algorithm for quaternion-valued inputs.

    PubMed

    Paul, Thomas K; Ogunfunmi, Tokunbo

    2015-10-01

    The use of quaternion data can provide benefit in applications like robotics and image recognition, and particularly for performing transforms in 3-D space. Here, we describe a kernel adaptive algorithm for quaternions. A least mean square (LMS)-based method was used, resulting in the derivation of the quaternion kernel LMS (Quat-KLMS) algorithm. Deriving this algorithm required describing the idea of a quaternion reproducing kernel Hilbert space (RKHS), as well as kernel functions suitable for quaternions. A modified HR calculus for Hilbert spaces was used to find the gradient of cost functions defined on a quaternion RKHS. In addition, the use of widely linear (or augmented) filtering is proposed to improve performance. The benefit of the Quat-KLMS and widely linear forms in learning nonlinear transformations of quaternion data is illustrated with simulations.
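    The quaternion algebra underlying such algorithms rests on the non-commutative Hamilton product; a minimal sketch (this is basic quaternion arithmetic, not the authors' Quat-KLMS algorithm):

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z) arrays."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def qconj(q):
    """Quaternion conjugate: negate the vector part."""
    return q * np.array([1.0, -1.0, -1.0, -1.0])

p = np.array([1.0, 2.0, 3.0, 4.0])
q = np.array([0.5, -1.0, 0.25, 2.0])
```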

  18. A comprehensive computational model of sound transmission through the porcine lung

    PubMed Central

    Dai, Zoujun; Peng, Ying; Henry, Brian M.; Mansy, Hansen A.; Sandler, Richard H.; Royston, Thomas J.

    2014-01-01

    A comprehensive computational simulation model of sound transmission through the porcine lung is introduced and experimentally evaluated. This “subject-specific” model utilizes parenchymal and major airway geometry derived from x-ray CT images. The lung parenchyma is modeled as a poroviscoelastic material using Biot theory. A finite element (FE) mesh of the lung that includes airway detail is created and used in comsol FE software to simulate the vibroacoustic response of the lung to sound input at the trachea. The FE simulation model is validated by comparing simulation results to experimental measurements using scanning laser Doppler vibrometry on the surface of an excised, preserved lung. The FE model can also be used to calculate and visualize vibroacoustic pressure and motion inside the lung and its airways caused by the acoustic input. The effect of diffuse lung fibrosis and of a local tumor on the lung acoustic response is simulated and visualized using the FE model. In the future, this type of visualization can be compared and matched with experimentally obtained elastographic images to better quantify regional lung material properties to noninvasively diagnose and stage disease and response to treatment. PMID:25190415

  19. A comprehensive computational model of sound transmission through the porcine lung.

    PubMed

    Dai, Zoujun; Peng, Ying; Henry, Brian M; Mansy, Hansen A; Sandler, Richard H; Royston, Thomas J

    2014-09-01

    A comprehensive computational simulation model of sound transmission through the porcine lung is introduced and experimentally evaluated. This "subject-specific" model utilizes parenchymal and major airway geometry derived from X-ray CT images. The lung parenchyma is modeled as a poroviscoelastic material using Biot theory. A finite element (FE) mesh of the lung that includes airway detail is created and used in COMSOL FE software to simulate the vibroacoustic response of the lung to sound input at the trachea. The FE simulation model is validated by comparing simulation results to experimental measurements using scanning laser Doppler vibrometry on the surface of an excised, preserved lung. The FE model can also be used to calculate and visualize vibroacoustic pressure and motion inside the lung and its airways caused by the acoustic input. The effect of diffuse lung fibrosis and of a local tumor on the lung acoustic response is simulated and visualized using the FE model. In the future, this type of visualization can be compared and matched with experimentally obtained elastographic images to better quantify regional lung material properties to noninvasively diagnose and stage disease and response to treatment.

  20. Vector generator scan converter

    DOEpatents

    Moore, James M.; Leighton, James F.

    1990-01-01

    High printing speeds for graphics data are achieved with a laser printer by transmitting compressed graphics data from a main processor over an I/O (input/output) channel to a vector generator scan converter, which reconstructs a full graphics image for input to the laser printer through a raster data input port. The vector generator scan converter includes a microprocessor with associated microcode memory containing a microcode instruction set, a working memory for storing compressed data, vector generator hardware for drawing a full graphic image from vector parameters calculated by the microprocessor, an image buffer memory for storing the reconstructed graphics image, and an output scanner for reading the graphics image data and inputting the data to the printer. The vector generator scan converter eliminates the bottleneck created by the I/O channel for transmitting graphics data from the main processor to the laser printer, and increases printer speed up to thirty-fold.
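
    The core of the vector-generator step, rasterizing a full image from line-segment parameters into a buffer, can be sketched with Bresenham's algorithm; this is a generic software stand-in, not the patent's hardware implementation:

```python
import numpy as np

def draw_vector(buf, x0, y0, x1, y1):
    """Rasterize one line segment into the image buffer (Bresenham's algorithm)."""
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        buf[y0, x0] = 1                  # set the pixel
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy; x0 += sx
        if e2 <= dx:
            err += dx; y0 += sy

buf = np.zeros((8, 8), dtype=np.uint8)   # toy image buffer memory
draw_vector(buf, 0, 0, 7, 7)             # one diagonal stroke
```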

  1. Multiple-Input Multiple-Output (MIMO) Linear Systems Extreme Inputs/Outputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smallwood, David O.

    2007-01-01

    A linear structure is excited at multiple points with a stationary normal random process. The response of the structure is measured at multiple outputs. If the autospectral densities of the inputs are specified, the phase relationships between the inputs that will minimize or maximize the trace of the autospectral density matrix of the outputs are derived. If the autospectral densities of the outputs are specified, the phase relationships between the outputs that will minimize or maximize the trace of the input autospectral density matrix are derived. It is shown that other phase relationships and ordinary coherence less than one will result in a trace intermediate between these extremes. Least favorable response and some classes of critical response are special cases of the development. It is shown that the derivation for stationary random waveforms can also be applied to nonstationary random, transient, and deterministic waveforms.
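
    A minimal numerical sketch of the extreme-response idea: for fully coherent inputs with fixed autospectra, sweeping the relative input phase traces out the extremes of the output autospectral trace, and an incoherent (or partially coherent) input falls between them. The 2x2 frequency-response matrix below is an arbitrary assumption, not from the paper:

```python
import numpy as np

# Frequency-response matrix H (2 outputs x 2 inputs) at one frequency;
# the entries are illustrative assumptions.
H = np.array([[1.0 + 0.5j, 0.3 - 0.2j],
              [0.2 + 0.1j, 0.8 + 0.4j]])

def output_trace(phase, g11=1.0, g22=1.0):
    """Trace of the output autospectral matrix Gyy = H Gxx H^H for fully
    coherent inputs with relative phase `phase` and fixed input autospectra."""
    g12 = np.sqrt(g11 * g22) * np.exp(1j * phase)   # ordinary coherence = 1
    Gxx = np.array([[g11, g12], [np.conj(g12), g22]])
    Gyy = H @ Gxx @ H.conj().T
    return Gyy.trace().real

# Sweep the relative input phase to bracket the extreme output traces.
phases = np.linspace(0.0, 2.0 * np.pi, 361)
traces = [output_trace(p) for p in phases]
```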

  2. Morphological-transformation-based technique of edge detection and skeletonization of an image using a single spatial light modulator

    NASA Astrophysics Data System (ADS)

    Munshi, Soumika; Datta, A. K.

    2003-03-01

    A technique for optically detecting the edge and skeleton of an image by defining shift operations for morphological transformation is described. A (2 × 2) source array, which acts as the structuring element of the morphological operations, casts four angularly shifted optical projections of the input image. The resulting dilated image, when superimposed with the complement of the input image, produces the edge image. For skeletonization, the source array casts four partially overlapped output images of the complemented input image, and the resultant image is recorded with a CCD camera. This overlapped eroded image is again eroded and then dilated, producing an opened image. The difference between the eroded and opened images is then computed, resulting in a thinner image. This procedure of obtaining a thinned image is iterated, maintaining the connectivity conditions, until the difference image becomes zero. The technique has been optically implemented using a single spatial light modulator and has the advantage of single-instruction parallel processing of the image. The technique has been tested for both binary and grey-scale images.
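
    The shift-based dilation and edge extraction can be sketched digitally as follows, assuming a binary image and a 2 × 2 structuring element (a software stand-in for the optical shift projections):

```python
import numpy as np

def shift_dilate(img):
    """Dilation by a 2x2 structuring element, implemented as the union of
    four shifted copies of the image (mirroring the four optical projections)."""
    out = np.zeros_like(img)
    h, w = img.shape
    for dy, dx in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        out[dy:, dx:] |= img[:h - dy, :w - dx]
    return out

def edge(img):
    """Edge image: dilated image ANDed with the complement of the input."""
    return shift_dilate(img) & (1 - img)

img = np.zeros((6, 6), dtype=np.uint8)
img[2:4, 2:4] = 1        # a 2x2 binary square
e = edge(img)            # pixels added by dilation, i.e. the boundary
```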

  3. Urban area delineation and detection of change along the urban-rural boundary as derived from LANDSAT digital data

    NASA Technical Reports Server (NTRS)

    Christenson, J. W.; Lachowski, H. M.

    1977-01-01

    LANDSAT digital multispectral scanner data, in conjunction with supporting ground truth, were investigated to determine their utility in the delineation of urban-rural boundaries. The digital data for the metropolitan areas of Washington, D.C.; Austin, Texas; and Seattle, Washington were processed using an interactive image processing system. Processing focused on identification of the major land cover types typical of the zone of transition from urban to rural landscape, and definition of their spectral signatures. Census tract boundaries were input into the interactive image processing system along with the LANDSAT single-date and overlaid multiple-date MSS data. Results of this investigation indicate that satellite-collected information has practical application to the problems of urban area delineation and change detection along the urban-rural boundary.

  4. An Efficient Method to Detect Mutual Overlap of a Large Set of Unordered Images for Structure-From

    NASA Astrophysics Data System (ADS)

    Wang, X.; Zhan, Z. Q.; Heipke, C.

    2017-05-01

    Recently, low-cost 3D reconstruction based on images has become a popular focus of photogrammetry and computer vision research. Methods which can handle an arbitrary geometric setup of a large number of unordered and convergent images are of particular interest. However, determining the mutual overlap poses a considerable challenge. We propose a new method which was inspired by, and improves upon, methods employing random k-d forests for this task. Specifically, we first derive features from the images, and then a random k-d forest is used to find the nearest neighbours in feature space. Subsequently, the degree of similarity between individual images, the image overlaps, and thus the images belonging to a common block are calculated as input to a structure-from-motion (SfM) pipeline. In our experiments we show the general applicability of the new method and compare it with other methods by analyzing time efficiency. Orientations and 3D reconstructions were successfully conducted by SfM with our overlap graphs. The results show a speed-up of a factor of 80 compared to conventional pairwise matching, and of 8 and 2 compared to the VocMatch approach using 1 and 4 CPUs, respectively.
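
    A brute-force stand-in illustrates the overlap-graph construction: find each image's nearest neighbours in feature space and treat shared neighbours as candidate overlap pairs. A random k-d forest replaces the exhaustive distance computation below with a fast approximate search; the toy descriptors are random, not the features derived in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in for per-image descriptors (one feature vector per image).
features = rng.normal(size=(50, 16))

def nearest_neighbours(F, k=5):
    """Brute-force k nearest neighbours in feature space; a k-d forest
    would answer the same queries approximately but much faster."""
    d = np.linalg.norm(F[:, None, :] - F[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # exclude self-matches
    return np.argsort(d, axis=1)[:, :k]

nn = nearest_neighbours(features)
# Images sharing neighbours form candidate overlap pairs for the SfM pipeline.
overlap_graph = {i: list(nn[i]) for i in range(len(features))}
```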

  5. Computed Tomography Image Origin Identification Based on Original Sensor Pattern Noise and 3-D Image Reconstruction Algorithm Footprints.

    PubMed

    Duan, Yuping; Bouslimi, Dalel; Yang, Guanyu; Shu, Huazhong; Coatrieux, Gouenou

    2017-07-01

    In this paper, we focus on the "blind" identification of the computed tomography (CT) scanner that has produced a CT image. To do so, we propose a set of noise features derived from the image acquisition chain which can be used as a CT-scanner footprint. We propose two approaches. The first aims at identifying a CT scanner based on an original sensor pattern noise (OSPN) that is intrinsic to its X-ray detectors. The second identifies an acquisition system based on the way this noise is modified by its three-dimensional (3-D) image reconstruction algorithm. As these reconstruction algorithms are manufacturer dependent and kept secret, our features are used as input to train a support vector machine (SVM) based classifier to discriminate acquisition systems. Experiments conducted on images from 15 different CT-scanner models of 4 distinct manufacturers demonstrate that our system identifies the origin of a CT image with a detection rate of at least 94%, and that it achieves better performance than the sensor pattern noise (SPN) strategy proposed for consumer camera devices.
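
    The sensor-pattern-noise idea can be sketched as follows: a noise residual (image minus a denoised copy) retains the detector's fixed pattern, and residuals from the same device correlate. This is a simplified stand-in with local-mean denoising and synthetic patterns; the paper's OSPN features and SVM classifier are not reproduced:

```python
import numpy as np

def noise_residual(img, k=3):
    """Noise residual: image minus a k x k local-mean denoised copy
    (real SPN systems use stronger, e.g. wavelet-based, denoising)."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    h, w = img.shape
    smooth = sum(padded[dy:dy + h, dx:dx + w]
                 for dy in range(k) for dx in range(k)) / k ** 2
    return img - smooth

def ncc(a, b):
    """Normalized cross-correlation, used to match a residual to a reference."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

# Synthetic demonstration: two devices with different fixed noise patterns.
rng = np.random.default_rng(3)
pattern_1 = rng.normal(0.0, 1.0, (32, 32))   # device 1 fixed pattern
pattern_2 = rng.normal(0.0, 1.0, (32, 32))   # device 2 fixed pattern
content_a = np.linspace(0.0, 1.0, 32)[None, :] * np.ones((32, 1))
content_b = np.linspace(1.0, 0.0, 32)[:, None] * np.ones((1, 32))

res_a = noise_residual(content_a + pattern_1)   # device 1, scene A
res_b = noise_residual(content_b + pattern_1)   # device 1, scene B
res_c = noise_residual(content_b + pattern_2)   # device 2, scene B
```

Residuals from the same simulated device correlate strongly despite different scene content, while residuals from different devices do not.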

  6. Mid-space-independent deformable image registration.

    PubMed

    Aganj, Iman; Iglesias, Juan Eugenio; Reuter, Martin; Sabuncu, Mert Rory; Fischl, Bruce

    2017-05-15

    Aligning images in a mid-space is a common approach to ensuring that deformable image registration is symmetric - that it does not depend on the arbitrary ordering of the input images. The results are, however, generally dependent on the mathematical definition of the mid-space. In particular, the set of possible solutions is typically restricted by the constraints that are enforced on the transformations to prevent the mid-space from drifting too far from the native image spaces. The use of an implicit atlas has been proposed as an approach to mid-space image registration. In this work, we show that when the atlas is aligned to each image in the native image space, the data term of implicit-atlas-based deformable registration is inherently independent of the mid-space. In addition, we show that the regularization term can be reformulated independently of the mid-space as well. We derive a new symmetric cost function that only depends on the transformation morphing the images to each other, rather than to the atlas. This eliminates the need for anti-drift constraints, thereby expanding the space of allowable deformations. We provide an implementation scheme for the proposed framework, and validate it through diffeomorphic registration experiments on brain magnetic resonance images. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Mid-Space-Independent Deformable Image Registration

    PubMed Central

    Aganj, Iman; Iglesias, Juan Eugenio; Reuter, Martin; Sabuncu, Mert Rory; Fischl, Bruce

    2017-01-01

    Aligning images in a mid-space is a common approach to ensuring that deformable image registration is symmetric – that it does not depend on the arbitrary ordering of the input images. The results are, however, generally dependent on the mathematical definition of the mid-space. In particular, the set of possible solutions is typically restricted by the constraints that are enforced on the transformations to prevent the mid-space from drifting too far from the native image spaces. The use of an implicit atlas has been proposed as an approach to mid-space image registration. In this work, we show that when the atlas is aligned to each image in the native image space, the data term of implicit-atlas-based deformable registration is inherently independent of the mid-space. In addition, we show that the regularization term can be reformulated independently of the mid-space as well. We derive a new symmetric cost function that only depends on the transformation morphing the images to each other, rather than to the atlas. This eliminates the need for anti-drift constraints, thereby expanding the space of allowable deformations. We provide an implementation scheme for the proposed framework, and validate it through diffeomorphic registration experiments on brain magnetic resonance images. PMID:28242316

  8. Design of a 3T preamplifier whose stability is insensitive to coil loading

    NASA Astrophysics Data System (ADS)

    Cao, Xueming; Fischer, Elmar; Korvink, Jan G.; Gruschke, Oliver; Hennig, Jürgen; Zaitsev, Maxim

    2016-04-01

    In MRI (magnetic resonance imaging), preamplifiers are needed to amplify signals obtained from MRI receiver coils. Under various loading conditions of the corresponding receiver coils, preamplifiers see different source impedances at their input and may become unstable. Therefore, preamplifiers whose stability is not sensitive to coil loading are desirable. In this article, a coil-loading-insensitive preamplifier for MRI is presented, derived from an unstable preamplifier. Different approaches to improve stability were used during this derivation. Since a very low noise factor is essential for MRI preamplifiers, noise contributions from passive components in the MRI preamplifier have to be considered during the stabilization process. As a result, the initially unstable preamplifier became stable with regard to coil loading, while other MRI requirements, such as the extremely low noise factor, were still fulfilled. The newly designed preamplifier was manufactured, characterized, and tested in the MRI spectrometer. Compared to a commercially available preamplifier, the newly designed preamplifier has similar imaging performance but offers other advantages, such as smaller size and better stability. Furthermore, the presented stabilization approaches can be generalized to stabilize other unstable low-noise amplifiers.

  9. Team Electronic Gameplay Combining Different Means of Control

    NASA Technical Reports Server (NTRS)

    Palsson, Olafur S. (Inventor); Pope, Alan T. (Inventor)

    2014-01-01

    Disclosed are methods and apparatuses for modifying the effect of an operator-controlled input device on an interactive device to encourage the self-regulation of at least one physiological activity by a person different from the operator. The interactive device comprises a display area which depicts images and apparatus for receiving at least one input from the operator-controlled input device, thus permitting the operator to control and interact with at least some of the depicted images. One effect modification comprises measurement of the physiological activity of a person different from the operator, while modifying the ability of the operator to control and interact with at least some of the depicted images by modifying the input from the operator-controlled input device in response to changes in the measured physiological signal.

  10. Predictive Sea State Estimation for Automated Ride Control and Handling - PSSEARCH

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terrance L.; Howard, Andrew B.; Aghazarian, Hrand; Rankin, Arturo L.

    2012-01-01

    PSSEARCH provides predictive sea state estimation, coupled with closed-loop feedback control, for automated ride control. It enables a manned or unmanned watercraft to determine the 3D map and sea state conditions in its vicinity in real time. Adaptive path-planning/replanning software and a control surface management system then use this information to choose the best settings and heading relative to the seas for the watercraft. PSSEARCH looks ahead and anticipates the potential impact of waves on the boat and is used in a tight control loop to adjust trim tabs, course, and throttle settings. The software uses sensory inputs including IMU (Inertial Measurement Unit), stereo, radar, etc., to determine the sea state and wave conditions (wave height, frequency, wave direction) in the vicinity of a rapidly moving boat. This information can then be used to plot a safe path through the oncoming waves. The main issues in determining a safe path for sea surface navigation are: (1) deriving a 3D map of the surrounding environment, (2) extracting hazards and sea surface state from the imaging sensors/map, and (3) planning a path and control surface settings that avoid the hazards, accomplish the mission navigation goals, and mitigate crew injuries from excessive heave, pitch, and roll accelerations, while taking into account the dynamics of the sea surface state. The first part is solved using a wide-baseline stereo system, where 3D structure is determined from two calibrated pairs of visual imagers. Once the 3D map is derived, anything above the sea surface is classified as a potential hazard, and a surface analysis gives a static snapshot of the waves. Dynamics of the wave features are obtained from a frequency analysis of motion vectors derived from the orientation of the waves over a sequence of inputs. Fusion of the dynamic wave patterns with the 3D maps and the IMU outputs is used for efficient safe path planning.

  11. Demonstration of the reproducibility of free-breathing diffusion-weighted MRI and dynamic contrast enhanced MRI in children with solid tumours: a pilot study.

    PubMed

    Miyazaki, Keiko; Jerome, Neil P; Collins, David J; Orton, Matthew R; d'Arcy, James A; Wallace, Toni; Moreno, Lucas; Pearson, Andrew D J; Marshall, Lynley V; Carceller, Fernando; Leach, Martin O; Zacharoulis, Stergios; Koh, Dow-Mu

    2015-09-01

    The objective is to examine the reproducibility of functional MR imaging in children with solid tumours using quantitative parameters derived from diffusion-weighted (DW-) and dynamic contrast enhanced (DCE-) MRI. Patients under 16 years of age with a confirmed diagnosis of solid tumours (n = 17) underwent free-breathing DW-MRI and DCE-MRI on a 1.5 T system, repeated 24 hours later. DW-MRI (6 b-values, 0-1000 sec/mm(2)) enabled monoexponential apparent diffusion coefficient estimation using all b-values (ADC0-1000) and only those ≥100 sec/mm(2) (ADC100-1000). DCE-MRI was used to derive the transfer constant (K(trans)), the efflux constant (kep), the extracellular extravascular volume (ve), and the plasma fraction (vp), using a study-cohort arterial input function (AIF) and the extended Tofts model. The initial area under the gadolinium enhancement curve and pre-contrast T1 were also calculated. Percentage coefficients of variation (CV) of all parameters were calculated. The most reproducible cohort parameters were ADC100-1000 (CV = 3.26%), pre-contrast T1 (CV = 6.21%), and K(trans) (CV = 15.23%). The ADC100-1000 was more reproducible than ADC0-1000, especially extracranially (CV = 2.40% vs. 2.78%). The AIF (n = 9) derived from this paediatric population exhibited sharper and earlier first-pass and recirculation peaks compared with the adult population average reported in the literature. Free-breathing functional imaging protocols including DW-MRI and DCE-MRI are well tolerated in children aged 6-15, with good to moderate measurement reproducibility. • Diffusion MRI protocol is feasible and well tolerated in a paediatric oncology population. • DCE-MRI for pharmacokinetic evaluation is feasible and well tolerated in a paediatric oncology population. • The paediatric arterial input function (AIF) shows systematic differences from the adult population-average AIF. • Variations of quantitative parameters from paired functional MRI measurements were within 20%.
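
    The monoexponential ADC estimation above can be sketched as a log-linear fit of S(b) = S0 · exp(-b · ADC); the intermediate b-values and the tissue parameters below are illustrative assumptions (the study used 6 b-values spanning 0-1000 sec/mm(2)):

```python
import numpy as np

def fit_adc(b_values, signals):
    """Monoexponential ADC fit: S(b) = S0 * exp(-b * ADC),
    estimated by linear regression on log(S)."""
    slope, intercept = np.polyfit(np.asarray(b_values, dtype=float),
                                  np.log(signals), 1)
    return -slope   # ADC in mm^2/sec when b is in sec/mm^2

b = [0, 50, 100, 300, 600, 1000]     # sec/mm^2 (intermediate values assumed)
s0, adc_true = 100.0, 1.2e-3         # hypothetical tissue values
s = [s0 * np.exp(-bi * adc_true) for bi in b]

adc_all = fit_adc(b, s)              # ADC0-1000: all b-values
adc_hi  = fit_adc(b[2:], s[2:])      # ADC100-1000: b >= 100 sec/mm^2 only
```

On noiseless monoexponential data the two estimates coincide; in real tissue they differ at low b-values because of perfusion effects, which is why the b ≥ 100 fit proved more reproducible.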

  12. Image-derived input function in PET brain studies: blood-based methods are resistant to motion artifacts.

    PubMed

    Zanotti-Fregonara, Paolo; Liow, Jeih-San; Comtat, Claude; Zoghbi, Sami S; Zhang, Yi; Pike, Victor W; Fujita, Masahiro; Innis, Robert B

    2012-09-01

    Image-derived input function (IDIF) from carotid arteries is an elegant alternative to full arterial blood sampling for brain PET studies. However, a recent study using blood-free IDIFs found that this method is particularly vulnerable to patient motion. The present study used both simulated and clinical [11C](R)-rolipram data to assess the robustness of a blood-based IDIF method (a method that is ultimately normalized with blood samples) with regard to motion artifacts. The impact of motion on the accuracy of IDIF was first assessed with an analytical simulation of a high-resolution research tomograph using a numerical phantom of the human brain, equipped with internal carotids. Different degrees of translational (from 1 to 20 mm) and rotational (from 1 to 15°) motions were tested. The impact of motion was then tested on the high-resolution research tomograph dynamic scans of three healthy volunteers, reconstructed with and without an online motion correction system. IDIFs and Logan-distribution volume (VT) values derived from simulated and clinical scans with motion were compared with those obtained from the scans with motion correction. In the phantom scans, the difference in the area under the curve (AUC) for the carotid time-activity curves was up to 19% for rotations and up to 66% for translations compared with the motionless simulation. However, for the final IDIFs, which were fitted to blood samples, the AUC difference was 11% for rotations and 8% for translations. Logan-VT errors were always less than 10%, except for the maximum translation of 20 mm, in which the error was 18%. Errors in the clinical scans without motion correction appeared to be minor, with differences in AUC and Logan-VT always less than 10% compared with scans with motion correction. When a blood-based IDIF method is used for neurological PET studies, the motion of the patient affects IDIF estimation and kinetic modeling only minimally.

  13. Effects of spatial resolution ratio in image fusion

    USGS Publications Warehouse

    Ling, Y.; Ehlers, M.; Usery, E.L.; Madden, M.

    2008-01-01

    In image fusion, the spatial resolution ratio can be defined as the ratio between the spatial resolution of the high-resolution panchromatic image and that of the low-resolution multispectral image. This paper attempts to assess the effects of the spatial resolution ratio of the input images on the quality of the fused image. Experimental results indicate that a spatial resolution ratio of 1:10 or higher is desired for optimal multisensor image fusion provided the input panchromatic image is not downsampled to a coarser resolution. Due to the synthetic pixels generated from resampling, the quality of the fused image decreases as the spatial resolution ratio decreases (e.g. from 1:10 to 1:30). However, even with a spatial resolution ratio as small as 1:30, the quality of the fused image is still better than the original multispectral image alone for feature interpretation. In cases where the spatial resolution ratio is too small (e.g. 1:30), to obtain better spectral integrity of the fused image, one may downsample the input high-resolution panchromatic image to a slightly lower resolution before fusing it with the multispectral image.

  14. Comparative performance analysis of cervix ROI extraction and specular reflection removal algorithms for uterine cervix image analysis

    NASA Astrophysics Data System (ADS)

    Xue, Zhiyun; Antani, Sameer; Long, L. Rodney; Jeronimo, Jose; Thoma, George R.

    2007-03-01

    Cervicography is a technique for visual screening of uterine cervix images for cervical cancer. One of our research goals is the automated detection in these images of acetowhite (AW) lesions, which are sometimes correlated with cervical cancer. These lesions are characterized by the whitening of regions along the squamocolumnar junction on the cervix when treated with 5% acetic acid. Image preprocessing is required prior to invoking AW detection algorithms on cervicographic images for two reasons: (1) to remove Specular Reflections (SR) caused by camera flash, and (2) to isolate the cervix region-of-interest (ROI) from image regions that are irrelevant to the analysis. These image regions may contain medical instruments, film markup, or other non-cervix anatomy or regions, such as vaginal walls. We have qualitatively and quantitatively evaluated the performance of alternative preprocessing algorithms on a test set of 120 images. For cervix ROI detection, all approaches use a common feature set, but with varying combinations of feature weights, normalization, and clustering methods. For SR detection, while one approach uses a Gaussian Mixture Model on an intensity/saturation feature set, a second approach uses Otsu thresholding on a top-hat transformed input image. Empirical results are analyzed to derive conclusions on the performance of each approach.
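
    The second SR-detection approach relies on Otsu thresholding; a generic implementation on a synthetic bimodal sample (standing in for a top-hat transformed image, which is not reproduced here) can be sketched as:

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Otsu's method: choose the threshold maximizing between-class variance."""
    hist, edges = np.histogram(values.ravel(), bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                 # probability of the lower class
    m = np.cumsum(p * centers)        # cumulative mean
    mt = m[-1]                        # global mean
    with np.errstate(divide='ignore', invalid='ignore'):
        var_between = (mt * w0 - m) ** 2 / (w0 * (1.0 - w0))
    var_between = np.nan_to_num(var_between)
    return centers[np.argmax(var_between)]

# Synthetic bimodal intensities: dark background plus bright specular spots.
rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(0.2, 0.02, 500),
                         rng.normal(0.8, 0.02, 500)])
t = otsu_threshold(pixels)            # lands between the two modes
```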

  15. Correlation of iodine uptake and perfusion parameters between dual-energy CT imaging and first-pass dual-input perfusion CT in lung cancer.

    PubMed

    Chen, Xiaoliang; Xu, Yanyan; Duan, Jianghui; Li, Chuandong; Sun, Hongliang; Wang, Wu

    2017-07-01

    To investigate the potential relationship between perfusion parameters from first-pass dual-input perfusion computed tomography (DI-PCT) and iodine uptake levels estimated from dual-energy CT (DE-CT). The pre-experimental part of this study included a dynamic DE-CT protocol in 15 patients to evaluate peak arterial enhancement of lung cancer based on time-attenuation curves, from which the scan time of DE-CT was determined. In the prospective part of the study, 28 lung cancer patients underwent whole-volume perfusion CT and single-source DE-CT using 320-row CT. Pulmonary flow (PF, mL/min/100 mL), aortic flow (AF, mL/min/100 mL), and a perfusion index (PI = PF/[PF + AF]) were automatically generated by in-house commercial software using the dual-input maximum slope method for DI-PCT. For the dual-energy CT data, iodine uptake was estimated by the difference (λ) and the slope (λHU). λ was defined as the difference in CT values between the 40 and 70 keV monochromatic images in lung lesions, and λHU was calculated as λHU = |λ/(70 - 40)|. The DI-PCT and DE-CT parameters were analyzed by Pearson/Spearman correlation analysis, respectively. All subjects were pathologically proven lung cancer patients (16 squamous cell carcinoma, 8 adenocarcinoma, and 4 small cell lung cancer) by surgery or CT-guided biopsy. Interobserver reproducibility for DI-PCT (PF, AF, PI) and DE-CT (λ, λHU) was good to excellent (intraclass correlation coefficients 0.8726-0.9255 and 0.8179-0.8842; 0.8881-0.9177, 0.9820-0.9970, and 0.9780-0.9971, respectively). Correlation coefficients between λ and AF and PF were 0.589 (P < .01) and 0.383 (P < .05), respectively; correlation coefficients between λHU and AF and PF were 0.564 (P < .01) and 0.388 (P < .05), respectively. Both the single-source DE-CT and dual-input CT perfusion analysis methods can be applied to assess the blood supply of lung cancer patients. Preliminary results demonstrated that the iodine-uptake-related parameters derived from DE-CT significantly correlated with perfusion parameters derived from DI-PCT.
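
    The two DE-CT iodine-uptake metrics reduce to a two-line computation (the HU values below are hypothetical):

```python
def iodine_slope(hu_40kev, hu_70kev):
    """Spectral curve metrics from two monochromatic CT images:
    lambda = HU(40 keV) - HU(70 keV); lambda_HU = |lambda| / (70 - 40)."""
    lam = hu_40kev - hu_70kev
    return lam, abs(lam) / (70 - 40)

# Hypothetical lesion attenuation at the two monochromatic energies:
lam, lam_hu = iodine_slope(120.0, 60.0)
```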

  16. Relative sensitivities of DCE-MRI pharmacokinetic parameters to arterial input function (AIF) scaling

    NASA Astrophysics Data System (ADS)

    Li, Xin; Cai, Yu; Moloney, Brendan; Chen, Yiyi; Huang, Wei; Woods, Mark; Coakley, Fergus V.; Rooney, William D.; Garzotto, Mark G.; Springer, Charles S.

    2016-08-01

    Dynamic-Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) has been used widely for clinical applications. Pharmacokinetic modeling of DCE-MRI data, which extracts quantitative contrast reagent/tissue-specific model parameters, is the most investigated method. One of the primary challenges in pharmacokinetic analysis of DCE-MRI data is accurate and reliable measurement of the arterial input function (AIF), which is the driving force behind all pharmacokinetics. Because of effects such as inflow and partial volume averaging, AIFs measured from individual arteries sometimes require amplitude scaling for better representation of the blood contrast reagent (CR) concentration time-courses. Empirical approaches like blinded AIF estimation or reference tissue AIF derivation can be useful and practical, especially when there is no clearly visible blood vessel within the imaging field-of-view (FOV). Similarly, these approaches generally also require magnitude scaling of the derived AIF time-courses. Since the AIF varies among individuals even with the same CR injection protocol, and the perfect scaling factor for reconstructing the ground-truth AIF often remains unknown, variations in estimated pharmacokinetic parameters due to varying AIF scaling factors are of special interest. In this work, using simulated and real prostate cancer DCE-MRI data, we examined parameter variations associated with AIF scaling. Our results show that, for both the fast-exchange-limit (FXL) Tofts model and the water exchange sensitized fast-exchange-regime (FXR) model, the commonly fitted CR transfer constant (Ktrans) and the extravascular, extracellular volume fraction (ve) scale nearly proportionally with the AIF, whereas the FXR-specific unidirectional cellular water efflux rate constant, kio, and the CR intravasation rate constant, kep, are both insensitive to AIF scaling. This indicates that, for DCE-MRI of prostate cancer and possibly other cancers, kio and kep may be more suitable imaging biomarkers for cross-platform, multicenter applications. Data from our limited study cohort show that kio correlates with Gleason scores, suggesting that it may be a useful biomarker for monitoring prostate cancer disease progression.
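
    The AIF-scaling behavior of the standard Tofts model can be checked numerically: scaling the AIF by s while dividing Ktrans and ve by s leaves the tissue curve unchanged, so the fitted Ktrans and ve track the AIF scale while kep = Ktrans/ve does not. The gamma-variate AIF and parameter values below are toy assumptions:

```python
import numpy as np

def tofts_ct(t, cp, ktrans, ve):
    """Standard Tofts model, Ct(t) = Ktrans * int_0^t Cp(u) exp(-kep (t-u)) du
    with kep = Ktrans / ve, evaluated by discrete convolution."""
    dt = t[1] - t[0]
    kep = ktrans / ve
    kernel = np.exp(-kep * t)
    return ktrans * np.convolve(cp, kernel)[:len(t)] * dt

t = np.linspace(0.0, 5.0, 501)          # minutes
cp = 5.0 * t * np.exp(-2.0 * t)         # toy gamma-variate AIF

ct = tofts_ct(t, cp, ktrans=0.1, ve=0.2)

# Scale the AIF by s and divide Ktrans and ve by s: kep is unchanged,
# and so is the tissue concentration curve.
s = 2.0
ct_scaled = tofts_ct(t, s * cp, ktrans=0.1 / s, ve=0.2 / s)
```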

  17. Combining Vegetation Index Derived from PhenoCam with EVI to Estimate Daily GPP in Semi-arid Grassland

    NASA Astrophysics Data System (ADS)

    Wang, H.; Jia, G.

    2017-12-01

    Accurate estimation of temporally continuous gross primary production (GPP) plays an important role in the mechanistic understanding of the global carbon budget and the exchange between the atmosphere and terrestrial ecosystems. Ground-based PhenoCams can provide near-surface observations of plant phenology with high temporal resolution and have great potential for modeling seasonal dynamics of GPP. However, due to the empirical approaches used for estimating fAPAR, some uncertainties remain in adopting PhenoCam images in GPP modeling. In this study, we combined the excess green index (EGI) derived from PhenoCam with EVI retrieved from MODIS to generate a daily time-series of fAPAR (fAPARcam), and then estimated daily GPP (GPPpre) with a light use efficiency model in semi-arid grassland from 2012 to 2014. In all three years, daily fAPARcam exhibited temporal behavior similar to that of eddy-covariance-observed GPP (GPPobs). The overall determination coefficients (R2) were all greater than 0.81. GPPpre agreed well with GPPobs, and these agreements were highly statistically significant (p < 0.01): R2 ranged from 0.80 to 0.87, RE ranged from -2.9% to 2.81%, and RMSE ranged from 0.83 to 0.98 gC m-2 d-1. GPPpre was then further evaluated by comparison with MODIS GPP products and VPM-modeled GPP; this validation showed that the variance explained by GPPpre was still the highest, and its RMSE and RE were generally lower than those of the other two. The explanatory power of the inputs in GPP modeling was also explored: fAPAR is the most influential input, followed by PAR, while the contributions of Tscalar and Wscalar are lower than that of PAR. These results highlight the potential of PhenoCam images for high-temporal-resolution GPP modeling. Our GPP modeling method will help to reduce the uncertainties of using PhenoCam images in monitoring the seasonal development of vegetation production.
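
    A light-use-efficiency GPP model of the kind used here has the general form GPP = eps_max × Tscalar × Wscalar × fAPAR × PAR; the sketch below uses illustrative parameter values (eps_max and the daily inputs are assumptions, not the study's calibration):

```python
def gpp_lue(par, fapar, t_scalar, w_scalar, eps_max=1.8):
    """Light-use-efficiency GPP model (minimal sketch):
    GPP = eps_max * Tscalar * Wscalar * fAPAR * PAR.
    eps_max in gC/MJ and PAR in MJ m^-2 d^-1 are illustrative assumptions;
    Tscalar and Wscalar down-regulate for temperature and water stress."""
    return eps_max * t_scalar * w_scalar * fapar * par

# One hypothetical day in the growing season:
gpp = gpp_lue(par=8.0, fapar=0.6, t_scalar=0.9, w_scalar=0.8)
```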

  18. Image Enhancement via Subimage Histogram Equalization Based on Mean and Variance

    PubMed Central

    2017-01-01

    This paper puts forward a novel image enhancement method via Mean and Variance based Subimage Histogram Equalization (MVSIHE), which effectively increases the contrast of the input image with brightness and details well preserved compared with some other methods based on histogram equalization (HE). Firstly, the histogram of the input image is divided into four segments based on the mean and variance of the luminance component, and the histogram bins of each segment are modified and equalized, respectively. Secondly, the result is obtained via concatenation of the processed subhistograms. Lastly, a normalization method is applied to the intensity levels, and the processed image is integrated with the input image. 100 benchmark images from the public CVG-UGR image database are used for comparison with other state-of-the-art methods. The experimental results show that the algorithm can not only enhance image information effectively but also preserve the brightness and details of the original image well. PMID:29403529
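
    A rough sketch of the segment-wise equalization idea: split the intensity range at m - s, m, and m + s (m = mean, s = standard deviation of luminance) and equalize each segment independently. This is a simplified rank-based equalizer, not the exact MVSIHE bin-modification and blending scheme:

```python
import numpy as np

def segment_equalize(img):
    """Equalize each of the four intensity segments [min, m-s], [m-s, m],
    [m, m+s], [m+s, max] independently (rank-based, illustrative only)."""
    m, s = img.mean(), img.std()
    bounds = [img.min(), m - s, m, m + s, img.max()]
    out = np.zeros_like(img, dtype=float)
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        mask = (img >= lo) & (img <= hi)
        if mask.sum() < 2 or hi <= lo:
            out[mask] = img[mask]
            continue
        vals = img[mask].astype(float)
        ranks = vals.argsort().argsort()          # per-segment empirical CDF
        out[mask] = lo + (hi - lo) * ranks / (mask.sum() - 1)
    return out

rng = np.random.default_rng(2)
img = rng.uniform(0.0, 1.0, (32, 32))
enhanced = segment_equalize(img)
```

Equalizing within each segment stretches local contrast while the segment boundaries keep the overall brightness distribution close to the input, which is the stated goal of the method.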

  19. A text input system developed by using lips image recognition based LabVIEW for the seriously disabled.

    PubMed

    Chen, S C; Shao, C L; Liang, C K; Lin, S W; Huang, T H; Hsieh, M C; Yang, C H; Luo, C H; Wuo, C M

    2004-01-01

    In this paper, we present a text input system for the seriously disabled based on lips image recognition implemented in LabVIEW. The system is divided into a software subsystem and a hardware subsystem. In the software subsystem, image processing techniques are used to recognize whether the mouth is open or closed from the relative distance between the upper lip and the lower lip. In the hardware subsystem, the parallel port built into the PC transmits the recognized mouth status to the Morse-code text input system. Integrating the two subsystems, we implemented a text input system using lips image recognition programmed in the LabVIEW language. We hope the system can help the seriously disabled communicate with others more easily.
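    A hypothetical sketch of the recognition-to-text pipeline: threshold the lip distance per video frame into open/closed, turn short open runs into Morse dots and long runs into dashes, then decode. The threshold, frame counts, and the tiny Morse table are illustrative assumptions, not the paper's parameters.

```python
# Illustrative mouth-state -> Morse -> text sketch (all values hypothetical).

MORSE = {'.': 'E', '-': 'T', '.-': 'A', '-.': 'N', '...': 'S', '---': 'O'}

def mouth_states(distances, threshold=10):
    """1 = mouth open, 0 = mouth closed, from upper-lower lip distances."""
    return [1 if d > threshold else 0 for d in distances]

def to_morse(states, dot_max_frames=3):
    """Each contiguous open run becomes a dot (short) or a dash (long)."""
    symbols, run = [], 0
    for s in states + [0]:              # trailing 0 closes a final open run
        if s:
            run += 1
        elif run:
            symbols.append('.' if run <= dot_max_frames else '-')
            run = 0
    return ''.join(symbols)

def decode(distances):
    return MORSE.get(to_morse(mouth_states(distances)), '?')

print(decode([2, 2, 14, 2, 2]))              # short open run -> 'E'
print(decode([2, 15, 15, 15, 15, 15, 2]))    # long open run  -> 'T'
```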

  20. Diagnostic accuracy of dynamic contrast-enhanced MR imaging using a phase-derived vascular input function in the preoperative grading of gliomas.

    PubMed

    Nguyen, T B; Cron, G O; Mercier, J F; Foottit, C; Torres, C H; Chakraborty, S; Woulfe, J; Jansen, G H; Caudrelier, J M; Sinclair, J; Hogan, M J; Thornhill, R E; Cameron, I G

    2012-09-01

    Estimates of tumor plasma volume and K(trans) obtained with DCE MR imaging can be inaccurate when the VIF is poorly estimated. In this study, we evaluated the diagnostic accuracy of a novel technique using a phase-derived VIF and "bookend" T1 measurements in the preoperative grading of patients with suspected gliomas. This prospective study included 46 patients with a new pathologically confirmed diagnosis of glioma. Both magnitude and phase images were acquired during DCE MR imaging to estimate K(trans)_φ and V(p)_φ (calculated from a phase-derived VIF and bookend T1 measurements) as well as K(trans)_SI and V(p)_SI (calculated from a magnitude-derived VIF without T1 measurements). Median K(trans)_φ values were 0.0041 minutes(-1) (95% CI, 0.00062-0.033), 0.031 minutes(-1) (0.011-0.150), and 0.088 minutes(-1) (0.069-0.110) for grade II, III, and IV gliomas, respectively (P ≤ .05 for each). Median V(p)_φ values were 0.64 mL/100 g (0.06-1.40), 0.98 mL/100 g (0.34-2.20), and 2.16 mL/100 g (1.8-3.1), with P = .15 between grade II and III gliomas and P = .015 between grade III and IV gliomas. In differentiating low-grade from high-grade gliomas, AUCs for K(trans)_φ, V(p)_φ, K(trans)_SI, and V(p)_SI were 0.87 (0.73-1), 0.84 (0.69-0.98), 0.81 (0.59-1), and 0.84 (0.66-0.91), respectively. The differences between the AUCs were not statistically significant. K(trans)_φ and V(p)_φ can help differentiate low-grade from high-grade gliomas.

  1. Robust and accurate vectorization of line drawings.

    PubMed

    Hilaire, Xavier; Tombre, Karl

    2006-06-01

    This paper presents a method for vectorizing the graphical parts of paper-based line drawings. The method consists of separating the input binary image into layers of homogeneous thickness, skeletonizing each layer, segmenting the skeleton by a method based on random sampling, and simplifying the result. The segmentation method is robust, reaching a best-case noise bound of 50 percent for indefinitely long primitives. Accurate estimation of the recognized vectors' parameters is enabled by explicitly computing their feasibility domains. A theoretical performance analysis and an expression for the complexity of the segmentation method are derived. Experimental results and comparisons with other vectorization systems are also provided.

  2. Generating a National Land Cover Dataset for Mexico at 30m Spatial Resolution in the Framework of the NALCMS Project.

    NASA Astrophysics Data System (ADS)

    Llamas, R. M.; Colditz, R. R.; Ressl, R.; Jurado Cruz, D. A.; Argumedo, J.; Victoria, A.; Meneses, C.

    2017-12-01

    The North American Land Change Monitoring System (NALCMS) is a tri-national initiative for mapping land cover across Mexico, the United States, and Canada, integrating the efforts of institutions from the three countries. At the continental scale, the group released land cover and change maps derived from MODIS image mosaics at 250 m spatial resolution for 2005 and 2010. Current efforts are based on 30 m Landsat images for 2010 ± 1 year. Each country uses its own mapping approach and sources of ancillary data, while ensuring that maps are produced in a coherent fashion across the continent. This paper presents the methodology and final land cover map of Mexico for the year 2010, which was later integrated into a continental map. The principal input for Mexico was the Monitoring Activity Data for Mexico (MAD-MEX) land cover map (version 4.3), derived from all available, mostly cloud-free images for the year 2010. A total of 35 classes were regrouped into the 15 classes of the NALCMS legend present in Mexico. Next, various issues of the automatically generated MAD-MEX land cover mosaic were corrected: filling areas of no data due to the lack of cloud-free observations or to gaps in Landsat 7 ETM+ images, filling inland water bodies left unclassified due to masking issues, relabeling isolated unclassified or falsely classified pixels, correcting structural mislabeling due to data gaps, reclassifying areas of adjacent scenes with significant class disagreements, and correcting obvious misclassifications, mostly of water and urban areas. In a second step, minor missing areas and the rare snow-and-ice class were digitized, and a road network was added. A product such as the NALCMS land cover map at 30 m for North America is an unprecedented effort and will without doubt be an important source of information for the many users around the world who need coherent land cover data over a continental domain as input for a wide variety of environmental studies.
The product release to the general public is expected by late summer 2017 and will be made available through the Commission for Environmental Cooperation (CEC) at www.cec.org

  3. Quantitative assessment of multiple sclerosis lesion load using CAD and expert input

    NASA Astrophysics Data System (ADS)

    Gertych, Arkadiusz; Wong, Alexis; Sangnil, Alan; Liu, Brent J.

    2008-03-01

    Multiple sclerosis (MS) is a frequently encountered neurological disease with a progressive but variable course affecting the central nervous system. Outline-based lesion quantification in the assessment of lesion load (LL) performed on magnetic resonance (MR) images is clinically useful and provides information about development and change, reflecting overall disease burden. Methods of LL assessment that rely on human input are tedious, have higher intra- and inter-observer variability, and are more time-consuming than computerized automatic (CAD) techniques. At present, methods based on human lesion identification preceded by non-interactive CAD outlining appear to be the best LL quantification strategies. We have developed a CAD system that automatically quantifies MS lesions, displays a 3-D lesion map, and appends radiological findings to the original images according to the current DICOM standard. The CAD system can also display and track changes and compare a patient's separate MRI studies to determine disease progression. The findings are exported to a separate imaging tool for review and final approval by an expert. Capturing and standardized archiving of manual contours are also implemented. Similarity coefficients calculated from LL quantities in the collected exams show good agreement between CAD-derived results and the expert's reading. Combining the CAD approach with expert interaction may benefit the diagnostic work-up of MS patients through improved reproducibility in LL assessment and reduced reading time for single MR or comparative exams. Inclusion of CAD-generated outlines as DICOM-compliant overlays in the image data can serve as a better reference for tracking MS progression.
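    The similarity coefficients mentioned above can be computed, for example, as Dice coefficients between the CAD and expert lesion masks (the abstract does not name the specific coefficient, so Dice is an assumption):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity between two binary lesion masks (lists of 0/1 rows):
    2|A ∩ B| / (|A| + |B|)."""
    inter = sum(a * b for ra, rb in zip(mask_a, mask_b) for a, b in zip(ra, rb))
    size_a = sum(map(sum, mask_a))
    size_b = sum(map(sum, mask_b))
    return 2.0 * inter / (size_a + size_b) if size_a + size_b else 1.0

cad    = [[0, 1, 1], [0, 1, 1]]   # 4 lesion pixels outlined by CAD
expert = [[1, 1, 1], [0, 1, 0]]   # 4 pixels outlined by the expert
print(dice_coefficient(cad, expert))   # overlap of 3 pixels -> 0.75
```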

  4. Optimal control of LQR for discrete time-varying systems with input delays

    NASA Astrophysics Data System (ADS)

    Yin, Yue-Zhu; Yang, Zhong-Lian; Yin, Zhi-Xiang; Xu, Feng

    2018-04-01

    In this work, we consider the optimal control problem of linear quadratic regulation for discrete time-varying systems with a single input and multiple input delays. An innovative and simple method to derive the optimal controller is given. The studied problem is first equivalently converted into a problem subject to a constraint condition. Then, using the established duality, the problem is transformed into a static mathematical optimisation problem without input delays. The optimal control input minimising the performance index function is derived by solving this optimisation problem with two methods. A numerical simulation example is carried out, and its results show that both approaches are feasible and very effective.
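    As a baseline for comparison (an assumption: the paper's duality-based derivation is not reproduced in the abstract), the delay-free finite-horizon LQR problem for a scalar time-varying plant can be solved with the standard backward Riccati recursion; an input delay of d steps could in principle be handled by augmenting the state with the last d inputs (not shown).

```python
# Finite-horizon LQR via backward Riccati recursion (scalar, delay-free
# baseline; illustrative plant and weights, not the paper's method).

def lqr_gains(a_seq, b_seq, q, r, qf):
    """Gains k_t for x_{t+1} = a_t x_t + b_t u_t minimising
    sum_t (q x_t^2 + r u_t^2) + qf x_N^2, with u_t = -k_t x_t."""
    p = qf
    gains = [0.0] * len(a_seq)
    for t in reversed(range(len(a_seq))):
        a, b = a_seq[t], b_seq[t]
        k = (b * p * a) / (r + b * p * b)    # k_t = (r + b P b)^-1 b P a
        p = q + a * p * a - a * p * b * k    # Riccati backward step
        gains[t] = k
    return gains

# Unstable time-varying plant driven to the origin by u_t = -k_t x_t.
a_seq = [1.3 if t % 2 == 0 else 1.1 for t in range(40)]
b_seq = [1.0] * 40
gains = lqr_gains(a_seq, b_seq, q=1.0, r=1.0, qf=1.0)
x = 5.0
for t in range(40):
    x = a_seq[t] * x - b_seq[t] * gains[t] * x
print(abs(x) < 1e-3)
```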

  5. Sea surface velocities from visible and infrared multispectral atmospheric mapping sensor imagery

    NASA Technical Reports Server (NTRS)

    Pope, P. A.; Emery, W. J.; Radebaugh, M.

    1992-01-01

    High resolution (100 m), sequential Multispectral Atmospheric Mapping Sensor (MAMS) images were used in a study to calculate advective surface velocities using the Maximum Cross Correlation (MCC) technique. Radiance and brightness temperature gradient magnitude images were formed from visible (0.48 microns) and infrared (11.12 microns) image pairs, respectively, of Chandeleur Sound, a shallow body of water northeast of the Mississippi delta, at 145546 GMT and 170701 GMT on 30 Mar. 1989. The gradient magnitude images enhanced the surface water feature boundaries, and applying a lower cutoff to the calculated gradient magnitudes reduced the strength of the undesirable sunglare and backscatter gradients in the visible images and of the water vapor absorption gradients in the infrared images. Requiring high (greater than 0.4) maximum cross correlation coefficients and spatial coherence of the vector field aided in the selection of an optimal template size of 10 x 10 pixels (first image) and search limit of 20 pixels (second image) for the MCC technique. Using these optimal input parameters to the MCC algorithm, and applying high-correlation and spatial-coherence filtering to the resulting velocity field, yielded a clustered velocity distribution over the visible and infrared gradient images. The velocity field calculated from the visible gradient image pair agreed well with a subjective analysis of the motion, but the velocity field from the infrared gradient image pair did not. This was attributed to the changing shapes of the gradient features, their nonuniqueness, and their large displacements relative to the mean distance between them. These problems implied that a shorter repeat time was needed for the imagery in order to improve the velocity field derived from gradient imagery. Suggestions are given for optimizing the repeat time of sequential imagery when using the MCC method for motion studies.
Applying the MCC method to the infrared brightness temperature imagery yielded a velocity field that agreed with both the subjective analysis of the motion and the field derived from the visible gradient imagery. Differences between the visible- and infrared-derived velocities were 14.9 cm/s in speed and 56.7 degrees in direction. Both velocity fields also agreed well with the motion expected from the ocean bottom topography and the wind and tidal forcing in the study area during the 2.175 hour time interval.
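    The MCC step itself is simple to sketch: slide a template from the first image over a search window in the second and keep the offset with the highest normalized cross-correlation. A pure-Python toy version follows; a real application would also apply the 0.4 correlation cutoff mentioned above and convert the pixel displacement to a velocity using the pixel size and the time interval.

```python
# Maximum Cross Correlation (MCC) sketch for feature-tracking velocities.

def ncc(p, q):
    """Zero-mean normalized cross-correlation of two equal-size patches."""
    n = len(p) * len(p[0])
    mp = sum(map(sum, p)) / n
    mq = sum(map(sum, q)) / n
    num = sum((p[i][j] - mp) * (q[i][j] - mq)
              for i in range(len(p)) for j in range(len(p[0])))
    dp = sum((p[i][j] - mp) ** 2 for i in range(len(p)) for j in range(len(p[0])))
    dq = sum((q[i][j] - mq) ** 2 for i in range(len(q)) for j in range(len(q[0])))
    return num / (dp * dq) ** 0.5 if dp and dq else 0.0

def patch(img, y, x, h, w):
    return [row[x:x + w] for row in img[y:y + h]]

def mcc_displacement(img1, img2, y, x, h, w, search):
    """Best (correlation, (dy, dx)) for the template at (y, x) in img1,
    searched over +/- search pixels in img2."""
    tmpl = patch(img1, y, x, h, w)
    best = (0.0, (0, 0))
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy <= len(img2) - h and 0 <= xx <= len(img2[0]) - w:
                best = max(best, (ncc(tmpl, patch(img2, yy, xx, h, w)), (dy, dx)))
    return best   # velocity = displacement * pixel_size / time_interval
```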

  6. Cascaded analysis of signal and noise propagation through a heterogeneous breast model.

    PubMed

    Mainprize, James G; Yaffe, Martin J

    2010-10-01

    The detectability of lesions in radiographic images can be impaired by patterns caused by the surrounding anatomic structures. The presence of such patterns is often referred to as anatomic noise. Others have previously extended signal and noise propagation theory to include variable background structure as an additional noise term and used in simulations for analysis by human and ideal observers. Here, the analytic forms of the signal and noise transfer are derived to obtain an exact expression for any input random distribution and the "power law" filter used to generate the texture of the tissue distribution. A cascaded analysis of propagation through a heterogeneous model is derived for x-ray projection through simulated heterogeneous backgrounds. This is achieved by considering transmission through the breast as a correlated amplification point process. The analytic forms of the cascaded analysis were compared to monoenergetic Monte Carlo simulations of x-ray propagation through power law structured backgrounds. As expected, it was found that although the quantum noise power component scales linearly with the x-ray signal, the anatomic noise will scale with the square of the x-ray signal. There was a good agreement between results obtained using analytic expressions for the noise power and those from Monte Carlo simulations for different background textures, random input functions, and x-ray fluence. Analytic equations for the signal and noise properties of heterogeneous backgrounds were derived. These may be used in direct analysis or as a tool to validate simulations in evaluating detectability.
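    The quadratic-versus-linear scaling result can be illustrated numerically; the transmission mean and variance below are arbitrary illustrative values, not derived from the paper.

```python
# Numeric illustration of the scaling above: for mean fluence N and a random
# tissue transmission t (mean t_bar, variance var_t), the detected signal
# N*t has a Poisson (quantum) variance ~ N*t_bar and an anatomic variance
# ~ N**2 * var_t, so anatomic noise grows with the square of the x-ray signal.

def noise_components(fluence, t_bar=0.5, var_t=0.01):
    quantum = fluence * t_bar           # linear in the x-ray signal
    anatomic = fluence ** 2 * var_t     # quadratic in the x-ray signal
    return quantum, anatomic

q1, a1 = noise_components(1000.0)
q2, a2 = noise_components(2000.0)       # double the fluence
print(q2 / q1, a2 / a1)                 # -> 2.0 4.0
```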

  7. Automated inundation monitoring using TerraSAR-X multitemporal imagery

    NASA Astrophysics Data System (ADS)

    Gebhardt, S.; Huth, J.; Wehrmann, T.; Schettler, I.; Künzer, C.; Schmidt, M.; Dech, S.

    2009-04-01

    The Mekong Delta in Vietnam offers natural resources for several million inhabitants. However, strong population growth, changing climatic conditions, and regulatory measures at the upper reaches of the Mekong are leading to severe changes in the Delta: extreme flood events occur more frequently, drinking water availability is increasingly limited, soils show signs of salinization or acidification, and species and complete habitats diminish. During the monsoon season the river regularly overflows its banks in the lower Mekong area, usually with beneficial effects. However, extreme flood events causing extensive damage occur more frequently; on average once every 6 to 10 years, river flood levels exceed the critical beneficial level. X-band SAR data are well suited for deriving inundated surface areas, and the TerraSAR-X sensor with its different scanning modes allows the derivation of inundation masks of high spatial and temporal resolution. The paper presents an automated procedure for deriving inundated areas from TerraSAR-X ScanSAR and Stripmap image data. Within the framework of the German-Vietnamese WISDOM project, focusing on the Mekong Delta region in Vietnam, images were acquired covering the flood season from June 2008 to November 2008. Based on these images, a time series of the so-called watermask showing inundated areas was derived. The product is required as an intermediate to (i) calibrate 2D inundation model scenarios, (ii) estimate the extent of affected areas, and (iii) analyze the scope of prior crises. The image processing approach is based on the assumption that water surfaces scatter the radar signal forward, resulting in low backscatter at the sensor. It uses multiple grey level thresholds and image morphological operations, and performs well in terms of automation, accuracy, robustness, and processing time.
The resulting watermasks show the seasonal flooding pattern, with inundation starting in July, peaking at the end of September, and receding until December 2008. The results are a valuable input for monitoring and understanding the seasonal regional flood patterns, for calibrating 2D inundation models, and for generating value-added products in combination with agricultural land use and socio-economic data to further separate inundated and irrigated areas.
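    The two ingredients named above, grey level thresholds and morphological operations, can be sketched in a few lines; the single threshold value and the 3x3 structuring element are illustrative assumptions (the actual procedure uses multiple thresholds).

```python
# Watermask sketch: low SAR backscatter -> water via a grey level threshold,
# then a binary opening (erosion followed by dilation, 3x3 neighbourhood)
# to remove isolated speckle. Threshold and element are illustrative.

def below(img, t):
    return [[1 if v < t else 0 for v in row] for row in img]

def _neighbourhood_op(mask, keep):
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            neigh = [mask[i + di][j + dj]
                     for di in (-1, 0, 1) for dj in (-1, 0, 1)
                     if 0 <= i + di < h and 0 <= j + dj < w]
            out[i][j] = keep(neigh)
    return out

def erode(mask):    # pixel survives only if its full 3x3 patch is water
    return _neighbourhood_op(mask, lambda n: 1 if len(n) == 9 and all(n) else 0)

def dilate(mask):   # pixel turns on if any 3x3 neighbour is water
    return _neighbourhood_op(mask, lambda n: 1 if any(n) else 0)

def watermask(img, threshold):
    return dilate(erode(below(img, threshold)))
```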

  8. Observation and simulation of net primary productivity in Qilian Mountain, western China.

    PubMed

    Zhou, Y; Zhu, Q; Chen, J M; Wang, Y Q; Liu, J; Sun, R; Tang, S

    2007-11-01

    We modeled net primary productivity (NPP) at high spatial resolution for a Qilian Mountain study area using an advanced spaceborne thermal emission and reflection radiometer (ASTER) image and the boreal ecosystem productivity simulator (BEPS). Two key driving variables of the model, leaf area index (LAI) and land cover type, were derived from ASTER and moderate resolution imaging spectroradiometer (MODIS) data. Other spatially explicit inputs included daily meteorological data (radiation, precipitation, temperature, humidity), available soil water holding capacity (AWC), and forest biomass. NPP was estimated for coniferous forests and other land cover types in the study area. The results showed that the NPP of coniferous forests in the study area was about 4.4 t C ha(-1) y(-1). The correlation coefficient between the modeled NPP and ground measurements was 0.84, with a mean relative error of about 13.9%.

  9. Three-Dimensional ISAR Imaging Method for High-Speed Targets in Short-Range Using Impulse Radar Based on SIMO Array.

    PubMed

    Zhou, Xinpeng; Wei, Guohua; Wu, Siliang; Wang, Dawei

    2016-03-11

    This paper proposes a three-dimensional inverse synthetic aperture radar (ISAR) imaging method for high-speed targets in short range using an impulse radar. According to the requirements of high-speed target measurement in short range, a single-input multiple-output (SIMO) antenna array is established, and a missile motion parameter estimation method based on impulse radar is proposed. By analyzing the motion geometry of the warhead scattering center after translational compensation, the receiving antenna position and the time delay after translational compensation are derived, overcoming the shortcomings of conventional translational compensation methods. By analyzing the motion characteristics of the missile, the missile's rotation angle and rotation matrix are estimated by establishing a new coordinate system. Simulation results validate the performance of the proposed algorithm.

  10. Object knowledge changes visual appearance: semantic effects on color afterimages.

    PubMed

    Lupyan, Gary

    2015-10-01

    According to predictive coding models of perception, what we see is determined jointly by the current input and the priors established by previous experience, expectations, and other contextual factors. The same input can thus be perceived differently depending on the priors that are brought to bear during viewing. Here, I show that expected (diagnostic) colors are perceived more vividly than arbitrary or unexpected colors, particularly when color input is unreliable. Participants were tested on a version of the 'Spanish Castle Illusion' in which viewing a hue-inverted image renders a subsequently shown achromatic version of the image in vivid color. Adapting to objects with intrinsic colors (e.g., a pumpkin) led to stronger afterimages than adapting to arbitrarily colored objects (e.g., a pumpkin-colored car). Considerably stronger afterimages were also produced by scenes containing intrinsically colored elements (grass, sky) compared to scenes with arbitrarily colored objects (books). The differences between images with diagnostic and arbitrary colors disappeared when the association between the image and color priors was weakened by, e.g., presenting the image upside-down, consistent with the prediction that color appearance is being modulated by color knowledge. Visual inputs that conflict with prior knowledge appear to be phenomenologically discounted, but this discounting is moderated by input certainty, as shown by the final study which uses conventional images rather than afterimages. As input certainty is increased, unexpected colors can become easier to detect than expected ones, a result consistent with predictive-coding models. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Prototype Focal-Plane-Array Optoelectronic Image Processor

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi; Shaw, Timothy; Yu, Jeffrey

    1995-01-01

    Prototype very-large-scale integrated (VLSI) planar array of optoelectronic processing elements combines speed of optical input and output with flexibility of reconfiguration (programmability) of electronic processing medium. Basic concept of processor described in "Optical-Input, Optical-Output Morphological Processor" (NPO-18174). Performs binary operations on binary (black and white) images. Each processing element corresponds to one picture element of image and located at that picture element. Includes input-plane photodetector in form of parasitic phototransistor part of processing circuit. Output of each processing circuit used to modulate one picture element in output-plane liquid-crystal display device. Intended to implement morphological processing algorithms that transform image into set of features suitable for high-level processing; e.g., recognition.

  12. Shape and rotational elements of comet 67P/ Churyumov-Gerasimenko derived by stereo-photogrammetric analysis of OSIRIS NAC image data

    NASA Astrophysics Data System (ADS)

    Preusker, Frank; Scholten, Frank; Matz, Klaus-Dieter; Roatsch, Thomas; Willner, Konrad; Hviid, Stubbe; Knollenberg, Jörg; Kührt, Ekkehard; Sierks, Holger

    2015-04-01

    The European Space Agency's Rosetta spacecraft is equipped with the OSIRIS imaging system, which consists of a wide-angle and a narrow-angle camera (WAC and NAC). After the approach phase, Rosetta was inserted into a descent trajectory towards comet 67P/Churyumov-Gerasimenko (C-G) in early August 2014. Until early September, OSIRIS acquired several hundred NAC images of C-G's surface at different scales (from ~5 m/pixel during approach to ~0.9 m/pixel during descent). In that one-month observation period, the surface was imaged several times within different mapping sequences. With the comet's rotation period of ~12.4 h and the low spacecraft velocity (< 1 m/s), the entire NAC dataset provides multiple NAC stereo coverage, adequate for stereo-photogrammetric (SPG) analysis towards the derivation of 3D surface models. We constrained the OSIRIS NAC images with our stereo requirements (15° < stereo angles < 45°, incidence angles < 85°, emission angles < 45°, differences in illumination < 10°, scale better than 5 m/pixel) and extracted about 220 NAC images that provide at least triple stereo image coverage for the entire illuminated surface in about 250 independent multi-stereo image combinations. For each image combination we determined tie points by multi-image matching in order to set up a 3D control network and a dense surface point cloud for the precise reconstruction of C-G's shape. The control point network defines the input for a stereo-photogrammetric least squares adjustment. Based on the statistical analysis of the adjustments, we first refined C-G's rotational state (pole orientation and rotation period) and its behavior over time.
Based upon this description of the orientation of C-G's body-fixed reference frame, we derived corrections for the nominal navigation data (pointing and position) within a final stereo-photogrammetric block adjustment where the mean 3D point accuracy of more than 100 million surface points has been improved from ~10 m to the sub-meter range. We finally applied point filtering and interpolation techniques to these surface 3D points and show the resulting SPG-based 3D surface model with a lateral sampling rate of about 2 m.

  13. Experimental image alignment system

    NASA Technical Reports Server (NTRS)

    Moyer, A. L.; Kowel, S. T.; Kornreich, P. G.

    1980-01-01

    A microcomputer-based instrument for image alignment with respect to a reference image is described which uses the DEFT sensor (Direct Electronic Fourier Transform) for image sensing and preprocessing. The instrument alignment algorithm which uses the two-dimensional Fourier transform as input is also described. It generates signals used to steer the stage carrying the test image into the correct orientation. This algorithm has computational advantages over algorithms which use image intensity data as input and is suitable for a microcomputer-based instrument since the two-dimensional Fourier transform is provided by the DEFT sensor.
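    The instrument's alignment algorithm is not spelled out above; one standard way to derive a translation signal from the two-dimensional Fourier transform is phase correlation, sketched here as an assumption, with a naive O(N^4) DFT that is only practical for toy image sizes.

```python
import cmath, math

# Phase correlation sketch (assumed technique, not the instrument's actual
# algorithm): normalize the cross-power spectrum and locate the inverse-DFT
# peak to recover a cyclic translation between two images.

def dft2(a, sign=-1):
    """Naive 2D DFT (sign=-1) or unscaled inverse DFT (sign=+1)."""
    m, n = len(a), len(a[0])
    return [[sum(a[p][q] * cmath.exp(sign * 2j * math.pi * (u * p / m + v * q / n))
                 for p in range(m) for q in range(n))
             for v in range(n)] for u in range(m)]

def phase_correlate(a, b):
    """Cyclic shift (dy, dx) with b[i][j] == a[(i - dy) % m][(j - dx) % n]."""
    m, n = len(a), len(a[0])
    fa, fb = dft2(a), dft2(b)
    cross = [[fa[u][v] * fb[u][v].conjugate() for v in range(n)] for u in range(m)]
    norm = [[c / abs(c) if abs(c) > 1e-12 else 0.0 for c in row] for row in cross]
    surf = dft2(norm, sign=+1)                    # correlation surface
    peak = max((surf[i][j].real, i, j) for i in range(m) for j in range(n))
    return (-peak[1]) % m, (-peak[2]) % n         # peak sits at (-dy, -dx)

grid = [[(3 * i * i + 7 * j + 5 * i * j + 11) % 17 for j in range(8)] for i in range(8)]
shifted = [[grid[(i - 2) % 8][(j - 3) % 8] for j in range(8)] for i in range(8)]
print(phase_correlate(grid, shifted))
```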

  14. High-Fidelity Microstructural Characterization and Performance Modeling of Aluminized Composite Propellant

    DOE PAGES

    Kosiba, Graham D.; Wixom, Ryan R.; Oehlschlaeger, Matthew A.

    2017-10-27

    Image processing and stereological techniques were used to characterize the heterogeneity of composite propellant and inform a predictive burn rate model. Composite propellant samples made up of ammonium perchlorate (AP), hydroxyl-terminated polybutadiene (HTPB), and aluminum (Al) were faced with an ion mill and imaged with a scanning electron microscope (SEM) and x-ray tomography (micro-CT). Properties of both the bulk and individual components of the composite propellant were determined from a variety of image processing tools. An algebraic model, based on the improved Beckstead-Derr-Price model developed by Cohen and Strand, was used to predict the steady-state burning of the aluminized composite propellant. In the presented model, the presence of aluminum particles within the propellant was introduced. The thermal effects of aluminum particles are accounted for at the solid-gas propellant surface interface, and aluminum combustion is considered in the gas phase using a single global reaction. In conclusion, properties derived from image processing were used directly as model inputs, leading to a sample-specific predictive combustion model.

  16. Carbon nanotube thin film strain sensor models assembled using nano- and micro-scale imaging

    NASA Astrophysics Data System (ADS)

    Lee, Bo Mi; Loh, Kenneth J.; Yang, Yuan-Sen

    2017-07-01

    Nanomaterial-based thin films, particularly those based on carbon nanotubes (CNT), have brought forth tremendous opportunities for designing next-generation strain sensors. However, their strain sensing properties can vary depending on fabrication method, post-processing treatment, and types of CNTs and polymers employed. The objective of this study was to derive a CNT-based thin film strain sensor model using inputs from nano-/micro-scale experimental measurements of nanotube physical properties. This study began with fabricating ultra-low-concentration CNT-polymer thin films, followed by imaging them using atomic force microscopy. Image processing was employed for characterizing CNT dispersed shapes, lengths, and other physical attributes, and results were used for building five different types of thin film percolation-based models. Numerical simulations were conducted to assess how the morphology of dispersed CNTs in its 2D matrix affected bulk film electrical and electromechanical (strain sensing) properties. The simulation results showed that CNT morphology had a significant impact on strain sensing performance.
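    A minimal stick-percolation model consistent with the description above: CNTs as widthless line segments in a unit square, two sticks electrically connected if they cross, and the film conducting if connected sticks bridge the left and right edges. The electrode convention (a stick touching x <= 0 or x >= 1 contacts an edge) and the proper-crossing test are simplifying assumptions.

```python
# 2D stick-percolation sketch for a CNT film, using union-find connectivity.

def _ccw(a, b, c):
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, q1, q2):
    """True if the two segments properly cross (collinear touching ignored)."""
    d1, d2 = _ccw(q1, q2, p1), _ccw(q1, q2, p2)
    d3, d4 = _ccw(p1, p2, q1), _ccw(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def percolates(sticks):
    """sticks: list of ((x1, y1), (x2, y2)). True if intersecting sticks
    connect the film edges x=0 and x=1."""
    n = len(sticks)
    left, right = n, n + 1                 # virtual electrode nodes
    parent = list(range(n + 2))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    def union(i, j):
        parent[find(i)] = find(j)
    for i, (a, b) in enumerate(sticks):
        if min(a[0], b[0]) <= 0.0:
            union(i, left)                 # touches the left electrode
        if max(a[0], b[0]) >= 1.0:
            union(i, right)                # touches the right electrode
        for j in range(i):
            c, d = sticks[j]
            if segments_intersect(a, b, c, d):
                union(i, j)
    return find(left) == find(right)
```

    A fuller film model would drop many random sticks with the AFM-measured length and shape distributions and estimate the percolation probability as a function of stick density.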

  17. Automatic classification of sleep stages based on the time-frequency image of EEG signals.

    PubMed

    Bajaj, Varun; Pachori, Ram Bilas

    2013-12-01

    In this paper, a new method for automatic sleep stage classification based on the time-frequency image (TFI) of electroencephalogram (EEG) signals is proposed. Automatic classification of sleep stages is an important part of the diagnosis and treatment of sleep disorders. The smoothed pseudo Wigner-Ville distribution (SPWVD) of the EEG signal is used as the time-frequency representation from which the TFI is obtained. The TFI is segmented based on the frequency bands of the rhythms of EEG signals. Features derived from the histogram of the segmented TFI are used as the input feature set of multiclass least squares support vector machines (MC-LS-SVM) with radial basis function (RBF), Mexican hat wavelet, and Morlet wavelet kernel functions for automatic classification of sleep stages from EEG signals. Experimental results are presented to show the effectiveness of the proposed method. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  18. Cross-Validation of Suspended Sediment Concentrations Derived from Satellite Imagery and Numerical Modeling of the 1997 New Year's Flood on the Feather River, CA

    NASA Astrophysics Data System (ADS)

    Kilham, N. E.

    2009-12-01

    Image analysis was applied to assess suspended sediment concentrations (SSC) predicted by a numerical model of 2D hydraulics and sediment transport (Telemac-2D), coupled to a solver for the advection-diffusion equation (SISYPHE) and representing 18 days of flooding over 70 kilometers of the lower Feather-Yuba Rivers. Sisyphe treats the suspended load as a tracer, removed from the flow if the bed shear velocity, u* is lower than an empirically derived threshold (ud* = 7.8E-3 m s-1). Agreement between model (D50 = 0.03 mm) and image-derived SSC (mg L-1) suggests that image interpretation could prove to be a viable approach for verifying spatially-distributed models of floodplain sediment transport if imagery is acquired for a particular flood and at a sufficient spatial and radiometric resolution. However, remotely derived SSC represents the integrated concentration of suspended sediment at the water surface. Hence, comparing SSC magnitudes derived from imagery and numerical modeling requires that a relationship is first established between the total suspended load and the portion of this load suspended within the optical range of the sensor (e.g., Aalto, 1995). Using the optical depth (0.5 m) determined from radiative transfer modeling, surface SSC measured from a 1/14/97 Landsat TM5 image (30 m) were converted to depth-integrated SSC with the Rouse (1937) equation. Surface concentrations were derived using a look-up table for the sensor to convert endmember fractions obtained from a spectral mixture analysis of the image. A two-endmember model (2.0 and 203 mg L-1) was used, with synthetic endmembers derived from optical and radiative transfer modeling and inversion of field spectra collected from the Sacramento and Feather Rivers and matched to measured SSC values. Remotely sensed SSC patterns were then compared to the Telemac results for the same day and time. 
    Modeled concentrations are a function of both the rating-curve boundary conditions and the transport and deposition calculations. At each of three upstream channel boundaries, hourly SSC was derived from instantaneous discharge and SSC records at USGS gages for winter months (December-April) following dam closure on the Feather, Yuba, and Bear Rivers (r2 = 0.61; r2 = 0.81; r2 = 0.55). Modeled channel concentrations declined downstream from about 90 mg L-1 to 40 mg L-1 as sediment input was depleted through decanting of river water overbank, advection through floodplain channels, and deposition onto the floodplain. Similar downstream declines in the image values suggest that bed and bank erosion downstream of the major gages did not contribute much new sediment two weeks after the flood peak. Model-predicted concentrations agree with image-derived concentrations to within 10 mg L-1, although the model predicts a more rapid drawdown of floodplain flow than is apparent from the image. Aalto, R., 1995. Discordance between suspended sediment diffusion theory and observed sediment concentration profiles in rivers. M.S., University of Washington, Seattle, WA. Rouse, H.R., 1937. Modern conceptions of the mechanics of turbulence. Transactions, American Society of Civil Engineers, 102: 463-543.
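
    The surface-to-depth-integrated conversion described above can be sketched with the Rouse (1937) profile. The flow depth, reference height, and Rouse number used below are placeholder assumptions for illustration, not values from the study.

```python
import numpy as np

# Sketch of converting a surface SSC value (seen over the top ~0.5 m optical
# depth) to a depth-averaged SSC via the Rouse (1937) profile. Depth h,
# reference height a, and Rouse number p are invented placeholders.

def rouse_profile(z, h, a, c_a, p):
    """Rouse profile: C(z) = C_a * [((h - z)/z) * (a/(h - a))]**p."""
    return c_a * (((h - z) / z) * (a / (h - a))) ** p

def depth_averaged_from_surface(c_surface, h=3.0, a=0.05, optical_depth=0.5,
                                p=0.4, n=1000):
    z = np.linspace(a, h - 1e-6, n)              # heights above the bed
    shape = rouse_profile(z, h, a, 1.0, p)       # profile shape with C_a = 1
    surface = z > (h - optical_depth)            # layer the sensor "sees"
    scale = c_surface / shape[surface].mean()    # match the observed surface mean
    return (shape * scale).mean()                # depth-averaged SSC

surface_ssc = 60.0                               # mg/L, hypothetical image value
print(depth_averaged_from_surface(surface_ssc)) # exceeds the surface value
```

    Because concentration increases toward the bed for a positive Rouse number, the depth-averaged value is always larger than the surface value the sensor measures.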

  19. A comparison of individual and population-derived vascular input functions for quantitative DCE-MRI in rats.

    PubMed

    Hormuth, David A; Skinner, Jack T; Does, Mark D; Yankeelov, Thomas E

    2014-05-01

    Dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) can quantitatively and qualitatively assess physiological characteristics of tissue. Quantitative DCE-MRI requires an estimate of the time rate of change of the concentration of the contrast agent in the blood plasma, the vascular input function (VIF). Measuring the VIF in small animals is notoriously difficult because it requires high-temporal-resolution images, limiting the achievable number of slices, field of view, spatial resolution, and signal-to-noise ratio. Alternatively, a population-averaged VIF could be used to mitigate the acquisition demands in studies aimed at investigating, for example, tumor vascular characteristics. Thus, the overall goal of this manuscript is to determine how the kinetic parameters estimated with a population-based VIF differ from those estimated with an individual VIF. Eight rats bearing gliomas were imaged before, during, and after an injection of Gd-DTPA. K(trans), ve, and vp were extracted from signal-time curves of tumor tissue using both individual and population-averaged VIFs. Extended model voxel estimates of K(trans) and ve in all animals had concordance correlation coefficients (CCC) ranging from 0.69 to 0.98 and Pearson correlation coefficients (PCC) ranging from 0.70 to 0.99. Additionally, standard model estimates resulted in CCCs ranging from 0.81 to 0.99 and PCCs ranging from 0.98 to 1.00, supporting the use of a population-based VIF if an individual VIF is not available. Copyright © 2014 Elsevier Inc. All rights reserved.
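
    The standard model referred to above (Tofts) expresses tissue concentration as the VIF convolved with an exponential kernel; a minimal fitting sketch follows. The biexponential population VIF and the noise level are illustrative assumptions, not values from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Standard Tofts model: Ct(t) = Ktrans * int_0^t Cp(tau) exp(-(Ktrans/ve)(t-tau)) dtau.
# The population VIF below is an assumed biexponential, for illustration only.

t = np.linspace(0, 10, 200)                              # time, minutes
cp = 5.0 * (np.exp(-0.5 * t) + 0.2 * np.exp(-0.05 * t))  # assumed population VIF

def tofts(t, ktrans, ve):
    """Discrete convolution of the VIF with the exponential washout kernel."""
    kep = ktrans / ve
    dt = t[1] - t[0]
    return ktrans * np.convolve(cp, np.exp(-kep * t))[: t.size] * dt

# Simulate a noisy tissue curve with known parameters, then recover them.
ct = tofts(t, 0.25, 0.4) + np.random.default_rng(0).normal(0, 0.01, t.size)
(ktrans_fit, ve_fit), _ = curve_fit(tofts, t, ct, p0=(0.1, 0.2),
                                    bounds=([1e-3, 1e-3], [2.0, 1.0]))
print(ktrans_fit, ve_fit)   # close to the simulated 0.25 and 0.4
```

    The extended model adds a vp * Cp(t) plasma term to the same expression; comparing fits with an individual versus a population VIF amounts to swapping the `cp` array.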

  20. Hypothalamic Projections to the Optic Tectum in Larval Zebrafish

    PubMed Central

    Heap, Lucy A.; Vanwalleghem, Gilles C.; Thompson, Andrew W.; Favre-Bulle, Itia; Rubinsztein-Dunlop, Halina; Scott, Ethan K.

    2018-01-01

    The optic tectum of larval zebrafish is an important model for understanding visual processing in vertebrates. The tectum has been traditionally viewed as dominantly visual, with a majority of studies focusing on the processes by which tectal circuits receive and process retinally-derived visual information. Recently, a handful of studies have shown a much more complex role for the optic tectum in larval zebrafish, and anatomical and functional data from these studies suggest that this role extends beyond the visual system, and beyond the processing of exclusively retinal inputs. Consistent with this evolving view of the tectum, we have used a Gal4 enhancer trap line to identify direct projections from rostral hypothalamus (RH) to the tectal neuropil of larval zebrafish. These projections ramify within the deepest laminae of the tectal neuropil, the stratum album centrale (SAC)/stratum griseum periventriculare (SPV), and also innervate strata distinct from those innervated by retinal projections. Using optogenetic stimulation of the hypothalamic projection neurons paired with calcium imaging in the tectum, we find rebound firing in tectal neurons consistent with hypothalamic inhibitory input. Our results suggest that tectal processing in larval zebrafish is modulated by hypothalamic inhibitory inputs to the deep tectal neuropil. PMID:29403362

  1. Hypothalamic Projections to the Optic Tectum in Larval Zebrafish.

    PubMed

    Heap, Lucy A; Vanwalleghem, Gilles C; Thompson, Andrew W; Favre-Bulle, Itia; Rubinsztein-Dunlop, Halina; Scott, Ethan K

    2017-01-01

    The optic tectum of larval zebrafish is an important model for understanding visual processing in vertebrates. The tectum has been traditionally viewed as dominantly visual, with a majority of studies focusing on the processes by which tectal circuits receive and process retinally-derived visual information. Recently, a handful of studies have shown a much more complex role for the optic tectum in larval zebrafish, and anatomical and functional data from these studies suggest that this role extends beyond the visual system, and beyond the processing of exclusively retinal inputs. Consistent with this evolving view of the tectum, we have used a Gal4 enhancer trap line to identify direct projections from rostral hypothalamus (RH) to the tectal neuropil of larval zebrafish. These projections ramify within the deepest laminae of the tectal neuropil, the stratum album centrale (SAC)/stratum griseum periventriculare (SPV), and also innervate strata distinct from those innervated by retinal projections. Using optogenetic stimulation of the hypothalamic projection neurons paired with calcium imaging in the tectum, we find rebound firing in tectal neurons consistent with hypothalamic inhibitory input. Our results suggest that tectal processing in larval zebrafish is modulated by hypothalamic inhibitory inputs to the deep tectal neuropil.

  2. Single-image super-resolution based on Markov random field and contourlet transform

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Liu, Zheng; Gueaieb, Wail; He, Xiaohai

    2011-04-01

    Learning-based methods are widely adopted in image super-resolution. In this paper, we propose a new learning-based approach using the contourlet transform and a Markov random field. The proposed algorithm employs the contourlet transform rather than the conventional wavelet transform to represent image features, and takes into account the correlation between adjacent pixels or image patches through the Markov random field (MRF) model. The input low-resolution (LR) image is decomposed with the contourlet transform and fed to the MRF model together with the contourlet transform coefficients from the low- and high-resolution image pairs in the training set. The unknown high-frequency components/coefficients for the input low-resolution image are inferred by a belief propagation algorithm. Finally, the inverse contourlet transform converts the LR input and the inferred high-frequency coefficients into the super-resolved image. The effectiveness of the proposed method is demonstrated with experiments on facial, vehicle plate, and real scene images. Better visual quality is achieved in terms of peak signal-to-noise ratio and the image structural similarity measure.

  3. Improved automatic adjustment of density and contrast in FCR system using neural network

    NASA Astrophysics Data System (ADS)

    Takeo, Hideya; Nakajima, Nobuyoshi; Ishida, Masamitsu; Kato, Hisatoyo

    1994-05-01

    The FCR system automatically adjusts image density and contrast by analyzing the histogram of image data within the radiation field. The advanced image-recognition methods proposed in this paper, based on neural network technology, can improve this automatic adjustment. There are two methods, both based on a three-layer neural network with backpropagation. In one method the image data are input directly to the input layer; in the other, the histogram data are input. The former is effective for imaging menus such as the shoulder joint, in which the position of the region of interest on the histogram changes with differences in positioning; the latter is effective for imaging menus such as the pediatric chest, in which the histogram shape changes with differences in positioning. We experimentally confirm the validity of these methods by comparing their automatic adjustment performance with that of conventional histogram-analysis methods.

  4. A Synthesis of Star Calibration Techniques for Ground-Based Narrowband Electron-Multiplying Charge-Coupled Device Imagers Used in Auroral Photometry

    NASA Technical Reports Server (NTRS)

    Grubbs, Guy II; Michell, Robert; Samara, Marilia; Hampton, Don; Jahn, Jorg-Micha

    2016-01-01

    A technique is presented for the periodic and systematic calibration of ground-based optical imagers. It is important to have a common system of units (Rayleighs or photon flux) for cross-comparison as well as self-comparison over time. With advances in technology, the sensitivity of these imagers has improved so that stars can be used for more precise calibration. Background subtraction, flat fielding, star mapping, and other common techniques are combined in deriving a calibration technique appropriate for a variety of ground-based imager installations. Spectral (4278, 5577, and 8446 Å) ground-based imager data with multiple fields of view (19, 47, and 180 deg) are processed and calibrated using the techniques developed. The calibration techniques applied result in intensity measurements in agreement between different imagers using identical spectral filtering, and the intensity at each wavelength observed is within the expected range of auroral measurements. The application of these star calibration techniques, which convert raw imager counts into units of photon flux, makes it possible to do quantitative photometry. The computed photon fluxes, in units of Rayleighs, can be used for absolute photometry between instruments or as input parameters for auroral electron transport models.
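
    The count-to-Rayleigh conversion can be sketched as back-of-envelope arithmetic. Every numeric value below is an invented placeholder; a real calibration uses catalogue star fluxes integrated through the actual filter passband.

```python
import numpy as np

# Sketch: derive a system sensitivity from one star with known photon flux,
# then convert a sky pixel's counts to Rayleighs. All numbers are invented.

star_counts = 1.2e4      # background-subtracted counts in the star aperture
star_flux = 3.5e3        # assumed star photon flux, photons s^-1 cm^-2
aperture_cm2 = 80.0      # collecting area
exposure_s = 1.0
pixel_sr = 1.0e-7        # solid angle subtended by one pixel

# System sensitivity: detector counts per incident photon.
counts_per_photon = star_counts / (star_flux * aperture_cm2 * exposure_s)

def counts_to_rayleighs(sky_counts):
    """1 R = 1e6 / (4*pi) photons s^-1 cm^-2 sr^-1."""
    photon_rate = sky_counts / (counts_per_photon * aperture_cm2 * exposure_s)
    radiance = photon_rate / pixel_sr
    return radiance / (1e6 / (4 * np.pi))

print(counts_to_rayleighs(500.0))
```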

  5. Feature maps driven no-reference image quality prediction of authentically distorted images

    NASA Astrophysics Data System (ADS)

    Ghadiyaram, Deepti; Bovik, Alan C.

    2015-03-01

    Current blind image quality prediction models rely on benchmark databases comprised of singly and synthetically distorted images, thereby learning image features that are only adequate to predict human perceived visual quality on such inauthentic distortions. However, real world images often contain complex mixtures of multiple distortions. Rather than a) discounting the effect of these mixtures of distortions on an image's perceptual quality and considering only the dominant distortion or b) using features that are only proven to be efficient for singly distorted images, we deeply study the natural scene statistics of authentically distorted images, in different color spaces and transform domains. We propose a feature-maps-driven statistical approach which avoids any latent assumptions about the type of distortion(s) contained in an image, and focuses instead on modeling the remarkable consistencies in the scene statistics of real world images in the absence of distortions. We design a deep belief network that takes model-based statistical image features derived from a very large database of authentically distorted images as input and discovers good feature representations by generalizing over different distortion types, mixtures, and severities, which are later used to learn a regressor for quality prediction. We demonstrate the remarkable competence of our features for improving automatic perceptual quality prediction on a benchmark database and on the newly designed LIVE Authentic Image Quality Challenge Database and show that our approach of combining robust statistical features and the deep belief network dramatically outperforms the state-of-the-art.

  6. Automatic Detection of Clouds and Shadows Using High Resolution Satellite Image Time Series

    NASA Astrophysics Data System (ADS)

    Champion, Nicolas

    2016-06-01

    Detecting clouds and their shadows is one of the primary steps when processing satellite images, because clouds and shadows may degrade the quality of products such as large-area orthomosaics. The main goal of this paper is to present the automatic method developed at IGN-France for detecting clouds and shadows in a sequence of satellite images. In our work, surface reflectance ortho-images are used; they were processed from the initial satellite images using dedicated software. The cloud detection step consists of a region-growing algorithm. Seeds are first extracted: for each input ortho-image to process, we select the other ortho-images of the sequence that intersect it, and a pixel of the input ortho-image is labelled a seed if its difference in reflectance (in the blue channel) with the overlapping ortho-images exceeds a given threshold. Clouds are then delineated using a region-growing method based on a radiometric and homogeneity criterion. The shadow detection is based on the idea that a shadow pixel is darker than in the other images of the time series. It consists of three steps. First, we compute a synthetic ortho-image covering the whole study area, whose pixels take the median value of all input reflectance ortho-images intersecting at that pixel location. Second, for each input ortho-image, a pixel is labelled shadow if its difference in reflectance (in the NIR channel) with the synthetic ortho-image is below a given threshold. Finally, an optional region-growing step may be used to refine the results. Note that pixels labelled as clouds during cloud detection are not used when computing the median value in the first step; the NIR channel is used for shadow detection because it proved better at discriminating shadow pixels.
    The method was tested on time series of Landsat 8 and Pléiades-HR images, and our first experiments show the feasibility of automating the detection of shadows and clouds in satellite image sequences.
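
    The two thresholding tests described above can be sketched for a stack of co-registered reflectance ortho-images of shape (date, row, col). Band choices follow the text; the threshold values are invented placeholders, and the region-growing refinement steps are omitted.

```python
import numpy as np

# Cloud seeds: a pixel brighter (blue band) than every other date by a margin.
# Shadow mask: a pixel darker (NIR band) than the cloud-free temporal median.

def cloud_seeds(blue_stack, t_cloud=0.15):
    """Seed if blue reflectance exceeds that of all other dates by > t_cloud."""
    others_max = np.stack([np.delete(blue_stack, i, axis=0).max(axis=0)
                           for i in range(len(blue_stack))])
    return (blue_stack - others_max) > t_cloud

def shadow_mask(nir_stack, clouds, t_shadow=0.08):
    """Shadow if darker than the cloud-free median image by > t_shadow (NIR)."""
    cloud_free = np.where(clouds, np.nan, nir_stack)
    median = np.nanmedian(cloud_free, axis=0)
    return (median - nir_stack) > t_shadow

# Tiny demo: one bright pixel on date 0, one dark pixel on date 2.
blue = np.full((3, 2, 2), 0.1); blue[0, 0, 0] = 0.5
nir = np.full((3, 2, 2), 0.3); nir[2, 1, 1] = 0.1
print(cloud_seeds(blue)[0, 0, 0], shadow_mask(nir, cloud_seeds(blue))[2, 1, 1])
# True True
```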

  7. On the influence of noise correlations in measurement data on basis image noise in dual-energylike x-ray imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roessl, Ewald; Ziegler, Andy; Proksa, Roland

    2007-03-15

    In conventional dual-energy systems, two transmission measurements with distinct spectral characteristics are performed. These measurements are used to obtain the line integrals of two basis material densities. Usually, the measurement process is such that the two measured signals can be treated as independent and therefore uncorrelated. Recently, however, a readout system for x-ray detectors has been introduced for which this is no longer the case. The readout electronics is designed to obtain simultaneous measurements of the total number of photons N and the total energy E they deposit in the sensor material. Practically, this is realized by signal replication and separate counting and integrating processing units. Since the quantities N and E are (electronically) derived from one and the same physical sensor signal, they are statistically correlated. Nevertheless, the pair N and E can be used to perform dual-energy processing following the well-known approach of Alvarez and Macovski. Formally, this means that N is identified with the first dual-energy measurement M1 and E with the second measurement M2. In the presence of input correlations between M1 = N and M2 = E, however, the corresponding analytic expressions for the basis image noise have to be modified. The main observation made in this paper is that for positively correlated data, as is the case for the simultaneous counting and integrating device mentioned above, the basis image noise is suppressed through the influence of the covariance between the two signals. We extend the previously published relations for the basis image noise to the case where the original measurements are not independent, and illustrate the importance of the input correlations by comparing the dual-energy basis image noise resulting from the device mentioned above and from a device measuring the photon numbers and the deposited energies consecutively.
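
    The covariance effect can be illustrated numerically. The linear "decomposition" with opposite-sign coefficients below is a toy stand-in for the linearized Alvarez-Macovski inversion, and all numeric values are assumptions.

```python
import numpy as np

# With positively correlated measurements (photon count N and deposited
# energy E from the same sensor signal), the covariance term can suppress
# the variance of a difference-like basis combination.

rng = np.random.default_rng(0)
n = 200_000
N = rng.poisson(1000, n).astype(float)           # photon counts
E = 60.0 * N + rng.normal(0, 15.0 * np.sqrt(N))  # energy: sum of ~N photon
                                                 # energies (60 +/- 15 keV)

E_indep = rng.permutation(E)   # same marginal statistics, correlation destroyed

a1, a2 = 3.0, -0.04            # toy basis coefficients with opposite signs
var_corr = np.var(a1 * N + a2 * E)
var_indep = np.var(a1 * N + a2 * E_indep)
print(var_corr < var_indep)    # True: the 2*a1*a2*cov(N, E) term is negative
```

    Formally, Var(a1*M1 + a2*M2) = a1^2*Var(M1) + a2^2*Var(M2) + 2*a1*a2*Cov(M1, M2); with a1*a2 < 0 and positive covariance, the cross term reduces the noise.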

  8. 40 CFR 60.44 - Standard for nitrogen oxides (NOX).

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Fossil-Fuel...) derived from gaseous fossil fuel. (2) 129 ng/J heat input (0.30 lb/MMBtu) derived from liquid fossil fuel, liquid fossil fuel and wood residue, or gaseous fossil fuel and wood residue. (3) 300 ng/J heat input (0...

  9. 40 CFR 60.44 - Standard for nitrogen oxides (NOX).

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Fossil-Fuel...) derived from gaseous fossil fuel. (2) 129 ng/J heat input (0.30 lb/MMBtu) derived from liquid fossil fuel, liquid fossil fuel and wood residue, or gaseous fossil fuel and wood residue. (3) 300 ng/J heat input (0...

  10. Satellite Image Mosaic Engine

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian

    2006-01-01

    A computer program automatically builds large, full-resolution mosaics of multispectral images of Earth landmasses from images acquired by Landsat 7, complete with matching of colors and blending between adjacent scenes. While the code has been used extensively for Landsat, it could also be used for other data sources. A single mosaic of as many as 8,000 scenes, represented by more than 5 terabytes of data and the largest set produced in this work, demonstrated what the code could do to provide global coverage. The program first statistically analyzes input images to determine areas of coverage and data-value distributions. It then transforms the input images from their original universal transverse Mercator coordinates to other geographical coordinates, with scaling. It applies a first-order polynomial brightness correction to each band in each scene. It uses a data-mask image for selecting data and blending of input scenes. Under control by a user, the program can be made to operate on small parts of the output image space, with check-point and restart capabilities. The program runs on SGI IRIX computers. It is capable of parallel processing using shared-memory code, large memories, and tens of central processing units. It can retrieve input data and store output data at locations remote from the processors on which it is executed.
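
    The first-order polynomial brightness correction mentioned above can be sketched as a gain/offset fit over the overlap between neighboring scenes. The data here are synthetic stand-ins, not Landsat values.

```python
import numpy as np

# Per-band first-order brightness correction: fit reference ~ gain*band + offset
# over the overlap region by least squares, then apply to the whole scene.

def first_order_correction(band, reference, overlap_mask):
    gain, offset = np.polyfit(band[overlap_mask], reference[overlap_mask], 1)
    return gain * band + offset

# Demo: a scene whose radiometry differs from its neighbor by a linear law.
rng = np.random.default_rng(0)
reference = rng.random((10, 10)) * 100.0
band = (reference - 5.0) / 2.0            # same scene, shifted gain and offset
overlap = np.zeros((10, 10), bool)
overlap[:5] = True                        # top half overlaps the neighbor
corrected = first_order_correction(band, reference, overlap)
print(np.allclose(corrected, reference))  # True
```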

  11. Phase-Image Encryption Based on 3D-Lorenz Chaotic System and Double Random Phase Encoding

    NASA Astrophysics Data System (ADS)

    Sharma, Neha; Saini, Indu; Yadav, AK; Singh, Phool

    2017-12-01

    In this paper, an encryption scheme for phase-images based on the 3D-Lorenz chaotic system in the Fourier domain under the 4f optical system is presented. The encryption scheme uses a random amplitude mask in the spatial domain and a random phase mask in the frequency domain. Its inputs are phase-images, which are relatively more secure than intensity images because of their non-linearity. The proposed scheme further derives its strength from the use of the 3D-Lorenz transform in the frequency domain. Although the experimental setup for optical realization of the proposed scheme has been provided, the results presented here are based on simulations in MATLAB. The scheme has been validated for grayscale images and is found to be sensitive to the encryption parameters of the Lorenz system. The attack analysis shows that the key space is large enough to resist brute-force attack, and the scheme is also resistant to noise and occlusion attacks. Statistical analysis and analysis of the correlation distribution of adjacent pixels have been performed to test the efficacy of the encryption scheme. The results indicate that the proposed encryption scheme possesses a high level of security.
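
    The double random phase encoding pipeline described above (amplitude mask in the spatial domain, phase mask in the Fourier domain, phase-image input) can be sketched as follows; the 3D-Lorenz chaotic keystream of the paper is replaced by a seeded PRNG purely for brevity.

```python
import numpy as np

# Double random phase encoding of a phase-image. Decryption with the correct
# keys inverts each step in reverse order.

rng = np.random.default_rng(42)
img = rng.random((64, 64))                    # stand-in grayscale image in [0, 1)
phase_image = np.exp(1j * np.pi * img)        # encode gray levels as phase

amp_mask = 0.5 + 0.5 * rng.random(img.shape)  # spatial amplitude mask (kept > 0)
phase_mask = np.exp(1j * 2 * np.pi * rng.random(img.shape))  # Fourier-plane mask

encrypted = np.fft.ifft2(np.fft.fft2(phase_image * amp_mask) * phase_mask)

spectrum = np.fft.fft2(encrypted) / phase_mask   # undo the Fourier-plane mask
decrypted = np.fft.ifft2(spectrum) / amp_mask    # undo the amplitude mask
recovered = np.angle(decrypted) / np.pi          # back to gray levels

print(np.allclose(recovered, img))            # True
```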

  12. Neural Computation of Surface Border Ownership and Relative Surface Depth from Ambiguous Contrast Inputs.

    PubMed

    Dresp-Langley, Birgitta; Grossberg, Stephen

    2016-01-01

    The segregation of image parts into foreground and background is an important aspect of the neural computation of 3D scene perception. To achieve such segregation, the brain needs information about border ownership; that is, the belongingness of a contour to a specific surface represented in the image. This article presents psychophysical data derived from 3D percepts of figure and ground that were generated by presenting 2D images composed of spatially disjoint shapes that pointed inward or outward relative to the continuous boundaries that they induced along their collinear edges. The shapes in some images had the same contrast (black or white) with respect to the background gray. Other images included opposite contrasts along each induced continuous boundary. Psychophysical results demonstrate conditions under which figure-ground judgment probabilities in response to these ambiguous displays are determined by the orientation of contrasts only, not by their relative contrasts, despite the fact that many border ownership cells in cortical area V2 respond to a preferred relative contrast. Studies are also reviewed in which both polarity-specific and polarity-invariant properties obtain. The FACADE and 3D LAMINART models are used to explain these data.

  13. Neural Computation of Surface Border Ownership and Relative Surface Depth from Ambiguous Contrast Inputs

    PubMed Central

    Dresp-Langley, Birgitta; Grossberg, Stephen

    2016-01-01

    The segregation of image parts into foreground and background is an important aspect of the neural computation of 3D scene perception. To achieve such segregation, the brain needs information about border ownership; that is, the belongingness of a contour to a specific surface represented in the image. This article presents psychophysical data derived from 3D percepts of figure and ground that were generated by presenting 2D images composed of spatially disjoint shapes that pointed inward or outward relative to the continuous boundaries that they induced along their collinear edges. The shapes in some images had the same contrast (black or white) with respect to the background gray. Other images included opposite contrasts along each induced continuous boundary. Psychophysical results demonstrate conditions under which figure-ground judgment probabilities in response to these ambiguous displays are determined by the orientation of contrasts only, not by their relative contrasts, despite the fact that many border ownership cells in cortical area V2 respond to a preferred relative contrast. Studies are also reviewed in which both polarity-specific and polarity-invariant properties obtain. The FACADE and 3D LAMINART models are used to explain these data. PMID:27516746

  14. Black optic display

    DOEpatents

    Veligdan, James T.

    1997-01-01

    An optical display includes a plurality of stacked optical waveguides having first and second opposite ends collectively defining an image input face and an image screen, respectively, with the screen being oblique to the input face. Each of the waveguides includes a transparent core bound by a cladding layer having a lower index of refraction for effecting internal reflection of image light transmitted into the input face to project an image on the screen, with each of the cladding layers including a cladding cap integrally joined thereto at the waveguide second ends. Each of the cores is beveled at the waveguide second end so that the cladding cap is viewable through the transparent core. Each of the cladding caps is black for absorbing external ambient light incident upon the screen for improving contrast of the image projected internally on the screen.

  15. Using model order tests to determine sensory inputs in a motion study

    NASA Technical Reports Server (NTRS)

    Repperger, D. W.; Junker, A. M.

    1977-01-01

    In the study of motion effects on tracking performance, a problem of interest is determining what sensory inputs a human uses in controlling the tracking task. In the approach presented here, a simple canonical model (PID, or proportional-integral-derivative, structure) is used to model the human's input-output time series. A study of significant reductions of the output error loss functional is conducted as different permutations of parameters are considered. Since this canonical model includes parameters that are related to inputs to the human (such as the error signal, its derivative, and its integral), the study of model order is equivalent to the study of which sensory inputs are being used by the tracker. The parameters that have the greatest effect on significantly reducing the loss function are obtained. In this manner the identification procedure converts the problem of testing for model order into the problem of determining sensory inputs.
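
    The model-order test can be sketched by regressing a simulated operator's output on the error signal, its integral, and its derivative, and asking which terms significantly reduce the loss functional. The synthetic "human" below uses proportional and derivative terms but no integral term; all signals and coefficients are invented.

```python
import numpy as np

# Compare residual-sum-of-squares loss as candidate sensory-input terms are
# dropped: a large loss increase marks an input the operator actually uses.

rng = np.random.default_rng(3)
dt = 0.01
e = np.cumsum(rng.normal(0, 1, 2000)) * dt       # synthetic error signal
e_int = np.cumsum(e) * dt                        # its integral
e_der = np.gradient(e, dt)                       # its derivative

u = 2.0 * e + 0.5 * e_der + rng.normal(0, 0.1, e.size)  # simulated output (P + D)

def loss(columns):
    """RSS of a least-squares fit of u on the given input columns."""
    X = np.column_stack(columns)
    beta = np.linalg.lstsq(X, u, rcond=None)[0]
    return np.sum((u - X @ beta) ** 2)

full = loss([e, e_int, e_der])
drop_i = loss([e, e_der])     # remove the integral term
drop_d = loss([e, e_int])     # remove the derivative term
print(drop_i / full, drop_d / full)  # dropping I changes little; dropping D a lot
```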

  16. Dependence of image quality on image operator and noise for optical diffusion tomography

    NASA Astrophysics Data System (ADS)

    Chang, Jenghwa; Graber, Harry L.; Barbour, Randall L.

    1998-04-01

    By applying linear perturbation theory to the radiation transport equation, the inverse problem of optical diffusion tomography can be reduced to a set of linear equations, W mu = R, where W is the weight function, mu is the vector of cross-section perturbations to be imaged, and R is the vector of perturbations in the detector readings. We have studied the dependence of image quality on added systematic error and/or random noise in W and R. Tomographic data were collected from cylindrical phantoms, with and without added inclusions, using Monte Carlo methods. Image reconstruction was accomplished using a constrained conjugate gradient descent method. Results show that accurate images containing few artifacts are obtained when W is derived from a reference state whose optical thickness matches that of the unknown test medium. Comparable image quality was also obtained for unmatched W, but the location of the target becomes more inaccurate as the mismatch increases. Results of the noise study show that image quality is much more sensitive to noise in W than in R, and the impact of noise increases with the number of iterations. Images reconstructed after pure noise was substituted for R consistently contain large peaks clustered about the cylinder axis, an initially unexpected structure: random input produces a non-random output. This finding suggests that algorithms sensitive to the evolution of this feature could be developed to suppress noise effects.

  17. Real-Time Adaptive Color Segmentation by Neural Networks

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.

    2004-01-01

    Artificial neural networks that would utilize the cascade error projection (CEP) algorithm have been proposed as means of autonomous, real-time, adaptive color segmentation of images that change with time. In the original intended application, such a neural network would be used to analyze digitized color video images of terrain on a remote planet as viewed from an uninhabited spacecraft approaching the planet. During descent toward the surface of the planet, information on the segmentation of the images into differently colored areas would be updated adaptively in real time to capture changes in contrast, brightness, and resolution, all in an effort to identify a safe and scientifically productive landing site and provide control feedback to steer the spacecraft toward that site. Potential terrestrial applications include monitoring images of crops to detect insect invasions and monitoring of buildings and other facilities to detect intruders. The CEP algorithm is reliable and is well suited to implementation in very-large-scale integrated (VLSI) circuitry. It was chosen over other neural-network learning algorithms because it is better suited to real-time learning: it provides a self-evolving neural-network structure, requires fewer iterations to converge, and is more tolerant of low resolution (that is, fewer bits) in the quantization of neural-network synaptic weights. Consequently, a CEP neural network learns relatively quickly, and the circuitry needed to implement it is relatively simple. Like other neural networks, a CEP neural network includes an input layer, hidden units, and output units (see figure). As in other neural networks, a CEP network is presented with a succession of input training patterns, giving rise to a set of outputs that are compared with the desired outputs. Also as in other neural networks, the synaptic weights are updated iteratively in an effort to bring the outputs closer to target values.
A distinctive feature of the CEP neural network and algorithm is that each update of synaptic weights takes place in conjunction with the addition of another hidden unit, which then remains in place as still other hidden units are added on subsequent iterations. For a given training pattern, the synaptic weight between (1) the inputs and the previously added hidden units and (2) the newly added hidden unit is updated by an amount proportional to the partial derivative of a quadratic error function with respect to the synaptic weight. The synaptic weight between the newly added hidden unit and each output unit is given by a more complex function that involves the errors between the outputs and their target values, the transfer functions (hyperbolic tangents) of the neural units, and the derivatives of the transfer functions.

  18. Knowledge-based decision tree approach for mapping spatial distribution of rice crop using C-band synthetic aperture radar-derived information

    NASA Astrophysics Data System (ADS)

    Mishra, Varun Narayan; Prasad, Rajendra; Kumar, Pradeep; Srivastava, Prashant K.; Rai, Praveen Kumar

    2017-10-01

    Updated and accurate information on rice-growing areas is vital for food security and for investigating the environmental impact of rice ecosystems. The intent of this work is to explore the feasibility of dual-polarimetric C-band Radar Imaging Satellite-1 (RISAT-1) data for delineating rice crop fields from other land cover features. Two polarization combinations of RISAT-1 backscatter, namely the ratio (HH/HV) and the difference (HH-HV), significantly enhanced the backscatter difference between rice and nonrice categories. With these inputs, a QUEST decision tree (DT) classifier was successfully employed to extract the spatial distribution of rice crop areas. The results showed the optimal polarization combination to be HH along with HH/HV and HH-HV for rice crop mapping, with an accuracy of 88.57%. Results were further compared with a rice crop map derived from the Landsat-8 Operational Land Imager (OLI) optical sensor. Spatial agreement of almost 90% was achieved between the outputs produced from Landsat-8 OLI and RISAT-1 data. The simplicity of the approach used in this work may make it an effective tool for rice crop mapping.
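
    The feature construction and decision-tree step can be sketched as follows. The synthetic linear-backscatter samples are invented stand-ins for rice and non-rice pixels, not RISAT-1 data, and the tree here is sklearn's CART rather than the QUEST algorithm used in the paper.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# HH backscatter plus the HH/HV ratio and HH-HV difference as inputs to a
# decision tree classifier separating rice from non-rice pixels.

rng = np.random.default_rng(5)
n = 500
hh_rice, hv_rice = rng.normal(0.08, 0.01, n), rng.normal(0.02, 0.005, n)
hh_other, hv_other = rng.normal(0.05, 0.01, n), rng.normal(0.03, 0.005, n)

def features(hh, hv):
    return np.column_stack([hh, hh / hv, hh - hv])   # HH, ratio, difference

X = np.vstack([features(hh_rice, hv_rice), features(hh_other, hv_other)])
y = np.array([1] * n + [0] * n)                      # 1 = rice, 0 = non-rice

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(clf.score(X, y))                               # high training accuracy
```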

  19. Photolysis Rate Coefficient Calculations in Support of SOLVE Campaign

    NASA Technical Reports Server (NTRS)

    Lloyd, Steven A.; Swartz, William H.

    2001-01-01

    The objectives for this SOLVE project were 3-fold. First, we sought to calculate a complete set of photolysis rate coefficients (j-values) for the campaign along the ER-2 and DC-8 flight tracks. En route to this goal, it would be necessary to develop a comprehensive set of input geophysical conditions (e.g., ozone profiles), derived from various climatological, aircraft, and remotely sensed datasets, in order to model the radiative transfer of the atmosphere accurately. These j-values would then need validation by comparison with flux-derived j-value measurements. The second objective was to analyze chemistry along back trajectories using the NASA/Goddard chemistry trajectory model initialized with measurements of trace atmospheric constituents. This modeling effort would provide insight into the completeness of current measurements and the chemistry of Arctic wintertime ozone loss. Finally, we sought to coordinate stellar occultation measurements of ozone (and thus ozone loss) during SOLVE using the Midcourse Space Experiment(MSX)/Ultraviolet and Visible Imagers and Spectrographic Imagers (UVISI) satellite instrument. Such measurements would determine ozone loss during the Arctic polar night and represent the first significant science application of space-based stellar occultation in the Earth's atmosphere.

  20. A comparison of ordinary fuzzy and intuitionistic fuzzy approaches in visualizing the image of flat electroencephalography

    NASA Astrophysics Data System (ADS)

    Zenian, Suzelawati; Ahmad, Tahir; Idris, Amidora

    2017-09-01

Medical imaging is a subfield of image processing that deals with medical images. It is crucial for visualizing body parts in a non-invasive way using appropriate image processing techniques. Generally, image processing is used to enhance the visual appearance of images for further interpretation. However, the pixel values of an image may not be precise, as uncertainty arises within the gray values of an image due to several factors. In this paper, the input and output images of Flat Electroencephalography (fEEG) of an epileptic patient at varied times are presented. Furthermore, ordinary fuzzy and intuitionistic fuzzy approaches are applied to the input images and the results of the two approaches are compared.

  1. Infrared analysis of LMC superbubbles

    NASA Technical Reports Server (NTRS)

    Verter, Fran; Dwek, Eli

    1990-01-01

    Researchers are analyzing three superbubbles in the Large Magellanic Cloud (LMC), cataloged by Meaburn (1980) as LMC-1, LMC-4 (a.k.a. Shapley Constellation III), and LMC-5. Superbubbles are the largest infrared sources in the disks of external galaxies. Their expansion requires multiple supernovae from successive generations of star formation. In LMC superbubbles, the grains swept up by shocks and winds represent an interstellar medium (ISM) whose abundances are quite different from the Galaxy. By applying the Dwek (1986) grain model, we can derive the composition and size spectrum of the grains. The inputs to this model are the dust emission in the four Infrared Astronomy Satellite (IRAS) bands and the interstellar radiation field (ISRF) that provides the heating. The first step in the project is to derive the ISRF for star-forming regions on the periphery of superbubbles. Researchers are doing this by combining observations at several wavelengths to determine the energy budget of the region. They will use a UV image to trace the ionizing stellar radiation that escapes, an H alpha image to trace the ionizing stellar radiation that is absorbed by gas, and the four IRAS images to trace the stellar radiation, both ionizing and non-ionizing, that is absorbed by dust. This multi-wavelength approach has the advantages that we do not have to assume the shape of the IMF or the extinction of the source.

  2. Automated Glacier Mapping using Object Based Image Analysis. Case Studies from Nepal, the European Alps and Norway

    NASA Astrophysics Data System (ADS)

    Vatle, S. S.

    2015-12-01

Frequent and up-to-date glacier outlines are needed for many applications of glaciology: not only glacier area change analysis, but also masks for volume or velocity analysis, the estimation of water resources, and model input data. Remote sensing offers a good option for creating glacier outlines over large areas, but manual correction is frequently necessary, especially in areas containing supraglacial debris. We show three different workflows for mapping clean ice and debris-covered ice within Object Based Image Analysis (OBIA). By working at the object level as opposed to the pixel level, OBIA facilitates using contextual, spatial and hierarchical information when assigning classes, and additionally permits the handling of multiple data sources. Our first example shows mapping debris-covered ice in the Manaslu Himalaya, Nepal. SAR coherence data are used in combination with optical and topographic data to classify debris-covered ice, obtaining an accuracy of 91%. Our second example uses a high-resolution LiDAR-derived DEM over the Hohe Tauern National Park in Austria. Breaks in surface morphology are used in creating image objects; debris-covered ice is then classified using a combination of spectral, thermal and topographic properties. Lastly, we show a completely automated workflow for mapping glacier ice in Norway. The NDSI and NIR/SWIR band ratio are used to map clean ice over the entire country, but the thresholds are calculated automatically based on a histogram of each image subset. This means that in principle any Landsat scene can be input and the clean ice extracted automatically. Debris-covered ice can be included semi-automatically using contextual and morphological information.
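The per-scene automatic thresholding can be illustrated with a histogram-based (Otsu-style) cutoff on a synthetic band-ratio image. The abstract does not specify its threshold rule, so Otsu's criterion is an assumption here, and the ratio values are invented:

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Pick the cutoff that maximizes between-class variance of the
    histogram: (mu_T*w0 - mu)^2 / (w0*w1), scanned over all bin centers."""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                    # class-0 probability up to each bin
    mu = np.cumsum(p * centers)          # class-0 cumulative mean mass
    mu_t = mu[-1]
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    between = np.full(nbins, -np.inf)
    between[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[int(np.argmax(between))]

# Synthetic NIR/SWIR ratio: ice pixels high, non-ice pixels low (invented).
rng = np.random.default_rng(0)
ratio = np.concatenate([rng.normal(1.2, 0.2, 4000), rng.normal(4.0, 0.5, 1000)])
t = otsu_threshold(ratio)
ice_mask = ratio > t
```

Because the threshold is recomputed from each scene's own histogram, the same code applies to any input scene, which is the property the abstract relies on.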

  3. Noise performance limits of advanced x-ray imagers employing poly-Si-based active pixel architectures

    NASA Astrophysics Data System (ADS)

    Koniczek, Martin; El-Mohri, Youcef; Antonuk, Larry E.; Liang, Albert; Zhao, Qihua; Jiang, Hao

    2011-03-01

A decade after the clinical introduction of active matrix flat-panel imagers (AMFPIs), the performance of this technology continues to be limited by the relatively large additive electronic noise of these systems, resulting in significant loss of detective quantum efficiency (DQE) under conditions of low exposure or high spatial frequencies. An increasingly promising approach for overcoming such limitations involves the incorporation of in-pixel amplification circuits, referred to as active pixel (AP) architectures, based on low-temperature polycrystalline silicon (poly-Si) thin-film transistors (TFTs). In this study, a methodology for theoretically examining the limiting noise and DQE performance of circuits employing 1-stage in-pixel amplification is presented. This methodology combines SPICE circuit simulations with cascaded systems modeling. In these simulations, a device model based on the RPI poly-Si TFT model is used, with additional controlled current sources corresponding to thermal and flicker (1/f) noise. Model parameters suitable for these simulations are extracted from measurements of transfer and output characteristics (as well as current noise densities) performed on individual, representative poly-Si TFT test devices. The input stimuli and operating-point-dependent scaling of the current sources are derived from the measured current noise densities (for flicker noise) or from fundamental equations (for thermal noise). Noise parameters obtained from the simulations, along with other parametric information, are input to a cascaded systems model of an AP imager design to provide estimates of DQE performance. In this paper, this method of combining circuit simulations and cascaded systems analysis to predict the lower limits on additive noise (and upper limits on DQE) for large-area AP imagers, with signal levels representative of those generated at fluoroscopic exposures, is described, and initial results are reported.

  4. Mapping the Daily Progression of Large Wildland Fires Using MODIS Active Fire Data

    NASA Technical Reports Server (NTRS)

    Veraverbeke, Sander; Sedano, Fernando; Hook, Simon J.; Randerson, James T.; Jin, Yufang; Rogers, Brendan

    2013-01-01

    High temporal resolution information on burned area is a prerequisite for incorporating bottom-up estimates of wildland fire emissions in regional air transport models and for improving models of fire behavior. We used the Moderate Resolution Imaging Spectroradiometer (MODIS) active fire product (MO(Y)D14) as input to a kriging interpolation to derive continuous maps of the evolution of nine large wildland fires. For each fire, local input parameters for the kriging model were defined using variogram analysis. The accuracy of the kriging model was assessed using high resolution daily fire perimeter data available from the U.S. Forest Service. We also assessed the temporal reporting accuracy of the MODIS burned area products (MCD45A1 and MCD64A1). Averaged over the nine fires, the kriging method correctly mapped 73% of the pixels within the accuracy of a single day, compared to 33% for MCD45A1 and 53% for MCD64A1.
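A minimal ordinary-kriging sketch of the interpolation step follows. The exponential variogram parameters are hand-set rather than fitted by variogram analysis as in the paper, and the "day of burning" observations are toy values:

```python
import numpy as np

def exp_variogram(h, nugget, sill, vrange):
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / vrange))

def ordinary_krige(xy_obs, z_obs, xy_new, nugget=0.01, sill=1.0, vrange=5.0):
    """Ordinary kriging with an exponential variogram: solve the standard
    (n+1)x(n+1) system whose last row/column (a Lagrange multiplier)
    forces the weights to sum to one."""
    n = len(z_obs)
    d = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    gamma = exp_variogram(d, nugget, sill, vrange)
    np.fill_diagonal(gamma, 0.0)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma
    A[n, n] = 0.0
    est = np.empty(len(xy_new))
    for i, p in enumerate(xy_new):
        g = exp_variogram(np.linalg.norm(xy_obs - p, axis=1), nugget, sill, vrange)
        w = np.linalg.solve(A, np.append(g, 1.0))
        est[i] = w[:n] @ z_obs
    return est

# Toy "day of burning" observations that advance eastward across a fire.
obs = np.array([[0.0, 0.0], [2.0, 1.0], [4.0, 0.0], [6.0, 1.0], [8.0, 0.0]])
days = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
grid = np.array([[1.0, 0.5], [5.0, 0.5]])
est = ordinary_krige(obs, days, grid)
```

Interpolated grid cells take day values between their neighboring detections, which is how sparse active-fire points become a continuous daily progression map.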

  5. Input Scanners: A Growing Impact In A Diverse Marketplace

    NASA Astrophysics Data System (ADS)

    Marks, Kevin E.

    1989-08-01

    Just as newly invented photographic processes revolutionized the printing industry at the turn of the century, electronic imaging has affected almost every computer application today. To completely emulate traditionally mechanical means of information handling, computer based systems must be able to capture graphic images. Thus, there is a widespread need for the electronic camera, the digitizer, the input scanner. This paper will review how various types of input scanners are being used in many diverse applications. The following topics will be covered: - Historical overview of input scanners - New applications for scanners - Impact of scanning technology on select markets - Scanning systems issues

  6. Scheme of Optical Image Encryption with Digital Information Input and Dynamic Encryption Key based on Two LC SLMs

    NASA Astrophysics Data System (ADS)

    Bondareva, A. P.; Cheremkhin, P. A.; Evtikhiev, N. N.; Krasnov, V. V.; Starikov, S. N.

Scheme of optical image encryption with digital information input and dynamic encryption key based on two liquid crystal spatial light modulators and operating with spatially incoherent monochromatic illumination is experimentally implemented. Results of experiments on optical encryption and numerical decryption of images are presented. A satisfactory decryption error of 0.20 to 0.27 is achieved.

  7. Quantification of 11C-Laniquidar Kinetics in the Brain.

    PubMed

    Froklage, Femke E; Boellaard, Ronald; Bakker, Esther; Hendrikse, N Harry; Reijneveld, Jaap C; Schuit, Robert C; Windhorst, Albert D; Schober, Patrick; van Berckel, Bart N M; Lammertsma, Adriaan A; Postnov, Andrey

    2015-11-01

    Overexpression of the multidrug efflux transport P-glycoprotein may play an important role in pharmacoresistance. (11)C-laniquidar is a newly developed tracer of P-glycoprotein expression. The aim of this study was to develop a pharmacokinetic model for quantification of (11)C-laniquidar uptake and to assess its test-retest variability. Two (test-retest) dynamic (11)C-laniquidar PET scans were obtained in 8 healthy subjects. Plasma input functions were obtained using online arterial blood sampling with metabolite corrections derived from manual samples. Coregistered T1 MR images were used for region-of-interest definition. Time-activity curves were analyzed using various plasma input compartmental models. (11)C-laniquidar was metabolized rapidly, with a parent plasma fraction of 50% at 10 min after tracer injection. In addition, the first-pass extraction of (11)C-laniquidar was low. (11)C-laniquidar time-activity curves were best fitted to an irreversible single-tissue compartment (1T1K) model using conventional models. Nevertheless, significantly better fits were obtained using 2 parallel single-tissue compartments, one for parent tracer and the other for labeled metabolites (dual-input model). Robust K1 results were also obtained by fitting the first 5 min of PET data to the 1T1K model, at least when 60-min plasma input data were used. For both models, the test-retest variability of (11)C-laniquidar rate constant for transfer from arterial plasma to tissue (K1) was approximately 19%. The accurate quantification of (11)C-laniquidar kinetics in the brain is hampered by its fast metabolism and the likelihood that labeled metabolites enter the brain. Best fits for the entire 60 min of data were obtained using a dual-input model, accounting for uptake of (11)C-laniquidar and its labeled metabolites. Alternatively, K1 could be obtained from a 5-min scan using a standard 1T1K model. In both cases, the test-retest variability of K1 was approximately 19%. 
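The irreversible single-tissue (1T1K) model reduces to C_T(t) = K1 * integral of C_p(s) ds, so K1 can be recovered by a linear fit against the integrated input. A toy sketch with an invented plasma curve (the dual-input metabolite model described in the abstract is omitted):

```python
import numpy as np

def tissue_curve_1t1k(t, cp, k1):
    """Irreversible single-tissue model (k2 = 0): C_T(t) equals k1 times the
    cumulative trapezoid integral of the plasma input Cp."""
    integ = np.concatenate([[0.0],
                            np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))])
    return k1 * integ

t = np.linspace(0.0, 60.0, 601)          # minutes
cp = 10.0 * t * np.exp(-t / 2.0)         # invented plasma input function
ct = tissue_curve_1t1k(t, cp, k1=0.15)   # noise-free "measured" tissue curve

# The model is linear in K1, so the one-parameter least-squares estimate is
# just a projection of the tissue curve onto the unit-K1 basis curve.
basis = tissue_curve_1t1k(t, cp, 1.0)
k1_hat = float(basis @ ct / (basis @ basis))
```

This linearity is also why the abstract's short-scan variant works: K1 is identifiable from the first few minutes provided the full plasma input is available.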
© 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.

  8. Spatial and temporal variations of aeolian sediment input to the tributaries (the Ten Kongduis) of the upper Yellow River

    NASA Astrophysics Data System (ADS)

    Yang, Hui; Shi, Changxing

    2018-02-01

The Ten Kongduis, tributaries of the upper Yellow River located in Inner Mongolia, northern China, drain an area of active coupled wind-water erosion and hence form one of the main sediment sources of the Yellow River. In this study, we analyzed the spatial and temporal variations of aeolian sediment input to the river channel. For this purpose, three segments of sand dune-covered banks of the Maobula and Xiliugou kongduis were surveyed three times from November 2014 to November 2015 using a 3-D laser scanner, and the displacement of banks of desert reaches of three kongduis was derived by interpreting remote sensing images taken in the years 2005 to 2015. The surveyed sand dunes reveal that the middle kongduis were fed by aeolian sand through dunes moving toward the river channels. The amount of aeolian sediment input was estimated at about 14.94 × 10^4 t/yr in the Maobula Kongdui and about 5.76 × 10^4 t/yr in the Xiliugou Kongdui during the period from November 2014 to November 2015. According to the interpretation of remote sensing images, the aeolian sediment input to the Maobula Kongdui was about 15.74 × 10^4 t in 2011 and 18.2 × 10^4 t in 2012. In the Xiliugou Kongdui, it was in the range of 9.52 × 10^4 to 9.99 × 10^4 t in 2012 and in the springs of 2013 and 2015. In the Hantaichuan Kongdui, it was 7.04 × 10^4 t in 2012, 7.53 × 10^4 t in the spring of 2013, and 8.52 × 10^4 t in the spring of 2015. Owing to changes in wind and rainfall, both interseasonal and interannual sediment storage and release mechanisms operate as aeolian sand is delivered into the kongduis. Over the long term, however, all of the aeolian sediment input to the Ten Kongduis is expected to be delivered downstream by the river flows.

  9. 40 CFR 60.44 - Standard for nitrogen oxides (NOX).

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Fossil-Fuel... NO2 in excess of: (1) 86 ng/J heat input (0.20 lb/MMBtu) derived from gaseous fossil fuel. (2) 129 ng/J heat input (0.30 lb/MMBtu) derived from liquid fossil fuel, liquid fossil fuel and wood residue...
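As a side calculation (not part of the regulation), the paired SI/English units in these limits can be checked by converting lb/MMBtu to ng/J; the regulation's ng/J figures are the rounded results of exactly this conversion:

```python
# 1 lb = 453.59237 g; 1 MMBtu = 1e6 Btu at 1055.05585 J/Btu (IT Btu).
NG_PER_LB = 453.59237 * 1e9       # nanograms per pound
J_PER_MMBTU = 1.05505585e9        # joules per million Btu

def lb_per_mmbtu_to_ng_per_j(x):
    return x * NG_PER_LB / J_PER_MMBTU

gas_limit = round(lb_per_mmbtu_to_ng_per_j(0.20))     # → 86 ng/J
liquid_limit = round(lb_per_mmbtu_to_ng_per_j(0.30))  # → 129 ng/J
```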

  10. 40 CFR 60.44 - Standard for nitrogen oxides (NOX).

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Fossil-Fuel... NO2 in excess of: (1) 86 ng/J heat input (0.20 lb/MMBtu) derived from gaseous fossil fuel. (2) 129 ng/J heat input (0.30 lb/MMBtu) derived from liquid fossil fuel, liquid fossil fuel and wood residue...

  11. 40 CFR 60.43 - Standard for sulfur dioxide (SO2).

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Fossil-Fuel.../J heat input (0.80 lb/MMBtu) derived from liquid fossil fuel or liquid fossil fuel and wood residue. (2) 520 ng/J heat input (1.2 lb/MMBtu) derived from solid fossil fuel or solid fossil fuel and wood...

  12. 40 CFR 60.43 - Standard for sulfur dioxide (SO2).

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Fossil-Fuel.../J heat input (0.80 lb/MMBtu) derived from liquid fossil fuel or liquid fossil fuel and wood residue. (2) 520 ng/J heat input (1.2 lb/MMBtu) derived from solid fossil fuel or solid fossil fuel and wood...

  13. 40 CFR 60.44 - Standard for nitrogen oxides (NOX).

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Fossil-Fuel... NO2 in excess of: (1) 86 ng/J heat input (0.20 lb/MMBtu) derived from gaseous fossil fuel. (2) 129 ng/J heat input (0.30 lb/MMBtu) derived from liquid fossil fuel, liquid fossil fuel and wood residue...

  14. 40 CFR 60.43 - Standard for sulfur dioxide (SO2).

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Fossil-Fuel.../J heat input (0.80 lb/MMBtu) derived from liquid fossil fuel or liquid fossil fuel and wood residue. (2) 520 ng/J heat input (1.2 lb/MMBtu) derived from solid fossil fuel or solid fossil fuel and wood...

  15. Flight instrument and telemetry response and its inversion

    NASA Technical Reports Server (NTRS)

    Weinberger, M. R.

    1971-01-01

Mathematical models of rate gyros, servo accelerometers, pressure transducers, and telemetry systems were derived, and their parameters were obtained from laboratory tests. Analog computer simulations were used extensively to verify model validity for fast and large input signals. An optimal inversion method was derived to reconstruct input signals from noisy output signals, and a computer program was prepared.

  16. Statistical linearization for multi-input/multi-output nonlinearities

    NASA Technical Reports Server (NTRS)

    Lin, Ching-An; Cheng, Victor H. L.

    1991-01-01

    Formulas are derived for the computation of the random input-describing functions for MIMO nonlinearities; these straightforward and rigorous derivations are based on the optimal mean square linear approximation. The computations involve evaluations of multiple integrals. It is shown that, for certain classes of nonlinearities, multiple-integral evaluations are obviated and the computations are significantly simplified.
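For a scalar (single-input/single-output) nonlinearity with zero-mean Gaussian input, the optimal mean-square linear gain reduces to N = E[x f(x)] / E[x^2]; the MIMO formulas in the paper generalize this to multiple integrals. A sketch evaluating the scalar expectation with Gauss-Hermite quadrature (the specific nonlinearities are examples, not taken from the paper):

```python
import numpy as np

def describing_gain(f, sigma, n=64):
    """Random-input describing function (MMSE linear gain) of a scalar
    nonlinearity for zero-mean Gaussian input: N = E[x f(x)] / E[x^2].
    E[g(X)] for X ~ N(0, sigma^2) is (1/sqrt(pi)) * sum_i w_i g(sqrt(2)*sigma*x_i)
    with Gauss-Hermite nodes x_i and weights w_i."""
    nodes, weights = np.polynomial.hermite.hermgauss(n)
    x = np.sqrt(2.0) * sigma * nodes
    e_xf = (weights * x * f(x)).sum() / np.sqrt(np.pi)
    return e_xf / sigma ** 2

sigma = 0.8
gain_tanh = describing_gain(np.tanh, sigma)
gain_cubic = describing_gain(lambda v: v ** 3, sigma)  # analytically 3*sigma^2
```

The cubic case shows the integral-free simplification the paper exploits for certain nonlinearity classes: E[x^4] = 3 sigma^4 gives N = 3 sigma^2 in closed form.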

  17. Three-dimensional image display system using stereogram and holographic optical memory techniques

    NASA Astrophysics Data System (ADS)

    Kim, Cheol S.; Kim, Jung G.; Shin, Chang-Mok; Kim, Soo-Joong

    2001-09-01

In this paper, we implemented a three-dimensional image display system using stereogram and holographic optical memory techniques, which can store many images and reconstruct them automatically. In this system, the incident angle of the reference beam must be controlled in real time to store and reconstruct stereo images, so we used a BPH (binary phase hologram) and an LCD (liquid crystal display) to control the reference beam. Input images are presented on the LCD without a polarizer/analyzer to maintain uniform beam intensities regardless of the brightness of the input images. During storage, the input images and BPHs are edited using application software with the same scheduled recording time interval. The reconstructed stereo images are acquired by capturing the output images with a CCD camera placed behind the analyzer, which transforms phase information into brightness information. The reference beams are obtained by Fourier transform of the BPH, which is designed with an SA (simulated annealing) algorithm, and are displayed on the LCD at 0.05-second intervals using application software to reconstruct the stereo images. In the output plane, we used an LCD shutter synchronized to a monitor that displays alternating left- and right-eye images for depth perception. We demonstrated an optical experiment that repeatedly stores and reconstructs four stereo images in BaTiO3 using holographic optical memory techniques.

  18. Analysis of severe storm data

    NASA Technical Reports Server (NTRS)

    Hickey, J. S.

    1983-01-01

The Mesoscale Analysis and Space Sensor (MASS) Data Management and Analysis System developed by Atsuko Computing International (ACI) on the MASS HP-1000 Computer System within the Systems Dynamics Laboratory of the Marshall Space Flight Center is described. The MASS Data Management and Analysis System was successfully implemented and utilized daily by atmospheric scientists to graphically display and analyze large volumes of conventional and satellite-derived meteorological data. The scientists can interactively process various atmospheric data (Sounding, Single Level, Grid, and Image) utilizing the MASS (AVE80) software, which shares common data and user inputs, thereby reducing overhead, optimizing execution time, and thus enhancing user flexibility, usability, and understandability of the total system/software capabilities. In addition, ACI installed eight APPLE III graphics/imaging computer terminals in individual scientists' offices and integrated them into the MASS HP-1000 Computer System, providing a significant enhancement to the overall research environment.

  19. Influence of orographically steered winds on Mutsu Bay surface currents

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Satoshi; Kawamura, Hiroshi

    2005-09-01

    Effects of spatially dependent sea surface wind field on currents in Mutsu Bay, which is located at the northern end of Japanese Honshu Island, are investigated using winds derived from synthetic aperture radar (SAR) images and a numerical model. A characteristic wind pattern over the bay was evidenced from analysis of 118 SAR images and coincided with in situ observations. Wind is topographically steered with easterly winds entering the bay through the terrestrial gap and stronger wind blowing over the central water toward its mouth. Nearshore winds are weaker due to terrestrial blockages. Using the Princeton Ocean Model, we investigated currents forced by the observed spatially dependent wind field. The predicted current pattern agrees well with available observations. For a uniform wind field of equal magnitude and average direction, the circulation pattern departs from observations demonstrating that vorticity input due to spatially dependent wind stress is essential in generation of the wind-driven current in Mutsu Bay.

  20. Parameter Estimation in Atmospheric Data Sets

    NASA Technical Reports Server (NTRS)

    Wenig, Mark; Colarco, Peter

    2004-01-01

In this study the structure tensor technique is used to estimate dynamical parameters in atmospheric data sets. The structure tensor is a common tool for estimating motion in image sequences. The technique can be extended to estimate other dynamical parameters such as diffusion constants or exponential decay rates. A general mathematical framework was developed for directly estimating, from image sequences, the physical parameters that govern the underlying processes. This estimation technique can be adapted to the specific physical problem under investigation, so it can be used in a variety of applications in trace gas, aerosol, and cloud remote sensing. As a test scenario, the technique is applied to modeled dust data: vertically integrated dust concentrations are used to derive wind information, and the results can be compared to the wind vector fields that served as input to the model. Based on this analysis, a method to compute atmospheric data parameter fields will be presented.
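For pure advection, the structure-tensor estimate coincides with the least-squares solution of the brightness-constancy equations. A sketch with a synthetic translating blob, estimating a single global velocity rather than the per-pixel fields a real analysis would produce:

```python
import numpy as np

def estimate_velocity(seq, dt=1.0):
    """Global (u, v) from an image sequence (t, y, x) via the
    structure-tensor / least-squares form of brightness constancy:
    J [u v]^T = -b, with J built from summed spatial-gradient products."""
    it = np.gradient(seq, dt, axis=0)
    iy = np.gradient(seq, axis=1)
    ix = np.gradient(seq, axis=2)
    J = np.array([[(ix * ix).sum(), (ix * iy).sum()],
                  [(ix * iy).sum(), (iy * iy).sum()]])
    b = np.array([(ix * it).sum(), (iy * it).sum()])
    return np.linalg.solve(J, -b)

# Synthetic sequence: a Gaussian blob drifting at (0.5, 0.25) px/frame.
yy, xx = np.mgrid[0:64, 0:64]
seq = np.stack([np.exp(-((xx - 20 - 0.5 * t) ** 2 +
                         (yy - 30 - 0.25 * t) ** 2) / 40.0)
                for t in range(8)])
u, v = estimate_velocity(seq)
```

Replacing the advection model with a diffusion or decay term changes the linear system but not the gradient-based estimation idea, which is the extension the abstract describes.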

  1. Mapping Land Cover Types in Amazon Basin Using 1km JERS-1 Mosaic

    NASA Technical Reports Server (NTRS)

    Saatchi, Sassan S.; Nelson, Bruce; Podest, Erika; Holt, John

    2000-01-01

In this paper, the 100 meter JERS-1 Amazon mosaic image was used in a new classifier to generate a 1 km resolution land cover map. The inputs to the classifier were 1 km resolution mean backscatter and seven first-order texture measures derived from the 100 m data by using a 10 x 10 independent sampling window. The classification approach included two interdependent stages: 1) a supervised maximum a posteriori Bayesian approach to classify the mean backscatter image into 5 general land cover categories of forest, savannah, inundated, white sand, and anthropogenic vegetation classes, and 2) a texture-measure decision rule approach to further discriminate subcategory classes based on taxonomic information and biomass levels. Fourteen classes were successfully separated at the 1 km scale. The results were verified by comparison with the IBGE and AVHRR 1 km resolution land cover maps.
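The aggregation of 100 m data into 1 km features via 10 x 10 independent windows might look like the following sketch. The mean plus two simple first-order texture measures (standard deviation and range) stand in for the seven measures used in the paper, and the backscatter values are invented:

```python
import numpy as np

def block_features(img, win=10):
    """Aggregate a fine-resolution image into coarse cells: for each
    non-overlapping win x win window, return the mean and two first-order
    texture measures (standard deviation, min-max range)."""
    h, w = img.shape
    b = img[:h - h % win, :w - w % win].reshape(h // win, win, w // win, win)
    b = b.transpose(0, 2, 1, 3).reshape(h // win, w // win, -1)
    return b.mean(-1), b.std(-1), b.max(-1) - b.min(-1)

rng = np.random.default_rng(0)
img = rng.normal(-10.0, 1.5, size=(40, 40))   # toy backscatter image, dB
mean_b, std_b, range_b = block_features(img)
```

Each coarse cell then carries one mean-backscatter value for the Bayesian stage and texture values for the decision-rule stage.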

  2. Evaluation of terrestrial photogrammetric point clouds derived from thermal imagery

    NASA Astrophysics Data System (ADS)

    Metcalf, Jeremy P.; Olsen, Richard C.

    2016-05-01

    Computer vision and photogrammetric techniques have been widely applied to digital imagery producing high density 3D point clouds. Using thermal imagery as input, the same techniques can be applied to infrared data to produce point clouds in 3D space, providing surface temperature information. The work presented here is an evaluation of the accuracy of 3D reconstruction of point clouds produced using thermal imagery. An urban scene was imaged over an area at the Naval Postgraduate School, Monterey, CA, viewing from above as with an airborne system. Terrestrial thermal and RGB imagery were collected from a rooftop overlooking the site using a FLIR SC8200 MWIR camera and a Canon T1i DSLR. In order to spatially align each dataset, ground control points were placed throughout the study area using Trimble R10 GNSS receivers operating in RTK mode. Each image dataset is processed to produce a dense point cloud for 3D evaluation.

  3. Multisensor satellite data integration for sea surface wind speed and direction determination

    NASA Technical Reports Server (NTRS)

    Glackin, D. L.; Pihos, G. G.; Wheelock, S. L.

    1984-01-01

Techniques to integrate meteorological data from various satellite sensors to yield a global measure of sea surface wind speed and direction for input to the Navy's operational weather forecast models were investigated. The sensors considered had either been launched or were scheduled for launch: specifically, the GOES visible and infrared imaging sensor, the Nimbus-7 SMMR, and the DMSP SSM/I instrument. An algorithm was developed for extrapolating to the sea surface wind directions derived from successive GOES cloud images. This wind-veering algorithm is relatively simple, accounts for the major physical variables, and appears to represent the best solution attainable with existing data. An algorithm for interpolating the scattered observations to a common geographical grid was implemented. The algorithm is based on a combination of inverse distance weighting and trend surface fitting, and is suited to combining wind data from disparate sources.
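The inverse-distance-weighting half of the interpolation scheme can be sketched as follows (toy ship/buoy wind speeds at invented positions; the trend-surface term is omitted):

```python
import numpy as np

def idw(xy_obs, z_obs, xy_grid, power=2.0, eps=1e-12):
    """Inverse-distance weighting of scattered observations onto grid
    points; weights fall off as 1/d^power, and eps guards against a grid
    point coinciding with an observation."""
    d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=-1)
    w = 1.0 / (d ** power + eps)
    return (w @ z_obs) / w.sum(axis=1)

obs = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
speed = np.array([5.0, 7.0, 5.0, 7.0])        # invented wind speeds, m/s
grid = np.array([[5.0, 5.0], [9.0, 5.0]])
est = idw(obs, speed, grid)
```

A grid point equidistant from all observations returns their mean, while points nearer one source are pulled toward it, which is why blending IDW with a fitted trend surface helps when sources have large data-void regions.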

  4. Distribution of Potential Hydrothermally Altered Rocks in Central Colorado Derived From Landsat Thematic Mapper Data: A Geographic Information System Data Set

    USGS Publications Warehouse

    Knepper, Daniel H.

    2010-01-01

    As part of the Central Colorado Mineral Resource Assessment Project, the digital image data for four Landsat Thematic Mapper scenes covering central Colorado between Wyoming and New Mexico were acquired and band ratios were calculated after masking pixels dominated by vegetation, snow, and terrain shadows. Ratio values were visually enhanced by contrast stretching, revealing only those areas with strong responses (high ratio values). A color-ratio composite mosaic was prepared for the four scenes so that the distribution of potentially hydrothermally altered rocks could be visually evaluated. To provide a more useful input to a Geographic Information System-based mineral resource assessment, the information contained in the color-ratio composite raster image mosaic was converted to vector-based polygons after thresholding to isolate the strongest ratio responses and spatial filtering to reduce vector complexity and isolate the largest occurrences of potentially hydrothermally altered rocks.

  5. Three-Dimensional ISAR Imaging Method for High-Speed Targets in Short-Range Using Impulse Radar Based on SIMO Array

    PubMed Central

    Zhou, Xinpeng; Wei, Guohua; Wu, Siliang; Wang, Dawei

    2016-01-01

    This paper proposes a three-dimensional inverse synthetic aperture radar (ISAR) imaging method for high-speed targets in short-range using an impulse radar. According to the requirements for high-speed target measurement in short-range, this paper establishes the single-input multiple-output (SIMO) antenna array, and further proposes a missile motion parameter estimation method based on impulse radar. By analyzing the motion geometry relationship of the warhead scattering center after translational compensation, this paper derives the receiving antenna position and the time delay after translational compensation, and thus overcomes the shortcomings of conventional translational compensation methods. By analyzing the motion characteristics of the missile, this paper estimates the missile’s rotation angle and the rotation matrix by establishing a new coordinate system. Simulation results validate the performance of the proposed algorithm. PMID:26978372

  6. Vector generator scan converter

    DOEpatents

    Moore, J.M.; Leighton, J.F.

    1988-02-05

    High printing speeds for graphics data are achieved with a laser printer by transmitting compressed graphics data from a main processor over an I/O channel to a vector generator scan converter which reconstructs a full graphics image for input to the laser printer through a raster data input port. The vector generator scan converter includes a microprocessor with associated microcode memory containing a microcode instruction set, a working memory for storing compressed data, vector generator hardware for drawing a full graphic image from vector parameters calculated by the microprocessor, image buffer memory for storing the reconstructed graphics image and an output scanner for reading the graphics image data and inputting the data to the printer. The vector generator scan converter eliminates the bottleneck created by the I/O channel for transmitting graphics data from the main processor to the laser printer, and increases printer speed up to thirty fold. 7 figs.

  7. An open, object-based modeling approach for simulating subsurface heterogeneity

    NASA Astrophysics Data System (ADS)

    Bennett, J.; Ross, M.; Haslauer, C. P.; Cirpka, O. A.

    2017-12-01

    Characterization of subsurface heterogeneity with respect to hydraulic and geochemical properties is critical in hydrogeology as their spatial distribution controls groundwater flow and solute transport. Many approaches of characterizing subsurface heterogeneity do not account for well-established geological concepts about the deposition of the aquifer materials; those that do (i.e. process-based methods) often require forcing parameters that are difficult to derive from site observations. We have developed a new method for simulating subsurface heterogeneity that honors concepts of sequence stratigraphy, resolves fine-scale heterogeneity and anisotropy of distributed parameters, and resembles observed sedimentary deposits. The method implements a multi-scale hierarchical facies modeling framework based on architectural element analysis, with larger features composed of smaller sub-units. The Hydrogeological Virtual Reality simulator (HYVR) simulates distributed parameter models using an object-based approach. Input parameters are derived from observations of stratigraphic morphology in sequence type-sections. Simulation outputs can be used for generic simulations of groundwater flow and solute transport, and for the generation of three-dimensional training images needed in applications of multiple-point geostatistics. The HYVR algorithm is flexible and easy to customize. The algorithm was written in the open-source programming language Python, and is intended to form a code base for hydrogeological researchers, as well as a platform that can be further developed to suit investigators' individual needs. This presentation will encompass the conceptual background and computational methods of the HYVR algorithm, the derivation of input parameters from site characterization, and the results of groundwater flow and solute transport simulations in different depositional settings.

  8. Super-resolution algorithm based on sparse representation and wavelet preprocessing for remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Ren, Ruizhi; Gu, Lingjia; Fu, Haoyang; Sun, Chenglin

    2017-04-01

    An effective super-resolution (SR) algorithm is proposed for actual spectral remote sensing images based on sparse representation and wavelet preprocessing. The proposed SR algorithm mainly consists of dictionary training and image reconstruction. Wavelet preprocessing is used to establish four subbands, i.e., low frequency, horizontal, vertical, and diagonal high frequency, for an input image. As compared to the traditional approaches involving the direct training of image patches, the proposed approach focuses on the training of features derived from these four subbands. The proposed algorithm is verified using different spectral remote sensing images, e.g., moderate-resolution imaging spectroradiometer (MODIS) images with different bands, and the latest Chinese Jilin-1 satellite images with high spatial resolution. According to the visual experimental results obtained from the MODIS remote sensing data, the SR images using the proposed SR algorithm are superior to those using a conventional bicubic interpolation algorithm or traditional SR algorithms without preprocessing. Fusion algorithms, e.g., standard intensity-hue-saturation, principal component analysis, wavelet transform, and the proposed SR algorithms are utilized to merge the multispectral and panchromatic images acquired by the Jilin-1 satellite. The effectiveness of the proposed SR algorithm is assessed by parameters such as peak signal-to-noise ratio, structural similarity index, correlation coefficient, root-mean-square error, relative dimensionless global error in synthesis, relative average spectral error, spectral angle mapper, and the quality index Q4, and its performance is better than that of the standard image fusion algorithms.
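    The subband preprocessing step described above can be illustrated with a one-level 2-D Haar decomposition, a minimal stand-in for whatever wavelet basis the authors actually used; the function name and the plain pairwise averages/differences are illustrative choices, not the paper's implementation:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar decomposition into four subbands:
    low frequency (LL) and horizontal (LH), vertical (HL),
    and diagonal (HH) high-frequency detail."""
    img = np.asarray(img, dtype=float)
    # Pairwise averages/differences along columns, then rows.
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

img = np.arange(16.0).reshape(4, 4)   # toy 4x4 "remote sensing" patch
ll, lh, hl, hh = haar_dwt2(img)       # features for dictionary training
```

    In the proposed SR pipeline, features for dictionary training would be extracted from each of the four subbands rather than from raw image patches.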

  9. Robust Mapping of Incoherent Fiber-Optic Bundles

    NASA Technical Reports Server (NTRS)

    Roberts, Harry E.; Deason, Brent E.; DePlachett, Charles P.; Pilgrim, Robert A.; Sanford, Harold S.

    2007-01-01

    A method and apparatus for mapping between the positions of fibers at opposite ends of incoherent fiber-optic bundles have been invented to enable the use of such bundles to transmit images in visible or infrared light. The method is robust in the sense that it provides useful mapping even for a bundle that contains thousands of narrow, irregularly packed fibers, some of which may be defective. In a coherent fiber-optic bundle, the input and output ends of each fiber lie at identical positions in the input and output planes; therefore, the bundle can be used to transmit images without further modification. Unfortunately, the fabrication of coherent fiber-optic bundles is too labor-intensive and expensive for many applications. An incoherent fiber-optic bundle can be fabricated more easily and at lower cost, but it produces a scrambled image because the position of the end of each fiber in the input plane is generally different from the end of the same fiber in the output plane. However, the image transmitted by an incoherent fiber-optic bundle can be unscrambled (or, from a different perspective, decoded) by digital processing of the output image if the mapping between the input and output fiber-end positions is known. Thus, the present invention enables the use of relatively inexpensive fiber-optic bundles to transmit images.
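    The unscrambling step is essentially the inversion of a known fiber-end permutation. A minimal 1-D sketch with a hypothetical five-fiber mapping (the actual invention handles thousands of irregularly packed fibers, including defective ones):

```python
# Hypothetical calibration-derived map: input-plane position -> output-plane position.
input_to_output = {0: 3, 1: 0, 2: 4, 3: 1, 4: 2}

def unscramble(output_pixels, mapping):
    """Recover the input-plane image from the scrambled output-plane image."""
    restored = [0] * len(output_pixels)
    for in_pos, out_pos in mapping.items():
        restored[in_pos] = output_pixels[out_pos]
    return restored

scene = [10, 20, 30, 40, 50]                 # light entering the bundle
scrambled = [0] * 5
for in_pos, out_pos in input_to_output.items():
    scrambled[out_pos] = scene[in_pos]       # what the camera sees at the far end
print(unscramble(scrambled, input_to_output))  # → [10, 20, 30, 40, 50]
```

    Once the mapping is measured, the same lookup is applied to every frame, which is what makes inexpensive incoherent bundles usable for imaging.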

  10. CLASSIFYING MEDICAL IMAGES USING MORPHOLOGICAL APPEARANCE MANIFOLDS.

    PubMed

    Varol, Erdem; Gaonkar, Bilwaj; Davatzikos, Christos

    2013-12-31

Input features for medical image classification algorithms are extracted from raw images using a series of preprocessing steps. One common preprocessing step in computational neuroanatomy and functional brain mapping is the nonlinear registration of raw images to a common template space. Typically, the registration methods used are parametric, and their output varies greatly with changes in parameters. Most previously reported results perform registration using a fixed parameter setting and use the results as input to the subsequent classification step. The variation in registration results due to the choice of parameters thus translates into variation in the performance of the classifiers that depend on the registration step for input. Analogous issues have been investigated in the computer vision literature, where image appearance varies with pose and illumination, making classification vulnerable to these confounding parameters. The proposed methodology addresses this issue by sampling image appearances as registration parameters vary, and shows that better classification accuracies can be obtained this way compared to the conventional approach.

  11. Intra- and interspecific variation in tropical tree and liana phenology derived from Unmanned Aerial Vehicle images

    NASA Astrophysics Data System (ADS)

    Bohlman, S.; Park, J.; Muller-Landau, H. C.; Rifai, S. W.; Dandois, J. P.

    2017-12-01

Phenology is a critical driver of ecosystem processes. There is strong evidence that phenology is shifting in temperate ecosystems in response to climate change, but tropical tree and liana phenology remains poorly quantified and understood. A key challenge is that tropical forests contain hundreds of plant species with a wide variety of phenological patterns. Satellite-based observations, an important source of phenology data in northern latitudes, are hindered by frequent cloud cover in the tropics. To quantify phenology over a large number of individuals and species, we collected bi-weekly images from unmanned aerial vehicles (UAVs) in the well-studied 50-ha forest inventory plot on Barro Colorado Island, Panama. Between October 2014 and December 2015 and again in May 2015, we collected a total of 35 sets of UAV images, each with continuous coverage of the 50-ha plot, where every tree ≥ 1 cm DBH is mapped. Spectral, texture, and image information was extracted from the UAV images for individual tree crowns and then used as input for a machine learning algorithm to predict percent leaf and branch cover. We obtained the species identities of 2000 crowns in the images via field mapping. The objectives of this study are to (1) determine whether machine learning algorithms, applied to UAV images, can effectively quantify changes in leaf cover, which we term "deciduousness"; (2) determine how liana cover affects deciduousness; and (3) test how well UAV-derived deciduousness patterns match satellite-derived temporal patterns. Machine learning algorithms trained on a variety of image parameters could effectively determine leaf cover, despite variation in lighting and viewing angles. Crowns with higher liana cover have less overall deciduousness (tree + liana together) than crowns with lower liana cover. Individual crown deciduousness, summed over all crowns measured in the 50-ha plot, showed a seasonal pattern similar to MODIS EVI composited over 10 years. However, MODIS EVI phenology greened up earlier than UAV-based deciduousness, perhaps reflecting new leaf flush late in the dry season that increases EVI but not overall leaf cover. We discuss the potential mechanisms that explain variation among species and between trees and lianas, and the consequences of this variation for ecosystem processes and modeling.

  12. Performing label-fusion-based segmentation using multiple automatically generated templates.

    PubMed

    Chakravarty, M Mallar; Steadman, Patrick; van Eede, Matthijs C; Calcott, Rebecca D; Gu, Victoria; Shaw, Philip; Raznahan, Armin; Collins, D Louis; Lerch, Jason P

    2013-10-01

Classically, model-based segmentation procedures match magnetic resonance imaging (MRI) volumes to an expertly labeled atlas using nonlinear registration. The accuracy of these techniques is limited by atlas biases, misregistration, and resampling error. Multi-atlas-based approaches are used as a remedy and involve matching each subject to a number of manually labeled templates. This approach yields numerous independent segmentations that are fused using a voxel-by-voxel label-voting procedure. In this article, we demonstrate how the multi-atlas approach can be extended to work with input atlases that are unique and extremely time consuming to construct by generating a library of multiple automatically generated templates of different brains (MAGeT Brain). We demonstrate the efficacy of our method for the mouse and human using two different nonlinear registration algorithms (ANIMAL and ANTs). The input atlases consist of a high-resolution mouse brain atlas and an atlas of the human basal ganglia and thalamus derived from serial histological data. MAGeT Brain segmentation improves the identification of the mouse anterior commissure (mean Dice kappa value κ = 0.801), but may be encountering a ceiling effect for hippocampal segmentations. Applying MAGeT Brain to human subcortical structures improves segmentation accuracy for all structures compared to regular model-based techniques (κ = 0.845, 0.752, and 0.861 for the striatum, globus pallidus, and thalamus, respectively). Experiments performed with three manually derived input templates suggest that MAGeT Brain can approach or exceed the accuracy of multi-atlas label-fusion segmentation (κ = 0.894, 0.815, and 0.895 for the striatum, globus pallidus, and thalamus, respectively). Copyright © 2012 Wiley Periodicals, Inc.
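    The voxel-by-voxel label-voting fusion mentioned above can be sketched as a simple majority vote across candidate segmentations; the flattened 6-voxel image and binary labels below are hypothetical:

```python
from collections import Counter

def vote_fuse(candidate_labels):
    """Voxel-by-voxel majority vote across independent candidate segmentations."""
    n_voxels = len(candidate_labels[0])
    fused = []
    for v in range(n_voxels):
        votes = Counter(seg[v] for seg in candidate_labels)
        fused.append(votes.most_common(1)[0][0])
    return fused

# Three hypothetical template-propagated segmentations of a 6-voxel image
# (0 = background, 1 = structure of interest).
segs = [
    [0, 1, 1, 1, 0, 0],
    [0, 1, 1, 0, 0, 0],
    [0, 0, 1, 1, 1, 0],
]
print(vote_fuse(segs))  # → [0, 1, 1, 1, 0, 0]
```

    MAGeT Brain's contribution is upstream of this step: it generates a large template library automatically so that many candidate segmentations exist to vote with, even when only one expert atlas is available.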

  13. Novel view synthesis by interpolation over sparse examples

    NASA Astrophysics Data System (ADS)

    Liang, Bodong; Chung, Ronald C.

    2006-01-01

Novel view synthesis (NVS) is an important problem in image rendering. It involves synthesizing an image of a scene at any specified (novel) viewpoint, given some images of the scene at a few sample viewpoints. The general understanding is that the solution should bypass explicit 3-D reconstruction of the scene. As such, the problem has a natural tie to interpolation, even though mainstream efforts on the problem have adopted other formulations. Interpolation is about finding the output of a function f(x) for any specified input x, given a few input-output pairs {(xi,fi):i=1,2,3,...,n} of the function. If the input x is the viewpoint and f(x) is the image, the interpolation problem becomes exactly NVS. We treat the NVS problem using the interpolation formulation. In particular, we adopt the example-based interpolation (EBI) mechanism, an established mechanism for interpolating or learning functions from examples. EBI has all the desirable properties of a good interpolation: all given input-output examples are satisfied exactly, and the interpolation is smooth with minimum oscillations between the examples. We point out that EBI, however, has difficulty interpolating certain classes of functions, including the image function in the NVS problem. We propose an extension of the mechanism to overcome this limitation. We also present how the extended interpolation mechanism can be used to synthesize images at novel viewpoints. Real image results show that the mechanism has promising performance, even with very few example images.
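    As a rough sketch of exact example-based interpolation in the spirit of EBI (the paper's actual mechanism and kernel are not specified here), Gaussian radial basis functions fitted by solving a linear system reproduce every example exactly while varying smoothly between them:

```python
import numpy as np

def rbf_interpolate(x_train, f_train, x_query, sigma=1.0):
    """Exact example-based interpolation: a weighted sum of Gaussian kernels
    that passes through every (x_i, f_i) pair."""
    x_train = np.asarray(x_train, float)
    f_train = np.asarray(f_train, float)
    # Kernel (Gram) matrix between all training inputs.
    K = np.exp(-(x_train[:, None] - x_train[None, :])**2 / (2 * sigma**2))
    w = np.linalg.solve(K, f_train)          # weights that satisfy all examples
    k_q = np.exp(-(np.asarray(x_query, float)[:, None] - x_train[None, :])**2
                 / (2 * sigma**2))
    return k_q @ w

x = [0.0, 1.0, 2.0, 3.0]       # "viewpoints" (1-D stand-in)
f = [0.0, 1.0, 4.0, 9.0]       # "images" (scalar stand-in), here samples of x^2
print(rbf_interpolate(x, f, x))  # reproduces the training examples
```

    In the NVS setting, x would be a viewpoint parameterization and f(x) a whole image, interpolated per pixel or per feature.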

  14. Secret shared multiple-image encryption based on row scanning compressive ghost imaging and phase retrieval in the Fresnel domain

    NASA Astrophysics Data System (ADS)

    Li, Xianye; Meng, Xiangfeng; Wang, Yurong; Yang, Xiulun; Yin, Yongkai; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2017-09-01

A multiple-image encryption method is proposed that is based on row scanning compressive ghost imaging, (t, n) threshold secret sharing, and phase retrieval in the Fresnel domain. In the encryption process, after wavelet transform and Arnold transform of the target image, the ciphertext matrix can first be detected using a bucket detector. Based on a (t, n) threshold secret sharing algorithm, the measurement key used in the row scanning compressive ghost imaging can be decomposed and shared into two pairs of sub-keys, which are then reconstructed using two phase-only mask (POM) keys with fixed pixel values, placed in the input plane and transform plane 2 of the phase retrieval scheme, respectively; the other POM key in transform plane 1 can be generated and updated by the iterative encoding of each plaintext image. In each iteration, the target image acts as the input amplitude constraint in the input plane. During decryption, each plaintext image possessing all the correct keys can be successfully decrypted by measurement key regeneration, compression algorithm reconstruction, inverse wavelet transformation, and Fresnel transformation. Theoretical analysis and numerical simulations both verify the feasibility of the proposed method.

  15. Estimating atmospheric parameters and reducing noise for multispectral imaging

    DOEpatents

    Conger, James Lynn

    2014-02-25

    A method and system for estimating atmospheric radiance and transmittance. An atmospheric estimation system is divided into a first phase and a second phase. The first phase inputs an observed multispectral image and an initial estimate of the atmospheric radiance and transmittance for each spectral band and calculates the atmospheric radiance and transmittance for each spectral band, which can be used to generate a "corrected" multispectral image that is an estimate of the surface multispectral image. The second phase inputs the observed multispectral image and the surface multispectral image that was generated by the first phase and removes noise from the surface multispectral image by smoothing out change in average deviations of temperatures.
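    A minimal sketch of the first-phase correction, assuming the standard linear per-band model L_observed = transmittance × L_surface + path_radiance (the patent's actual estimation procedure is iterative and more involved, and these values are hypothetical):

```python
def correct_band(observed, transmittance, path_radiance):
    """Estimate surface radiance for one spectral band by inverting the
    linear atmospheric model: L_surface = (L_observed - L_atm) / t."""
    return [(L - path_radiance) / transmittance for L in observed]

observed = [52.0, 60.0, 48.0]   # observed radiance for one band (arbitrary units)
surface = correct_band(observed, transmittance=0.8, path_radiance=12.0)
print(surface)                  # "corrected" estimate of the surface radiance
```

    Applying this per band yields the "corrected" multispectral image that the second phase then denoises.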

  16. Synaptic plasticity in a cerebellum-like structure depends on temporal order

    NASA Astrophysics Data System (ADS)

    Bell, Curtis C.; Han, Victor Z.; Sugawara, Yoshiko; Grant, Kirsty

    1997-05-01

Cerebellum-like structures in fish appear to act as adaptive sensory processors, in which learned predictions about sensory input are generated and subtracted from actual sensory input, allowing unpredicted inputs to stand out [1-3]. Pairing sensory input with centrally originating predictive signals, such as corollary discharge signals linked to motor commands, results in neural responses to the predictive signals alone that are 'negative images' of the previously paired sensory responses. Adding these 'negative images' to actual sensory inputs minimizes the neural response to predictable sensory features. At the cellular level, sensory input is relayed to the basal region of Purkinje-like cells, whereas predictive signals are relayed by parallel fibres to the apical dendrites of the same cells [4]. The generation of negative images could be explained by plasticity at parallel fibre synapses [5-7]. We show here that such plasticity exists in the electrosensory lobe of mormyrid electric fish and that it has the necessary properties for such a model: it is reversible, anti-Hebbian (excitatory postsynaptic potentials (EPSPs) are depressed after pairing with a postsynaptic spike) and tightly dependent on the sequence of pre- and postsynaptic events, with depression occurring only if the postsynaptic spike follows EPSP onset within 60 ms.
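    The plasticity rule described above can be caricatured as timing-gated depression of a parallel-fibre synaptic weight; only the 60 ms window and the anti-Hebbian sign come from the abstract, while the learning rate and multiplicative form are illustrative assumptions:

```python
def weight_update(w, delta_t, lr=0.05, window_ms=60.0):
    """Anti-Hebbian, timing-dependent rule: depress the parallel-fibre EPSP
    only if the postsynaptic spike follows EPSP onset within the window.
    delta_t = spike time - EPSP onset time, in ms."""
    if 0.0 < delta_t <= window_ms:
        return w * (1.0 - lr)   # depression (anti-Hebbian pairing)
    return w                    # no change outside the pairing window

w = 1.0
print(weight_update(w, delta_t=20.0))   # → 0.95 (spike within 60 ms: depressed)
print(weight_update(w, delta_t=120.0))  # → 1.0 (spike too late: unchanged)
```

    Repeated pairing under such a rule weakens exactly the synapses that predict the sensory response, which is how a 'negative image' of the predictable input can be built up.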

  17. ESIM: Edge Similarity for Screen Content Image Quality Assessment.

    PubMed

    Ni, Zhangkai; Ma, Lin; Zeng, Huanqiang; Chen, Jing; Cai, Canhui; Ma, Kai-Kuang

    2017-10-01

In this paper, an accurate full-reference image quality assessment (IQA) model developed for assessing screen content images (SCIs), called the edge similarity (ESIM), is proposed. It is inspired by the fact that the human visual system (HVS) is highly sensitive to edges, which are often encountered in SCIs; therefore, essential edge features are extracted and exploited for conducting IQA for the SCIs. The key novelty of the proposed ESIM lies in the extraction and use of three salient edge features, i.e., edge contrast, edge width, and edge direction. The first two attributes are simultaneously generated from the input SCI based on a parametric edge model, while the last one is derived directly from the input SCI. The extraction of these three features is performed for the reference SCI and the distorted SCI individually. The degree of similarity for each above-mentioned edge attribute is then computed independently, and the results are combined using our proposed edge-width pooling strategy to generate the final ESIM score. To conduct the performance evaluation of our proposed ESIM model, a new and the largest SCI database (denoted SCID) is established in our work and made available to the public for download. Our database contains 1800 distorted SCIs generated from 40 reference SCIs. For each SCI, nine distortion types are investigated, and five degradation levels are produced for each distortion type. Extensive simulation results have clearly shown that the proposed ESIM model is more consistent with the perception of the HVS on the evaluation of distorted SCIs than multiple state-of-the-art IQA methods.
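    The per-attribute similarity step can be sketched with a generic SSIM-style index computed element-wise between a reference and a distorted attribute map; the constant c and this exact form are illustrative assumptions, not the published ESIM definition:

```python
def attribute_similarity(ref, dist, c=1e-3):
    """Generic SSIM-style similarity between two edge-attribute maps
    (e.g., edge contrast), computed element-wise. c is a small stabilizing
    constant, a placeholder rather than the published ESIM constant."""
    return [(2 * x * y + c) / (x * x + y * y + c) for x, y in zip(ref, dist)]

ref_contrast  = [0.8, 0.5, 0.0]   # hypothetical edge-contrast values, reference SCI
dist_contrast = [0.8, 0.3, 0.0]   # same locations in the distorted SCI
sims = attribute_similarity(ref_contrast, dist_contrast)
```

    Identical values give a similarity of 1, and the score falls as the attribute maps diverge; ESIM computes such a map per edge attribute and then pools the three maps into one score.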

  18. Adherent Raindrop Modeling, Detection and Removal in Video.

    PubMed

    You, Shaodi; Tan, Robby T; Kawakami, Rei; Mukaigawa, Yasuhiro; Ikeuchi, Katsushi

    2016-09-01

Raindrops adhered to a windscreen or window glass can significantly degrade the visibility of a scene. Modeling, detecting, and removing raindrops will therefore benefit many computer vision applications, particularly outdoor surveillance systems and intelligent vehicle systems. In this paper, a method that automatically detects and removes adherent raindrops is introduced. The core idea is to exploit the local spatio-temporal derivatives of raindrops. To accomplish this, we first model adherent raindrops using the laws of physics, and detect raindrops based on these models in combination with the motion and intensity temporal derivatives of the input video. Having detected the raindrops, we remove them and restore the images based on the observation that some raindrop areas completely occlude the scene, while others occlude it only partially. For partially occluding areas, we restore them by retrieving as much scene information as possible, namely by solving a blending function on the detected partially occluding areas using the temporal intensity derivative. For completely occluding areas, we recover them by using a video completion technique. Experimental results on various real videos show the effectiveness of our method.

  19. Investigating Uncertainty in Predicting Carbon Dynamics in North American Biomes: Putting Support-Effect Bias in Perspective

    NASA Technical Reports Server (NTRS)

    Dungan, Jennifer L.; Brass, Jim (Technical Monitor)

    2001-01-01

    A fundamental strategy in NASA's Earth Observing System's (EOS) monitoring of vegetation and its contribution to the global carbon cycle is to rely on deterministic, process-based ecosystem models to make predictions of carbon flux over large regions. These models are parameterized (that is, the input variables are derived) using remotely sensed images such as those from the Moderate Resolution Imaging Spectroradiometer (MODIS), ground measurements and interpolated maps. Since early applications of these models, investigators have noted that results depend partly on the spatial support of the input variables. In general, the larger the support of the input data, the greater the chance that the effects of important components of the ecosystem will be averaged out. A review of previous work shows that using large supports can cause either positive or negative bias in carbon flux predictions. To put the magnitude and direction of these biases in perspective, we must quantify the range of uncertainty on our best measurements of carbon-related variables made on equivalent areas. In other words, support-effect bias should be placed in the context of prediction uncertainty from other sources. If the range of uncertainty at the smallest support is less than the support-effect bias, more research emphasis should probably be placed on support sizes that are intermediate between those of field measurements and MODIS. If the uncertainty range at the smallest support is larger than the support-effect bias, the accuracy of MODIS-based predictions will be difficult to quantify and more emphasis should be placed on field-scale characterization and sampling. This talk will describe methods to address these issues using a field measurement campaign in North America and "upscaling" using geostatistical estimation and simulation.

  20. Automatic Boosted Flood Mapping from Satellite Data

    NASA Technical Reports Server (NTRS)

    Coltin, Brian; McMichael, Scott; Smith, Trey; Fong, Terrence

    2016-01-01

    Numerous algorithms have been proposed to map floods from Moderate Resolution Imaging Spectroradiometer (MODIS) imagery. However, most require human input to succeed, either to specify a threshold value or to manually annotate training data. We introduce a new algorithm based on Adaboost which effectively maps floods without any human input, allowing for a truly rapid and automatic response. The Adaboost algorithm combines multiple thresholds to achieve results comparable to state-of-the-art algorithms which do require human input. We evaluate Adaboost, as well as numerous previously proposed flood mapping algorithms, on multiple MODIS flood images, as well as on hundreds of non-flood MODIS lake images, demonstrating its effectiveness across a wide variety of conditions.
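    A toy version of the idea, combining several candidate thresholds on a single hypothetical water-index feature with AdaBoost (the actual algorithm operates on multi-band MODIS imagery; the data and thresholds here are invented for illustration):

```python
import math

def adaboost_thresholds(xs, ys, thresholds, rounds=3):
    """Tiny AdaBoost over threshold classifiers h(x) = pol if x > t else -pol.
    Returns (threshold, polarity, alpha) triples; their weighted vote
    combines the individual thresholds into one classifier."""
    n = len(xs)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        best = None
        for t in thresholds:
            for pol in (1, -1):
                preds = [pol if x > t else -pol for x in xs]
                err = sum(wi for wi, p, y in zip(w, preds, ys) if p != y)
                if best is None or err < best[0]:
                    best = (err, t, pol, preds)
        err, t, pol, preds = best
        err = min(max(err, 1e-9), 1 - 1e-9)       # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)   # weight of this weak learner
        ensemble.append((t, pol, alpha))
        # Re-weight samples: misclassified points gain weight.
        w = [wi * math.exp(-alpha * y * p) for wi, y, p in zip(w, ys, preds)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def classify(x, ensemble):
    score = sum(a * (pol if x > t else -pol) for t, pol, a in ensemble)
    return 1 if score > 0 else -1

# Hypothetical 1-D "water index" values: label 1 = flooded, -1 = dry land.
xs = [0.1, 0.2, 0.35, 0.6, 0.7, 0.9]
ys = [-1, -1, -1, 1, 1, 1]
model = adaboost_thresholds(xs, ys, thresholds=[0.3, 0.5, 0.8])
```

    The appeal for flood mapping is that no human picks the operating threshold: boosting selects and weights the thresholds from training data, and at deployment time the weighted vote runs with no human input at all.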

  1. Derived crop management data for the LandCarbon Project

    USGS Publications Warehouse

    Schmidt, Gail; Liu, Shu-Guang; Oeding, Jennifer

    2011-01-01

The LandCarbon project is assessing potential carbon pools and greenhouse gas fluxes under various scenarios and land management regimes to provide information to support the formulation of policies governing climate change mitigation, adaptation, and land management strategies. The project is unique in that spatially explicit maps of annual land cover and land-use change are created at 250-meter pixel resolution. The project uses vast amounts of data as input to the models, including satellite, climate, land cover, soil, and land management data. Management data have been obtained from the U.S. Department of Agriculture (USDA) National Agricultural Statistics Service (NASS) and USDA Economic Research Service (ERS), which provide information regarding crop type, crop harvesting, manure, fertilizer, tillage, and cover crop (U.S. Department of Agriculture, 2011a, b, c). The LandCarbon team queried the USDA databases to pull historic crop-related management data relative to the needs of the project. The data obtained were in tabular form with the county or state Federal Information Processing Standard (FIPS) code and the year as the primary and secondary keys. Future projections were generated for the A1B, A2, B1, and B2 Intergovernmental Panel on Climate Change (IPCC) Special Report on Emissions Scenarios (SRES) scenarios using the historic data values along with coefficients generated by the project. The PBL Netherlands Environmental Assessment Agency (PBL) Integrated Model to Assess the Global Environment (IMAGE) modeling framework (Integrated Model to Assess the Global Environment, 2006) was used to develop coefficients for each IPCC SRES scenario, which were applied to the historic management data to produce future land management practice projections. The LandCarbon project developed algorithms for deriving gridded data, using these tabular management data products as input.
The derived gridded crop type, crop harvesting, manure, fertilizer, tillage, and cover crop products are used as input to the LandCarbon models to represent the historic and the future scenario management data. The overall algorithm to generate each of the gridded management products is based on the land cover and the derived crop type. For each year in the land cover dataset, the algorithm loops through each 250-meter pixel in the ecoregion. If the current pixel in the land cover dataset is an agriculture pixel, then the crop type is determined. Once the crop type is derived, then the crop harvest, manure, fertilizer, tillage, and cover crop values are derived independently for that crop type. The following is the overall algorithm used for the set of derived grids. The specific algorithm to generate each management dataset is discussed in the respective section for that dataset, along with special data handling and a description of the output product.
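    The per-pixel derivation loop described above can be sketched as follows; the land-cover code, crop-type rule, and management tables are all hypothetical placeholders for the project's actual datasets:

```python
AG = 1  # placeholder land-cover code for agriculture

def derive_management(land_cover, crop_type_of, management_tables):
    """For each agriculture pixel, derive the crop type, then look up each
    management value (harvest, manure, fertilizer, tillage, cover crop)
    independently for that crop type."""
    grids = {name: [[None] * len(row) for row in land_cover]
             for name in management_tables}
    for i, row in enumerate(land_cover):
        for j, lc in enumerate(row):
            if lc != AG:
                continue                      # non-agriculture pixels stay empty
            crop = crop_type_of(i, j)         # derived crop type for this pixel
            for name, table in management_tables.items():
                grids[name][i][j] = table[crop]
    return grids

# Toy 2x2 "ecoregion" and two hypothetical management tables.
land_cover = [[1, 0], [1, 1]]
tables = {"fertilizer_kg_ha": {"corn": 150, "soy": 20},
          "tillage": {"corn": "conventional", "soy": "no-till"}}
out = derive_management(land_cover,
                        lambda i, j: "corn" if (i + j) % 2 == 0 else "soy",
                        tables)
```

    In the real pipeline the loop runs over every 250-meter pixel for every year of the land-cover dataset, and the crop-type rule draws on the county- or state-level FIPS tables rather than a toy lambda.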

  2. Applications of the BIOPHYS Algorithm for Physically-Based Retrieval of Biophysical, Structural and Forest Disturbance Information

    NASA Technical Reports Server (NTRS)

    Peddle, Derek R.; Huemmrich, K. Fred; Hall, Forrest G.; Masek, Jeffrey G.; Soenen, Scott A.; Jackson, Chris D.

    2011-01-01

Canopy reflectance model inversion using look-up table approaches provides powerful and flexible options for deriving improved forest biophysical structural information (BSI) compared with traditional statistical empirical methods. The BIOPHYS algorithm is an improved, physically-based inversion approach for deriving BSI for independent use and validation and for monitoring, inventorying, and quantifying forest disturbance, as well as input to ecosystem, climate, and carbon models. Based on the multiple-forward mode (MFM) inversion approach, BIOPHYS results were summarized from different studies (Minnesota/NASA COVER; Virginia/LEDAPS; Saskatchewan/BOREAS), sensors (airborne MMR; Landsat; MODIS), and models (GeoSail; GOMS). Application outputs included forest density, height, crown dimension, branch and green leaf area, canopy cover, disturbance estimates based on multi-temporal chronosequences, and structural change following recovery from forest fires over the last century. Good correspondence with validation field data was obtained. Integrated analyses of multiple solar and view angle imagery further improved retrievals compared with single-pass data. Quantifying ecosystem dynamics such as the area and percent of forest disturbance, early regrowth, and succession provides essential inputs to process-driven models of carbon flux. BIOPHYS is well suited for large-area, multi-temporal applications involving multiple image sets and mosaics for assessing vegetation disturbance and quantifying biophysical structural dynamics and change. It is also suitable for integration with forest inventory, monitoring, updating, and other programs.

  3. Open-Source Assisted Laboratory Automation through Graphical User Interfaces and 3D Printers: Application to Equipment Hyphenation for Higher-Order Data Generation.

    PubMed

    Siano, Gabriel G; Montemurro, Milagros; Alcaráz, Mirta R; Goicoechea, Héctor C

    2017-10-17

Higher-order data generation implies some automation challenges, which are mainly related to the hidden programming languages and electronic details of the equipment. When techniques and/or equipment hyphenation are the key to obtaining higher-order data, the required simultaneous control of them demands funds for new hardware, software, and licenses, in addition to very skilled operators. In this work, we present Design of Inputs-Outputs with Sikuli (DIOS), a free and open-source program that provides a general framework for the design of automated experimental procedures without prior knowledge of programming or electronics. Basically, instruments and devices are considered as nodes in a network, and every node is associated with both physical and virtual inputs and outputs. Virtual components, such as graphical user interfaces (GUIs) of equipment, are handled by means of the image recognition tools provided by the Sikuli scripting language, while handling of their physical counterparts is achieved using an adapted open-source three-dimensional (3D) printer. Two previously reported experiments of our research group, related to fluorescence matrices derived from kinetics and high-performance liquid chromatography, were adapted to be carried out in a more automated fashion. Satisfactory results, in terms of analytical performance, were obtained. Similarly, the advantages derived from open-source tool assistance could be appreciated, mainly in terms of less operator intervention and cost savings.

  4. A neural network gravitational arc finder based on the Mediatrix filamentation method

    NASA Astrophysics Data System (ADS)

    Bom, C. R.; Makler, M.; Albuquerque, M. P.; Brandt, C. H.

    2017-01-01

Context. Automated arc detection methods are needed to scan the ongoing and next-generation wide-field imaging surveys, which are expected to contain thousands of strong lensing systems. Arc finders are also required for a quantitative comparison between predictions and observations of arc abundance. Several algorithms have been proposed to this end, but machine learning methods have remained a relatively unexplored step in the arc finding process. Aims: In this work we introduce a new arc finder based on pattern recognition, which uses a set of morphological measurements derived from the Mediatrix filamentation method as inputs to an artificial neural network (ANN). We show a full example of the application of the arc finder, first training and validating the ANN on simulated arcs and then applying the code on four Hubble Space Telescope (HST) images of strong lensing systems. Methods: The simulated arcs use simple prescriptions for the lens and the source, while mimicking HST observational conditions. We also include a sample of objects from HST images with no arcs in the training of the ANN classification. We use the training and validation process to determine a suitable set of ANN configurations, including the combination of inputs from the Mediatrix method, so as to maximize the completeness while keeping the false positives low. Results: In the simulations the method was able to achieve a completeness of about 90% with respect to the arcs that are input into the ANN after a preselection. However, this completeness drops to 70% on the HST images. The false detections are on the order of 3% of the objects detected in these images. Conclusions: The combination of Mediatrix measurements with an ANN is a promising tool for the pattern-recognition phase of arc finding. More realistic simulations and a larger set of real systems are needed for a better training and assessment of the efficiency of the method.

  5. Relative sensitivities of DCE-MRI pharmacokinetic parameters to arterial input function (AIF) scaling.

    PubMed

    Li, Xin; Cai, Yu; Moloney, Brendan; Chen, Yiyi; Huang, Wei; Woods, Mark; Coakley, Fergus V; Rooney, William D; Garzotto, Mark G; Springer, Charles S

    2016-08-01

Dynamic-Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) has been used widely for clinical applications. Pharmacokinetic modeling of DCE-MRI data that extracts quantitative contrast reagent/tissue-specific model parameters is the most investigated method. One of the primary challenges in pharmacokinetic analysis of DCE-MRI data is accurate and reliable measurement of the arterial input function (AIF), which is the driving force behind all pharmacokinetics. Because of effects such as inflow and partial volume averaging, AIFs measured from individual arteries sometimes require amplitude scaling for better representation of the blood contrast reagent (CR) concentration time-courses. Empirical approaches like blinded AIF estimation or reference tissue AIF derivation can be useful and practical, especially when there is no clearly visible blood vessel within the imaging field-of-view (FOV). Similarly, these approaches generally also require magnitude scaling of the derived AIF time-courses. Since the AIF varies among individuals even with the same CR injection protocol and the perfect scaling factor for reconstructing the ground truth AIF often remains unknown, variations in estimated pharmacokinetic parameters due to varying AIF scaling factors are of special interest. In this work, using simulated and real prostate cancer DCE-MRI data, we examined parameter variations associated with AIF scaling. Our results show that, for both the fast-exchange-limit (FXL) Tofts model and the water exchange sensitized fast-exchange-regime (FXR) model, the commonly fitted CR transfer constant (K(trans)) and the extravascular, extracellular volume fraction (ve) scale nearly proportionally with the AIF, whereas the FXR-specific unidirectional cellular water efflux rate constant, kio, and the CR intravasation rate constant, kep, are both AIF scaling insensitive.
This indicates that, for DCE-MRI of prostate cancer and possibly other cancers, kio and kep may be more suitable imaging biomarkers for cross-platform, multicenter applications. Data from our limited study cohort show that kio correlates with Gleason scores, suggesting that it may be a useful biomarker for prostate cancer disease progression monitoring. Copyright © 2016 Elsevier Inc. All rights reserved.
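
    The scaling behavior described above follows directly from the linearity of the FXL Tofts model in the AIF. A minimal numerical sketch (the AIF shape and all parameter values are hypothetical, not taken from the paper) shows that rescaling the AIF rescales the fitted K(trans) inversely, while kep is untouched:

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 5, 300)                         # time (min)
aif = 5.0 * (np.exp(-0.5 * t) - np.exp(-4.0 * t))  # hypothetical AIF shape

def tofts(t, ktrans, kep, cp):
    """FXL Tofts model: Ct(t) = Ktrans * integral of Cp(tau) exp(-kep (t - tau)) dtau,
    evaluated with a discrete convolution."""
    dt = t[1] - t[0]
    return ktrans * np.convolve(cp, np.exp(-kep * t))[: t.size] * dt

ktrans_true, kep_true = 0.25, 1.2          # illustrative values (1/min)
ct = tofts(t, ktrans_true, kep_true, aif)  # simulated tissue curve

def fit_with_aif(cp):
    popt, _ = curve_fit(lambda tt, kt, ke: tofts(tt, kt, ke, cp),
                        t, ct, p0=[0.1, 1.0])
    return popt

kt1, ke1 = fit_with_aif(aif)        # fit with the "true" AIF
kt2, ke2 = fit_with_aif(2.0 * aif)  # fit with the AIF scaled by s = 2

print(kt1 / kt2)   # ~2: the fitted Ktrans absorbs the AIF scaling
print(ke1, ke2)    # kep is insensitive to the scaling
```

    Because ve = K(trans)/kep, ve inherits the same scaling as K(trans), which is why the AIF-scaling-insensitive parameters kep and kio are attractive for cross-platform comparison.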

  6. A comparative study on generating simulated Landsat NDVI images using data fusion and regression method-the case of the Korean Peninsula.

    PubMed

    Lee, Mi Hee; Lee, Soo Bong; Eo, Yang Dam; Kim, Sun Woong; Woo, Jung-Hun; Han, Soo Hee

    2017-07-01

    Landsat optical images have sufficient spatial and spectral resolution to analyze vegetation growth characteristics. However, clouds and water vapor frequently degrade image quality, which limits the availability of usable images for time-series vegetation vitality measurement. To overcome this shortcoming, simulated images are used as an alternative. In this study, a weighted average method, the spatial and temporal adaptive reflectance fusion model (STARFM) method, and a multilinear regression analysis method were tested to produce simulated Landsat normalized difference vegetation index (NDVI) images of the Korean Peninsula. The test results showed that the weighted average method produced the images most similar to the actual images, provided that input images were available within 1 month before and after the target date. The STARFM method gives good results when the input image date is close to the target date, and careful regional and seasonal consideration is required in selecting input images. During the summer season, clouds make it very difficult to obtain images close enough to the target date. Multilinear regression analysis gives meaningful results even when the input image date is not close to the target date. Average R² values for the weighted average method, STARFM, and multilinear regression analysis were 0.741, 0.70, and 0.61, respectively.
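
    As a concrete illustration of the weighted average method, the sketch below blends the nearest usable NDVI images before and after the target date. The inverse-temporal-distance weights and the pixel values are illustrative assumptions, since the abstract does not specify the exact weighting scheme:

```python
import numpy as np

def weighted_average_ndvi(ndvi_before, ndvi_after, days_before, days_after):
    """Simulate NDVI at a target date as a temporal-distance-weighted
    average of the nearest usable images before and after it.
    (Inverse-distance weights are an assumption for illustration.)"""
    w_b = 1.0 / days_before
    w_a = 1.0 / days_after
    return (w_b * ndvi_before + w_a * ndvi_after) / (w_b + w_a)

before = np.array([[0.2, 0.4], [0.6, 0.8]])  # NDVI 10 days before target
after  = np.array([[0.4, 0.6], [0.8, 0.6]])  # NDVI 30 days after target
sim = weighted_average_ndvi(before, after, 10, 30)
print(sim)  # each pixel is pulled 3x more strongly toward the "before" image
```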

  7. A comprehensive analysis of earthquake damage patterns using high dimensional model representation feature selection

    NASA Astrophysics Data System (ADS)

    Taşkin Kaya, Gülşen

    2013-10-01

    Recently, earthquake damage assessment using satellite images has become a very active research direction. With the availability of very high resolution (VHR) satellite images in particular, quite detailed damage maps at the scale of individual buildings have been produced, and various studies have been conducted in the literature. As the spatial resolution of satellite images increases, distinguishing damage patterns becomes more difficult, especially when only spectral information is used during classification. To overcome this difficulty, textural information needs to be incorporated into the classification to improve the visual quality and reliability of the damage map. Many kinds of textural information can be derived from VHR satellite images, depending on the algorithm used. However, extracting and evaluating textural information is generally a time-consuming process, especially for large areas affected by an earthquake, owing to the size of VHR images. Therefore, to provide a quick damage map, the most useful features describing damage patterns need to be known in advance, as do the redundant features. In this study, a very high resolution satellite image acquired after the Bam, Iran earthquake was used to identify earthquake damage. Textural information was used during classification in addition to the spectral information. For textural information, second-order Haralick features were extracted from the panchromatic image for the area of interest using gray-level co-occurrence matrices with different window sizes and directions. In addition to using spatial features in classification, the most useful features representing the damage characteristics were selected with a novel feature selection method based on high dimensional model representation (HDMR), which gives the sensitivity of each feature during classification. 
    The method called HDMR was recently proposed as an efficient tool for capturing input-output relationships in high-dimensional systems across many problems in science and engineering, and it was developed to improve the efficiency of deducing high-dimensional behavior. The method is formed by a particular organization of low-dimensional component functions, in which each function represents the contribution of one or more input variables to the output variables.
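
    The second-order Haralick features mentioned above are computed from a gray-level co-occurrence matrix (GLCM). A minimal sketch for a single offset follows; window size and direction are the parameters varied in the study, and the tiny image here is made up:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    h, w = img.shape
    m = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def haralick_contrast(p):
    """Haralick contrast: sum over i, j of (i - j)^2 * p(i, j)."""
    i, j = np.indices(p.shape)
    return ((i - j) ** 2 * p).sum()

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]], dtype=int)
p = glcm(img, levels=4)           # horizontal offset (dx=1, dy=0)
print(haralick_contrast(p))
```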

  8. The Geoscience Spaceborne Imaging Spectroscopy Technical Committees Calibration and Validation Workshop

    NASA Technical Reports Server (NTRS)

    Ong, Cindy; Mueller, Andreas; Thome, Kurtis; Pierce, Leland E.; Malthus, Timothy

    2016-01-01

    Calibration is the process of quantitatively defining a system's responses to known, controlled signal inputs, and validation is the process of assessing, by independent means, the quality of the data products derived from those system outputs [1]. Similar to other Earth observation (EO) sensors, the calibration and validation of spaceborne imaging spectroscopy sensors is a fundamental underpinning activity. Calibration and validation determine the quality and integrity of the data provided by spaceborne imaging spectroscopy sensors and have enormous downstream impacts on the accuracy and reliability of products generated from these sensors. At least five imaging spectroscopy satellites are planned to be launched within the next five years, with the two most advanced scheduled to be launched in the next two years [2]. The launch of these sensors requires the establishment of suitable, standardized, and harmonized calibration and validation strategies to ensure that high-quality data are acquired and comparable between these sensor systems. Such activities are extremely important for the community of imaging spectroscopy users. Recognizing the need to focus on this underpinning topic, the Geoscience Spaceborne Imaging Spectroscopy (previously, the International Spaceborne Imaging Spectroscopy) Technical Committee launched a calibration and validation initiative at the 2013 International Geoscience and Remote Sensing Symposium (IGARSS) in Melbourne, Australia, and a post-conference activity of a vicarious calibration field trip at Lake Lefroy in Western Australia.

  9. Real-time edge-enhanced optical correlator

    NASA Technical Reports Server (NTRS)

    Liu, Tsuen-Hsi (Inventor); Cheng, Li-Jen (Inventor)

    1992-01-01

    Edge enhancement of an input image by four-wave mixing a first write beam with a second write beam in a photorefractive crystal, GaAs, was achieved for VanderLugt optical correlation with an edge-enhanced reference image by optimizing the power ratio of the second write beam to the first write beam (70:1) and optimizing the power ratio of a read beam, which carries the reference image, to the first write beam (100:701). Liquid crystal TV panels are employed as spatial light modulators to change the input and reference images in real time.

  10. Soft Mixer Assignment in a Hierarchical Generative Model of Natural Scene Statistics

    PubMed Central

    Schwartz, Odelia; Sejnowski, Terrence J.; Dayan, Peter

    2010-01-01

    Gaussian scale mixture models offer a top-down description of signal generation that captures key bottom-up statistical characteristics of filter responses to images. However, the pattern of dependence among the filters for this class of models is prespecified. We propose a novel extension to the gaussian scale mixture model that learns the pattern of dependence from observed inputs and thereby induces a hierarchical representation of these inputs. Specifically, we propose that inputs are generated by gaussian variables (modeling local filter structure), multiplied by a mixer variable that is assigned probabilistically to each input from a set of possible mixers. We demonstrate inference of both components of the generative model, for synthesized data and for different classes of natural images, such as a generic ensemble and faces. For natural images, the mixer variable assignments show invariances resembling those of complex cells in visual cortex; the statistics of the gaussian components of the model are in accord with the outputs of divisive normalization models. We also show how our model helps interrelate a wide range of models of image statistics and cortical processing. PMID:16999575
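
    The generative step described above (Gaussian variables multiplied by a mixer) can be sketched as follows; the lognormal mixer distribution and the dimensions are illustrative choices, not the paper's model details. The shared mixer produces the heavy-tailed marginals characteristic of filter responses to natural images:

```python
import numpy as np

rng = np.random.default_rng(0)
n_filters, n_samples = 4, 50_000

# Gaussian component g models local filter structure.
g = rng.standard_normal((n_samples, n_filters))
# Mixer v > 0, shared across the filters in one neighborhood (a lognormal
# is an illustrative choice of mixer distribution).
v = rng.lognormal(mean=0.0, sigma=0.5, size=(n_samples, 1))
x = v * g  # GSM responses: heavy-tailed, and dependent through the mixer

# The kurtosis of each marginal exceeds the Gaussian value of 3 (heavy tails).
kurt = (x ** 4).mean(axis=0) / (x ** 2).mean(axis=0) ** 2
print(kurt)
```

    Dividing x by an estimate of the mixer (the "divisive normalization" mentioned above) recovers approximately Gaussian components.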

  11. Two generalizations of Kohonen clustering

    NASA Technical Reports Server (NTRS)

    Bezdek, James C.; Pal, Nikhil R.; Tsao, Eric C. K.

    1993-01-01

    The relationship between the sequential hard c-means (SHCM), learning vector quantization (LVQ), and fuzzy c-means (FCM) clustering algorithms is discussed. LVQ and SHCM suffer from several major problems. For example, they depend heavily on initialization. If the initial values of the cluster centers are outside the convex hull of the input data, such algorithms, even if they terminate, may not produce meaningful results in terms of prototypes for cluster representation. This is due in part to the fact that they update only the winning prototype for every input vector. The impact and interaction of these two families with Kohonen's self-organizing feature mapping (SOFM), which is not a clustering method but which often lends ideas to clustering algorithms, is discussed. Then two generalizations of LVQ that are explicitly designed as clustering algorithms are presented; these algorithms are referred to as generalized LVQ (GLVQ) and fuzzy LVQ (FLVQ). Learning rules are derived to optimize an objective function whose goal is to produce 'good clusters'. GLVQ/FLVQ may update every node in the clustering net for each input vector. Neither GLVQ nor FLVQ depends upon a choice for the update neighborhood or learning rate distribution - these are taken care of automatically. Segmentation of a gray tone image is used as a typical application of these algorithms to illustrate the performance of GLVQ/FLVQ.
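
    The "winning prototype" behavior criticized above can be seen in a minimal unsupervised LVQ/SHCM step (the data and learning rate are hypothetical): only the nearest prototype moves, so prototypes initialized far from the data may never be updated:

```python
import numpy as np

def lvq_step(prototypes, x, lr=0.1):
    """Unsupervised LVQ / sequential hard c-means step:
    move only the winning (nearest) prototype toward the input vector."""
    dists = np.linalg.norm(prototypes - x, axis=1)
    win = int(np.argmin(dists))
    prototypes[win] += lr * (x - prototypes[win])
    return prototypes, win

protos = np.array([[0.0, 0.0], [1.0, 1.0]])
protos, w = lvq_step(protos, np.array([0.2, 0.0]))
print(w, protos)  # winner 0 moves to [0.02, 0.0]; prototype 1 is untouched
```

    GLVQ/FLVQ instead spread a (decreasing) share of each update across all prototypes, which is what removes the dependence on the update neighborhood.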

  12. Volume illustration of muscle from diffusion tensor images.

    PubMed

    Chen, Wei; Yan, Zhicheng; Zhang, Song; Crow, John Allen; Ebert, David S; McLaughlin, Ronald M; Mullins, Katie B; Cooper, Robert; Ding, Zi'ang; Liao, Jun

    2009-01-01

    Medical illustration has demonstrated its effectiveness in depicting salient anatomical features while hiding irrelevant details. Current solutions are ineffective for visualizing fibrous structures such as muscle, because typical datasets (CT or MRI) do not contain directional details. In this paper, we introduce a new muscle illustration approach that leverages diffusion tensor imaging (DTI) data and example-based texture synthesis techniques. Beginning with a volumetric diffusion tensor image, we reformulate it into a scalar field and an auxiliary guidance vector field to represent the structure and orientation of a muscle bundle. A muscle mask derived from the input diffusion tensor image is used to classify the muscle structure. The guidance vector field is further refined to remove noise and clarify structure. To simulate the internal appearance of the muscle, we propose a new two-dimensional example-based solid texture synthesis algorithm that builds a solid texture constrained by the guidance vector field. Illustrating the constructed scalar field and solid texture efficiently highlights the global appearance of the muscle as well as the local shape and structure of the muscle fibers in an illustrative fashion. We have applied the proposed approach to five example datasets (four pig hearts and a pig leg), demonstrating plausible illustration and expressiveness.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodigas, Timothy J.; Hinz, Philip M.; Malhotra, Renu, E-mail: rodigas@as.arizona.edu

    Planets can affect debris disk structure by creating gaps, sharp edges, warps, and other potentially observable signatures. However, there is currently no simple way for observers to deduce a disk-shepherding planet's properties from the observed features of the disk. Here we present a single equation that relates a shepherding planet's maximum mass to the debris ring's observed width in scattered light, along with a procedure to estimate the planet's eccentricity and minimum semimajor axis. We accomplish this by performing dynamical N-body simulations of model systems containing a star, a single planet, and an exterior disk of parent bodies and dust grains to determine the resulting debris disk properties over a wide range of input parameters. We find that the relationship between planet mass and debris disk width is linear, with increasing planet mass producing broader debris rings. We apply our methods to five imaged debris rings to constrain the putative planet masses and orbits in each system. Observers can use our empirically derived equation as a guide for future direct imaging searches for planets in debris disk systems. In the fortuitous case of an imaged planet orbiting interior to an imaged disk, the planet's maximum mass can be estimated independent of atmospheric models.

  14. Assessment of geostatistical features for object-based image classification of contrasted landscape vegetation cover

    NASA Astrophysics Data System (ADS)

    de Oliveira Silveira, Eduarda Martiniano; de Menezes, Michele Duarte; Acerbi Júnior, Fausto Weimar; Castro Nunes Santos Terra, Marcela; de Mello, José Márcio

    2017-07-01

    Accurate mapping and monitoring of savanna and semiarid woodland biomes are needed to support the selection of areas of conservation, to provide sustainable land use, and to improve the understanding of vegetation. The potential of geostatistical features, derived from medium spatial resolution satellite imagery, to characterize contrasted landscape vegetation cover and improve object-based image classification is studied. The study site in Brazil includes cerrado sensu stricto, deciduous forest, and palm swamp vegetation cover. Sentinel 2 and Landsat 8 images were acquired and divided into objects, for each of which a semivariogram was calculated using near-infrared (NIR) and normalized difference vegetation index (NDVI) to extract the set of geostatistical features. The features selected by principal component analysis were used as input data to train a random forest algorithm. Tests were conducted, combining spectral and geostatistical features. Change detection evaluation was performed using a confusion matrix and its accuracies. The semivariogram curves were efficient to characterize spatial heterogeneity, with similar results using NIR and NDVI from Sentinel 2 and Landsat 8. Accuracy was significantly greater when combining geostatistical features with spectral data, suggesting that this method can improve image classification results.
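
    The semivariogram underlying the geostatistical features is the half mean-squared difference of pixel values as a function of lag. A 1-D sketch follows (the study computes it per image object in 2-D; the transect values here are made up):

```python
import numpy as np

def semivariogram(values, max_lag):
    """Empirical semivariance gamma(h) = mean((z(i) - z(i + h))^2) / 2
    along a 1-D transect of pixel values, for lags 1..max_lag."""
    gammas = []
    for h in range(1, max_lag + 1):
        diffs = values[h:] - values[:-h]
        gammas.append(0.5 * np.mean(diffs ** 2))
    return np.array(gammas)

ndvi_transect = np.array([0.2, 0.3, 0.5, 0.55, 0.7, 0.72, 0.9])
gam = semivariogram(ndvi_transect, 3)
print(gam)  # semivariance rises with lag: spatially structured cover
```

    Features such as the sill, range, and curve shape extracted from these per-object curves are what distinguish homogeneous from heterogeneous vegetation cover.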

  15. In Situ Surface Characterization

    NASA Technical Reports Server (NTRS)

    Deen, Robert G.; Leger, Patrick C.; Yanovsky, Igor

    2011-01-01

    Operation of in situ space assets, such as rovers and landers, requires operators to acquire a thorough understanding of the environment surrounding the spacecraft. The following programs help with that understanding by providing higher-level information characterizing the surface, which is not immediately obvious by just looking at the XYZ terrain data. This software suite covers three primary programs: marsuvw, marsrough, and marsslope, and two secondary programs, which together use XYZ data derived from in situ stereo imagery to characterize the surface by determining surface normal, surface roughness, and various aspects of local slope, respectively. These programs all use the Planetary Image Geometry (PIG) library to read mission-specific data files. The programs themselves are completely multimission; all mission dependencies are handled by PIG. The input data consists of images containing XYZ locations as derived by, e.g., marsxyz. The marsuvw program determines surface normals from XYZ data by gathering XYZ points from an area around each pixel and fitting a plane to those points. Outliers are rejected, and various consistency checks are applied. The result shows the orientation of the local surface at each point as a unit vector. The program can be run in two modes: standard, which is typically used for in situ arm work, and slope, which is typically used for rover mobility. The difference is primarily due to optimizations necessary for the larger patch sizes in the slope case. The marsrough program determines surface roughness in a small area around each pixel, which is defined as the maximum peak-to-peak deviation from the plane perpendicular to the surface normal at that pixel. The marsslope program takes a surface normal file as input and derives one of several slope-like outputs from it. 
    The outputs include slope, slope rover direction (a measure of slope radially away from the rover), slope heading, slope magnitude, northerly tilt, and solar energy (compares the slope with the Sun's location at local noon). The marsuvwproj program projects a surface normal onto an arbitrary plane in space, resulting in a normalized 3D vector, which is constrained to lie in the plane. The marsuvwrot program rotates the vectors in a surface normal file, generating a new surface normal file. It also can change coordinate systems for an existing surface normal file. While the algorithms behind this suite are not particularly unique, what makes the programs useful is their integration into the larger in situ image processing system via the PIG library. They work directly with space in situ data, understanding the appropriate image metadata fields and updating them properly. The secondary programs (marsuvwproj, marsuvwrot) were originally developed to deal with anomalous situations on Opportunity and Spirit, respectively, but may have more general applicability.
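
    The plane-fitting step at the heart of marsuvw can be sketched as a least-squares fit via SVD over a patch of XYZ points. This illustrates the technique only; it is not the JPL implementation, which additionally rejects outliers and applies consistency checks:

```python
import numpy as np

def surface_normal(xyz):
    """Unit normal of the best-fit plane through an (N, 3) patch of XYZ
    points: the right singular vector of the centered points that has
    the smallest singular value."""
    centered = xyz - xyz.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    n = vt[-1]
    return n / np.linalg.norm(n)

# Synthetic patch: points on the plane z = 0.1x + 0.2y plus small noise.
rng = np.random.default_rng(1)
xy = rng.uniform(-1, 1, (200, 2))
z = 0.1 * xy[:, 0] + 0.2 * xy[:, 1] + rng.normal(0, 1e-3, 200)
n = surface_normal(np.column_stack([xy, z]))
print(n)  # proportional to (-0.1, -0.2, 1), up to sign
```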

  16. Combining a wavelet transform with a channelized Hotelling observer for tumor detection in 3D PET oncology imaging

    NASA Astrophysics Data System (ADS)

    Lartizien, Carole; Tomei, Sandrine; Maxim, Voichita; Odet, Christophe

    2007-03-01

    This study evaluates new observer models for 3D whole-body Positron Emission Tomography (PET) imaging based on a wavelet sub-band decomposition and compares them with the classical constant-Q CHO model. Our final goal is to develop an original method that performs guided detection of abnormal activity foci in PET oncology imaging based on these new observer models. This computer-aided diagnostic method would be highly beneficial to clinicians for diagnosis and to biologists for large-scale screening of rodent populations in molecular imaging. Method: We have previously shown good correlation of the channelized Hotelling observer (CHO) using a constant-Q model with human observer performance for 3D PET oncology imaging. We propose an alternate method based on combining a CHO observer with a wavelet sub-band decomposition of the image and we compare it to the standard CHO implementation. This method performs an undecimated transform using a biorthogonal B-spline 4/4 wavelet basis to extract the features set for input to the Hotelling observer. This work is based on simulated 3D PET images of an extended MCAT phantom with randomly located lesions. We compare three evaluation criteria: classification performance using the signal-to-noise ratio (SNR), computation efficiency and visual quality of the derived 3D maps of the decision variable λ. The SNR is estimated on a series of test images for a variable number of training images for both observers. Results: Results show that the maximum SNR is higher with the constant-Q CHO observer, especially for targets located in the liver, and that it is reached with a smaller number of training images. However, preliminary analysis indicates that the visual quality of the 3D maps of the decision variable λ is higher with the wavelet-based CHO and the computation time to derive a 3D λ-map is about 350 times shorter than for the standard CHO. 
This suggests that the wavelet-CHO observer is a good candidate for use in our guided detection method.
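
    The channelized Hotelling observer common to both variants reduces each image to a few channel responses and applies a Hotelling template to produce the decision variable λ. A sketch with synthetic 1-D "images" and random stand-in channels (the study's channels are constant-Q or wavelet sub-bands, and the signal profile here is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
n_pix, n_chan = 64, 6

# Stand-in channel matrix; random channels are purely illustrative.
U = rng.standard_normal((n_pix, n_chan))

# Hypothetical target profile added to white-noise backgrounds.
signal = np.zeros(n_pix)
signal[20:30] = 0.8

train_n = rng.standard_normal((500, n_pix))            # noise-only images
train_s = rng.standard_normal((500, n_pix)) + signal   # signal-present images

v_n, v_s = train_n @ U, train_s @ U                    # channel responses
s_cov = 0.5 * (np.cov(v_n.T) + np.cov(v_s.T))          # pooled channel covariance
w = np.linalg.solve(s_cov, v_s.mean(0) - v_n.mean(0))  # Hotelling template

# Decision variables lambda on fresh test images, and the observer SNR.
lam_n = rng.standard_normal((1000, n_pix)) @ U @ w
lam_s = (rng.standard_normal((1000, n_pix)) + signal) @ U @ w
snr = (lam_s.mean() - lam_n.mean()) / np.sqrt(0.5 * (lam_s.var() + lam_n.var()))
print(snr)
```

    The wavelet variant changes only the channel matrix U (sub-band features instead of constant-Q channels); the template and SNR machinery are unchanged.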

  17. Modified-hybrid optical neural network filter for multiple object recognition within cluttered scenes

    NASA Astrophysics Data System (ADS)

    Kypraios, Ioannis; Young, Rupert C. D.; Chatwin, Chris R.

    2009-08-01

    Motivated by the non-linear interpolation and generalization abilities of the hybrid optical neural network filter between the reference and non-reference images of the true-class object, we designed the modified-hybrid optical neural network filter. We applied an optical mask to the hybrid optical neural network filter's input. The mask was built with the constant weight connections of a randomly chosen image included in the training set. The resulting design of the modified-hybrid optical neural network filter is optimized for performing best in cluttered scenes of the true-class object. Due to the shift invariance properties inherited from its correlator unit, the filter can accommodate multiple objects of the same class to be detected within an input cluttered image. Additionally, the architecture of the neural network unit of the general hybrid optical neural network filter allows the recognition of multiple objects of different classes within the input cluttered image by modifying the output layer of the unit. We test the modified-hybrid optical neural network filter for recognition of multiple objects of the same and of different classes within cluttered input images and video sequences of cluttered scenes. The filter is shown to exhibit, with a single pass over the input data, simultaneous out-of-plane rotation and shift invariance and good clutter tolerance. It is able to successfully detect and correctly classify the true-class objects within background clutter for which there has been no previous training.

  18. Inputs for subject-specific computational fluid dynamics simulation of blood flow in the mouse aorta.

    PubMed

    Van Doormaal, Mark; Zhou, Yu-Qing; Zhang, Xiaoli; Steinman, David A; Henkelman, R Mark

    2014-10-01

    Mouse models are an important way for exploring relationships between blood hemodynamics and eventual plaque formation. We have developed a mouse model of aortic regurgitation (AR) that produces large changes in plaque burden with changes in hemodynamics [Zhou et al., 2010, "Aortic Regurgitation Dramatically Alters the Distribution of Atherosclerotic Lesions and Enhances Atherogenesis in Mice," Arterioscler. Thromb. Vasc. Biol., 30(6), pp. 1181-1188]. In this paper, we explore the amount of detail needed for realistic computational fluid dynamics (CFD) calculations in this experimental model. The CFD calculations use inputs based on experimental measurements from ultrasound (US), micro computed tomography (CT), and both anatomical magnetic resonance imaging (MRI) and phase contrast MRI (PC-MRI). The adequacy of five different levels of model complexity ((a) subject-specific CT data from a single mouse; (b) subject-specific CT centerlines with radii from US; (c) same as (b) but with MRI-derived centerlines; (d) averaged CT centerlines with averaged vessel radii and branching vessels; and (e) same as (d) but with averaged MRI centerlines) is evaluated by demonstrating their impact on relative residence time (RRT) outputs. The paper concludes by demonstrating the necessity of subject-specific geometry and recommends for inputs the use of CT or anatomical MRI for establishing the aortic centerlines, M-mode US for scaling the aortic diameters, and a combination of PC-MRI and Doppler US for estimating the spatial and temporal characteristics of the input wave forms.

  19. Iterative pixelwise approach applied to computer-generated holograms and diffractive optical elements.

    PubMed

    Hsu, Wei-Feng; Lin, Shih-Chih

    2018-01-01

    This paper presents a novel approach to optimizing the design of phase-only computer-generated holograms (CGH) for the creation of binary images in an optical Fourier transform system. Optimization begins by selecting an image pixel with a temporal change in amplitude. The modulated image function undergoes an inverse Fourier transform followed by the imposition of a CGH constraint and the Fourier transform to yield an image function associated with the change in amplitude of the selected pixel. In iterations where the quality of the image is improved, that image function is adopted as the input for the next iteration. In cases where the image quality is not improved, the image function before the pixel changed is used as the input. Thus, the proposed approach is referred to as the pixelwise hybrid input-output (PHIO) algorithm. The PHIO algorithm was shown to achieve image quality far exceeding that of the Gerchberg-Saxton (GS) algorithm. The benefits were particularly evident when the PHIO algorithm was equipped with a dynamic range of image intensities equivalent to the amplitude freedom of the image signal. The signal variation of images reconstructed from the GS algorithm was 1.0223, but only 0.2537 when using PHIO, i.e., a 75% improvement. Nonetheless, the proposed scheme resulted in a 10% degradation in diffraction efficiency and signal-to-noise ratio.
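
    The Gerchberg-Saxton baseline that PHIO is compared against alternates between the hologram-plane constraint (phase-only, unit amplitude) and the image-plane constraint (target amplitude). A 1-D sketch with a hypothetical binary target, not the paper's 2-D system:

```python
import numpy as np

def gerchberg_saxton(target_amp, n_iter=200, seed=0):
    """Classical GS iteration for a phase-only CGH: alternate between the
    hologram plane (unit amplitude, phase free) and the image plane
    (target amplitude, phase free)."""
    rng = np.random.default_rng(seed)
    holo = np.exp(1j * rng.uniform(0, 2 * np.pi, target_amp.shape))
    for _ in range(n_iter):
        img = np.fft.fft(holo)
        img = target_amp * np.exp(1j * np.angle(img))  # image-plane constraint
        holo = np.fft.ifft(img)
        holo = np.exp(1j * np.angle(holo))             # phase-only constraint
    return holo, np.abs(np.fft.fft(holo))

# Hypothetical 1-D binary target, energy-matched to a unit-amplitude hologram
# (for an unnormalized FFT of length N, sum |F|^2 = N^2 when |f| = 1).
target = np.zeros(64)
target[24:40] = 1.0
target *= np.sqrt(target.size ** 2 / (target ** 2).sum())

holo, recon = gerchberg_saxton(target)
err = np.linalg.norm(recon - target) / np.linalg.norm(target)
print(err)  # relative reconstruction error after iterating
```

    PHIO replaces the global amplitude substitution with per-pixel trial changes that are kept only when image quality improves, which is the source of its reported quality gain over GS.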

  20. Pre-Flight Radiometric Model of Linear Imager on LAPAN-IPB Satellite

    NASA Astrophysics Data System (ADS)

    Hadi Syafrudin, A.; Salaswati, Sartika; Hasbi, Wahyudi

    2018-05-01

    The LAPAN-IPB satellite is a microsatellite-class spacecraft with a remote sensing experiment mission. It carries a multispectral line imager to capture radiometric reflectance values of the Earth from space. The radiometric quality of an image is an important factor for object classification in remote sensing. Before launch (pre-flight), the line imager was tested with a monochromator and an integrating sphere to characterize its spectral response and the radiometric response of every pixel. Pre-flight test data acquired under a variety of line imager settings were used to examine the correlation between the input radiance and the digital numbers of the output images. This input-output correlation is described by a radiance conversion model that incorporates the imager settings and radiometric characteristics. The modeling process, from the hardware level to the normalized radiance formula, is presented and discussed in this paper.
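
    In its simplest form, the kind of radiance conversion model described above is a per-pixel linear fit of digital number (DN) against the known integrating-sphere radiance levels. The gain/offset and test values below are hypothetical, not LAPAN-IPB calibration data:

```python
import numpy as np

# Known integrating-sphere radiance levels (hypothetical, arbitrary units).
radiance = np.array([10.0, 25.0, 50.0, 75.0, 100.0])
# Measured DN at each level for one detector pixel (hypothetical test data).
dn = np.array([112.0, 265.0, 520.0, 774.0, 1030.0])

# Per-pixel linear radiometric model: DN = gain * L + offset.
gain, offset = np.polyfit(radiance, dn, 1)

def dn_to_radiance(dn_value, gain, offset):
    """Invert the pre-flight model to recover radiance from an image DN."""
    return (dn_value - offset) / gain

print(gain, offset)
print(dn_to_radiance(520.0, gain, offset))  # recovers roughly L = 50
```

    Repeating the fit per pixel also yields the flat-field (non-uniformity) correction, since each pixel gets its own gain and offset.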

  1. Genetic study of multimodal imaging Alzheimer's disease progression score implicates novel loci.

    PubMed

    Scelsi, Marzia A; Khan, Raiyan R; Lorenzi, Marco; Christopher, Leigh; Greicius, Michael D; Schott, Jonathan M; Ourselin, Sebastien; Altmann, Andre

    2018-05-30

    Identifying genetic risk factors underpinning different aspects of Alzheimer's disease has the potential to provide important insights into pathogenesis. Moving away from simple case-control definitions, there is considerable interest in using quantitative endophenotypes, such as those derived from imaging, as outcome measures. Previous genome-wide association studies of imaging-derived biomarkers in sporadic late-onset Alzheimer's disease focused only on phenotypes derived from single imaging modalities. In contrast, we computed a novel multi-modal neuroimaging phenotype comprising cortical amyloid burden and bilateral hippocampal volume. Both imaging biomarkers were used as input to a disease progression modelling algorithm, which estimates the biomarkers' long-term evolution curves from population-based longitudinal data. Among other parameters, the algorithm computes the shift in time required to optimally align a subject's biomarker trajectories with these population curves. This time shift serves as a disease progression score and it was used as a quantitative trait in a discovery genome-wide association study with n = 944 subjects from the Alzheimer's Disease Neuroimaging Initiative database diagnosed as Alzheimer's disease, mild cognitive impairment or healthy at the time of imaging. We identified a genome-wide significant locus implicating LCORL (rs6850306, chromosome 4; P = 1.03 × 10⁻⁸). The top variant rs6850306 was found to act as an expression quantitative trait locus for LCORL in brain tissue. The clinical role of rs6850306 in conversion from healthy ageing to mild cognitive impairment or Alzheimer's disease was further validated in an independent cohort comprising healthy, older subjects from the National Alzheimer's Coordinating Center database. 
Specifically, possession of a minor allele at rs6850306 was protective against conversion from mild cognitive impairment to Alzheimer's disease in the National Alzheimer's Coordinating Center cohort (hazard ratio = 0.593, 95% confidence interval = 0.387-0.907, n = 911, Bonferroni-corrected P = 0.032), in keeping with the negative direction of effect reported in the genome-wide association study (β for the disease progression score = -0.07 ± 0.01). The implicated locus is linked to genes with known connections to Alzheimer's disease pathophysiology and other neurodegenerative diseases. Using multimodal imaging phenotypes in association studies may assist in unveiling the genetic drivers of the onset and progression of complex diseases.

  2. Modeling the UO2 ex-AUC pellet process and predicting the fuel rod temperature distribution under steady-state operating condition

    NASA Astrophysics Data System (ADS)

    Hung, Nguyen Trong; Thuan, Le Ba; Thanh, Tran Chi; Nhuan, Hoang; Khoai, Do Van; Tung, Nguyen Van; Lee, Jin-Young; Jyothi, Rajesh Kumar

    2018-06-01

    Modeling of the uranium dioxide pellet process from ammonium uranyl carbonate-derived uranium dioxide powder (UO2 ex-AUC powder) and prediction of the fuel rod temperature distribution are reported in this paper. Response surface methodology (RSM) and the FRAPCON-4.0 code were used to model the process and to predict the fuel rod temperature under steady-state operating conditions. The fuel rod design of the AP-1000 (Westinghouse Electric Corporation), with pellet fabrication parameters taken from this study, provided the input data for the code. The predictions suggest a relationship between the fabrication parameters of UO2 pellets and their temperature distribution in the nuclear reactor.

  3. Energy flux and characteristic energy of an elemental auroral structure

    NASA Technical Reports Server (NTRS)

    Lanchester, B. S.; Palmer, J. R.; Rees, M. H.; Lummerzheim, D.; Kaila, K.; Turunen, T.

    1994-01-01

    Electron density profiles acquired with the EISCAT radar at 0.2 s time resolution, together with TV images and photometric intensities, were used to study the characteristics of thin (less than 1 km) auroral arc structures that drifted through the field of view of the instruments. It is demonstrated that both high time and space resolution are essential for deriving the input parameters of the electron flux responsible for the elemental auroral structures. One such structure required a 400 mW/sq m (erg/sq cm s) downward energy flux carried by an 8 keV monochromatic electron flux, equivalent to a current density of 50 μA/sq m.

  4. An assessment of support vector machines for land cover classification

    USGS Publications Warehouse

    Huang, C.; Davis, L.S.; Townshend, J.R.G.

    2002-01-01

    The support vector machine (SVM) is a group of theoretically superior machine learning algorithms. It was found competitive with the best available machine learning algorithms in classifying high-dimensional data sets. This paper gives an introduction to the theoretical development of the SVM and an experimental evaluation of its accuracy, stability and training speed in deriving land cover classifications from satellite images. The SVM was compared to three other popular classifiers, including the maximum likelihood classifier (MLC), neural network classifiers (NNC) and decision tree classifiers (DTC). The impacts of kernel configuration on the performance of the SVM and of the selection of training data and input variables on the four classifiers were also evaluated in this experiment.
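
    A minimal sketch of SVM classification in this spirit, using scikit-learn as a stand-in implementation; the synthetic two-band "pixels" and the kernel configuration are illustrative, not the study's data or settings:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)

# Synthetic "pixels": 2 spectral bands, 3 land-cover classes as Gaussian blobs.
centers = np.array([[0.2, 0.7], [0.6, 0.3], [0.8, 0.8]])
X = np.vstack([c + 0.05 * rng.standard_normal((100, 2)) for c in centers])
y = np.repeat([0, 1, 2], 100)

# Kernel configuration (kernel type, C, gamma) is the knob whose impact
# the study evaluates.
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, y)
acc = clf.score(X, y)
print(acc)  # near-perfect on these well-separated blobs
```

    In the study's setting, X would hold the spectral (and any derived) input variables per pixel and y the land-cover labels from training sites.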

  5. Recognition of lesion correspondence on two mammographic views: a new method of false-positive reduction for computerized mass detection

    NASA Astrophysics Data System (ADS)

    Sahiner, Berkman; Petrick, Nicholas; Chan, Heang-Ping; Paquerault, Sophie; Helvie, Mark A.; Hadjiiski, Lubomir M.

    2001-07-01

    We used the correspondence of detected structures on two views of the same breast for false-positive (FP) reduction in computerized detection of mammographic masses. For each initially detected object on one view, we considered all possible pairings with objects on the other view that fell within a radial band defined by the nipple-to-object distances. We designed a 'correspondence classifier' to classify these pairs as either the same mass (a TP-TP pair) or a mismatch (a TP-FP, FP-TP or FP-FP pair). For each pair, similarity measures of morphological and texture features were derived and used as input features in the correspondence classifier. Two-view mammograms from 94 cases were used as a preliminary data set. Initial detection provided 6.3 FPs/image at 96% sensitivity. Further FP reduction in single view resulted in 1.9 FPs/image at 80% sensitivity and 1.1 FPs/image at 70% sensitivity. By combining single-view detection with the correspondence classifier, detection accuracy improved to 1.5 FPs/image at 80% sensitivity and 0.7 FPs/image at 70% sensitivity. Our preliminary results indicate that the correspondence of geometric, morphological, and textural features of a mass on two different views provides valuable additional information for reducing FPs.
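
    A hypothetical sketch of the radial-band pairing step described above: an object on one view is paired with an object on the other view only if their nipple-to-object distances agree within a band half-width. Coordinates, the band width, and the helper name are invented for illustration.

```python
import numpy as np

def radial_band_pairs(objs_cc, objs_mlo, nipple_cc, nipple_mlo, band=15.0):
    """Return index pairs (i, j) whose nipple-to-object distances differ < band."""
    d_cc = np.linalg.norm(objs_cc - nipple_cc, axis=1)
    d_mlo = np.linalg.norm(objs_mlo - nipple_mlo, axis=1)
    return [(i, j)
            for i, di in enumerate(d_cc)
            for j, dj in enumerate(d_mlo)
            if abs(di - dj) < band]

# Invented object centroids (mm) on the two views of one breast.
objs_cc = np.array([[60.0, 40.0], [120.0, 80.0]])
objs_mlo = np.array([[50.0, 55.0], [10.0, 10.0]])
pairs = radial_band_pairs(objs_cc, objs_mlo,
                          nipple_cc=np.array([0.0, 0.0]),
                          nipple_mlo=np.array([0.0, 0.0]))
print(pairs)   # only distance-consistent pairs survive to the classifier
```

    Each surviving pair would then be scored by the correspondence classifier using the similarity of morphological and texture features.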

  6. Texture-Based Automated Lithological Classification Using Aeromagnetic Anomaly Images

    USGS Publications Warehouse

    Shankar, Vivek

    2009-01-01

    This report consists of a thesis submitted to the faculty of the Department of Electrical and Computer Engineering, in partial fulfillment of the requirements for the degree of Master of Science, Graduate College, The University of Arizona, 2004. Aeromagnetic anomaly images are geophysical prospecting tools frequently used in the exploration of metalliferous minerals and hydrocarbons. The amplitude and texture content of these images provide a wealth of information to geophysicists who attempt to delineate the nature of the Earth's upper crust. These images prove to be extremely useful in remote areas and locations where the minerals of interest are concealed by basin fill. Typically, geophysicists compile a suite of aeromagnetic anomaly images, derived from amplitude and texture measurement operations, in order to obtain a qualitative interpretation of the lithological (rock) structure. Texture measures have proven to be especially capable of capturing the magnetic anomaly signature of unique lithological units. We performed a quantitative study to explore the possibility of using texture measures as input to a machine vision system in order to achieve automated classification of lithological units. This work demonstrated a significant improvement in classification accuracy over random guessing based on a priori probabilities. Additionally, a quantitative comparison between the performances of five classes of texture measures in their ability to discriminate lithological units was achieved.

  7. Approach for Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives

    NASA Technical Reports Server (NTRS)

    Putko, Michele M.; Newman, Perry A.; Taylor, Arthur C., III; Green, Lawrence L.

    2001-01-01

    This paper presents an implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 1-D Euler CFD (computational fluid dynamics) code. Given uncertainties in statistically independent, random, normally distributed input variables, a first- and second-order statistical moment matching procedure is performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, the moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
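
    A minimal sketch of the first-order statistical moment matching described above, checked against Monte Carlo as in the paper's validation. The scalar function standing in for the CFD output, and all means and standard deviations, are invented.

```python
import numpy as np

def f(x):
    return x[0] ** 2 + 3.0 * x[1]          # invented stand-in for a CFD output

mu = np.array([2.0, 1.0])                  # input means
sigma = np.array([0.05, 0.05])             # input standard deviations

# First-order sensitivity derivatives df/dx_i evaluated at the mean.
grad = np.array([2.0 * mu[0], 3.0])

# Approximate output moments: mean ~ f(mu), var ~ sum (df/dx_i)^2 sigma_i^2.
mean_approx = f(mu)
var_approx = np.sum(grad ** 2 * sigma ** 2)

# Monte Carlo check with independent normal inputs.
rng = np.random.default_rng(1)
samples = mu + sigma * rng.standard_normal((200_000, 2))
outs = f(samples.T)
print(mean_approx, var_approx, outs.mean(), outs.var())
```

    As the abstract notes, the approximation is good when the output is well described by its local derivatives about the input means.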

  8. Neural network diagnosis of avascular necrosis from magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Manduca, Armando; Christy, Paul S.; Ehman, Richard L.

    1993-09-01

    We have explored the use of artificial neural networks to diagnose avascular necrosis (AVN) of the femoral head from magnetic resonance images. We have developed multi-layer perceptron networks, trained with conjugate gradient optimization, which diagnose AVN from single sagittal images of the femoral head with 100% accuracy on the training data and 97% accuracy on test data. These networks use only the raw image as input (with minimal preprocessing to average the images down to 32 X 32 size and to scale the input data values) and learn to extract their own features for the diagnosis decision. Various experiments with these networks are described.

  9. Image Display and Manipulation System (IDAMS) program documentation, Appendixes A-D. [including routines, convolution filtering, image expansion, and fast Fourier transformation

    NASA Technical Reports Server (NTRS)

    Cecil, R. W.; White, R. A.; Szczur, M. R.

    1972-01-01

    The IDAMS Processor is a package of task routines and support software that performs convolution filtering, image expansion, fast Fourier transformation, and other operations on a digital image tape. A unique task control card for that program, together with any necessary parameter cards, selects each processing technique to be applied to the input image. A variable number of tasks can be selected for execution by including the proper task and parameter cards in the input deck. An executive maintains control of the run; it initiates execution of each task in turn and handles any necessary error processing.

  10. Detection and segmentation of multiple touching product inspection items

    NASA Astrophysics Data System (ADS)

    Casasent, David P.; Talukder, Ashit; Cox, Westley; Chang, Hsuan-Ting; Weber, David

    1996-12-01

    X-ray images of pistachio nuts on conveyor trays for product inspection are considered. The first step in such a processor is to locate each individual item and place it in a separate file for input to a classifier to determine the quality of each nut. This paper considers new techniques to: detect each item (each nut can be in any orientation, so we employ new rotation-invariant filters to locate each item independent of its orientation); produce separate image files for each item (a new blob coloring algorithm provides this for isolated, non-touching, input items); segment touching or overlapping input items into separate image files (we use a morphological watershed transform to achieve this); and apply morphological processing to remove the shell and produce an image of only the nutmeat. Each of these operations and algorithms is detailed, and quantitative data for each are presented for the x-ray image nut inspection problem noted. These techniques are of general use in many different product inspection problems in agriculture and other areas.
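
    A simplified stand-in for the blob coloring step mentioned above: labeling connected foreground regions of a binary image so that each isolated item can be written to its own file. This 4-connectivity flood-fill version is an invented minimal sketch, not the paper's algorithm.

```python
def blob_color(binary):
    """Return (label image, n); 0 = background, 1..n = connected blobs."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy][sx] and not labels[sy][sx]:
                next_label += 1
                stack = [(sy, sx)]            # flood fill this blob
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and binary[y][x] \
                            and not labels[y][x]:
                        labels[y][x] = next_label
                        stack += [(y + 1, x), (y - 1, x),
                                  (y, x + 1), (y, x - 1)]
    return labels, next_label

img = [[1, 1, 0, 0],
       [0, 0, 0, 1],
       [0, 1, 0, 1]]
labels, n = blob_color(img)
print(n, labels)
```

    Touching items defeat this step (they merge into one blob), which is why the paper follows it with a watershed-based segmentation.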

  11. Deep Learning-Based Banknote Fitness Classification Using the Reflection Images by a Visible-Light One-Dimensional Line Image Sensor

    PubMed Central

    Pham, Tuyen Danh; Nguyen, Dat Tien; Kim, Wan; Park, Sung Ho; Park, Kang Ryoung

    2018-01-01

    In automatic paper currency sorting, fitness classification is a technique that assesses the quality of banknotes to determine whether a banknote is suitable for recirculation or should be replaced. Studies on using visible-light reflection images of banknotes for evaluating their usability have been reported. However, most of them were conducted under the assumption that the denomination and input direction of the banknote are predetermined. In other words, a pre-classification of the type of input banknote is required. To address this problem, we proposed a deep learning-based fitness-classification method that recognizes the fitness level of a banknote regardless of the denomination and input direction of the banknote to the system, using the reflection images of banknotes by a visible-light one-dimensional line image sensor and a convolutional neural network (CNN). Experimental results on the banknote image databases of the Korean won (KRW) and the Indian rupee (INR) with three fitness levels, and the United States dollar (USD) with two fitness levels, showed that our method gives better classification accuracy than other methods. PMID:29415447

  12. SPM analysis of parametric (R)-[11C]PK11195 binding images: plasma input versus reference tissue parametric methods.

    PubMed

    Schuitemaker, Alie; van Berckel, Bart N M; Kropholler, Marc A; Veltman, Dick J; Scheltens, Philip; Jonker, Cees; Lammertsma, Adriaan A; Boellaard, Ronald

    2007-05-01

    (R)-[11C]PK11195 has been used for quantifying cerebral microglial activation in vivo. In previous studies, both plasma input and reference tissue methods have been used, usually in combination with a region of interest (ROI) approach. Definition of ROIs, however, can be laborious and prone to interobserver variation. In addition, results are only obtained for predefined areas and (unexpected) signals in undefined areas may be missed. On the other hand, standard pharmacokinetic models are too sensitive to noise to calculate (R)-[11C]PK11195 binding on a voxel-by-voxel basis. Linearised versions of both plasma input and reference tissue models have been described, and these are more suitable for parametric imaging. The purpose of this study was to compare the performance of these plasma input and reference tissue parametric methods on the outcome of statistical parametric mapping (SPM) analysis of (R)-[11C]PK11195 binding. Dynamic (R)-[11C]PK11195 PET scans with arterial blood sampling were performed in 7 younger and 11 elderly healthy subjects. Parametric images of volume of distribution (Vd) and binding potential (BP) were generated using linearised versions of plasma input (Logan) and reference tissue (Reference Parametric Mapping) models. Images were compared at the group level using SPM with a two-sample t-test per voxel, both with and without proportional scaling. Parametric BP images without scaling provided the most sensitive framework for determining differences in (R)-[11C]PK11195 binding between younger and elderly subjects. Vd images could only demonstrate differences in (R)-[11C]PK11195 binding when analysed with proportional scaling due to intersubject variation in K1/k2 (blood-brain barrier transport and non-specific binding).
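
    A hedged sketch of the linearised plasma-input (Logan) analysis mentioned above: for a reversible tracer, plotting the integrated tissue curve over the instantaneous tissue value against the integrated plasma curve over the tissue value becomes linear at late times, with slope Vd. The kinetic constants and the exponential plasma input are invented; a one-tissue compartment simulation stands in for a voxel TAC.

```python
import numpy as np

K1, k2 = 0.1, 0.05                 # one-tissue model, Vd = K1/k2 = 2.0
dt = 0.1
t = np.arange(0, 60, dt)           # minutes
Cp = np.exp(-0.1 * t)              # invented plasma input function

# Euler integration of dC/dt = K1*Cp - k2*C for the tissue curve.
C = np.zeros_like(t)
for i in range(1, len(t)):
    C[i] = C[i - 1] + dt * (K1 * Cp[i - 1] - k2 * C[i - 1])

intC = np.cumsum(C) * dt
intCp = np.cumsum(Cp) * dt
x = intCp[200:] / C[200:]          # late-time points only (t >= 20 min)
y = intC[200:] / C[200:]
Vd_est = np.polyfit(x, y, 1)[0]    # Logan slope estimates Vd
print(f"Vd estimate: {Vd_est:.2f}")
```

    In parametric imaging this fit is repeated per voxel, which is why the linearised form is preferred over noise-sensitive nonlinear fitting.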

  13. The use of LiDAR-derived high-resolution DSM and intensity data to support modelling of urban flooding

    NASA Astrophysics Data System (ADS)

    Aktaruzzaman, Md.; Schmitt, Theo G.

    2011-11-01

    This paper addresses the issue of a detailed representation of an urban catchment in terms of hydraulic and hydrologic attributes. Modelling of urban flooding requires a detailed knowledge of urban surface characteristics. The advancement in spatial data acquisition technology such as airborne LiDAR (Light Detection and Ranging) has greatly facilitated the collection of high-resolution topographic information. While the use of the LiDAR-derived Digital Surface Model (DSM) has gained popularity over the last few years as input data for a flood simulation model, the use of LiDAR intensity data has remained largely unexplored in this regard. LiDAR intensity data are acquired along with elevation data during the data collection mission by an aircraft. Aerial images with only RGB (Red, Green and Blue) wavebands are often incapable of identifying surface types under shadow. On the other hand, LiDAR intensity data can provide surface information independent of sunlight conditions. The focus of this study is the use of intensity data in combination with aerial images to accurately map pervious and impervious urban areas. This study presents an Object-Based Image Analysis (OBIA) framework for detecting urban land cover types, mainly pervious and impervious surfaces, in order to improve the rainfall-runoff modelling. Finally, this study shows the application of high-resolution DSM and land cover maps to flood simulation software in order to visualize the depth and extent of urban flooding phenomena.

  14. Initial testing of a 3D printed perfusion phantom using digital subtraction angiography

    NASA Astrophysics Data System (ADS)

    Wood, Rachel P.; Khobragade, Parag; Ying, Leslie; Snyder, Kenneth; Wack, David; Bednarek, Daniel R.; Rudin, Stephen; Ionita, Ciprian N.

    2015-03-01

    Perfusion imaging is the most applied modality for the assessment of acute stroke. Parameters such as Cerebral Blood Flow (CBF), Cerebral Blood Volume (CBV) and Mean Transit Time (MTT) are used to distinguish the tissue infarct core and ischemic penumbra. Due to a lack of standardization, these parameters vary significantly between vendors and software even when provided with the same data set. There is a critical need to standardize the systems and make them more reliable. We have designed a uniform phantom to test and verify the perfusion systems. We implemented a flow loop with different flow rates (250, 300, 350 ml/min) and injected the same amount of contrast. The images of the phantom were acquired using a Digital Angiographic system. Since this phantom is uniform, projection images obtained using DSA are sufficient for initial validation. To validate the phantom we measured the contrast concentration at three regions of interest (arterial input, venous output, perfused area) and derived time density curves (TDC). We then calculated the maximum slope, area under the TDCs and flow. The maximum slope calculations increased linearly with increasing flow rate, while the area under the curve decreased with increasing flow rate. There was a 25% error between the calculated flow and measured flow. The derived TDCs were clinically relevant and the calculated flow, maximum slope and areas under the curve were sensitive to the measured flow. We have created a systematic way to calibrate existing perfusion systems and assess their reliability.
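
    A small sketch of the maximum-slope and area-under-curve calculations applied to time-density curves like those described above. The TDC here is a synthetic gamma-variate-like bolus; all numbers are invented, not the phantom's data.

```python
import numpy as np

t = np.linspace(0, 30, 301)                  # seconds
tdc = t ** 2 * np.exp(-t / 3.0)              # invented contrast TDC

# Peak upslope of the curve (the maximum-slope perfusion index).
max_slope = np.max(np.gradient(tdc, t))

# Area under the TDC via the trapezoidal rule.
auc = np.sum((tdc[1:] + tdc[:-1]) / 2.0) * (t[1] - t[0])
print(f"max slope: {max_slope:.2f}, AUC: {auc:.1f}")
```

    In the phantom study these two quantities, measured at the arterial input, venous output, and perfused regions, are the features tracked against the known pump flow rates.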

  15. DIRECT OBSERVATION OF SOLAR CORONAL MAGNETIC FIELDS BY VECTOR TOMOGRAPHY OF THE CORONAL EMISSION LINE POLARIZATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kramar, M.; Lin, H.; Tomczyk, S., E-mail: kramar@cua.edu, E-mail: lin@ifa.hawaii.edu, E-mail: tomczyk@ucar.edu

    We present the first direct “observation” of the global-scale, 3D coronal magnetic fields of Carrington Rotation (CR) Cycle 2112 using vector tomographic inversion techniques. The vector tomographic inversion uses measurements of the Fe xiii 10747 Å Hanle effect polarization signals by the Coronal Multichannel Polarimeter (CoMP) and 3D coronal density and temperature derived from scalar tomographic inversion of Solar Terrestrial Relations Observatory (STEREO)/Extreme Ultraviolet Imager (EUVI) coronal emission lines (CELs) intensity images as inputs to derive a coronal magnetic field model that best reproduces the observed polarization signals. While independent verifications of the vector tomography results cannot be performed, we compared the tomography inverted coronal magnetic fields with those constructed by magnetohydrodynamic (MHD) simulations based on observed photospheric magnetic fields of CR 2112 and 2113. We found that the MHD model for CR 2112 is qualitatively consistent with the tomography inverted result for most of the reconstruction domain except for several regions. Particularly, for one of the most noticeable regions, we found that the MHD simulation for CR 2113 predicted a model that more closely resembles the vector tomography inverted magnetic fields. In another case, our tomographic reconstruction predicted an open magnetic field at a region where a coronal hole can be seen directly from a STEREO-B/EUVI image. We discuss the utilities and limitations of the tomographic inversion technique, and present ideas for future developments.

  16. Phase and amplitude beam shaping with two deformable mirrors implementing input plane and Fourier plane phase modifications.

    PubMed

    Wu, Chensheng; Ko, Jonathan; Rzasa, John R; Paulson, Daniel A; Davis, Christopher C

    2018-03-20

    We find that ideas in optical image encryption can be very useful for adaptive optics in achieving simultaneous phase and amplitude shaping of a laser beam. An adaptive optics system with simultaneous phase and amplitude shaping ability is very desirable for atmospheric turbulence compensation. Atmospheric turbulence-induced beam distortions can jeopardize the effectiveness of optical power delivery for directed-energy systems and optical information delivery for free-space optical communication systems. In this paper, a prototype adaptive optics system is proposed based on a famous image encryption structure. The major change is to replace the two random phase plates at the input plane and Fourier plane of the encryption system, respectively, with two deformable mirrors that perform on-demand phase modulations. A Gaussian beam is used as an input to replace the conventional image input. We show through theory, simulation, and experiments that the slightly modified image encryption system can be used to achieve arbitrary phase and amplitude beam shaping within the limits of stroke range and influence function of the deformable mirrors. In application, the proposed technique can be used to perform mode conversion between optical beams, generate structured light signals for imaging and scanning, and compensate atmospheric turbulence-induced phase and amplitude beam distortions.

  17. Long-term variation in above and belowground plant inputs alters soil organic matter biogeochemistry at the molecular-level

    NASA Astrophysics Data System (ADS)

    Simpson, M. J.; Pisani, O.; Lin, L.; Lun, O.; Simpson, A.; Lajtha, K.; Nadelhoffer, K. J.

    2015-12-01

    The long-term fate of soil carbon reserves with global environmental change remains uncertain. Shifts in moisture, altered nutrient cycles, species composition, or rising temperatures may alter the proportions of above and belowground biomass entering soil. However, it is unclear how long-term changes in plant inputs may alter the composition of soil organic matter (SOM) and soil carbon storage. Advanced molecular techniques were used to assess SOM composition in mineral soil horizons (0-10 cm) after 20 years of Detrital Input and Removal Treatment (DIRT) at the Harvard Forest. SOM biomarkers (solvent extraction, base hydrolysis and cupric (II) oxide oxidation) and both solid-state and solution-state nuclear magnetic resonance (NMR) spectroscopy were used to identify changes in SOM composition and stage of degradation. Microbial activity and community composition were assessed using phospholipid fatty acid (PLFA) analysis. Doubling aboveground litter inputs decreased soil carbon content, increased the degradation of labile SOM and enhanced the sequestration of aliphatic compounds in soil. The exclusion of belowground inputs (No roots and No inputs) resulted in a decrease in root-derived components and enhanced the degradation of leaf-derived aliphatic structures (cutin). Cutin-derived SOM has been hypothesized to be recalcitrant but our results show that even this complex biopolymer is susceptible to degradation when inputs entering soil are altered. The PLFA data indicate that changes in soil microbial community structure favored the accelerated processing of specific SOM components with litter manipulation. These results collectively reveal that the quantity and quality of plant litter inputs alters the molecular-level composition of SOM and, in some cases, enhances the degradation of recalcitrant SOM. Our study also suggests that increased litterfall is unlikely to enhance soil carbon storage over the long-term in temperate forests.

  18. A New Sparse Representation Framework for Reconstruction of an Isotropic High Spatial Resolution MR Volume From Orthogonal Anisotropic Resolution Scans.

    PubMed

    Jia, Yuanyuan; Gholipour, Ali; He, Zhongshi; Warfield, Simon K

    2017-05-01

    In magnetic resonance (MR), hardware limitations, scan time constraints, and patient movement often result in the acquisition of anisotropic 3-D MR images with limited spatial resolution in the out-of-plane views. Our goal is to construct an isotropic high-resolution (HR) 3-D MR image through upsampling and fusion of orthogonal anisotropic input scans. We propose a multiframe super-resolution (SR) reconstruction technique based on sparse representation of MR images. Our proposed algorithm exploits the correspondence between the HR slices and the low-resolution (LR) sections of the orthogonal input scans as well as the self-similarity of each input scan to train pairs of overcomplete dictionaries that are used in a sparse-land local model to upsample the input scans. The upsampled images are then combined using wavelet fusion and error backprojection to reconstruct an image. Features are learned from the data and no extra training set is needed. Qualitative and quantitative analyses were conducted to evaluate the proposed algorithm using simulated and clinical MR scans. Experimental results show that the proposed algorithm achieves promising results in terms of peak signal-to-noise ratio, structural similarity image index, intensity profiles, and visualization of small structures obscured in the LR imaging process due to partial volume effects. Our novel SR algorithm outperforms the nonlocal means (NLM) method using self-similarity, NLM method using self-similarity and image prior, self-training dictionary learning-based SR method, averaging of upsampled scans, and the wavelet fusion method. Our SR algorithm can reduce through-plane partial volume artifact by combining multiple orthogonal MR scans, and thus can potentially improve medical image analysis, research, and clinical diagnosis.

  19. Investigation of dynamic SPECT measurements of the arterial input function in human subjects using simulation, phantom and human studies

    NASA Astrophysics Data System (ADS)

    Winant, Celeste D.; Aparici, Carina Mari; Zelnik, Yuval R.; Reutter, Bryan W.; Sitek, Arkadiusz; Bacharach, Stephen L.; Gullberg, Grant T.

    2012-01-01

    Computer simulations, a phantom study and a human study were performed to determine whether a slowly rotating single-photon computed emission tomography (SPECT) system could provide accurate arterial input functions for quantification of myocardial perfusion imaging using kinetic models. The errors induced by data inconsistency associated with imaging with slow camera rotation during tracer injection were evaluated with an approach called SPECT/P (dynamic SPECT from positron emission tomography (PET)) and SPECT/D (dynamic SPECT from database of SPECT phantom projections). SPECT/P simulated SPECT-like dynamic projections using reprojections of reconstructed dynamic 94Tc-methoxyisobutylisonitrile (94Tc-MIBI) PET images acquired in three human subjects (1 min infusion). This approach was used to evaluate the accuracy of estimating myocardial wash-in rate parameters K1 for rotation speeds providing 180° of projection data every 27 or 54 s. Blood input and myocardium tissue time-activity curves (TACs) were estimated using spatiotemporal splines. These were fit to a one-compartment perfusion model to obtain wash-in rate parameters K1. For the second method (SPECT/D), an anthropomorphic cardiac torso phantom was used to create real SPECT dynamic projection data of a tracer distribution derived from 94Tc-MIBI PET scans in the blood pool, myocardium, liver and background. This method introduced attenuation, collimation and scatter into the modeling of dynamic SPECT projections. Both approaches were used to evaluate the accuracy of estimating myocardial wash-in parameters for rotation speeds providing 180° of projection data every 27 and 54 s. Dynamic cardiac SPECT was also performed in a human subject at rest using a hybrid SPECT/CT scanner. Dynamic measurements of 99mTc-tetrofosmin in the myocardium were obtained using an infusion time of 2 min. Blood input, myocardium tissue and liver TACs were estimated using the same spatiotemporal splines. 
The spatiotemporal maximum-likelihood expectation-maximization (4D ML-EM) reconstructions gave more accurate reconstructions than did standard frame-by-frame static 3D ML-EM reconstructions. The SPECT/P results showed that 4D ML-EM reconstruction gave higher and more accurate estimates of K1 than did 3D ML-EM, yielding anywhere from a 44% underestimation to 24% overestimation for the three patients. The SPECT/D results showed that 4D ML-EM reconstruction gave an overestimation of 28% and 3D ML-EM gave an underestimation of 1% for K1. For the patient study the 4D ML-EM reconstruction provided continuous images as a function of time of the concentration in both ventricular cavities and myocardium during the 2 min infusion. It is demonstrated that a 2 min infusion with a two-headed SPECT system rotating 180° every 54 s can produce measurements of blood pool and myocardial TACs, though the SPECT simulation studies showed that one must sample at least every 30 s to capture a 1 min infusion input function.
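
    A hedged sketch of estimating the one-compartment wash-in parameter K1 from blood-input and tissue TACs, the quantity at the center of the study above. The infusion input, rate constants, and the simple grid-search fit are invented stand-ins for the paper's spline-based estimation.

```python
import numpy as np

dt = 0.05
t = np.arange(0, 10, dt)                          # minutes
Cp = np.where(t < 2, t / 2.0, np.exp(-(t - 2)))   # invented 2-min infusion input

def tissue(K1, k2):
    """Euler solution of the one-compartment model dC/dt = K1*Cp - k2*C."""
    C = np.zeros_like(t)
    for i in range(1, len(t)):
        C[i] = C[i - 1] + dt * (K1 * Cp[i - 1] - k2 * C[i - 1])
    return C

C_meas = tissue(0.8, 0.4)                         # "measured" myocardium TAC

# Fit: for each trial k2, the model is linear in K1, so the best K1 is a
# closed-form least-squares scale factor.
best = None
for k2 in np.arange(0.1, 1.0, 0.01):
    basis = tissue(1.0, k2)
    K1 = basis @ C_meas / (basis @ basis)
    err = np.sum((K1 * basis - C_meas) ** 2)
    if best is None or err < best[0]:
        best = (err, K1, k2)
print(f"K1 = {best[1]:.2f}, k2 = {best[2]:.2f}")
```

    The paper's point is that the blood input Cp itself must be recovered accurately from the slowly rotating SPECT data for such a fit to give unbiased K1.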

  20. Investigating the Role of Global Histogram Equalization Technique for 99mTechnetium-Methylene diphosphonate Bone Scan Image Enhancement.

    PubMed

    Pandey, Anil Kumar; Sharma, Param Dev; Dheer, Pankaj; Parida, Girish Kumar; Goyal, Harish; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh

    2017-01-01

    99mTechnetium-methylene diphosphonate (99mTc-MDP) bone scan images have a limited number of counts per pixel, and hence they have inferior image quality compared to X-rays. Theoretically, the global histogram equalization (GHE) technique can improve the contrast of a given image, though the practical benefits of doing so have found only limited acceptance. In this study, we have investigated the effect of the GHE technique on 99mTc-MDP bone scan images. A set of 89 low-contrast 99mTc-MDP whole-body bone scan images was included in this study. These images were acquired with parallel-hole collimation on a Symbia E gamma camera and then processed with the histogram equalization technique. The quality of the input and processed images was reviewed by two nuclear medicine physicians on a 5-point scale, where a score of 1 is very poor and 5 is the best image quality. A statistical test was applied to find the significance of the difference between the mean scores assigned to input and processed images. The technique improves the contrast of the images; however, oversaturation was noticed in the processed images. Student's t-test was applied, and a statistically significant difference between input and processed image quality was found at P < 0.001 (with α = 0.05). However, further improvement in image quality is needed to meet the requirements of nuclear medicine physicians. GHE can be used on low-contrast bone scan images; in some cases, a histogram equalization technique combined with another postprocessing technique is useful.
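
    A minimal numpy sketch of the global histogram equalization technique investigated above: each grey level is mapped through the normalized cumulative histogram, spreading a narrow intensity range across the full display range. The low-contrast test image is invented random data, not a bone scan.

```python
import numpy as np

def ghe(img):
    """Global histogram equalization of an 8-bit image via the CDF."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalize to 0..1
    lut = np.round(255 * cdf).astype(np.uint8)          # grey-level lookup table
    return lut[img]

rng = np.random.default_rng(0)
low = rng.integers(100, 140, size=(64, 64)).astype(np.uint8)  # narrow grey range
eq = ghe(low)
print(low.std(), eq.std())   # spread (contrast) increases after equalization
```

    The oversaturation the study reports corresponds to the steep parts of the CDF mapping many input levels to the extremes of the output range.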

  1. Spacebased Estimation of Moisture Transport in Marine Atmosphere Using Support Vector Regression

    NASA Technical Reports Server (NTRS)

    Xie, Xiaosu; Liu, W. Timothy; Tang, Benyang

    2007-01-01

    An improved algorithm is developed based on support vector regression (SVR) to estimate horizontal water vapor transport integrated through the depth of the atmosphere (Θ) over the global ocean from observations of the surface wind-stress vector by QuikSCAT, cloud drift wind vectors derived from the Multi-angle Imaging SpectroRadiometer (MISR) and geostationary satellites, and precipitable water from the Special Sensor Microwave/Imager (SSM/I). The statistical relation is established between the input parameters (the surface wind stress, the 850 mb wind, the precipitable water, time and location) and the target data (Θ calculated from rawinsondes and reanalysis of a numerical weather prediction model). The results are validated with independent daily rawinsonde observations, monthly mean reanalysis data, and through regional water balance. This study clearly demonstrates the improvement of Θ derived from satellite data using SVR over previous data sets based on linear regression and neural networks. The SVR methodology reduces both mean bias and standard deviation compared with rawinsonde observations. It agrees better with observations from synoptic to seasonal time scales and compares more favorably with the reanalysis data on seasonal variations. Only the SVR result can achieve the water balance over South America. The rationale for the advantage of the SVR method and the impact of adding the upper-level wind are also discussed.
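
    An illustrative sketch (not the paper's retrieval) of support vector regression mapping input parameters to a target quantity, in the spirit of the SVR estimation above. The 1-D toy function and the kernel settings are invented.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(0, 2 * np.pi, size=(300, 1))            # invented predictor
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(300)   # noisy target

# RBF-kernel SVR; C and epsilon control the fit/flatness trade-off.
model = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(X, y)

X_test = np.linspace(0.5, 5.5, 50).reshape(-1, 1)
err = np.max(np.abs(model.predict(X_test) - np.sin(X_test[:, 0])))
print(f"max abs error on test grid: {err:.3f}")
```

    In the paper's setting X would hold the satellite-derived predictors (wind stress, 850 mb wind, precipitable water, time, location) and y the rawinsonde/reanalysis Θ.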

  2. Bone marrow cavity segmentation using graph-cuts with wavelet-based texture feature.

    PubMed

    Shigeta, Hironori; Mashita, Tomohiro; Kikuta, Junichi; Seno, Shigeto; Takemura, Haruo; Ishii, Masaru; Matsuda, Hideo

    2017-10-01

    Emerging bioimaging technologies enable us to capture various dynamic cellular activities [Formula: see text]. As large amounts of data are obtained these days and it is becoming unrealistic to manually process a massive number of images, automatic analysis methods are required. One of the issues for automatic image segmentation is that image-taking conditions are variable; thus, many manual inputs are commonly required for each image. In this paper, we propose a bone marrow cavity (BMC) segmentation method for bone images, as BMC is considered to be related to the mechanism of bone remodeling, osteoporosis, and so on. To reduce the manual inputs needed to segment BMC, we classified the texture pattern using wavelet transformation and a support vector machine. We also integrated the result of texture pattern classification into the graph-cuts-based image segmentation method because texture analysis does not consider spatial continuity. Our method is applicable to a particular frame in an image sequence in which the condition of fluorescent material is variable. In the experiment, we evaluated our method with nine types of mother wavelets and several sets of scale parameters. The proposed method with graph-cuts and texture pattern classification performs well without manual inputs by a user.
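
    A hedged sketch of the wavelet-based texture feature step described above: one level of a 2-D Haar transform, with subband energies serving as simple texture descriptors. Pure numpy; the test patterns are invented, and the paper evaluates many mother wavelets rather than just Haar.

```python
import numpy as np

def haar_features(img):
    """One-level 2-D Haar transform; return mean energies of LL, LH, HL, HH."""
    a = img[0::2, 0::2].astype(float)
    b = img[0::2, 1::2].astype(float)
    c = img[1::2, 0::2].astype(float)
    d = img[1::2, 1::2].astype(float)
    ll = (a + b + c + d) / 4.0      # local average (approximation)
    lh = (a + b - c - d) / 4.0      # horizontal-edge detail
    hl = (a - b + c - d) / 4.0      # vertical-edge detail
    hh = (a - b - c + d) / 4.0      # diagonal detail
    return np.array([np.mean(s ** 2) for s in (ll, lh, hl, hh)])

flat = np.full((16, 16), 5)                  # textureless region
stripes = np.tile([0, 10], (16, 8))          # vertical-stripe texture
f_flat, f_stripes = haar_features(flat), haar_features(stripes)
print(f_flat, f_stripes)
```

    Such per-window subband energies would be the inputs to the SVM texture classifier, whose output is then folded into the graph-cuts energy.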

  3. Satellite Image Classification of Building Damages Using Airborne and Satellite Image Samples in a Deep Learning Approach

    NASA Astrophysics Data System (ADS)

    Duarte, D.; Nex, F.; Kerle, N.; Vosselman, G.

    2018-05-01

    The localization and detailed assessment of damaged buildings after a disastrous event is of utmost importance to guide response operations, recovery tasks or for insurance purposes. Several remote sensing platforms and sensors are currently used for the manual detection of building damages. However, there is an overall interest in the use of automated methods to perform this task, regardless of the used platform. Owing to its synoptic coverage and predictable availability, satellite imagery is currently used as input for the identification of building damages by the International Charter, as well as the Copernicus Emergency Management Service for the production of damage grading and reference maps. Recently proposed methods to perform image classification of building damages rely on convolutional neural networks (CNN). These are usually trained with only satellite image samples in a binary classification problem, however the number of samples derived from these images is often limited, affecting the quality of the classification results. The use of up/down-sampling image samples during the training of a CNN, has demonstrated to improve several image recognition tasks in remote sensing. However, it is currently unclear if this multi resolution information can also be captured from images with different spatial resolutions like satellite and airborne imagery (from both manned and unmanned platforms). In this paper, a CNN framework using residual connections and dilated convolutions is used considering both manned and unmanned aerial image samples to perform the satellite image classification of building damages. Three network configurations, trained with multi-resolution image samples are compared against two benchmark networks where only satellite image samples are used. 
Combining feature maps generated from airborne and satellite image samples, and refining these using only the satellite image samples, improved the overall satellite image classification of building damages by nearly 4 %.
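
    The dilated convolutions mentioned in this record are a simple, well-defined building block; a minimal numpy sketch (function name and the pure-Python loop are illustrative, not the paper's framework) shows how inserting gaps between kernel taps enlarges the receptive field without adding parameters:

```python
import numpy as np

def dilated_conv2d(x, w, dilation=2):
    """'Valid' 2-D correlation with a dilated kernel: gaps of `dilation - 1`
    pixels between kernel taps enlarge the receptive field at no extra
    parameter cost."""
    kh, kw = w.shape
    eh = (kh - 1) * dilation + 1          # effective kernel height
    ew = (kw - 1) * dilation + 1          # effective kernel width
    H, W = x.shape[0] - eh + 1, x.shape[1] - ew + 1
    out = np.zeros((H, W))
    for i in range(kh):
        for j in range(kw):
            out += w[i, j] * x[i * dilation:i * dilation + H,
                               j * dilation:j * dilation + W]
    return out
```

    A 3x3 kernel with dilation 2 thus covers a 5x5 neighborhood, which is how such networks grow context without pooling away resolution.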

  4. The NMR phased array.

    PubMed

    Roemer, P B; Edelstein, W A; Hayes, C E; Souza, S P; Mueller, O M

    1990-11-01

    We describe methods for simultaneously acquiring and subsequently combining data from a multitude of closely positioned NMR receiving coils. The approach is conceptually similar to phased array radar and ultrasound, and hence we call our techniques the "NMR phased array." The NMR phased array offers the signal-to-noise ratio (SNR) and resolution of a small surface coil over fields-of-view (FOV) normally associated with body imaging, with no increase in imaging time. The NMR phased array can be applied to both imaging and spectroscopy for all pulse sequences. The problematic interactions among nearby surface coils are eliminated (a) by overlapping adjacent coils to give zero mutual inductance, hence zero interaction, and (b) by attaching low-input-impedance preamplifiers to all coils, thus eliminating interference among next-nearest and more distant neighbors. We derive an algorithm for combining the data from the phased array elements to yield an image with optimum SNR. Other techniques, which are easier to implement at the cost of lower SNR, are also explored. Phased array imaging is demonstrated with high-resolution (512 x 512, 48-cm FOV, and 32-cm FOV) spin-echo images of the thoracic and lumbar spine. Data were acquired from four-element linear spine arrays, the first made of 12-cm square coils and the second made of 8-cm square coils. When compared with images from a single 15 x 30-cm rectangular coil acquired with identical imaging parameters, the phased array yields 2X and 3X higher SNR at the depth of the spine (approximately 7 cm).
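
    The combination step can be sketched in numpy. This is a minimal illustration in the spirit of the Roemer et al. optimal-SNR combination (weighting each coil by the inverse noise covariance times its sensitivity), alongside the simpler root-sum-of-squares fallback; function names and the toy shapes are assumptions of this sketch, not the paper's notation:

```python
import numpy as np

def rss_combine(coil_images):
    """Root-sum-of-squares combination: needs no sensitivity maps."""
    return np.sqrt((np.abs(coil_images) ** 2).sum(axis=0))

def optimal_combine(coil_images, sensitivities, noise_cov):
    """SNR-optimal combination in the spirit of Roemer et al.: weight each
    coil by (noise covariance)^-1 times its sensitivity at every pixel."""
    n, h, w = coil_images.shape
    Rinv = np.linalg.inv(noise_cov)
    s = coil_images.reshape(n, -1)          # one column of coil signals per pixel
    b = sensitivities.reshape(n, -1)        # coil sensitivities per pixel
    num = np.einsum('ip,ij,jp->p', np.conj(b), Rinv, s)
    den = np.sqrt(np.einsum('ip,ij,jp->p', np.conj(b), Rinv, b).real)
    return (np.abs(num) / den).reshape(h, w)
```

    With four identical unit-sensitivity coils and uncorrelated noise, both combiners recover the image with the expected sqrt(4) = 2X gain.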

  5. Identification of optimal mask size parameter for noise filtering in 99mTc-methylene diphosphonate bone scintigraphy images.

    PubMed

    Pandey, Anil K; Bisht, Chandan S; Sharma, Param D; ArunRaj, Sreedharan Thankarajan; Taywade, Sameer; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh

    2017-11-01

    99mTc-methylene diphosphonate (99mTc-MDP) bone scintigraphy images have a limited number of counts per pixel. A noise-filtering method based on the local statistics of the image produces better results than a linear filter; however, the mask size has a significant effect on image quality. In this study, we identified the optimal mask size that yields a good smooth bone scan image. Forty-four bone scan images were processed using mask sizes of 3, 5, 7, 9, 11, 13, and 15 pixels. The input and processed images were reviewed in two steps. In the first step, the images were inspected, and the mask sizes that produced images with significant loss of clinical detail in comparison with the input image were excluded. In the second step, the image quality of the 40 sets of images (each set comprising the input image and its corresponding processed images with 3-, 5-, and 7-pixel masks) was assessed by two nuclear medicine physicians, who selected one good smooth image from each set. Image quality was also assessed quantitatively with a line profile. Fisher's exact test was used to find statistically significant differences between the images processed with the 5- and 7-pixel masks at a 5% cut-off. A statistically significant difference was found (P=0.00528). The optimal mask size for the John-Sen Lee filter was found to be 7×7 pixels, which yielded 99mTc-MDP bone scan images with the highest acceptable smoothness.
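
    A local-statistics (Lee-type) filter with a configurable mask is straightforward to sketch in numpy. This is an illustrative reading, not the exact filter of the study: the noise variance is taken equal to the local mean under the Poisson assumption appropriate for count-limited scintigraphy:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def lee_filter(img, mask=7):
    """Local-statistics smoothing with a square `mask` x `mask` window.
    Flat regions collapse to the local mean; high-variance (edge) regions
    are left closer to the original pixel values."""
    img = np.asarray(img, dtype=float)
    pad = mask // 2
    win = sliding_window_view(np.pad(img, pad, mode='reflect'), (mask, mask))
    mean = win.mean(axis=(-1, -2))
    var = win.var(axis=(-1, -2))
    noise_var = np.maximum(mean, 1e-12)          # Poisson: variance ~ mean
    k = np.clip(var - noise_var, 0.0, None) / np.maximum(var, 1e-12)
    return mean + k * (img - mean)
```

    Varying `mask` over 3, 5, 7, ... reproduces the kind of sweep the study evaluated.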

  6. An automated retinal imaging method for the early diagnosis of diabetic retinopathy.

    PubMed

    Franklin, S Wilfred; Rajan, S Edward

    2013-01-01

    Diabetic retinopathy is a microvascular complication of long-term diabetes and is the major cause of eyesight loss due to changes in the blood vessels of the retina. Major vision loss due to diabetic retinopathy is highly preventable with regular screening and timely intervention at the earlier stages. Retinal blood vessel segmentation methods help to identify the successive stages of such sight-threatening diseases. Our aim was to develop and test a novel retinal imaging method that automatically segments the blood vessels in retinal images, helping ophthalmologists in the diagnosis and follow-up of diabetic retinopathy. The method classifies each image pixel as vessel or non-vessel, which, in turn, is used for automatic recognition of the vasculature in retinal images. Retinal blood vessels were identified by means of a multilayer perceptron neural network, for which the inputs were derived from Gabor and moment-invariants-based features. The back-propagation algorithm, an efficient technique for adjusting the weights of a feed-forward network, is utilized in our method. Quantitative results for sensitivity, specificity, and predictive values were obtained, and the measured accuracy of our segmentation algorithm was 95.3%, which is better than that of state-of-the-art approaches. The evaluation procedure used and the demonstrated effectiveness of our automated retinal imaging method show it to be a powerful tool for diagnosing diabetic retinopathy in its earlier stages.
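
    The Gabor feature extraction feeding the pixel classifier can be sketched as follows; kernel parameters and function names are illustrative choices of this sketch, not the paper's calibrated values:

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lam=8.0):
    """Real-valued Gabor kernel at orientation theta; line-like vessels
    respond strongly when theta matches their direction."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(img, n_orient=4):
    """Per-pixel feature stack, one Gabor response per orientation: the kind
    of input vector an MLP pixel classifier could be trained on."""
    feats = []
    for k in range(n_orient):
        g = gabor_kernel(theta=k * np.pi / n_orient)
        s = [img.shape[0] + g.shape[0] - 1, img.shape[1] + g.shape[1] - 1]
        full = np.fft.irfft2(np.fft.rfft2(img, s) * np.fft.rfft2(g, s), s)
        r, c = g.shape[0] // 2, g.shape[1] // 2       # crop to 'same' size
        feats.append(full[r:r + img.shape[0], c:c + img.shape[1]])
    return np.stack(feats, axis=-1)
```

    Each pixel then carries one response per orientation, to which moment-invariant features would be appended before classification.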

  7. A comparison of basic deinterlacing approaches for a computer assisted diagnosis approach of videoscope images

    NASA Astrophysics Data System (ADS)

    Kage, Andreas; Canto, Marcia; Gorospe, Emmanuel; Almario, Antonio; Münzenmayer, Christian

    2010-03-01

    In the near future, Computer Assisted Diagnosis (CAD), which is well established in the area of mammography, might be used to support clinical experts in the diagnosis of images derived from imaging modalities such as endoscopy. A few initial approaches to computer-assisted endoscopy have already been presented. These systems use as input the video signal provided by the endoscope's video processor. Despite the advent of high-definition systems, most standard endoscopy systems today still provide only analog video signals. These signals consist of interlaced images that cannot be used in a CAD approach without deinterlacing. Many deinterlacing approaches are known today, but most of them are specializations of a few basic approaches. In this paper we present four basic deinterlacing approaches. We used a database of non-interlaced images that were degraded by artificial interlacing and afterwards processed by these approaches. The database contains regions of interest (ROI) of clinical relevance for the diagnosis of abnormalities in the esophagus. We compared the classification rates on these ROIs for the original images and after deinterlacing. The results show that deinterlacing has an impact on the classification rates. The bobbing approach and the motion compensation approach achieved the best classification results in most cases.
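
    The bobbing approach, one of the two best performers here, is simple enough to sketch directly: keep one field and interpolate the missing scan lines (a minimal single-frame version; real implementations alternate fields over time):

```python
import numpy as np

def bob_deinterlace(frame, keep='even'):
    """'Bob' deinterlacing: keep one field (even or odd rows) and rebuild the
    other field by averaging the vertically adjacent kept lines; edges
    replicate the nearest kept line."""
    out = np.asarray(frame, dtype=float).copy()
    start = 1 if keep == 'even' else 0            # rows to re-interpolate
    n = out.shape[0]
    for r in range(start, n, 2):
        above = out[r - 1] if r > 0 else out[r + 1]
        below = out[r + 1] if r + 1 < n else out[r - 1]
        out[r] = 0.5 * (above + below)
    return out
```

    Motion compensation replaces the vertical average with a motion-estimated prediction from the neighboring field, at much higher cost.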

  8. CCCT - NCTN Steering Committees - Clinical Imaging

    Cancer.gov

    The Clinical Imaging Steering Committee serves as a forum for the extramural imaging and oncology communities to provide strategic input to the NCI regarding its significant investment in imaging activities in clinical trials.

  9. Factor analysis for delineation of organ structures, creation of in- and output functions, and standardization of multicenter kinetic modeling

    NASA Astrophysics Data System (ADS)

    Schiepers, Christiaan; Hoh, Carl K.; Dahlbom, Magnus; Wu, Hsiao-Ming; Phelps, Michael E.

    1999-05-01

    PET imaging can quantify metabolic processes in vivo, but this requires the measurement of an input function, which is invasive and labor intensive. A non-invasive, semi-automated, image-based method of input function generation would be efficient, patient friendly, and would allow quantitative PET to be applied routinely. A fully automated procedure would be ideal for studies across institutions. Factor analysis (FA) was applied as a processing tool for the definition of temporally changing structures in the field of view. FA has been proposed earlier, but its perceived mathematical difficulty has prevented widespread use. FA was utilized to delineate structures and extract blood and tissue time-activity curves (TACs). These TACs were used as input and output functions for tracer kinetic modeling, the results of which were compared with those from an input function obtained with serial blood sampling. Dynamic image data of myocardial perfusion studies with N-13 ammonia, O-15 water, or Rb-82, cancer studies with F-18 FDG, and skeletal studies with F-18 fluoride were evaluated. Correlation coefficients of kinetic parameters obtained with factor and plasma input functions were high. Linear regression usually furnished a slope near unity. Processing time was 7 min/patient on an UltraSPARC. Conclusion: FA can non-invasively generate input functions from image data, eliminating the need for blood sampling. Output (tissue) functions can be generated simultaneously. The method is simple, requires no sophisticated operator interaction, and has little inter-operator variability. FA is well suited for studies across institutions and standardized evaluations.
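
    The factor-analysis idea — decomposing a dynamic series into a few factor curves and factor images — can be sketched with a nonnegative matrix factorization; this is an assumed stand-in for the paper's FA procedure, not its actual algorithm:

```python
import numpy as np

def factor_tacs(dyn, n_factors=2, n_iter=500, seed=0):
    """Decompose dyn (T frames x P pixels) ~ C @ F, where the columns of C
    are factor time-activity curves (e.g. blood input, tissue output) and the
    rows of F are the corresponding factor images, all nonnegative."""
    rng = np.random.default_rng(seed)
    T, P = dyn.shape
    C = rng.random((T, n_factors)) + 0.1
    F = rng.random((n_factors, P)) + 0.1
    for _ in range(n_iter):                       # Lee-Seung multiplicative updates
        F *= (C.T @ dyn) / (C.T @ C @ F + 1e-12)
        C *= (dyn @ F.T) / (C @ F @ F.T + 1e-12)
    return C, F
```

    On a dynamic series that really is a mix of a few kinetics, the recovered columns of C play the role of image-derived input and output functions.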

  10. The Engineer Topographic Laboratories /ETL/ hybrid optical/digital image processor

    NASA Astrophysics Data System (ADS)

    Benton, J. R.; Corbett, F.; Tuft, R.

    1980-01-01

    An optical-digital processor for generalized image enhancement and filtering is described. The optical subsystem is a two-PROM Fourier filter processor. Input imagery is isolated, scaled, and imaged onto the first PROM; this input plane acts like a liquid gate and serves as an incoherent-to-coherent converter. The image is transformed onto a second PROM which also serves as a filter medium; filters are written onto the second PROM with a laser scanner in real time. A solid state CCTV camera records the filtered image, which is then digitized and stored in a digital image processor. The operator can then manipulate the filtered image using the gray scale and color remapping capabilities of the video processor as well as the digital processing capabilities of the minicomputer.
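
    The optical subsystem performs Fourier-plane filtering; a rough digital analogue (a circular low-pass mask standing in for the laser-written PROM filter — an assumption of this sketch, not the ETL hardware) is:

```python
import numpy as np

def fourier_filter(img, keep_frac=0.25):
    """Digital analogue of the two-PROM processor: transform the input,
    apply a filter in the Fourier plane, and transform back."""
    h, w = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))
    yy, xx = np.ogrid[:h, :w]
    r2 = (yy - h // 2) ** 2 + (xx - w // 2) ** 2
    mask = r2 <= (keep_frac * min(h, w) / 2) ** 2   # circular low-pass mask
    return np.fft.ifft2(np.fft.ifftshift(F * mask)).real
```

    In the hardware, the subsequent gray-scale and color remapping would correspond to post-processing this filtered output in the video processor.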

  11. Image segmentation via foreground and background semantic descriptors

    NASA Astrophysics Data System (ADS)

    Yuan, Ding; Qiang, Jingjing; Yin, Jihao

    2017-09-01

    In the field of image processing, it has been a challenging task to obtain a complete foreground that is not uniform in color or texture. Unlike other methods, which segment the image using only low-level features, we present a segmentation framework in which high-level visual features, such as semantic information, are used. First, initial semantic labels were obtained using a nonparametric method. Then, a subset of the training images with foregrounds similar to that of the input image was selected, and the semantic labels were further refined according to this subset. Finally, the input image was segmented by integrating the object affinity and the refined semantic labels. State-of-the-art performance was achieved in experiments with the challenging MSRC 21 dataset.

  12. When Can Information from Ordinal Scale Variables Be Integrated?

    ERIC Educational Resources Information Center

    Kemp, Simon; Grace, Randolph C.

    2010-01-01

    Many theoretical constructs of interest to psychologists are multidimensional and derive from the integration of several input variables. We show that input variables that are measured on ordinal scales cannot be combined to produce a stable weakly ordered output variable that allows trading off the input variables. Instead, a partial order is…

  13. Microcomputer Simulation of a Fourier Approach to Optical Wave Propagation

    DTIC Science & Technology

    1992-06-01

    [List-of-figures residue from the scanned report; the recoverable captions are: Figure 21, SHFT OUTPUT1 (inverse transform of product of Bessel filter and transformed input); Figure 22, SHFT OUTPUT2 (inverse transform of product of derivative filter and transformed input); Figure 23, SHFT OUTPUT (sum of SHFT OUTPUT1 …); Figure 33, SHFT OUTPUT1 at time slice 1.]

  14. Geochemical Constraints for Mercury's PCA-Derived Geochemical Terranes

    NASA Astrophysics Data System (ADS)

    Stockstill-Cahill, K. R.; Peplowski, P. N.

    2018-05-01

    PCA-derived geochemical terranes provide a robust, analytical means of defining terranes on Mercury using strictly geochemical inputs. Using the end members derived in this way, we are able to assess the geochemical implications for Mercury.
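
    The PCA step itself is standard; a minimal numpy sketch (rows as map pixels, columns as element abundances or ratios — an illustrative layout, not the study's actual data) is:

```python
import numpy as np

def pca_scores(X, n_comp=2):
    """PCA via SVD of the mean-centered data: the returned scores are what
    would be clustered into terranes; the rows of the second return value
    are the principal (end-member) directions."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_comp].T, Vt[:n_comp]
```
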

  15. Overview of deep learning in medical imaging.

    PubMed

    Suzuki, Kenji

    2017-09-01

    The use of machine learning (ML) has been increasing rapidly in the medical imaging field, including computer-aided diagnosis (CAD), radiomics, and medical image analysis. Recently, an ML area called deep learning emerged in the computer vision field and became very popular in many fields. It started from an event in late 2012, when a deep-learning approach based on a convolutional neural network (CNN) won an overwhelming victory in the best-known worldwide computer vision competition, ImageNet Classification. Since then, researchers in virtually all fields, including medical imaging, have started actively participating in the explosively growing field of deep learning. In this paper, the area of deep learning in medical imaging is overviewed, including (1) what was changed in machine learning before and after the introduction of deep learning, (2) what is the source of the power of deep learning, (3) two major deep-learning models: a massive-training artificial neural network (MTANN) and a convolutional neural network (CNN), (4) similarities and differences between the two models, and (5) their applications to medical imaging. This review shows that ML with feature input (or feature-based ML) was dominant before the introduction of deep learning, and that the major and essential difference between ML before and after deep learning is the learning of image data directly without object segmentation or feature extraction; thus, it is the source of the power of deep learning, although the depth of the model is an important attribute. The class of ML with image input (or image-based ML) including deep learning has a long history, but recently gained popularity due to the use of the new terminology, deep learning. There are two major models in this class of ML in medical imaging, MTANN and CNN, which have similarities as well as several differences. 
In our experience, MTANNs were substantially more efficient to develop, achieved higher performance, and required fewer training cases than CNNs. "Deep learning", or ML with image input, in medical imaging is an explosively growing, promising field. It is expected that ML with image input will be the mainstream area in the field of medical imaging in the next few decades.

  16. Automatic dynamic range adjustment for ultrasound B-mode imaging.

    PubMed

    Lee, Yeonhwa; Kang, Jinbum; Yoo, Yangmo

    2015-02-01

    In medical ultrasound imaging, dynamic range (DR) is defined as the difference between the maximum and minimum values of the signal to be displayed, and it is one of the most important parameters determining image quality. Typically, DR is given a fixed value and adjusted manually by operators, which leads to low clinical productivity and high user dependency. Furthermore, in 3D ultrasound imaging, DR values cannot be adjusted during 3D data acquisition. A histogram matching method, which equalizes the histogram of an input image based on that of a reference image, can be applied to determine the DR value; however, it can lead to an over-contrasted image. In this paper, a new automatic dynamic range adjustment (ADRA) method is presented that adaptively adjusts the DR value by making input images similar to a reference image. The proposed ADRA method uses the distance ratio between the log average and each extreme value of a reference image. To evaluate the performance of the ADRA method, the similarity between the reference and input images was measured by computing a correlation coefficient (CC). In in vivo experiments, applying the ADRA method increased the CC values from 0.6872 to 0.9870 and from 0.9274 to 0.9939 for kidney and liver data, respectively, compared to the fixed-DR case. In addition, the proposed ADRA method was shown to outperform the histogram matching method on in vivo liver and kidney data. When using 3D abdominal data with 70 frames, while the CC value from the ADRA method increased only slightly (i.e., 0.6%), the proposed method showed improved image quality in the c-plane compared to its fixed counterpart, which suffered from a shadow artifact. These results indicate that the proposed method can enhance image quality in 2D and 3D ultrasound B-mode imaging by improving the similarity between the reference and input images while eliminating unnecessary manual interaction by the user. 
Copyright © 2014 Elsevier B.V. All rights reserved.
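
    The exact ADRA distance-ratio formula is in the paper, but the histogram-matching baseline it is compared against is a classic algorithm and can be sketched directly:

```python
import numpy as np

def match_histogram(src, ref):
    """Classic histogram matching (the baseline method): map each source
    gray level to the reference gray level at the same cumulative
    frequency."""
    s_vals, s_idx, s_cnt = np.unique(src.ravel(), return_inverse=True,
                                     return_counts=True)
    r_vals, r_cnt = np.unique(ref.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_cnt) / src.size
    r_cdf = np.cumsum(r_cnt) / ref.size
    mapped = np.interp(s_cdf, r_cdf, r_vals)     # source CDF -> reference levels
    return mapped[s_idx].reshape(src.shape)
```

    The over-contrasting the abstract warns about arises because this mapping forces the full reference histogram onto the input, even where the input's content does not support it.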

  17. Estimating the volume of supra-glacial melt lakes across Greenland: A study of uncertainties derived from multi-platform water-reflectance models

    NASA Astrophysics Data System (ADS)

    Cordero-Llana, L.; Selmes, N.; Murray, T.; Scharrer, K.; Booth, A. D.

    2012-12-01

    Large volumes of water are necessary to propagate cracks to the glacial bed via hydrofracture. Hydrological models have shown that lakes above a critical volume can supply the necessary water for this process, so the ability to measure lake water depth remotely is important for studying these processes. Previously, water depth has been derived from the optical properties of water using data from high-resolution optical satellite sensors such as ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer), IKONOS, and Landsat. These studies used water-reflectance models based on the Bouguer-Lambert-Beer law and lacked any estimation of model uncertainties. We propose an optimized model based on the approach of Sneed and Hamilton (2007) to estimate water depths in supraglacial lakes and undertake a robust analysis of the errors for the first time. We used atmospherically corrected ASTER and MODIS data as input to the water-reflectance model. Three physical parameters are needed: the bed albedo, the water attenuation coefficient, and the reflectance of optically deep water. These parameters were derived for each wavelength using standard calibrations. As a reference dataset, we obtained lake geometries using ICESat measurements over empty lakes. Differences between modeled and reference depths are used in a minimization model to obtain parameters for the water-reflectance model, yielding optimized lake depth estimates. Our key contribution is the development of a Monte Carlo simulation to run the water-reflectance model, which allows us to quantify the uncertainties in water depth and hence water volume. This robust statistical analysis provides a better understanding of the sensitivity of the water-reflectance model to the choice of input parameters, which should contribute to the understanding of the influence of surface-derived meltwater on ice sheet dynamics. Sneed, W.A. 
and Hamilton, G.S., 2007: Evolution of melt pond volume on the surface of the Greenland Ice Sheet. Geophysical Research Letters, 34, 1-4.
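
    The Sneed and Hamilton (2007) Bouguer-Lambert-Beer depth relation can be written out directly; the default parameter values below are illustrative placeholders, not the calibrated values of this study:

```python
import numpy as np

def lake_depth(Rw, Ad=0.45, Rinf=0.05, g=0.80):
    """Depth from reflectance, z = [ln(Ad - Rinf) - ln(Rw - Rinf)] / g,
    where Rw is the observed water-leaving reflectance, Ad the lake-bottom
    albedo, Rinf the reflectance of optically deep water, and g the
    attenuation coefficient (1/m)."""
    Rw = np.clip(np.asarray(Rw, dtype=float), Rinf + 1e-6, None)
    return (np.log(Ad - Rinf) - np.log(Rw - Rinf)) / g
```

    A Monte Carlo uncertainty estimate, as in the study, would repeatedly draw (Ad, Rinf, g) from their assumed error distributions and collect the spread of the resulting depths.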

  18. Self-aligning and compressed autosophy video databases

    NASA Astrophysics Data System (ADS)

    Holtz, Klaus E.

    1993-04-01

    Autosophy, an emerging new science, explains `self-assembling structures,' such as crystals or living trees, in mathematical terms. This research provides a new mathematical theory of `learning' and a new `information theory' which permits the growing of self-assembling data networks in a computer memory, similar to the growing of `data crystals' or `data trees,' without data processing or programming. Autosophy databases are educated much like a human child to organize their own internal data storage. Input patterns, such as written questions or images, are converted to points in a mathematical omni-dimensional hyperspace. The input patterns are then associated with output patterns, such as written answers or images. Omni-dimensional information storage results in enormous data compression because each pattern fragment is stored only once. Pattern recognition in text or image files is greatly simplified by this peculiar omni-dimensional storage method. Video databases absorb input images from a TV camera and associate them with textual information. The `black box' operations are totally self-aligning: the input data determine their own hyperspace storage locations. Self-aligning autosophy databases may lead to a new generation of brain-like devices.

  19. Development and Validation of the Suprathreshold Stochastic Resonance-Based Image Processing Method for the Detection of Abdomino-pelvic Tumor on PET/CT Scans.

    PubMed

    Saroha, Kartik; Pandey, Anil Kumar; Sharma, Param Dev; Behera, Abhishek; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh

    2017-01-01

    The detection of abdomino-pelvic tumors embedded in or near radioactive urine containing 18F-FDG activity is a challenging task on PET/CT scans. In this study, we propose and validate a suprathreshold stochastic resonance-based image processing method for the detection of these tumors. The method consists of adding noise to the input image and then thresholding it, which creates one frame of an intermediate image. One hundred such frames were generated and averaged to obtain the final image. The method was implemented using MATLAB R2013b on a personal computer. The noisy image was generated using random Poisson variates corresponding to each pixel of the input image. To verify the method, 30 pairs of pre-diuretic and corresponding post-diuretic PET/CT scan images (25 tumor images and 5 control images with no tumor) were included. For each pre-diuretic image (input image), 26 images were created, at threshold values equal to the mean counts multiplied by a constant factor ranging from 1.0 to 2.6 in increments of 0.1, and visually inspected; the image that most closely matched the gold standard (the corresponding post-diuretic image) was selected as the final output image. These images were further evaluated by two nuclear medicine physicians. In 22 out of 25 images, the tumor was successfully detected. In the five control images, no false positives were reported. Thus, the empirical probability of detecting abdomino-pelvic tumors evaluates to 0.88. The proposed method was able to detect abdomino-pelvic tumors on pre-diuretic PET/CT scans with a high probability of success and no false positives.
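
    The noise-add / threshold / average loop described above maps almost directly to code (a numpy sketch of the described procedure; the original was implemented in MATLAB, and the example values below are illustrative):

```python
import numpy as np

def ssr_enhance(img, factor=1.5, n_frames=100, seed=0):
    """Suprathreshold stochastic resonance as described: draw a Poisson
    variate per pixel, threshold the noisy frame at factor * mean counts,
    repeat for n_frames frames, and average the binary frames."""
    rng = np.random.default_rng(seed)
    thr = factor * img.mean()
    acc = np.zeros(img.shape, dtype=float)
    for _ in range(n_frames):
        noisy = rng.poisson(img)                 # Poisson variate per pixel
        acc += noisy >= thr
    return acc / n_frames
```

    Each output pixel is thus the empirical probability of crossing the threshold, which is what makes faint hot lesions stand out against background.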

  20. Agro-hydrology and multi-temporal high-resolution remote sensing: toward an explicit spatial processes calibration

    NASA Astrophysics Data System (ADS)

    Ferrant, S.; Gascoin, S.; Veloso, A.; Salmon-Monviola, J.; Claverie, M.; Rivalland, V.; Dedieu, G.; Demarez, V.; Ceschia, E.; Probst, J.-L.; Durand, P.; Bustillo, V.

    2014-12-01

    The growing availability of high-resolution satellite image series offers new opportunities in agro-hydrological research and modeling. We investigated the possibilities offered for improving crop-growth dynamic simulation with the distributed agro-hydrological model: topography-based nitrogen transfer and transformation (TNT2). We used a leaf area index (LAI) map series derived from 105 Formosat-2 (F2) images covering the period 2006-2010. The TNT2 model (Beaujouan et al., 2002), calibrated against discharge and in-stream nitrate fluxes for the period 1985-2001, was tested on the 2005-2010 data set (climate, land use, agricultural practices, and discharge and nitrate fluxes at the outlet). Data from the first year (2005) were used to initialize the hydrological model. A priori agricultural practices obtained from an extensive field survey, such as seeding date, crop cultivar, and amount of fertilizer, were used as input variables. Continuous values of LAI as a function of cumulative daily temperature were obtained at the crop-field level by fitting a double logistic equation against discrete satellite-derived LAI. Model predictions of LAI dynamics using the a priori input parameters displayed temporal shifts from the observed LAI profiles that were irregularly distributed in space (between crop fields) and time (between years). By resetting the seeding date at the crop-field level, we developed an optimization method designed to efficiently minimize this temporal shift and better fit the crop growth against both the spatial observations and crop production. This optimization of simulated LAI has a negligible impact on water budgets at the catchment scale (1 mm yr-1 on average) but a noticeable impact on in-stream nitrogen fluxes (around 12%), which is of interest when considering nitrate stream contamination issues and the objectives of TNT2 modeling. 
This study demonstrates the potential contribution of the forthcoming high spatial and temporal resolution products from the Sentinel-2 satellite mission for improving agro-hydrological modeling by constraining the spatial representation of crop productivity.
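
    The double logistic LAI curve fitted against the satellite-derived LAI has a standard form; the parameter values below are illustrative, whereas in the study they are fitted per crop field, and shifting the temperature axis amounts to resetting the seeding date:

```python
import numpy as np

def double_logistic(t, lai_max=5.0, t_up=800.0, k_up=0.01,
                    t_down=1600.0, k_down=0.01):
    """Double-logistic LAI as a function of cumulative daily temperature t
    (degree-days): a green-up logistic minus a senescence logistic."""
    up = 1.0 / (1.0 + np.exp(-k_up * (t - t_up)))
    down = 1.0 / (1.0 + np.exp(-k_down * (t - t_down)))
    return lai_max * (up - down)
```

    Fitting these parameters to the discrete Formosat-2 LAI samples (e.g. by nonlinear least squares) yields the continuous per-field LAI profiles used to drive the model.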

  1. Agro-hydrology and multi temporal high resolution remote sensing: toward an explicit spatial processes calibration

    NASA Astrophysics Data System (ADS)

    Ferrant, S.; Gascoin, S.; Veloso, A.; Salmon-Monviola, J.; Claverie, M.; Rivalland, V.; Dedieu, G.; Demarez, V.; Ceschia, E.; Probst, J.-L.; Durand, P.; Bustillo, V.

    2014-07-01

    The recent and forthcoming availability of high-resolution satellite image series offers new opportunities in agro-hydrological research and modeling. We investigated the prospect of improving the crop growth dynamic simulation of the distributed agro-hydrological model Topography based Nitrogen transfer and Transformation (TNT2), using LAI map series derived from 105 Formosat-2 (F2) images covering the period 2006-2010. The TNT2 model (Beaujouan et al., 2002), calibrated with discharge and in-stream nitrate fluxes for the period 1985-2001, was tested on the 2006-2010 dataset (climate, land use, agricultural practices, discharge and nitrate fluxes at the outlet). A priori agricultural practices obtained from an extensive field survey, such as seeding date, crop cultivar, and fertilizer amount, were used as input variables. Continuous values of LAI as a function of cumulative daily temperature were obtained at the crop field level by fitting a double logistic equation against discrete satellite-derived LAI. Model predictions of LAI dynamics with a priori input parameters showed a temporal shift from the observed LAI profiles, irregularly distributed in space (between crop fields) and time (between years). By resetting the seeding date at the crop field level, we proposed an optimization method to efficiently minimize this temporal shift and better fit the crop growth against the spatial observations as well as crop production. This optimization of simulated LAI has a negligible impact on the water budget at the catchment scale (1 mm yr-1 on average) but a noticeable impact on in-stream nitrogen fluxes (around 12%), which is of interest considering nitrate stream contamination issues and TNT2 model objectives. This study demonstrates the contribution of the forthcoming high spatial and temporal resolution products of the Sentinel-2 satellite mission to improving agro-hydrological modeling by constraining the spatial representation of crop productivity.

  2. Uncertainty Quantification in Multi-Scale Coronary Simulations Using Multi-resolution Expansion

    NASA Astrophysics Data System (ADS)

    Tran, Justin; Schiavazzi, Daniele; Ramachandra, Abhay; Kahn, Andrew; Marsden, Alison

    2016-11-01

    Computational simulations of coronary flow can provide non-invasive information on hemodynamics that can aid in surgical planning and research on disease propagation. In this study, patient-specific geometries of the aorta and coronary arteries are constructed from CT imaging data and finite element flow simulations are carried out using the open source software SimVascular. Lumped parameter networks (LPN), consisting of circuit representations of vascular hemodynamics and coronary physiology, are used as coupled boundary conditions for the solver. The outputs of these simulations depend on a set of clinically derived input parameters that define the geometry and boundary conditions; however, their values are subject to uncertainty. We quantify the effects of uncertainty from two sources: uncertainty in the material properties of the vessel wall and uncertainty in the lumped parameter models, whose values are estimated by assimilating patient-specific clinical and literature data. We use a generalized multi-resolution chaos approach to propagate the uncertainty. The advantages of this approach lie in its ability to support inputs sampled from arbitrary distributions and its built-in adaptivity that efficiently approximates stochastic responses characterized by steep gradients.
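
    As a point of reference for what such uncertainty propagation computes, here is the plain Monte Carlo baseline (not the multi-resolution chaos expansion itself) applied to a hypothetical toy model, mean pressure = flow x resistance, with illustrative distributions and units:

```python
import numpy as np

def mc_propagate(model, samplers, n=20000, seed=0):
    """Plain Monte Carlo uncertainty propagation: sample each uncertain
    input from an arbitrary distribution, evaluate the model on every
    sample, and summarize the output distribution."""
    rng = np.random.default_rng(seed)
    xs = np.column_stack([draw(rng, n) for draw in samplers])
    ys = np.array([model(x) for x in xs])
    return ys.mean(), ys.std(ddof=1)

# Toy surrogate for an LPN output: P = Q * R with uncertain flow (normal)
# and uncertain lumped resistance (uniform); values are illustrative.
mean_p, std_p = mc_propagate(
    lambda x: x[0] * x[1],
    [lambda rng, n: rng.normal(5.0, 0.5, n),
     lambda rng, n: rng.uniform(10.0, 14.0, n)])
```

    Chaos-expansion methods aim to reach the same output statistics with far fewer model evaluations than this brute-force sampling, which matters when each evaluation is a full 3D flow simulation.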

  3. Scientific Visualization and Computational Science: Natural Partners

    NASA Technical Reports Server (NTRS)

    Uselton, Samuel P.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    Scientific visualization is developing rapidly, stimulated by computational science, which is gaining acceptance as a third mode of inquiry alongside theory and experiment. Computational science is based on numerical simulations of mathematical models derived from theory. But each individual simulation is like a hypothetical experiment; initial conditions are specified, and the result is a record of the observed conditions. Experiments can be simulated for situations that cannot really be created or controlled. Results impossible to measure can be computed. Even for observable values, computed samples are typically much denser. Numerical simulations also extend scientific exploration where the mathematics is analytically intractable. Numerical simulations are used to study phenomena from subatomic to intergalactic scales and from abstract mathematical structures to pragmatic engineering of everyday objects. But computational science methods would be almost useless without visualization. The obvious reason is that the huge amounts of data produced require the high bandwidth of the human visual system, and interactivity adds to the power. Visualization systems also provide a single context for all the activities involved, from debugging the simulations, to exploring the data, to communicating the results. Most of the presentations today have their roots in image processing, where the fundamental task is: given an image, extract information about the scene. Visualization has developed from computer graphics, and the inverse task: given a scene description, make an image. Visualization extends the graphics paradigm by expanding the possible input. The goal is still to produce images; the difficulty is that the input is not a scene description displayable by standard graphics methods. Visualization techniques must either transform the data into a scene description or extend graphics techniques to display this odd input. 
Computational science is a fertile field for visualization research because the results vary so widely and include things that have no known appearance. The amount of data creates additional challenges for both hardware and software systems. Evaluations of visualization should ultimately reflect the insight gained into the scientific phenomena. So making good visualizations requires consideration of characteristics of the user and the purpose of the visualization. Knowledge about human perception and graphic design is also relevant. It is this breadth of knowledge that stimulates proposals for multidisciplinary visualization teams and intelligent visualization assistant software. Visualization is an immature field, but computational science is stimulating research on a broad front.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuo, J; Su, K; Department of Radiology, University Hospitals Case Medical Center, Case Western Reserve University, Cleveland, Ohio

    Purpose: Accurate and robust photon attenuation derived from MR is essential for PET/MR and MR-based radiation treatment planning applications. Although the fuzzy C-means (FCM) algorithm has been applied for pseudo-CT generation, the input feature combination and the number of clusters have not been optimized. This study aims to optimize both for clinically practical pseudo-CT generation. Methods: Nine volunteers were recruited. A 190-second, single-acquisition UTE-mDixon with 25% (angular) sampling and 3D radial readout was performed to acquire three primitive MR features at TEs of 0.1, 1.5, and 2.8 ms: the free-induction-decay (FID), first echo, and second echo images. Three derived images were also created: Dixon-fat and Dixon-water, generated by two-point Dixon water/fat separation, and an R2* (1/T2*) map. To identify informative inputs for generating a pseudo-CT image volume, all 63 combinations, choosing one to six of the feature images, were used as inputs to FCM for pseudo-CT generation. Further, the number of clusters was varied from four to seven to find the optimal approach. Mean prediction deviation (MPD), mean absolute prediction deviation (MAPD), and correlation coefficient (R) of different combinations were compared for feature selection. Results: Among the 63 feature combinations, the four that resulted in the best MAPD and R were further compared along with the set containing all six features. The results suggested that R2* and Dixon-water are the most informative features. Further, including FID also improved the performance of pseudo-CT generation. Consequently, the set containing FID, Dixon-water, and R2* resulted in the most accurate, robust pseudo-CT when the number of clusters was five (5C). The clusters were interpreted as air, fat, bone, brain, and fluid. The six-cluster result additionally included bone marrow. Conclusion: The results suggested that FID, Dixon-water, and R2* are the most important features.
The findings can be used to facilitate pseudo-CT generation for unsupervised clustering. Please note that the project was completed with partial funding from the Ohio Department of Development grant TECH 11-063 and a sponsored research agreement with Philips Healthcare that is managed by Case Western Reserve University. As noted in the affiliations, some of the authors are Philips employees.
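
    The clustering step described above can be sketched with a minimal fuzzy C-means implementation. This is an illustrative reconstruction, not the authors' code: the feature values, cluster count, and per-cluster CT numbers below are hypothetical stand-ins.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100):
    """Minimal fuzzy C-means on X with shape (n_samples, n_features).
    Initial centres are spread over the sample index range."""
    n = X.shape[0]
    centers = X[np.linspace(0, n - 1, n_clusters).astype(int)]
    for _ in range(n_iter):
        # squared distance of every sample to every centre
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2) + 1e-12
        # standard FCM membership update (for m = 2: inverse-distance weights)
        inv = d2 ** (-1.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
    return U, centers

# Synthetic stand-in for three co-registered MR feature maps
# (e.g. FID, Dixon-water, R2*), flattened to (n_voxels, 3).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.05, (200, 3)),   # "tissue A" voxels
               rng.normal(1.0, 0.05, (200, 3))])  # "tissue B" voxels
U, centers = fuzzy_c_means(X, n_clusters=2)
labels = U.argmax(axis=1)

# Hypothetical per-cluster CT numbers (HU) turn labels into a pseudo-CT.
hu = np.array([-1000.0, 40.0])    # illustrative values only
pseudo_ct = hu[labels]
```

    In the paper's setting each cluster would instead receive a CT number appropriate to its interpreted tissue class (air, fat, bone, brain, fluid).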

  5. Resilience to the contralateral visual field bias as a window into object representations

    PubMed Central

    Garcea, Frank E.; Kristensen, Stephanie; Almeida, Jorge; Mahon, Bradford Z.

    2016-01-01

    Viewing images of manipulable objects elicits differential blood oxygen level-dependent (BOLD) contrast across parietal and dorsal occipital areas of the human brain that support object-directed reaching, grasping, and complex object manipulation. However, it is unknown which object-selective regions of parietal cortex receive their principal inputs from the ventral object-processing pathway and which receive their inputs from the dorsal object-processing pathway. Parietal areas that receive their inputs from the ventral visual pathway, rather than from the dorsal stream, will have inputs that are already filtered through object categorization and identification processes. This predicts that parietal regions that receive inputs from the ventral visual pathway should exhibit object-selective responses that are resilient to contralateral visual field biases. To test this hypothesis, adult participants viewed images of tools and animals that were presented to the left or right visual fields during functional magnetic resonance imaging (fMRI). We found that the left inferior parietal lobule showed robust tool preferences independently of the visual field in which tool stimuli were presented. In contrast, a region in posterior parietal/dorsal occipital cortex in the right hemisphere exhibited an interaction between visual field and category: tool-preferences were strongest contralateral to the stimulus. These findings suggest that action knowledge accessed in the left inferior parietal lobule operates over inputs that are abstracted from the visual input and contingent on analysis by the ventral visual pathway, consistent with its putative role in supporting object manipulation knowledge. PMID:27160998

  6. Air-to-Air Target Acquisition: Factors and Means of Improvement.

    DTIC Science & Technology

    1980-03-01

    [OCR-garbled table and text: the recoverable content lists brightness terms (Bt, Bb) and contrast C derived from input quantities and an atmospheric model for the VISTARAQ and RAE/BAC models, followed by text on visual search in a static field, where resolution depends on the angle off the visual axis.]

  7. Wh-Questions in Child L2 French: Derivational Complexity and Its Interactions with L1 Properties, Length of Exposure, Age of Exposure, and the Input

    ERIC Educational Resources Information Center

    Prévost, Philippe; Strik, Nelleke; Tuller, Laurie

    2014-01-01

    This study investigates how derivational complexity interacts with first language (L1) properties, second language (L2) input, age of first exposure to the target language, and length of exposure in child L2 acquisition. We compared elicited production of "wh"-questions in French in two groups of 15 participants each, one with L1 English…

  8. Analytically-derived sensitivities in one-dimensional models of solute transport in porous media

    USGS Publications Warehouse

    Knopman, D.S.

    1987-01-01

    Analytically-derived sensitivities are presented for parameters in one-dimensional models of solute transport in porous media. Sensitivities were derived by direct differentiation of closed form solutions for each of the models, and by a time integral method for two of the models. Models are based on the advection-dispersion equation and include adsorption and first-order chemical decay. Boundary conditions considered are: a constant step input of solute, constant flux input of solute, and exponentially decaying input of solute at the upstream boundary. A zero flux is assumed at the downstream boundary. Initial conditions include a constant and spatially varying distribution of solute. One model simulates the mixing of solute in an observation well from individual layers in a multilayer aquifer system. Computer programs produce output files compatible with graphics software in which sensitivities are plotted as a function of either time or space. (USGS)
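
    The kind of sensitivity the report derives can be illustrated with the closed-form Ogata-Banks solution for a constant step input. The report differentiates such solutions analytically; as a simpler stand-in, the sketch below forms the sensitivity to the dispersion coefficient D by central differences, with illustrative parameter values.

```python
from math import erfc, exp, sqrt

def conc(x, t, v, D, c0=1.0):
    """Ogata-Banks closed-form solution of the 1-D advection-dispersion
    equation for a constant step input c0 at the upstream boundary."""
    a = (x - v * t) / (2.0 * sqrt(D * t))
    b = (x + v * t) / (2.0 * sqrt(D * t))
    return 0.5 * c0 * (erfc(a) + exp(v * x / D) * erfc(b))

def sensitivity_D(x, t, v, D, h=1e-6):
    """dC/dD by central differences; the report differentiates the
    closed-form solutions directly, which this approximates."""
    return (conc(x, t, v, D + h) - conc(x, t, v, D - h)) / (2.0 * h)

# Ahead of the advective front (x > v*t), more dispersion raises the
# concentration, so the sensitivity to D should be positive there.
c = conc(10.0, 5.0, 1.0, 1.0)
s = sensitivity_D(10.0, 5.0, 1.0, 1.0)
```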

  9. Equations For Rotary Transformers

    NASA Technical Reports Server (NTRS)

    Salomon, Phil M.; Wiktor, Peter J.; Marchetto, Carl A.

    1988-01-01

    Equations derived for input impedance, input power, and ratio of secondary current to primary current of rotary transformer. Used for quick analysis of transformer designs. Circuit model commonly used in textbooks on theory of ac circuits.
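
    A minimal sketch of the textbook coupled-inductor circuit model the abstract refers to: input impedance and the secondary-to-primary current ratio follow from the two mesh equations. The component values are illustrative, not taken from the report.

```python
import math

def transformer_input(omega, L1, L2, M, ZL):
    """Input impedance and secondary/primary current ratio for the
    textbook coupled-inductor model:
        V1 = jwL1*I1 + jwM*I2,   0 = jwM*I1 + (jwL2 + ZL)*I2
    """
    jw = 1j * omega
    ratio = -jw * M / (ZL + jw * L2)              # I2 / I1
    z_in = jw * L1 + (omega * M) ** 2 / (ZL + jw * L2)
    return z_in, ratio

# Illustrative values: 10 kHz drive, 1 mH windings, coupling k = 0.99,
# 50-ohm resistive load.
omega = 2 * math.pi * 10e3
L1 = L2 = 1e-3
M = 0.99 * math.sqrt(L1 * L2)
z_in, ratio = transformer_input(omega, L1, L2, M, ZL=50.0)
input_power_factor = z_in.real / abs(z_in)        # cos of the Zin phase
```

    Input power then follows as |V1|²·Re(Zin)/|Zin|² for a given drive voltage.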

  10. A Comparison Between SST and AOT Derived from AVHRR and MODIS Data in the Frame of the CREPAD Program

    NASA Astrophysics Data System (ADS)

    Robles-Gonzalez, Cristina; Fernandez-Renau, Alix; Lopez Gordillo, Noelia; Sevilla, Angel Garcia; Suarez, Juana Santana

    2010-12-01

    Since 1997, the INTA-CREPAD (Centre for REception, Processing, Archiving and Dissemination of Earth Observation Data) program has freely distributed some of the most demanded low-resolution remote sensing products: SST, Ocean Chl-a, NDVI, AOD... The input data for these products are captured at the Canary Space Station (Centro Espacial de Canarias, CEC). The sensor data received at the station and used in the CREPAD program come from AVHRR, SEAWIFS and MODIS. In this study, SST and AOD retrieved by CREPAD algorithms from AVHRR have been compared with the SEADAS-derived SST and AOD from MODIS. SST values agree very well, within 0.1 ± 0.5 °C, and the correlation coefficient of the images is 0.9. AOD validation gives good results, taking into account the differences in the algorithms used. The mean AOD difference at 0.630 μm is 0.01 ± 0.05 and the correlation coefficient is 0.6.

  11. An engineering study of hybrid adaptation of wind tunnel walls for three-dimensional testing

    NASA Technical Reports Server (NTRS)

    Brown, Clinton; Kalumuck, Kenneth; Waxman, David

    1987-01-01

    Solid wall tunnels having only upper and lower walls flexing are described. An algorithm for selecting the wall contours for both 2 and 3 dimensional wall flexure is presented and numerical experiments are used to validate its applicability to the general test case of 3 dimensional lifting aircraft models in rectangular cross section wind tunnels. The method requires an initial approximate representation of the model flow field at a given lift with walls absent. The numerical methods utilized are derived by use of Green's source solutions obtained using the method of images; first order linearized flow theory is employed with Prandtl-Glauert compressibility transformations. Equations are derived for the flexed shape of a simple constant thickness plate wall under the influence of a finite number of jacks in an axial row along the plate centerline. The Green's source methods are developed to provide estimations of residual flow distortion (interferences) with measured wall pressures and wall flow inclinations as inputs.

  12. Omniview motionless camera orientation system

    NASA Technical Reports Server (NTRS)

    Martin, H. Lee (Inventor); Kuban, Daniel P. (Inventor); Zimmermann, Steven D. (Inventor); Busko, Nicholas (Inventor)

    2010-01-01

    An apparatus and method are provided for converting digital images for use in an imaging system. The apparatus includes a data memory which stores digital data representing an image having a circular or spherical field of view, such as an image captured by a fish-eye lens, a control input for receiving a signal for selecting a portion of the image, and a converter responsive to the control input for converting digital data corresponding to the selected portion into digital data representing a planar image for subsequent display. Various methods include the steps of storing digital data representing an image having a circular or spherical field of view, selecting a portion of the image, and converting the stored digital data corresponding to the selected portion into digital data representing a planar image for subsequent display. In various embodiments, the data converter and data conversion step may use an orthogonal set of transformation algorithms.
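
    A simplified version of such a fisheye-to-planar conversion can be sketched by tracing each output (perspective) pixel back into an equidistant fisheye image. The patent's actual transformation algorithms are not reproduced here; the projection model, parameters, and function names below are assumptions.

```python
import math

def planar_to_fisheye(u, v, out_size, fov_deg, pan_deg, tilt_deg,
                      fish_cx, fish_cy, R):
    """Map an output perspective pixel (u, v) back into an equidistant
    ('f-theta') fisheye image whose radius R spans a 180-degree view."""
    f = (out_size / 2.0) / math.tan(math.radians(fov_deg) / 2.0)
    # ray through the pixel for a pinhole camera looking down +z
    rx, ry, rz = u - out_size / 2.0, v - out_size / 2.0, f
    p = math.radians(pan_deg)    # pan: rotate the ray about the y axis
    rx, rz = rx * math.cos(p) + rz * math.sin(p), -rx * math.sin(p) + rz * math.cos(p)
    t = math.radians(tilt_deg)   # tilt: rotate the ray about the x axis
    ry, rz = ry * math.cos(t) + rz * math.sin(t), -ry * math.sin(t) + rz * math.cos(t)
    theta = math.atan2(math.hypot(rx, ry), rz)   # off-axis angle
    phi = math.atan2(ry, rx)
    r = R * theta / (math.pi / 2.0)              # equidistant projection
    return fish_cx + r * math.cos(phi), fish_cy + r * math.sin(phi)

# Looking straight ahead, the output centre samples the fisheye centre;
# panning 45 degrees moves the sample halfway toward the image rim.
center_px = planar_to_fisheye(256, 256, 512, 60, 0, 0, 512, 512, 512)
panned_px = planar_to_fisheye(256, 256, 512, 60, 45, 0, 512, 512, 512)
```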

  13. Glacier Surface Lowering and Stagnation in the Manaslu Region of Nepal

    NASA Astrophysics Data System (ADS)

    Robson, B. A.; Nuth, C.; Nielsen, P. R.; Hendrickx, M.; Dahl, S. O.

    2015-12-01

    Frequent and up-to-date glacier outlines are needed for many applications of glaciology, not only glacier area change analysis, but also for masks in volume or velocity analysis, for the estimation of water resources and as model input data. Remote sensing offers a good option for creating glacier outlines over large areas, but manual correction is frequently necessary, especially in areas containing supraglacial debris. We show three different workflows for mapping clean ice and debris-covered ice within Object Based Image Analysis (OBIA). By working at the object level as opposed to the pixel level, OBIA facilitates using contextual, spatial and hierarchical information when assigning classes, and additionally permits the handling of multiple data sources. Our first example shows mapping debris-covered ice in the Manaslu Himalaya, Nepal. SAR coherence data are used in combination with optical and topographic data to classify debris-covered ice, obtaining an accuracy of 91%. Our second example shows using a high-resolution LiDAR derived DEM over the Hohe Tauern National Park in Austria. Breaks in surface morphology are used in creating image objects; debris-covered ice is then classified using a combination of spectral, thermal and topographic properties. Lastly, we show a completely automated workflow for mapping glacier ice in Norway. The NDSI and NIR/SWIR band ratio are used to map clean ice over the entire country, but the thresholds are calculated automatically based on a histogram of each image subset. This means that, in theory, any Landsat scene can be input and the clean ice automatically extracted. Debris-covered ice can be included semi-automatically using contextual and morphological information.

  14. Event-by-Event Continuous Respiratory Motion Correction for Dynamic PET Imaging.

    PubMed

    Yu, Yunhan; Chan, Chung; Ma, Tianyu; Liu, Yaqiang; Gallezot, Jean-Dominique; Naganawa, Mika; Kelada, Olivia J; Germino, Mary; Sinusas, Albert J; Carson, Richard E; Liu, Chi

    2016-07-01

    Existing respiratory motion-correction methods are applied only to static PET imaging. We have previously developed an event-by-event respiratory motion-correction method with correlations between internal organ motion and external respiratory signals (INTEX). This method is uniquely appropriate for dynamic imaging because it corrects motion for each time point. In this study, we applied INTEX to human dynamic PET studies with various tracers and investigated the impact on kinetic parameter estimation. The use of 3 tracers-a myocardial perfusion tracer, (82)Rb (n = 7); a pancreatic β-cell tracer, (18)F-FP(+)DTBZ (n = 4); and a tumor hypoxia tracer, (18)F-fluoromisonidazole ((18)F-FMISO) (n = 1)-was investigated in a study of 12 human subjects. Both rest and stress studies were performed for (82)Rb. The Anzai belt system was used to record respiratory motion. Three-dimensional internal organ motion in high temporal resolution was calculated by INTEX to guide event-by-event respiratory motion correction of target organs in each dynamic frame. Time-activity curves of regions of interest drawn based on end-expiration PET images were obtained. For (82)Rb studies, K1 was obtained with a 1-tissue model using a left-ventricle input function. Rest-stress myocardial blood flow (MBF) and coronary flow reserve (CFR) were determined. For (18)F-FP(+)DTBZ studies, the total volume of distribution was estimated with arterial input functions using the multilinear analysis 1 method. For the (18)F-FMISO study, the net uptake rate Ki was obtained with a 2-tissue irreversible model using a left-ventricle input function. All parameters were compared with the values derived without motion correction. With INTEX, K1 and MBF increased by 10% ± 12% and 15% ± 19%, respectively, for (82)Rb stress studies. CFR increased by 19% ± 21%. For studies with motion amplitudes greater than 8 mm (n = 3), K1, MBF, and CFR increased by 20% ± 12%, 30% ± 20%, and 34% ± 23%, respectively. 
For (82)Rb rest studies, INTEX had minimal effect on parameter estimation. The total volume of distribution of (18)F-FP(+)DTBZ and Ki of (18)F-FMISO increased by 17% ± 6% and 20%, respectively. Respiratory motion can have a substantial impact on dynamic PET in the thorax and abdomen. The INTEX method using continuous external motion data substantially changed parameters in kinetic modeling. More accurate estimation is expected with INTEX. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
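
    The (82)Rb kinetic step, a 1-tissue compartment model yielding K1, can be sketched on synthetic data. This is a generic illustration (noiseless, no blood-volume term, hypothetical input function), not the authors' pipeline:

```python
import numpy as np

def one_tissue_tac(cp, dt, K1, k2):
    """Tissue TAC for a 1-tissue compartment model:
    Ct(t) = K1 * (Cp convolved with exp(-k2*t))."""
    t = np.arange(len(cp)) * dt
    return K1 * np.convolve(cp, np.exp(-k2 * t))[:len(cp)] * dt

# Hypothetical bolus-like input function sampled every second.
dt = 1.0
t = np.arange(0, 300, dt)
cp = (t / 20.0) * np.exp(1 - t / 20.0)        # peaks at t = 20 s

true_K1, true_k2 = 0.8, 0.05
ct = one_tissue_tac(cp, dt, true_K1, true_k2)

# Fit: grid-search k2; the model is linear in K1 for a fixed k2.
best = None
for k2 in np.linspace(0.01, 0.2, 96):
    basis = one_tissue_tac(cp, dt, 1.0, k2)
    K1 = float(basis @ ct / (basis @ basis))
    sse = float(((K1 * basis - ct) ** 2).sum())
    if best is None or sse < best[0]:
        best = (sse, K1, k2)
_, fit_K1, fit_k2 = best
```

    Motion correction matters here because blurring of the time-activity curves propagates directly into K1 and the derived MBF and CFR values.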

  15. Data-driven optimal binning for respiratory motion management in PET.

    PubMed

    Kesner, Adam L; Meier, Joseph G; Burckhardt, Darrell D; Schwartz, Jazmin; Lynch, David A

    2018-01-01

    Respiratory gating has been used in PET imaging to reduce the amount of image blurring caused by patient motion. Optimal binning is an approach for using the motion-characterized data by binning it into a single, easy-to-use, optimal bin. To date, optimal binning protocols have utilized externally driven motion characterization strategies that have been tuned with population-derived assumptions and parameters. In this work, we propose a new strategy to characterize motion directly from a patient's gated scan, and to use that signal to create a patient/instance-specific optimal bin image. Two hundred and nineteen phase-gated FDG PET scans, acquired using data-driven gating as described previously, were used as the input for this study. For each scan, a phase-amplitude motion characterization was generated and normalized using principal component analysis. A patient-specific "optimal bin" window was derived using this characterization, via methods that mirror traditional optimal window binning strategies. The resulting optimal bin images were validated by correlating quantitative and qualitative measurements in the population of PET scans. In 53% (n = 115) of the image population, the optimal bin was determined to include 100% of the image statistics. In the remaining images, the optimal binning windows averaged 60% of the statistics and ranged between 20% and 90%. Tuning the algorithm, through a single acceptance window parameter, allowed for adjustments of the algorithm's performance in the population toward conservation of motion or reduced noise, enabling users to incorporate their definition of optimal. In the population of images that were deemed appropriate for segregation, average lesion SUVmax were 7.9, 8.5, and 9.0 for nongated images, optimal bin, and gated images, respectively.
The Pearson correlation of FWHM measurements between optimal bin images and gated images were better than with nongated images, 0.89 and 0.85, respectively. Generally, optimal bin images had better resolution than the nongated images and better noise characteristics than the gated images. We extended the concept of optimal binning to a data-driven form, updating a traditionally one-size-fits-all approach to a conformal one that supports adaptive imaging. This automated strategy was implemented easily within a large population and encapsulated motion information in an easy to use 3D image. Its simplicity and practicality may make this, or similar approaches ideal for use in clinical settings. © 2017 American Association of Physicists in Medicine.
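
    The idea of an acceptance window can be illustrated with a toy selector that picks the contiguous run of gates maximizing included counts subject to a motion-range limit. The paper's PCA-based characterization and exact windowing rule are not reproduced; all numbers below are hypothetical.

```python
import numpy as np

def optimal_bin_window(counts, displacement, max_range):
    """Pick the contiguous run of gates that maximises included counts
    while the displacement spread inside the run stays within max_range."""
    n = len(counts)
    best = (int(counts[0]), 0, 1)          # (counts, start, stop)
    for i in range(n):
        for j in range(i + 1, n + 1):
            w = displacement[i:j]
            if w.max() - w.min() > max_range:
                break                       # widening only grows the spread
            c = int(counts[i:j].sum())
            if c > best[0]:
                best = (c, i, j)
    return best

# Six phase gates: hypothetical displacement (mm) over the breathing
# cycle and counts per gate.
displacement = np.array([0.0, 2.0, 6.0, 9.0, 6.5, 2.5])
counts = np.array([120, 110, 90, 80, 95, 105])
best_counts, start, stop = optimal_bin_window(counts, displacement, max_range=3.0)
```

    Loosening `max_range` trades residual motion for lower noise, mirroring the single acceptance-window parameter described above.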

  16. Investigating the Role of Global Histogram Equalization Technique for 99mTechnetium-Methylene diphosphonate Bone Scan Image Enhancement

    PubMed Central

    Pandey, Anil Kumar; Sharma, Param Dev; Dheer, Pankaj; Parida, Girish Kumar; Goyal, Harish; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh

    2017-01-01

    Purpose of the Study: 99mTechnetium-methylene diphosphonate (99mTc-MDP) bone scan images have a limited number of counts per pixel, and hence they have inferior image quality compared to X-rays. Theoretically, the global histogram equalization (GHE) technique can improve the contrast of a given image, though the practical benefits of doing so have found only limited acceptance. In this study, we have investigated the effect of the GHE technique on 99mTc-MDP bone scan images. Materials and Methods: A set of 89 low contrast 99mTc-MDP whole-body bone scan images were included in this study. These images were acquired with parallel hole collimation on a Symbia E gamma camera. The images were then processed with the histogram equalization technique. The image quality of input and processed images was reviewed by two nuclear medicine physicians on a 5-point scale, where 1 denotes very poor and 5 the best image quality. A statistical test was applied to assess the significance of the difference between the mean scores assigned to input and processed images. Results: The technique improves the contrast of the images; however, oversaturation was noticed in the processed images. Student's t-test was applied, and a statistically significant difference between input and processed image quality was found at P < 0.001 (with α = 0.05). However, further improvement in image quality is needed as per the requirements of nuclear medicine physicians. Conclusion: GHE techniques can be used on low contrast bone scan images. In some cases, a histogram equalization technique in combination with some other postprocessing technique is useful. PMID:29142344
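
    Global histogram equalization itself is standard and easy to sketch; the code below applies the classic cumulative-histogram lookup to a synthetic low-contrast count image (not actual bone scan data).

```python
import numpy as np

def global_histogram_equalization(img, n_levels=256):
    """Classic GHE: map grey levels through the normalised cumulative
    histogram so the output histogram is approximately uniform."""
    hist = np.bincount(img.ravel(), minlength=n_levels)
    cdf = hist.cumsum().astype(float)
    cdf_min = cdf[cdf > 0][0]              # first occupied grey level
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * (n_levels - 1))
    return lut.clip(0, n_levels - 1).astype(img.dtype)[img]

# Low-contrast synthetic "count" image: grey levels squeezed into 90..109.
rng = np.random.default_rng(0)
img = rng.integers(90, 110, size=(64, 64), dtype=np.uint16)
eq = global_histogram_equalization(img)
```

    The lookup stretches the occupied 20-level band to the full 0-255 range, which is also why oversaturation of already-bright regions can appear in practice.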

  17. A fast and fully automatic registration approach based on point features for multi-source remote-sensing images

    NASA Astrophysics Data System (ADS)

    Yu, Le; Zhang, Dengrong; Holden, Eun-Jung

    2008-07-01

    Automatic registration of multi-source remote-sensing images is a difficult task, as it must deal with the varying illuminations and resolutions of the images, different perspectives and the local deformations within the images. This paper proposes a fully automatic and fast non-rigid image registration technique that addresses those issues. The proposed technique performs a pre-registration process that coarsely aligns the input image to the reference image by automatically detecting their matching points by using the scale invariant feature transform (SIFT) method and an affine transformation model. Once the coarse registration is completed, it performs a fine-scale registration process based on a piecewise linear transformation technique using feature points that are detected by the Harris corner detector. The registration process first finds, in succession, tie point pairs between the input and the reference image by detecting Harris corners and applying a cross-matching strategy based on a wavelet pyramid for a fast search speed. Tie point pairs with large errors are pruned by an error-checking step. The input image is then rectified by using triangulated irregular networks (TINs) to deal with irregular local deformations caused by the fluctuation of the terrain. For each triangular facet of the TIN, affine transformations are estimated and applied for rectification. Experiments with Quickbird, SPOT5, SPOT4 and TM remote-sensing images of the Hangzhou area in China demonstrate the efficiency and the accuracy of the proposed technique for multi-source remote-sensing image registration.
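
    The coarse (pre-registration) stage fits an affine model to matched point pairs. A minimal least-squares version is sketched below, with random correspondences standing in for the SIFT matches used in the paper.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine transform mapping src points to dst points
    (the model used for the coarse alignment stage)."""
    A = np.hstack([src, np.ones((len(src), 1))])   # rows [x, y, 1]
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return params                                   # shape (3, 2)

def apply_affine(params, pts):
    return np.hstack([pts, np.ones((len(pts), 1))]) @ params

# Random correspondences standing in for SIFT matches; recover a known
# transform from noiseless pairs.
rng = np.random.default_rng(0)
src = rng.uniform(0, 100, (20, 2))
true_params = np.array([[0.9, -0.1],
                        [0.2, 1.1],
                        [5.0, -3.0]])    # 2x2 linear part plus translation row
dst = apply_affine(true_params, src)
est = fit_affine(src, dst)
```

    The fine stage would then replace this single global model with one affine transform per TIN facet.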

  18. PET and MRI image fusion based on combination of 2-D Hilbert transform and IHS method.

    PubMed

    Haddadpour, Mozhdeh; Daneshvar, Sabalan; Seyedarabi, Hadi

    2017-08-01

    Medical image fusion combines two or more medical images, such as a Magnetic Resonance Image (MRI) and a Positron Emission Tomography (PET) image, and maps them to a single fused image. The purpose of our study is to assist physicians in diagnosing and treating disease in as little time as possible. We used MRI and PET as input images and fused them based on a combination of the two-dimensional Hilbert transform (2-D HT) and the Intensity Hue Saturation (IHS) method. To evaluate the performance of our method, we used three common metrics: Average Gradient (AGk), which assesses spatial features; Discrepancy (Dk), which assesses spectral features; and Overall Performance (O.P). Simulated and numerical results demonstrate the desired performance of the proposed method. Since the main purpose of medical image fusion is to preserve both the spatial and spectral features of the input images, the numerical results of these metrics, together with the simulation results, indicate that the proposed method preserves both. Copyright © 2017 Chang Gung University. Published by Elsevier B.V. All rights reserved.
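
    The IHS substitution at the core of such fusion methods can be sketched in its common "fast" additive form; the 2-D Hilbert transform stage of the paper is omitted, and the images below are random stand-ins.

```python
import numpy as np

def ihs_fuse(pet_rgb, mri):
    """'Fast' IHS fusion: take the intensity component of the colour PET
    image as the channel mean, substitute the MRI intensity, and add the
    difference back to every channel (hue/saturation are preserved)."""
    intensity = pet_rgb.mean(axis=2)
    fused = pet_rgb + (mri - intensity)[..., None]
    return np.clip(fused, 0.0, 1.0)

rng = np.random.default_rng(0)
pet = rng.uniform(0.2, 0.6, (8, 8, 3))    # stand-in colour PET image
mri = rng.uniform(0.0, 1.0, (8, 8))       # stand-in MRI intensity
out = ihs_fuse(pet, mri)
```

    Substituting the image's own intensity returns the original, which makes the spectral-preservation property easy to check.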

  19. Semi-automatic medical image segmentation with adaptive local statistics in Conditional Random Fields framework.

    PubMed

    Hu, Yu-Chi J; Grossberg, Michael D; Mageras, Gikas S

    2008-01-01

    Planning radiotherapy and surgical procedures usually requires onerous manual segmentation of anatomical structures from medical images. In this paper we present a semi-automatic and accurate segmentation method that dramatically reduces the time and effort required of expert users. This is accomplished by giving the user an intuitive graphical interface to indicate samples of target and non-target tissue by loosely drawing a few brush strokes on the image. We use these brush strokes to provide the statistical input for a Conditional Random Field (CRF) based segmentation. Since we extract purely statistical information from the user input, we eliminate the need for assumptions on boundary contrast previously used by many other methods. A new feature of our method is that the statistics on one image can be reused on related images without registration. To demonstrate this, we show that boundary statistics provided on a few 2D slices of volumetric medical data can be propagated through the entire 3D stack of images without using the geometric correspondence between images. In addition, the image segmentation from the CRF can be formulated as a minimum s-t graph cut problem, which has a solution that is both globally optimal and fast. The combination of a fast segmentation and minimal user input that is reusable makes this a powerful technique for the segmentation of medical images.

  20. Dual energy CT kidney stone differentiation in photon counting computed tomography

    NASA Astrophysics Data System (ADS)

    Gutjahr, R.; Polster, C.; Henning, A.; Kappler, S.; Leng, S.; McCollough, C. H.; Sedlmair, M. U.; Schmidt, B.; Krauss, B.; Flohr, T. G.

    2017-03-01

    This study evaluates the capabilities of a whole-body photon counting CT system to differentiate between four common kidney stone materials, namely uric acid (UA), calcium oxalate monohydrate (COM), cystine (CYS), and apatite (APA), ex vivo. Two different x-ray spectra (120 kV and 140 kV) were applied and two acquisition modes were investigated. The macro-mode generates two energy threshold based image-volumes and two energy bin based image-volumes. In the chesspattern-mode four energy thresholds are applied. A virtual low energy image, as well as a virtual high energy image, are derived from the initial threshold-based images, while considering their statistically correlated nature. The energy bin based images of the macro-mode, as well as the virtual low and high energy images of the chesspattern-mode, serve as input for our dual energy evaluation. The dual energy ratios of the individually segmented kidney stones were utilized to quantify the discriminability of the different materials. The dual energy ratios of the two acquisition modes showed high correlation for both applied spectra. Wilcoxon rank-sum tests and the evaluation of the areas under the receiver operating characteristic curves suggest that UA kidney stones are best differentiable from all other materials (AUC = 1.0), followed by CYS (AUC ≈ 0.9 compared against COM and APA). COM and APA, however, are hardly distinguishable (AUC between 0.63 and 0.76). The results hold true for the measurements of both spectra and both acquisition modes.
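
    The dual-energy-ratio discrimination can be illustrated by computing an AUC from two samples via the Mann-Whitney U statistic, as the abstract's analysis suggests. The per-stone ratio values below are hypothetical, chosen only to mimic the reported separability pattern.

```python
import numpy as np

def auc_from_samples(pos, neg):
    """ROC AUC from two score samples via the Mann-Whitney U statistic:
    AUC = P(pos > neg) + 0.5 * P(tie)."""
    pos = np.asarray(pos, float)[:, None]
    neg = np.asarray(neg, float)[None, :]
    return ((pos > neg).sum() + 0.5 * (pos == neg).sum()) / (pos.size * neg.size)

# Hypothetical per-stone dual-energy ratios (low-kV / high-kV CT numbers),
# chosen to mimic the reported pattern: UA well separated, COM vs APA not.
rng = np.random.default_rng(0)
ua = rng.normal(1.00, 0.03, 30)
com = rng.normal(1.45, 0.06, 30)
apa = rng.normal(1.48, 0.06, 30)
auc_ua_vs_com = auc_from_samples(com, ua)
auc_com_vs_apa = auc_from_samples(apa, com)
```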

  1. Dynamic cardiac PET imaging: extraction of time-activity curves using ICA and a generalized Gaussian distribution model.

    PubMed

    Mabrouk, Rostom; Dubeau, François; Bentabet, Layachi

    2013-01-01

    Kinetic modeling of metabolic and physiologic cardiac processes in small animals requires an input function (IF) and tissue time-activity curves (TACs). In this paper, we present a mathematical method based on independent component analysis (ICA) to extract the IF and the myocardium's TACs directly from dynamic positron emission tomography (PET) images. The method assumes a super-Gaussian distribution model for the blood activity, and a sub-Gaussian distribution model for the tissue activity. Our approach was applied on 22 PET measurement sets of small animals, which were obtained from the three most frequently used cardiac radiotracers, namely: desoxy-fluoro-glucose ((18)F-FDG), [(13)N]-ammonia, and [(11)C]-acetate. Our study was extended to human PET measurements obtained with the Rubidium-82 ((82)Rb) radiotracer. The resolved mathematical IF values compare favorably to those derived from curves extracted from regions of interest (ROI), suggesting that the procedure presents a reliable alternative to serial blood sampling for small-animal cardiac PET studies.

  2. High-cut characteristics of the baroreflex neural arc preserve baroreflex gain against pulsatile pressure.

    PubMed

    Kawada, Toru; Zheng, Can; Yanagiya, Yusuke; Uemura, Kazunori; Miyamoto, Tadayoshi; Inagaki, Masashi; Shishido, Toshiaki; Sugimachi, Masaru; Sunagawa, Kenji

    2002-03-01

    A transfer function from baroreceptor pressure input to sympathetic nerve activity (SNA) shows derivative characteristics in the frequency range below 0.8 Hz in rabbits. These derivative characteristics contribute to a quick and stable arterial pressure (AP) regulation. However, if the derivative characteristics hold up to heart rate frequency, the pulsatile pressure input will yield a markedly augmented SNA signal. Such a signal would saturate the baroreflex signal transduction, thereby disabling the baroreflex regulation of AP. We hypothesized that the transfer gain at heart rate frequency would be much smaller than that predicted from extrapolating the derivative characteristics. In anesthetized rabbits (n = 6), we estimated the neural arc transfer function in the frequency range up to 10 Hz. The transfer gain was lost at a rate of -20 dB/decade when the input frequency exceeded 0.8 Hz. A numerical simulation indicated that the high-cut characteristics above 0.8 Hz were effective to attenuate the pulsatile signal and preserve the open-loop gain when the baroreflex dynamic range was finite.
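
    The reported shape of the neural arc transfer function (derivative gain below about 0.8 Hz, then roll-off at -20 dB/decade) can be mimicked with a toy zero/double-pole model; the corner frequencies and gain below are illustrative, not fitted to the rabbit data.

```python
import math

def neural_arc_gain(f, fz=0.05, fp=0.8, K=1.0):
    """|H(f)| for a toy transfer function: a zero at fz gives rising
    (derivative) gain, and a double pole at fp turns that into a
    -20 dB/decade roll-off above 0.8 Hz."""
    s = 2j * math.pi * f
    wz = 2 * math.pi * fz
    wp = 2 * math.pi * fp
    return abs(K * (1 + s / wz) / (1 + s / wp) ** 2)

# Below the 0.8 Hz corner the gain rises (derivative characteristics)...
rising = neural_arc_gain(0.5) > neural_arc_gain(0.1)
# ...and above it the slope approaches -20 dB/decade (high-cut), which
# attenuates pulsatile input at heart-rate frequencies.
slope_db_per_decade = 20 * math.log10(neural_arc_gain(20.0) / neural_arc_gain(2.0))
```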

  3. Galaxy And Mass Assembly (GAMA): end of survey report and data release 2

    NASA Astrophysics Data System (ADS)

    Liske, J.; Baldry, I. K.; Driver, S. P.; Tuffs, R. J.; Alpaslan, M.; Andrae, E.; Brough, S.; Cluver, M. E.; Grootes, M. W.; Gunawardhana, M. L. P.; Kelvin, L. S.; Loveday, J.; Robotham, A. S. G.; Taylor, E. N.; Bamford, S. P.; Bland-Hawthorn, J.; Brown, M. J. I.; Drinkwater, M. J.; Hopkins, A. M.; Meyer, M. J.; Norberg, P.; Peacock, J. A.; Agius, N. K.; Andrews, S. K.; Bauer, A. E.; Ching, J. H. Y.; Colless, M.; Conselice, C. J.; Croom, S. M.; Davies, L. J. M.; De Propris, R.; Dunne, L.; Eardley, E. M.; Ellis, S.; Foster, C.; Frenk, C. S.; Häußler, B.; Holwerda, B. W.; Howlett, C.; Ibarra, H.; Jarvis, M. J.; Jones, D. H.; Kafle, P. R.; Lacey, C. G.; Lange, R.; Lara-López, M. A.; López-Sánchez, Á. R.; Maddox, S.; Madore, B. F.; McNaught-Roberts, T.; Moffett, A. J.; Nichol, R. C.; Owers, M. S.; Palamara, D.; Penny, S. J.; Phillipps, S.; Pimbblet, K. A.; Popescu, C. C.; Prescott, M.; Proctor, R.; Sadler, E. M.; Sansom, A. E.; Seibert, M.; Sharp, R.; Sutherland, W.; Vázquez-Mata, J. A.; van Kampen, E.; Wilkins, S. M.; Williams, R.; Wright, A. H.

    2015-09-01

    The Galaxy And Mass Assembly (GAMA) survey is one of the largest contemporary spectroscopic surveys of low redshift galaxies. Covering an area of ˜286 deg2 (split among five survey regions) down to a limiting magnitude of r < 19.8 mag, we have collected spectra and reliable redshifts for 238 000 objects using the AAOmega spectrograph on the Anglo-Australian Telescope. In addition, we have assembled imaging data from a number of independent surveys in order to generate photometry spanning the wavelength range 1 nm-1 m. Here, we report on the recently completed spectroscopic survey and present a series of diagnostics to assess its final state and the quality of the redshift data. We also describe a number of survey aspects and procedures, or updates thereof, including changes to the input catalogue, redshifting and re-redshifting, and the derivation of ultraviolet, optical and near-infrared photometry. Finally, we present the second public release of GAMA data. In this release, we provide input catalogue and targeting information, spectra, redshifts, ultraviolet, optical and near-infrared photometry, single-component Sérsic fits, stellar masses, Hα-derived star formation rates, environment information, and group properties for all galaxies with r < 19.0 mag in two of our survey regions, and for all galaxies with r < 19.4 mag in a third region (72 225 objects in total). The data base serving these data is available at http://www.gama-survey.org/.

  4. Method and apparatus for eliminating coherent noise in a coherent energy imaging system without destroying spatial coherence

    NASA Technical Reports Server (NTRS)

    Shulman, A. R. (Inventor)

    1971-01-01

A method and apparatus for substantially eliminating noise in a coherent energy imaging system, and specifically in a light imaging system of the type having a coherent light source and at least one image lens disposed between an input signal plane and an output image plane, are discussed. The input signal plane is illuminated with the light source while the lens is rotated about its optical axis. In this manner, the energy density of coherent noise diffraction patterns produced by imperfections such as dust and/or bubbles on and/or in the lens is distributed over a ring-shaped area of the output image plane and reduced to a point where it can be ignored. The spatial filtering capability of the coherent imaging system is not affected by this noise elimination technique.

  5. Logarithmic profile mapping multi-scale Retinex for restoration of low illumination images

    NASA Astrophysics Data System (ADS)

    Shi, Haiyan; Kwok, Ngaiming; Wu, Hongkun; Li, Ruowei; Liu, Shilong; Lin, Ching-Feng; Wong, Chin Yeow

    2018-04-01

Images are valuable information sources for many scientific and engineering applications. However, images captured in poor illumination conditions have a large portion of dark regions that can heavily degrade image quality. In order to improve the quality of such images, a restoration algorithm is developed here that transforms the low input brightness to a higher value using a modified Multi-Scale Retinex approach. The algorithm is further improved by an entropy-based weighting of the input and the processed results to refine the necessary amplification at regions of low brightness. Moreover, fine details in the image are preserved by applying the Retinex principles to extract and then re-insert object edges to obtain an enhanced image. Results from experiments using low and normal illumination images have shown satisfactory performance with regard to the improvement in information content and the mitigation of viewing artifacts.
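The modified Multi-Scale Retinex of this record is not spelled out in the abstract, but the classic MSR core it builds on can be sketched in a few lines. The sketch below is a minimal NumPy version under assumed surround scales `sigmas=(2, 8, 32)`; the entropy-based weighting and edge re-insertion steps described above are omitted.

```python
import numpy as np

def gaussian_kernel(sigma):
    # 1-D Gaussian kernel truncated at 3 sigma, normalized to sum to 1
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def blur(img, sigma):
    # separable Gaussian blur: 1-D convolution along rows, then columns
    k = gaussian_kernel(sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def multi_scale_retinex(img, sigmas=(2, 8, 32)):
    # MSR: average over scales of log(image) - log(Gaussian surround)
    img = img.astype(float) + 1.0  # offset avoids log(0)
    return sum(np.log(img) - np.log(blur(img, s) + 1e-9) for s in sigmas) / len(sigmas)
```

The output is a log-domain contrast map that is typically rescaled back to display range; the weighting between this map and the original image is where the paper's entropy-based refinement would come in.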

  6. Development of a Multilayer MODIS IST-Albedo Product of Greenland

    NASA Technical Reports Server (NTRS)

    Hall, D. K.; Comiso, J. C.; Cullather, R. I.; Digirolamo, N. E.; Nowicki, S. M.; Medley, B. C.

    2017-01-01

    A new multilayer IST-albedo Moderate Resolution Imaging Spectroradiometer (MODIS) product of Greenland was developed to meet the needs of the ice sheet modeling community. The multiple layers of the product enable the relationship between IST and albedo to be evaluated easily. Surface temperature is a fundamental input for dynamical ice sheet models because it is a component of the ice sheet radiation budget and mass balance. Albedo influences absorption of incoming solar radiation. The daily product will combine the existing standard MODIS Collection-6 ice-surface temperature, derived melt maps, snow albedo and water vapor products. The new product is available in a polar stereographic projection in NetCDF format. The product will ultimately extend from March 2000 through the end of 2017.

  7. Residual Shuffling Convolutional Neural Networks for Deep Semantic Image Segmentation Using Multi-Modal Data

    NASA Astrophysics Data System (ADS)

    Chen, K.; Weinmann, M.; Gao, X.; Yan, M.; Hinz, S.; Jutzi, B.; Weinmann, M.

    2018-05-01

    In this paper, we address the deep semantic segmentation of aerial imagery based on multi-modal data. Given multi-modal data composed of true orthophotos and the corresponding Digital Surface Models (DSMs), we extract a variety of hand-crafted radiometric and geometric features which are provided separately and in different combinations as input to a modern deep learning framework. The latter is represented by a Residual Shuffling Convolutional Neural Network (RSCNN) combining the characteristics of a Residual Network with the advantages of atrous convolution and a shuffling operator to achieve a dense semantic labeling. Via performance evaluation on a benchmark dataset, we analyze the value of different feature sets for the semantic segmentation task. The derived results reveal that the use of radiometric features yields better classification results than the use of geometric features for the considered dataset. Furthermore, the consideration of data on both modalities leads to an improvement of the classification results. However, the derived results also indicate that the use of all defined features is less favorable than the use of selected features. Consequently, data representations derived via feature extraction and feature selection techniques still provide a gain if used as the basis for deep semantic segmentation.

  8. Deep neural networks for texture classification-A theoretical analysis.

    PubMed

    Basu, Saikat; Mukhopadhyay, Supratik; Karki, Manohar; DiBiano, Robert; Ganguly, Sangram; Nemani, Ramakrishna; Gayaka, Shreekant

    2018-01-01

We investigate the use of Deep Neural Networks for the classification of image datasets where texture features are important for generating class-conditional discriminative representations. To this end, we first derive the size of the feature space for some standard textural features extracted from the input dataset and then use the theory of Vapnik-Chervonenkis dimension to show that hand-crafted feature extraction creates low-dimensional representations which help in reducing the overall excess error rate. As a corollary to this analysis, we derive for the first time upper bounds on the VC dimension of Convolutional Neural Networks as well as Dropout and Dropconnect networks, and the relation between the excess error rates of Dropout and Dropconnect networks. The concept of intrinsic dimension is used to validate the intuition that texture-based datasets are inherently higher dimensional compared to handwritten digits or other object recognition datasets, and hence more difficult for neural networks to shatter. We then derive the mean distance from the centroid to the nearest and farthest sampling points in an n-dimensional manifold and show that the Relative Contrast of the sample data vanishes as the dimensionality of the underlying vector space tends to infinity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Using virtual data for training deep model for hand gesture recognition

    NASA Astrophysics Data System (ADS)

    Nikolaev, E. I.; Dvoryaninov, P. V.; Lensky, Y. Y.; Drozdovsky, N. S.

    2018-05-01

Deep learning has shown real promise for classification efficiency in hand gesture recognition problems. In this paper, the authors present experimental results for a deeply-trained model for hand gesture recognition through the use of hand images. The authors have trained two deep convolutional neural networks. The first architecture produces the hand position as a 2D-vector from an input hand image. The second one predicts the hand gesture class for the input image. The first proposed architecture produces state-of-the-art results with an accuracy rate of 89%, and the second architecture with split input produces an accuracy rate of 85.2%. In this paper, the authors also propose using virtual data for training a supervised deep model. This technique aims to avoid using original labelled images in the training process. The interest of this method in data preparation is motivated by the need to overcome one of the main challenges of deep supervised learning: using a copious amount of labelled data during training.

  10. On the Visual Input Driving Human Smooth-Pursuit Eye Movements

    NASA Technical Reports Server (NTRS)

    Stone, Leland S.; Beutter, Brent R.; Lorenceau, Jean

    1996-01-01

    Current computational models of smooth-pursuit eye movements assume that the primary visual input is local retinal-image motion (often referred to as retinal slip). However, we show that humans can pursue object motion with considerable accuracy, even in the presence of conflicting local image motion. This finding indicates that the visual cortical area(s) controlling pursuit must be able to perform a spatio-temporal integration of local image motion into a signal related to object motion. We also provide evidence that the object-motion signal that drives pursuit is related to the signal that supports perception. We conclude that current models of pursuit should be modified to include a visual input that encodes perceived object motion and not merely retinal image motion. Finally, our findings suggest that the measurement of eye movements can be used to monitor visual perception, with particular value in applied settings as this non-intrusive approach would not require interrupting ongoing work or training.

  11. Retrieval of land surface temperature (LST) from landsat TM6 and TIRS data by single channel radiative transfer algorithm using satellite and ground-based inputs

    NASA Astrophysics Data System (ADS)

    Chatterjee, R. S.; Singh, Narendra; Thapa, Shailaja; Sharma, Dravneeta; Kumar, Dheeraj

    2017-06-01

The present study proposes land surface temperature (LST) retrieval from satellite-based thermal IR data by a single-channel radiative transfer algorithm using atmospheric correction parameters derived from satellite-based and in-situ data, and land surface emissivity (LSE) derived by a hybrid LSE model. For example, atmospheric transmittance (τ) was derived from Terra MODIS spectral radiance in atmospheric window and absorption bands, whereas the atmospheric path radiance and sky radiance were estimated using satellite- and ground-based in-situ solar radiation, geographic location and observation conditions. The hybrid LSE model, which is coupled with ground-based emissivity measurements, is more versatile than previous LSE models and yields improved emissivity values through a knowledge-based approach. It uses NDVI-based and NDVI Threshold method (NDVITHM) based algorithms and field-measured emissivity values. The model is applicable to dense vegetation cover, mixed vegetation cover and bare earth, including coal-mining-related land surface classes. The study was conducted in a coalfield of India badly affected by coal fire for decades. In a coal fire affected coalfield, LST would provide the precise temperature difference between thermally anomalous coal fire pixels and background pixels to facilitate coal fire detection and monitoring. The derived LST products of the present study were compared with radiant temperature images across some of the prominent coal fire locations in the study area, both graphically and by standard mathematical dispersion coefficients (coefficient of variation, coefficient of quartile deviation, coefficient of quartile deviation for the 3rd quartile vs. maximum temperature, and coefficient of mean deviation about the median); these comparisons indicate a significant increase in the temperature difference among the pixels.
The average temperature slope between adjacent pixels, which increases the potential of coal fire pixel detection from background pixels, is significantly larger in the derived LST products than the corresponding radiant temperature images.

  12. Color constancy using bright-neutral pixels

    NASA Astrophysics Data System (ADS)

    Wang, Yanfang; Luo, Yupin

    2014-03-01

    An effective illuminant-estimation approach for color constancy is proposed. Bright and near-neutral pixels are selected to jointly represent the illuminant color and utilized for illuminant estimation. To assess the representing capability of pixels, bright-neutral strength (BNS) is proposed by combining pixel chroma and brightness. Accordingly, a certain percentage of pixels with the largest BNS is selected to be the representative set. For every input image, a proper percentage value is determined via an iterative strategy by seeking the optimal color-corrected image. To compare various color-corrected images of an input image, image color-cast degree (ICCD) is devised using means and standard deviations of RGB channels. Experimental evaluation on standard real-world datasets validates the effectiveness of the proposed approach.
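The abstract does not give the exact bright-neutral strength (BNS) formula, so the sketch below assumes a simple stand-in score (brightness minus chroma) to illustrate the overall pipeline: score pixels, keep a top percentage as the representative set, average them as the illuminant estimate, and apply a von Kries style diagonal correction. The iterative percentage selection via ICCD is omitted.

```python
import numpy as np

def estimate_illuminant(img, pct=5.0):
    # Score each pixel by brightness minus chroma (a stand-in for the
    # paper's BNS), keep the top `pct` percent as the representative set,
    # and average their RGB values as the illuminant estimate.
    rgb = img.reshape(-1, 3).astype(float)
    brightness = rgb.sum(axis=1)
    chroma = rgb.max(axis=1) - rgb.min(axis=1)
    score = brightness - chroma
    n = max(1, int(len(score) * pct / 100.0))
    top = rgb[np.argsort(score)[-n:]]
    illum = top.mean(axis=0)
    return illum / np.linalg.norm(illum)  # unit-norm illuminant color

def correct_image(img, illum):
    # von Kries style diagonal correction toward a neutral light
    gains = illum.mean() / illum
    return img.astype(float) * gains
```

In the actual method, several candidate percentages would be tried and the one minimizing the color-cast degree (ICCD) of the corrected image retained.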

  13. MMX-I: data-processing software for multimodal X-ray imaging and tomography.

    PubMed

    Bergamaschi, Antoine; Medjoubi, Kadda; Messaoudi, Cédric; Marco, Sergio; Somogyi, Andrea

    2016-05-01

A new multi-platform freeware has been developed for the processing and reconstruction of scanning multi-technique X-ray imaging and tomography datasets. The software platform aims to treat different scanning imaging techniques: X-ray fluorescence, phase, absorption and dark field, and any of their combinations, thus providing an easy-to-use data processing tool for the X-ray imaging user community. A dedicated data input stream copes with the input and management of large datasets (several hundred GB) collected during a typical multi-technique fast scan at the Nanoscopium beamline, even on a standard PC. To the authors' knowledge, this is the first software tool that aims at treating all of the modalities of scanning multi-technique imaging and tomography experiments.

  14. Dynamic Contrast-enhanced MR Imaging in Renal Cell Carcinoma: Reproducibility of Histogram Analysis on Pharmacokinetic Parameters

    PubMed Central

    Wang, Hai-yi; Su, Zi-hua; Xu, Xiao; Sun, Zhi-peng; Duan, Fei-xue; Song, Yuan-yuan; Li, Lu; Wang, Ying-wei; Ma, Xin; Guo, Ai-tao; Ma, Lin; Ye, Hui-yi

    2016-01-01

    Pharmacokinetic parameters derived from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) have been increasingly used to evaluate the permeability of tumor vessel. Histogram metrics are a recognized promising method of quantitative MR imaging that has been recently introduced in analysis of DCE-MRI pharmacokinetic parameters in oncology due to tumor heterogeneity. In this study, 21 patients with renal cell carcinoma (RCC) underwent paired DCE-MRI studies on a 3.0 T MR system. Extended Tofts model and population-based arterial input function were used to calculate kinetic parameters of RCC tumors. Mean value and histogram metrics (Mode, Skewness and Kurtosis) of each pharmacokinetic parameter were generated automatically using ImageJ software. Intra- and inter-observer reproducibility and scan–rescan reproducibility were evaluated using intra-class correlation coefficients (ICCs) and coefficient of variation (CoV). Our results demonstrated that the histogram method (Mode, Skewness and Kurtosis) was not superior to the conventional Mean value method in reproducibility evaluation on DCE-MRI pharmacokinetic parameters (K trans & Ve) in renal cell carcinoma, especially for Skewness and Kurtosis which showed lower intra-, inter-observer and scan-rescan reproducibility than Mean value. Our findings suggest that additional studies are necessary before wide incorporation of histogram metrics in quantitative analysis of DCE-MRI pharmacokinetic parameters. PMID:27380733
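The histogram metrics compared in this record (Mean, Mode, Skewness, Kurtosis) and the scan-rescan CoV can be written out compactly. The sketch below assumes the Fisher definition of kurtosis and one common within-subject CoV convention (SD of paired differences over the grand mean); the study's exact conventions are not stated in the abstract.

```python
import numpy as np

def histogram_metrics(values, bins=64):
    # Mean, Mode (midpoint of the tallest histogram bin), Skewness,
    # and excess Kurtosis (Fisher definition assumed) of a parameter map.
    values = np.asarray(values, float)
    hist, edges = np.histogram(values, bins=bins)
    peak = np.argmax(hist)
    mode = 0.5 * (edges[peak] + edges[peak + 1])
    mu, sd = values.mean(), values.std()
    skew = ((values - mu) ** 3).mean() / sd ** 3
    kurt = ((values - mu) ** 4).mean() / sd ** 4 - 3.0
    return {"mean": mu, "mode": mode, "skewness": skew, "kurtosis": kurt}

def coefficient_of_variation(scan, rescan):
    # Within-subject CoV: SD of paired differences relative to the grand
    # mean (one common convention; an assumption here).
    scan, rescan = np.asarray(scan, float), np.asarray(rescan, float)
    d = scan - rescan
    return np.sqrt((d ** 2).mean() / 2.0) / np.concatenate([scan, rescan]).mean()
```

Lower CoV (and higher ICC) indicates better scan-rescan reproducibility, which is the basis for the paper's comparison of Mean against the histogram shape metrics.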

  15. Sarnoff JND Vision Model for Flat-Panel Design

    NASA Technical Reports Server (NTRS)

    Brill, Michael H.; Lubin, Jeffrey

    1998-01-01

This document describes adaptation of the basic Sarnoff JND Vision Model created in response to the NASA/ARPA need for a general-purpose model to predict the perceived image quality attained by flat-panel displays. The JND model predicts the perceptual ratings that humans will assign to a degraded color-image sequence relative to its nondegraded counterpart. Substantial flexibility is incorporated into this version of the model so it may be used to model displays at the sub-pixel and sub-frame level. To model a display (e.g., an LCD), the input-image data can be sampled at many times the pixel resolution and at many times the digital frame rate. The first stage of the model downsamples each sequence in time and in space to physiologically reasonable rates, but with minimum interpolative artifacts and aliasing. Luma and chroma parts of the model generate (through a multi-resolution pyramid representation) a map of differences between test and reference called the JND map, from which a summary rating predictor is derived. The latest model extensions have done well in calibration against psychophysical data and against image-rating data given a CRT-based front-end. The software was delivered to NASA Ames and is being integrated with LCD display models at that facility.

  16. Hepatic function imaging using dynamic Gd-EOB-DTPA enhanced MRI and pharmacokinetic modeling.

    PubMed

    Ning, Jia; Yang, Zhiying; Xie, Sheng; Sun, Yongliang; Yuan, Chun; Chen, Huijun

    2017-10-01

    To determine whether pharmacokinetic modeling parameters with different output assumptions of dynamic contrast-enhanced MRI (DCE-MRI) using Gd-EOB-DTPA correlate with serum-based liver function tests, and compare the goodness of fit of the different output assumptions. A 6-min DCE-MRI protocol was performed in 38 patients. Four dual-input two-compartment models with different output assumptions and a published one-compartment model were used to calculate hepatic function parameters. The Akaike information criterion fitting error was used to evaluate the goodness of fit. Imaging-based hepatic function parameters were compared with blood chemistry using correlation with multiple comparison correction. The dual-input two-compartment model assuming venous flow equals arterial flow plus portal venous flow and no bile duct output better described the liver tissue enhancement with low fitting error and high correlation with blood chemistry. The relative uptake rate Kir derived from this model was found to be significantly correlated with direct bilirubin (r = -0.52, P = 0.015), prealbumin concentration (r = 0.58, P = 0.015), and prothrombin time (r = -0.51, P = 0.026). It is feasible to evaluate hepatic function by proper output assumptions. The relative uptake rate has the potential to serve as a biomarker of function. Magn Reson Med 78:1488-1495, 2017. © 2016 International Society for Magnetic Resonance in Medicine. © 2016 International Society for Magnetic Resonance in Medicine.

  17. Multi-modality image fusion based on enhanced fuzzy radial basis function neural networks.

    PubMed

    Chao, Zhen; Kim, Dohyeon; Kim, Hee-Joung

    2018-04-01

In clinical applications, single modality images do not provide sufficient diagnostic information. Therefore, it is necessary to combine the advantages or complementarities of different modalities of images. Recently, neural network techniques have been applied to medical image fusion by many researchers, but there are still many deficiencies. In this study, we propose a novel fusion method to combine multi-modality medical images based on the enhanced fuzzy radial basis function neural network (Fuzzy-RBFNN), which includes five layers: input, fuzzy partition, front combination, inference, and output. Moreover, we propose a hybrid of the gravitational search algorithm (GSA) and error back propagation algorithm (EBPA) to train the network and update its parameters. Two different patterns of images are used as inputs to the neural network, and the output is the fused image. A comparison with conventional fusion methods and another neural network method through subjective observation and objective evaluation indexes reveals that the proposed method effectively synthesized the information of the input images and achieved better results. Meanwhile, we also trained the network using the EBPA and GSA individually. The results reveal that the hybrid EBPGSA not only outperformed both EBPA and GSA, but also trained the neural network more accurately by the same evaluation indexes. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
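The radial basis function core of such a fusion network is simple to state. The toy sketch below (hypothetical centers, widths and output weights; the fuzzy partition and front-combination layers of the actual Fuzzy-RBFNN are not modeled) shows how a pair of co-registered pixel values from two modalities can be mapped through Gaussian RBF units to one fused intensity.

```python
import numpy as np

def rbf_layer(x, centers, widths):
    # Gaussian radial basis activations for one input vector x
    d2 = ((centers - x) ** 2).sum(axis=1)
    return np.exp(-d2 / (2.0 * widths ** 2))

def fuse_pixels(p1, p2, centers, widths, w_out):
    # Toy two-input fusion: RBF activations on the stacked pixel pair,
    # linearly combined into a single output intensity.
    x = np.array([p1, p2], dtype=float)
    h = rbf_layer(x, centers, widths)
    return float(h @ w_out)
```

In the paper, the output weights (and center/width parameters) would be learned by the EBPA/GSA hybrid rather than fixed by hand.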

  18. Staring 2-D Hadamard transform spectral imager

    DOEpatents

    Gentry, Stephen M [Albuquerque, NM; Wehlburg, Christine M [Albuquerque, NM; Wehlburg, Joseph C [Albuquerque, NM; Smith, Mark W [Albuquerque, NM; Smith, Jody L [Albuquerque, NM

    2006-02-07

A staring imaging system inputs a 2D spatial image containing multi-frequency spectral information. This image is encoded in one dimension with a cyclic Hadamard S-matrix. The resulting image is detected with a spatial 2D detector, and a computer applies a Hadamard transform to recover the encoded image.
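The S-matrix encoding and its exact inverse can be illustrated numerically. The sketch below builds an order-7 S-matrix from a Sylvester Hadamard matrix and uses the standard inverse S⁻¹ = 2/(n+1)·(2Sᵀ − J); the spectral weights `x` are hypothetical stand-ins for the summed intensities a real instrument would measure.

```python
import numpy as np

def s_matrix(m):
    # Build an S-matrix of order 2**m - 1 from a Sylvester Hadamard matrix:
    # drop the all-ones row and column, then map +1 -> 0 (open) and -1 -> 1 (mask).
    H = np.array([[1]])
    for _ in range(m):
        H = np.block([[H, H], [H, -H]])
    core = H[1:, 1:]
    return ((1 - core) // 2).astype(int)

def s_inverse(S):
    # Exact inverse of an S-matrix: 2/(n+1) * (2*S^T - J)
    n = S.shape[0]
    return (2.0 / (n + 1)) * (2 * S.T - np.ones((n, n)))

S = s_matrix(3)                     # 7x7 binary mask matrix
x = np.arange(1, 8, dtype=float)    # hypothetical spectral channel weights
y = S @ x                           # encoded (multiplexed) measurements
x_rec = s_inverse(S) @ y            # decoding recovers the input exactly
```

The multiplex advantage comes from each measurement summing roughly half the channels, improving SNR over scanning one channel at a time.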

  19. A manual for microcomputer image analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rich, P.M.; Ranken, D.M.; George, J.S.

    1989-12-01

This manual is intended to serve three basic purposes: as a primer in microcomputer image analysis theory and techniques, as a guide to the use of IMAGE©, a public domain microcomputer program for image analysis, and as a stimulus to encourage programmers to develop microcomputer software suited for scientific use. Topics discussed include the principles of image processing and analysis, use of standard video for input and display, spatial measurement techniques, and the future of microcomputer image analysis. A complete reference guide that lists the commands for IMAGE is provided. IMAGE includes capabilities for digitization, input and output of images, hardware display lookup table control, editing, edge detection, histogram calculation, measurement along lines and curves, measurement of areas, examination of intensity values, output of analytical results, conversion between raster and vector formats, and region movement and rescaling. The control structure of IMAGE emphasizes efficiency, precision of measurement, and scientific utility. 18 refs., 18 figs., 2 tabs.

  20. A mathematical model of neuro-fuzzy approximation in image classification

    NASA Astrophysics Data System (ADS)

    Gopalan, Sasi; Pinto, Linu; Sheela, C.; Arun Kumar M., N.

    2016-06-01

Image digitization and the explosion of the World Wide Web have made traditional search an inefficient method for retrieving required grassland image data from a large database. For a given input query image, a Content-Based Image Retrieval (CBIR) system retrieves similar images from a large database. Advances in technology have increased the use of grassland image data in diverse areas such as agriculture, art galleries, education and industry. In all of these areas it is necessary to retrieve grassland image data efficiently from a large database in order to perform an assigned task and make a suitable decision. A CBIR system based on grassland image properties, which uses a feed-forward back-propagation neural network for effective image retrieval, is proposed in this paper. Fuzzy memberships play an important role in the input space of the proposed system, which leads to a combined neuro-fuzzy approximation in image classification. The mathematical model in the proposed work gives more clarity about the fuzzy-neuro approximation and the convergence of the image features in a grassland image.

  1. Automated reference-free detection of motion artifacts in magnetic resonance images.

    PubMed

    Küstner, Thomas; Liebgott, Annika; Mauch, Lukas; Martirosian, Petros; Bamberg, Fabian; Nikolaou, Konstantin; Yang, Bin; Schick, Fritz; Gatidis, Sergios

    2018-04-01

    Our objectives were to provide an automated method for spatially resolved detection and quantification of motion artifacts in MR images of the head and abdomen as well as a quality control of the trained architecture. T1-weighted MR images of the head and the upper abdomen were acquired in 16 healthy volunteers under rest and under motion. Images were divided into overlapping patches of different sizes achieving spatial separation. Using these patches as input data, a convolutional neural network (CNN) was trained to derive probability maps for the presence of motion artifacts. A deep visualization offers a human-interpretable quality control of the trained CNN. Results were visually assessed on probability maps and as classification accuracy on a per-patch, per-slice and per-volunteer basis. On visual assessment, a clear difference of probability maps was observed between data sets with and without motion. The overall accuracy of motion detection on a per-patch/per-volunteer basis reached 97%/100% in the head and 75%/100% in the abdomen, respectively. Automated detection of motion artifacts in MRI is feasible with good accuracy in the head and abdomen. The proposed method provides quantification and localization of artifacts as well as a visualization of the learned content. It may be extended to other anatomic areas and used for quality assurance of MR images.
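The spatial separation step described above (dividing images into overlapping patches that are classified independently) can be sketched directly. Patch size and stride below are illustrative values, not the study's parameters.

```python
import numpy as np

def extract_patches(img, size, stride):
    # Overlapping patch grid for spatially resolved classification;
    # returns the patch stack and each patch's top-left coordinate,
    # so per-patch probabilities can be mapped back onto the image.
    H, W = img.shape
    patches, coords = [], []
    for i in range(0, H - size + 1, stride):
        for j in range(0, W - size + 1, stride):
            patches.append(img[i:i + size, j:j + size])
            coords.append((i, j))
    return np.stack(patches), coords
```

Each patch would then be fed to the CNN, and the per-patch artifact probabilities accumulated into a probability map over the overlapping regions.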

  2. Infrared dim and small target detecting and tracking method inspired by Human Visual System

    NASA Astrophysics Data System (ADS)

    Dong, Xiabin; Huang, Xinsheng; Zheng, Yongbin; Shen, Lurong; Bai, Shengjian

    2014-01-01

Detecting and tracking dim and small targets in infrared images and videos is one of the most important techniques in many computer vision applications, such as video surveillance and precise infrared imaging guidance. Recently, more and more algorithms based on the Human Visual System (HVS) have been proposed to detect and track infrared dim and small targets. In general, the HVS involves at least three mechanisms: a contrast mechanism, visual attention and eye movement. However, most existing algorithms simulate only a single one of these mechanisms, resulting in many drawbacks. A novel method which combines the three mechanisms of the HVS is proposed in this paper. First, a group of Difference of Gaussians (DOG) filters which simulate the contrast mechanism are used to filter the input image. Second, visual attention, simulated by a Gaussian window, is added at a point near the target in order to further enhance the dim small target. This point is named the attention point. Eventually, the Proportional-Integral-Derivative (PID) algorithm is first introduced to predict the attention point of the next frame, simulating human eye movement. Experimental results on infrared images with different types of backgrounds demonstrate the high efficiency and accuracy of the proposed method in detecting and tracking dim and small targets.
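The DOG contrast stage of such a detector is easy to sketch: a center Gaussian minus a surround Gaussian gives a kernel that responds strongly to small bright blobs and suppresses smooth background. The scales and kernel size below are illustrative, not the paper's values, and a plain nested-loop correlation is used to avoid any dependency beyond NumPy.

```python
import numpy as np

def gaussian2d(sigma, size):
    # Normalized 2-D Gaussian kernel of the given side length
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()

def dog_response(img, sigma_center=1.0, sigma_surround=3.0, size=15):
    # Difference-of-Gaussians contrast map ('valid' region only):
    # small bright targets give a strong positive response.
    k = gaussian2d(sigma_center, size) - gaussian2d(sigma_surround, size)
    H, W = img.shape
    out = np.zeros((H - size + 1, W - size + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + size, j:j + size] * k)
    return out
```

In the full method, this response map would be modulated by a Gaussian attention window around the predicted attention point before thresholding.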

  3. Effects of control inputs on the estimation of stability and control parameters of a light airplane

    NASA Technical Reports Server (NTRS)

    Cannaday, R. L.; Suit, W. T.

    1977-01-01

The maximum likelihood parameter estimation technique was used to determine the values of stability and control derivatives from flight test data for a low-wing, single-engine, light airplane. Several input forms were used during the tests to investigate the consistency of parameter estimates as it relates to inputs. These consistencies were compared by using the ensemble variance and estimated Cramer-Rao lower bound. In addition, the relationship between inputs and parameter correlations was investigated. Results from the stabilator inputs are inconclusive, but the sequence of rudder input followed by aileron input, or aileron followed by rudder, gave more consistent estimates than did rudder or aileron inputs individually. Also, square-wave inputs appeared to provide slightly improved consistency in the parameter estimates when compared to sine-wave inputs.

  4. Sources of plant-derived carbon and stability of organic matter in soil: Implications for global change

    Treesearch

    Susan E. Crow; Kate Lajtha; Timothy R. Filley; Chris Swanston; Richard D. Bowden; Bruce A. Caldwell

    2009-01-01

    Alterations in forest productivity and changes in the relative proportion of above- and belowground biomass may have nonlinear effects on soil organic matter (SOM) storage. To study the influence of plant litter inputs on SOM accumulation, the Detritus Input Removal and Transfer (DIRT) Experiment continuously alters above- and belowground plant inputs to soil by a...

  5. Assessing the skeletal age from a hand radiograph: automating the Tanner-Whitehouse method

    NASA Astrophysics Data System (ADS)

    Niemeijer, Meindert; van Ginneken, Bram; Maas, Casper A.; Beek, Frederik J. A.; Viergever, Max A.

    2003-05-01

    The skeletal maturity of children is usually assessed from a standard radiograph of the left hand and wrist. An established clinical method to determine the skeletal maturity is the Tanner-Whitehouse (TW2) method. This method divides the skeletal development into several stages (labelled A, B, ...,I). We are developing an automated system based on this method. In this work we focus on assigning a stage to one region of interest (ROI), the middle phalanx of the third finger. We classify each ROI as follows. A number of ROIs which have been assigned a certain stage by a radiologist are used to construct a mean image for that stage. For a new input ROI, landmarks are detected by using an Active Shape Model. These are used to align the mean images with the input image. Subsequently the correlation between each transformed mean stage image and the input is calculated. The input ROI can be assigned to the stage with the highest correlation directly, or the values can be used as features in a classifier. The method was tested on 71 cases ranging from stage E to I. The ROI was staged correctly in 73.2% of all cases and in 97.2% of all incorrectly staged cases the error was not more than one stage.
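The direct-assignment variant described above (assign the stage whose aligned mean template correlates best with the ROI) amounts to picking the maximum Pearson correlation over stage templates. The sketch below assumes the ROI is already aligned to the templates via the Active Shape Model landmarks; the stage labels and template contents are placeholders.

```python
import numpy as np

def stage_by_correlation(roi, stage_templates):
    # Assign the TW2 stage whose mean template image is most correlated
    # with the (already aligned) input ROI. Correlation is computed as
    # the mean product of z-scored pixel vectors (Pearson r).
    roi = roi.ravel().astype(float)
    roi = (roi - roi.mean()) / roi.std()
    best, best_r = None, -2.0
    for stage, tmpl in stage_templates.items():
        t = tmpl.ravel().astype(float)
        t = (t - t.mean()) / t.std()
        r = float(np.mean(roi * t))
        if r > best_r:
            best, best_r = stage, r
    return best, best_r
```

Alternatively, as the abstract notes, the per-stage correlation values can be kept as a feature vector and fed to a downstream classifier instead of taking the argmax directly.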

  6. Automated discrimination of dicentric and monocentric chromosomes by machine learning-based image processing.

    PubMed

    Li, Yanxin; Knoll, Joan H; Wilkins, Ruth C; Flegal, Farrah N; Rogan, Peter K

    2016-05-01

    Dose from radiation exposure can be estimated from dicentric chromosome (DC) frequencies in metaphase cells of peripheral blood lymphocytes. We automated DC detection by extracting features in Giemsa-stained metaphase chromosome images and classifying objects by machine learning (ML). DC detection involves (i) intensity thresholded segmentation of metaphase objects, (ii) chromosome separation by watershed transformation and elimination of inseparable chromosome clusters, fragments and staining debris using a morphological decision tree filter, (iii) determination of chromosome width and centreline, (iv) derivation of centromere candidates, and (v) distinction of DCs from monocentric chromosomes (MC) by ML. Centromere candidates are inferred from 14 image features input to a Support Vector Machine (SVM). Sixteen features derived from these candidates are then supplied to a Boosting classifier and a second SVM which determines whether a chromosome is either a DC or MC. The SVM was trained with 292 DCs and 3135 MCs, and then tested with cells exposed to either low (1 Gy) or high (2-4 Gy) radiation dose. Results were then compared with those of 3 experts. True positive rates (TPR) and positive predictive values (PPV) were determined for the tuning parameter, σ. At larger σ, PPV decreases and TPR increases. At high dose, for σ = 1.3, TPR = 0.52 and PPV = 0.83, while at σ = 1.6, the TPR = 0.65 and PPV = 0.72. At low dose and σ = 1.3, TPR = 0.67 and PPV = 0.26. The algorithm differentiates DCs from MCs, overlapped chromosomes and other objects with acceptable accuracy over a wide range of radiation exposures. © 2016 Wiley Periodicals, Inc.
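The two accuracy measures reported above are standard confusion-matrix quantities; for reference, from true-positive, false-positive and false-negative counts they are computed as follows (the counts in the usage example are illustrative, not the study's data).

```python
def tpr_ppv(tp, fp, fn):
    # True positive rate (sensitivity): fraction of actual dicentrics found.
    # Positive predictive value (precision): fraction of detections that
    # are actual dicentrics.
    tpr = tp / (tp + fn)
    ppv = tp / (tp + fp)
    return tpr, ppv
```

Raising the tuning parameter σ trades PPV for TPR, which is exactly the pattern reported in the abstract.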

  7. Logarithmic r-θ mapping for hybrid optical neural network filter for multiple objects recognition within cluttered scenes

    NASA Astrophysics Data System (ADS)

    Kypraios, Ioannis; Young, Rupert C. D.; Chatwin, Chris R.; Birch, Phil M.

    2009-04-01

The window unit in the design of the complex logarithmic r-θ mapping for the hybrid optical neural network filter allows multiple objects of the same class to be detected within the input image. Additionally, the architecture of the neural network unit of the complex logarithmic r-θ mapping for the hybrid optical neural network filter becomes attractive for accommodating the recognition of multiple objects of different classes within the input image by modifying the output layer of the unit. We test the overall filter on the recognition of multiple objects of the same and of different classes within cluttered input images and video sequences of cluttered scenes. The logarithmic r-θ mapping for the hybrid optical neural network filter is shown to exhibit, with a single pass over the input data, simultaneous in-plane rotation, out-of-plane rotation, scale, log r-θ map translation and shift invariance, and good clutter tolerance, correctly recognizing the different objects within the cluttered scenes. We record in our results additional information extracted from the cluttered scenes about the objects' relative position, scale and in-plane rotation.

  8. Synaptic inputs from stroke-injured brain to grafted human stem cell-derived neurons activated by sensory stimuli.

    PubMed

    Tornero, Daniel; Tsupykov, Oleg; Granmo, Marcus; Rodriguez, Cristina; Grønning-Hansen, Marita; Thelin, Jonas; Smozhanik, Ekaterina; Laterza, Cecilia; Wattananit, Somsak; Ge, Ruimin; Tatarishvili, Jemal; Grealish, Shane; Brüstle, Oliver; Skibo, Galina; Parmar, Malin; Schouenborg, Jens; Lindvall, Olle; Kokaia, Zaal

    2017-03-01

    Transplanted neurons derived from stem cells have been proposed to improve function in animal models of human disease by various mechanisms such as neuronal replacement. However, whether the grafted neurons receive functional synaptic inputs from the recipient's brain and integrate into host neural circuitry is unknown. Here we studied the synaptic inputs from the host brain to grafted cortical neurons derived from human induced pluripotent stem cells after transplantation into stroke-injured rat cerebral cortex. Using the rabies virus-based trans-synaptic tracing method and immunoelectron microscopy, we demonstrate that the grafted neurons receive direct synaptic inputs from neurons in different host brain areas located in a pattern similar to that of neurons projecting to the corresponding endogenous cortical neurons in the intact brain. Electrophysiological in vivo recordings from the cortical implants show that physiological sensory stimuli, i.e. cutaneous stimulation of nose and paw, can activate or inhibit spontaneous activity in grafted neurons, indicating that at least some of the afferent inputs are functional. In agreement, we find using patch-clamp recordings that a portion of grafted neurons respond to photostimulation of virally transfected, channelrhodopsin-2-expressing thalamo-cortical axons in acute brain slices. The present study demonstrates, for the first time, that the host brain regulates the activity of grafted neurons, providing strong evidence that transplanted human induced pluripotent stem cell-derived cortical neurons can become incorporated into injured cortical circuitry. Our findings support the idea that these neurons could contribute to functional recovery in stroke and other conditions causing neuronal loss in cerebral cortex. © The Author (2017). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  9. An algorithm to estimate building heights from Google street-view imagery using single view metrology across a representational state transfer system

    NASA Astrophysics Data System (ADS)

    Díaz, Elkin; Arguello, Henry

    2016-05-01

    Urban ecosystem studies require monitoring, controlling and planning to analyze building density, urban density, urban planning, atmospheric modeling and land use. In urban planning, there are many methods for building height estimation using optical remote sensing images. These methods, however, depend strongly on sun illumination and cloud-free weather. In contrast, high-resolution synthetic aperture radar provides images independent of daytime and weather conditions, although these images require special hardware and expensive acquisition. Most of the biggest cities around the world have been photographed by Google Street View under different conditions; thus, thousands of images from the principal streets of a city can be accessed online. The availability of this and similar rich city imagery, such as StreetSide from Microsoft, represents a huge opportunity in computer vision, because these images can be used as input in many applications such as 3D modeling, segmentation, recognition and stereo correspondence. This paper proposes a novel algorithm to estimate building heights using public Google Street View imagery. The objective of this work is to obtain thousands of geo-referenced images from Google Street View using a representational state transfer system, and to estimate average building heights using single view metrology. Furthermore, the resulting measurements and image metadata are used to derive a layer of heights in a Google map available online. The experimental results show that the proposed algorithm can estimate an accurate average building height map from thousands of Google Street View images of any city.
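    In its simplest form, single view metrology from a street-level camera reduces to trigonometry on the known camera height and the angles subtended by the building's base and top. The function below is a toy sketch of that geometric idea only; the paper's actual algorithm also exploits vanishing points and Street View image metadata.

```python
import math

def building_height(cam_h, ang_base_deg, ang_top_deg):
    """Estimate a building's height from a single street-level view.

    cam_h        : camera height above the ground plane (m), assumed known
    ang_base_deg : angle below the horizon to the building's base
    ang_top_deg  : angle above the horizon to the building's top
    """
    # Ground distance to the facade from the depression angle to the base.
    d = cam_h / math.tan(math.radians(ang_base_deg))
    # Height = camera height plus the rise subtended by the elevation angle.
    return cam_h + d * math.tan(math.radians(ang_top_deg))
```

    For example, with the camera 2.5 m above ground and both angles at 45°, the facade is 2.5 m away and the estimated height is 5.0 m.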

  10. Geomechanical Anisotropy and Rock Fabric in Shales

    NASA Astrophysics Data System (ADS)

    Huffman, K. A.; Connolly, P.; Thornton, D. A.

    2017-12-01

    Digital rock physics (DRP) is an emerging area of qualitative and quantitative scientific analysis that has been employed on a variety of rock types at various scales to characterize petrophysical, mechanical, and hydraulic rock properties. This contribution presents a generic, geomechanically focused DRP workflow involving image segmentation by geomechanical constituents, generation of finite element (FE) meshes, and application of various boundary conditions (i.e. at the edge of the domain and at boundaries of various components, such as edges of individual grains). The generic workflow enables the use of constituent geological objects and relationships in a computation-based approach to address specific questions in a variety of rock types at various scales. Two examples are 1) modeling stress-dependent permeability, where and why it occurs at the grain scale; and 2) simulating the path and complexity of primary fractures and matrix damage in materials with minerals or intervals of different mechanical behavior. Geomechanical properties and fabric characterization obtained from 100-micron shale SEM images using the generic DRP workflow are presented. Image segmentation and development of FE simulations composed of relatively simple components (elastic materials, frictional contacts) and boundary conditions enable the determination of bulk static elastic properties. The procedure is repeated for co-located images at pertinent orientations to determine mechanical anisotropy. The static moduli obtained are benchmarked against lab-derived measurements, since material properties (especially frictional ones) are poorly constrained at the scale of investigation. Once confidence in the input material parameters is gained, the procedure can be used to characterize more samples (i.e. images) than is possible from rock samples alone. 
Integration of static elastic properties with grain statistics and geologic (facies) conceptual models derived from core and geophysical logs enables quantification of the impact that variations in rock fabric and grain interactions have on bulk mechanical rock behavior. When considered in terms of the stratigraphic framework of two different shale reservoirs it is found that silica distribution, clay content and orientation play a first order role in mechanical anisotropy.

  11. Image processing and recognition for biological images

    PubMed Central

    Uchida, Seiichi

    2013-01-01

    This paper reviews image processing and pattern recognition techniques that are useful for analyzing bioimages. Although the paper does not cover their technical details, it conveys their main tasks and the typical tools used to handle them. Image processing is a large research area concerned with improving the visibility of an input image and acquiring valuable information from it. As the main tasks of image processing, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow and image registration. Image pattern recognition, the technique of classifying an input image into one of a set of predefined classes, is also a large research area. This paper overviews its two main modules, namely the feature extraction module and the classification module. Throughout the paper, it is emphasized that bioimages are a very difficult target for even state-of-the-art image processing and pattern recognition techniques, due to noise, deformation, etc. This paper is intended as a tutorial guide to bridge biology and image processing researchers for further collaboration in tackling such a difficult target. PMID:23560739
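    As a concrete illustration of the binarization task mentioned above, an Otsu-style threshold, which maximizes the between-class variance of the gray-level histogram, can be written in a few lines (a didactic sketch, not drawn from the paper itself):

```python
import numpy as np

def otsu_threshold(img):
    # Exhaustively search the gray level that maximizes between-class variance.
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                      # gray-level probabilities
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()      # class weights
        if w0 == 0 or w1 == 0:
            continue
        m0 = (np.arange(t) * p[:t]).sum() / w0         # class means
        m1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2         # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# A toy bimodal "image": dark background at 10, bright objects at 200.
img = np.full((32, 32), 10, dtype=np.uint8)
img[8:24, 8:24] = 200
t = otsu_threshold(img)
```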

  12. Using a C4 Invasive Grass to Isolate the Role of Detrital Carbon versus Rhizodeposit Carbon in Supplying Soil Carbon Pools

    NASA Astrophysics Data System (ADS)

    Sokol, N.; Bradford, M.

    2016-12-01

    Plant inputs are the primary sources of carbon (C) to soil organic carbon (SOC) pools. Historically, detrital plant sources were thought to dominate C supply to SOC pools. An emerging body of research highlights the previously underestimated role of root exudates and other rhizodeposits. However, few experimental field studies have directly tracked the relative contributions of rhizodeposits versus detrital C inputs into different SOC pools, because these contributions are methodologically challenging to measure in a field setting. Here, I present the first 3 years of data from an experimental field study of the prolific, C4 invasive grass species Microstegium vimineum. I use its unique isotopic signature in plots manipulated to contain detrital-only and rhizodeposit-only inputs to track their relative contributions to microbial biomass C, particulate organic C (POC; >53 um) and mineral-associated organic C (MIN C; <53 um) soil pools. After 3 years, the presence of M. vimineum significantly affected both total SOC and the proportion of M. vimineum-derived C in POC pools. Both detrital inputs and rhizodeposit inputs from M. vimineum caused an increase in total SOC: total SOC was 38% greater in detrital-only plots compared to control plots, and 39% greater in rhizodeposit-only plots compared to control plots. The proportion of M. vimineum-derived C in the POC pool was 32% greater in rhizodeposit-only plots compared to detrital-only plots. The proportion of M. vimineum-derived C in the MIN C pool was not significantly different between treatments (at p<0.05). Microbial biomass was highest in rhizodeposit-only plots (p=0.03). Overall, plots containing rhizodeposit-only inputs contributed more Microstegium-derived C than did plots containing detrital-only inputs. 
    While this observation is consistent with emerging theory on the primacy of the belowground, root-associated pathway in supplying C to soil C pools, this increase is generally assumed to occur through the MIN C pool, due to 1) the lower molecular weight of rhizodeposit compounds, and 2) the close physical association between rhizodeposits and soil mineral surfaces. Our results instead point to an underappreciated, central role of the POC pool as a passageway for both detrital and rhizodeposit C inputs to the soil.

  13. Auto and hetero-associative memory using a 2-D optical logic gate

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin (Inventor)

    1992-01-01

    An optical system for auto-associative and hetero-associative recall utilizing Hamming distance as the similarity measure between a binary input image vector V(sup k) and a binary image vector V(sup m) in a first memory array using an optical Exclusive-OR gate for multiplication of each of a plurality of different binary image vectors in memory by the input image vector. After integrating the light of each product V(sup k) x V(sup m), a shortest Hamming distance detection electronics module determines which product has the lowest light intensity and emits a signal that activates a light emitting diode to illuminate a corresponding image vector in a second memory array for display. That corresponding image vector is identical to the memory image vector V(sup m) in the first memory array for auto-associative recall or related to it, such as by name, for hetero-associative recall.
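    The electronic analogue of this optical recall scheme is straightforward: XOR the binary input vector with each stored vector, sum the resulting bits (corresponding to the integrated light of each product), and select the memory with the smallest Hamming distance. A minimal sketch:

```python
import numpy as np

def recall_index(v_in, memory):
    # XOR plays the role of the optical Exclusive-OR gate; the bit count of
    # each product is the integrated light intensity, i.e. Hamming distance.
    dists = [np.logical_xor(v_in, m).sum() for m in memory]
    return int(np.argmin(dists))   # shortest-Hamming-distance detection

memory = np.array([[0, 1, 1, 0, 1],
                   [1, 1, 0, 0, 0],
                   [0, 0, 1, 1, 1]], dtype=bool)

# memory[0] with its last bit flipped still recalls memory vector 0.
noisy = np.array([0, 1, 1, 0, 0], dtype=bool)
best = recall_index(noisy, memory)
```

    For auto-associative recall the selected index displays the stored vector itself; for hetero-associative recall it indexes a related vector in a second memory array.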

  14. Adapting radiotherapy to hypoxic tumours

    NASA Astrophysics Data System (ADS)

    Malinen, Eirik; Søvik, Åste; Hristov, Dimitre; Bruland, Øyvind S.; Rune Olsen, Dag

    2006-10-01

    In the current work, the concepts of biologically adapted radiotherapy of hypoxic tumours in a framework encompassing functional tumour imaging, tumour control predictions, inverse treatment planning and intensity modulated radiotherapy (IMRT) were presented. Dynamic contrast enhanced magnetic resonance imaging (DCEMRI) of a spontaneous sarcoma in the nasal region of a dog was employed. The tracer concentration in the tumour was assumed related to the oxygen tension and compared to Eppendorf histograph measurements. Based on the pO2-related images derived from the MR analysis, the tumour was divided into four compartments by a segmentation procedure. DICOM structure sets for IMRT planning could be derived thereof. In order to display the possible advantages of non-uniform tumour doses, dose redistribution among the four tumour compartments was introduced. The dose redistribution was constrained by keeping the average dose to the tumour equal to a conventional target dose. The compartmental doses yielding optimum tumour control probability (TCP) were used as input in an inverse planning system, where the planning basis was the pO2-related tumour images from the MR analysis. Uniform (conventional) and non-uniform IMRT plans were scored both physically and biologically. The consequences of random and systematic errors in the compartmental images were evaluated. The normalized frequency distributions of the tracer concentration and the pO2 Eppendorf measurements were not significantly different. 28% of the tumour had, according to the MR analysis, pO2 values of less than 5 mm Hg. The optimum TCP following a non-uniform dose prescription was about four times higher than that following a uniform dose prescription. The non-uniform IMRT dose distribution resulting from the inverse planning gave a three times higher TCP than that of the uniform distribution. 
The TCP and the dose-based plan quality depended on IMRT parameters defined in the inverse planning procedure (fields and step-and-shoot intensity levels). Simulated random and systematic errors in the pO2-related images reduced the TCP for the non-uniform dose prescription. In conclusion, improved tumour control of hypoxic tumours by dose redistribution may be expected following hypoxia imaging, tumour control predictions, inverse treatment planning and IMRT.
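    The benefit of dose redistribution under a fixed mean tumour dose can be illustrated with a simple Poisson TCP model. The clonogen numbers, radiosensitivities and the hypoxia-reduced α below are illustrative assumptions, not values from the study:

```python
import numpy as np

def tcp(doses, clonogens, alphas):
    # Poisson TCP: probability that no clonogen survives, with a linear
    # cell-kill model SF = exp(-alpha * D) per compartment.
    sf = np.exp(-np.asarray(alphas) * np.asarray(doses))
    return float(np.exp(-(np.asarray(clonogens) * sf).sum()))

N      = [1e5, 1e5]     # clonogens per compartment (assumed)
alphas = [0.35, 0.13]   # well-oxygenated vs hypoxic radiosensitivity (1/Gy)

# Equal-volume compartments, so both plans have the same mean dose of 70 Gy.
tcp_uniform = tcp([70.0, 70.0], N, alphas)   # conventional prescription
tcp_redist  = tcp([50.0, 90.0], N, alphas)   # hypoxic subvolume boosted
```

    With these toy numbers, shifting dose toward the radioresistant hypoxic compartment raises TCP markedly, which is the qualitative effect the study reports.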

  15. A practical introduction to skeletons for the plant sciences

    PubMed Central

    Bucksch, Alexander

    2014-01-01

    Before the availability of digital photography resulting from the invention of charge-coupled devices in 1969, the measurement of plant architecture was a manual process, performed either on the plant itself or on traditional photographs. The introduction of cheap digital imaging devices for the consumer market enabled the wide use of digital images to capture the shape of plant networks such as roots, tree crowns, or leaf venation. Plant networks contain geometric traits that can establish links to genetic or physiological characteristics, support plant breeding efforts, drive evolutionary studies, or serve as input to plant growth simulations. Typically, traits are encoded in shape descriptors that are computed from imaging data. Skeletons are one class of shape descriptors that are used to describe the hierarchies and extent of branching and looping plant networks. While the mathematical understanding of skeletons is well developed, their application within the plant sciences remains challenging because the quality of the measurement depends partly on the interpretation of the skeleton. This article is meant to bridge the skeletonization literature in the plant sciences and related technical fields by discussing best practices for deriving diameters and approximating branching hierarchies in a plant network. PMID:25202645

  16. Improvement in detection of small wildfires

    NASA Astrophysics Data System (ADS)

    Sleigh, William J.

    1991-12-01

    Detecting and imaging small wildfires with an Airborne Scanner is done against generally high background levels. The Airborne Scanner System used is a two-channel thermal IR scanner, with one channel selected for imaging the terrain and the other channel sensitive to hotter targets. If a relationship can be determined between the two channels that quantifies the background signal for hotter targets, then an algorithm can be determined that removes the background signal in that channel leaving only the fire signal. The relationship can be determined anywhere between various points in the signal processing of the radiometric data from the radiometric input to the quantized output of the system. As long as only linear operations are performed on the signal, the relationship will only depend on the system gain and offsets within the range of interest. The algorithm can be implemented either by using a look-up table or performing the calculation in the system computer. The current presentation will describe the algorithm, its derivation, and its implementation in the Firefly Wildfire Detection System by means of an off-the-shelf commercial scanner. Improvement over the previous algorithm used and the margin gained for improving the imaging of the terrain will be demonstrated.
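    The background-removal idea can be sketched numerically: fit the (assumed linear) relationship between the two channels, predict the background in the hot channel from the terrain channel, and subtract it so that only the fire signal remains. The data, gain and offset below are synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
terrain = rng.uniform(50, 150, size=1000)        # terrain-imaging channel
background = 0.4 * terrain + 10                  # assumed linear relation
hot = background + rng.normal(0, 0.5, size=1000) # hot-target channel
hot[::100] += 40.0                               # inject 10 small "fires"

# Only linear operations act on the signal, so the relationship depends
# only on system gain and offset, recoverable by a least-squares line fit.
gain, offset = np.polyfit(terrain, hot, 1)
fire_signal = hot - (gain * terrain + offset)    # background removed
detections = fire_signal > 10.0
```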

  17. Improvement in detection of small wildfires

    NASA Technical Reports Server (NTRS)

    Sleigh, William J.

    1991-01-01

    Detecting and imaging small wildfires with an Airborne Scanner is done against generally high background levels. The Airborne Scanner System used is a two-channel thermal IR scanner, with one channel selected for imaging the terrain and the other channel sensitive to hotter targets. If a relationship can be determined between the two channels that quantifies the background signal for hotter targets, then an algorithm can be determined that removes the background signal in that channel leaving only the fire signal. The relationship can be determined anywhere between various points in the signal processing of the radiometric data from the radiometric input to the quantized output of the system. As long as only linear operations are performed on the signal, the relationship will only depend on the system gain and offsets within the range of interest. The algorithm can be implemented either by using a look-up table or performing the calculation in the system computer. The current presentation will describe the algorithm, its derivation, and its implementation in the Firefly Wildfire Detection System by means of an off-the-shelf commercial scanner. Improvement over the previous algorithm used and the margin gained for improving the imaging of the terrain will be demonstrated.

  18. Application of tolerance limits to the characterization of image registration performance.

    PubMed

    Fedorov, Andriy; Wells, William M; Kikinis, Ron; Tempany, Clare M; Vangel, Mark G

    2014-07-01

    Deformable image registration is used increasingly in image-guided interventions and other applications. However, validation and characterization of registration performance remain areas that require further study. We propose an analysis methodology for deriving tolerance limits on the initial conditions for deformable registration that reliably lead to a successful registration. This approach results in a concise summary of the probability of registration failure, while accounting for the variability in the test data. The (β, γ) tolerance limit can be interpreted as a value of the input parameter that leads to a successful registration outcome in at least 100β% of cases with 100γ% confidence. The utility of the methodology is illustrated by summarizing the performance of a deformable registration algorithm evaluated in three different experimental setups of increasing complexity. Our examples are based on clinical data collected during MRI-guided prostate biopsy, registered using a publicly available deformable registration tool. The results indicate that the proposed methodology can be used to generate concise graphical summaries of the experiments, as well as a probabilistic estimate of the registration outcome for a future sample. Its use may facilitate improved objective assessment, comparison and retrospective stress-testing of deformable registration.
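    The (β, γ) tolerance-limit idea can be sketched with a nonparametric Clopper-Pearson bound: scan candidate thresholds on the input parameter and keep the largest one for which, with confidence γ, the success probability is at least β. This is an illustrative estimator on synthetic data, not necessarily the paper's exact procedure:

```python
import numpy as np
from scipy.stats import beta as beta_dist

def tolerance_limit(param, success, beta_cov=0.9, gamma=0.95):
    """Largest initial-condition value x such that, with confidence gamma,
    at least a fraction beta_cov of registrations with param <= x succeed."""
    best = None
    for x in np.sort(np.unique(param)):
        mask = param <= x
        n, s = int(mask.sum()), int(success[mask].sum())
        # Clopper-Pearson lower confidence bound on the success probability.
        lo = beta_dist.ppf(1 - gamma, s, n - s + 1) if s > 0 else 0.0
        if lo >= beta_cov:
            best = x
    return best

# Synthetic experiment: registrations succeed whenever the initial
# misalignment parameter is below 6.0 (arbitrary units).
param = np.linspace(0.05, 10.0, 200)
success = (param < 6.0).astype(int)
limit = tolerance_limit(param, success)
```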

  19. Digital map databases in support of avionic display systems

    NASA Astrophysics Data System (ADS)

    Trenchard, Michael E.; Lohrenz, Maura C.; Rosche, Henry, III; Wischow, Perry B.

    1991-08-01

    The emergence of computerized mission planning systems (MPS) and airborne digital moving map systems (DMS) has necessitated the development of a global database of raster aeronautical chart data specifically designed for input to these systems. The Naval Oceanographic and Atmospheric Research Laboratory's (NOARL) Map Data Formatting Facility (MDFF) is presently dedicated to supporting these avionic display systems with the development of the Compressed Aeronautical Chart (CAC) database on Compact Disk Read Only Memory (CDROM) optical discs. The MDFF is also developing a series of aircraft-specific Write-Once Read Many (WORM) optical discs. NOARL has initiated a comprehensive research program aimed at improving the pilots' moving map displays. Current research efforts include the development of an alternate image compression technique and the generation of a standard set of color palettes. The CAC database will provide digital aeronautical chart data in six different scales. CAC is derived from the Defense Mapping Agency's (DMA) Equal Arc-second (ARC) Digitized Raster Graphics (ADRG), a series of scanned aeronautical charts. NOARL processes ADRG to tailor the chart image resolution to that of the DMS display while reducing storage requirements through image compression techniques. CAC is being distributed by DMA as a library of CDROMs.

  20. Smoke detection

    DOEpatents

    Warmack, Robert J. Bruce; Wolf, Dennis A.; Frank, Steven Shane

    2016-09-06

    Various apparatus and methods for smoke detection are disclosed. In one embodiment, a method of training a classifier for a smoke detector comprises inputting sensor data from a plurality of tests into a processor. The sensor data is processed to generate derived signal data corresponding to the test data for respective tests. The derived signal data is assigned into categories comprising at least one fire group and at least one non-fire group. Linear discriminant analysis (LDA) training is performed by the processor. The derived signal data and the assigned categories for the derived signal data are inputs to the LDA training. The output of the LDA training is stored in a computer readable medium, such as in a smoke detector that uses LDA to determine, based on the training, whether present conditions indicate the existence of a fire.
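    The training procedure described in the patent can be sketched with scikit-learn's LDA implementation. The three "derived signal" channels and their distributions below are invented for illustration; the patent does not specify the feature set:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
# Hypothetical derived signal data (e.g. smoothed rates of change of
# smoke, CO and temperature channels) for fire and non-fire tests.
fire     = rng.normal(loc=[1.0, 0.8, 0.6], scale=0.2, size=(200, 3))
non_fire = rng.normal(loc=[0.0, 0.0, 0.0], scale=0.2, size=(200, 3))

X = np.vstack([fire, non_fire])
y = np.array([1] * 200 + [0] * 200)   # 1 = fire group, 0 = non-fire group

lda = LinearDiscriminantAnalysis().fit(X, y)
# A detector would store these trained coefficients and evaluate the
# discriminant online to decide whether present conditions indicate fire.
w, b = lda.coef_, lda.intercept_
```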

  1. Smoke detection

    DOEpatents

    Warmack, Robert J. Bruce; Wolf, Dennis A.; Frank, Steven Shane

    2015-10-27

    Various apparatus and methods for smoke detection are disclosed. In one embodiment, a method of training a classifier for a smoke detector comprises inputting sensor data from a plurality of tests into a processor. The sensor data is processed to generate derived signal data corresponding to the test data for respective tests. The derived signal data is assigned into categories comprising at least one fire group and at least one non-fire group. Linear discriminant analysis (LDA) training is performed by the processor. The derived signal data and the assigned categories for the derived signal data are inputs to the LDA training. The output of the LDA training is stored in a computer readable medium, such as in a smoke detector that uses LDA to determine, based on the training, whether present conditions indicate the existence of a fire.

  2. 3D shape recovery from image focus using gray level co-occurrence matrix

    NASA Astrophysics Data System (ADS)

    Mahmood, Fahad; Munir, Umair; Mehmood, Fahad; Iqbal, Javaid

    2018-04-01

    Recovering a precise and accurate 3-D shape of the target object using a robust 3-D shape recovery algorithm is an ultimate objective of the computer vision community. The focus measure algorithm plays an important role in this architecture: it converts the color values of each pixel of the acquired 2-D image dataset into corresponding focus values. After convolving the focus measure filter with the input 2-D image dataset, a 3-D shape recovery approach is applied to recover the depth map. In this document, we propose the Gray Level Co-occurrence Matrix, along with its statistical features, for computing the focus information of the image dataset. The Gray Level Co-occurrence Matrix quantifies the texture present in the image using statistical features computed from the joint probability distribution of the gray-level pairs of the input image. Finally, we quantify the focus value of the input image using a Gaussian Mixture Model. Due to its low computational complexity, sharp focus measure curve, robustness to random noise sources and accuracy, it is considered a superior alternative to most recently proposed 3-D shape recovery approaches. The algorithm is investigated in depth on real image sequences and a synthetic image dataset, and its efficiency is compared with state-of-the-art 3-D shape recovery approaches. Finally, by means of two global statistical measures, root mean square error and correlation, we show that this approach, in spite of its simplicity, generates accurate results.
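    A minimal numpy-only version of the GLCM and its contrast feature illustrates the focus-measure idea: sharp, textured patches produce large gray-level differences between neighbouring pixels and hence high contrast. This sketch omits the paper's additional GLCM statistics and the Gaussian Mixture Model step:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    # Joint probability distribution of gray-level pairs at offset (dx, dy).
    m = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            m[img[i, j], img[i + dy, j + dx]] += 1
    return m / m.sum()

def glcm_contrast(m):
    # Contrast feature: large when co-occurring gray levels differ strongly,
    # i.e. when the local texture is sharp, so it can serve as a focus value.
    i, j = np.indices(m.shape)
    return float((m * (i - j) ** 2).sum())

sharp   = (np.indices((8, 8)).sum(axis=0) % 2) * 7  # checkerboard, levels 0/7
blurred = np.full((8, 8), 3)                        # textureless patch
```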

  3. P-glycoprotein (ABCB1) inhibits the influx and increases the efflux of 11C-metoclopramide across the blood-brain barrier: a PET study on non-human primates.

    PubMed

    Auvity, Sylvain; Caillé, Fabien; Marie, Solène; Wimberley, Catriona; Bauer, Martin; Langer, Oliver; Buvat, Irène; Goutal, Sébastien; Tournier, Nicolas

    2018-05-10

    Rationale: PET imaging using radiolabeled high-affinity substrates of P-glycoprotein (ABCB1) has convincingly revealed the role of this major efflux transporter in limiting the influx of its substrates from blood into the brain across the blood-brain barrier (BBB). Many drugs, such as metoclopramide, are weak ABCB1 substrates and distribute into the brain even when ABCB1 is fully functional. In this study, we used kinetic modeling and validated simplified methods to highlight and quantify the impact of ABCB1 on the BBB influx and efflux of 11C-metoclopramide, as a model weak ABCB1 substrate, in non-human primates. Methods: The regional brain kinetics of a tracer dose of 11C-metoclopramide (298 ± 44 MBq) were assessed in baboons using PET without (n = 4) or with intravenous co-infusion of the ABCB1 inhibitor tariquidar (4 mg/kg/h, n = 4). Metabolite-corrected arterial input functions were generated to estimate the regional volume of distribution (VT) as well as the influx (K1) and efflux (k2) rate constants, using a one-tissue compartment model. Modeling outcome parameters were correlated with image-derived parameters, i.e. the areas under the curve AUC0-30min and AUC30-60min (SUV·min) as well as the elimination slope (kE; min-1) from 30 to 60 min of the regional time-activity curves. Results: Tariquidar significantly increased the brain distribution of 11C-metoclopramide (VT = 4.3 ± 0.5 mL/cm3 and 8.7 ± 0.5 mL/cm3 for baseline and ABCB1 inhibition conditions, respectively; P < .001), with a 1.28-fold increase in K1 (P < .05) and a 1.64-fold decrease in k2 (P < .001). The effect of tariquidar was homogeneous across different brain regions. The parameters most sensitive to ABCB1 inhibition were VT (2.02-fold increase) and AUC30-60min (2.02-fold increase). VT was significantly (P < .0001) correlated with AUC30-60min (r2 = 0.95), AUC0-30min (r2 = 0.87) and kE (r2 = 0.62). 
Conclusion: 11C-metoclopramide PET imaging revealed the relative importance of both the influx-hindrance and efflux-enhancement components of ABCB1 in a relevant model of the human BBB. The overall impact of ABCB1 on drug delivery to the brain can be non-invasively estimated from image-derived outcome parameters without the need for an arterial input function. Copyright © 2018 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
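    The one-tissue compartment model used in this study can be sketched numerically: the tissue time-activity curve is the arterial input function convolved with a single exponential, CT(t) = K1 · Cp(t) ⊗ exp(-k2·t), and the total volume of distribution is VT = K1/k2. The input function and rate constants below are toy values, not the study's fitted parameters:

```python
import numpy as np

def one_tissue_ct(t, cp, K1, k2):
    # Discrete convolution of the arterial input function cp(t) with the
    # impulse response K1 * exp(-k2 * t), on a uniform time grid.
    dt = t[1] - t[0]
    return K1 * np.convolve(cp, np.exp(-k2 * t))[: len(t)] * dt

t = np.linspace(0.0, 60.0, 601)           # minutes
cp = np.exp(-0.1 * t) * (t > 0)           # toy arterial input function
K1, k2 = 0.2, 0.05                        # influx (mL/cm3/min), efflux (1/min)

ct = one_tissue_ct(t, cp, K1, k2)         # tissue time-activity curve
vt = K1 / k2                              # total volume of distribution
```

    An ABCB1 inhibitor that raises K1 and lowers k2, as tariquidar does here, necessarily raises VT through both terms of this ratio.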

  4. Integrated editing system for Japanese text and image information "Linernote"

    NASA Astrophysics Data System (ADS)

    Tanaka, Kazuto

    The integrated Japanese text editing system "Linernote," developed by Toyo Industries Co., is explained. The system was developed on the concept of electronic publishing. It is composed of an NEC PC-9801 VX personal computer and other peripherals. Sentence, drawing and image data are input and edited under the system's integrated operating environment, and the final text is printed out by laser printer. The handling efficiency of time-consuming work, such as pattern input or page make-up, has been improved by a draft-image display method on the CRT. It is the latest DTP system equipped with three major functions, namely, typesetting for high-quality text editing, easy drawing/tracing, and high-speed image processing.

  5. Estimation of vegetation parameters such as Leaf Area Index from polarimetric SAR data

    NASA Astrophysics Data System (ADS)

    Hetz, Marina; Blumberg, Dan G.; Rotman, Stanley R.

    2010-05-01

    This work presents an analysis of the capability to use the radar backscatter coefficient in semi-arid zones to estimate the vegetation crown in terms of Leaf Area Index (LAI). The research area is characterized by the presence of a pine forest with shrubs as an underlying vegetation layer (understory), olive trees, natural grove areas and eucalyptus trees. The research area was imaged by an airborne radar system in L-band during February 2009. The imagery includes multi-look radar images, all fully polarized, i.e., HH, VV and HV polarizations. For this research we used the central azimuth angle (113°). We measured LAI using the Delta-T SunScan Canopy Analysis System. Verification was done by analytic calculations and digital methods for the leaf and needle surface area. In addition, we estimated the radar extinction coefficient of the vegetation volume by comparing point calibration targets (trihedral corner reflectors with 150 cm side length) inside and outside the canopy. The radar extinction in co-polarized images was ~26 dB and ~24 dB for pines and olives, respectively, compared to the same calibration target outside the vegetation. We used smaller trihedral corner reflectors (41 cm side length) and covered them with vegetation to measure the correlation between vegetation density, LAI and the radar backscatter coefficient for pines and olives under known conditions. An inverse correlation between the radar backscatter coefficient of the trihedral corner reflectors covered by olive branches and the LAI of those branches was observed. The correlation between LAI and optical transmittance was derived using the Beer-Lambert law. In addition, by comparing this law's principle to that underlying the production of the radar backscatter coefficient, we derived an equation relating the radar backscatter coefficient to LAI. 
After extracting the radar backscatter coefficient of forested areas, all the vegetation parameters were used as inputs for the MIMICS model, which simulates the radar backscatter coefficient of pines. The model results show a backscatter of -18 dB in HV polarization, which is 13 dB higher than the mean pine backscatter in the radar images, whereas the co-polarized images revealed a backscatter of -10 dB, which is 23 dB higher than the actual backscatter value derived from the radar images. Therefore, the next step in the research will incorporate other vegetation parameters and attempt to understand the discrepancies between the simulation and the actual data.
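    The Beer-Lambert relation mentioned above gives LAI directly from canopy transmittance: T = exp(-k · LAI), so LAI = -ln(T)/k. The extinction coefficient k below is an assumed illustrative value, not one measured in this study:

```python
import math

def lai_from_transmittance(T, k=0.5):
    # Beer-Lambert law for canopy light: T = exp(-k * LAI)
    # => LAI = -ln(T) / k, with k the canopy extinction coefficient
    # (assumed here; it varies with leaf angle distribution and species).
    return -math.log(T) / k
```

    For example, a canopy transmitting exp(-1.5) ≈ 22% of incident light with k = 0.5 corresponds to an LAI of 3.0.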

  6. Quantitative Functional Imaging Using Dynamic Positron Computed Tomography and Rapid Parameter Estimation Techniques

    NASA Astrophysics Data System (ADS)

    Koeppe, Robert Allen

    Positron computed tomography (PCT) is a diagnostic imaging technique that provides both three-dimensional imaging capability and quantitative measurements of local tissue radioactivity concentrations in vivo. This allows the development of non-invasive methods that employ the principles of tracer kinetics for determining physiological properties such as mass specific blood flow, tissue pH, and rates of substrate transport or utilization. A physiologically based, two-compartment tracer kinetic model was derived to mathematically describe the exchange of a radioindicator between blood and tissue. The model was adapted for use with dynamic sequences of data acquired with a positron tomograph. Rapid estimation techniques were implemented to produce functional images of the model parameters by analyzing each individual pixel sequence of the image data. A detailed analysis of the performance characteristics of three different parameter estimation schemes was performed. The analysis included examination of errors caused by statistical uncertainties in the measured data, errors in the timing of the data, and errors caused by violation of various assumptions of the tracer kinetic model. Two specific radioindicators were investigated. (18)F-fluoromethane, an inert freely diffusible gas, was used for local quantitative determinations of both cerebral blood flow and tissue:blood partition coefficient. A method was developed that did not require direct sampling of arterial blood for the absolute scaling of flow values. The arterial input concentration time course was obtained by assuming that the alveolar or end-tidal expired breath radioactivity concentration is proportional to the arterial blood concentration. The scale of the input function was obtained from a series of venous blood concentration measurements. 
The method of absolute scaling using venous samples was validated in four studies, performed on normal volunteers, in which directly measured arterial concentrations were compared to those predicted from the expired air and venous blood samples. The glucose analog (18)F-3-deoxy-3-fluoro-D-glucose (3-FDG) was used for quantitating the membrane transport rate of glucose. The measured data indicated that the phosphorylation rate of 3-FDG was low enough to allow accurate estimation of the transport rate using a two-compartment model.
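The blood-tissue exchange described by such a two-compartment model reduces to a single linear ODE for the tissue concentration, dCt/dt = K1·Cp(t) − k2·Ct(t), where for a freely diffusible tracer K1 approximates blood flow and K1/k2 the tissue:blood partition coefficient. A minimal forward-Euler sketch with illustrative parameter values (not the dissertation's estimation method, which fits these parameters per pixel):

```python
import numpy as np

def tissue_curve(t, cp, K1, k2):
    """Integrate the one-tissue (blood + tissue) compartment model
    dCt/dt = K1*Cp(t) - k2*Ct(t) with forward Euler.
    t  : sample times; cp : arterial input function at those times.
    K1, k2 are illustrative rate constants (1/min)."""
    ct = np.zeros_like(t)
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        ct[i] = ct[i - 1] + dt * (K1 * cp[i - 1] - k2 * ct[i - 1])
    return ct
```

With a constant input, the tissue curve saturates at K1/k2 times the blood level, which is why the late part of the curve carries the partition-coefficient information.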

  7. Poster - Thur Eve - 05: Safety systems and failure modes and effects analysis for a magnetic resonance image guided radiation therapy system.

    PubMed

    Lamey, M; Carlone, M; Alasti, H; Bissonnette, J P; Borg, J; Breen, S; Coolens, C; Heaton, R; Islam, M; van Proojen, M; Sharpe, M; Stanescu, T; Jaffray, D

    2012-07-01

    An online Magnetic Resonance guided Radiation Therapy (MRgRT) system is under development. The system comprises an MRI scanner with the capability of travelling between and into HDR brachytherapy and external beam radiation therapy vaults. The system will provide online MR images immediately prior to radiation therapy; these images will be registered to a planning image and used for image guidance. To address system safety, we performed a failure modes and effects analysis. A process tree of the facility function was developed. Using the process tree and an initial design of the facility as guidelines, possible failure modes were identified, and root causes were identified for each failure mode. Severity, detectability, and occurrence scores were then assigned to each possible failure. Finally, suggestions were developed to reduce the possibility of an event. The process tree consists of nine main inputs, each comprising 5-10 sub-inputs; tertiary inputs were also defined. The process tree ensures that the overall safety of the system has been considered. Several possible failure modes were identified, relevant to the design, construction, commissioning, and operating phases of the facility. The utility of the analysis can be seen in that it has spawned projects prior to installation and has led to suggestions in the design of the facility. © 2012 American Association of Physicists in Medicine.
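In a conventional FMEA, the severity, occurrence, and detectability scores assigned to each failure mode are multiplied into a Risk Priority Number (RPN) used to rank mitigation effort. The abstract does not state its scoring scales or RPN formula, so the 1-10 scales and the failure modes below are assumptions for illustration:

```python
def rpn(severity, occurrence, detectability):
    """Risk Priority Number for ranking FMEA failure modes.
    Scores are assumed to lie on the conventional 1-10 scales."""
    for score in (severity, occurrence, detectability):
        if not 1 <= score <= 10:
            raise ValueError("scores expected in 1..10")
    return severity * occurrence * detectability

# hypothetical failure modes, ranked highest risk first
modes = {
    "wrong image registered": (9, 3, 4),
    "MRI transport interlock fails": (10, 2, 2),
}
ranked = sorted(modes, key=lambda m: rpn(*modes[m]), reverse=True)
```

Ranking by RPN is what lets an analysis like this "spawn projects prior to installation": the highest-RPN modes are the ones that justify design changes first.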

  8. Transurethral light delivery for prostate photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Lediju Bell, Muyinatu A.; Guo, Xiaoyu; Song, Danny Y.; Boctor, Emad M.

    2015-03-01

    Photoacoustic imaging has broad clinical potential to enhance prostate cancer detection and treatment, yet it is challenged by the lack of minimally invasive, deeply penetrating light delivery methods that provide sufficient visualization of targets (e.g., tumors, contrast agents, brachytherapy seeds). We constructed a side-firing fiber prototype for transurethral photoacoustic imaging of prostates with a dual-array (linear and curvilinear) transrectal ultrasound probe. A method to calculate the surface area and, thereby, estimate the laser fluence at this fiber tip was derived, validated, applied to various design parameters, and used as an input to three-dimensional Monte Carlo simulations. Brachytherapy seeds implanted in phantom, ex vivo, and in vivo canine prostates at radial distances of 5 to 30 mm from the urethra were imaged with the fiber prototype transmitting 1064 nm wavelength light with 2 to 8 mJ pulse energy. Prebeamformed images were displayed in real time at 3 to 5 frames per second to guide fiber placement, then beamformed offline. A conventional delay-and-sum beamformer provided decreasing seed contrast (23 to 9 dB) with increasing urethra-to-target distance, while the short-lag spatial coherence beamformer provided improved and relatively constant seed contrast (28 to 32 dB) regardless of distance, thus improving multitarget visualization in single and combined curvilinear images acquired with the fiber rotating and the probe fixed. The proposed light delivery and beamforming methods promise to improve key prostate cancer detection and treatment strategies.
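The abstract reports seed contrast in dB without giving the formula; one common definition is 20·log10 of the ratio of mean target-ROI to mean background-ROI amplitude. A minimal sketch under that assumption:

```python
import numpy as np

def contrast_db(image, target_mask, background_mask):
    """Contrast between a target ROI and a background ROI of a
    beamformed (envelope) image, using the common
    20*log10(mean_target / mean_background) definition (assumed;
    the abstract does not state which formula was used)."""
    s_in = image[target_mask].mean()
    s_out = image[background_mask].mean()
    return 20.0 * np.log10(s_in / s_out)
```

Under this definition, a tenfold amplitude ratio between a seed and its surroundings corresponds to 20 dB of contrast.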

  9. Target acquisition modeling over the exact optical path: extending the EOSTAR TDA with the TOD sensor performance model

    NASA Astrophysics Data System (ADS)

    Dijk, J.; Bijl, P.; Oppeneer, M.; ten Hove, R. J. M.; van Iersel, M.

    2017-10-01

    The Electro-Optical Signal Transmission and Ranging (EOSTAR) model is an image-based Tactical Decision Aid (TDA) for thermal imaging systems (MWIR/LWIR) developed for a sea environment with an extensive atmosphere model. The Triangle Orientation Discrimination (TOD) Target Acquisition model calculates the sensor and signal-processing effects on a set of input triangle test-pattern images, judges their orientation using humans or a Human Visual System (HVS) model, and derives the system image quality and operational field performance from the correctness of the responses. Combining the TOD model with EOSTAR makes it possible to model Target Acquisition (TA) performance over the exact path from scene to observer. In this method, ship-representative TOD test patterns are placed at the position of the real target; the combined effects of the environment (atmosphere, background, etc.), sensor, and signal processing on the image are then calculated using EOSTAR, and finally the results are judged by humans. The thresholds are converted into Detection-Recognition-Identification (DRI) ranges of the real target. Experiments show that combining the TOD model with the EOSTAR model is indeed possible. The resulting images look natural and provide insight into the possibilities of combining the two models. The TOD observation task can be performed well by humans, and the measured TOD is consistent with analytical TOD predictions for the same camera modeled in the ECOMOS project.

  10. MARSTHERM: A Web-based System Providing Thermophysical Analysis Tools for Mars Research

    NASA Astrophysics Data System (ADS)

    Putzig, N. E.; Barratt, E. M.; Mellon, M. T.; Michaels, T. I.

    2013-12-01

    We introduce MARSTHERM, a web-based system that will allow researchers access to a standard numerical thermal model of the Martian near-surface and atmosphere. In addition, the system will provide tools for the derivation, mapping, and analysis of apparent thermal inertia from temperature observations by the Mars Global Surveyor Thermal Emission Spectrometer (TES) and the Mars Odyssey Thermal Emission Imaging System (THEMIS). Adjustable parameters for the thermal model include thermal inertia, albedo, surface pressure, surface emissivity, atmospheric dust opacity, latitude, surface slope angle and azimuth, season (solar longitude), and time steps for calculations and output. The model computes diurnal surface and brightness temperatures for either a single day or a full Mars year. Output options include text files and plots of seasonal and diurnal surface, brightness, and atmospheric temperatures. The tools for the derivation and mapping of apparent thermal inertia from spacecraft data are project-based, wherein the user provides an area of interest (AOI) by specifying latitude and longitude ranges. The system will then extract results within the AOI from prior global mapping of elevation (from the Mars Orbiter Laser Altimeter, for calculating surface pressure), TES annual albedo, and TES seasonal and annual-mean 2AM and 2PM apparent thermal inertia (Putzig and Mellon, 2007, Icarus 191, 68-94). In addition, a history of TES dust opacity within the AOI is computed. For each project, users may then provide a list of THEMIS images to process for apparent thermal inertia, optionally overriding the TES-derived dust opacity with a fixed value. Output from the THEMIS derivation process includes thumbnail and context images, GeoTIFF raster data, and HDF5 files containing arrays of input and output data (radiance, brightness temperature, apparent thermal inertia, elevation, quality flag, latitude, and longitude) and ancillary information. 
As a demonstration of capabilities, we will present results from a thermophysical study of Gale Crater (Barratt and Putzig, 2013, EPSC abstract 613), for which TES and THEMIS mapping has been carried out during system development. Public access to the MARSTHERM system will be provided in conjunction with the 2013 AGU Fall Meeting and will feature the numerical thermal model and thermal-inertia derivation algorithm developed by Mellon et al. (2000, Icarus 148, 437-455) as modified by Putzig and Mellon (2007, Icarus 191, 68-94). Updates to the thermal model and derivation algorithm that include a more sophisticated representation of the atmosphere and a layered subsurface are presently in development, and these will be incorporated into the system when they are available. Other planned enhancements include tools for modeling temperatures from horizontal mixtures of materials and slope facets, for comparing heterogeneity modeling results to TES and THEMIS results, and for mosaicking THEMIS images.

  11. An improved optimization algorithm of the three-compartment model with spillover and partial volume corrections for dynamic FDG PET images of small animal hearts in vivo

    NASA Astrophysics Data System (ADS)

    Li, Yinlin; Kundu, Bijoy K.

    2018-03-01

    The three-compartment model with spillover (SP) and partial volume (PV) corrections has been widely used for noninvasive kinetic parameter studies of dynamic 2-[18F]fluoro-2-deoxy-D-glucose (FDG) positron emission tomography images of small animal hearts in vivo. However, the approach still suffers from estimation uncertainty or slow convergence caused by the commonly used optimization algorithms. The aim of this study was to develop an improved optimization algorithm with better estimation performance. Femoral artery blood samples, image-derived input functions from heart ventricles, and myocardial time-activity curves (TACs) were derived from data on 16 C57BL/6 mice obtained from the UCLA Mouse Quantitation Program. Parametric equations of the average myocardium and the blood pool TACs with SP and PV corrections in a three-compartment tracer kinetic model were formulated. A hybrid method integrating artificial immune-system and interior-reflective Newton methods was developed to solve the equations. Two penalty functions and one late time-point tail vein blood sample were used to constrain the objective function. The estimation accuracy of the method was validated by comparing results with experimental values using the errors in the areas under curves (AUCs) of the model corrected input function (MCIF) and the 18F-FDG influx constant Ki. Moreover, the elapsed time was used to measure the convergence speed. The overall AUC error of MCIF for the 16 mice averaged -1.4 ± 8.2%, with a correlation coefficient of 0.9706. Similar results can be seen in the overall Ki error percentage, which was 0.4 ± 5.8% with a correlation coefficient of 0.9912. The t-test P value for both showed no significant difference. The mean and standard deviation of the MCIF AUC and Ki percentage errors are lower than those of previously published methods. The computation time of the hybrid method is also several times shorter than that of a purely stochastic algorithm. 
The proposed method significantly improved the model estimation performance in terms of the accuracy of the MCIF and K i , as well as the convergence speed.

  12. ACCURATE CHARACTERIZATION OF HIGH-DEGREE MODES USING MDI OBSERVATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korzennik, S. G.; Rabello-Soares, M. C.; Schou, J.

    2013-08-01

    We present the first accurate characterization of high-degree modes, derived using the best Michelson Doppler Imager (MDI) full-disk full-resolution data set available. A 90 day long time series of full-disk 2 arcsec pixel^-1 resolution Dopplergrams was acquired in 2001, thanks to the high-rate telemetry provided by the Deep Space Network. These Dopplergrams were spatially decomposed using our best estimate of the image scale and the known components of MDI's image distortion. A multi-taper power spectrum estimator was used to generate power spectra for all degrees and all azimuthal orders, up to l = 1000. We used a large number of tapers to reduce the realization noise, since at high degrees the individual modes blend into ridges and thus there is no reason to preserve a high spectral resolution. These power spectra were fitted for all degrees and all azimuthal orders, between l = 100 and l = 1000, and for all the orders with substantial amplitude. This fitting generated in excess of 5.2 × 10^6 individual estimates of ridge frequencies, line widths, amplitudes, and asymmetries (singlets), corresponding to some 5700 multiplets (l, n). Fitting at high degrees yields ridge characteristics, which do not correspond to the underlying mode characteristics. We used sophisticated forward modeling to recover the best possible estimate of the underlying mode characteristics (mode frequencies, as well as line widths, amplitudes, and asymmetries). We describe this modeling and its validation in detail. The modeling has been extensively reviewed and refined, including an iterative process to improve its input parameters to better match the observations. The contribution of the leakage matrix to the accuracy of the procedure has also been carefully assessed. We present the derived set of corrected mode characteristics, which includes not only frequencies, but line widths, asymmetries, and amplitudes. 
We present and discuss their uncertainties and the precision of the ridge-to-mode correction schemes, through a detailed assessment of the sensitivity of the model to its input set. The precision of the ridge-to-mode correction is indicative of any possible residual systematic biases in the inferred mode characteristics. In our conclusions, we address how to further improve these estimates, and the implications for other data sets, like GONG+ and HMI.
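The multi-taper estimator referred to above averages periodograms computed with orthogonal Slepian (DPSS) tapers; using many tapers trades spectral resolution for lower realization noise, exactly the trade accepted when modes blend into ridges. A minimal sketch with illustrative taper parameters:

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, nw=4.0, k=7):
    """Multi-taper power-spectrum estimate: average the periodograms
    of the signal weighted by k discrete prolate spheroidal (Slepian)
    tapers with time-bandwidth product nw. Averaging k nearly
    uncorrelated periodograms reduces realization noise by ~1/k
    while widening the effective resolution to ~2*nw bins."""
    n = len(x)
    tapers = dpss(n, nw, k)                    # shape (k, n)
    spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    return spectra.mean(axis=0)
```

For a pure tone, the estimate concentrates its power in a band of roughly ±nw frequency bins around the tone, rather than a single sharp peak.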

  13. High-order motor cortex in rats receives somatosensory inputs from the primary motor cortex via cortico-cortical pathways.

    PubMed

    Kunori, Nobuo; Takashima, Ichiro

    2016-12-01

    The motor cortex of rats contains two forelimb motor areas: the caudal forelimb area (CFA) and the rostral forelimb area (RFA). Although the RFA is thought to correspond to the premotor and/or supplementary motor cortices of primates, which are higher-order motor areas that receive somatosensory inputs, it is unknown whether the RFA of rats receives somatosensory inputs in the same manner. To investigate this issue, voltage-sensitive dye (VSD) imaging was used to assess the motor cortex in rats following a brief electrical stimulation of the forelimb. This procedure was followed by intracortical microstimulation (ICMS) mapping to identify the motor representations in the imaged cortex. The combined use of VSD imaging and ICMS revealed that both the CFA and RFA received excitatory synaptic inputs after forelimb stimulation. Further evaluation of the sensory input pathway to the RFA revealed that the forelimb-evoked RFA response was abolished either by the pharmacological inactivation of the CFA or a cortical transection between the CFA and RFA. These results suggest that forelimb-related sensory inputs would be transmitted to the RFA from the CFA via the cortico-cortical pathway. Thus, the present findings imply that sensory information processed in the RFA may be used for the generation of coordinated forelimb movements, which would be similar to the function of the higher-order motor cortex in primates. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  14. Image segmentation algorithm based on improved PCNN

    NASA Astrophysics Data System (ADS)

    Chen, Hong; Wu, Chengdong; Yu, Xiaosheng; Wu, Jiahui

    2017-11-01

    A modified simplified Pulse Coupled Neural Network (PCNN) model is proposed in this article based on the simplified PCNN. Some work has been done to enrich this model, such as imposing restrictions on the input terms and improving the linking inputs and internal activity of the PCNN. A self-adaptive method for setting the linking coefficient and the threshold decay time constant is also proposed. Finally, we implemented an image segmentation algorithm for five test images based on the proposed simplified PCNN model and PSO. Experimental results demonstrate that this image segmentation algorithm outperforms the SPCNN and Otsu methods.
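A simplified PCNN iterates a feeding input (the pixel intensity), a linking input coupling 3x3 neighbours, and a decaying dynamic threshold until neurons fire. The sketch below uses one common SPCNN formulation with hand-picked parameters, not the paper's self-adaptive settings:

```python
import numpy as np
from scipy.ndimage import convolve

def spcnn_segment(img, beta=0.3, v_l=1.0, v_e=20.0, a_e=0.7, iters=10):
    """Simplified PCNN segmentation sketch. Per iteration:
      L = v_l * (weighted sum of neighbouring firings Y)
      U = F * (1 + beta * L)        # internal activity
      Y = 1 where U exceeds the dynamic threshold E
      E = exp(-a_e) * E + v_e * Y   # decay, then jump where fired
    Returns the mask of neurons that fired at least once.
    All parameter values here are illustrative assumptions."""
    f = img.astype(float) / img.max()       # feeding input, normalized
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])
    y = np.zeros_like(f)
    e = np.ones_like(f)
    fired = np.zeros(f.shape, dtype=bool)
    for _ in range(iters):
        l = v_l * convolve(y, kernel, mode="constant")
        u = f * (1.0 + beta * l)
        y = (u > e).astype(float)
        fired |= y.astype(bool)
        e = np.exp(-a_e) * e + v_e * y
    return fired
```

Bright regions fire in early iterations while the threshold is still high; stopping before the threshold decays far enough for the background yields a foreground mask.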

  15. Multiclassifier fusion in human brain MR segmentation: modelling convergence.

    PubMed

    Heckemann, Rolf A; Hajnal, Joseph V; Aljabar, Paul; Rueckert, Daniel; Hammers, Alexander

    2006-01-01

    Segmentations of MR images of the human brain can be generated by propagating an existing atlas label volume to the target image. By fusing multiple propagated label volumes, the segmentation can be improved. We developed a model that predicts the improvement of labelling accuracy and precision based on the number of segmentations used as input. Using a cross-validation study on brain image data as well as numerical simulations, we verified the model. Fit parameters of this model are potential indicators of the quality of a given label propagation method or the consistency of the input segmentations used.
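Fusing multiple propagated label volumes is commonly done by per-voxel majority vote (decision fusion, one of several possible fusion rules); labelling accuracy rises with the number of input segmentations, which is the behaviour the convergence model above predicts. A minimal sketch:

```python
import numpy as np

def fuse_labels(segmentations):
    """Fuse propagated label volumes by per-voxel majority vote.
    segmentations: sequence of equally shaped integer label arrays.
    Ties resolve to the smallest label value (np.argmax convention)."""
    stack = np.stack(segmentations)                   # (n_raters, ...)
    labels = np.unique(stack)
    votes = np.stack([(stack == lab).sum(axis=0) for lab in labels])
    return labels[np.argmax(votes, axis=0)]
```

With independent rater errors, the probability that a majority of n raters is wrong at a voxel falls as n grows, which is why the fused segmentation improves with more inputs.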

  16. Distribution and bioavailability of cadmium in ornithogenic coral-sand sediments of the Xisha archipelago, South China Sea.

    PubMed

    Liu, Xiaodong; Lou, Chuangneng; Xu, Liqiang; Sun, Liguang

    2012-09-01

    Total cadmium (Cd) concentrations in four ornithogenic coral-sand sedimentary profiles displayed a strong positive correlation with guano-derived phosphorus, but had no correlation with plant-originated organic matter in the top sediments. These results indicate that the total Cd distributions were predominantly controlled by guano input. Bioavailable Cd and zinc (Zn) had a greater input rate in the top sediments with respect to total Cd and total Zn, and a positive correlation with total organic carbon (TOC) derived from plant humus. Multi-regression analysis showed that the total Cd and TOC explained over 80% of the variation of bioavailable Cd, suggesting that both guano and plant inputs could significantly influence the distribution of bioavailable Cd, and that plant biocycling processes contribute more to the recent increase of bioavailable Cd. A pollution assessment indicates that the Yongle archipelago is moderately to strongly polluted with guano-derived Cd. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. Real-Time Stability and Control Derivative Extraction From F-15 Flight Data

    NASA Technical Reports Server (NTRS)

    Smith, Mark S.; Moes, Timothy R.; Morelli, Eugene A.

    2003-01-01

    A real-time, frequency-domain, equation-error parameter identification (PID) technique was used to estimate stability and control derivatives from flight data. This technique is being studied to support adaptive control system concepts currently being developed by NASA (National Aeronautics and Space Administration), academia, and industry. This report describes the basic real-time algorithm used for this study and implementation issues for onboard usage as part of an indirect-adaptive control system. A confidence-measures system for automated evaluation of PID results is discussed. Results calculated using flight data from a modified F-15 aircraft are presented. Test maneuvers included pilot input doublets and automated inputs at several flight conditions. Estimated derivatives are compared to aerodynamic model predictions. Data indicate that the real-time PID used for this study performs well enough to be used for onboard parameter estimation. For suitable test inputs, the parameter estimates converged rapidly to sufficient levels of accuracy. The confidence measures devised were moderately successful.
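Equation-error PID reduces to a linear least-squares problem: the measured state derivative is regressed on measured states and controls, and the coefficients are the stability and control derivatives. The flight algorithm applies this in the frequency domain; the structure is sketched here in the time domain with hypothetical, noise-free roll-axis data (the names Lp and Lda and their values are assumptions for illustration):

```python
import numpy as np

def equation_error_fit(x_dot, regressors):
    """Equation-error parameter identification: solve the linear
    least-squares problem x_dot = A @ theta, where the columns of A
    are the measured states and controls."""
    a = np.column_stack(regressors)
    theta, *_ = np.linalg.lstsq(a, x_dot, rcond=None)
    return theta

# hypothetical roll-rate data: p_dot = Lp*p + Lda*da, Lp=-2, Lda=5
rng = np.random.default_rng(0)
p = rng.standard_normal(200)        # roll rate
da = rng.standard_normal(200)       # aileron deflection
p_dot = -2.0 * p + 5.0 * da
est = equation_error_fit(p_dot, [p, da])
```

The doublet maneuvers mentioned above serve to excite the regressors so that the least-squares problem is well conditioned and the estimates converge quickly.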

  18. High-quality observation of surface imperviousness for urban runoff modelling using UAV imagery

    NASA Astrophysics Data System (ADS)

    Tokarczyk, P.; Leitao, J. P.; Rieckermann, J.; Schindler, K.; Blumensaat, F.

    2015-10-01

    Modelling rainfall-runoff in urban areas is increasingly applied to support flood risk assessment, particularly against the background of a changing climate and increasing urbanization. These models typically rely on high-quality data for rainfall and surface characteristics of the catchment area as model input. While recent research in urban drainage has been focusing on providing spatially detailed rainfall data, the technological advances in remote sensing that ease the acquisition of detailed land-use information are less prominently discussed within the community. Such methods are increasingly relevant because in many parts of the globe accurate land-use information is lacking, as detailed image data are often unavailable. Modern unmanned aerial vehicles (UAVs) allow one to acquire high-resolution images on a local level at comparatively low cost, performing on-demand repetitive measurements and obtaining a degree of detail tailored to the purpose of the study. In this study, we investigate for the first time the possibility of deriving high-resolution imperviousness maps for urban areas from UAV imagery and of using this information as input for urban drainage models. To do so, an automatic processing pipeline with a modern classification method is proposed and evaluated in a state-of-the-art urban drainage modelling exercise. In a real-life case study (Lucerne, Switzerland), we compare imperviousness maps generated using a fixed-wing consumer micro-UAV and standard large-format aerial images acquired by the Swiss national mapping agency (swisstopo). After assessing their overall accuracy, we perform an end-to-end comparison, in which they are used as an input for an urban drainage model. Then, we evaluate the influence which different image data sources and their processing methods have on hydrological and hydraulic model performance. 
We analyse the surface runoff of the 307 individual subcatchments regarding relevant attributes, such as peak runoff and runoff volume. Finally, we evaluate the model's channel flow prediction performance through a cross-comparison with reference flow measured at the catchment outlet. We show that imperviousness maps generated from UAV images processed with modern classification methods achieve an accuracy comparable to standard, off-the-shelf aerial imagery. In the examined case study, we find that the different imperviousness maps only have a limited influence on predicted surface runoff and pipe flows when traditional workflows are used. We expect that they will have a substantial influence when more detailed modelling approaches are employed to characterize land use and to predict surface runoff. We conclude that UAV imagery represents a valuable alternative data source for urban drainage model applications due to the possibility of flexibly acquiring up-to-date aerial images at a quality comparable to off-the-shelf image products and at a competitive price. We believe that in the future, urban drainage models representing a higher degree of spatial detail will fully benefit from the strengths of UAV imagery.

  19. DiffPy-CMI: Python libraries for the Complex Modeling Initiative

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Billinge, Simon; Juhas, Pavol; Farrow, Christopher

    2014-02-01

    Software to manipulate and describe crystal and molecular structures and set up structural refinements from multiple experimental inputs. Calculation and simulation of structure derived physical quantities. Library for creating customized refinements of atomic structures from available experimental and theoretical inputs.

  20. MMX-I: data-processing software for multimodal X-ray imaging and tomography

    PubMed Central

    Bergamaschi, Antoine; Medjoubi, Kadda; Messaoudi, Cédric; Marco, Sergio; Somogyi, Andrea

    2016-01-01

    A new multi-platform freeware has been developed for the processing and reconstruction of scanning multi-technique X-ray imaging and tomography datasets. The software platform aims to treat different scanning imaging techniques: X-ray fluorescence, phase, absorption and dark field and any of their combinations, thus providing an easy-to-use data processing tool for the X-ray imaging user community. A dedicated data input stream copes with the input and management of large datasets (several hundred GB) collected during a typical multi-technique fast scan at the Nanoscopium beamline and even on a standard PC. To the authors’ knowledge, this is the first software tool that aims at treating all of the modalities of scanning multi-technique imaging and tomography experiments. PMID:27140159

  1. Diffraction-Induced Bidimensional Talbot Self-Imaging with Full Independent Period Control

    NASA Astrophysics Data System (ADS)

    Guillet de Chatellus, Hugues; Romero Cortés, Luis; Deville, Antonin; Seghilani, Mohamed; Azaña, José

    2017-03-01

    We predict, formulate, and observe experimentally a generalized version of the Talbot effect that allows one to create diffraction-induced self-images of a periodic two-dimensional (2D) waveform with arbitrary control of the image spatial periods. Through the proposed scheme, the periods of the output self-image are multiples of the input ones by any desired integer or fractional factor, and they can be controlled independently across each of the two wave dimensions. The concept involves conditioning the phase profile of the input periodic wave before free-space diffraction. The wave energy is fundamentally preserved through the self-imaging process, enabling, for instance, the possibility of the passive amplification of the periodic patterns in the wave by a purely diffractive effect, without the use of any active gain.

  2. Diffraction-Induced Bidimensional Talbot Self-Imaging with Full Independent Period Control.

    PubMed

    Guillet de Chatellus, Hugues; Romero Cortés, Luis; Deville, Antonin; Seghilani, Mohamed; Azaña, José

    2017-03-31

    We predict, formulate, and observe experimentally a generalized version of the Talbot effect that allows one to create diffraction-induced self-images of a periodic two-dimensional (2D) waveform with arbitrary control of the image spatial periods. Through the proposed scheme, the periods of the output self-image are multiples of the input ones by any desired integer or fractional factor, and they can be controlled independently across each of the two wave dimensions. The concept involves conditioning the phase profile of the input periodic wave before free-space diffraction. The wave energy is fundamentally preserved through the self-imaging process, enabling, for instance, the possibility of the passive amplification of the periodic patterns in the wave by a purely diffractive effect, without the use of any active gain.
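For context, the classical one-dimensional Talbot condition that this work generalizes to two dimensions with independent period control can be stated as follows (standard textbook result, not taken from the abstract):

```latex
% A field with transverse period p self-images after the Talbot distance
\[
  z_T = \frac{2p^2}{\lambda},
\]
% while at rational fractions of z_T, fractional self-images appear whose
% period is reduced by an integer factor. The generalization reported here
% conditions the input phase profile so that the output periods along each
% of the two transverse dimensions can be scaled independently by any
% desired integer or fractional factor.
```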

  3. Combining convolutional neural networks and Hough Transform for classification of images containing lines

    NASA Astrophysics Data System (ADS)

    Sheshkus, Alexander; Limonova, Elena; Nikolaev, Dmitry; Krivtsov, Valeriy

    2017-03-01

    In this paper, we propose an expansion of convolutional neural network (CNN) input features based on the Hough Transform. We perform morphological contrasting of the source image followed by the Hough Transform, and then use the result as input for some of the convolutional filters. Thus, the CNN's computational complexity and number of units are not affected; morphological contrasting and the Hough Transform are the only additional computational expenses of the introduced input-feature expansion. The proposed approach was demonstrated on a CNN with a very simple structure. We considered two image recognition problems: object classification on CIFAR-10 and printed character recognition on a private dataset of symbols taken from Russian passports. Our approach achieved a noticeable accuracy improvement at little computational cost, which can be extremely important in industrial recognition systems or in difficult problems utilising CNNs, such as pressure ridge analysis and classification.

  4. Lexical Morphology: Structure, Process, and Development

    ERIC Educational Resources Information Center

    Jarmulowicz, Linda; Taran, Valentina L.

    2013-01-01

    Recent work has demonstrated the importance of derivational morphology to later language development and has led to a consensus that derivation is a lexical process. In this review, derivational morphology is discussed in terms of lexical representation models from both linguistic and psycholinguistic perspectives. Input characteristics, including…

  5. Deformable Image Registration based on Similarity-Steered CNN Regression.

    PubMed

    Cao, Xiaohuan; Yang, Jianhua; Zhang, Jun; Nie, Dong; Kim, Min-Jeong; Wang, Qian; Shen, Dinggang

    2017-09-01

    Existing deformable registration methods require exhaustively iterative optimization, along with careful parameter tuning, to estimate the deformation field between images. Although some learning-based methods have been proposed for initiating deformation estimation, they are often template-specific and not flexible in practical use. In this paper, we propose a convolutional neural network (CNN) based regression model to directly learn the complex mapping from the input image pair (i.e., a pair of template and subject) to their corresponding deformation field. Specifically, our CNN architecture is designed in a patch-based manner to learn the complex mapping from the input patch pairs to their respective deformation field. First, the equalized active-points guided sampling strategy is introduced to facilitate accurate CNN model learning upon a limited image dataset. Then, the similarity-steered CNN architecture is designed, where we propose to add the auxiliary contextual cue, i.e., the similarity between input patches, to more directly guide the learning process. Experiments on different brain image datasets demonstrate promising registration performance based on our CNN model. Furthermore, it is found that the trained CNN model from one dataset can be successfully transferred to another dataset, although brain appearances across datasets are quite variable.

  6. Semantic Image Segmentation with Contextual Hierarchical Models.

    PubMed

    Seyedhosseini, Mojtaba; Tasdizen, Tolga

    2016-05-01

Semantic segmentation is the problem of assigning an object label to each pixel. It unifies the image segmentation and object recognition problems. The importance of using contextual information in semantic segmentation frameworks has been widely realized in the field. We propose a contextual framework, called the contextual hierarchical model (CHM), which learns contextual information in a hierarchical framework for semantic segmentation. At each level of the hierarchy, a classifier is trained based on downsampled input images and the outputs of previous levels. Our model then incorporates the resulting multi-resolution contextual information into a classifier to segment the input image at the original resolution. This training strategy allows for optimization of a joint posterior probability at multiple resolutions through the hierarchy. The contextual hierarchical model is based purely on the input image patches and does not make use of any fragments or shape examples. Hence, it is applicable to a variety of problems such as object segmentation and edge detection. We demonstrate that CHM performs on par with the state of the art on the Stanford background and Weizmann horse datasets. It also outperforms state-of-the-art edge detection methods on the NYU depth dataset and achieves state-of-the-art performance on the Berkeley segmentation dataset (BSDS 500).

  7. Deep Convolutional Neural Networks for Multi-Modality Isointense Infant Brain Image Segmentation

    PubMed Central

    Zhang, Wenlu; Li, Rongjian; Deng, Houtao; Wang, Li; Lin, Weili; Ji, Shuiwang; Shen, Dinggang

    2015-01-01

The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development in health and disease. In the isointense stage (approximately 6–8 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, making tissue segmentation very challenging. Only a small number of existing methods have been designed for tissue segmentation in this isointense stage; however, they used only a single T1 or T2 image, or the combination of T1 and T2 images. In this paper, we propose to use deep convolutional neural networks (CNNs) for segmenting isointense stage brain tissues using multi-modality MR images. CNNs are a type of deep model in which trainable filters and local neighborhood pooling operations are applied alternately to the raw input images, resulting in a hierarchy of increasingly complex features. Specifically, we used multimodality information from T1, T2, and fractional anisotropy (FA) images as inputs and then generated the segmentation maps as outputs. The multiple intermediate layers applied convolution, pooling, normalization, and other operations to capture the highly nonlinear mappings between inputs and outputs. We compared the performance of our approach with that of the commonly used segmentation methods on a set of manually segmented isointense stage brain images. Results showed that our proposed model significantly outperformed prior methods on infant brain tissue segmentation. In addition, our results indicated that integration of multi-modality images led to significant performance improvement. PMID:25562829

  8. Forest above ground biomass estimation and forest/non-forest classification for Odisha, India, using L-band Synthetic Aperture Radar (SAR) data

    NASA Astrophysics Data System (ADS)

    Suresh, M.; Kiran Chand, T. R.; Fararoda, R.; Jha, C. S.; Dadhwal, V. K.

    2014-11-01

Tropical forests contribute approximately 40 % of the total carbon found in terrestrial biomass. In this context, forest/non-forest classification and estimation of forest above ground biomass over tropical regions are very important and relevant in understanding the contribution of tropical forests to global biogeochemical cycles, especially in terms of carbon pools and fluxes. Information on the spatio-temporal biomass distribution acts as a key input to Reducing Emissions from Deforestation and forest Degradation Plus (REDD+) action plans. This necessitates precise and reliable methods to estimate forest biomass and to reduce uncertainties in existing biomass quantification scenarios. The use of backscatter information from a host of all-weather-capable Synthetic Aperture Radar (SAR) systems during the recent past has demonstrated the potential of SAR data in forest above ground biomass estimation and forest/non-forest classification. In the present study, Advanced Land Observing Satellite (ALOS) / Phased Array L-band Synthetic Aperture Radar (PALSAR) data along with field inventory data have been used in forest above ground biomass estimation and forest/non-forest classification over Odisha state, India. The ALOS-PALSAR 50 m spatial resolution orthorectified and radiometrically corrected HH/HV dual polarization data (digital numbers) for the year 2010 were converted to backscattering coefficient images (Shimada et al., 2009). The tree level measurements collected during field inventory (2009-'10) on Girth at Breast Height (GBH at 1.3 m above ground) and height of all individual trees at plot (plot size 0.1 ha) level were converted to biomass density using species-specific allometric equations and wood densities. The field inventory based biomass estimations were empirically integrated with ALOS-PALSAR backscatter coefficients to derive spatial forest above ground biomass estimates for the study area.
Further, a Support Vector Machines (SVM) based Radial Basis Function classification technique was employed to carry out binary (forest/non-forest) classification using ALOS-PALSAR HH and HV backscatter coefficient images and field inventory data. Haralick's Grey Level Co-occurrence Matrix (GLCM) texture measures were computed on the HV backscatter image of Odisha for the year 2010. The PALSAR HH and HV backscatter coefficient images, their difference (HH-HV) and eight HV backscatter based textural parameters (Mean, Variance, Dissimilarity, Contrast, Angular second moment, Homogeneity, Correlation and Entropy) were used as input parameters for the Support Vector Machines (SVM) tool. Ground-based inputs for forest/non-forest were taken from field inventory data and high resolution Google maps. Results suggested a significant relationship between the HV backscatter coefficient and field based biomass (R2 = 0.508, p = 0.55) compared to HH, with biomass values ranging from 5 to 365 t/ha. The spatial variability of biomass with reference to different forest types is in good agreement. The forest/non-forest classified map suggested a total forest cover of 50214 km2 with an overall accuracy of 92.54 %. The forest/non-forest information derived from the present study showed good spatial agreement with the standard forest cover map of the Forest Survey of India (FSI) and its corresponding published area of 50575 km2. Results are discussed in the paper.
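The texture-plus-SVM pipeline described above can be sketched as follows: a few Haralick-style GLCM measures are computed by hand and fed to an RBF-kernel SVM. The synthetic "forest"/"non-forest" patches, the quantization level and the particular feature subset are illustrative assumptions, not the study's data or parameters.

```python
import numpy as np
from sklearn.svm import SVC

def glcm(patch, levels=8, dx=1, dy=0):
    """Grey Level Co-occurrence Matrix for one pixel offset, normalized to probabilities."""
    q = np.minimum((patch * levels).astype(int), levels - 1)  # quantize to `levels` grey levels
    m = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()

def haralick_features(patch):
    """A small subset of Haralick texture measures from the GLCM."""
    p = glcm(patch)
    i, j = np.indices(p.shape)
    mean = (i * p).sum()
    contrast = ((i - j) ** 2 * p).sum()
    homogeneity = (p / (1.0 + np.abs(i - j))).sum()
    asm = (p ** 2).sum()  # angular second moment
    return np.array([mean, contrast, homogeneity, asm])

rng = np.random.default_rng(0)
# Synthetic stand-ins: "forest" patches are brighter in HV backscatter than "non-forest".
forest = [rng.uniform(0.5, 1.0, (16, 16)) for _ in range(20)]
nonforest = [rng.uniform(0.0, 0.3, (16, 16)) for _ in range(20)]
X = np.array([haralick_features(p) for p in forest + nonforest])
y = np.array([1] * 20 + [0] * 20)  # 1 = forest, 0 = non-forest

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print(clf.predict([haralick_features(rng.uniform(0.5, 1.0, (16, 16)))]))
```

In the study itself the feature vector per pixel would combine HH, HV, HH-HV and the eight GLCM measures, with training labels from field inventory and high-resolution imagery.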

  9. Noninvasive quantification of cerebral metabolic rate for glucose in rats using 18F-FDG PET and standard input function

    PubMed Central

    Hori, Yuki; Ihara, Naoki; Teramoto, Noboru; Kunimi, Masako; Honda, Manabu; Kato, Koichi; Hanakawa, Takashi

    2015-01-01

Measurement of arterial input function (AIF) for quantitative positron emission tomography (PET) studies is technically challenging. The present study aimed to develop a method based on a standard arterial input function (SIF) to estimate the input function without blood sampling. We performed 18F-fluorodeoxyglucose studies accompanied by continuous blood sampling for measurement of AIF in 11 rats. The standard arterial input function was calculated by averaging AIFs from eight anesthetized rats, after normalization with body mass (BM) and injected dose (ID). Then, the individual input function was estimated using two types of SIF: (1) SIF calibrated by the individual's BM and ID (estimated individual input function, EIFNS) and (2) SIF calibrated by a single blood sampling as proposed previously (EIF1S). No significant differences in area under the curve (AUC) or cerebral metabolic rate for glucose (CMRGlc) were found across the AIF-, EIFNS-, and EIF1S-based methods using repeated measures analysis of variance. In the correlation analysis, AUC or CMRGlc derived from EIFNS was highly correlated with those derived from AIF and EIF1S. Preliminary comparison between AIF and EIFNS in three awake rats supported the idea that the method might be applicable to behaving animals. The present study suggests that the EIFNS method might serve as a noninvasive substitute for individual AIF measurement. PMID:25966947

  10. Noninvasive quantification of cerebral metabolic rate for glucose in rats using (18)F-FDG PET and standard input function.

    PubMed

    Hori, Yuki; Ihara, Naoki; Teramoto, Noboru; Kunimi, Masako; Honda, Manabu; Kato, Koichi; Hanakawa, Takashi

    2015-10-01

Measurement of arterial input function (AIF) for quantitative positron emission tomography (PET) studies is technically challenging. The present study aimed to develop a method based on a standard arterial input function (SIF) to estimate the input function without blood sampling. We performed (18)F-fluorodeoxyglucose studies accompanied by continuous blood sampling for measurement of AIF in 11 rats. The standard arterial input function was calculated by averaging AIFs from eight anesthetized rats, after normalization with body mass (BM) and injected dose (ID). Then, the individual input function was estimated using two types of SIF: (1) SIF calibrated by the individual's BM and ID (estimated individual input function, EIF(NS)) and (2) SIF calibrated by a single blood sampling as proposed previously (EIF(1S)). No significant differences in area under the curve (AUC) or cerebral metabolic rate for glucose (CMRGlc) were found across the AIF-, EIF(NS)-, and EIF(1S)-based methods using repeated measures analysis of variance. In the correlation analysis, AUC or CMRGlc derived from EIF(NS) was highly correlated with those derived from AIF and EIF(1S). Preliminary comparison between AIF and EIF(NS) in three awake rats supported the idea that the method might be applicable to behaving animals. The present study suggests that the EIF(NS) method might serve as a noninvasive substitute for individual AIF measurement.
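The EIF(NS) idea, rescaling a population-average input function by the individual's injected dose and body mass, can be sketched numerically. The curve shape, function name and sample values below are hypothetical stand-ins, not the study's measured data.

```python
import numpy as np

t = np.linspace(0, 60, 121)  # minutes post-injection
# Hypothetical population-average (standard) input function, already
# normalized by injected dose (ID) and body mass (BM) as in the study.
sif = t * np.exp(-t / 3.0)  # arbitrary gamma-variate-like shape

def estimate_input_function(sif, injected_dose_mbq, body_mass_g):
    """EIF_NS: rescale the standard curve by the individual's ID and BM."""
    return sif * injected_dose_mbq / body_mass_g

def trapz(y, x):
    """Trapezoidal area under the curve (AUC)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))

# Illustrative individual: 37 MBq injected, 300 g rat.
eif = estimate_input_function(sif, injected_dose_mbq=37.0, body_mass_g=300.0)
auc = trapz(eif, t)  # AUC, which feeds the CMRGlc calculation
print(round(auc, 3))
```

Since the rescaling is linear, the estimated AUC is simply the standard curve's AUC times ID/BM, which is why a single calibration sample (the EIF(1S) variant) can anchor the curve instead.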

  11. The impact of radiocesium input forms on its extractability in Fukushima forest soils.

    PubMed

    Teramage, Mengistu T; Carasco, Loic; Orjollet, Daniel; Coppin, Frederic

    2018-05-05

The effects of 137Cs deposit forms on its ageing in soil have not yet been reported. Soluble and solid 137Cs input forms were mixed with mineral soils collected under Fukushima's coniferous and broadleaf forests and incubated under controlled laboratory conditions, and the evolution of 137Cs availability over time was examined. Results show that the 137Cs fraction extracted with water was less than 1% for the soluble input form and below the detection limit for the solid input forms. Likewise, with an acetate reagent, the extracted 137Cs fraction ranged from 46 to 56% for the soluble input and from 2 to 15% for the solid input, implying that the nature of the 137Cs contamination strongly influences its extractability and mobility in soil. Although the degradation of organic materials was apparent, its impact on 137Cs extractability was found to be weak. Nevertheless, more Ac-available 137Cs was obtained from broadleaf organic material mixes than from their coniferous counterparts, suggesting that the lignified nature of the latter tends to retain more 137Cs. When extrapolated to a field context, a larger available 137Cs fraction may be expected from forest soils contaminated via wet deposition than from those contaminated via solid-derived inputs. Such information could be helpful for radioecological management schemes in contaminated forest environments. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. Atlas-based automatic measurements of the morphology of the tibiofemoral joint

    NASA Astrophysics Data System (ADS)

    Brehler, M.; Thawait, G.; Shyr, W.; Ramsay, J.; Siewerdsen, J. H.; Zbijewski, W.

    2017-03-01

Purpose: Anatomical metrics of the tibiofemoral joint support assessment of joint stability and surgical planning. We propose an automated, atlas-based algorithm to streamline the measurements in 3D images of the joint and reduce user-dependence of the metrics arising from manual identification of the anatomical landmarks. Methods: The method is initialized with coarse registrations of a set of atlas images to the fixed input image. The initial registrations are then refined separately for the tibia and femur and the best matching atlas is selected. Finally, the anatomical landmarks of the best matching atlas are transformed onto the input image by deforming a surface model of the atlas to fit the shape of the tibial plateau in the input image (a mesh-to-volume registration). We apply the method to weight-bearing volumetric images of the knee obtained from 23 subjects using an extremity cone-beam CT system. Results of the automated algorithm were compared to an expert radiologist for measurements of Static Alignment (SA), Medial Tibial Slope (MTS) and Lateral Tibial Slope (LTS). Results: Intra-reader variability as high as 10% for LTS and 7% for MTS (ratio of standard deviation to the mean in repeated measurements) was found for the expert radiologist, illustrating the potential benefits of an automated approach in improving the precision of the metrics. The proposed method achieved excellent registration of the atlas mesh to the input volumes. The resulting automated measurements yielded high correlations with the expert radiologist, as indicated by correlation coefficients of 0.72 for MTS, 0.8 for LTS, and 0.89 for SA. Conclusions: The automated method for measurement of anatomical metrics of the tibiofemoral joint achieves high correlation with the expert radiologist without the need for time-consuming and error-prone manual selection of landmarks.

  13. Atlas-based automatic measurements of the morphology of the tibiofemoral joint.

    PubMed

    Brehler, M; Thawait, G; Shyr, W; Ramsay, J; Siewerdsen, J H; Zbijewski, W

    2017-02-11

Anatomical metrics of the tibiofemoral joint support assessment of joint stability and surgical planning. We propose an automated, atlas-based algorithm to streamline the measurements in 3D images of the joint and reduce user-dependence of the metrics arising from manual identification of the anatomical landmarks. The method is initialized with coarse registrations of a set of atlas images to the fixed input image. The initial registrations are then refined separately for the tibia and femur and the best matching atlas is selected. Finally, the anatomical landmarks of the best matching atlas are transformed onto the input image by deforming a surface model of the atlas to fit the shape of the tibial plateau in the input image (a mesh-to-volume registration). We apply the method to weight-bearing volumetric images of the knee obtained from 23 subjects using an extremity cone-beam CT system. Results of the automated algorithm were compared to an expert radiologist for measurements of Static Alignment (SA), Medial Tibial Slope (MTS) and Lateral Tibial Slope (LTS). Intra-reader variability as high as ~10% for LTS and 7% for MTS (ratio of standard deviation to the mean in repeated measurements) was found for the expert radiologist, illustrating the potential benefits of an automated approach in improving the precision of the metrics. The proposed method achieved excellent registration of the atlas mesh to the input volumes. The resulting automated measurements yielded high correlations with the expert radiologist, as indicated by correlation coefficients of 0.72 for MTS, 0.8 for LTS, and 0.89 for SA. The automated method for measurement of anatomical metrics of the tibiofemoral joint achieves high correlation with the expert radiologist without the need for time-consuming and error-prone manual selection of landmarks.

  14. A hexagonal orthogonal-oriented pyramid as a model of image representation in visual cortex

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.

    1989-01-01

    Retinal ganglion cells represent the visual image with a spatial code, in which each cell conveys information about a small region in the image. In contrast, cells of the primary visual cortex use a hybrid space-frequency code in which each cell conveys information about a region that is local in space, spatial frequency, and orientation. A mathematical model for this transformation is described. The hexagonal orthogonal-oriented quadrature pyramid (HOP) transform, which operates on a hexagonal input lattice, uses basis functions that are orthogonal, self-similar, and localized in space, spatial frequency, orientation, and phase. The basis functions, which are generated from seven basic types through a recursive process, form an image code of the pyramid type. The seven basis functions, six bandpass and one low-pass, occupy a point and a hexagon of six nearest neighbors on a hexagonal lattice. The six bandpass basis functions consist of three with even symmetry, and three with odd symmetry. At the lowest level, the inputs are image samples. At each higher level, the input lattice is provided by the low-pass coefficients computed at the previous level. At each level, the output is subsampled in such a way as to yield a new hexagonal lattice with a spacing square root of 7 larger than the previous level, so that the number of coefficients is reduced by a factor of seven at each level. In the biological model, the input lattice is the retinal ganglion cell array. The resulting scheme provides a compact, efficient code of the image and generates receptive fields that resemble those of the primary visual cortex.

  15. Computerized tomography using video recorded fluoroscopic images

    NASA Technical Reports Server (NTRS)

    Kak, A. C.; Jakowatz, C. V., Jr.; Baily, N. A.; Keller, R. A.

    1975-01-01

    A computerized tomographic imaging system is examined which employs video-recorded fluoroscopic images as input data. By hooking the video recorder to a digital computer through a suitable interface, such a system permits very rapid construction of tomograms.
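The reconstruction step behind such a system can be illustrated with a deliberately simplified, unfiltered backprojection: each video-recorded projection is smeared back across the image at its acquisition angle and the results are accumulated. A real tomogram would apply a ramp filter first (filtered backprojection); the point phantom and geometry here are illustrative only.

```python
import numpy as np
from scipy.ndimage import rotate

N = 65
phantom = np.zeros((N, N))
phantom[32, 32] = 1.0  # single bright point at the grid centre

angles = np.arange(0, 180, 5)  # degrees
# Forward model: each "fluoroscopic frame" contributes one parallel projection,
# obtained here by rotating the image and summing along columns.
projections = [rotate(phantom, a, reshape=False, order=1).sum(axis=0) for a in angles]

# Unfiltered backprojection: smear each projection back across the image
# at its acquisition angle and accumulate.
recon = np.zeros((N, N))
for a, proj in zip(angles, projections):
    recon += rotate(np.tile(proj, (N, 1)), -a, reshape=False, order=1)

peak = np.unravel_index(np.argmax(recon), recon.shape)
print(peak)  # the reconstruction peaks at the phantom's location: (32, 32)
```

The unfiltered sum produces the characteristic 1/r blur around the point; the ramp filter of filtered backprojection removes it.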

  16. Towards ontology-based decision support systems for complex ultrasound diagnosis in obstetrics and gynecology.

    PubMed

    Maurice, P; Dhombres, F; Blondiaux, E; Friszer, S; Guilbaud, L; Lelong, N; Khoshnood, B; Charlet, J; Perrot, N; Jauniaux, E; Jurkovic, D; Jouannic, J-M

    2017-05-01

We have developed a new knowledge base intelligent system for obstetrics and gynecology ultrasound imaging, based on an ontology and a reference image collection. This study evaluates the new system's support for accurate annotation of ultrasound images. We used the early ultrasound diagnosis of ectopic pregnancies as a model clinical issue. The ectopic pregnancy ontology was derived from medical texts (4260 ultrasound reports of ectopic pregnancy from a specialist center in the UK and 2795 Pubmed abstracts indexed with the MeSH term "Pregnancy, Ectopic") and the reference image collection was built on a selection from 106 publications. We conducted a retrospective analysis of the signs in 35 scans of ectopic pregnancy by six observers using the new system. The resulting ectopic pregnancy ontology consisted of 1395 terms, and 80 images were collected for the reference collection. The observers used the knowledge base intelligent system to provide a total of 1486 sign annotations. The precision, recall and F-measure for the annotations were 0.83, 0.62 and 0.71, respectively. The global proportion of agreement was 40.35% (95% CI: 38.64-42.05). The ontology-based intelligent system provides accurate annotations of ultrasound images and suggests that it may benefit non-expert operators. The precision rate is appropriate for accurate input to a computer-based clinical decision support system and could be used to support medical imaging diagnosis of complex conditions in obstetrics and gynecology. Copyright © 2017. Published by Elsevier Masson SAS.
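The reported F-measure follows directly from the quoted precision and recall as their harmonic mean, which is easy to verify:

```python
def f_measure(precision, recall):
    """Harmonic mean of precision and recall (the F1 score)."""
    return 2 * precision * recall / (precision + recall)

# Values reported for the ontology-based annotation study.
print(round(f_measure(0.83, 0.62), 2))  # → 0.71
```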

  17. Scene segmentation of natural images using texture measures and back-propagation

    NASA Technical Reports Server (NTRS)

    Sridhar, Banavar; Phatak, Anil; Chatterji, Gano

    1993-01-01

Knowledge of the three-dimensional world is essential for many guidance and navigation applications. A sequence of images from an electro-optical sensor can be processed using optical flow algorithms to provide a sparse set of ranges as a function of azimuth and elevation. A natural way to enhance the range map is by interpolation. However, this should be undertaken with care since interpolation assumes continuity of range. The range is continuous in certain parts of the image and can jump at object boundaries. In such situations, the ability to detect homogeneous object regions by scene segmentation can be used to determine regions in the range map that can be enhanced by interpolation. The use of scalar features derived from the spatial gray-level dependence matrix for texture segmentation is explored. Thresholding of histograms of scalar texture features is done for several images to select scalar features that result in a meaningful segmentation of the images. Next, the selected scalar features are used with a neural net to automate the segmentation procedure. Back-propagation is used to train the feedforward neural network. The generalization of the network approach to subsequent images in the sequence is examined. It is shown that the use of multiple scalar features as input to the neural network results in a superior segmentation when compared with a single scalar feature. It is also shown that scalar features that are not useful individually result in a good segmentation when used together. The methodology is applied to both indoor and outdoor images.
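As a modern stand-in for the back-propagation network described above, the sketch below trains a small feedforward classifier on multiple scalar texture features per pixel. The feature values and the two region classes are invented for illustration; the paper's features come from the spatial gray-level dependence matrix.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

# Synthetic stand-ins for three scalar texture features per pixel,
# for two region classes (e.g., smooth sky vs. textured ground).
n = 200
sky = rng.normal([0.2, 0.1, 0.8], 0.05, (n, 3))     # smooth, bright regions
ground = rng.normal([0.7, 0.6, 0.3], 0.05, (n, 3))  # textured, darker regions
X = np.vstack([sky, ground])
y = np.array([0] * n + [1] * n)

# A small feedforward network trained with back-propagation, as in the paper;
# multiple scalar features give better separation than any single one.
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
print(net.score(X, y))
```

Segmenting a full image then amounts to evaluating the trained network at every pixel's feature vector.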

  18. AN INTEGRATED LANDSCAPE AND HYDROLOGICAL ASSESSMENT FOR THE YANTRA RIVER BASIN, BULGARIA

    EPA Science Inventory

Geospatial data and relationships derived therefrom are the cornerstone of the landscape sciences. This information is also of fundamental importance in deriving parameter inputs to watershed hydrologic models.

  19. Validation of Cloud Optical Parameters from Passive Remote Sensing in the Arctic by using the Aircraft Measurements

    NASA Astrophysics Data System (ADS)

    Chen, H.; Schmidt, S.; Coddington, O.; Wind, G.; Bucholtz, A.; Segal-Rosenhaimer, M.; LeBlanc, S. E.

    2017-12-01

Cloud Optical Parameters (COPs: e.g., cloud optical thickness and cloud effective radius) and surface albedo are the most important inputs for determining the Cloud Radiative Effect (CRE) at the surface. In the Arctic, the COPs derived from passive remote sensing such as from the Moderate Resolution Imaging Spectroradiometer (MODIS) are difficult to obtain with adequate accuracy owing mainly to insufficient knowledge about the snow/ice surface, but also because of the low solar elevation. This study aims to validate COPs derived from passive remote sensing in the Arctic by using aircraft measurements collected during two field campaigns based in Fairbanks, Alaska. During both experiments, ARCTAS (Arctic Research of the Composition of the Troposphere from Aircraft and Satellites) and ARISE (Arctic Radiation-IceBridge Sea and Ice Experiment), the Solar Spectral Flux Radiometer (SSFR) measured upwelling and downwelling shortwave spectral irradiances, which can be used to derive surface and cloud albedo, as well as the irradiance transmitted by clouds. We assess the variability of the Arctic sea ice/snow surface albedo through these aircraft measurements and incorporate this variability into cloud retrievals for SSFR. We then compare COPs as derived from SSFR and MODIS for all suitable aircraft underpasses of the satellites. Finally, the sensitivities of the COPs to surface albedo and solar zenith angle are investigated.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warmack, Robert J. Bruce; Wolf, Dennis A.; Frank, Steven Shane

Various apparatus and methods for smoke detection are disclosed. In one embodiment, a method of training a classifier for a smoke detector comprises inputting sensor data from a plurality of tests into a processor. The sensor data is processed to generate derived signal data corresponding to the test data for respective tests. The derived signal data is assigned into categories comprising at least one fire group and at least one non-fire group. Linear discriminant analysis (LDA) training is performed by the processor. The derived signal data and the assigned categories for the derived signal data are inputs to the LDA training. The output of the LDA training is stored in a computer readable medium, such as in a smoke detector that uses LDA to determine, based on the training, whether present conditions indicate the existence of a fire.
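A minimal sketch of the LDA training step, using scikit-learn's implementation on invented sensor-derived signals; the feature channels and values are assumptions for illustration, not the disclosure's data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)

# Synthetic "derived signal data" from sensor tests (e.g., smoothed
# photoelectric and gas-channel responses). Fire tests trend high,
# nuisance (non-fire) tests trend low.
fire = rng.normal([0.8, 0.6], 0.1, (50, 2))
non_fire = rng.normal([0.2, 0.1], 0.1, (50, 2))
X = np.vstack([fire, non_fire])
y = np.array(["fire"] * 50 + ["non-fire"] * 50)

# LDA training: derived signals plus their assigned categories go in,
# a linear discriminant comes out.
lda = LinearDiscriminantAnalysis().fit(X, y)

# The trained discriminant could then be stored in the detector and
# evaluated against present conditions.
print(lda.predict([[0.75, 0.55]])[0])  # → fire
```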

  1. Human action classification using procrustes shape theory

    NASA Astrophysics Data System (ADS)

    Cho, Wanhyun; Kim, Sangkyoon; Park, Soonyoung; Lee, Myungeun

    2015-02-01

In this paper, we propose a new method that classifies human actions using Procrustes shape theory. First, we extract a pre-shape configuration vector of landmarks from each frame of an image sequence representing an arbitrary human action, and derive the Procrustes fit vector for the pre-shape configuration vector. Second, we extract a set of pre-shape vectors from training samples stored in a database, and compute the Procrustes mean shape vector of these pre-shape vectors. Third, we extract a sequence of pre-shape vectors from the input video and project this sequence onto the tangent space with respect to the pole given by the sequence of mean shape vectors corresponding to a target video. We then calculate the Procrustes distance between the two sequences of projected pre-shape vectors on the tangent space and the mean shape vectors. Finally, we classify the input video into the human action class with the minimum Procrustes distance. We assess the performance of the proposed method on one public dataset, namely the Weizmann human action dataset. Experimental results reveal that the proposed method performs very well on this dataset.
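The minimum-Procrustes-distance classification rule can be sketched for single landmark configurations (the paper operates on whole sequences of pre-shape vectors); the two "action class" mean shapes below are invented for illustration.

```python
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(3)

# Hypothetical mean shapes (2-D landmark configurations) for two action classes.
mean_walk = np.array([[0, 0], [1, 0], [1, 2], [0, 2], [0.5, 3.0]], float)
mean_wave = np.array([[0, 0], [1, 0], [1, 2], [0, 2], [1.5, 2.5]], float)

def classify(shape, class_means):
    """Assign the class whose mean shape has minimum Procrustes disparity."""
    disparities = {name: procrustes(mean, shape)[2] for name, mean in class_means.items()}
    return min(disparities, key=disparities.get)

# A noisy, translated and scaled observation of the "walk" configuration;
# Procrustes analysis removes translation, scale and rotation before comparing.
obs = 2.0 * mean_walk + 5.0 + rng.normal(0, 0.02, mean_walk.shape)
print(classify(obs, {"walk": mean_walk, "wave": mean_wave}))  # → walk
```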

  2. Comparing WSA coronal and solar wind model predictions driven by line-of-sight and vector HMI ADAPT maps

    NASA Astrophysics Data System (ADS)

    Arge, C. N.; Henney, C. J.; Shurkin, K.; Wallace, S.

    2017-12-01

As the primary input to nearly all coronal models, reliable estimates of the global solar photospheric magnetic field distribution are critical for accurate modeling and understanding of solar and heliospheric magnetic fields. The Air Force Data Assimilative Photospheric flux Transport (ADAPT) model generates synchronic (i.e., globally instantaneous) maps by evolving observed solar magnetic flux using relatively well understood transport processes when measurements are not available and then updating modeled flux with new observations (available from both the Earth and the far side of the Sun) using data assimilation methods that rigorously take into account model and observational uncertainties. ADAPT is capable of assimilating line-of-sight and vector magnetic field data from all observatory sources including the expected photospheric vector magnetograms from the Polarimetric and Helioseismic Imager (PHI) on the Solar Orbiter, as well as those generated using helioseismic methods. This paper compares Wang-Sheeley-Arge (WSA) coronal and solar wind modeling results at Earth and STEREO A & B using ADAPT input model maps derived from both line-of-sight and vector SDO/HMI magnetograms that include methods for incorporating observations of a large, newly emerged (July 2010) far-side active region (AR11087).

  3. Circle Hough transform implementation for dots recognition in braille cells

    NASA Astrophysics Data System (ADS)

    Jacinto Gómez, Edwar; Montiel Ariza, Holman; Martínez Sarmiento, Fredy Hernán.

    2017-02-01

This paper presents a technique based on the CHT (Circle Hough Transform) to achieve optical Braille recognition (OBR). Unlike other papers developed around the same topic, this one uses the Hough Transform to perform the recognition and transcription of Braille cells, showing CHT to be an appropriate technique for handling various non-systematic factors that can affect the process, such as the type of paper on which the text to be transcribed is printed, lighting conditions, input image resolution and flaws introduced during the capture process, which is performed with a scanner. Tests are performed on a local database containing text generated by sighted people and transcripts produced by blind people, with the support of the National Institute for Blind People (INCI, for its Spanish acronym) in Colombia.
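The voting step at the heart of the CHT can be sketched for a single known dot radius; production systems would typically use a library implementation such as OpenCV's HoughCircles. The synthetic "Braille dot" geometry below is illustrative.

```python
import numpy as np

def hough_circle_centers(edge_points, radius, shape, n_angles=90):
    """Accumulate votes for circle centres at a known radius (the core of the CHT)."""
    acc = np.zeros(shape)
    thetas = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    for x, y in edge_points:
        # Each edge point votes for every candidate centre lying `radius` away.
        cx = np.rint(x - radius * np.cos(thetas)).astype(int)
        cy = np.rint(y - radius * np.sin(thetas)).astype(int)
        ok = (cx >= 0) & (cx < shape[0]) & (cy >= 0) & (cy < shape[1])
        np.add.at(acc, (cx[ok], cy[ok]), 1)
    return acc

# Synthetic "Braille dot": edge points of a circle of radius 4 centred at (20, 30).
t = np.linspace(0, 2 * np.pi, 60, endpoint=False)
edges = np.column_stack([20 + 4 * np.cos(t), 30 + 4 * np.sin(t)])

acc = hough_circle_centers(edges, radius=4, shape=(50, 50))
peak = np.unravel_index(np.argmax(acc), acc.shape)
print(peak)  # the accumulator peaks at (or next to) the true centre (20, 30)
```

In a full OBR pipeline this voting would run over a small range of radii, and the detected dot centres would then be snapped to the Braille cell grid for transcription.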

  4. MARVIN: a medical research application framework based on open source software.

    PubMed

    Rudolph, Tobias; Puls, Marc; Anderegg, Christoph; Ebert, Lars; Broehan, Martina; Rudin, Adrian; Kowal, Jens

    2008-08-01

    This paper describes the open source framework MARVIN for rapid application development in the field of biomedical and clinical research. MARVIN applications consist of modules that can be plugged together in order to provide the functionality required for a specific experimental scenario. Application modules work on a common patient database that is used to store and organize medical data as well as derived data. MARVIN provides a flexible input/output system with support for many file formats including DICOM, various 2D image formats and surface mesh data. Furthermore, it implements an advanced visualization system and interfaces to a wide range of 3D tracking hardware. Since it uses only highly portable libraries, MARVIN applications run on Unix/Linux, Mac OS X and Microsoft Windows.

  5. Support Routines for In Situ Image Processing

    NASA Technical Reports Server (NTRS)

    Deen, Robert G.; Pariser, Oleg; Yeates, Matthew C.; Lee, Hyun H.; Lorre, Jean

    2013-01-01

    This software consists of a set of application programs that support ground-based image processing for in situ missions. These programs represent a collection of utility routines that perform miscellaneous functions in the context of the ground data system. Each one fulfills some specific need as determined via operational experience. The most distinctive aspect of these programs is that they are integrated into the large in situ image processing system via the PIG (Planetary Image Geometry) library. They work directly with in situ mission data, understanding the appropriate image meta-data fields and updating them properly. The programs themselves are completely multimission; all mission dependencies are handled by PIG. This suite of programs consists of: (1) marscahv: Generates a linearized, epipolar-aligned image given a stereo pair of images. These images are optimized for 1-D stereo correlations, (2) marscheckcm: Compares the camera model in an image label with one derived via kinematics modeling on the ground, (3) marschkovl: Checks the overlaps between a list of images in order to determine which might be stereo pairs. This is useful for non-traditional stereo images like long-baseline or those from an articulating arm camera, (4) marscoordtrans: Translates mosaic coordinates from one form into another, (5) marsdispcompare: Checks a left-to-right stereo disparity image against a right-to-left disparity image to ensure they are consistent with each other, (6) marsdispwarp: Takes one image of a stereo pair and warps it through a disparity map to create a synthetic opposite-eye image. For example, a right-eye image could be transformed to look like it was taken from the left eye via this program, (7) marsfidfinder: Finds fiducial markers in an image by projecting their approximate location and then using correlation to locate the markers to subpixel accuracy. These fiducial markers are small targets attached to the spacecraft surface. This helps verify, or improve, the pointing of in situ cameras, (8) marsinvrange: Inverse of marsrange; given a range file, re-computes an XYZ file that closely matches the original, (9) marsproj: Projects an XYZ coordinate through the camera model, and reports the line/sample coordinates of the point in the image, (10) marsprojfid: Given the output of marsfidfinder, projects the XYZ locations and compares them to the found locations, creating a report showing the fiducial errors in each image, (11) marsrad: Radiometrically corrects an image, (12) marsrelabel: Updates coordinate system or camera model labels in an image, (13) marstiexyz: Given a stereo pair, allows the user to interactively pick a point in each image and reports the XYZ value corresponding to that pair of locations, (14) marsunmosaic: Extracts a single frame from a mosaic, which will be created such that it could have been an input to the original mosaic. Useful for creating simulated input frames using different camera models than the original mosaic used, and (15) merinverter: Uses an inverse lookup table to convert 8-bit telemetered data to its 12-bit original form. Can be used in other missions despite the name.
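
    The disparity-warp idea behind marsdispwarp can be sketched in miniature. This is a hypothetical simplification, assuming epipolar-aligned inputs so that disparity reduces to a pure horizontal shift; the function name and toy data are illustrative, not part of the actual software:

```python
# Sketch of a disparity warp (simplified): shift each left-eye pixel by its
# per-pixel disparity to synthesize an opposite-eye view. Epipolar-aligned
# images make this a pure 1-D shift along each row.

def disparity_warp(left, disparity, fill=0):
    """Warp a 2-D image (list of rows) through a per-pixel disparity map."""
    rows, cols = len(left), len(left[0])
    out = [[fill] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            tc = c - disparity[r][c]          # target column in the other eye
            if 0 <= tc < cols:
                out[r][tc] = left[r][c]
    return out

left = [[10, 20, 30, 40]]
disp = [[1, 1, 1, 1]]                         # uniform 1-pixel disparity
print(disparity_warp(left, disp))             # → [[20, 30, 40, 0]]
```

Pixels shifted out of frame are dropped and unfilled positions keep the fill value; a real implementation would also interpolate sub-pixel disparities.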

  6. Guano-Derived Nutrient Subsidies Drive Food Web Structure in Coastal Ponds.

    PubMed

    Vizzini, Salvatrice; Signa, Geraldina; Mazzola, Antonio

    2016-01-01

    A stable isotope study was carried out seasonally in three coastal ponds (Marinello system, Italy) affected by different gull guano input to investigate the effect of nutrient subsidies on food web structure and dynamics. A marked 15N enrichment occurred in the pond receiving the highest guano input, indicating that gull-derived fertilization (guanotrophication) had a strong localised effect and flowed across trophic levels. The main food web response to guanotrophication was an overall erosion of the benthic pathway in favour of the planktonic. Subsidized primary consumers, mostly deposit feeders, switched their diet according to organic matter source availability. Secondary consumers and, in particular, fish from the guanotrophic pond, acted as couplers of planktonic and benthic pathways and showed an omnivorous trophic behaviour. Food web structure showed substantial variability among ponds and a marked seasonality in the subsidized one: an overall simplification was evident only in summer when guano input maximises its trophic effects, while higher trophic diversity and complexity resulted when guano input was low to moderate.


  8. Combustion-derived substances in deep basins of Puget Sound: historical inputs from fossil fuel and biomass combustion.

    PubMed

    Kuo, Li-Jung; Louchouarn, Patrick; Herbert, Bruce E; Brandenberger, Jill M; Wade, Terry L; Crecelius, Eric

    2011-04-01

    Reconstructions of 250 years of historical inputs of two distinct types of black carbon (soot/graphitic black carbon (GBC) and char-BC) were conducted on sediment cores from two basins of Puget Sound, WA. Signatures of polycyclic aromatic hydrocarbons (PAHs) were also used to support the historical reconstructions of BC to this system. Down-core maxima in GBC and combustion-derived PAHs occurred in the 1940s in the cores from the Puget Sound Main Basin, whereas in Hood Canal such a peak was observed in the 1970s, showing basin-specific differences in inputs of combustion byproducts. This system showed relatively higher inputs from softwood combustion than the northeastern U.S. The historical variations in char-BC concentrations were consistent with shifts in climate indices, suggesting an influence of climate oscillations on wildfire events. Environmental loading of combustion byproducts thus appears as a complex function of urbanization, fuel usage, combustion technology, environmental policies, and climate conditions.

  9. Cost function approach for estimating derived demand for composite wood products

    Treesearch

    T. C. Marcin

    1991-01-01

    A cost function approach was examined, using the concept of duality between production and input factor demands. A translog cost function was used to represent residential construction costs and to derive conditional factor demand equations. Alternative models were derived from the translog cost function by imposing parameter restrictions.
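
    The translog specification referred to above can be written out explicitly; via Shephard's lemma, the conditional factor demands appear as cost-share equations. This is the standard textbook form, not necessarily the exact parameterization used in the study:

```latex
% Translog cost function in input prices p_i and output Q:
\ln C = \alpha_0 + \sum_i \alpha_i \ln p_i
      + \tfrac{1}{2}\sum_i \sum_j \gamma_{ij} \ln p_i \ln p_j
      + \alpha_Q \ln Q

% Shephard's lemma yields the cost shares (conditional factor demands):
S_i = \frac{\partial \ln C}{\partial \ln p_i}
    = \alpha_i + \sum_j \gamma_{ij} \ln p_j

% Parameter restrictions (symmetry, linear homogeneity in prices) of the
% kind imposed to obtain the alternative models:
\gamma_{ij} = \gamma_{ji}, \qquad \sum_i \alpha_i = 1, \qquad \sum_i \gamma_{ij} = 0
```

Imposing subsets of these restrictions collapses the translog into nested special cases (e.g., Cobb-Douglas when all γ_ij = 0), which is how alternative models are derived from the same functional form.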

  10. Image processing and recognition for biological images.

    PubMed

    Uchida, Seiichi

    2013-05-01

    This paper reviews image processing and pattern recognition techniques that are useful for analyzing bioimages. Although the paper does not provide technical details, it conveys the main tasks and the typical tools for handling them. Image processing is a large research area aimed at improving the visibility of an input image and extracting valuable information from it. As its main tasks, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow, and image registration. Image pattern recognition, the technique of classifying an input image into one of a set of predefined classes, is also a large research area; this paper overviews its two main modules, the feature extraction module and the classification module. Throughout the paper, it is emphasized that bioimages are a difficult target even for state-of-the-art image processing and pattern recognition techniques, owing to noise, deformation, and other factors. This paper is intended as a tutorial guide to bridge biology and image processing researchers for further collaboration on this difficult target.
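
    Two of the image processing tasks named above, gray-level transformation and binarization, can be illustrated with minimal sketches (toy nested-list images; illustrative code, not from the paper):

```python
# Gray-level transformation (linear contrast stretch) and global-threshold
# binarization, on images represented as nested lists of gray values.

def contrast_stretch(img, lo=0, hi=255):
    """Linearly map the image's min..max gray range onto lo..hi."""
    flat = [v for row in img for v in row]
    mn, mx = min(flat), max(flat)
    if mn == mx:
        return [[lo] * len(row) for row in img]
    scale = (hi - lo) / (mx - mn)
    return [[round(lo + (v - mn) * scale) for v in row] for row in img]

def binarize(img, threshold):
    """Label each pixel foreground (1) or background (0) by a global threshold."""
    return [[1 if v >= threshold else 0 for v in row] for row in img]

img = [[50, 100], [150, 200]]
print(contrast_stretch(img))   # → [[0, 85], [170, 255]]
print(binarize(img, 120))      # → [[0, 0], [1, 1]]
```

In practice the threshold would be chosen automatically (e.g., Otsu's method) rather than fixed by hand.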

  11. Encrypting Digital Camera with Automatic Encryption Key Deletion

    NASA Technical Reports Server (NTRS)

    Oakley, Ernest C. (Inventor)

    2007-01-01

    A digital video camera includes: an image sensor capable of producing a frame of video data representing the image viewed by the sensor; an image memory for storing video data, such as previously recorded frame data, in a video frame location of the image memory; a read circuit for fetching the previously recorded frame data; an encryption circuit having an encryption-key input connected to receive the previously recorded frame data from the read circuit as an encryption key, an un-encrypted data input connected to receive the frame of video data from the image sensor, and an encrypted data output port; and a write circuit for writing a frame of encrypted video data, received from the encrypted data output port of the encryption circuit, to the memory, overwriting the video frame location that stored the previously recorded frame data.
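
    The patent's key idea, using the previously recorded frame as the encryption key so that overwriting it automatically deletes the key, can be sketched as follows. The abstract does not specify the cipher; XOR is an assumed stand-in for illustration:

```python
# Sketch of frame encryption keyed by the previous frame (XOR assumed as a
# placeholder cipher; the patent does not name one).

def encrypt_frame(new_frame, prev_frame):
    """Encrypt a frame (list of byte values) keyed by the previous frame."""
    return [a ^ b for a, b in zip(new_frame, prev_frame)]

prev = [12, 34, 56, 78]        # previously recorded frame (acts as the key)
new = [90, 12, 34, 56]         # frame arriving from the image sensor
enc = encrypt_frame(new, prev)

# The write circuit overwrites prev's memory location with enc, destroying
# the key -- the "automatic encryption key deletion" of the title.
assert encrypt_frame(enc, prev) == new   # XOR is self-inverse
```

Once the key frame is overwritten, decryption is impossible without an externally retained copy of it, which is the security property the design trades on.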

  12. A novel bioreactor and culture method drives high yields of platelets from stem cells.

    PubMed

    Avanzi, Mauro P; Oluwadara, Oluwasijibomi E; Cushing, Melissa M; Mitchell, Maxwell L; Fischer, Stephen; Mitchell, W Beau

    2016-01-01

    Platelet (PLT) transfusion is the primary treatment for thrombocytopenia. PLTs are obtained exclusively from volunteer donors, and the PLT product has only a 5-day shelf life, which can limit supply and result in PLT shortages. PLTs derived from stem cells could help to fill this clinical need. However, current culture methods yield far too few PLTs for clinical application. To address this need, a defined, serum-free culture method was designed using a novel bioreactor to increase the yield of PLTs from stem cell-derived megakaryocytes. CD34 cells isolated from umbilical cord blood were expanded with a variety of reagents and on a nanofiber membrane using serum-free medium. These cells were then differentiated into megakaryocytic lineage by culturing with thrombopoietin and stem cell factor in serum-free conditions. Polyploidy was induced by addition of Rho kinase inhibitor or actin polymerization inhibitor to the CD41 cells. A novel bioreactor was developed that recapitulated aspects of the marrow vascular niche. Polyploid megakaryocytes that were subjected to flow in the bioreactor extended proPLTs and shed PLTs, as confirmed by light microscopy, fluorescence imaging, and flow cytometry. CD34 cells were expanded 100-fold. CD41 cells were expanded 100-fold. Up to 100 PLTs per input megakaryocyte were produced from the bioreactor, for an overall yield of 10^6 PLTs per input CD34 cell. The PLTs externalized P-selectin after activation. Functional PLTs can be produced ex vivo on a clinically relevant scale using serum-free culture conditions with a novel stepwise approach and an innovative bioreactor.

  13. A system for endobronchial video analysis

    NASA Astrophysics Data System (ADS)

    Byrnes, Patrick D.; Higgins, William E.

    2017-03-01

    Image-guided bronchoscopy is a critical component in the treatment of lung cancer and other pulmonary disorders. During bronchoscopy, a high-resolution endobronchial video stream facilitates guidance through the lungs and allows for visual inspection of a patient's airway mucosal surfaces. Despite the detailed information it contains, little effort has been made to incorporate recorded video into the clinical workflow. Follow-up procedures often required in cancer assessment or asthma treatment could significantly benefit from effectively parsed and summarized video. Tracking diagnostic regions of interest (ROIs) could potentially better equip physicians to detect early airway-wall cancer or improve asthma treatments, such as bronchial thermoplasty. To address this need, we have developed a system for the postoperative analysis of recorded endobronchial video. The system first parses an input video stream into endoscopic shots, derives motion information, and selects salient representative key frames. Next, a semi-automatic method for CT-video registration creates data linkages between a CT-derived airway-tree model and the input video. These data linkages then enable the construction of a CT-video chest model comprised of a bronchoscopy path history (BPH) - defining all airway locations visited during a procedure - and texture-mapping information for rendering registered video frames onto the airway-tree model. A suite of analysis tools is included to visualize and manipulate the extracted data. Video browsing and retrieval is facilitated through a video table of contents (TOC) and a search query interface. The system provides a variety of operational modes and additional functionality, including the ability to define regions of interest. We demonstrate the potential of our system using two human case study examples.
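
    The first parsing step, splitting the input video stream into endoscopic shots, is commonly done with a frame-difference heuristic; a minimal sketch (flat grayscale frames and a fixed threshold, not necessarily the paper's exact method):

```python
# Shot-boundary detection by thresholding the mean absolute difference
# between consecutive frames (frames are flat lists of gray values).

def shot_boundaries(frames, threshold):
    """Return indices i where frames[i] starts a new shot (index 0 included)."""
    cuts = [0]
    for i in range(1, len(frames)):
        diff = sum(abs(a - b) for a, b in zip(frames[i], frames[i - 1]))
        if diff / len(frames[i]) > threshold:
            cuts.append(i)
    return cuts

frames = [[10, 10], [12, 11], [200, 190], [198, 192]]
print(shot_boundaries(frames, 50))   # → [0, 2]
```

A key frame per shot could then be chosen as, say, the frame with the least motion blur, giving the "salient representative key frames" the system selects.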

  14. Real-Time Estimation of Volcanic Ash/SO2 Cloud Height from Combined UV/IR Satellite Observations and Numerical Modeling

    NASA Astrophysics Data System (ADS)

    Vicente, Gilberto A.

    An efficient iterative method has been developed to estimate the vertical profile of SO2 and ash clouds from volcanic eruptions by comparing near real-time satellite observations with numerical modeling outputs. The approach uses UV-based SO2 concentration and IR-based ash cloud images, the volcanic ash transport model PUFF, and wind speed, height and directional information to find the best match between the simulated and the observed displays. The method is computationally fast and is being implemented for operational use at the NOAA Volcanic Ash Advisory Centers (VAACs) in Washington, DC, USA, to support the Federal Aviation Administration (FAA) effort to detect, track and measure volcanic ash cloud heights for air traffic safety and management. The presentation will show the methodology, results, statistical analysis and SO2 and Aerosol Index input products derived from the Ozone Monitoring Instrument (OMI) onboard the NASA EOS/Aura research satellite and from the Global Ozone Monitoring Experiment-2 (GOME-2) instrument on MetOp-A. The volcanic ash products are derived from the AVHRR instruments on NOAA POES-16, 17, 18, and 19 as well as MetOp-A. The presentation will also show how a VAAC volcanic ash analyst interacts with the system, providing initial condition inputs such as the location and time of the volcanic eruption, followed by the automatic real-time tracking of all the satellite data available, subsequent activation of the iterative approach, and the data/product delivery process in numerical and graphical format for operational applications.
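
    The iterative matching described above can be sketched as a search over candidate injection heights, scoring each simulated cloud against the observation. The forward model and scoring below are toy stand-ins for the PUFF model and the real image comparison:

```python
# Grid search over candidate heights: simulate a cloud footprint for each
# height, score it against the observed footprint, keep the best match.

def best_height(observed, simulate, candidate_heights):
    """Return the candidate height whose simulated map best matches observation."""
    def mismatch(sim):
        return sum((a - b) ** 2 for a, b in zip(sim, observed))
    return min(candidate_heights, key=lambda h: mismatch(simulate(h)))

# Toy forward model (hypothetical stand-in for PUFF): the cloud drifts
# farther downwind the higher it is injected.
observed = [0, 0, 1, 1, 0]

def simulate(h):
    sim = [0] * 5
    for i in (h, h + 1):
        if i < 5:
            sim[i] = 1
    return sim

print(best_height(observed, simulate, [0, 1, 2, 3]))   # → 2
```

The operational system iterates the same loop with real wind fields and analyst-supplied eruption location and time as initial conditions.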

  15. Technical Note: Deep learning based MRAC using rapid ultra-short echo time imaging.

    PubMed

    Jang, Hyungseok; Liu, Fang; Zhao, Gengyan; Bradshaw, Tyler; McMillan, Alan B

    2018-05-15

    In this study, we explore the feasibility of a novel framework for MR-based attenuation correction for PET/MR imaging based on deep learning via convolutional neural networks, which enables fully automated and robust estimation of a pseudo CT image based on ultrashort echo time (UTE), fat, and water images obtained by a rapid MR acquisition. MR images for MRAC are acquired using dual echo ramped hybrid encoding (dRHE), where both UTE and out-of-phase echo images are obtained within a short single acquisition (35 sec). Tissue labeling of air, soft tissue, and bone in the UTE image is accomplished via a deep learning network that was pre-trained with T1-weighted MR images. UTE images are used as input to the network, which was trained using labels derived from co-registered CT images. The tissue labels estimated by deep learning are refined by a conditional random field based correction. The soft tissue labels are further separated into fat and water components using the two-point Dixon method. The estimated bone, air, fat, and water images are then assigned appropriate Hounsfield units, resulting in a pseudo CT image for PET attenuation correction. To evaluate the proposed MRAC method, PET/MR imaging of the head was performed on 8 human subjects, where Dice similarity coefficients of the estimated tissue labels and relative PET errors were evaluated through comparison to a registered CT image. Dice coefficients for air (within the head), soft tissue, and bone labels were 0.76±0.03, 0.96±0.006, and 0.88±0.01, respectively. In PET quantification, the proposed MRAC method produced relative PET errors less than 1% within most brain regions. The proposed MRAC method utilizing deep learning with transfer learning and an efficient dRHE acquisition enables reliable PET quantification with accurate and rapid pseudo CT generation.
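
    The final step, assigning Hounsfield units to the estimated tissue labels, can be sketched as a simple lookup. The HU values below are typical textbook values, assumed for illustration rather than taken from the paper:

```python
# Map estimated tissue labels onto Hounsfield units to form the pseudo CT.
# HU values are approximate textbook values (assumption for illustration).

HU = {"air": -1000, "fat": -100, "water": 0, "bone": 700}

def pseudo_ct(label_map):
    """Convert a 2-D map of tissue labels into Hounsfield-unit values."""
    return [[HU[label] for label in row] for row in label_map]

labels = [["air", "fat"], ["water", "bone"]]
print(pseudo_ct(labels))   # → [[-1000, -100], [0, 700]]
```

In the actual pipeline this lookup follows the CRF refinement and Dixon fat/water separation, so each voxel already carries exactly one of the four labels.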

  16. A framework for mapping tree species combining hyperspectral and LiDAR data: Role of selected classifiers and sensor across three spatial scales

    NASA Astrophysics Data System (ADS)

    Ghosh, Aniruddha; Fassnacht, Fabian Ewald; Joshi, P. K.; Koch, Barbara

    2014-02-01

    Knowledge of tree species distribution is important worldwide for sustainable forest management and resource evaluation. The accuracy and information content of species maps produced using remote sensing images vary with scale, sensor (optical, microwave, LiDAR), classification algorithm, verification design and natural conditions like tree age, forest structure and density. Imaging spectroscopy reduces these inaccuracies by making use of the detailed spectral response. However, the scale effect still has a strong influence and cannot be neglected. This study aims to bridge the knowledge gap in understanding the scale effect in imaging spectroscopy when moving from 4 to 30 m pixel size for tree species mapping, keeping in mind that most current and future hyperspectral satellite-based sensors work with spatial resolution around 30 m or more. Two airborne (HyMAP) and one spaceborne (Hyperion) imaging spectroscopy datasets with pixel sizes of 4, 8 and 30 m, respectively, were available to examine the effect of scale over a central European forest. The forest under examination is a typical managed forest with relatively homogeneous stands featuring mostly two canopy layers. Normalized digital surface model (nDSM) derived from LiDAR data was used additionally to examine the effect of height information in tree species mapping. Six different sets of predictor variables (reflectance value of all bands, selected components of a Minimum Noise Fraction (MNF), Vegetation Indices (VI) and each of these sets combined with LiDAR derived height) were explored at each scale. Supervised kernel based (Support Vector Machines) and ensemble based (Random Forest) machine learning algorithms were applied on the dataset to investigate the effect of the classifier. Iterative bootstrap-validation with 100 iterations was performed for classification model building and testing for all the trials.
For scale, analysis of overall classification accuracy and kappa values indicated that 8 m spatial resolution (reaching kappa values of over 0.83) slightly outperformed the results obtained from 4 m for the study area and five tree species under examination. The 30 m resolution Hyperion image produced sound results (kappa values of over 0.70), which in some areas of the test site were comparable with the higher spatial resolution imagery when qualitatively assessing the map outputs. Considering input predictor sets, MNF bands performed best at 4 and 8 m resolution. Optical bands were found to be best for 30 m spatial resolution. Classification with MNF as input predictors produced better visual appearance of tree species patches when compared with reference maps. Based on the analysis, it was concluded that there is no significant effect of height information on tree species classification accuracies for the present framework and study area. Furthermore, in the examined cases there was no single best choice among the two classifiers across scales and predictors. It can be concluded that tree species mapping from imaging spectroscopy for forest sites comparable to the one under investigation is possible with reliable accuracies not only from airborne but also from spaceborne imaging spectroscopy datasets.
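
    The kappa values reported above measure chance-corrected agreement; for reference, Cohen's kappa can be computed from a confusion matrix as follows (standard formula, not code from the study):

```python
# Cohen's kappa: observed agreement corrected for the agreement expected
# by chance, computed from a square confusion matrix.

def cohens_kappa(cm):
    """Kappa from a confusion matrix (rows: reference, cols: predicted)."""
    n = sum(sum(row) for row in cm)
    k = len(cm)
    po = sum(cm[i][i] for i in range(k)) / n                 # observed agreement
    pe = sum(sum(cm[i]) * sum(r[i] for r in cm) for i in range(k)) / n ** 2
    return (po - pe) / (1 - pe)

cm = [[45, 5], [5, 45]]
print(round(cohens_kappa(cm), 2))   # → 0.8
```

A kappa above 0.8, as obtained at 8 m resolution here, is conventionally read as near-perfect agreement, while the Hyperion result above 0.7 is still substantial.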

  17. Transcranial Assessment and Visualization of Acoustic Cavitation: Modeling and Experimental Validation

    PubMed Central

    Clement, Gregory T.; McDannold, Nathan

    2015-01-01

    The interaction of ultrasonically-controlled microbubble oscillations (acoustic cavitation) with tissues and biological media has been shown to induce a wide range of bioeffects that may have significant impact to therapy and diagnosis of central nervous system diseases and disorders. However, the inherently non-linear microbubble oscillations combined with the micrometer and microsecond scales involved in these interactions and the limited methods to assess and visualize them transcranially hinder both their optimal use and translation to the clinic. To overcome these challenges, we present a noninvasive and clinically relevant framework that combines numerical simulations with multimodality imaging to assess and visualize the microbubble oscillations transcranially. In the present work, acoustic cavitation was studied with an integrated US and MR imaging guided clinical FUS system in non-human primates. This multimodality imaging system allowed us to concurrently induce and visualize acoustic cavitation transcranially. A high-resolution brain CT-scan that allowed us to determine the head acoustic properties (density, speed of sound, and absorption) was also co-registered to the US and MR images. The derived acoustic properties and the location of the targets that were determined by the 3D-CT scans and the post-sonication MRI respectively were then used as inputs to two- and three-dimensional Finite Difference Time Domain (2D, 3D-FDTD) simulations that matched the experimental conditions and geometry. At the experimentally-determined target locations, synthetic point sources with pressure amplitude traces derived by either a Gaussian function or the output of a microbubble dynamics model were numerically excited and propagated through the skull towards a virtual US imaging array.
Then, using passive acoustic mapping that was refined to incorporate variable speed of sound, we assessed the losses and aberrations induced by the skull as a function of the acoustic emissions recorded by the virtual US imaging array. Next, the simulated passive acoustic maps (PAMs) were compared to experimental PAMs. Finally, using clinical CT and MR imaging as input to the numerical simulations, we evaluated the clinical utility of the proposed framework. The simulations indicated that the diverging pressure waves propagating through the skull lose 95% of their intensity as compared to propagation in water-only. Further, the incorporation of a variable speed of sound to the PAM back-projection algorithm indeed corrected the aberrations introduced by the skull and substantially improved the resolution. More than 94% agreement in the FWHM of the axial and transverse line profiles between the simulations incorporating microbubble emissions and experimentally-determined PAMs was observed. Finally, the results of the 2D simulations that used clinical datasets are promising for the prospective use of transcranial PAM in a human with an 82 mm aperture broadband linear array. Incorporation of variable speed of sound to the PAM back-projection algorithm appeared capable of correcting the aberrations introduced by the human skull. These results suggest that this integrated approach can provide a physically accurate and clinically-relevant framework for developing a comprehensive treatment guidance for therapeutic applications of acoustic cavitation in the brain. Ultimately it may enable the quantification of the emissions and provide more control over this nonlinear process. PMID:25546857

  18. Using NDVI to measure precipitation in semi-arid landscapes

    USGS Publications Warehouse

    Birtwhistle, Amy N.; Laituri, Melinda; Bledsoe, Brian; Friedman, Jonathan M.

    2016-01-01

    Measuring precipitation in semi-arid landscapes is important for understanding the processes related to rainfall and run-off; however, measuring precipitation accurately can often be challenging especially within remote regions where precipitation instruments are scarce. Typically, rain-gauges are sparsely distributed and research comparing rain-gauge and RADAR precipitation estimates reveal that RADAR data are often misleading, especially for monsoon season convective storms. This study investigates an alternative way to map the spatial and temporal variation of precipitation inputs along ephemeral stream channels using Normalized Difference Vegetation Index (NDVI) derived from Landsat Thematic Mapper imagery. NDVI values from 26 years of pre- and post-monsoon season Landsat imagery were derived across Yuma Proving Ground (YPG), a region covering 3,367 km2 of semiarid landscapes in southwestern Arizona, USA. The change in NDVI from a pre-to post-monsoon season image along ephemeral stream channels explained 73% of the variance in annual monsoonal precipitation totals from a nearby rain-gauge. In addition, large seasonal changes in NDVI along channels were useful in determining when and where flow events have occurred.
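
    NDVI, the index used above as a precipitation proxy, is the normalized difference of near-infrared and red reflectance; a minimal sketch with toy reflectance values (illustrative, not from the study):

```python
# NDVI per pixel from red and near-infrared reflectances; the seasonal
# signal used above is the post-monsoon minus pre-monsoon change.

def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

# Seasonal change along an ephemeral channel (toy values):
pre = ndvi(0.30, 0.20)    # pre-monsoon: sparse green-up
post = ndvi(0.45, 0.15)   # post-monsoon: vegetation response to flow
print(round(post - pre, 2))   # → 0.3
```

It is this per-pixel change, aggregated along channel reaches, that explained 73% of the variance in monsoonal rain-gauge totals in the study.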

  19. Color image enhancement based on particle swarm optimization with Gaussian mixture

    NASA Astrophysics Data System (ADS)

    Kattakkalil Subhashdas, Shibudas; Choi, Bong-Seok; Yoo, Ji-Hoon; Ha, Yeong-Ho

    2015-01-01

    This paper proposes a Gaussian-mixture-based image enhancement method that uses particle swarm optimization (PSO) to gain an edge over other contemporary methods. The proposed method uses a Gaussian mixture model to model the lightness histogram of the input image in CIEL*a*b* space. The intersection points of the Gaussian components in the model are used to partition the lightness histogram. The enhanced lightness image is generated by transforming the lightness values in each interval to an appropriate output interval according to a transformation function that depends on the PSO-optimized parameters, the weight and standard deviation of each Gaussian component, and the cumulative distribution of the input histogram interval. In addition, chroma compensation is applied to the resulting image to reduce washed-out appearance. Experimental results show that the proposed method produces a better enhanced image compared to traditional methods. Moreover, the enhanced image is free from several side effects such as washed-out appearance, information loss and gradation artifacts.
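
    The histogram-partitioning step, finding where neighbouring Gaussian components intersect, can be sketched numerically. This 1-D grid search is an illustrative stand-in; the paper's exact procedure may differ:

```python
# Locate the intersection of two weighted Gaussian components by searching
# for the x where their densities are closest.

import math

def gauss(x, w, mu, sigma):
    """Weighted Gaussian density."""
    return w * math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def intersection(g1, g2, lo, hi, steps=10000):
    """Grid-search x in [lo, hi] where the two components' densities meet."""
    xs = (lo + (hi - lo) * i / steps for i in range(steps + 1))
    return min(xs, key=lambda x: abs(gauss(x, *g1) - gauss(x, *g2)))

# Two equal-weight components with equal sigma intersect midway between means.
print(intersection((0.5, 30, 10), (0.5, 70, 10), 30, 70))   # → 50.0
```

Each interval between successive intersection points then gets its own PSO-tuned transformation in the enhancement method described above.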

  20. Fast single image dehazing based on image fusion

    NASA Astrophysics Data System (ADS)

    Liu, Haibo; Yang, Jie; Wu, Zhengping; Zhang, Qingnian

    2015-01-01

    Images captured in foggy weather conditions often suffer faded colors and reduced contrast of the observed objects. An efficient image fusion method is proposed to remove haze from a single input image. First, the initial medium transmission is estimated based on the dark channel prior. Second, the method adopts an assumption that the degradation level affected by haze is the same within each region, which is similar to the Retinex theory, and uses a simple Gaussian filter to get the coarse medium transmission. Then, pixel-level fusion is achieved between the initial medium transmission and the coarse medium transmission. The proposed method can recover a high-quality haze-free image based on the physical model, and the complexity of the proposed method is only a linear function of the number of input image pixels. Experimental results demonstrate that the proposed method allows a very fast implementation and achieves better restoration of visibility and color fidelity compared to some state-of-the-art methods.
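
    The dark channel prior used in the first step takes, for each pixel, the minimum intensity over the color channels within a local window; a 1-D sketch with toy pixels and a simplified window (illustrative, not the paper's implementation):

```python
# Dark channel prior (1-D sketch): per-pixel min over RGB, then a local
# window minimum. Haze-free regions have a dark channel near zero; large
# values indicate haze, which drives the transmission estimate.

def dark_channel(pixels, window=3):
    """pixels: list of (r, g, b); returns per-pixel local min over channels."""
    mins = [min(p) for p in pixels]
    half = window // 2
    return [min(mins[max(0, i - half):i + half + 1]) for i in range(len(mins))]

scene = [(200, 180, 210), (30, 40, 20), (90, 95, 80)]
print(dark_channel(scene))   # → [20, 20, 20]
```

The initial transmission is then derived from this channel, and the fusion with the Gaussian-filtered coarse transmission smooths it region by region.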

  1. SVM Pixel Classification on Colour Image Segmentation

    NASA Astrophysics Data System (ADS)

    Barui, Subhrajit; Latha, S.; Samiappan, Dhanalakshmi; Muthu, P.

    2018-04-01

    The aim of image segmentation is to simplify the representation of an image by clustering pixels into something meaningful to analyze. Segmentation is typically used to locate boundaries and curves in an image, precisely labeling every pixel so that each pixel has an independent identity. SVM pixel classification on colour image segmentation is the topic highlighted in this paper. It holds useful application in the fields of concept-based image retrieval, machine vision, medical imaging and object detection. The process is accomplished step by step. At first we need to recognize the type of colour and the texture used as an input to the SVM classifier. These inputs are extracted via a local spatial similarity measure model and a steerable filter, also known as a Gabor filter. It is then trained by using FCM (Fuzzy C-Means). Both the pixel-level information of the image and the discriminative ability of the SVM classifier are combined to form the final image. The method yields a well-developed segmented image, with increased quality and faster processing compared with the other segmentation methods proposed earlier. One recent application is the Light L16 camera.

  2. The Microwave Radiative Properties of Falling Snow Derived from Nonspherical Ice Particle Models. Part II: Initial Testing Using Radar, Radiometer and In Situ Observations

    NASA Technical Reports Server (NTRS)

    Olson, William S.; Tian, Lin; Grecu, Mircea; Kuo, Kwo-Sen; Johnson, Benjamin; Heymsfield, Andrew J.; Bansemer, Aaron; Heymsfield, Gerald M.; Wang, James R.; Meneghini, Robert

    2016-01-01

    In this study, two different particle models describing the structure and electromagnetic properties of snow are developed and evaluated for potential use in satellite combined radar-radiometer precipitation estimation algorithms. In the first model, snow particles are assumed to be homogeneous ice-air spheres with single-scattering properties derived from Mie theory. In the second model, snow particles are created by simulating the self-collection of pristine ice crystals into aggregate particles of different sizes, using different numbers and habits of the collected component crystals. Single-scattering properties of the resulting nonspherical snow particles are determined using the discrete dipole approximation. The size-distribution-integrated scattering properties of the spherical and nonspherical snow particles are incorporated into a dual-wavelength radar profiling algorithm that is applied to 14- and 34-GHz observations of stratiform precipitation from the ER-2 aircraft-borne High-Altitude Imaging Wind and Rain Airborne Profiler (HIWRAP) radar. The retrieved ice precipitation profiles are then input to a forward radiative transfer calculation in an attempt to simulate coincident radiance observations from the Conical Scanning Millimeter-Wave Imaging Radiometer (CoSMIR). Much greater consistency between the simulated and observed CoSMIR radiances is obtained using estimated profiles that are based upon the nonspherical crystal/aggregate snow particle model. Despite this greater consistency, there remain some discrepancies between the higher moments of the HIWRAP-retrieved precipitation size distributions and in situ distributions derived from microphysics probe observations obtained from Citation aircraft underflights of the ER-2. These discrepancies can only be eliminated if a subset of lower-density crystal/aggregate snow particles is assumed in the radar algorithm and in the interpretation of the in situ data.

  3. THE SPITZER SURVEY OF STELLAR STRUCTURE IN GALAXIES (S{sup 4}G): MULTI-COMPONENT DECOMPOSITION STRATEGIES AND DATA RELEASE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salo, Heikki; Laurikainen, Eija; Laine, Jarkko

    The Spitzer Survey of Stellar Structure in Galaxies (S{sup 4}G) is a deep 3.6 and 4.5 μm imaging survey of 2352 nearby (<40 Mpc) galaxies. We describe the S{sup 4}G data analysis pipeline 4, which is dedicated to two-dimensional structural surface brightness decompositions of 3.6 μm images, using GALFIT3.0. Besides automatic 1-component Sérsic fits, and 2-component Sérsic bulge + exponential disk fits, we present human-supervised multi-component decompositions, which include, when judged appropriate, a central point source, bulge, disk, and bar components. Comparison of the fitted parameters indicates that multi-component models are needed to obtain reliable estimates for the bulge Sérsic index and bulge-to-total light ratio (B/T), confirming earlier results. Here, we describe the preparations of input data done for decompositions, give examples of our decomposition strategy, and describe the data products released via IRSA and via our web page (www.oulu.fi/astronomy/S4G-PIPELINE4/MAIN). These products include all the input data and decomposition files in electronic form, making it easy to extend the decompositions to suit specific science purposes. We also provide our IDL-based visualization tools (GALFIDL) developed for displaying/running GALFIT-decompositions, as well as our mask editing procedure (MASK-EDIT) used in data preparation. A detailed analysis of the bulge, disk, and bar parameters derived from multi-component decompositions will be published separately.

  4. Comparison of NMR simulations of porous media derived from analytical and voxelized representations.

    PubMed

    Jin, Guodong; Torres-Verdín, Carlos; Toumelin, Emmanuel

    2009-10-01

    We develop and compare two formulations of the random-walk method, grain-based and voxel-based, to simulate the nuclear-magnetic-resonance (NMR) response of fluids contained in various models of porous media. The grain-based approach uses a spherical grain pack as input, where the solid surface is analytically defined without approximation. In the voxel-based approach, the input is a computer-tomography or computer-generated image of reconstructed porous media. Implementation of the two approaches is largely the same, except for the representation of the porous media. For comparison, both approaches are applied to various analytical and digitized models of porous media: an isolated spherical pore, simple cubic packing of spheres, and random packings of monodisperse and polydisperse spheres. We find that spin magnetization decays much faster in the digitized models than in their analytical counterparts. The difference in decay rate relates to the overestimation of surface area due to the discretization of the sample; it cannot be eliminated even as the voxel size decreases. However, once the effect of the surface-area increase is accounted for in the simulation of surface relaxation, good quantitative agreement is found between the two approaches. Different grain or pore shapes entail different rates of increase of surface area, and we therefore emphasize that the value of the "surface-area-corrected" coefficient may not be universal. Using an X-ray CT image of a Fontainebleau rock sample, we show that voxel size has a significant effect on the calculated surface area and, therefore, on the numerically simulated magnetization response.
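    A minimal grain-based sketch of the random-walk idea for the isolated spherical pore (the case with an analytically defined surface) follows. The pore radius, step size, and surface "kill" probability are illustrative, not the paper's calibrated values; walkers that hit the wall either relax (are killed) or remain in place as a crude reflecting boundary.

```python
import math
import random

def simulate_nmr_decay(radius=1.0, n_walkers=2000, n_steps=300,
                       step=0.1, kill_prob=0.5, seed=42):
    """Random-walk magnetization decay in an isolated spherical pore.
    Returns the surviving-walker fraction (normalized magnetization)
    after each step."""
    rng = random.Random(seed)
    walkers = [(0.0, 0.0, 0.0) for _ in range(n_walkers)]
    alive = [True] * n_walkers
    decay = []
    for _ in range(n_steps):
        for i, (x, y, z) in enumerate(walkers):
            if not alive[i]:
                continue
            # isotropic random step direction
            theta = math.acos(2.0 * rng.random() - 1.0)
            phi = 2.0 * math.pi * rng.random()
            nx = x + step * math.sin(theta) * math.cos(phi)
            ny = y + step * math.sin(theta) * math.sin(phi)
            nz = z + step * math.cos(theta)
            if math.sqrt(nx * nx + ny * ny + nz * nz) >= radius:
                # wall contact: surface relaxation with probability kill_prob,
                # otherwise the walker stays put (crude reflection)
                if rng.random() < kill_prob:
                    alive[i] = False
                continue
            walkers[i] = (nx, ny, nz)
        decay.append(sum(alive) / n_walkers)
    return decay
```

    A voxel-based variant would replace only the wall test with a lookup into a discretized pore image, which is exactly where the surface-area overestimation discussed above enters.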

  5. Towards integrated modelling: full image simulations for WEAVE

    NASA Astrophysics Data System (ADS)

    Dalton, Gavin; Ham, Sun Jeong; Trager, Scott; Abrams, Don Carlos; Bonifacio, Piercarlo; Aguerri, J. A. L.; Middleton, Kevin; Benn, Chris; Rogers, Kevin; Stuik, Remko; Carrasco, Esperanza; Vallenari, Antonella; Jin, Shoko; Lewis, Jim

    2016-08-01

    We present an integrated end-to-end simulation of the spectral images that will be obtained by the WEAVE spectrograph, which aims to include full modelling of all effects from the top of the atmosphere to the detector. These data are based on input spectra from a combination of library spectra and synthetic models, and will be used to provide inputs for an end-to-end test of the full WEAVE data pipeline and archive systems, prior to first light of the instrument.
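    A toy version of one step in such an end-to-end chain is sketched below: an input spectrum is converted to detector counts through a combined atmosphere-plus-instrument throughput, with photon noise (Gaussian approximation) and read noise added. All names, values, and the noise model are illustrative assumptions, not the WEAVE pipeline's actual interfaces.

```python
import random

def simulate_spectrum(flux, throughput, exptime, read_noise=3.0, seed=1):
    """Toy end-to-end step: flux (photons/s per pixel at the top of the
    atmosphere) -> noisy detector counts, per spectral pixel."""
    rng = random.Random(seed)
    out = []
    for f, t in zip(flux, throughput):
        expected = f * t * exptime  # mean detected photons in this pixel
        # Gaussian approximation to Poisson photon noise, plus read noise
        noisy = rng.gauss(expected, expected ** 0.5) + rng.gauss(0.0, read_noise)
        out.append(noisy)
    return out
```

    A full simulator would chain many such stages (atmospheric dispersion, fibre throughput, PSF, detector cosmetics), but each stage has this same shape: a deterministic transform followed by a noise model.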

  6. Global Swath and Gridded Data Tiling

    NASA Technical Reports Server (NTRS)

    Thompson, Charles K.

    2012-01-01

    This software generates cylindrically projected "tiles" of swath-based or gridded satellite data for the purpose of dynamically generating high-resolution global images covering various time periods, scaling ranges, and colors. It reconstructs a global image given a set of tiles covering a particular time range, scaling values, and a color table. The program is configurable in terms of tile size, spatial resolution, format of input data, location of input data (local or distributed), number of processes run in parallel, and data conditioning.
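    The core indexing operation in such a cylindrically projected tiling scheme can be sketched as follows; the 10-degree tile size and the row/column convention (row 0 at the north pole) are illustrative assumptions, not this software's actual configuration.

```python
def latlon_to_tile(lat, lon, tile_deg=10.0):
    """Map a latitude/longitude to (row, col) tile indices on an
    equirectangular (cylindrical) global grid. Row 0 starts at 90N,
    column 0 at 180W; poles and the date line are clamped in-range."""
    rows = int(180 / tile_deg)
    cols = int(360 / tile_deg)
    row = int((90.0 - lat) // tile_deg)
    col = int((lon + 180.0) // tile_deg)
    return (min(max(row, 0), rows - 1), min(max(col, 0), cols - 1))
```

    Reconstructing a global image is then a matter of pasting each tile at offset (row * tile_pixels, col * tile_pixels).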

  7. Microlens array processor with programmable weight mask and direct optical input

    NASA Astrophysics Data System (ADS)

    Schmid, Volker R.; Lueder, Ernst H.; Bader, Gerhard; Maier, Gert; Siegordner, Jochen

    1999-03-01

    We present an optical feature extraction system with a microlens array processor. The system is suitable for online implementation of a variety of transforms such as the Walsh transform and DCT. Operating with incoherent light, our processor accepts direct optical input. Employing a sandwich-like architecture, we obtain a very compact design of the optical system. The key elements of the microlens array processor are a square array of 15 X 15 spherical microlenses on acrylic substrate and a spatial light modulator as transmissive mask. The light distribution behind the mask is imaged onto the pixels of a customized a-Si image sensor with adjustable gain. We obtain one output sample for each microlens image and its corresponding weight mask area as summation of the transmitted intensity within one sensor pixel. The resulting architecture is very compact and robust like a conventional camera lens while incorporating a high degree of parallelism. We successfully demonstrate a Walsh transform into the spatial frequency domain as well as the implementation of a discrete cosine transform with digitized gray values. We provide results showing the transformation performance for both synthetic image patterns and images of natural texture samples. The extracted frequency features are suitable for neural classification of the input image. Other transforms and correlations can be implemented in real-time allowing adaptive optical signal processing.
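    The mask-and-integrate principle can be sketched numerically: each output sample is the image multiplied by one Walsh weight mask and summed, as a sensor pixel would integrate the transmitted intensity. This digital sketch uses signed (+1/-1) Hadamard masks for simplicity; a real incoherent system must encode negative weights differently (e.g., via offset or differential masks), so treat this as an idealized model.

```python
def hadamard(n):
    """Sylvester-construction Hadamard matrix (n must be a power of two)."""
    H = [[1]]
    while len(H) < n:
        H = ([row + row for row in H] +
             [row + [-v for v in row] for row in H])
    return H

def optical_walsh(image):
    """2-D Walsh transform computed mask-by-mask: output (u, v) is the sum
    of image * mask, mimicking one microlens image integrated by one
    sensor pixel behind its weight-mask area."""
    n = len(image)
    H = hadamard(n)
    return [[sum(H[u][x] * image[x][y] * H[v][y]
                 for x in range(n) for y in range(n))
             for v in range(n)] for u in range(n)]
```

    A constant (DC-only) input concentrates all energy in the (0, 0) coefficient, which is a convenient self-test for the mask set.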

  8. Predicting Cortical Dark/Bright Asymmetries from Natural Image Statistics and Early Visual Transforms

    PubMed Central

    Cooper, Emily A.; Norcia, Anthony M.

    2015-01-01

    The nervous system has evolved in an environment with structure and predictability. One of the ubiquitous principles of sensory systems is the creation of circuits that capitalize on this predictability. Previous work has identified predictable non-uniformities in the distributions of basic visual features in natural images that are relevant to the encoding tasks of the visual system. Here, we report that the well-established statistical distributions of visual features -- such as visual contrast, spatial scale, and depth -- differ between bright and dark image components. Following this analysis, we go on to trace how these differences in natural images translate into different patterns of cortical input that arise from the separate bright (ON) and dark (OFF) pathways originating in the retina. We use models of these early visual pathways to transform natural images into statistical patterns of cortical input. The models include the receptive fields and non-linear response properties of the magnocellular (M) and parvocellular (P) pathways, with their ON and OFF pathway divisions. The results indicate that there are regularities in visual cortical input beyond those that have previously been appreciated from the direct analysis of natural images. In particular, several dark/bright asymmetries provide a potential account for recently discovered asymmetries in how the brain processes visual features, such as violations of classic energy-type models. On the basis of our analysis, we expect that the dark/bright dichotomy in natural images plays a key role in the generation of both cortical and perceptual asymmetries. PMID:26020624
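    The ON/OFF pathway division that these models rely on amounts to half-wave rectification of a signed contrast signal into separate bright and dark channels. A minimal sketch (illustrative only; the paper's models additionally include M/P receptive fields and nonlinear response functions):

```python
def on_off_split(contrast):
    """Split a signed contrast signal into half-wave-rectified ON (bright)
    and OFF (dark) channels, as in the retinal pathway division."""
    on = [max(c, 0.0) for c in contrast]
    off = [max(-c, 0.0) for c in contrast]
    return on, off
```

    Any asymmetry between the statistics of the two returned channels is exactly the kind of dark/bright regularity the analysis above traces into cortical input.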

  9. Multi-center prediction of hemorrhagic transformation in acute ischemic stroke using permeability imaging features.

    PubMed

    Scalzo, Fabien; Alger, Jeffry R; Hu, Xiao; Saver, Jeffrey L; Dani, Krishna A; Muir, Keith W; Demchuk, Andrew M; Coutts, Shelagh B; Luby, Marie; Warach, Steven; Liebeskind, David S

    2013-07-01

    Permeability images derived from magnetic resonance (MR) perfusion images are sensitive to blood-brain barrier derangement of the brain tissue and have been shown to correlate with subsequent development of hemorrhagic transformation (HT) in acute ischemic stroke. This paper presents a multi-center retrospective study that evaluates the predictive power in terms of HT of six permeability MRI measures including contrast slope (CS), final contrast (FC), maximum peak bolus concentration (MPB), peak bolus area (PB), relative recirculation (rR), and percentage recovery (%R). Dynamic T2*-weighted perfusion MR images were collected from 263 acute ischemic stroke patients from four medical centers. An essential aspect of this study is to exploit a classifier-based framework to automatically identify predictive patterns in the overall intensity distribution of the permeability maps. The model is based on normalized intensity histograms that are used as input features to the predictive model. Linear and nonlinear predictive models are evaluated using a cross-validation to measure generalization power on new patients and a comparative analysis is provided for the different types of parameters. Results demonstrate that perfusion imaging in acute ischemic stroke can predict HT with an average accuracy of more than 85% using a predictive model based on a nonlinear regression model. Results also indicate that the permeability feature based on the percentage of recovery performs significantly better than the other features. This novel model may be used to refine treatment decisions in acute stroke. Copyright © 2013 Elsevier Inc. All rights reserved.
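    The feature-extraction step described above, a normalized intensity histogram fed to the classifier, can be sketched as follows; the bin count and intensity range are illustrative assumptions, not the study's actual settings.

```python
def histogram_features(intensities, n_bins=32, lo=0.0, hi=1.0):
    """Normalized intensity histogram of a permeability map, used as the
    input feature vector for the predictive model. Values are clamped
    into [lo, hi] and the histogram sums to 1."""
    counts = [0] * n_bins
    for v in intensities:
        b = int((v - lo) / (hi - lo) * n_bins)
        b = max(0, min(b, n_bins - 1))  # clamp edge values into range
        counts[b] += 1
    total = sum(counts) or 1
    return [c / total for c in counts]
```

    Any standard linear or nonlinear regressor can then be trained on these fixed-length vectors, which is what makes the histogram representation convenient across scanners and centers.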

  10. Ultramap v3 - a Revolution in Aerial Photogrammetry

    NASA Astrophysics Data System (ADS)

    Reitinger, B.; Sormann, M.; Zebedin, L.; Schachinger, B.; Hoefler, M.; Tomasi, R.; Lamperter, M.; Gruber, B.; Schiester, G.; Kobald, M.; Unger, M.; Klaus, A.; Bernoegger, S.; Karner, K.; Wiechert, A.; Ponticelli, M.; Gruber, M.

    2012-07-01

    In recent years, Microsoft has driven innovation in the aerial photogrammetry community. Besides the market-leading camera technology, UltraMap has grown into an outstanding photogrammetric workflow system which enables users to work effectively with large digital aerial image blocks in a highly automated way. The best example is the project-based color balancing approach, which automatically balances images to a homogeneous block. UltraMap V3 continues this innovation and offers a revolution in terms of ortho processing. A fully automated dense matching module produces high-precision digital surface models (DSMs), which are calculated either on CPUs or on GPUs using a distributed processing framework. By applying constrained filtering algorithms, a digital terrain model can be derived, which in turn can be used for fully automated traditional ortho texturing. With knowledge of the underlying geometry, seamlines can be generated automatically by applying cost functions that minimize visually disturbing artifacts. By exploiting the generated DSM information, a DSMOrtho is created using the balanced input images. Again, seamlines are detected automatically, resulting in an automatically balanced ortho mosaic. Interactive block-based radiometric adjustments lead to a high-quality ortho product based on UltraCam imagery. UltraMap v3 is the first fully integrated and interactive solution for making the best use of UltraCam images to deliver DSM and ortho imagery.

  11. A Web Browsing System by Eye-gaze Input

    NASA Astrophysics Data System (ADS)

    Abe, Kiyohiko; Owada, Kosuke; Ohi, Shoichi; Ohyama, Minoru

    We have developed an eye-gaze input system for people with severe physical disabilities, such as amyotrophic lateral sclerosis (ALS) patients. This system utilizes a personal computer and a home video camera to detect eye-gaze under natural light. The system detects both vertical and horizontal eye-gaze by simple image analysis, and does not require special image processing units or sensors. We also developed a platform for eye-gaze input based on our system. In this paper, we propose a new web browsing system for physically disabled computer users as an application of this platform. The proposed web browsing system uses a method of direct indicator selection. The method categorizes indicators by their function, and the indicators are organized hierarchically; users select the desired function by switching between indicator groups. The system also analyzes the locations of selectable objects on a web page, such as hyperlinks, radio buttons, and edit boxes. It stores the locations of these objects so that the mouse cursor can skip directly to the next candidate input object, enabling web browsing at a faster pace.
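    The cursor-skipping behaviour described above reduces, at its core, to snapping the gaze-driven cursor to the nearest stored selectable object. A minimal sketch (coordinates and object representation are illustrative assumptions):

```python
def snap_to_nearest(gaze, objects):
    """Snap the cursor to the nearest selectable page object (hyperlink,
    radio button, edit box, ...), given the gaze point and the stored
    object locations as (x, y) tuples."""
    return min(objects,
               key=lambda o: (o[0] - gaze[0]) ** 2 + (o[1] - gaze[1]) ** 2)
```

    Because eye-gaze estimates are noisy, snapping to a discrete set of targets trades pointing precision for selection speed, which is the design choice this browser exploits.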

  12. An optimized color transformation for the analysis of digital images of hematoxylin & eosin stained slides.

    PubMed

    Zarella, Mark D; Breen, David E; Plagov, Andrei; Garcia, Fernando U

    2015-01-01

    Hematoxylin and eosin (H&E) staining is ubiquitous in pathology practice and research. As digital pathology has evolved, the reliance on quantitative methods that make use of H&E images has similarly expanded. For example, cell counting and nuclear morphometry rely on the accurate demarcation of nuclei from other structures and each other. One of the major obstacles to quantitative analysis of H&E images is the high degree of variability observed between different samples and different laboratories. In an effort to characterize this variability, as well as to provide a substrate that can potentially mitigate this factor in quantitative image analysis, we developed a technique to project H&E images into an optimized space more appropriate for many image analysis procedures. We used a decision tree-based support vector machine learning algorithm to classify 44 H&E stained whole slide images of resected breast tumors according to the histological structures that are present. This procedure takes an H&E image as an input and produces a classification map of the image that predicts the likelihood of a pixel belonging to any one of a set of user-defined structures (e.g., cytoplasm, stroma). By reducing these maps into their constituent pixels in color space, an optimal reference vector is obtained for each structure, which identifies the color attributes that maximally distinguish one structure from other elements in the image. We show that tissue structures can be identified using this semi-automated technique. By comparing structure centroids across different images, we obtained a quantitative depiction of H&E variability for each structure. This measurement can potentially be utilized in the laboratory to help calibrate daily staining or identify troublesome slides. Moreover, by aligning reference vectors derived from this technique, images can be transformed in a way that standardizes their color properties and makes them more amenable to image processing.
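    The reference-vector idea can be sketched as follows: compute a per-structure color centroid, then assign pixels by cosine similarity in RGB space. This is a simplified stand-in for the paper's classifier-driven pipeline, and the structure names and colors in the test are hypothetical.

```python
import math

def reference_vector(pixels):
    """Mean RGB of the pixels assigned to one structure: its color centroid."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

def classify_pixel(pixel, refs):
    """Assign a pixel to the structure whose reference vector it best
    matches under cosine similarity in RGB space. refs: name -> vector."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)
    return max(refs, key=lambda name: cos(pixel, refs[name]))
```

    Comparing centroids of the same structure across slides gives the stain-variability measurement described above, and aligning the reference vectors gives the standardizing color transform.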

  13. Robust photometric invariant features from the color tensor.

    PubMed

    van de Weijer, Joost; Gevers, Theo; Smeulders, Arnold W M

    2006-01-01

    Luminance-based features are widely used as low-level input for computer vision applications, even when color data is available. The extension of feature detection to the color domain prevents information loss due to isoluminance and allows us to exploit the photometric information. To fully exploit the extra information in the color data, the vector nature of color data has to be taken into account and a sound framework is needed to combine feature and photometric invariance theory. In this paper, we focus on the structure tensor, or color tensor, which adequately handles the vector nature of color images. Further, we combine the features based on the color tensor with photometric invariant derivatives to arrive at photometric invariant features. We circumvent the drawback of unstable photometric invariants by deriving an uncertainty measure to accompany the photometric invariant derivatives. The uncertainty is incorporated in the color tensor, hereby allowing the computation of robust photometric invariant features. The combination of the photometric invariance theory and tensor-based features allows for detection of a variety of features such as photometric invariant edges, corners, optical flow, and curvature. The proposed features are tested for noise characteristics and robustness to photometric changes. Experiments show that the proposed features are robust to scene incidental events and that the proposed uncertainty measure improves the applicability of full invariants.
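    The central object here, the color (structure) tensor, can be sketched with a few lines of numpy: per-channel spatial gradients are combined as summed outer products, so opposing channel gradients (isoluminant edges) reinforce rather than cancel. This is a minimal sketch of the tensor itself, without the photometric-invariant derivatives or the uncertainty weighting the paper adds.

```python
import numpy as np

def color_structure_tensor(img):
    """Color tensor components of an H x W x 3 float image: sum the
    per-channel gradient products instead of averaging channels into
    luminance first, preserving the vector nature of color edges."""
    gx = np.gradient(img, axis=1)   # derivative along x (columns)
    gy = np.gradient(img, axis=0)   # derivative along y (rows)
    gxx = (gx * gx).sum(axis=2)     # summed over the three color channels
    gyy = (gy * gy).sum(axis=2)
    gxy = (gx * gy).sum(axis=2)
    return gxx, gyy, gxy
```

    Edge strength and orientation then follow from the tensor's eigenvalues, and corner detectors such as Harris use combinations of gxx, gyy, and gxy directly.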

  14. Ocean color remote sensing of turbid plumes in the southern California coastal waters during storm events

    NASA Astrophysics Data System (ADS)

    Lahet, Florence; Stramski, Dariusz

    2007-09-01

    Water-leaving radiance data obtained from MODIS-Aqua satellite images at spatial resolution of 250 m (band 1 at 645 nm) and 500 m (band 4 at 555 nm) were used to analyze the correlation between plume area and rainfall during strong storm events in coastal waters of Southern California. Our study is focused on the area between Point Loma and the US-Mexican border in San Diego, which is influenced by terrigenous input of particulate and dissolved materials from San Diego and Tijuana watersheds and non-point sources along the shore. For several events of intense rainstorms that occurred in the winter of 2004-2005, we carried out a correlational analysis between the satellite-derived plume area and rainfall parameters. We examined several rainfall parameters and methods for the estimation of plume area. We identified the optimal threshold values of satellite-derived normalized water-leaving radiances at 645 nm and 555 nm for distinguishing the plume from ambient ocean waters. The satellite-derived plume size showed high correlation with the amount of precipitated water accumulated during storm event over the San Diego and Tijuana watersheds. Our results support the potential of ocean color imagery with relatively high spatial resolution for the study of turbid plumes in the coastal ocean.
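    The plume-area estimation step reduces to thresholding the radiance image and counting pixels; a minimal sketch, with the threshold and the 250 m pixel size as illustrative parameters:

```python
def plume_area(radiance, threshold, pixel_km2=0.25 * 0.25):
    """Plume area in km^2: count pixels whose normalized water-leaving
    radiance exceeds the turbidity threshold, times the per-pixel area
    (250 m x 250 m for the MODIS 645 nm band)."""
    n = sum(1 for row in radiance for v in row if v > threshold)
    return n * pixel_km2
```

    Varying the threshold and correlating the resulting areas against rainfall parameters is how the optimal threshold separating plume from ambient water is identified.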

  15. Defining the uncertainty of electro-optical identification system performance estimates using a 3D optical environment derived from satellite

    NASA Astrophysics Data System (ADS)

    Ladner, S. D.; Arnone, R.; Casey, B.; Weidemann, A.; Gray, D.; Shulman, I.; Mahoney, K.; Giddings, T.; Shirron, J.

    2009-05-01

    Current United States Navy Mine-Counter-Measure (MCM) operations primarily use electro-optical identification (EOID) sensors to identify underwater targets after detection via acoustic sensors. These EOID sensors, which are based on underwater laser imaging, work best by design in "clear" waters and are limited in coastal waters, especially where strong optical layers are present. Optical properties, in particular scattering and absorption, play an important role in system performance. Surface optical properties from satellite alone are not adequate to determine how well a system will perform at depth, due to the existence of optical layers. Knowledge of the spatial and temporal characteristics of the 3-D optical variability of coastal waters, along with the strength and location of subsurface optical layers, maximizes the chances of identifying underwater targets through optimal sensor deployment. Advanced methods have been developed to fuse the optical measurements from gliders, optical properties from a "surface" satellite snapshot, and 3-D ocean circulation models to extend the two-dimensional (2-D) surface satellite optical image into a three-dimensional (3-D) optical volume with subsurface optical layers. Modifications were made to an EOID performance model to take a 3-D optical volume covering an entire region of interest as input and derive a system performance field. These enhancements extend the present capability, based on glider optics and EOID sensor models, to estimate the system's "image quality"; that capability alone yields performance information only for a single glider profile location in a very large operational region. Finally, we define the uncertainty of the system performance by coupling the EOID performance model with the 3-D optical volume uncertainties. Knowing the ensemble spread of the EOID performance field provides a new and unique capability for tactical decision makers and Navy operations.

  16. Prewarping techniques in imaging: applications in nanotechnology and biotechnology

    NASA Astrophysics Data System (ADS)

    Poonawala, Amyn; Milanfar, Peyman

    2005-03-01

    In all imaging systems, the underlying process introduces undesirable distortions that cause the output signal to be a warped version of the input. When the input to such systems can be controlled, pre-warping techniques can be employed which consist of systematically modifying the input such that it cancels out (or compensates for) the process losses. In this paper, we focus on the mask (reticle) design problem for 'optical micro-lithography', a process similar to photographic printing used for transferring binary circuit patterns onto silicon wafers. We use a pixel-based mask representation and model the above process as a cascade of convolution (aerial image formation) and thresholding (high-contrast recording) operations. The pre-distorted mask is obtained by minimizing the norm of the difference between the 'desired' output image and the 'reproduced' output image. We employ the regularization framework to ensure that the resulting masks are close-to-binary as well as simple and easy to fabricate. Finally, we provide insight into two additional applications of pre-warping techniques. First is 'e-beam lithography', used for fabricating nano-scale structures, and second is 'electronic visual prosthesis' which aims at providing limited vision to the blind by using a prosthetic retinally implanted chip capable of electrically stimulating the retinal neuron cells.
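    The convolution-plus-threshold cascade and the mask optimization can be sketched in one dimension: the hard resist threshold is relaxed to a sigmoid so the squared-error objective is differentiable, and projected gradient descent keeps the mask transmittance in [0, 1]. This is a toy sketch under stated assumptions (1-D, a symmetric blur kernel, no regularization term), not the paper's full 2-D formulation.

```python
import numpy as np

def prewarp_mask(desired, psf, steps=500, lr=0.03, a=10.0, t=0.5):
    """Pre-warp a 1-D binary target pattern: find a mask whose blurred,
    soft-thresholded image matches `desired`. Returns (mask, losses)."""
    mask = desired.astype(float)             # initialize with the target
    losses = []
    for _ in range(steps):
        aerial = np.convolve(mask, psf, mode='same')    # aerial image
        out = 1.0 / (1.0 + np.exp(-a * (aerial - t)))   # soft threshold
        err = out - desired
        losses.append(float((err ** 2).sum()))
        # chain rule: d(loss)/d(mask) ~ conv(err * out*(1-out)*a, psf_flipped)
        g = np.convolve(err * out * (1.0 - out) * a, psf[::-1], mode='same')
        mask = np.clip(mask - lr * g, 0.0, 1.0)  # keep transmittance valid
    return mask, losses
```

    The regularization framework mentioned above would add penalty terms to the same objective to push the optimized mask toward binary, easily fabricated values.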

  17. Violent Interaction Detection in Video Based on Deep Learning

    NASA Astrophysics Data System (ADS)

    Zhou, Peipei; Ding, Qinghai; Luo, Haibo; Hou, Xinglin

    2017-06-01

    Violent interaction detection is of vital importance in some video surveillance scenarios like railway stations, prisons or psychiatric centres. Existing vision-based methods are mainly based on hand-crafted features such as statistic features between motion regions, leading to poor adaptability to other datasets. Enlightened by the development of convolutional networks for common activity recognition, we construct a FightNet to represent the complicated visual violence interaction. In this paper, a new input modality, the image acceleration field, is proposed to better extract the motion attributes. Firstly, each video is framed as RGB images. Secondly, the optical flow field is computed using consecutive frames, and the acceleration field is obtained from the optical flow field. Thirdly, FightNet is trained with three kinds of input modalities, i.e., RGB images for the spatial network, and optical flow images and acceleration images for the temporal networks. By fusing results from the different inputs, we determine whether a video contains a violent event. To provide researchers a common ground for comparison, we have collected a violent interaction dataset (VID), containing 2314 videos with 1077 fight ones and 1237 no-fight ones. By comparison with other algorithms, experimental results demonstrate that the proposed model for violent interaction detection shows higher accuracy and better robustness.
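    The proposed acceleration-field modality is, per the description above, the temporal difference of consecutive optical flow fields, converted to an 8-bit image for the temporal network. A minimal sketch (the magnitude-and-rescale encoding is an illustrative assumption; the paper may encode the vector field differently):

```python
import numpy as np

def accel_image(flow_prev, flow_next):
    """Acceleration field as the difference of consecutive H x W x 2
    optical flow fields, encoded as an 8-bit magnitude image."""
    acc = flow_next - flow_prev             # per-pixel acceleration vectors
    mag = np.sqrt((acc ** 2).sum(axis=2))   # magnitude channel
    m = mag.max()
    if m == 0:
        return mag.astype(np.uint8)
    # scale to 8-bit so it can be fed to the network like a flow image
    return (255.0 * mag / m).astype(np.uint8)
```

    Feeding acceleration rather than velocity emphasizes the abrupt motion changes characteristic of fights, which is the stated motivation for the new modality.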

  18. Linear and quadratic models of point process systems: contributions of patterned input to output.

    PubMed

    Lindsay, K A; Rosenberg, J R

    2012-08-01

    In the 1880's Volterra characterised a nonlinear system using a functional series connecting continuous input and continuous output. Norbert Wiener, in the 1940's, circumvented problems associated with the application of Volterra series to physical problems by deriving from it a new series of terms that are mutually uncorrelated with respect to Gaussian processes. Subsequently, Brillinger, in the 1970's, introduced a point-process analogue of Volterra's series connecting point-process inputs to the instantaneous rate of point-process output. We derive here a new series from this analogue in which its terms are mutually uncorrelated with respect to Poisson processes. This new series expresses how patterned input in a spike train, represented by third-order cross-cumulants, is converted into the instantaneous rate of an output point-process. Given experimental records of suitable duration, the contribution of arbitrary patterned input to an output process can, in principle, be determined. Solutions for linear and quadratic point-process models with one and two inputs and a single output are investigated. Our theoretical results are applied to isolated muscle spindle data in which the spike trains from the primary and secondary endings from the same muscle spindle are recorded in response to stimulation of one and then two static fusimotor axons in the absence and presence of a random length change imposed on the parent muscle. For a fixed mean rate of input spikes, the analysis of the experimental data makes explicit which patterns of two input spikes contribute to an output spike. Copyright © 2012 Elsevier Ltd. All rights reserved.
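    The form of the point-process expansion described above can be sketched schematically as follows (hedged notation: q0, q1, q2 denote the zeroth-, first-, and second-order kernels and dN the increments of the input counting process; this illustrates the shape of the series, not the paper's exact derivation of the Poisson-orthogonal terms):

```latex
% Instantaneous rate of the output point process driven by an input N(t):
\lambda(t) = q_0
  + \int_0^{\infty} q_1(u)\, dN(t-u)
  + \iint_{u \neq v} q_2(u,v)\, dN(t-u)\, dN(t-v)
```

    The second-order kernel q2 is the term that captures how a *pattern* of two input spikes, at lags u and v, contributes to an output spike beyond what each spike contributes alone.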

  19. Cybernetic group method of data handling (GMDH) statistical learning for hyperspectral remote sensing inverse problems in coastal ocean optics

    NASA Astrophysics Data System (ADS)

    Filippi, Anthony Matthew

    For complex systems, sufficient a priori knowledge is often lacking about the mathematical or empirical relationship between cause and effect or between inputs and outputs of a given system. Automated machine learning may offer a useful solution in such cases. Coastal marine optical environments represent such a case, as the optical remote sensing inverse problem remains largely unsolved. A self-organizing, cybernetic mathematical modeling approach known as the group method of data handling (GMDH), a type of statistical learning network (SLN), was used to generate explicit spectral inversion models for optically shallow coastal waters. Optically shallow water light fields represent a particularly difficult challenge in oceanographic remote sensing. Several algorithm-input data treatment combinations were utilized in multiple experiments to automatically generate inverse solutions for various inherent optical property (IOP), bottom optical property (BOP), constituent concentration, and bottom depth estimations. The objective was to identify the optimal remote-sensing reflectance Rrs(lambda) inversion algorithm. The GMDH also has the potential of inductive discovery of physical hydro-optical laws. Simulated data were used to develop generalized, quasi-universal relationships. The Hydrolight numerical forward model, based on radiative transfer theory, was used to compute simulated above-water remote-sensing reflectance Rrs(lambda) pseudodata, matching the spectral channels and resolution of the experimental Naval Research Laboratory Ocean PHILLS (Portable Hyperspectral Imager for Low-Light Spectroscopy) sensor. The input-output pairs were used for GMDH and artificial neural network (ANN) model development, the latter of which served as a baseline, or control, algorithm. Both types of models were applied to in situ and aircraft data. Also, in situ spectroradiometer-derived Rrs(lambda) were used as input to an optimization-based inversion procedure. Target variables included bottom depth zb, chlorophyll a concentration [chl-a], spectral bottom irradiance reflectance Rb(lambda), and the spectral total absorption a(lambda) and spectral total backscattering bb(lambda) coefficients. When applying the cybernetic and neural models to in situ HyperTSRB-derived Rrs, the difference in the means of the absolute error of the inversion estimates for zb was significant (alpha = 0.05): GMDH yielded significantly better zb than the ANN, with a mean absolute error (MAE) of 0.55161 m compared with 0.62214 m for the ANN.
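    One GMDH layer can be sketched compactly: fit the Ivakhnenko quadratic polynomial to every pair of inputs and rank the candidate neurons by error on a separate validation set. This is a minimal sketch of the self-organizing principle, not the study's full multi-layer, pruned network.

```python
import numpy as np
from itertools import combinations

def gmdh_layer(X, y, X_val, y_val):
    """One GMDH layer: for every input pair (xi, xj), least-squares fit
    z = a0 + a1*xi + a2*xj + a3*xi*xj + a4*xi^2 + a5*xj^2
    on (X, y) and rank candidates by validation MSE (best first)."""
    def design(xi, xj):
        return np.column_stack([np.ones_like(xi), xi, xj,
                                xi * xj, xi ** 2, xj ** 2])
    results = []
    for i, j in combinations(range(X.shape[1]), 2):
        A = design(X[:, i], X[:, j])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        pred = design(X_val[:, i], X_val[:, j]) @ coef
        results.append(((i, j), coef, float(((pred - y_val) ** 2).mean())))
    return sorted(results, key=lambda r: r[2])
```

    Stacking such layers, with the best candidates' outputs becoming the next layer's inputs, is what lets GMDH "self-organize" an explicit polynomial inversion model, and potentially expose interpretable hydro-optical relationships.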

  20. Joint image restoration and location in visual navigation system

    NASA Astrophysics Data System (ADS)

    Wu, Yuefeng; Sang, Nong; Lin, Wei; Shao, Yuanjie

    2018-02-01

    Image location methods are key technologies of visual navigation; most previous image location methods simply assume ideal inputs, without taking into account real-world degradations (e.g., low resolution and blur). In view of such degradations, conventional image location methods first perform image restoration and then match the restored image against the reference image. However, by dealing with restoration and location separately, the defective output of the restoration step can corrupt the localization result. In this paper, we present a joint image restoration and location (JRL) method, which utilizes a sparse representation prior to handle the challenging problem of low-quality image location. The sparse representation prior states that the degraded input image, if correctly restored, will have a good sparse representation in terms of the dictionary constructed from the reference image. By iteratively solving the image restoration in pursuit of the sparsest representation, our method achieves simultaneous restoration and location. Based on this prior, we demonstrate that the restoration and location tasks can benefit greatly from each other. Extensive experiments on real scene images with Gaussian blur are carried out, and our joint model outperforms the conventional methods that treat the two tasks independently.
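    The coupling between degradation model and matching can be illustrated with a simple 1-D sketch: instead of deblurring the observation first, each candidate reference window is blurred with the known kernel and compared to the observation, so the degradation model participates in the localization directly. This is an illustrative simplification, not the paper's sparse-coding formulation.

```python
def locate_blurred(observed, reference, kernel):
    """Locate a blurred 1-D patch in a reference signal: blur each
    candidate window with the same kernel and pick the offset with the
    minimum squared residual."""
    k = len(kernel)
    m = len(observed)
    best, best_err = 0, float('inf')
    for s in range(len(reference) - m - k + 2):
        window = reference[s:s + m + k - 1]
        blurred = [sum(window[i + j] * kernel[j] for j in range(k))
                   for i in range(m)]
        err = sum((b - o) ** 2 for b, o in zip(blurred, observed))
        if err < best_err:
            best, best_err = s, err
    return best
```

    The JRL method goes further by also updating the restored image at each iteration, but the sketch shows why matching against the *degraded* domain avoids propagating restoration errors into the location estimate.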

  1. Remote sensing of submerged aquatic vegetation in lower Chesapeake Bay - A comparison of Landsat MSS to TM imagery

    NASA Technical Reports Server (NTRS)

    Ackleson, S. G.; Klemas, V.

    1987-01-01

    Landsat MSS and TM imagery, obtained simultaneously over Guinea Marsh, VA, was analyzed and compared for the ability to detect submerged aquatic vegetation (SAV). An unsupervised clustering algorithm was applied to each image, where the input classification parameters are defined as functions of apparent sensor noise. Class confidence and accuracy were computed for all water areas by comparing the classified images, pixel-by-pixel, to rasterized SAV distributions derived from color aerial photography. To illustrate the effect of water depth on classification error, areas of depth greater than 1.9 m were masked, and class confidence and accuracy were recalculated. A single-scattering radiative-transfer model is used to illustrate how percent canopy cover and water depth affect the volume reflectance of a water column containing SAV. For a submerged canopy that is morphologically and optically similar to Zostera marina inhabiting Lower Chesapeake Bay, dense canopies may be isolated by masking optically deep water. For less dense canopies, the effect of increasing water depth is to increase the apparent percent crown cover, which may result in classification error.

  2. Automatic pH Control and Soluble and Insoluble Substrate Input for Continuous Culture of Rumen Microorganisms

    PubMed Central

    Slyter, Leonard L.

    1975-01-01

    An artificial rumen continuous culture with pH control, automated input of water-soluble and water-insoluble substrates, controlled mixing of contents, and a collection system for gas is described. PMID:16350029

  3. Automated reconstruction of standing posture panoramas from multi-sector long limb x-ray images

    NASA Astrophysics Data System (ADS)

    Miller, Linzey; Trier, Caroline; Ben-Zikri, Yehuda K.; Linte, Cristian A.

    2016-03-01

    Due to the digital X-ray imaging system's limited field of view, several individual sector images are required to capture the posture of an individual in standing position. These images are then "stitched together" to reconstruct the standing posture. We have created an image processing application that automates the stitching, therefore minimizing user input, optimizing workflow, and reducing human error. The application begins with pre-processing the input images by removing artifacts, filtering out isolated noisy regions, and amplifying a seamless bone edge. The resulting binary images are then registered together using a rigid-body intensity based registration algorithm. The identified registration transformations are then used to map the original sector images into the panorama image. Our method focuses primarily on the use of the anatomical content of the images to generate the panoramas as opposed to using external markers employed to aid with the alignment process. Currently, results show robust edge detection prior to registration and we have tested our approach by comparing the resulting automatically-stitched panoramas to the manually stitched panoramas in terms of registration parameters, target registration error of homologous markers, and the homogeneity of the digitally subtracted automatically- and manually-stitched images using 26 patient datasets.
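    The intensity-based registration step at the heart of such stitching can be sketched with phase correlation, a classic FFT-based way to estimate the translation between two overlapping sector images. This is a simple stand-in for the pipeline's rigid-body registration (which also handles rotation), offered as an illustration of the alignment principle.

```python
import numpy as np

def phase_correlate(a, b):
    """Estimate the cyclic shift (dy, dx) such that np.roll(b, (dy, dx),
    axis=(0, 1)) aligns b with a, via normalized cross-power spectrum."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12          # keep phase information only
    corr = np.fft.ifft2(cross).real         # delta peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    dy, dx = int(dy), int(dx)
    if dy > a.shape[0] // 2:                # wrap to signed shifts
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx
```

    The recovered offsets play the role of the registration transformations that map each sector image into the final panorama.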

  4. Effects of empty bins on image upscaling in capsule endoscopy

    NASA Astrophysics Data System (ADS)

    Rukundo, Olivier

    2017-07-01

    This paper presents a preliminary study of the effect of empty bins on image upscaling in capsule endoscopy. The study was conducted based on the results of existing contrast enhancement and interpolation methods. A low-contrast enhancement method, based on pixel consecutiveness and a modified bilinear weighting scheme, was developed to distinguish necessary from unnecessary empty bins, in an effort to minimize the number of empty bins in the input image before further processing. Linear interpolation methods were used to upscale input images with stretched histograms. Upscaling error differences and similarity indices between pairs of interpolation methods were quantified using the mean squared error and feature similarity index techniques. Simulation results demonstrated more promising effects with the developed method than with the other contrast enhancement methods considered.
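
    The notion of empty bins can be shown with a toy sketch (assumed meaning: intensity levels with a zero histogram count inside the occupied range). `stretch` below is a plain linear histogram stretch, not the paper's enhancement method.

```python
# Toy illustration: a linear contrast stretch spreads a narrow intensity
# range over 0..255 and leaves many "empty bins" (unused levels) behind.
import numpy as np

def stretch(img, lo=0, hi=255):
    """Linearly map the image's intensity range onto [lo, hi]."""
    mn, mx = int(img.min()), int(img.max())
    return np.round((img.astype(float) - mn) * (hi - lo) / (mx - mn) + lo).astype(np.uint8)

def empty_bins_in_range(img):
    """Number of unused intensity levels between the image's min and max."""
    hist = np.bincount(img.ravel(), minlength=256)
    lo, hi = int(img.min()), int(img.max())
    return int((hist[lo:hi + 1] == 0).sum())

img = np.arange(100, 140, dtype=np.uint8).reshape(5, 8)  # narrow 40-level image
out = stretch(img)  # the same 40 levels spread across 0..255
```

    Before stretching, the 40-level image has no empty bins inside its range; after stretching, 216 of the 256 levels are empty. That gap structure is the kind of artifact the paper's weighting scheme tries to minimize before interpolation.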

  5. Genetic relationships in advanced generation hybrids derived from crosses between Texas and Kentucky bluegrass using ISSR markers

    USDA-ARS?s Scientific Manuscript database

    Fertile, advanced generation hybrids derived from crosses between Texas (Poa arachnifera Torr.) and Kentucky (Poa pratensis L.) bluegrass have been selected. The hybrids are currently being evaluated for low-input turf potential. Since they are derived from hand-harvested seed from first-generati...

  6. Flexible Work Group Methods in Apparel Manufacturing

    DTIC Science & Technology

    1993-04-01

    machine can take several. A real-life example would be a machine that assembles skateboards. The input parts (wheels, trucks, deck) are different. At the...end of the operation, one kind of item comes out, an assembled skateboard. class source This is a derivative of sequentialmachine that has no input

  7. 77 FR 54902 - Proposed Information Collection; Comment Request; Input From Hawaii's Boat-based Anglers

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-06

    ... Collection; Comment Request; Input From Hawaii's Boat-based Anglers AGENCY: National Oceanic and Atmospheric... Marine Recreational Information Program's National Data Standards. The State of Hawaii is developing a... (monitoring) survey of fishing catch and effort derived from Hawaii's private boaters--a required component of...

  8. Understanding the Behaviour of Infinite Ladder Circuits

    ERIC Educational Resources Information Center

    Ucak, C.; Yegin, K.

    2008-01-01

    Infinite ladder circuits are often encountered in undergraduate electrical engineering and physics curricula when dealing with series and parallel combination of impedances, as a part of filter design or wave propagation on transmission lines. The input impedance of such infinite ladder circuits is derived by assuming that the input impedance does…

  9. Medical Image Intensifier In 1980 (What Really Happened)

    NASA Astrophysics Data System (ADS)

    Balter, Stephen; Kuhl, Walter

    1980-08-01

    In 1972, at the first SPIE seminar covering the application of optical instrumentation in medicine, Balter and Stanton presented a paper forecasting the status of x-ray image intensifiers in the year 1980. Now, eight years later, it is 1980, and it seems a good idea to evaluate these forecasts in the light of what has actually happened. The x-ray sensitive image intensifier tube (with cesium iodide as an input phosphor) is used nearly universally. Input screen sizes range from 15 cm to 36 cm in diameter. Real time monitoring of both fluoroscopic and fluorographic examinations is generally performed via closed circuit television. Archival recording of images is carried out using cameras with film formats of approximately 100 mm for single exposure or serial fluorography and 35 mm for cine fluorography. With the detective quantum efficiency of image intensifier tubes remaining near 50% throughout the decade, the noise content of most fluorographic and fluoroscopic images is still determined by the input exposure. Consequently, patient doses today, in 1980, have not substantially changed in the last ten years. There is, however, interest in uncoupling the x-ray dose and the image brightness by providing a variable optical diaphragm between the output of the image intensifier tube and the recording devices. During the past eight years, there has been a major philosophical change in the approach to imaging systems. It is now realized that medical image quality is much more dependent on the reduction of large area contrast losses than on the limiting resolution of the imaging system. It has also been clear that much diagnostic information is carried by spatial frequencies in the neighborhood of one line pair per millimeter (referred to the patient). The design of modern image intensifiers has been directed toward improvement in the large area contrast by minimizing x-ray and optical scatter in both the image intensifier tube and its associated components.

  10. Optically-sectioned two-shot structured illumination microscopy with Hilbert-Huang processing.

    PubMed

    Patorski, Krzysztof; Trusiak, Maciej; Tkaczyk, Tomasz

    2014-04-21

    We introduce a fast, simple, adaptive and experimentally robust method for reconstructing background-rejected optically-sectioned images using two-shot structured illumination microscopy. Our data demodulation method needs two grid-illumination images mutually phase-shifted by π (half a grid period), but a precise phase displacement between the two frames is not required. Subtracting the two frames yields an input pattern with increased grid modulation. The first demodulation stage comprises two-dimensional data processing based on empirical mode decomposition for object spatial-frequency selection (noise reduction and bias-term removal). The second stage consists of calculating a high-contrast image using the two-dimensional spiral Hilbert transform. The effectiveness of our algorithm is compared with the results calculated for the same input data using structured-illumination microscopy (SIM) and HiLo microscopy methods. The input data were collected for studying highly scattering tissue samples in reflectance mode. Results of our approach compare very favorably with the SIM and HiLo techniques.
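
    The two-shot subtraction step is easy to verify numerically. In this toy 1-D sketch (signal names illustrative), the π-shifted frames share the background term, so subtracting them removes the background and doubles the grid modulation:

```python
# Two pi-shifted grid illuminations over a slowly varying background:
# their difference is background-free with doubled grid modulation.
import numpy as np

x = np.linspace(0.0, 1.0, 500)
bias = 0.6 + 0.2 * x                      # slowly varying background
grid = 0.3 * np.cos(2 * np.pi * 20 * x)   # grid illumination pattern
i1 = bias + grid                          # first frame
i2 = bias - grid                          # second frame, grid shifted by pi
diff = i1 - i2                            # equals 2*grid: background removed
```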

  11. Constraints in distortion-invariant target recognition system simulation

    NASA Astrophysics Data System (ADS)

    Iftekharuddin, Khan M.; Razzaque, Md A.

    2000-11-01

    Automatic target recognition (ATR) is a mature but active research area. In an earlier paper, we proposed a novel ATR approach for recognition of targets varying in fine details, rotation, and translation using a Learning Vector Quantization (LVQ) Neural Network (NN). The proposed approach performed segmentation of multiple objects and the identification of the objects using LVQNN. In this current paper, we extend the previous approach for recognition of targets varying in rotation, translation, scale, and combination of all three distortions. We obtain the analytical results of the system level design to show that the approach performs well with some constraints. The first constraint determines the size of the input images and input filters. The second constraint shows the limits on amount of rotation, translation, and scale of input objects. We present the simulation verification of the constraints using DARPA's Moving and Stationary Target Recognition (MSTAR) images with different depression and pose angles. The simulation results using MSTAR images verify the analytical constraints of the system level design.

  12. Average BER analysis of SCM-based free-space optical systems by considering the effect of IM3 with OSSB signals under turbulence channels.

    PubMed

    Lim, Wansu; Cho, Tae-Sik; Yun, Changho; Kim, Kiseon

    2009-11-09

    In this paper, we derive the average bit error rate (BER) of subcarrier multiplexing (SCM)-based free-space optics (FSO) systems using a dual-drive Mach-Zehnder modulator (DD-MZM) for optical single-sideband (OSSB) signals under atmospheric turbulence channels. In particular, we consider the third-order intermodulation (IM3), a significant performance degradation factor, in the case of high input signal power systems. The derived average BER, as a function of the input signal power and the scintillation index, is employed to determine the optimum number of SCM users when designing FSO systems. For instance, when the user number doubles, the input signal power decreases by almost 2 dBm under the log-normal and exponential turbulence channels at a given average BER.
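
    For intuition, the averaging of a conditional BER over turbulence can be sketched by Monte Carlo for on-off keying over a log-normal channel. This illustrates the averaging step only; it ignores IM3 and the OSSB details, so it is not the paper's closed-form derivation.

```python
# Monte Carlo sketch: average an OOK conditional BER over log-normal
# irradiance fading with a given scintillation index.
import math
import random

def q_function(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def avg_ber_lognormal(snr0_db, scint_index, n=50_000, seed=1):
    """Average BER over log-normal fading, normalized so that E[I] = 1."""
    random.seed(seed)
    sigma2 = math.log(1.0 + scint_index)  # log-irradiance variance
    mu = -sigma2 / 2.0                    # keeps the mean irradiance at 1
    amp = math.sqrt(10.0 ** (snr0_db / 10.0))
    total = 0.0
    for _ in range(n):
        irr = math.exp(random.gauss(mu, math.sqrt(sigma2)))
        total += q_function(amp * irr)
    return total / n
```

    Because Q is convex, Jensen's inequality guarantees the fading-averaged BER exceeds the no-turbulence BER at the same mean SNR, which the sketch reproduces.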

  13. Classifying magnetic resonance image modalities with convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Remedios, Samuel; Pham, Dzung L.; Butman, John A.; Roy, Snehashis

    2018-02-01

    Magnetic Resonance (MR) imaging allows the acquisition of images with different contrast properties depending on the acquisition protocol and the magnetic properties of tissues. Many MR brain image processing techniques, such as tissue segmentation, require multiple MR contrasts as inputs, and each contrast is treated differently. Thus it is advantageous to automate the identification of image contrasts for various purposes, such as facilitating image processing pipelines, and managing and maintaining large databases via content-based image retrieval (CBIR). Most automated CBIR techniques focus on a two-step process: extracting features from data and classifying the image based on these features. We present a novel 3D deep convolutional neural network (CNN)-based method for MR image contrast classification. The proposed CNN automatically identifies the MR contrast of an input brain image volume. Specifically, we explored three classification problems: (1) identify T1-weighted (T1-w), T2-weighted (T2-w), and fluid-attenuated inversion recovery (FLAIR) contrasts, (2) identify pre- vs. post-contrast T1, and (3) identify pre- vs. post-contrast FLAIR. A total of 3418 image volumes acquired from multiple sites and multiple scanners were used. To evaluate each task, the proposed model was trained on 2137 images and tested on the remaining 1281 images. Results showed that image volumes were correctly classified with 97.57% accuracy.

  14. Pattern-Recognition Processor Using Holographic Photopolymer

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin; Cammack, Kevin

    2006-01-01

    The proposed joint-transform optical correlator (JTOC) would be capable of operating as a real-time pattern-recognition processor. The key correlation-filter reading/writing medium of this JTOC would be an updateable holographic photopolymer. The high-resolution, high-speed characteristics of this photopolymer would enable pattern-recognition processing to occur at a speed three orders of magnitude greater than that of state-of-the-art digital pattern-recognition processors. There are many potential applications in biometric personal identification (e.g., using images of fingerprints and faces) and nondestructive industrial inspection. In order to appreciate the advantages of the proposed JTOC, it is necessary to understand the principle of operation of a conventional JTOC. In a conventional JTOC (shown in the upper part of the figure), a collimated laser beam passes through two side-by-side spatial light modulators (SLMs). One SLM displays a real-time input image to be recognized. The other SLM displays a reference image from a digital memory. A Fourier-transform lens is placed at its focal distance from the SLM plane, and a charge-coupled device (CCD) image detector is placed at the back focal plane of the lens for use as a square-law recorder. Processing takes place in two stages. In the first stage, the CCD records the interference pattern between the Fourier transforms of the input and reference images, and the pattern is then digitized and saved in a buffer memory. In the second stage, the reference SLM is turned off and the interference pattern is fed back to the input SLM. The interference pattern thus becomes Fourier-transformed, yielding at the CCD an image representing the joint-transform correlation between the input and reference images. This image contains a sharp correlation peak when the input and reference images are matched. 
The drawbacks of a conventional JTOC are the following: The CCD has low spatial resolution and is not an ideal square-law detector for the purpose of holographic recording of interference fringes. A typical state-of-the-art CCD has a pixel-pitch limited resolution of about 100 lines/mm. In contrast, the holographic photopolymer to be used in the proposed JTOC offers a resolution > 2,000 lines/mm. In addition to being disadvantageous in itself, the low resolution of the CCD causes overlap of a DC term and the desired correlation term in the output image. This overlap severely limits the correlation signal-to-noise ratio. The two-stage nature of the process limits the achievable throughput rate. A further limit is imposed by the low frame rate (typical video rates) of low- and medium-cost commercial CCDs.
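
    The two-stage principle can be simulated digitally. The sketch below uses zero-mean toy patterns rather than nonnegative optical intensities so that the correlation peaks stand out; it mimics the transform / square-law / transform chain only, not the optical hardware.

```python
# Digital simulation of the two-stage JTOC principle.
import numpy as np

rng = np.random.default_rng(3)
ref = rng.standard_normal((16, 16))

plane = np.zeros((64, 128))
plane[24:40, 16:32] = ref    # "reference" image on the left
plane[24:40, 96:112] = ref   # matched "input" image on the right

# Stage 1: the CCD records the joint power spectrum (square-law detection).
jps = np.abs(np.fft.fft2(plane)) ** 2
# Stage 2: transforming the recorded pattern yields the correlation plane.
corr = np.abs(np.fft.fft2(jps))
```

    Excluding the DC term, the correlation plane peaks at the patch separation (80 columns here, appearing at column 80 and its wraparound mirror at 48), which is the sharp off-axis peak a matched input produces.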

  15. Software for Automated Image-to-Image Co-registration

    NASA Technical Reports Server (NTRS)

    Benkelman, Cody A.; Hughes, Heidi

    2007-01-01

    The project objectives are: a) Develop software to fine-tune image-to-image co-registration, presuming images are orthorectified prior to input; b) Create a reusable software development kit (SDK) to enable incorporation of these tools into other software; c) Provide automated testing for quantitative analysis; and d) Develop software that applies multiple techniques to achieve subpixel precision in the co-registration of image pairs.

  16. Optimum sensitivity derivatives of objective functions in nonlinear programming

    NASA Technical Reports Server (NTRS)

    Barthelemy, J.-F. M.; Sobieszczanski-Sobieski, J.

    1983-01-01

    The feasibility of eliminating second derivatives from the input of optimum sensitivity analyses of optimization problems is demonstrated. This elimination restricts the sensitivity analysis to the first-order sensitivity derivatives of the objective function. It is also shown that when a complete first-order sensitivity analysis is performed, second-order sensitivity derivatives of the objective function are available at little additional cost. An expression is derived whose application to linear programming is presented.
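
    The first-order result underlying this elimination is the classical envelope property of the Lagrangian, stated here from standard nonlinear-programming sensitivity theory as context (not quoted from the paper): at a regular optimum x*(p) with multipliers λ* for constraints g(x, p) ≤ 0,

```latex
\frac{\mathrm{d}F^{*}}{\mathrm{d}p}
  = \left.\frac{\partial F}{\partial p}\right|_{x^{*}}
  + \lambda^{*\mathsf{T}}\left.\frac{\partial g}{\partial p}\right|_{x^{*}}
```

    so once the multipliers from the first-order optimality conditions are known, no second derivatives of the objective with respect to the design variables are needed to obtain the optimum sensitivity.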

  17. OPTOELECTRONICS, FIBER OPTICS, AND OTHER ASPECTS OF QUANTUM ELECTRONICS: Time analyzing image converter with a microchannel plate at the input

    NASA Astrophysics Data System (ADS)

    Dashevskiĭ, B. E.; Podvyaznikov, V. A.; Prokhorov, A. M.; Chevokin, V. K.

    1989-08-01

    An image converter with interchangeable photocathodes was used in tests on a microchannel plate employed as a photoemitter. The image converter was operated in the linear slit-scanning regime. This image converter was found to be a promising tool for laser plasma diagnostics.

  18. Measurements of striae in Cr+ doped YAG laser crystals

    NASA Astrophysics Data System (ADS)

    Cady, Fredrick M.

    1994-12-01

    Striations in Czochralski (CZ) grown crystals have been observed in materials such as GaAs, silicon, photorefractive crystals used for data storage, potassium titanyl phosphate crystals and LiNbO3. Several techniques have been used for investigating these defects, including electron microscopy, laser scanning tomography, selective photoetching, X-ray diffuse scattering, interference orthoscopy, laser interferometry and micro-Fourier transform infrared spectroscopy mapping. A 2-mm thick sample of the material to be investigated is illuminated with light that is absorbed and non-absorbed by the ion concentration to be observed. The back surface of the sample is focused onto a solid-state image detector, and images of the input beam and absorbed (and diffracted) beams are captured at two wavelengths. The variation of the coefficient of absorption as a function of distance on the sample can be derived from these measurements. A Big Sky Software Beamcode system is used to capture and display images. Software has been written to convert the Beamcode data files to a format that can be imported into a spreadsheet program such as Quattro Pro. The spreadsheet is then used to manipulate and display data. A model of the intensity map of the striae collected by the imaging system has been proposed and a data analysis procedure derived. From this, the variability of the attenuation coefficient alpha can be generated. Preliminary results show that alpha may vary by a factor of four or five over distances of 100 μm. Potential errors and problems have been discovered; additional experiments and improvements to the experimental setup are in progress, and we must now show that the measurement techniques and data analysis procedures provide 'real' information. Striae are clearly visible at all wavelengths including white light. Their basic spatial frequency does not change radically, at least when changing from blue to green to white light. 
Further experimental and theoretical work can be done to improve the data collection techniques and to verify the data analysis procedures.

  19. Vegetation species composition and canopy architecture information expressed in leaf water absorption measured in the 1000 nm and 2200 nm spectral regions by an imaging spectrometer

    NASA Technical Reports Server (NTRS)

    Green, Robert O.; Roberts, Dar A.

    1995-01-01

    Plant species composition and plant architectural attributes are critical parameters required for the measuring, monitoring, and modeling of terrestrial ecosystems. Remote sensing is commonly cited as an important tool for deriving vegetation properties at an appropriate scale for ecosystem studies, ranging from local to regional and even synoptic scales. Classical approaches rely on vegetation indices such as the normalized difference vegetation index (NDVI) to estimate biophysical parameters such as leaf area index or intercepted photosynthetically active radiation (IPAR). Another approach is to apply a variety of classification schemes to map vegetation and thus extrapolate fine-scale information about specific sites to larger areas of similar composition. Imaging spectrometry provides additional information that is not obtainable through broad-band sensors and that may provide improved inputs both to direct biophysical estimates as well as classification schemes. Some of this capability has been demonstrated through improved discrimination of vegetation, estimates of canopy biochemistry, and liquid water estimates from vegetation. We investigate further the potential of leaf water absorption estimated from Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data as a means for discriminating vegetation types and deriving canopy architectural information. We expand our analysis to incorporate liquid water estimates from two spectral regions, the 1000-nm region and the 2200-nm region. The study was conducted in the vicinity of Jasper Ridge, California, which is located on the San Francisco peninsula to the west of the Stanford University campus. AVIRIS data were acquired over Jasper Ridge, CA, on June 2, 1992, at 19:31 UTC. Spectra from three sites in this image were analyzed. These data are from an area of healthy grass, oak woodland, and redwood forest, respectively. 
For these analyses, the AVIRIS-measured upwelling radiance spectra for the entire Jasper Ridge scene were transformed to apparent surface reflectance using a radiative transfer code-based inversion algorithm.

  20. Open set recognition of aircraft in aerial imagery using synthetic template models

    NASA Astrophysics Data System (ADS)

    Bapst, Aleksander B.; Tran, Jonathan; Koch, Mark W.; Moya, Mary M.; Swahn, Robert

    2017-05-01

    Fast, accurate and robust automatic target recognition (ATR) in optical aerial imagery can provide game-changing advantages to military commanders and personnel. ATR algorithms must reject non-targets with a high degree of confidence in a world with an infinite number of possible input images. Furthermore, they must learn to recognize new targets without requiring massive data collections. Whereas most machine learning algorithms classify data in a closed set manner by mapping inputs to a fixed set of training classes, open set recognizers incorporate constraints that allow for inputs to be labelled as unknown. We have adapted two template-based open set recognizers to use computer generated synthetic images of military aircraft as training data, to provide a baseline for military-grade ATR: (1) a frequentist approach based on probabilistic fusion of extracted image features, and (2) an open set extension to the one-class support vector machine (SVM). These algorithms both use histograms of oriented gradients (HOG) as features as well as artificial augmentation of both real and synthetic image chips to take advantage of minimal training data. Our results show that open set recognizers trained with synthetic data and tested with real data can successfully discriminate real target inputs from non-targets. However, there is still a requirement for some knowledge of the real target in order to calibrate the relationship between synthetic template and target score distributions. We conclude by proposing algorithm modifications that may improve the ability of synthetic data to represent real data.

  1. On the sensitivity of complex, internally coupled systems

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1988-01-01

    A method is presented for computing sensitivity derivatives with respect to independent (input) variables for complex, internally coupled systems, while avoiding the cost and inaccuracy of finite differencing performed on the entire system analysis. The method entails two alternative algorithms: the first is based on the classical implicit function theorem formulated on residuals of governing equations, and the second develops the system sensitivity equations in a new form using the partial (local) sensitivity derivatives of the output with respect to the input of each part of the system. A few application examples are presented to illustrate the discussion.
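
    The second algorithm's system sensitivity equations can be sketched for two coupled scalar subsystems y1 = f1(x, y2) and y2 = f2(x, y1): the total derivatives solve a linear system assembled from the local partial sensitivities. The numbers below are hypothetical values at a converged solution, not from the paper.

```python
# Sketch of the system-sensitivity-equation idea for two coupled subsystems.
import numpy as np

df1_dx, df1_dy2 = 2.0, 0.5    # local sensitivities of subsystem 1
df2_dx, df2_dy1 = 1.0, 0.25   # local sensitivities of subsystem 2

# Chain rule: dy1/dx = df1/dx + (df1/dy2) * dy2/dx, and symmetrically for
# y2, which rearranges into the linear system A @ dy_dx = b.
A = np.array([[1.0, -df1_dy2],
              [-df2_dy1, 1.0]])
b = np.array([df1_dx, df2_dx])
dy_dx = np.linalg.solve(A, b)  # total derivatives [dy1/dx, dy2/dx]
```

    Only local partials of each subsystem's output with respect to its own inputs are required, avoiding finite differencing across the whole coupled analysis.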

  2. Low rank approach to computing first and higher order derivatives using automatic differentiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reed, J. A.; Abdel-Khalik, H. S.; Utke, J.

    2012-07-01

    This manuscript outlines a new approach for increasing the efficiency of applying automatic differentiation (AD) to large scale computational models. By using the principles of the Efficient Subspace Method (ESM), low rank approximations of the derivatives for first and higher orders can be calculated using reduced computational resources. The output obtained from nuclear reactor calculations typically has a much smaller numerical rank compared to the number of inputs and outputs. This rank deficiency can be exploited to reduce the number of derivatives that need to be calculated using AD. The effective rank can be determined according to ESM by computing derivatives with AD at random inputs. Reduced or pseudo variables are then defined and new derivatives are calculated with respect to the pseudo variables. Two different AD packages are used: OpenAD and Rapsodia. OpenAD is used to determine the effective rank and the subspace that contains the derivatives. Rapsodia is then used to calculate derivatives with respect to the pseudo variables for the desired order. The overall approach is applied to two simple problems and to MATWS, a safety code for sodium cooled reactors. (authors)
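
    The rank-probing idea can be sketched in a few lines of generic numpy: sample the Jacobian's action on random input directions and read the effective rank off the singular values. This stands in for, and is much cruder than, the OpenAD/Rapsodia pipeline described above.

```python
# ESM-style rank probing on a hypothetical rank-deficient model.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model with 100 inputs and 50 outputs but only rank-3 structure.
U = rng.standard_normal((50, 3))
V = rng.standard_normal((3, 100))
jacobian = U @ V

# Directional derivatives at 10 random inputs, as AD would supply them.
samples = jacobian @ rng.standard_normal((100, 10))

s = np.linalg.svd(samples, compute_uv=False)
effective_rank = int((s > 1e-8 * s[0]).sum())  # recovers the low rank
```

    Once the effective rank is known, derivatives need only be propagated through that small subspace of pseudo variables rather than all 100 inputs.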

  3. ELECTRONIC SYSTEM

    DOEpatents

    Robison, G.H. et al.

    1960-11-15

    An electronic system is described for indicating the occurrence of a plurality of electrically detectable events within predetermined time intervals. It comprises: separate input means electrically associated with the events under observation; an electronic channel associated with each input means, including control means and indicating means; timing means associated with each of the input means and the control means, adapted to derive a signal from the input means and apply it after a predetermined time to the control means to effect deactivation of each of the channels; and means for resetting the system to its initial condition after observation of each group of events.

  4. Removing Visual Bias in Filament Identification: A New Goodness-of-fit Measure

    NASA Astrophysics Data System (ADS)

    Green, C.-E.; Cunningham, M. R.; Dawson, J. R.; Jones, P. A.; Novak, G.; Fissel, L. M.

    2017-05-01

    Different combinations of input parameters to filament identification algorithms, such as DisPerSE and FilFinder, produce numerous different output skeletons. The skeletons are a one-pixel-wide representation of the filamentary structure in the original input image. However, these output skeletons may not necessarily be a good representation of that structure. Furthermore, a given skeleton may not be as good a representation as another. Previously, there has been no mathematical “goodness-of-fit” measure to compare output skeletons to the input image; thus far this has been assessed visually, introducing visual bias. We propose the application of the mean structural similarity index (MSSIM) as a mathematical goodness-of-fit measure. We describe the use of the MSSIM to find the output skeletons that are the most mathematically similar to the original input image (the optimum, or “best,” skeletons) for a given algorithm, and independently of the algorithm. This measure makes possible systematic parameter studies, aimed at finding the subset of input parameter values returning optimum skeletons. It can also be applied to the output of non-skeleton-based filament identification algorithms, such as the Hessian matrix method. The MSSIM removes the need to visually examine thousands of output skeletons, and eliminates the visual bias, subjectivity, and limited reproducibility inherent in that process, representing a major improvement upon existing techniques. Importantly, it also allows further automation in the post-processing of output skeletons, which is crucial in this era of “big data.”
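
    The shape of the measure can be seen from a single-window SSIM statistic in the form of Wang et al.'s original index. The paper's MSSIM averages this statistic over local windows, so the global version below is for illustration only.

```python
# Single-window (global) SSIM statistic between two images in [0, 1].
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Luminance/contrast/structure comparison collapsed to one window."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2))
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

    Identical images score exactly 1; the further a skeleton's rendering departs from the input map in mean, variance, or covariance, the lower the score, which is what makes a ranking over thousands of candidate skeletons possible.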

  5. The least-squares mixing models to generate fraction images derived from remote sensing multispectral data

    NASA Technical Reports Server (NTRS)

    Shimabukuro, Yosio Edemir; Smith, James A.

    1991-01-01

    Constrained-least-squares and weighted-least-squares mixing models for generating fraction images derived from remote sensing multispectral data are presented. An experiment was performed considering three components within the pixels: eucalyptus, soil (understory), and shade. The fraction images generated for shade (shade images) by these two methods were compared in terms of performance and computer time. The derived shade images are related to the observed variation in forest structure, i.e., the fraction of inferred shade in a pixel is related to different eucalyptus ages.
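
    The underlying linear mixing model can be sketched as follows. The endmember spectra below are made up for illustration, and this solves only the unconstrained least-squares case; the paper's variants add constraints (e.g. sum-to-one) and band weighting.

```python
# Linear mixture model: a pixel spectrum is a fraction-weighted sum of
# endmember spectra, and least squares recovers the fractions.
import numpy as np

E = np.array([[0.10, 0.40, 0.05],   # 4 bands x 3 endmembers:
              [0.15, 0.45, 0.05],   # columns = eucalyptus, soil, shade
              [0.60, 0.30, 0.02],
              [0.70, 0.25, 0.02]])
f_true = np.array([0.5, 0.3, 0.2])  # true sub-pixel fractions (sum to 1)
pixel = E @ f_true                  # observed mixed spectrum

# Unconstrained least-squares estimate of the fractions for this pixel:
f_hat, *_ = np.linalg.lstsq(E, pixel, rcond=None)
```

    Repeating the solve per pixel and keeping the shade component yields a shade fraction image of the kind compared in the abstract.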

  6. BaTMAn: Bayesian Technique for Multi-image Analysis

    NASA Astrophysics Data System (ADS)

    Casado, J.; Ascasibar, Y.; García-Benito, R.; Guidi, G.; Choudhury, O. S.; Bellocchi, E.; Sánchez, S. F.; Díaz, A. I.

    2016-12-01

    Bayesian Technique for Multi-image Analysis (BaTMAn) characterizes any astronomical dataset containing spatial information and performs a tessellation based on the measurements and errors provided as input. The algorithm iteratively merges spatial elements as long as they are statistically consistent with carrying the same information (i.e. identical signal within the errors). The output segmentations successfully adapt to the underlying spatial structure, regardless of its morphology and/or the statistical properties of the noise. BaTMAn identifies (and keeps) all the statistically-significant information contained in the input multi-image (e.g. an IFS datacube). The main aim of the algorithm is to characterize spatially-resolved data prior to their analysis.

  7. Quantum dot-based local field imaging reveals plasmon-based interferometric logic in silver nanowire networks.

    PubMed

    Wei, Hong; Li, Zhipeng; Tian, Xiaorui; Wang, Zhuoxian; Cong, Fengzi; Liu, Ning; Zhang, Shunping; Nordlander, Peter; Halas, Naomi J; Xu, Hongxing

    2011-02-09

    We show that the local electric field distribution of propagating plasmons along silver nanowires can be imaged by coating the nanowires with a layer of quantum dots, held off the surface of the nanowire by a nanoscale dielectric spacer layer. In simple networks of silver nanowires with two optical inputs, control of the optical polarization and phase of the input fields directs the guided waves to a specific nanowire output. The QD-luminescent images of these structures reveal that a complete family of phase-dependent, interferometric logic functions can be performed on these simple networks. These results show the potential for plasmonic waveguides to support compact interferometric logic operations.

  8. Image quality assessment for video stream recognition systems

    NASA Astrophysics Data System (ADS)

    Chernov, Timofey S.; Razumnuy, Nikita P.; Kozharinov, Alexander S.; Nikolaev, Dmitry P.; Arlazarov, Vladimir V.

    2018-04-01

    Recognition and machine vision systems have long been widely used in many disciplines to automate various processes of life and industry. Input images of optical recognition systems can be subjected to a large number of different distortions, especially in uncontrolled or natural shooting conditions, which leads to unpredictable results of recognition systems, making it impossible to assess their reliability. For this reason, it is necessary to perform quality control of the input data of recognition systems, which is facilitated by modern progress in the field of image quality evaluation. In this paper, we investigate the approach to designing optical recognition systems with built-in input image quality estimation modules and feedback, for which the necessary definitions are introduced and a model for describing such systems is constructed. The efficiency of this approach is illustrated by the example of solving the problem of selecting the best frames for recognition in a video stream for a system with limited resources. Experimental results are presented for the system for identity documents recognition, showing a significant increase in the accuracy and speed of the system under simulated conditions of automatic camera focusing, leading to blurring of frames.
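
    A minimal stand-in for such a quality-estimation module is a sharpness score used to pick the best frame. Variance of a Laplacian response is a common focus measure; the module described above is more general than this toy criterion.

```python
# Toy frame selection: rank frames by a Laplacian-variance sharpness score.
import numpy as np

def sharpness(frame):
    """Variance of the discrete 5-point Laplacian over the frame interior."""
    lap = (-4.0 * frame[1:-1, 1:-1] + frame[:-2, 1:-1] + frame[2:, 1:-1]
           + frame[1:-1, :-2] + frame[1:-1, 2:])
    return float(lap.var())

def best_frame(frames):
    """Index of the sharpest frame in a sequence."""
    return max(range(len(frames)), key=lambda i: sharpness(frames[i]))
```

    Feeding only the highest-scoring frames to the recognizer is the kind of feedback loop the paper formalizes for resource-limited video-stream systems.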

  9. Retina-Inspired Filter.

    PubMed

    Doutsi, Effrosyni; Fillatre, Lionel; Antonini, Marc; Gaulmin, Julien

    2018-07-01

    This paper introduces a novel filter inspired by the human retina. The human retina consists of three layers: the outer plexiform layer (OPL), the inner plexiform layer, and the ganglionic layer. Our inspiration is the linear transform which takes place in the OPL and has been mathematically described by the neuroscientific model "virtual retina." This model is the cornerstone for deriving the non-separable spatio-temporal OPL retina-inspired filter, referred to below simply as the retina-inspired filter, studied in this paper. This filter is connected to the dynamic behavior of the retina, which enables the retina to increase the sharpness of the visual stimulus during filtering, before its transmission to the brain. We establish that this retina-inspired transform forms a group of spatio-temporal Weighted Difference of Gaussian (WDoG) filters when it is applied to a still image visible for a given time. We analyze the spatial frequency bandwidth of the retina-inspired filter with respect to time and show that the WDoG spectrum varies from a lowpass filter to a bandpass filter. Therefore, as time increases, the retina-inspired filter extracts different kinds of information from the input image. Finally, we discuss the benefits of using the retina-inspired filter in image processing applications such as edge detection and compression.
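
    The lowpass-to-bandpass transition can be seen directly from a weighted difference-of-Gaussians frequency response. The weights and sigmas below are illustrative, not the virtual-retina parameters.

```python
# WDoG frequency response: center Gaussian minus a weighted surround Gaussian.
import numpy as np

def wdog_response(freq, w, sigma_c=1.0, sigma_s=3.0):
    """Fourier magnitude of (center Gaussian) - w * (surround Gaussian)."""
    g = lambda s: np.exp(-2.0 * (np.pi * freq * s) ** 2)
    return g(sigma_c) - w * g(sigma_s)

freq = np.linspace(0.0, 1.0, 200)
early = wdog_response(freq, w=0.1)  # weak surround: lowpass, peak at DC
late = wdog_response(freq, w=0.9)   # strong surround: bandpass shape
```

    With a small surround weight the response peaks at DC (lowpass); as the weight grows with viewing time, the DC response is suppressed and the peak moves to a nonzero frequency (bandpass), so the filter extracts different information from the input image over time.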

  10. SharedCanvas: A Collaborative Model for Medieval Manuscript Layout Dissemination

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanderson, Robert D.; Albritton, Benjamin; Schwemmer, Rafael

    2011-01-01

    In this paper we present a model based on the principles of Linked Data that can be used to describe the interrelationships of images, texts and other resources to facilitate the interoperability of repositories of medieval manuscripts or other culturally important handwritten documents. The model is designed from a set of requirements derived from the real world use cases of some of the largest digitized medieval content holders, and instantiations of the model are intended as the input to collection-independent page turning and scholarly presentation interfaces. A canvas painting paradigm, such as in PDF and SVG, was selected based on the lack of a one-to-one correlation between image and page, and to fulfill complex requirements such as when the full text of a page is known, but only fragments of the physical object remain. The model is implemented using technologies such as OAI-ORE Aggregations and OAC Annotations, as the fundamental building blocks of emerging Linked Digital Libraries. The model and implementation are evaluated through prototypes of both content providing and consuming applications. Although the system was designed from requirements drawn from the medieval manuscript domain, it is applicable to any layout-oriented presentation of images of text.

  11. Determining skeletal muscle architecture with Laplacian simulations: a comparison with diffusion tensor imaging.

    PubMed

    Handsfield, Geoffrey G; Bolsterlee, Bart; Inouye, Joshua M; Herbert, Robert D; Besier, Thor F; Fernandez, Justin W

    2017-12-01

    Determination of skeletal muscle architecture is important for accurately modeling muscle behavior. Current methods for 3D muscle architecture determination can be costly and time-consuming, making them prohibitive for clinical or modeling applications. Computational approaches such as Laplacian flow simulations can estimate muscle fascicle orientation based on muscle shape and aponeurosis location. The accuracy of this approach is unknown, however, since it has not been validated against other standards for muscle architecture determination. In this study, muscle architectures from the Laplacian approach were compared to those determined from diffusion tensor imaging in eight adult medial gastrocnemius muscles. The datasets were subdivided into training and validation sets, and computational fluid dynamics software was used to conduct Laplacian simulations. In training sets, inputs of muscle geometry, aponeurosis location, and geometric flow guides resulted in good agreement between methods. Application of the method to validation sets showed no significant differences in pennation angle (mean difference [Formula: see text]) or fascicle length (mean difference 0.9 mm). Laplacian simulation was thus effective at predicting gastrocnemius muscle architectures in healthy volunteers using imaging-derived muscle shape and aponeurosis locations. This method may serve as a tool for determining muscle architecture in silico and as a complement to other approaches.
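
The core idea of the Laplacian approach can be shown on a toy 2-D grid: fix one "aponeurosis" boundary at potential 0 and the other at 1, relax Laplace's equation on the interior by Jacobi iteration, and read the local fascicle direction from the potential gradient. The grid, boundary placement, and iteration count below are illustrative assumptions, not the paper's actual pipeline.

```python
def solve_laplace(nrows, ncols, iters=2000):
    """Relax Laplace's equation between two fixed boundaries (Jacobi iteration)."""
    phi = [[0.0] * ncols for _ in range(nrows)]   # top row: potential 0 (origin)
    for c in range(ncols):
        phi[nrows - 1][c] = 1.0                   # bottom row: potential 1 (insertion)
    for _ in range(iters):
        new = [row[:] for row in phi]
        for r in range(1, nrows - 1):
            for c in range(ncols):
                # reflective (zero-flux) side boundaries
                left = phi[r][c - 1] if c > 0 else phi[r][c + 1]
                right = phi[r][c + 1] if c < ncols - 1 else phi[r][c - 1]
                new[r][c] = 0.25 * (phi[r - 1][c] + phi[r + 1][c] + left + right)
        phi = new
    return phi

def fascicle_direction(phi, r, c):
    """Central-difference gradient at an interior node: the local fiber direction."""
    dr = (phi[r + 1][c] - phi[r - 1][c]) / 2.0
    dc = (phi[r][c + 1] - phi[r][c - 1]) / 2.0
    return dr, dc
```

On a rectangular slab the potential is linear between the boundaries, so the recovered fiber direction is uniform; real muscle geometries bend the field and hence the predicted fascicles.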

  12. Slope histogram distribution-based parametrisation of Martian geomorphic features

    NASA Astrophysics Data System (ADS)

    Balint, Zita; Székely, Balázs; Kovács, Gábor

    2014-05-01

    The application of geomorphometric methods to the large Martian digital topographic datasets paves the way for analysing Martian areomorphic processes in more detail. One such method is the analysis of local slope distributions. For this purpose a visualization program was developed that calculates local slope histograms and compares them based on a Kolmogorov distance criterion. As input data we used digital terrain models (DTMs) derived from High Resolution Stereo Camera (HRSC) images of various Martian regions. The Kolmogorov-criterion-based discrimination produces classes of slope histograms, which are displayed in different colours to form an image map, so that the distribution of the various classes can be visualized. Our goal is to create a local-slope-histogram-based classification for large Martian areas in order to obtain information about the general morphological characteristics of a region. This is a contribution of the TMIS.ascrea project, financed by the Austrian Research Promotion Agency (FFG). The present research was partly realized within the TÁMOP 4.2.4.A/2-11-1-2012-0001 high priority "National Excellence Program - Elaborating and Operating an Inland Student and Researcher Personal Support System" convergence programme's scholarship support, using Hungarian state and European Union funds and co-financing from the European Social Fund.
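
The histogram comparison step can be sketched directly: bin the local slope angles, then take the Kolmogorov distance as the maximum absolute difference between the two cumulative histograms. The bin layout and toy slope samples below are illustrative, not the project's actual parameters.

```python
def slope_histogram(slopes, nbins=18, max_slope=90.0):
    """Normalized histogram of slope angles in degrees."""
    hist = [0] * nbins
    for s in slopes:
        b = min(int(s / max_slope * nbins), nbins - 1)
        hist[b] += 1
    total = float(len(slopes))
    return [h / total for h in hist]

def kolmogorov_distance(h1, h2):
    """Maximum absolute difference between the two cumulative distributions."""
    c1 = c2 = 0.0
    d = 0.0
    for a, b in zip(h1, h2):
        c1 += a
        c2 += b
        d = max(d, abs(c1 - c2))
    return d
```

Windows of the DTM whose histograms lie within some distance threshold of each other would then fall into the same class on the image map.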

  13. Java Library for Input and Output of Image Data and Metadata

    NASA Technical Reports Server (NTRS)

    Deen, Robert; Levoe, Steven

    2003-01-01

    A Java-language library supports input and output (I/O) of image data and metadata (label data) in the format of the Video Image Communication and Retrieval (VICAR) image-processing software and in several similar formats, including a subset of the Planetary Data System (PDS) image file format. The library does the following: It provides a low-level direct-access layer, enabling an application subprogram to read and write specific image files, lines, or pixels, and to manipulate metadata directly. Two coding/decoding subprograms ("codecs" for short) based on the Java Advanced Imaging (JAI) software provide access to VICAR and PDS images in a file-format-independent manner. The VICAR and PDS codecs enable any program that conforms to the JAI codec specification to use VICAR or PDS images automatically, without specific knowledge of the VICAR or PDS format. The library also includes Image I/O plug-in subprograms for the VICAR and PDS formats. Application programs that conform to the Image I/O specification of Java version 1.4 can utilize any image format for which such a plug-in subprogram exists, without specific knowledge of the format itself. Like the aforementioned codecs, the VICAR and PDS Image I/O plug-in subprograms support reading and writing of metadata.

  14. Image quality assessment using deep convolutional networks

    NASA Astrophysics Data System (ADS)

    Li, Yezhou; Ye, Xiang; Li, Yong

    2017-12-01

    This paper proposes a method for accurately assessing image quality without a reference image by using a deep convolutional neural network. Existing training-based methods usually utilize a compact set of linear filters to learn features of images captured by different sensors and assess their quality. These methods may not be able to learn the semantic features that are intimately related to those used in human subjective assessment. Observing this drawback, this work proposes training a deep convolutional neural network (CNN) with labelled images for image quality assessment. The ReLU activations in the CNN allow non-linear transformations for extracting high-level image features, providing a more reliable assessment of image quality than linear filters. To enable the neural network to take images of arbitrary size as input, spatial pyramid pooling (SPP) is introduced between the top convolutional layer and the fully-connected layer. In addition, SPP makes the CNN robust to object deformations to a certain extent. The proposed method takes an image as input, carries out an end-to-end learning process, and outputs the quality of the image. It is tested on public datasets. Experimental results show that it outperforms existing methods by a large margin and can accurately assess the quality of images taken by different sensors at varying sizes.
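
Spatial pyramid pooling is what makes the fixed-length output possible: the feature map is max-pooled over a 1x1, 2x2 and 4x4 grid of bins, so any input size yields the same vector length. A minimal pure-Python sketch follows; the pyramid levels (1, 2, 4) are the common SPP-net choice and an assumption here, since the paper's exact levels are not given in the abstract.

```python
def spp_max(feature_map, levels=(1, 2, 4)):
    """Fixed-length vector from an h x w map (requires h, w >= max(levels))."""
    h, w = len(feature_map), len(feature_map[0])
    out = []
    for n in levels:                       # one n x n grid of bins per level
        for by in range(n):
            for bx in range(n):
                y0, y1 = by * h // n, (by + 1) * h // n
                x0, x1 = bx * w // n, (bx + 1) * w // n
                out.append(max(feature_map[y][x]
                               for y in range(y0, y1)
                               for x in range(x0, x1)))
    return out
```

With levels (1, 2, 4) the output always has 1 + 4 + 16 = 21 values per channel, regardless of input size, which is what lets the fully-connected layer follow.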

  15. Reducing weight precision of convolutional neural networks towards large-scale on-chip image recognition

    NASA Astrophysics Data System (ADS)

    Ji, Zhengping; Ovsiannikov, Ilia; Wang, Yibing; Shi, Lilong; Zhang, Qiang

    2015-05-01

    In this paper, we develop a server-client quantization scheme to reduce the bit resolution of a deep learning architecture, i.e., convolutional neural networks, for image recognition tasks. Low bit resolution is an important factor in bringing deep learning neural networks to hardware implementation, as it directly determines cost and power consumption. We aim to reduce the bit resolution of the network without sacrificing its performance. To this end, we design a new quantization algorithm called supervised iterative quantization to reduce the bit resolution of learned network weights. In the training stage, supervised iterative quantization is conducted on the server via two steps: apply k-means-based adaptive quantization to the learned network weights, then retrain the network based on the quantized weights. These two steps are alternated until a convergence criterion is met. In the testing stage, the network configuration and low-bit weights are loaded onto the client hardware device to recognize incoming input in real time, where optimized but expensive quantization becomes infeasible. Considering this, we adopt uniform quantization for the inputs and internal network responses (called feature maps) to keep on-chip expenses low. The convolutional neural network with reduced weight and input/response precision is demonstrated in recognizing two types of images: hand-written digit images and real-life images in office scenarios. Both results show that the new network achieves the performance of the neural network with full bit resolution, even though the bit resolution of both weights and inputs is significantly reduced, e.g., from 64 bits to 4-5 bits.
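
The server-side quantization step can be sketched as 1-D k-means over the weight values: cluster the weights into 2^b centroids and replace each weight by its nearest centroid. This is a minimal sketch of that one step; the retraining half of the loop and the uniform input quantization are omitted.

```python
def kmeans_quantize(weights, bits=2, iters=20):
    """Quantize a flat weight list to 2**bits k-means centroids."""
    k = 2 ** bits
    lo, hi = min(weights), max(weights)
    # initialize centroids evenly over the weight range
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        # assignment step: nearest centroid per weight
        groups = [[] for _ in range(k)]
        for w in weights:
            j = min(range(k), key=lambda i: abs(w - centroids[i]))
            groups[j].append(w)
        # update step: move centroids to group means (keep empty clusters put)
        centroids = [sum(g) / len(g) if g else centroids[i]
                     for i, g in enumerate(groups)]
    return [min(centroids, key=lambda c: abs(w - c)) for w in weights]
```

After quantization every weight takes one of at most 2^bits distinct values, so only the centroid table plus small indices need to be stored on-chip.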

  16. Forecast Modelling via Variations in Binary Image-Encoded Information Exploited by Deep Learning Neural Networks.

    PubMed

    Liu, Da; Xu, Ming; Niu, Dongxiao; Wang, Shoukai; Liang, Sai

    2016-01-01

    Traditional forecasting models fit a function approximation from independent variables to dependent variables. However, they usually run into trouble when data are presented in various formats, such as text, voice and image. This study proposes a novel image-encoded forecasting method in which the input and output are binary two-dimensional (2D) images transformed from decimal data. Omitting any data analysis or cleansing steps for simplicity, all raw variables were selected and converted to binary digital images as the input of a deep learning model, a convolutional neural network (CNN). Using shared weights, pooling and multiple-layer back-propagation techniques, the CNN was adopted to locate the nexus among variations in the local binary digital images. Owing to computing capabilities originally developed for binary bitmap manipulation, this model has significant potential for forecasting with vast volumes of data. The model was validated on a power load forecasting dataset from the Global Energy Forecasting Competition 2012.
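
The decimal-to-binary-image transform can be sketched as follows: each value becomes one row of bits, so a window of values forms a binary 2-D "image" the CNN can consume, and the inverse map recovers the forecast. The 8-bit width and MSB-first ordering are illustrative assumptions, not the paper's stated encoding.

```python
def encode_binary_image(values, bits=8):
    """Each non-negative integer -> one row of `bits` binary pixels (MSB first)."""
    return [[(v >> (bits - 1 - i)) & 1 for i in range(bits)] for v in values]

def decode_binary_image(image):
    """Inverse transform: rows of bits back to decimal values."""
    return [sum(bit << (len(row) - 1 - i) for i, bit in enumerate(row))
            for row in image]
```

A window of n values thus becomes an n x 8 binary image, and the network's binary output image is decoded the same way.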

  17. Electro-Optical Imaging Fourier-Transform Spectrometer

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin; Zhou, Hanying

    2006-01-01

    An electro-optical (E-O) imaging Fourier-transform spectrometer (IFTS), now under development, is a prototype of improved imaging spectrometers to be used for hyperspectral imaging, especially in the infrared spectral region. Unlike both imaging and non-imaging traditional Fourier-transform spectrometers, the E-O IFTS does not contain any moving parts. Elimination of the moving parts and the associated actuator mechanisms and supporting structures would increase reliability while enabling reductions in size and mass, relative to traditional Fourier-transform spectrometers that offer equivalent capabilities. Elimination of moving parts would also eliminate the vibrations caused by the motions of those parts. Figure 1 schematically depicts a traditional Fourier-transform spectrometer, wherein a critical time delay is varied by translating one of the mirrors of a Michelson interferometer. The time-dependent optical output is a periodic representation of the input spectrum. Data characterizing the input spectrum are generated through fast-Fourier-transform (FFT) post-processing of the output in conjunction with the varying time delay.
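
The FFT post-processing step can be illustrated with a toy example: a two-line input spectrum produces an interferogram (a sum of cosines over the varying delay), and a discrete Fourier transform of that interferogram recovers the lines. The frequencies, amplitudes and sample count are arbitrary, and a plain DFT stands in for the FFT for clarity.

```python
import math

def interferogram(freqs, amps, n_samples):
    """Detector signal as a function of the (discretized) time delay."""
    return [sum(a * math.cos(2 * math.pi * f * t / n_samples)
                for f, a in zip(freqs, amps))
            for t in range(n_samples)]

def dft_magnitude(signal):
    """Magnitude spectrum of the first half of the DFT bins."""
    n = len(signal)
    mags = []
    for k in range(n // 2):
        re = sum(s * math.cos(2 * math.pi * k * t / n) for t, s in enumerate(signal))
        im = -sum(s * math.sin(2 * math.pi * k * t / n) for t, s in enumerate(signal))
        mags.append(math.hypot(re, im))
    return mags
```

Peaks in the magnitude spectrum land at the input line frequencies, with heights proportional to the line amplitudes.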

  18. Unsupervised texture image segmentation by improved neural network ART2

    NASA Technical Reports Server (NTRS)

    Wang, Zhiling; Labini, G. Sylos; Mugnuolo, R.; Desario, Marco

    1994-01-01

    We propose a segmentation algorithm for texture images for a computer vision system on a space robot. An improved adaptive resonance theory network (ART2) for analog input patterns is adapted to classify the image based on a set of texture features extracted by a fast spatial gray level dependence method (SGLDM). The nonlinear thresholding functions in the input layer of the neural network consist of two parts: first, to reduce the effect of image noise on the features, a set of sigmoid functions chosen according to the type of each feature; second, to enhance the contrast of the features, fuzzy mapping functions. The number of clusters in the output layer is increased automatically by an autogrowing mechanism whenever a new pattern occurs. Experimental results and original and segmented pictures are shown, including a comparison between this approach and the K-means algorithm. The system, written in C, runs on a SUN-4/330 SPARCstation with an IT-150 image board and a CCD camera.

  19. Fourier-Mellin moment-based intertwining map for image encryption

    NASA Astrophysics Data System (ADS)

    Kaur, Manjit; Kumar, Vijay

    2018-03-01

    In this paper, a robust image encryption technique that utilizes Fourier-Mellin moments and an intertwining logistic map is proposed. The Fourier-Mellin moment-based intertwining logistic map has been designed to overcome the low sensitivity to the input image. A Multi-objective Non-Dominated Sorting Genetic Algorithm (NSGA-II) based on Reinforcement Learning (MNSGA-RL) has been used to optimize the required parameters of the intertwining logistic map. Fourier-Mellin moments are used to make the secret keys more secure. Thereafter, permutation and diffusion operations are carried out on the input image using the secret keys. The performance of the proposed image encryption technique has been evaluated on five well-known benchmark images and compared with seven existing encryption techniques. The experimental results reveal that the proposed technique outperforms the others in terms of entropy, correlation analysis, unified average changing intensity, and number of changing pixel rate. The simulation results also reveal that the proposed technique provides a high level of security and robustness against various types of attacks.
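
The diffusion idea behind such chaos-based ciphers can be sketched with the plain logistic map x -> r*x*(1-x) as a keystream generator; this is a deliberate simplification, since the paper's intertwining variant couples three such sequences and adds a permutation stage that is omitted here.

```python
def logistic_keystream(x0, r, n, burn_in=100):
    """Pseudo-random byte stream from the logistic map (chaotic for r near 4)."""
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = r * x * (1 - x)
    stream = []
    for _ in range(n):
        x = r * x * (1 - x)
        stream.append(int(x * 256) % 256)
    return stream

def xor_cipher(pixels, x0=0.3579, r=3.99):
    """XOR each pixel with the keystream; applying it twice decrypts.

    The key values x0 and r here are arbitrary illustrative choices."""
    key = logistic_keystream(x0, r, len(pixels))
    return [p ^ k for p, k in zip(pixels, key)]
```

Because XOR is its own inverse, decryption is the same operation with the same key parameters; the cited technique derives those parameters from Fourier-Mellin moments and MNSGA-RL optimization instead of fixing them.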

  20. Compact hybrid optoelectrical unit for image processing and recognition

    NASA Astrophysics Data System (ADS)

    Cheng, Gang; Jin, Guofan; Wu, Minxian; Liu, Haisong; He, Qingsheng; Yuan, ShiFu

    1998-07-01

    In this paper a compact hybrid opto-electrical unit (CHOEU) for digital image processing and recognition is proposed. The central part of CHOEU is an incoherent optical correlator, realized with a SHARP QA-1200 8.4-inch active-matrix TFT liquid crystal display panel that serves as two real-time spatial light modulators, one for the input image and one for the reference template. CHOEU performs two main processing tasks: digital filtering and object matching. Using CHOEU, an edge-detection operator is realized to extract edges from the input images. The preprocessed images are then sent to the object recognition unit to identify important targets. A novel template-matching method is proposed for gray-tone image recognition, in which a positive and negative cycle-encoding method realizes the absolute-difference pixel-matching measurement simply on a correlator structure. The system has good fault tolerance to rotation distortion, Gaussian noise and information loss. Experiments are presented at the end of this paper.
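
The absolute-difference matching measurement that the cycle-encoding scheme realizes optically can be computed directly in software as a sum-of-absolute-differences (SAD) template search; this digital sketch shows the measurement itself, not the optical correlator implementation.

```python
def sad_match(image, template):
    """Return the (row, col) offset where the template's SAD is lowest."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = None, None
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            # sum of absolute pixel differences at this offset
            sad = sum(abs(image[r + y][c + x] - template[y][x])
                      for y in range(th) for x in range(tw))
            if best is None or sad < best:
                best, best_pos = sad, (r, c)
    return best_pos
```

A SAD of zero means an exact match; small nonzero values give the tolerance to noise and partial information loss described in the abstract.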
