Sample records for image-derived input function

  1. Comparison of the Diagnostic Accuracy of DSC- and Dynamic Contrast-Enhanced MRI in the Preoperative Grading of Astrocytomas.

    PubMed

    Nguyen, T B; Cron, G O; Perdrizet, K; Bezzina, K; Torres, C H; Chakraborty, S; Woulfe, J; Jansen, G H; Sinclair, J; Thornhill, R E; Foottit, C; Zanette, B; Cameron, I G

    2015-11-01

    Dynamic contrast-enhanced MR imaging parameters can be biased by poor measurement of the vascular input function. We have compared the diagnostic accuracy of dynamic contrast-enhanced MR imaging by using a phase-derived vascular input function and "bookend" T1 measurements with DSC MR imaging for preoperative grading of astrocytomas. This prospective study included 48 patients with a new pathologic diagnosis of an astrocytoma. Preoperative MR imaging was performed at 3T, which included 2 injections of 5-mL gadobutrol for dynamic contrast-enhanced and DSC MR imaging. During dynamic contrast-enhanced MR imaging, both magnitude and phase images were acquired to estimate plasma volume obtained from phase-derived vascular input function (Vp_Φ) and volume transfer constant obtained from phase-derived vascular input function (K(trans)_Φ) as well as plasma volume obtained from magnitude-derived vascular input function (Vp_SI) and volume transfer constant obtained from magnitude-derived vascular input function (K(trans)_SI). From DSC MR imaging, corrected relative CBV was computed. Four ROIs were placed over the solid part of the tumor, and the highest value among the ROIs was recorded. A Mann-Whitney U test was used to test for differences between grades. Diagnostic accuracy was assessed by using receiver operating characteristic analysis. Vp_Φ and K(trans)_Φ values were lower for grade II compared with grade III astrocytomas (P < .05). Vp_SI and K(trans)_SI were not significantly different between grade II and grade III astrocytomas (P = .08-.15). Relative CBV and dynamic contrast-enhanced MR imaging parameters except for K(trans)_SI were lower for grade III compared with grade IV (P ≤ .05). In differentiating low- and high-grade astrocytomas, we found no statistically significant difference in diagnostic accuracy between relative CBV and dynamic contrast-enhanced MR imaging parameters. 
In the preoperative grading of astrocytomas, the diagnostic accuracy of dynamic contrast-enhanced MR imaging parameters is similar to that of relative CBV. © 2015 by American Journal of Neuroradiology.

  2. Influence of speckle image reconstruction on photometric precision for large solar telescopes

    NASA Astrophysics Data System (ADS)

    Peck, C. L.; Wöger, F.; Marino, J.

    2017-11-01

    Context. High-resolution observations from large solar telescopes require adaptive optics (AO) systems to overcome image degradation caused by Earth's turbulent atmosphere. AO corrections are, however, only partial. Achieving near-diffraction limited resolution over a large field of view typically requires post-facto image reconstruction techniques to reconstruct the source image. Aims: This study aims to examine the expected photometric precision of amplitude reconstructed solar images calibrated using models for the on-axis speckle transfer functions and input parameters derived from AO control data. We perform a sensitivity analysis of the photometric precision under variations in the model input parameters for high-resolution solar images consistent with four-meter class solar telescopes. Methods: Using simulations of both atmospheric turbulence and partial compensation by an AO system, we computed the speckle transfer function under variations in the input parameters. We then convolved high-resolution numerical simulations of the solar photosphere with the simulated atmospheric transfer function, and subsequently deconvolved them with the model speckle transfer function to obtain a reconstructed image. To compute the resulting photometric precision, we compared the intensity of the original image with the reconstructed image. Results: The analysis demonstrates that high photometric precision can be obtained for speckle amplitude reconstruction using speckle transfer function models combined with AO-derived input parameters. Additionally, it shows that the reconstruction is most sensitive to the input parameter that characterizes the atmospheric distortion, and sub-2% photometric precision is readily obtained when it is well estimated.

  3. Arterial input function derived from pairwise correlations between PET-image voxels.

    PubMed

    Schain, Martin; Benjaminsson, Simon; Varnäs, Katarina; Forsberg, Anton; Halldin, Christer; Lansner, Anders; Farde, Lars; Varrone, Andrea

    2013-07-01

    A metabolite-corrected arterial input function is a prerequisite for quantification of positron emission tomography (PET) data by compartmental analysis. This quantitative approach is also necessary for radioligands without suitable reference regions in brain. The measurement is laborious and requires cannulation of a peripheral artery, a procedure that can be associated with patient discomfort and potential adverse events. A non-invasive procedure for obtaining the arterial input function is thus preferable. In this study, we present a novel method to obtain image-derived input functions (IDIFs). The method is based on calculation of the Pearson correlation coefficient between the time-activity curves of voxel pairs in the PET image to localize voxels displaying blood-like behavior. The method was evaluated using data obtained in human studies with the radioligands [(11)C]flumazenil and [(11)C]AZ10419369, and its performance was compared with that of three previously published methods. The distribution volumes (VT) obtained using IDIFs were compared with those obtained using traditional arterial measurements. Overall, the agreement in VT was good (∼3% difference) for input functions obtained using the pairwise correlation approach. This approach performed similarly to, or even better than, the other methods and could be considered in applied clinical studies. Applications to other radioligands are needed for further verification.
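
A minimal sketch of the pairwise-correlation idea in this abstract: z-score each voxel's time-activity curve so that dot products become Pearson correlation coefficients, then keep voxels that correlate strongly with many other voxels (blood-like behavior). The array layout, correlation threshold, and minimum-partner count below are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def pairwise_blood_mask(tacs, corr_threshold=0.95, min_partners=3):
    """Flag voxels whose time-activity curves (rows of `tacs`, shape
    [n_voxels, n_frames]) correlate strongly with many other voxels,
    i.e. display a shared, blood-like kinetic."""
    # z-score each TAC so that a scaled dot product equals a Pearson correlation
    z = tacs - tacs.mean(axis=1, keepdims=True)
    z /= z.std(axis=1, keepdims=True) + 1e-12
    corr = (z @ z.T) / tacs.shape[1]      # full voxel-by-voxel correlation matrix
    np.fill_diagonal(corr, 0.0)           # ignore self-correlation
    partners = (corr > corr_threshold).sum(axis=1)
    return partners >= min_partners       # boolean mask of candidate blood voxels
```

On real data, the surviving voxels' curves would then be averaged and metabolite-corrected to form the IDIF.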

  4. Application of image-derived and venous input functions in major depression using [carbonyl-(11)C]WAY-100635.

    PubMed

    Hahn, Andreas; Nics, Lukas; Baldinger, Pia; Wadsak, Wolfgang; Savli, Markus; Kraus, Christoph; Birkfellner, Wolfgang; Ungersboeck, Johanna; Haeusler, Daniela; Mitterhauser, Markus; Karanikas, Georgios; Kasper, Siegfried; Frey, Richard; Lanzenberger, Rupert

    2013-04-01

    Image-derived input functions (IDIFs) represent a promising non-invasive alternative to arterial blood sampling for quantification in positron emission tomography (PET) studies. However, routine applications in patients and longitudinal designs are largely missing despite widespread attempts in healthy subjects. The aim of this study was to apply a previously validated approach to a clinical sample of patients with major depressive disorder (MDD) before and after electroconvulsive therapy (ECT). Eleven scans from 5 patients with venous blood sampling were obtained with the radioligand [carbonyl-(11)C]WAY-100635 at baseline, before and after 11.0±1.2 ECT sessions. IDIFs were defined by two different image reconstruction algorithms: 1) OSEM with subsequent partial volume correction (OSEM+PVC) and 2) reconstruction-based modelling of the point spread function (TrueX). Serotonin-1A receptor (5-HT1A) binding potentials (BPP, BPND) were quantified with a two-tissue compartment model (2TCM) and a reference region model (MRTM2). Compared to MRTM2, good agreement in 5-HT1A BPND was found when using input functions from OSEM+PVC (R(2)=0.82) but not TrueX (R(2)=0.57, p<0.001), which is further reflected by lower IDIF peaks for TrueX (p<0.001). Following ECT, decreased 5-HT1A BPND and BPP were found with the 2TCM using OSEM+PVC (23%-35%), except for one patient showing only subtle changes. In contrast, MRTM2 and IDIFs from TrueX gave unstable results for this patient, most probably due to a 2.4-fold underestimation of non-specific binding. Using image-derived and venous input functions defined by OSEM with subsequent PVC, we confirm previously reported decreases in 5-HT1A binding in MDD patients after ECT. In contrast to reference region modeling, quantification with image-derived input functions showed consistent results in a clinical setting due to accurate modeling of non-specific binding with OSEM+PVC. Copyright © 2013 Elsevier Inc. All rights reserved.

  5. Image-derived and arterial blood sampled input functions for quantitative PET imaging of the angiotensin II subtype 1 receptor in the kidney

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, Tao; Tsui, Benjamin M. W.; Li, Xin

    Purpose: The radioligand {sup 11}C-KR31173 has been introduced for positron emission tomography (PET) imaging of the angiotensin II subtype 1 receptor in the kidney in vivo. To study the biokinetics of {sup 11}C-KR31173 with a compartmental model, the input function is needed. Collection and analysis of arterial blood samples are the established approach to obtain the input function but they are not feasible in patients with renal diseases. The goal of this study was to develop a quantitative technique that can provide an accurate image-derived input function (ID-IF) to replace the conventional invasive arterial sampling and test the method in pigs with the goal of translation into human studies. Methods: The experimental animals were injected with [{sup 11}C]KR31173 and scanned up to 90 min with dynamic PET. Arterial blood samples were collected for the artery-derived input function (AD-IF) and used as a gold standard for the ID-IF. Before PET, magnetic resonance angiography of the kidneys was obtained to provide the anatomical information required for derivation of the recovery coefficients in the abdominal aorta, a requirement for partial volume correction of the ID-IF. Different image reconstruction methods, filtered back projection (FBP) and ordered subset expectation maximization (OS-EM), were investigated for the best trade-off between bias and variance of the ID-IF. The effects of kidney uptake on the quantitative accuracy of the ID-IF were also studied. Biological variables such as red blood cell binding and radioligand metabolism were also taken into consideration. A single blood sample was used for calibration in the later phase of the input function. Results: In the first 2 min after injection, the OS-EM-based ID-IF was found to be biased, and the bias was found to be induced by the kidney uptake. No such bias was found with the FBP-based image reconstruction method. 
However, the OS-EM-based image reconstruction was found to reduce variance in the subsequent phase of the ID-IF. The combined use of FBP and OS-EM resulted in reduced bias and noise. After performing all the necessary corrections, the areas under the curves (AUCs) of the ID-IF were close to those of the AD-IF (average AUC ratio = 1 ± 0.08) during the early phase. When applied in a two-tissue-compartmental kinetic model, the average difference between the estimated model parameters from ID-IF and AD-IF was 10%, which was within the error of the estimation method. Conclusions: The bias of radioligand concentration in the aorta from the OS-EM image reconstruction is significantly affected by radioligand uptake in the adjacent kidney and cannot be neglected for quantitative evaluation. With careful calibrations and corrections, the ID-IF derived from quantitative dynamic PET images can be used as the input function of the compartmental model to quantify the renal kinetics of {sup 11}C-KR31173 in experimental animals, and the authors intend to evaluate this method in future human studies.

  6. Estimation of arterial input by a noninvasive image-derived method in a brain H2(15)O PET study: confirmation of arterial location using MR angiography

    NASA Astrophysics Data System (ADS)

    Muinul Islam, Muhammad; Tsujikawa, Tetsuya; Mori, Tetsuya; Kiyono, Yasushi; Okazawa, Hidehiko

    2017-06-01

    A noninvasive method to estimate the input function directly from H2(15)O brain PET data for measurement of cerebral blood flow (CBF) was proposed in this study. The image-derived input function (IDIF) method extracted the time-activity curves (TACs) of the major cerebral arteries at the skull base from the dynamic PET data. The extracted primordial IDIF showed almost the same radioactivity as the arterial input function (AIF) from sampled blood at the plateau part in the later phase, but significantly lower radioactivity in the initial arterial phase compared with that of the AIF-TAC. To correct the initial part of the IDIF, a dispersion function was applied and two constants for the correction were determined by fitting with the individual AIF in 15 patients with unilateral arterial stenoocclusive lesions. The areas under the curves (AUCs) from the two input functions showed good agreement, with a mean AUC(IDIF)/AUC(AIF) ratio of 0.92 ± 0.09. The final products of CBF and arterial-to-capillary vascular volume (V0) obtained from the IDIF and AIF showed no difference and had high correlation coefficients.
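
The initial-phase correction described here is commonly modeled as monoexponential dispersion: if the measured curve is the true input convolved with (1/τ)·exp(−t/τ), the true input can be recovered as C(t) + τ·dC/dt. The sketch below assumes that model with a single illustrative time constant (the paper fits two constants to individual AIFs) and includes the AUC comparison used above for validation.

```python
import numpy as np

def correct_dispersion(t, curve, tau):
    """Invert monoexponential dispersion: C_true(t) = C_meas(t) + tau * dC_meas/dt."""
    return curve + tau * np.gradient(curve, t)

def auc(t, curve):
    """Trapezoidal area under a sampled curve."""
    return float(np.sum((curve[1:] + curve[:-1]) * np.diff(t)) / 2.0)
```

An AUC(IDIF)/AUC(AIF) ratio near 1, like the 0.92 ± 0.09 reported above, indicates good overall agreement between the two input functions.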

  7. Correlation of Tumor Immunohistochemistry with Dynamic Contrast-Enhanced and DSC-MRI Parameters in Patients with Gliomas.

    PubMed

    Nguyen, T B; Cron, G O; Bezzina, K; Perdrizet, K; Torres, C H; Chakraborty, S; Woulfe, J; Jansen, G H; Thornhill, R E; Zanette, B; Cameron, I G

    2016-12-01

    Tumor CBV is a prognostic and predictive marker for patients with gliomas. Tumor CBV can be measured noninvasively with different MR imaging techniques; however, it is not clear which of these techniques most closely reflects histologically measured tumor CBV. Our aim was to investigate the correlations between dynamic contrast-enhanced and DSC-MR imaging parameters and immunohistochemistry in patients with gliomas. Forty-three patients with a new diagnosis of glioma underwent a preoperative MR imaging examination with dynamic contrast-enhanced and DSC sequences. Unnormalized and normalized cerebral blood volume was obtained from DSC MR imaging. Two sets of plasma volume and volume transfer constant maps were obtained from dynamic contrast-enhanced MR imaging. Plasma volume obtained from the phase-derived vascular input function and bookend T1 mapping (Vp_Φ) and volume transfer constant obtained from phase-derived vascular input function and bookend T1 mapping (K(trans)_Φ) were determined. Plasma volume obtained from magnitude-derived vascular input function (Vp_SI) and volume transfer constant obtained from magnitude-derived vascular input function (K(trans)_SI) were acquired, without T1 mapping. Using CD34 staining, we measured microvessel density and microvessel area within 3 representative areas of the resected tumor specimen. The Mann-Whitney U test was used to test for differences according to grade and degree of enhancement. The Spearman correlation was performed to determine the relationship between dynamic contrast-enhanced and DSC parameters and histopathologic measurements. Microvessel area, microvessel density, dynamic contrast-enhanced, and DSC-MR imaging parameters varied according to the grade and degree of enhancement (P < .05). A strong correlation was found between microvessel area and Vp_Φ and between microvessel area and unnormalized blood volume (rs ≥ 0.61). 
A moderate correlation was found between microvessel area and normalized blood volume, microvessel area and Vp_SI, microvessel area and K(trans)_Φ, microvessel area and K(trans)_SI, microvessel density and Vp_Φ, microvessel density and unnormalized blood volume, and microvessel density and normalized blood volume (0.44 ≤ rs ≤ 0.57). A weaker correlation was found between microvessel density and K(trans)_Φ and between microvessel density and K(trans)_SI (rs ≤ 0.41). With dynamic contrast-enhanced MR imaging, use of a phase-derived vascular input function and bookend T1 mapping improves the correlation between immunohistochemistry and plasma volume, but not between immunohistochemistry and the volume transfer constant. With DSC-MR imaging, normalization of tumor CBV could decrease the correlation with microvessel area. © 2016 by American Journal of Neuroradiology.

  8. Image-derived input function with factor analysis and a-priori information.

    PubMed

    Simončič, Urban; Zanotti-Fregonara, Paolo

    2015-02-01

    Quantitative PET studies often require the cumbersome and invasive procedure of arterial cannulation to measure the input function. This study sought to minimize the number of necessary blood samples by developing a factor-analysis-based image-derived input function (IDIF) methodology for dynamic PET brain studies. IDIF estimation was performed as follows: (a) carotid and background regions were segmented manually on an early PET time frame; (b) blood-weighted and tissue-weighted time-activity curves (TACs) were extracted with factor analysis; (c) factor analysis results were denoised and scaled using the voxels with the highest blood signal; (d) using population data and one blood sample at 40 min, whole-blood TAC was estimated from postprocessed factor analysis results; and (e) the parent concentration was finally estimated by correcting the whole-blood curve with measured radiometabolite concentrations. The methodology was tested using data from 10 healthy individuals imaged with [(11)C](R)-rolipram. The accuracy of IDIFs was assessed against full arterial sampling by comparing the area under the curve of the input functions and by calculating the total distribution volume (VT). The shape of the image-derived whole-blood TAC matched the reference arterial curves well, and the whole-blood area under the curves were accurately estimated (mean error 1.0±4.3%). The relative Logan-V(T) error was -4.1±6.4%. Compartmental modeling and spectral analysis gave less accurate V(T) results compared with Logan. A factor-analysis-based IDIF for [(11)C](R)-rolipram brain PET studies that relies on a single blood sample and population data can be used for accurate quantification of Logan-V(T) values.
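
Once an input function is available, image-derived or sampled, the Logan-VT values used for validation in this abstract come from a simple graphical analysis: integrated tissue activity, normalized by instantaneous tissue activity, is regressed against similarly normalized integrated plasma activity, and the late-time slope estimates VT. A generic sketch, in which the frame times, curves, and linearity start time t* are all assumed inputs rather than the authors' implementation:

```python
import numpy as np

def logan_vt(t, ct, cp, t_star):
    """Logan graphical analysis: the slope of
    (int_0^t Ct dtau) / Ct(t)  versus  (int_0^t Cp dtau) / Ct(t)
    for t >= t_star estimates the total distribution volume VT."""
    # running integrals by the trapezoidal rule
    int_ct = np.concatenate(([0.0], np.cumsum(np.diff(t) * (ct[1:] + ct[:-1]) / 2.0)))
    int_cp = np.concatenate(([0.0], np.cumsum(np.diff(t) * (cp[1:] + cp[:-1]) / 2.0)))
    keep = (t >= t_star) & (ct > 0)
    x = int_cp[keep] / ct[keep]
    y = int_ct[keep] / ct[keep]
    slope, _intercept = np.polyfit(x, y, 1)   # slope ~ VT once the plot is linear
    return slope
```

For a one-tissue compartment model the plot is exactly linear with slope K1/k2, which makes a convenient self-check for the routine.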

  9. Image classification at low light levels

    NASA Astrophysics Data System (ADS)

    Wernick, Miles N.; Morris, G. Michael

    1986-12-01

    An imaging photon-counting detector is used to achieve automatic sorting of two image classes. The classification decision is formed on the basis of the cross correlation between a photon-limited input image and a reference function stored in computer memory. Expressions for the statistical parameters of the low-light-level correlation signal are given and are verified experimentally. To obtain a correlation-based system for two-class sorting, it is necessary to construct a reference function that produces useful information for class discrimination. An expression for such a reference function is derived using maximum-likelihood decision theory. Theoretically predicted results are used to compare the performance of the maximum-likelihood reference function with that of Fukunaga-Koontz basis vectors and average filters. For each method, good class discrimination is found to result in milliseconds from a sparse sampling of the input image.

  10. Metabolic liver function measured in vivo by dynamic (18)F-FDGal PET/CT without arterial blood sampling.

    PubMed

    Horsager, Jacob; Munk, Ole Lajord; Sørensen, Michael

    2015-01-01

    Metabolic liver function can be measured by dynamic PET/CT with the radio-labelled galactose-analogue 2-[(18)F]fluoro-2-deoxy-D-galactose ((18)F-FDGal) in terms of hepatic systemic clearance of (18)F-FDGal (K, ml blood/ml liver tissue/min). The method requires arterial blood sampling from a radial artery (arterial input function), and the aim of this study was to develop a method for extracting an image-derived, non-invasive input function from a volume of interest (VOI). Dynamic (18)F-FDGal PET/CT data from 16 subjects without liver disease (healthy subjects) and 16 patients with liver cirrhosis were included in the study. Five different input VOIs were tested: four in the abdominal aorta and one in the left ventricle of the heart. Arterial input function from manual blood sampling was available for all subjects. K*-values were calculated using time-activity curves (TACs) from each VOI as input and compared to the K-value calculated using arterial blood samples as input. Each input VOI was tested on PET data reconstructed with and without resolution modelling. All five image-derived input VOIs yielded K*-values that correlated significantly with K calculated using arterial blood samples. Furthermore, TACs from two different VOIs yielded K*-values that did not statistically deviate from K calculated using arterial blood samples. A semicircle drawn in the posterior part of the abdominal aorta was the only VOI that was successful for both healthy subjects and patients as well as for PET data reconstructed with and without resolution modelling. Metabolic liver function using (18)F-FDGal PET/CT can be measured without arterial blood samples by using input data from a semicircle VOI drawn in the posterior part of the abdominal aorta.

  11. Reconstruction of input functions from a dynamic PET image with sequential administration of 15O2 and C15O2 for noninvasive and ultra-rapid measurement of CBF, OEF, and CMRO2.

    PubMed

    Kudomi, Nobuyuki; Maeda, Yukito; Yamamoto, Hiroyuki; Yamamoto, Yuka; Hatakeyama, Tetsuhiro; Nishiyama, Yoshihiro

    2018-05-01

    CBF, OEF, and CMRO2 images can be quantitatively assessed using PET. Their calculation requires arterial input functions, which require an invasive procedure. The aim of the present study was to develop a non-invasive approach with image-derived input functions (IDIFs) using an image from an ultra-rapid 15O2 and C15O2 protocol. Our technique consists of using a formula that expresses the input in terms of a tissue curve and its rate constants. For multiple tissue curves, the rate constants were estimated so as to minimize the differences among the inputs derived from the individual tissue curves. The estimated rate constants were used to express the inputs, and the mean of the estimated inputs was used as the IDIF. The method was tested in human subjects (n = 24). The estimated IDIFs were well reproduced against the measured ones. The difference in the calculated CBF, OEF, and CMRO2 values between the two methods was small (<10%) against the invasive method, and the values showed tight correlations (r = 0.97). Simulation showed that errors associated with the assumed parameters were less than ∼10%. Our results demonstrate that IDIFs can be reconstructed from tissue curves, suggesting the possibility of using a non-invasive technique to assess CBF, OEF, and CMRO2.

  12. Simplifying [18F]GE-179 PET: are both arterial blood sampling and 90-min acquisitions essential?

    PubMed

    McGinnity, Colm J; Riaño Barros, Daniela A; Trigg, William; Brooks, David J; Hinz, Rainer; Duncan, John S; Koepp, Matthias J; Hammers, Alexander

    2018-06-11

    The NMDA receptor radiotracer [18F]GE-179 has been used with 90-min scans and arterial plasma input functions. We explored whether (1) arterial blood sampling is avoidable and (2) shorter scans are feasible. For 20 existing [18F]GE-179 datasets, we generated (1) standardised uptake values (SUVs) over eight intervals; (2) volume of distribution (VT) images using population-based input functions (PBIFs), scaled using one parent plasma sample; and (3) VT images using three shortened datasets, using the original parent plasma input functions (ppIFs). Correlations with the original ppIF-derived 90-min VTs increased for later interval SUVs (maximal ρ = 0.78; 80-90 min). They were strong for PBIF-derived VTs (ρ = 0.90), but the between-subject coefficient of variation increased. Correlations were very strong for the 60/70/80-min original ppIF-derived VTs (ρ = 0.97-1.00), which suffered regionally variant negative bias. Where arterial blood sampling is available, reduction of scan duration to 60 min is feasible, but with negative bias. The performance of SUVs was more consistent across participants than that of PBIF-derived VTs.

  13. Population-based input function and image-derived input function for [¹¹C](R)-rolipram PET imaging: methodology, validation and application to the study of major depressive disorder.

    PubMed

    Zanotti-Fregonara, Paolo; Hines, Christina S; Zoghbi, Sami S; Liow, Jeih-San; Zhang, Yi; Pike, Victor W; Drevets, Wayne C; Mallinger, Alan G; Zarate, Carlos A; Fujita, Masahiro; Innis, Robert B

    2012-11-15

    Quantitative PET studies of neuroreceptor tracers typically require that arterial input function be measured. The aim of this study was to explore the use of a population-based input function (PBIF) and an image-derived input function (IDIF) for [(11)C](R)-rolipram kinetic analysis, with the goal of reducing - and possibly eliminating - the number of arterial blood samples needed to measure parent radioligand concentrations. A PBIF was first generated using [(11)C](R)-rolipram parent time-activity curves from 12 healthy volunteers (Group 1). Both invasive (blood samples) and non-invasive (body weight, body surface area, and lean body mass) scaling methods for PBIF were tested. The scaling method that gave the best estimate of the Logan-V(T) values was then used to determine the test-retest variability of PBIF in Group 1 and then prospectively applied to another population of 25 healthy subjects (Group 2), as well as to a population of 26 patients with major depressive disorder (Group 3). Results were also compared to those obtained with an image-derived input function (IDIF) from the internal carotid artery. In some subjects, we measured arteriovenous differences in [(11)C](R)-rolipram concentration to see whether venous samples could be used instead of arterial samples. Finally, we assessed the ability of IDIF and PBIF to discriminate depressed patients (MDD) and healthy subjects. Arterial blood-scaled PBIF gave better results than any non-invasive scaling technique. Excellent results were obtained when the blood-scaled PBIF was prospectively applied to the subjects in Group 2 (V(T) ratio 1.02±0.05; mean±SD) and Group 3 (V(T) ratio 1.03±0.04). Equally accurate results were obtained for two subpopulations of subjects drawn from Groups 2 and 3 who had very differently shaped (i.e. "flatter" or "steeper") input functions compared to PBIF (V(T) ratio 1.07±0.04 and 0.99±0.04, respectively). 
Results obtained via PBIF were equivalent to those obtained via IDIF (V(T) ratio 0.99±0.05 and 1.00±0.04 for healthy subjects and MDD patients, respectively). Retest variability of PBIF was equivalent to that obtained with full input function and IDIF (14.5%, 15.2%, and 14.1%, respectively). Due to [(11)C](R)-rolipram arteriovenous differences, venous samples could not be substituted for arterial samples. With both IDIF and PBIF, depressed patients had a 20% reduction in [(11)C](R)-rolipram binding as compared to control (two-way ANOVA: p=0.008 and 0.005, respectively). These results were almost equivalent to those obtained using 23 arterial samples. Although some arterial samples are still necessary, both PBIF and IDIF are accurate and precise alternatives to full arterial input function for [(11)C](R)-rolipram PET studies. Both techniques give accurate results with low variability, even for clinically different groups of subjects and those with very differently shaped input functions. Published by Elsevier Inc.
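
Anchoring a population-based input function with blood samples, as validated above, amounts in its simplest form to rescaling a template curve so that it passes through a measured sample. A minimal sketch of that idea (the template shape, sample time, and units below are assumptions; the study's actual scaling used its own population curves and sampling scheme):

```python
import numpy as np

def scale_pbif(t, template, sample_time, sample_activity):
    """Scale a population-based input-function template so that it matches
    one measured blood sample taken at `sample_time` (linear interpolation
    is used to read the template at that time)."""
    value_at_sample = np.interp(sample_time, t, template)
    return template * (sample_activity / value_at_sample)
```

If a subject's true input differs from the population template only by a global scale factor, a single well-timed sample recovers the whole curve exactly; shape differences between subject and template are what limit the method in practice.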

  14. Image-derived arterial input function for quantitative fluorescence imaging of receptor-drug binding in vivo

    PubMed Central

    Elliott, Jonathan T.; Samkoe, Kimberley S.; Davis, Scott C.; Gunn, Jason R.; Paulsen, Keith D.; Roberts, David W.; Pogue, Brian W.

    2017-01-01

    Receptor concentration imaging (RCI) with targeted-untargeted optical dye pairs has enabled in vivo immunohistochemistry analysis in preclinical subcutaneous tumors. Successful application of RCI to fluorescence-guided resection (FGR), so that quantitative molecular imaging of tumor-specific receptors could be performed in situ, would have a high impact. However, assumptions of pharmacokinetics, permeability and retention, as well as the lack of a suitable reference region limit the potential for RCI in human neurosurgery. In this study, an arterial input graphic analysis (AIGA) method is presented which is enabled by independent component analysis (ICA). The percent difference in arterial concentration between the image-derived arterial input function (AIFICA) and that obtained by an invasive method (ICACAR) was 2.0 ± 2.7% during the first hour of circulation of a targeted-untargeted dye pair in mice. Estimates of distribution volume and receptor concentration in tumor-bearing mice (n = 5) recovered using the AIGA technique did not differ significantly from values obtained using invasive AIF measurements (p=0.12). The AIGA method, enabled by the subject-specific AIFICA, was also applied in a rat orthotopic model of U-251 glioblastoma to obtain the first reported receptor concentration and distribution volume maps during open craniotomy. PMID:26349671

  15. Quantification of regional myocardial blood flow estimation with three-dimensional dynamic rubidium-82 PET and modified spillover correction model.

    PubMed

    Katoh, Chietsugu; Yoshinaga, Keiichiro; Klein, Ran; Kasai, Katsuhiko; Tomiyama, Yuuki; Manabe, Osamu; Naya, Masanao; Sakakibara, Mamoru; Tsutsui, Hiroyuki; deKemp, Robert A; Tamaki, Nagara

    2012-08-01

    Myocardial blood flow (MBF) estimation with rubidium-82 ((82)Rb) positron emission tomography (PET) is technically difficult because of the high spillover between regions of interest, especially due to the long positron range. We sought to develop a new algorithm to reduce the spillover in image-derived blood activity curves, using non-uniform weighted least-squares fitting. Fourteen volunteers underwent imaging with both 3-dimensional (3D) (82)Rb and (15)O-water PET at rest and during pharmacological stress. Whole left ventricular (LV) (82)Rb MBF was estimated using a one-compartment model, including a myocardium-to-blood spillover correction to estimate the corresponding blood input function Ca(t)whole. Regional K1 values were calculated using this uniform global input function, which simplifies the equations and enables robust estimation of MBF. To assess the robustness of the modified algorithm, inter-operator repeatability of 3D (82)Rb MBF was compared with that of a previously established method. Whole-LV correlation of (82)Rb MBF with (15)O-water MBF was better (P < .01) with the modified spillover correction method (r = 0.92 vs r = 0.60). The modified method also yielded significantly improved inter-operator repeatability of regional MBF quantification (r = 0.89) versus the established method (r = 0.82) (P < .01). A uniform global input function can suppress LV spillover into the image-derived blood input function, resulting in improved precision for MBF quantification with 3D (82)Rb PET.
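
The one-compartment model with spillover described above can be written as a forward operator: tissue activity is the input convolved with K1·exp(−k2·t), and the measured signal mixes tissue and blood by a spillover fraction. The sketch below shows the standard forward-model form with blood-pool spillover into the myocardial curve as a generic illustration; parameter names are assumed, and it is not the authors' fitting code (which corrects spillover in the opposite, myocardium-to-blood, direction as well).

```python
import numpy as np

def model_myocardial_tac(t, ca, K1, k2, f_bv):
    """Forward one-compartment model with blood-pool spillover:
    Cm(t) = K1 * (Ca conv exp(-k2*t)),  PET(t) = (1 - f_bv)*Cm(t) + f_bv*Ca(t).
    Assumes uniform frame spacing in `t`."""
    dt = t[1] - t[0]
    # discrete approximation of the convolution integral
    cm = K1 * np.convolve(ca, np.exp(-k2 * t))[: t.size] * dt
    return (1.0 - f_bv) * cm + f_bv * ca
```

Fitting K1, k2, and f_bv to a measured myocardial curve (with Ca taken from the image-derived blood input) is then a standard nonlinear least-squares problem.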

  16. An open tool for input function estimation and quantification of dynamic PET FDG brain scans.

    PubMed

    Bertrán, Martín; Martínez, Natalia; Carbajal, Guillermo; Fernández, Alicia; Gómez, Álvaro

    2016-08-01

    Positron emission tomography (PET) analysis of clinical studies is mostly restricted to qualitative evaluation. Quantitative analysis of PET studies is highly desirable to be able to compute an objective measurement of the process of interest in order to evaluate treatment response and/or compare patient data. But implementation of quantitative analysis generally requires the determination of the input function: the arterial blood or plasma activity which indicates how much tracer is available for uptake in the brain. The purpose of our work was to share with the community an open software tool that can assist in the estimation of this input function, and the derivation of a quantitative map from the dynamic PET study. Arterial blood sampling during the PET study is the gold standard method to get the input function, but is uncomfortable and risky for the patient so it is rarely used in routine studies. To overcome the lack of a direct input function, different alternatives have been devised and are available in the literature. These alternatives derive the input function from the PET image itself (image-derived input function) or from data gathered from previous similar studies (population-based input function). In this article, we present ongoing work that includes the development of a software tool that integrates several methods with novel strategies for the segmentation of blood pools and parameter estimation. The tool is available as an extension to the 3D Slicer software. Tests on phantoms were conducted in order to validate the implemented methods. We evaluated the segmentation algorithms over a range of acquisition conditions and vasculature size. Input function estimation algorithms were evaluated against ground truth of the phantoms, as well as on their impact over the final quantification map. End-to-end use of the tool yields quantification maps with [Formula: see text] relative error in the estimated influx versus ground truth on phantoms. 
The main contribution of this article is the development of an open-source, free to use tool that encapsulates several well-known methods for the estimation of the input function and the quantification of dynamic PET FDG studies. Some alternative strategies are also proposed and implemented in the tool for the segmentation of blood pools and parameter estimation. The tool was tested on phantoms with encouraging results that suggest that even bloodless estimators could provide a viable alternative to blood sampling for quantification using graphical analysis. The open tool is a promising opportunity for collaboration among investigators and further validation on real studies.
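    The graphical analysis used for quantification above can be illustrated with a short Patlak-plot sketch. This is a minimal numpy example on synthetic data, not the tool's actual code; the function name and the synthetic input are assumptions for illustration:

```python
import numpy as np

def patlak_ki(t, cp, ct, t_star=10.0):
    """Estimate the influx constant Ki (and intercept V) by Patlak graphical
    analysis: x = int_0^t Cp dt / Cp(t), y = Ct(t)/Cp(t). For irreversibly
    trapped tracers the plot is linear after t* with slope Ki."""
    icp = np.concatenate(([0.0], np.cumsum(np.diff(t) * 0.5 * (cp[1:] + cp[:-1]))))
    mask = t >= t_star
    x = icp[mask] / cp[mask]
    y = ct[mask] / cp[mask]
    ki, v = np.polyfit(x, y, 1)
    return ki, v

# Synthetic irreversible-uptake tissue curve: Ct = Ki*int(Cp) + V*Cp
t = np.linspace(0, 60, 121)           # minutes
cp = 5.0 * np.exp(-0.1 * t) + 1.0     # toy plasma input (kBq/mL)
icp = np.concatenate(([0.0], np.cumsum(np.diff(t) * 0.5 * (cp[1:] + cp[:-1]))))
ct = 0.02 * icp + 0.3 * cp            # ground truth: Ki = 0.02 /min, V = 0.3
ki, v = patlak_ki(t, cp, ct)
print(round(ki, 4), round(v, 3))
```

Because the synthetic data satisfy the Patlak model exactly, the fit recovers the ground-truth influx.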

  17. Automated method for relating regional pulmonary structure and function: integration of dynamic multislice CT and thin-slice high-resolution CT

    NASA Astrophysics Data System (ADS)

    Tajik, Jehangir K.; Kugelmass, Steven D.; Hoffman, Eric A.

    1993-07-01

    We have developed a method utilizing x-ray CT for relating pulmonary perfusion to global and regional anatomy, allowing for detailed study of structure-to-function relationships. A thick-slice, high temporal resolution mode is used to follow a bolus of contrast agent for blood flow evaluation and is fused with a high spatial resolution, thin-slice mode to obtain structure-function detail. To aid analysis of blood flow, we have developed a software module for our image analysis package (VIDA) to produce the combined structure-function image. Color-coded images representing blood flow, mean transit time, regional tissue content, regional blood volume, regional air content, etc. are generated and embedded in the high resolution volume image. A text file containing these values along with a voxel's 3-D coordinates is also generated. User input can be minimized to identifying the location of the pulmonary artery, from which the input function to a blood flow model is derived. Any flow model utilizing one input and one output function can be easily added to a user-selectable list. We present examples from our physiologically based research findings to demonstrate the strengths of combining dynamic CT and HRCT relative to other scanning modalities to uniquely characterize normal pulmonary physiology and pathophysiology.
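    The mean-transit-time calculation that such a one-input flow model performs can be sketched with the indicator-dilution first moment. This is a hedged toy example on a synthetic gamma-variate bolus, not the VIDA module itself:

```python
import numpy as np

def first_pass_mtt(t, c):
    """Mean transit time of a first-pass bolus curve: the area-weighted
    first moment, MTT = int(t*C(t))dt / int(C(t))dt. The central volume
    principle then gives flow as F = V / MTT."""
    trapz = lambda y: np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t))
    return trapz(t * c) / trapz(c)

# Toy gamma-variate bolus C(t) = t^2 * exp(-t/2); analytic MTT = 6 s
t = np.linspace(0, 60, 6001)           # seconds
c = t ** 2 * np.exp(-t / 2)
mtt = first_pass_mtt(t, c)
print(round(mtt, 2))                   # ≈ 6.0
```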

  18. Cerebral blood flow with [15O]water PET studies using an image-derived input function and MR-defined carotid centerlines

    NASA Astrophysics Data System (ADS)

    Fung, Edward K.; Carson, Richard E.

    2013-03-01

    Full quantitative analysis of brain PET data requires knowledge of the arterial input function into the brain. Such data are normally acquired by arterial sampling with corrections for delay and dispersion to account for the distant sampling site. Several attempts have been made to extract an image-derived input function (IDIF) directly from the internal carotid arteries that supply the brain and are often visible in brain PET images. We have devised a method of delineating the internal carotids in co-registered magnetic resonance (MR) images using the level-set method and applying the segmentations to PET images using a novel centerline approach. Centerlines of the segmented carotids were modeled as cubic splines and re-registered in PET images summed over the early portion of the scan. Using information from the anatomical center of the vessel should minimize partial volume and spillover effects. Centerline time-activity curves were taken as the mean of the values for points along the centerline interpolated from neighboring voxels. A scale factor correction was derived from calculation of cerebral blood flow (CBF) using gold standard arterial blood measurements. We have applied the method to human subject data from multiple injections of [15O]water on the HRRT. The method was assessed by calculating the area under the curve (AUC) of the IDIF and the CBF, and comparing these to values computed using the gold standard arterial input curve. The average ratio of IDIF to arterial AUC (apparent recovery coefficient: aRC) across 9 subjects with multiple (n = 69) injections was 0.49 ± 0.09 at 0-30 s post tracer arrival, 0.45 ± 0.09 at 30-60 s, and 0.46 ± 0.09 at 60-90 s. Gray and white matter CBF values were 61.4 ± 11.0 and 15.6 ± 3.0 mL/min/100 g tissue using sampled blood data. Using IDIF centerlines scaled by the average aRC over each subject's injections, gray and white matter CBF values were 61.3 ± 13.5 and 15.5 ± 3.4 mL/min/100 g tissue.
Using global average aRC values, the means were unchanged, and intersubject variability was noticeably reduced. This MR-based centerline method with local re-registration to [15O]water PET yields a consistent IDIF over multiple injections in the same subject, thus permitting the absolute quantification of CBF without arterial input function measurements.
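    The apparent recovery coefficient used above is simply the AUC ratio of the IDIF to the arterial curve over an early window, and scaling by it rescales the IDIF. A minimal sketch, assuming numpy and toy curves (the windowing and helper name are illustrative, not the authors' code):

```python
import numpy as np

def apparent_recovery(t, idif, aif, t0=0.0, t1=30.0):
    """aRC = AUC(IDIF) / AUC(AIF) over [t0, t1] seconds post tracer arrival.
    Dividing the IDIF by a subject- or population-average aRC rescales it."""
    m = (t >= t0) & (t <= t1)
    auc = lambda c: np.sum(0.5 * (c[m][1:] + c[m][:-1]) * np.diff(t[m]))
    return auc(idif) / auc(aif)

t = np.linspace(0, 90, 91)                 # seconds
aif = np.exp(-((t - 20.0) / 8.0) ** 2)     # toy arterial peak
idif = 0.5 * aif                           # IDIF recovering 50% of the AUC
arc = apparent_recovery(t, idif, aif)
corrected = idif / arc                     # rescaled IDIF
print(round(arc, 2))
```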

  19. Derivation of formulas for root-mean-square errors in location, orientation, and shape in triangulation solution of an elongated object in space

    NASA Technical Reports Server (NTRS)

    Long, S. A. T.

    1974-01-01

    Formulas are derived for the root-mean-square (rms) displacement, slope, and curvature errors in an azimuth-elevation image trace of an elongated object in space, as functions of the number and spacing of the input data points and the rms elevation error in the individual input data points from a single observation station. Also, formulas are derived for the total rms displacement, slope, and curvature error vectors in the triangulation solution of an elongated object in space due to the rms displacement, slope, and curvature errors, respectively, in the azimuth-elevation image traces from different observation stations. The total rms displacement, slope, and curvature error vectors provide useful measure numbers for determining the relative merits of two or more different triangulation procedures applicable to elongated objects in space.

  20. Towards quantitative [18F]FDG-PET/MRI of the brain: Automated MR-driven calculation of an image-derived input function for the non-invasive determination of cerebral glucose metabolic rates.

    PubMed

    Sundar, Lalith Ks; Muzik, Otto; Rischka, Lucas; Hahn, Andreas; Rausch, Ivo; Lanzenberger, Rupert; Hienert, Marius; Klebermass, Eva-Maria; Füchsel, Frank-Günther; Hacker, Marcus; Pilz, Magdalena; Pataraia, Ekaterina; Traub-Weidinger, Tatjana; Beyer, Thomas

    2018-01-01

    Absolute quantification of PET brain imaging requires the measurement of an arterial input function (AIF), typically obtained invasively via arterial cannulation. We present an approach to automatically calculate an image-derived input function (IDIF) and cerebral metabolic rates of glucose (CMRGlc) from the [18F]FDG PET data using an integrated PET/MRI system. Ten healthy controls underwent test-retest dynamic [18F]FDG-PET/MRI examinations. The imaging protocol consisted of a 60-min PET list-mode acquisition together with a time-of-flight MR angiography scan for segmenting the carotid arteries and intermittent MR navigators to monitor subject movement. AIFs were collected as the reference standard. Attenuation correction was performed using a separate low-dose CT scan. Assessment of the percentage difference between area-under-the-curve of IDIF and AIF yielded values within ±5%. Similar test-retest variability was seen between AIFs (9 ± 8) % and the IDIFs (9 ± 7) %. Absolute percentage difference between CMRGlc values obtained from AIF and IDIF across all examinations and selected brain regions was 3.2% (interquartile range: (2.4-4.3) %, maximum < 10%). High test-retest intra-subject variability was observed for CMRGlc values obtained from AIF (14%) and IDIF (17%). The proposed approach provides an IDIF that can be effectively used in lieu of AIF.
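    Once an input function yields an FDG influx constant Ki, CMRGlc follows from the standard autoradiographic relation CMRGlc = 100 · Cglu · Ki / LC. The lumped-constant value and the example numbers below are assumptions for illustration, not taken from the study:

```python
def cmrglc(ki, glu_mmol_per_l, lumped_constant=0.65):
    """CMRGlc in umol/100 g/min from the FDG influx constant Ki (mL/min/g)
    and plasma glucose (1 mmol/L == 1 umol/mL). The lumped constant (LC)
    corrects for the kinetic differences between FDG and glucose; 0.65 is
    an assumed illustrative value."""
    return 100.0 * glu_mmol_per_l * ki / lumped_constant

rate = cmrglc(ki=0.03, glu_mmol_per_l=5.0)   # hypothetical gray-matter Ki
print(round(rate, 1))
```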

  1. Test-Retest Repeatability of Myocardial Blood Flow Measurements using Rubidium-82 Positron Emission Tomography

    NASA Astrophysics Data System (ADS)

    Efseaff, Matthew

    Rubidium-82 positron emission tomography (PET) imaging has been proposed for routine myocardial blood flow (MBF) quantification. Few studies have investigated the test-retest repeatability of this method. Same-day repeatability of rest MBF imaging was optimized with a highly automated analysis program using image-derived input functions and a dual spillover correction (SOC). The effects of heterogeneous tracer infusion profiles and subject hemodynamics on test-retest repeatability were investigated at rest and during hyperemic stress. Factors affecting rest MBF repeatability included gender, suspected coronary artery disease, and dual SOC (p < 0.001). The best repeatability coefficient for same-day rest MBF was 0.20 mL/min/g using a six-minute scan-time, iterative reconstruction, dual SOC, resting rate-pressure-product (RPP) adjustment, and a left atrium image-derived input function. The serial-study repeatabilities of the optimized protocol in subjects with homogeneous RPPs and tracer infusion profiles were 0.19 and 0.53 mL/min/g at rest and stress, and 0.95 for stress/rest myocardial flow reserve (MFR). Subjects with heterogeneous tracer infusion profiles and hemodynamic conditions had significantly less repeatable MBF measurements at rest, stress, and stress/rest flow reserve (p < 0.05).
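    A repeatability coefficient like the 0.20 mL/min/g quoted above is conventionally the Bland-Altman limit 1.96 × SD of the test-retest differences. A minimal sketch on hypothetical MBF pairs (the data values are invented for illustration):

```python
import numpy as np

def repeatability_coefficient(test, retest):
    """Bland-Altman repeatability coefficient: 1.96 * SD of the test-retest
    differences; 95% of repeated measurements are expected to differ by
    less than this value."""
    d = np.asarray(retest, float) - np.asarray(test, float)
    return 1.96 * np.std(d, ddof=1)

# Hypothetical same-day rest MBF pairs (mL/min/g)
rest1 = [0.85, 0.92, 1.01, 0.78, 0.95, 0.88]
rest2 = [0.90, 0.89, 0.97, 0.84, 1.02, 0.86]
rc = repeatability_coefficient(rest1, rest2)
print(round(rc, 2))
```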

  2. Reconstruction of an input function from a dynamic PET water image using multiple tissue curves

    NASA Astrophysics Data System (ADS)

    Kudomi, Nobuyuki; Maeda, Yukito; Yamamoto, Yuka; Nishiyama, Yoshihiro

    2016-08-01

    Quantification of cerebral blood flow (CBF) is important for the understanding of normal and pathologic brain physiology. When CBF is assessed using PET with H215O or C15O2, its calculation requires an arterial input function, which generally requires invasive arterial blood sampling. The aim of the present study was to develop a new technique to reconstruct an image-derived input function (IDIF) from a dynamic H215O PET image as a completely non-invasive approach. Our technique consists of a formula that expresses the input in terms of a tissue curve and its rate-constant parameter. For multiple tissue curves extracted from the dynamic image, the rate constants were estimated so as to minimize the sum of the differences between the reproduced inputs expressed by the extracted tissue curves. The estimated rates were used to express the inputs, and the mean of the estimated inputs was used as an IDIF. The method was tested in human subjects (n = 29) and was compared to the blood sampling method. Simulation studies were performed to examine the magnitude of potential biases in CBF and to optimize the number of multiple tissue curves used for the input reconstruction. In the PET study, the estimated IDIFs reproduced the measured ones well. The difference between the CBF values calculated using the two methods was small (<8%), and the calculated CBF values showed a tight correlation (r = 0.97). The simulation showed that errors associated with the assumed parameters were <10%, and that the optimal number of tissue curves to be used was around 500. Our results demonstrate that an IDIF can be reconstructed directly from tissue curves obtained through H215O PET imaging. This suggests the possibility of using a completely non-invasive technique to assess CBF in patho-physiological studies.
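    The abstract does not give the exact formula, but the core idea of expressing the input through a tissue curve can be sketched by inverting the one-tissue model, Cp(t) = (dCt/dt + k2·Ct)/K1. The following is an illustrative numpy sketch on simulated data, not the authors' implementation:

```python
import numpy as np

def input_from_tissue(t, ct, k1, k2):
    """Invert the one-tissue model dCt/dt = K1*Cp - k2*Ct to reproduce the
    input curve: Cp(t) = (dCt/dt + k2*Ct) / K1."""
    dct = np.gradient(ct, t)
    return (dct + k2 * ct) / k1

# Forward-simulate a tissue curve from a known input, then invert it.
t = np.linspace(0, 10, 2001)            # minutes
cp_true = t * np.exp(-0.5 * t)          # toy input
k1, k2 = 0.5, 0.1
ct = np.zeros_like(t)
dt = t[1] - t[0]
for i in range(1, len(t)):              # simple Euler integration
    ct[i] = ct[i - 1] + dt * (k1 * cp_true[i - 1] - k2 * ct[i - 1])
cp_est = input_from_tissue(t, ct, k1, k2)
err = np.max(np.abs(cp_est[50:] - cp_true[50:]))   # skip the startup transient
print(err < 0.05)
```

In the paper, the rate constants themselves are unknown and are estimated by requiring that inputs reproduced from many tissue curves agree with one another.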

  3. Quantification of 18F-fluorocholine kinetics in patients with prostate cancer.

    PubMed

    Verwer, Eline E; Oprea-Lager, Daniela E; van den Eertwegh, Alfons J M; van Moorselaar, Reindert J A; Windhorst, Albert D; Schwarte, Lothar A; Hendrikse, N Harry; Schuit, Robert C; Hoekstra, Otto S; Lammertsma, Adriaan A; Boellaard, Ronald

    2015-03-01

    Choline kinase is upregulated in prostate cancer, resulting in increased (18)F-fluoromethylcholine uptake. This study used pharmacokinetic modeling to validate the use of simplified methods for quantification of (18)F-fluoromethylcholine uptake in a routine clinical setting. Forty-minute dynamic PET/CT scans were acquired after injection of 204 ± 9 MBq of (18)F-fluoromethylcholine, from 8 patients with histologically proven metastasized prostate cancer. Plasma input functions were obtained using continuous arterial blood-sampling as well as using image-derived methods. Manual arterial blood samples were used for calibration and correction for plasma-to-blood ratio and metabolites. Time-activity curves were derived from volumes of interest in all visually detectable lymph node metastases. (18)F-fluoromethylcholine kinetics were studied by nonlinear regression fitting of several single- and 2-tissue plasma input models to the time-activity curves. Model selection was based on the Akaike information criterion and measures of robustness. In addition, the performance of several simplified methods, such as standardized uptake value (SUV), was assessed. Best fits were obtained using an irreversible compartment model with blood volume parameter. Parent fractions were 0.12 ± 0.4 after 20 min, necessitating individual metabolite corrections. Correspondence between venous and arterial parent fractions was low as determined by the intraclass correlation coefficient (0.61). Results for image-derived input functions that were obtained from volumes of interest in blood-pool structures distant from tissues of high (18)F-fluoromethylcholine uptake yielded good correlation to those for the blood-sampling input functions (R(2) = 0.83). SUV showed poor correlation to parameters derived from full quantitative kinetic analysis (R(2) < 0.34). 
In contrast, lesion activity concentration normalized to the integral of the blood activity concentration over time (SUVAUC) showed good correlation (R(2) = 0.92 for metabolite-corrected plasma; 0.65 for whole-blood activity concentrations). SUV cannot be used to quantify (18)F-fluoromethylcholine uptake. A clinical compromise could be SUVAUC derived from 2 consecutive static PET scans, one centered on a large blood-pool structure during 0-30 min after injection to obtain the blood activity concentrations and the other a whole-body scan at 30 min after injection to obtain lymph node activity concentrations. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
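    SUVAUC as described above is just the lesion concentration divided by the time-integral of the blood concentration. A minimal sketch with invented numbers (the curve shape and lesion value are assumptions for illustration):

```python
import numpy as np

def suv_auc(lesion_conc, blood_auc):
    """SUVAUC: lesion activity concentration normalised to the time-integral
    of the blood activity concentration (units of 1/min here)."""
    return lesion_conc / blood_auc

t = np.linspace(0, 30, 301)               # minutes post injection
blood = 50.0 * np.exp(-0.2 * t) + 5.0     # toy blood curve (kBq/mL)
auc = np.sum(0.5 * (blood[1:] + blood[:-1]) * np.diff(t))   # kBq*min/mL
ratio = suv_auc(12.0, auc)                # lesion at 12 kBq/mL at 30 min
print(round(ratio, 4))
```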

  4. Spectroscopic analysis and in vitro imaging applications of a pH responsive AIE sensor with a two-input inhibit function.

    PubMed

    Zhou, Zhan; Gu, Fenglong; Peng, Liang; Hu, Ying; Wang, Qianming

    2015-08-04

    A novel terpyridine derivative formed stable aggregates in aqueous media (DMSO/H2O = 1/99) with dramatically enhanced fluorescence compared to its organic solution. Moreover, the ultra-violet absorption spectra also demonstrated specific responses to the incorporation of water. The yellow emission at 557 nm changed to intense greenish luminescence only in the presence of protons, and the system conformed to a molecular logic gate with a two-input INHIBIT function. This molecular-based material could permeate into live cells and remain undissociated in the cytoplasm. The new aggregation-induced emission (AIE) pH-type bio-probe permitted easy collection of yellow luminescence images on a fluorescent microscope. As designed, it displayed striking green emission in organelles at low internal pH. This feature enabled the self-assembled structure to serve a whole new function for pH detection within the field of cell imaging.
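    A two-input INHIBIT gate outputs high only when one input is present and the other (inhibiting) input is absent. The abstract names protons as the enabling input; the inhibiting input is left generic here, since the abstract does not identify it:

```python
def inhibit(a, b):
    """Two-input INHIBIT logic: output is high only when input a (protons,
    in the sensor described) is present AND the inhibiting input b is absent."""
    return bool(a) and not bool(b)

# Truth table: (a, b) -> emission switched on?
for a in (0, 1):
    for b in (0, 1):
        print(a, b, int(inhibit(a, b)))
```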

  5. Segmentation and learning in the quantitative analysis of microscopy images

    NASA Astrophysics Data System (ADS)

    Ruggiero, Christy; Ross, Amy; Porter, Reid

    2015-02-01

    In material science and bio-medical domains the quantity and quality of microscopy images is rapidly increasing and there is a great need to automatically detect, delineate and quantify particles, grains, cells, neurons and other functional "objects" within these images. These are challenging problems for image processing because of the variability in object appearance that inevitably arises in real world image acquisition and analysis. One of the most promising (and practical) ways to address these challenges is interactive image segmentation. These algorithms are designed to incorporate input from a human operator to tailor the segmentation method to the image at hand. Interactive image segmentation is now a key tool in a wide range of applications in microscopy and elsewhere. Historically, interactive image segmentation algorithms have tailored segmentation on an image-by-image basis, and information derived from operator input is not transferred between images. Recently, however, there has been increasing interest in using machine learning in segmentation to provide interactive tools that accumulate and learn from the operator input over longer periods of time. These new learning algorithms reduce the need for operator input over time, and can potentially provide a more dynamic balance between customization and automation for different applications. This paper reviews the state of the art in this area, provides a unified view of these algorithms, and compares the segmentation performance of various design choices.

  6. Combining image-derived and venous input functions enables quantification of serotonin-1A receptors with [carbonyl-11C]WAY-100635 independent of arterial sampling.

    PubMed

    Hahn, Andreas; Nics, Lukas; Baldinger, Pia; Ungersböck, Johanna; Dolliner, Peter; Frey, Richard; Birkfellner, Wolfgang; Mitterhauser, Markus; Wadsak, Wolfgang; Karanikas, Georgios; Kasper, Siegfried; Lanzenberger, Rupert

    2012-08-01

    Image-derived input functions (IDIFs) represent a promising technique for a simpler and less invasive quantification of PET studies as compared to arterial cannulation. However, a number of limitations complicate the routine use of IDIFs in clinical research protocols, and the full substitution of manual arterial samples by venous ones has hardly been evaluated. This study aims for a direct validation of IDIFs and venous data for the quantification of serotonin-1A receptor binding (5-HT(1A)) with [carbonyl-(11)C]WAY-100635 before and after hormone treatment. Fifteen PET measurements with arterial and venous blood sampling were obtained from 10 healthy women, 8 scans before and 7 after eight weeks of hormone replacement therapy. Image-derived input functions were derived automatically from cerebral blood vessels, corrected for partial volume effects and combined with venous manual samples from 10 min onward (IDIF+VIF). Corrections for plasma/whole-blood ratio and metabolites were done separately with arterial and venous samples. 5-HT(1A) receptor quantification was achieved with arterial input functions (AIF) and IDIF+VIF using a two-tissue compartment model. Comparison between arterial and venous manual blood samples yielded excellent reproducibility. Variability (VAR) was less than 10% for whole-blood activity (p>0.4) and below 2% for plasma to whole-blood ratios (p>0.4). Variability was slightly higher for parent fractions (VARmax=24% at 5 min, p<0.05 and VAR<13% after 20 min, p>0.1) but still within previously reported values. IDIFs after partial volume correction had peak values comparable to AIFs (mean difference Δ=-7.6 ± 16.9 kBq/ml, p>0.1), whereas AIFs exhibited a delay (Δ=4 ± 6.4s, p<0.05) and higher peak width (Δ=15.9 ± 5.2s, p<0.001).
Linear regression analysis showed strong agreement for 5-HT(1A) binding as obtained with AIF and IDIF+VIF at baseline (R(2)=0.95), after treatment (R(2)=0.93) and when pooling all scans (R(2)=0.93), with slopes and intercepts in the range of 0.97 to 1.07 and -0.05 to 0.16, respectively. In addition to the region of interest analysis, the approach yielded virtually identical results for voxel-wise quantification as compared to the AIF. Despite the fast metabolism of the radioligand, manual arterial blood samples can be substituted by venous ones for parent fractions and plasma to whole-blood ratios. Moreover, the combination of image-derived and venous input functions provides a reliable quantification of 5-HT(1A) receptors. This holds true for 5-HT(1A) binding estimates before and after treatment for both regions of interest-based and voxel-wise modeling. Taken together, the approach provides less invasive receptor quantification by full independence of arterial cannulation. This offers great potential for the routine use in clinical research protocols and encourages further investigation for other radioligands with different kinetic characteristics. Copyright © 2012 Elsevier Inc. All rights reserved.
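    The IDIF+VIF combination described above, image-derived curve for the early peak, venous samples for the tail from 10 min onward, can be sketched as a simple splice. All curve shapes and sample values below are invented for illustration:

```python
import numpy as np

def combine_idif_vif(t, idif, t_v, venous):
    """Use the image-derived curve before the first venous sample time and
    linearly interpolated venous samples thereafter (10 min here)."""
    t_switch = t_v[0]
    return np.where(t < t_switch, idif, np.interp(t, t_v, venous))

t = np.linspace(0, 90, 901)                     # minutes
idif = 100.0 * t * np.exp(-t)                   # toy early IDIF peak
t_v = np.array([10.0, 20.0, 40.0, 60.0, 90.0])  # manual venous sample times
venous = np.array([2.0, 1.5, 1.0, 0.8, 0.6])    # venous concentrations
curve = combine_idif_vif(t, idif, t_v, venous)
print(round(curve[0], 3), round(curve[-1], 3))
```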

  7. Kinetic quantitation of cerebral PET-FDG studies without concurrent blood sampling: statistical recovery of the arterial input function.

    PubMed

    O'Sullivan, F; Kirrane, J; Muzi, M; O'Sullivan, J N; Spence, A M; Mankoff, D A; Krohn, K A

    2010-03-01

    Kinetic quantitation of dynamic positron emission tomography (PET) studies via compartmental modeling usually requires the time-course of the radio-tracer concentration in the arterial blood as an arterial input function (AIF). For human and animal imaging applications, significant practical difficulties are associated with direct arterial sampling and as a result there is substantial interest in alternative methods that require no blood sampling at the time of the study. A fixed population template input function derived from prior experience with directly sampled arterial curves is one possibility. Image-based extraction, including requisite adjustment for spillover and recovery, is another approach. The present work considers a hybrid statistical approach based on a penalty formulation in which the information derived from a priori studies is combined in a Bayesian manner with information contained in the sampled image data in order to obtain an input function estimate. The absolute scaling of the input is achieved by an empirical calibration equation involving the injected dose together with the subject's weight, height and gender. The technique is illustrated in the context of (18)F-fluorodeoxyglucose (FDG) PET studies in humans. A collection of 79 arterially sampled FDG blood curves are used as a basis for a priori characterization of input function variability, including scaling characteristics. Data from a series of 12 dynamic cerebral FDG PET studies in normal subjects are used to evaluate the performance of the penalty-based AIF estimation technique. The focus of evaluations is on quantitation of FDG kinetics over a set of 10 regional brain structures. As well as the new method, a fixed population template AIF and a direct AIF estimate based on segmentation are also considered. Kinetic analyses resulting from these three AIFs are compared with those resulting from arterially sampled AIFs. 
    The proposed penalty-based AIF extraction method is found to achieve significant improvements over the fixed template and the segmentation methods. As well as achieving acceptable kinetic parameter accuracy, the quality of fit of the region of interest (ROI) time-course data based on the extracted AIF matches results based on arterially sampled AIFs. In comparison, significant deviation in the estimation of FDG flux and degradation in ROI data fit are found with the template and segmentation methods. The proposed AIF extraction method is recommended for practical use.
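    The paper's penalty formulation is more elaborate, but the core Bayesian idea, shrinking noisy image-derived samples toward a population prior, can be sketched with a closed-form ridge estimate. Everything below (curves, noise level, penalty weight) is an illustrative assumption:

```python
import numpy as np

def penalized_aif(y, prior, lam):
    """Closed-form penalized estimate: minimize ||y - a||^2 + lam*||a - prior||^2,
    i.e. shrink noisy image-derived samples y toward the population curve."""
    return (y + lam * prior) / (1.0 + lam)

rng = np.random.default_rng(0)
t = np.linspace(0, 60, 61)                  # minutes
prior = 80 * np.exp(-0.15 * t) + 10         # population template AIF
truth = 90 * np.exp(-0.15 * t) + 8          # this subject's true AIF
y = truth + rng.normal(0, 5, t.size)        # noisy image-derived samples
est = penalized_aif(y, prior, lam=1.0)
rmse = lambda a: np.sqrt(np.mean((a - truth) ** 2))
print(rmse(est) < rmse(y))                  # the penalized estimate is closer
```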

  8. Noninvasive image derived heart input function for CMRglc measurements in small animal slow infusion FDG PET studies

    NASA Astrophysics Data System (ADS)

    Xiong, Guoming; Cumming, Paul; Todica, Andrei; Hacker, Marcus; Bartenstein, Peter; Böning, Guido

    2012-12-01

    Absolute quantitation of the cerebral metabolic rate for glucose (CMRglc) can be obtained in positron emission tomography (PET) studies when serial measurements of the arterial [18F]-fluoro-deoxyglucose (FDG) input are available. Since this is not always practical in PET studies of rodents, there has been considerable interest in defining an image-derived input function (IDIF) by placing a volume of interest (VOI) within the left ventricle of the heart. However, spill-in arising from trapping of FDG in the myocardium often leads to progressive contamination of the IDIF, which propagates to underestimation of the magnitude of CMRglc. We therefore developed a novel, non-invasive method for correcting the IDIF without scaling to a blood sample. To this end, we first obtained serial arterial samples and dynamic FDG-PET data of the head and heart in a group of eight anaesthetized rats. We fitted a bi-exponential function to the serial measurements of the IDIF, and then used the linear graphical Gjedde-Patlak method to describe the accumulation in myocardium. We next estimated the magnitude of myocardial spill-in reaching the left ventricle VOI by assuming a Gaussian point-spread function, and corrected the measured IDIF for this estimated spill-in. Finally, we calculated parametric maps of CMRglc using the corrected IDIF, and for the sake of comparison, relative to serial blood sampling from the femoral artery. The uncorrected IDIF resulted in 20% underestimation of the magnitude of CMRglc relative to the gold standard arterial input method. However, there was no bias with the corrected IDIF, which was robust to the variable extent of myocardial tracer uptake, such that there was a very high correlation between individual CMRglc measurements using the corrected IDIF and gold-standard arterial input results. Based on simulation, we furthermore find that electrocardiogram (ECG) gating is not necessary for IDIF quantitation using our approach.
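    The spill-in correction above can be reduced to a simple sketch: the measured left-ventricle curve is the true input plus a fraction of the accumulating myocardial curve, so subtracting that estimated fraction recovers the input. The spill-in fraction below is assumed; in the paper it is derived from a Gaussian point-spread-function model:

```python
import numpy as np

t = np.linspace(0, 60, 601)                     # minutes
dt = t[1] - t[0]
true_idif = 40 * np.exp(-1.5 * t) + 4 * np.exp(-0.05 * t)   # bi-exponential input
myo = 0.6 * np.cumsum(true_idif) * dt           # myocardium traps FDG (Patlak-like)
alpha = 0.1                                     # spill-in fraction (assumed here;
                                                # the paper estimates it via a
                                                # Gaussian PSF model)
measured = true_idif + alpha * myo              # contaminated LV-VOI curve
corrected = measured - alpha * myo              # subtract the estimated spill-in
print(bool(np.allclose(corrected, true_idif)))
```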

  9. Dynamic PET of human liver inflammation: impact of kinetic modeling with optimization-derived dual-blood input function.

    PubMed

    Wang, Guobao; Corwin, Michael T; Olson, Kristin A; Badawi, Ramsey D; Sarkar, Souvik

    2018-05-30

    The hallmark of nonalcoholic steatohepatitis is hepatocellular inflammation and injury in the setting of hepatic steatosis. Recent work has indicated that dynamic 18F-FDG PET with kinetic modeling has the potential to assess hepatic inflammation noninvasively, whereas static FDG-PET has not shown such promise. Because the liver has dual blood supplies, kinetic modeling of dynamic liver PET data is challenging in human studies. The objective of this study is to evaluate and identify a dual-input kinetic modeling approach for dynamic FDG-PET of human liver inflammation. Fourteen human patients with nonalcoholic fatty liver disease were included in the study. Each patient underwent a one-hour dynamic FDG-PET/CT scan and underwent liver biopsy within six weeks. Three models were tested for kinetic analysis: traditional two-tissue compartmental model with an image-derived single-blood input function (SBIF), model with population-based dual-blood input function (DBIF), and modified model with optimization-derived DBIF through a joint estimation framework. The three models were compared using Akaike information criterion (AIC), F test and histopathologic inflammation reference. The results showed that the optimization-derived DBIF model improved the fitting of liver time activity curves and achieved lower AIC values and higher F values than the SBIF and population-based DBIF models in all patients. The optimization-derived model significantly increased FDG K1 estimates by 101% and 27% as compared with traditional SBIF and population-based DBIF. K1 by the optimization-derived model was significantly associated with histopathologic grades of liver inflammation while the other two models did not provide a statistical significance. In conclusion, modeling of DBIF is critical for kinetic analysis of dynamic liver FDG-PET data in human studies. The optimization-derived DBIF model is more appropriate than SBIF and population-based DBIF for dynamic FDG-PET of liver inflammation. 
© 2018 Institute of Physics and Engineering in Medicine.
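    The AIC comparison used to rank the three models penalizes extra parameters against improved fit. A minimal sketch with the standard least-squares form of AIC; the frame count, residual sums and parameter counts are hypothetical numbers, not the study's values:

```python
import numpy as np

def aic(rss, n, k):
    """Akaike information criterion for least-squares fits:
    AIC = n*ln(RSS/n) + 2k; the lower value indicates the preferred model."""
    return n * np.log(rss / n) + 2 * k

n_frames = 60
aic_sbif = aic(rss=12.0, n=n_frames, k=3)   # hypothetical single-input fit
aic_dbif = aic(rss=10.0, n=n_frames, k=5)   # hypothetical dual-input fit
print(aic_dbif < aic_sbif)                  # better fit outweighs the 2 extra parameters
```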

  10. Combining MRI With PET for Partial Volume Correction Improves Image-Derived Input Functions in Mice

    NASA Astrophysics Data System (ADS)

    Evans, Eleanor; Buonincontri, Guido; Izquierdo, David; Methner, Carmen; Hawkes, Rob C.; Ansorge, Richard E.; Krieg, Thomas; Carpenter, T. Adrian; Sawiak, Stephen J.

    2015-06-01

    Accurate kinetic modelling using dynamic PET requires knowledge of the tracer concentration in plasma, known as the arterial input function (AIF). AIFs are usually determined by invasive blood sampling, but this is prohibitive in murine studies due to low total blood volumes. As a result of the low spatial resolution of PET, image-derived input functions (IDIFs) must be extracted from left ventricular blood pool (LVBP) ROIs of the mouse heart. This is challenging because of partial volume and spillover effects between the LVBP and myocardium, contaminating IDIFs with tissue signal. We have applied the geometric transfer matrix (GTM) method of partial volume correction (PVC) to 12 mice with myocardial infarction (MI) injected with 18F-FDG, of which 6 were treated with a drug that reduced infarct size [1]. We utilised high resolution MRI to assist in segmenting mouse hearts into 5 classes: LVBP, infarcted myocardium, healthy myocardium, lungs/body and background. The signal contribution from these 5 classes was convolved with the point spread function (PSF) of the Cambridge split magnet PET scanner and a non-linear fit was performed on the 5 measured signal components. The corrected IDIF was taken as the fitted LVBP component. It was found that the GTM PVC method could recover an IDIF with less contamination from spillover than an IDIF extracted from PET data alone. More realistic values of Ki were achieved using GTM IDIFs, which were shown to be significantly different (p < 0.05) between the treated and untreated groups.
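    The GTM principle can be sketched in a few lines: measured ROI means are a mixing-matrix combination of the true class activities, so inverting the matrix recovers them. The 3-class matrix and activities below are invented for illustration; real GTM weights come from convolving each class mask with the scanner PSF:

```python
import numpy as np

# Rows: fractions of each true class contributing to one measured ROI mean.
gtm = np.array([[0.80, 0.15, 0.05],
                [0.20, 0.70, 0.10],
                [0.05, 0.10, 0.85]])
true_activity = np.array([100.0, 40.0, 5.0])   # LV blood, myocardium, background
observed = gtm @ true_activity                 # partial-volume-blurred ROI means
recovered = np.linalg.solve(gtm, observed)     # GTM inversion
print(bool(np.allclose(recovered, true_activity)))
```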

  11. Evaluation of limited blood sampling population input approaches for kinetic quantification of [18F]fluorothymidine PET data.

    PubMed

    Contractor, Kaiyumars B; Kenny, Laura M; Coombes, Charles R; Turkheimer, Federico E; Aboagye, Eric O; Rosso, Lula

    2012-03-24

    Quantification of kinetic parameters of positron emission tomography (PET) imaging agents normally requires collecting arterial blood samples which is inconvenient for patients and difficult to implement in routine clinical practice. The aim of this study was to investigate whether a population-based input function (POP-IF) reliant on only a few individual discrete samples allows accurate estimates of tumour proliferation using [18F]fluorothymidine (FLT). Thirty-six historical FLT-PET data with concurrent arterial sampling were available for this study. A population average of baseline-scan blood data was constructed using leave-one-out cross-validation for each scan and used in conjunction with individual blood samples. Three limited sampling protocols were investigated including, respectively, only seven (POP-IF7), five (POP-IF5) and three (POP-IF3) discrete samples of the historical dataset. Additionally, using the three-point protocol, we derived a POP-IF3M, the only input function which was not corrected for the fraction of radiolabelled metabolites present in blood. The kinetic parameter for net FLT retention at steady state, Ki, was derived using the modified Patlak plot and compared with the original full arterial set for validation. Small percentage differences in the area under the curve between all the POP-IFs and the full arterial sampling IF were found over 60 min (4.2%-5.7%), while there were, as expected, larger differences in the peak position and peak height. A high correlation between Ki values calculated using the original arterial input function and all the population-derived IFs was observed (R2 = 0.85-0.98). The population-based input showed good intra-subject reproducibility of Ki values (R2 = 0.81-0.94) and good correlation (R2 = 0.60-0.85) with Ki-67. Input functions generated using these simplified protocols over scan duration of 60 min estimate net PET-FLT retention with reasonable accuracy.
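    Anchoring a population template to a few discrete samples can be sketched as a least-squares scale fit. The curve shapes, sample times and the single-scale-factor model are illustrative assumptions, not the study's exact protocol:

```python
import numpy as np

def scale_population_if(t_pop, pop_if, t_samples, samples):
    """Least-squares scale factor fitting the population curve to a few
    individual discrete blood samples (three-point-protocol sketch)."""
    model = np.interp(t_samples, t_pop, pop_if)
    s = np.dot(model, samples) / np.dot(model, model)
    return s * pop_if

t = np.linspace(0, 60, 601)                 # minutes
pop = 30 * np.exp(-0.3 * t) + 3             # population template
t_s = np.array([5.0, 30.0, 60.0])           # three discrete samples
obs = 1.2 * (30 * np.exp(-0.3 * t_s) + 3)   # this subject runs ~20% hotter
scaled = scale_population_if(t, pop, t_s, obs)
print(round(scaled[0] / pop[0], 3))
```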

  12. Fusion of Imaging and Inertial Sensors for Navigation

    DTIC Science & Technology

    2006-09-01

    combat operations. The Global Positioning System (GPS) was fielded in the 1980s and first used for precision navigation and targeting in combat...equations [37]. Consider the homogeneous nonlinear differential equation ẋ(t) = f[x(t), u(t), t]; x(t0) = x0 (2.4). For a given input function, u0(t)...differential equation is a time-varying probability density function. The Kalman filter derivation assumes Gaussian distributions for all random

  13. Positron emission tomography/magnetic resonance hybrid scanner imaging of cerebral blood flow using 15O-water positron emission tomography and arterial spin labeling magnetic resonance imaging in newborn piglets

    PubMed Central

    Andersen, Julie B; Henning, William S; Lindberg, Ulrich; Ladefoged, Claes N; Højgaard, Liselotte; Greisen, Gorm; Law, Ian

    2015-01-01

    Abnormality in cerebral blood flow (CBF) distribution can lead to hypoxic–ischemic cerebral damage in newborn infants. The aim of the study was to investigate minimally invasive approaches to measure CBF by comparing simultaneous 15O-water positron emission tomography (PET) and single-TI pulsed arterial spin labeling (ASL) magnetic resonance imaging (MR) on a hybrid PET/MR in seven newborn piglets. Positron emission tomography was performed with IV injections of 20 MBq and 100 MBq 15O-water to confirm CBF reliability at low activity. Cerebral blood flow was quantified using a one-tissue-compartment model with two input functions: an arterial input function (AIF) or an image-derived input function (IDIF). The mean global CBF (95% CI) for PET-AIF, PET-IDIF, and ASL at baseline was 27 (23; 32), 34 (31; 37), and 27 (22; 32) mL/100 g per minute, respectively. At acetazolamide stimulus, PET-AIF, PET-IDIF, and ASL were 64 (55; 74), 76 (70; 83) and 79 (67; 92) mL/100 g per minute, respectively. At baseline, the differences of PET-IDIF and ASL relative to PET-AIF were 22% (P<0.0001) and −0.7% (P=0.9), respectively. At acetazolamide, the differences of PET-IDIF and ASL relative to PET-AIF were 19% (P=0.001) and 24% (P=0.0003). In conclusion, PET-IDIF overestimated CBF. Injected activity of 20 MBq 15O-water had acceptable concordance with 100 MBq, without compromising image quality. Single-TI ASL was questionable for regional CBF measurements. Global ASL CBF and PET CBF were congruent during baseline but not during hyperperfusion. PMID:26058699

  14. Relationship between fatigue of generation II image intensifier and input illumination

    NASA Astrophysics Data System (ADS)

    Chen, Qingyou

    1995-09-01

    Fatigue in an image intensifier affects the imaging properties of the night vision system. In this paper, using the principle of Joule heating, we derive a mathematical formula for the heat generated in a semiconductor photocathode. We describe the relationships among the various parameters in the formula, and we discuss how excessive input illumination causes fatigue in the Generation II image intensifier.

  15. Noninvasive PK11195-PET Image Analysis Techniques Can Detect Abnormal Cerebral Microglial Activation in Parkinson's Disease.

    PubMed

    Kang, Yeona; Mozley, P David; Verma, Ajay; Schlyer, David; Henchcliffe, Claire; Gauthier, Susan A; Chiao, Ping C; He, Bin; Nikolopoulou, Anastasia; Logan, Jean; Sullivan, Jenna M; Pryor, Kane O; Hesterman, Jacob; Kothari, Paresh J; Vallabhajosula, Shankar

    2018-05-04

    Neuroinflammation has been implicated in the pathophysiology of Parkinson's disease (PD), which might be influenced by successful neuroprotective drugs. The uptake of [11C](R)-PK11195 (PK) is often considered to be a proxy for neuroinflammation, and can be quantified using the Logan graphical method with an image-derived blood input function, or the Logan reference tissue model using automated reference region extraction. The purposes of this study were (1) to assess whether these noninvasive image analysis methods can discriminate between patients with PD and healthy volunteers (HVs), and (2) to establish the effect size that would be required to distinguish true drug-induced changes from system variance in longitudinal trials. The sample consisted of 20 participants with PD and 19 HVs. Two independent teams analyzed the data to compare the volume of distribution calculated using image-derived input functions (IDIFs), and binding potentials calculated using the Logan reference region model. With all methods, the higher signal-to-background in patients resulted in lower variability and better repeatability than in controls. We were able to use noninvasive techniques showing significantly increased uptake of PK in multiple brain regions of participants with PD compared to HVs. Although not necessarily reflecting absolute values, these noninvasive image analysis methods can discriminate between PD patients and HVs. We see a difference of 24% in the substantia nigra between PD and HV with a repeatability coefficient of 13%, showing that it will be possible to estimate responses in longitudinal, within-subject trials of novel neuroprotective drugs. © 2018 The Authors. Journal of Neuroimaging published by Wiley Periodicals, Inc. on behalf of the American Society of Neuroimaging.
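The Logan graphical method mentioned above can be sketched for a reversible one-tissue-compartment model (dCt/dt = K1·Cp − k2·Ct), for which the Logan plot is linear with slope equal to the total volume of distribution VT = K1/k2. The rate constants, input curve, and t* below are illustrative, not the study's values:

```python
import numpy as np

def logan_vt(t, ct, cp, t_star=20.0):
    """Logan plot: slope of int(ct)/ct vs. int(cp)/ct after time t* gives VT."""
    dt = np.diff(t)
    ict = np.concatenate([[0.0], np.cumsum(0.5 * (ct[1:] + ct[:-1]) * dt)])
    icp = np.concatenate([[0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * dt)])
    m = t >= t_star
    slope, _ = np.polyfit(icp[m] / ct[m], ict[m] / ct[m], 1)
    return slope

t = np.linspace(0, 90, 5401)                       # minutes, fine grid for the ODE
cp = 50 * t * np.exp(-t / 3.0)                     # toy arterial input function
K1, k2 = 0.3, 0.1                                  # true VT = K1 / k2 = 3.0
ct = np.zeros_like(t)
for i in range(1, len(t)):                         # simple Euler integration of 1TCM
    ct[i] = ct[i-1] + (t[i] - t[i-1]) * (K1 * cp[i-1] - k2 * ct[i-1])
print(round(logan_vt(t, ct, cp), 2))
```

Substituting an image-derived input function for `cp` is exactly the noninvasive variant the record describes.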

  16. SU-G-IeP3-11: On the Utility of Pixel Variance to Characterize Noise for Image Receptors of Digital Radiography Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finley, C; Dave, J

    Purpose: To characterize noise for image receptors of digital radiography systems based on pixel variance. Methods: Nine calibrated digital image receptors associated with nine new portable digital radiography systems (Carestream Health, Inc., Rochester, NY) were used in this study. For each image receptor, thirteen images were acquired with RQA5 beam conditions for input detector air kerma ranging from 0 to 110 µGy, and linearized ‘For Processing’ images were extracted. Mean pixel value (MPV), standard deviation (SD) and relative noise (SD/MPV) were obtained from each image using ROI sizes varying from 2.5×2.5 to 20×20 mm². Variance (SD²) was plotted as a function of input detector air kerma and the coefficients of the quadratic fit were used to derive structured, quantum and electronic noise coefficients. Relative noise was also fitted as a function of input detector air kerma to identify noise sources. The fitting functions used a least-squares approach. Results: The coefficient of variation values obtained using different ROI sizes were less than 1% for all the images. The structured, quantum and electronic coefficients obtained from the quadratic fit of variance (r>0.97) were 0.43±0.10, 3.95±0.27 and 2.89±0.74 (mean ± standard deviation), respectively, indicating that overall the quantum noise was the dominant noise source. However, for one system the electronic noise coefficient (3.91) was greater than the quantum noise coefficient (3.56), indicating electronic noise to be dominant. Using relative noise values, the power parameter of the fitting equation (|r|>0.93) showed a mean and standard deviation of 0.46±0.02. A 0.50 value for this power parameter indicates quantum noise to be the dominant noise source, whereas values deviating from 0.50 indicate the presence of other noise sources. Conclusion: Characterizing noise from pixel variance assists in identifying contributions from various noise sources that, eventually, may affect image quality. This approach may be integrated during periodic quality assessments of digital image receptors.
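The quadratic variance decomposition described above can be sketched as follows. The air-kerma sampling is illustrative, and the "true" coefficients are seeded from the means reported in the record; the model is variance(K) = s·K² + q·K + e, so a degree-2 polynomial fit separates structured (s), quantum (q), and electronic (e) noise:

```python
import numpy as np

def noise_coefficients(kerma, variance):
    """Fit variance vs. air kerma with a quadratic; np.polyfit returns
    coefficients highest power first: structured, quantum, electronic."""
    s, q, e = np.polyfit(kerma, variance, 2)
    return s, q, e

kerma = np.linspace(5, 110, 13)                    # uGy; 13 exposures, as in the study
true_s, true_q, true_e = 0.43, 3.95, 2.89          # mean coefficients from the record
variance = true_s * kerma**2 + true_q * kerma + true_e
s, q, e = noise_coefficients(kerma, variance)
print(round(s, 2), round(q, 2), round(e, 2))       # recovers 0.43 3.95 2.89
```

On real data the three recovered terms would be compared directly, as in the record's per-system dominance check (quantum vs. electronic).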

  17. Motion video compression system with neural network having winner-take-all function

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi (Inventor); Sheu, Bing J. (Inventor)

    1997-01-01

    A motion video data system includes a compression system, including an image compressor, an image decompressor correlative to the image compressor having an input connected to an output of the image compressor, a feedback summing node having one input connected to an output of the image decompressor, a picture memory having an input connected to an output of the feedback summing node, apparatus for comparing an image stored in the picture memory with a received input image and deducing therefrom pixels having differences between the stored image and the received image and for retrieving from the picture memory a partial image including the pixels only and applying the partial image to another input of the feedback summing node, whereby to produce at the output of the feedback summing node an updated decompressed image, and a subtraction node having one input connected to receive the received image and another input connected to receive the partial image so as to generate a difference image, the image compressor having an input connected to receive the difference image, whereby to produce a compressed difference image at the output of the image compressor.

  18. Demonstration of the reproducibility of free-breathing diffusion-weighted MRI and dynamic contrast enhanced MRI in children with solid tumours: a pilot study.

    PubMed

    Miyazaki, Keiko; Jerome, Neil P; Collins, David J; Orton, Matthew R; d'Arcy, James A; Wallace, Toni; Moreno, Lucas; Pearson, Andrew D J; Marshall, Lynley V; Carceller, Fernando; Leach, Martin O; Zacharoulis, Stergios; Koh, Dow-Mu

    2015-09-01

    The objective was to examine the reproducibility of functional MR imaging in children with solid tumours using quantitative parameters derived from diffusion-weighted (DW-) and dynamic contrast-enhanced (DCE-) MRI. Patients under 16 years of age with a confirmed diagnosis of solid tumours (n = 17) underwent free-breathing DW-MRI and DCE-MRI on a 1.5 T system, repeated 24 hours later. DW-MRI (6 b-values, 0-1000 sec/mm(2)) enabled monoexponential apparent diffusion coefficient estimation using all (ADC0-1000) and only ≥100 sec/mm(2) (ADC100-1000) b-values. DCE-MRI was used to derive the transfer constant (K(trans)), the efflux constant (kep), the extracellular extravascular volume (ve), and the plasma fraction (vp), using a study cohort arterial input function (AIF) and the extended Tofts model. Initial area under the gadolinium enhancement curve and pre-contrast T1 were also calculated. Percentage coefficients of variation (CV) of all parameters were calculated. The most reproducible cohort parameters were ADC100-1000 (CV = 3.26%), pre-contrast T1 (CV = 6.21%), and K(trans) (CV = 15.23%). The ADC100-1000 was more reproducible than ADC0-1000, especially extracranially (CV = 2.40% vs. 2.78%). The AIF (n = 9) derived from this paediatric population exhibited sharper and earlier first-pass and recirculation peaks compared with the literature's adult population average. Free-breathing functional imaging protocols including DW-MRI and DCE-MRI are well-tolerated in children aged 6-15 with good to moderate measurement reproducibility. • Diffusion MRI protocol is feasible and well-tolerated in a paediatric oncology population. • DCE-MRI for pharmacokinetic evaluation is feasible and well tolerated in a paediatric oncology population. • Paediatric arterial input function (AIF) shows systematic differences from the adult population-average AIF. • Variation of quantitative parameters from paired functional MRI measurements was within 20%.

  19. Factor analysis for delineation of organ structures, creation of in- and output functions, and standardization of multicenter kinetic modeling

    NASA Astrophysics Data System (ADS)

    Schiepers, Christiaan; Hoh, Carl K.; Dahlbom, Magnus; Wu, Hsiao-Ming; Phelps, Michael E.

    1999-05-01

    PET imaging can quantify metabolic processes in-vivo; this requires the measurement of an input function, which is invasive and labor intensive. A non-invasive, semi-automated, image-based method of input function generation would be efficient, patient friendly, and allow quantitative PET to be applied routinely. A fully automated procedure would be ideal for studies across institutions. Factor analysis (FA) was applied as a processing tool for the definition of temporally changing structures in the field of view. FA has been proposed earlier, but the perceived mathematical difficulty has prevented widespread use. FA was utilized to delineate structures and extract blood and tissue time-activity-curves (TACs). These TACs were used as input and output functions for tracer kinetic modeling, the results of which were compared with those from an input function obtained with serial blood sampling. Dynamic image data of myocardial perfusion studies with N-13 ammonia, O-15 water, or Rb-82, cancer studies with F-18 FDG, and skeletal studies with F-18 fluoride were evaluated. Correlation coefficients of kinetic parameters obtained with factor and plasma input functions were high. Linear regression usually furnished a slope near unity. Processing time was 7 min/patient on an UltraSPARC. Conclusion: FA can non-invasively generate input functions from image data, eliminating the need for blood sampling. Output (tissue) functions can be simultaneously generated. The method is simple, requires no sophisticated operator interaction, and has little inter-operator variability. FA is well suited for studies across institutions and standardized evaluations.

  20. A kernel adaptive algorithm for quaternion-valued inputs.

    PubMed

    Paul, Thomas K; Ogunfunmi, Tokunbo

    2015-10-01

    The use of quaternion data can provide benefit in applications like robotics and image recognition, and particularly for performing transforms in 3-D space. Here, we describe a kernel adaptive algorithm for quaternions. A least mean square (LMS)-based method was used, resulting in the derivation of the quaternion kernel LMS (Quat-KLMS) algorithm. Deriving this algorithm required describing the idea of a quaternion reproducing kernel Hilbert space (RKHS), as well as kernel functions suitable for quaternions. A modified HR calculus for Hilbert spaces was used to find the gradient of cost functions defined on a quaternion RKHS. In addition, the use of widely linear (or augmented) filtering is proposed to improve performance. The benefits of the Quat-KLMS and widely linear forms in learning nonlinear transformations of quaternion data are illustrated with simulations.
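As background for the quaternion machinery above (this is ordinary quaternion algebra, not the Quat-KLMS update itself), the Hamilton product and its classic use for performing transforms, namely rotations in 3-D space, can be sketched as:

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

def rotate(v, q):
    """Rotate 3-vector v by unit quaternion q via q * (0, v) * conj(q)."""
    qc = q * np.array([1.0, -1.0, -1.0, -1.0])     # quaternion conjugate
    return qmul(qmul(q, np.concatenate([[0.0], v])), qc)[1:]

# a 90-degree rotation about the z-axis maps the x-axis onto the y-axis
theta = np.pi / 2
q = np.array([np.cos(theta / 2), 0.0, 0.0, np.sin(theta / 2)])
print(np.round(rotate(np.array([1.0, 0.0, 0.0]), q), 6))   # [0. 1. 0.]
```

The non-commutativity of `qmul` is precisely what the modified HR calculus in the paper has to account for when taking gradients.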

  1. Novel view synthesis by interpolation over sparse examples

    NASA Astrophysics Data System (ADS)

    Liang, Bodong; Chung, Ronald C.

    2006-01-01

    Novel view synthesis (NVS) is an important problem in image rendering. It involves synthesizing an image of a scene at any specified (novel) viewpoint, given some images of the scene at a few sample viewpoints. The general understanding is that the solution should bypass explicit 3-D reconstruction of the scene. As it is, the problem has a natural tie to interpolation, even though mainstream efforts on the problem have adopted other formulations. Interpolation is about finding the output of a function f(x) for any specified input x, given a few input-output pairs {(xi,fi):i=1,2,3,...,n} of the function. If the input x is the viewpoint, and f(x) is the image, the interpolation problem becomes exactly NVS. We treat the NVS problem using the interpolation formulation. In particular, we adopt the example-based interpolation (EBI) mechanism, an established mechanism for interpolating or learning functions from examples. EBI has all the desirable properties of a good interpolation: all given input-output examples are satisfied exactly, and the interpolation is smooth with minimum oscillations between the examples. We point out that EBI, however, has difficulty in interpolating certain classes of functions, including the image function in the NVS problem. We propose an extension of the mechanism for overcoming the limitation. We also present how the extended interpolation mechanism could be used to synthesize images at novel viewpoints. Real image results show that the mechanism has promising performance, even with very few example images.
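The interpolation formulation can be illustrated with a Gaussian radial-basis interpolant that satisfies all given input-output examples exactly, as a generic stand-in for the EBI mechanism; the kernel and its width are illustrative choices, not the paper's:

```python
import numpy as np

def rbf_fit(x, f, sigma=1.0):
    """Solve for weights so the interpolant passes through every (x_i, f_i)."""
    phi = np.exp(-(x[:, None] - x[None, :])**2 / (2 * sigma**2))
    return np.linalg.solve(phi, f)

def rbf_eval(xq, x, w, sigma=1.0):
    """Evaluate the interpolant at query points xq."""
    phi = np.exp(-(xq[:, None] - x[None, :])**2 / (2 * sigma**2))
    return phi @ w

x = np.array([0.0, 1.0, 2.0, 3.0])                 # sample "viewpoints"
f = np.sin(x)                                      # observed "images" (scalars here)
w = rbf_fit(x, f)
print(np.allclose(rbf_eval(x, x, w), f))           # examples satisfied exactly: True
```

In NVS the scalar x becomes a viewpoint parameter and f a whole image, which is where the paper's extension of the basic mechanism comes in.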

  2. Roles of Fog and Topography in Redwood Forest Hydrology

    NASA Astrophysics Data System (ADS)

    Francis, E. J.; Asner, G. P.

    2017-12-01

    Spatial variability of water in forests is a function of both climatic gradients that control water inputs and topo-edaphic variation that determines the flows of water belowground, as well as interactions of climate with topography. Coastal redwood forests are hydrologically unique because they are influenced by coastal low clouds, or fog, advected onto land by a strong coastal-to-inland temperature difference. Where fog intersects the land surface, annual water inputs from summer fog drip can be greater than those from winter rainfall. In this study, we take advantage of mapped spatial gradients in forest canopy water storage, topography, and fog cover in California to better understand the roles and interactions of fog and topography in the hydrology of redwood forests. We test a conceptual model of redwood forest hydrology with measurements of canopy water content derived from high-resolution airborne imaging spectroscopy, topographic variables derived from high-resolution LiDAR data, and fog cover maps derived from NASA MODIS data. Landscape-level results provide insight into hydrological processes within redwood forests, and cross-site analyses shed light on their generality.

  3. Classification of Land Cover and Land Use Based on Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Yang, Chun; Rottensteiner, Franz; Heipke, Christian

    2018-04-01

    Land cover describes the physical material of the earth's surface, whereas land use describes the socio-economic function of a piece of land. Land use information is typically collected in geospatial databases. As such databases become outdated quickly, an automatic update process is required. This paper presents a new approach to determine land cover and to classify land use objects based on convolutional neural networks (CNN). The input data are aerial images and derived data such as digital surface models. Firstly, we apply a CNN to determine the land cover for each pixel of the input image. We compare different CNN structures, all of them based on an encoder-decoder structure for obtaining dense class predictions. Secondly, we propose a new CNN-based methodology for the prediction of the land use label of objects from a geospatial database. In this context, we present a strategy for generating image patches of identical size from the input data, which are classified by a CNN. Again, we compare different CNN architectures. Our experiments show that an overall accuracy of up to 85.7 % and 77.4 % can be achieved for land cover and land use, respectively. The land cover classification makes a positive contribution to the classification of land use.

  4. Detection and quantification of large-vessel inflammation with 11C-(R)-PK11195 PET/CT.

    PubMed

    Lamare, Frederic; Hinz, Rainer; Gaemperli, Oliver; Pugliese, Francesca; Mason, Justin C; Spinks, Terence; Camici, Paolo G; Rimoldi, Ornella E

    2011-01-01

    We investigated whether PET/CT angiography using 11C-(R)-PK11195, a selective ligand for the translocator protein (18 kDa) expressed in activated macrophages, could allow imaging and quantification of arterial wall inflammation in patients with large-vessel vasculitis. Seven patients with systemic inflammatory disorders (3 symptomatic patients with clinical suspicion of active vasculitis and 4 asymptomatic patients) underwent PET with 11C-(R)-PK11195 and CT angiography to colocalize arterial wall uptake of 11C-(R)-PK11195. Tissue regions of interest were defined in bone marrow, lung parenchyma, and the walls of the ascending aorta, aortic arch, and descending aorta. Blood-derived and image-derived input functions (IFs) were generated. A reversible 1-tissue-compartment model with 2 kinetic rate constants and a fractional blood volume term was used to fit the time-activity curves to calculate total volume of distribution (VT). The correlation between VT and standardized uptake values was assessed. VT was significantly higher in symptomatic than in asymptomatic patients using both image-derived total plasma IF (0.55±0.15 vs. 0.27±0.12, P=0.009) and image-derived parent plasma IF (1.40±0.50 vs. 0.58±0.25, P=0.018). A good correlation was observed between VT and standardized uptake value (R=0.79; P=0.03). 11C-(R)-PK11195 imaging allows visualization of macrophage infiltration in inflamed arterial walls. Tracer uptake can be quantified with image-derived IF without the need for metabolite corrections and evaluated semiquantitatively with standardized uptake values.

  5. Cost function approach for estimating derived demand for composite wood products

    Treesearch

    T. C. Marcin

    1991-01-01

    A cost function approach, using the concept of duality between production and input factor demands, was examined. A translog cost function was used to represent residential construction costs and to derive conditional factor demand equations. Alternative models were derived from the translog cost function by imposing parameter restrictions.
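As an illustration of the duality setup described above (a generic form, not necessarily the exact specification in the report), a translog cost function in input prices p_i and the conditional factor demand (cost-share) equations obtained from it by Shephard's lemma are:

```latex
\ln C = \alpha_0 + \sum_i \alpha_i \ln p_i
      + \tfrac{1}{2} \sum_i \sum_j \beta_{ij} \ln p_i \ln p_j ,
\qquad
S_i \equiv \frac{\partial \ln C}{\partial \ln p_i}
    = \alpha_i + \sum_j \beta_{ij} \ln p_j .
```

Symmetry ($\beta_{ij}=\beta_{ji}$) and linear homogeneity in prices ($\sum_i \alpha_i = 1$, $\sum_j \beta_{ij} = 0$) are typical of the parameter restrictions imposed to obtain alternative models.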

  6. Sensing system for detection and control of deposition on pendant tubes in recovery and power boilers

    DOEpatents

    Kychakoff, George; Afromowitz, Martin A; Hugle, Richard E

    2005-06-21

    A system for detection and control of deposition on pendant tubes in recovery and power boilers includes one or more deposit monitoring sensors operating in infrared regions at about 4 or 8.7 microns and directly producing images of the interior of the boiler. An image pre-processing circuit (95) captures the 2-D image formed by the video data input and includes a low-pass filter for performing noise filtering of said video input. An image segmentation module (105) separates the image of the recovery boiler interior into background, pendant tubes, and deposition. An image-understanding unit (115) matches derived regions to a 3-D model of said boiler. It derives the 3-D structure of the deposition on pendant tubes in the boiler and provides the information about deposits to the plant distributed control system (130) for more efficient operation of the plant pendant tube cleaning and operating systems.

  7. Measured Polarized Spectral Responsivity of JPSS J1 VIIRS Using the NIST T-SIRCUS

    NASA Technical Reports Server (NTRS)

    McIntire, Jeff; Young, James B.; Moyer, David; Waluschka, Eugene; Xiong, Xiaoxiong

    2015-01-01

    Recent pre-launch measurements performed on the Joint Polar Satellite System (JPSS) J1 Visible Infrared Imaging Radiometer Suite (VIIRS) using the National Institute of Standards and Technology (NIST) Traveling Spectral Irradiance and Radiance Responsivity Calibrations Using Uniform Sources (T-SIRCUS) monochromatic source have provided wavelength dependent polarization sensitivity for select spectral bands and viewing conditions. Measurements were made at a number of input linear polarization states (twelve in total) and initially at thirteen wavelengths across the bandpass (later expanded to seventeen for some cases). Using the source radiance information collected by an external monitor, a spectral responsivity function was constructed for each input linear polarization state. Additionally, an unpolarized spectral responsivity function was derived from these polarized measurements. An investigation of how the centroid, bandwidth, and detector responsivity vary with polarization state was weighted by two model input spectra to simulate both ground measurements as well as expected on-orbit conditions. These measurements will enhance our understanding of VIIRS polarization sensitivity, improve the design for future flight models, and provide valuable data to enhance product quality in the post-launch phase.

  8. Shallow sea-floor reflectance and water depth derived by unmixing multispectral imagery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bierwirth, P.N.; Lee, T.J.; Burne, R.V.

    1993-03-01

    A major problem for mapping shallow water zones by the analysis of remotely sensed data is that contrast effects due to water depth obscure and distort the spectral nature of the substrate. This paper outlines a new method which unmixes the exponential influence of depth in each pixel by employing a mathematical constraint. This leaves a multispectral residual which represents relative substrate reflectance. Inputs to the process are the raw multispectral data and water attenuation coefficients derived by the co-analysis of known bathymetry and remotely sensed data. Outputs are substrate-reflectance images corresponding to the input bands and a greyscale depth image. The method has been applied in the analysis of Landsat TM data at Hamelin Pool in Shark Bay, Western Australia. Algorithm-derived substrate reflectance images for Landsat TM bands 1, 2, and 3 combined in color represent the optimum enhancement for mapping or classifying substrate types. As a result, this color image successfully delineated features which were obscured in the raw data, such as the distributions of sea-grasses, microbial mats, and sandy areas. 19 refs.
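The exponential role of depth can be illustrated with a two-band toy model. This is not Bierwirth et al.'s constrained-unmixing algorithm; the equal-reflectance assumption and the coefficients are purely illustrative. Observed radiance follows L_i = R_i·exp(−2·k_i·z), so a band ratio cancels reflectance and isolates depth, after which the residual gives relative substrate reflectance:

```python
import numpy as np

def depth_from_ratio(L1, L2, k1, k2):
    """Solve L1/L2 = exp(2*z*(k2 - k1)) for depth z, assuming the substrate
    is equally bright in both bands (a toy assumption)."""
    return np.log(L1 / L2) / (2.0 * (k2 - k1))

k1, k2 = 0.08, 0.35                                # per-metre attenuation (illustrative)
z_true = 4.0                                       # metres
R = 0.6                                            # substrate reflectance, both bands
L1 = R * np.exp(-2 * k1 * z_true)                  # forward model, band 1
L2 = R * np.exp(-2 * k2 * z_true)                  # forward model, band 2
z = depth_from_ratio(L1, L2, k1, k2)
refl = L1 * np.exp(2 * k1 * z)                     # un-attenuate to recover reflectance
print(round(z, 3), round(refl, 3))                 # 4.0 0.6
```

The published method replaces the equal-reflectance assumption with a mathematical constraint applied per pixel across all bands, but the exponential un-mixing step has this same structure.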

  9. Hepatic function imaging using dynamic Gd-EOB-DTPA enhanced MRI and pharmacokinetic modeling.

    PubMed

    Ning, Jia; Yang, Zhiying; Xie, Sheng; Sun, Yongliang; Yuan, Chun; Chen, Huijun

    2017-10-01

    To determine whether pharmacokinetic modeling parameters with different output assumptions of dynamic contrast-enhanced MRI (DCE-MRI) using Gd-EOB-DTPA correlate with serum-based liver function tests, and compare the goodness of fit of the different output assumptions. A 6-min DCE-MRI protocol was performed in 38 patients. Four dual-input two-compartment models with different output assumptions and a published one-compartment model were used to calculate hepatic function parameters. The Akaike information criterion fitting error was used to evaluate the goodness of fit. Imaging-based hepatic function parameters were compared with blood chemistry using correlation with multiple comparison correction. The dual-input two-compartment model assuming venous flow equals arterial flow plus portal venous flow and no bile duct output better described the liver tissue enhancement with low fitting error and high correlation with blood chemistry. The relative uptake rate Kir derived from this model was found to be significantly correlated with direct bilirubin (r = -0.52, P = 0.015), prealbumin concentration (r = 0.58, P = 0.015), and prothrombin time (r = -0.51, P = 0.026). It is feasible to evaluate hepatic function by proper output assumptions. The relative uptake rate has the potential to serve as a biomarker of function. Magn Reson Med 78:1488-1495, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  10. Evaluating the Usefulness of High-Temporal Resolution Vegetation Indices to Identify Crop Types

    NASA Astrophysics Data System (ADS)

    Hilbert, K.; Lewis, D.; O'Hara, C. G.

    2006-12-01

    The National Aeronautics and Space Administration (NASA) and the United States Department of Agriculture (USDA) jointly sponsored research covering the 2004 to 2006 South American crop seasons that focused on developing methods for the USDA's Foreign Agricultural Service's (FAS) Production Estimates and Crop Assessment Division (PECAD) to identify crop types using MODIS-derived, hyper-temporal Normalized Difference Vegetation Index (NDVI) images. NDVI images were composited in 8-day intervals from daily NDVI images and aggregated to create a hyper-temporal NDVI layerstack. This NDVI layerstack was used as input to image classification algorithms. Research results indicated that creating high-temporal resolution Normalized Difference Vegetation Index (NDVI) composites from NASA's MODerate Resolution Imaging Spectroradiometer (MODIS) data products provides useful input to crop type classifications as well as potentially useful input for regional crop productivity modeling efforts. A current NASA-sponsored Rapid Prototyping Capability (RPC) experiment will assess the utility of simulated future Visible Infrared Imager / Radiometer Suite (VIIRS) imagery for conducting NDVI-derived land cover and specific crop type classifications. In the experiment, methods will be considered to refine current MODIS data streams, reduce the noise content of the MODIS data, and utilize the MODIS data as an input to the VIIRS simulation process. The effort is also being conducted in concert with an ISS project that will further evaluate, verify, and validate the usefulness of specific data products to provide remote sensing-derived input for the Sinclair Model, a semi-mechanistic model for estimating crop yield. The study area encompasses a large portion of the Pampas region of Argentina, a major world producer of crops such as corn, soybeans, and wheat, which makes it a competitor to the US.
ITD partnered with researchers at the Center for Surveying Agricultural and Natural Resources (CREAN) of the National University of Cordoba, Argentina, and CREAN personnel collected and continue to collect field-level, GIS-based in situ information. Current efforts involve both developing and optimizing software tools for the necessary data processing. The software includes the Time Series Product Tool (TSPT), Leica's ERDAS Imagine, and Mississippi State University's Temporal Map Algebra computational tools.

  11. Noninvasive quantification of cerebral metabolic rate for glucose in rats using 18F-FDG PET and standard input function

    PubMed Central

    Hori, Yuki; Ihara, Naoki; Teramoto, Noboru; Kunimi, Masako; Honda, Manabu; Kato, Koichi; Hanakawa, Takashi

    2015-01-01

    Measurement of arterial input function (AIF) for quantitative positron emission tomography (PET) studies is technically challenging. The present study aimed to develop a method based on a standard arterial input function (SIF) to estimate the input function without blood sampling. We performed 18F-fluorodeoxyglucose studies accompanied by continuous blood sampling for measurement of AIF in 11 rats. Standard arterial input function was calculated by averaging AIFs from eight anesthetized rats, after normalization with body mass (BM) and injected dose (ID). Then, the individual input function was estimated using two types of SIF: (1) SIF calibrated by the individual's BM and ID (estimated individual input function, EIFNS) and (2) SIF calibrated by a single blood sample as proposed previously (EIF1S). No significant differences in area under the curve (AUC) or cerebral metabolic rate for glucose (CMRGlc) were found across the AIF-, EIFNS-, and EIF1S-based methods using repeated measures analysis of variance. In the correlation analysis, AUC or CMRGlc derived from EIFNS was highly correlated with those derived from AIF and EIF1S. Preliminary comparison between AIF and EIFNS in three awake rats supported the idea that the method might be applicable to behaving animals. The present study suggests that the EIFNS method might serve as a noninvasive substitute for individual AIF measurement. PMID:25966947
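The standard-input-function idea can be sketched as follows; the details (curve shapes, doses, masses, and the exact normalization) are assumptions for illustration, not the authors' exact procedure. Each measured AIF is normalized by injected dose (ID) and body mass (BM), the normalized curves are averaged into a SIF, and the SIF is rescaled with a new subject's own ID and BM to give its estimated input function (EIF_NS):

```python
import numpy as np

def build_sif(aifs, doses, masses):
    """Average dose- and mass-normalized AIFs into a standard input function."""
    normalized = [aif * bm / dose for aif, dose, bm in zip(aifs, doses, masses)]
    return np.mean(normalized, axis=0)

def estimate_input(sif, dose, mass):
    """Rescale the SIF with an individual's injected dose and body mass."""
    return sif * dose / mass

t = np.linspace(0, 60, 121)                        # minutes
shape = t * np.exp(-t / 2.0)                       # common toy AIF shape
doses = [35.0, 42.0, 50.0]                         # MBq (illustrative)
masses = [0.30, 0.35, 0.40]                        # kg (illustrative)
aifs = [shape * d / m for d, m in zip(doses, masses)]   # ideal dose/mass scaling
sif = build_sif(aifs, doses, masses)
eif = estimate_input(sif, 45.0, 0.33)              # new subject's ID and BM
print(np.allclose(eif, shape * 45.0 / 0.33))       # True under ideal scaling
```

The single-sample variant (EIF_1S) would instead scale the SIF so it passes through one measured blood point rather than using ID and BM directly.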

  12. Noninvasive quantification of cerebral metabolic rate for glucose in rats using (18)F-FDG PET and standard input function.

    PubMed

    Hori, Yuki; Ihara, Naoki; Teramoto, Noboru; Kunimi, Masako; Honda, Manabu; Kato, Koichi; Hanakawa, Takashi

    2015-10-01

    Measurement of arterial input function (AIF) for quantitative positron emission tomography (PET) studies is technically challenging. The present study aimed to develop a method based on a standard arterial input function (SIF) to estimate the input function without blood sampling. We performed (18)F-fluorodeoxyglucose studies accompanied by continuous blood sampling for measurement of AIF in 11 rats. Standard arterial input function was calculated by averaging AIFs from eight anesthetized rats, after normalization with body mass (BM) and injected dose (ID). Then, the individual input function was estimated using two types of SIF: (1) SIF calibrated by the individual's BM and ID (estimated individual input function, EIF(NS)) and (2) SIF calibrated by a single blood sample as proposed previously (EIF(1S)). No significant differences in area under the curve (AUC) or cerebral metabolic rate for glucose (CMRGlc) were found across the AIF-, EIF(NS)-, and EIF(1S)-based methods using repeated measures analysis of variance. In the correlation analysis, AUC or CMRGlc derived from EIF(NS) was highly correlated with those derived from AIF and EIF(1S). Preliminary comparison between AIF and EIF(NS) in three awake rats supported the idea that the method might be applicable to behaving animals. The present study suggests that the EIF(NS) method might serve as a noninvasive substitute for individual AIF measurement.

  13. Iterative pixelwise approach applied to computer-generated holograms and diffractive optical elements.

    PubMed

    Hsu, Wei-Feng; Lin, Shih-Chih

    2018-01-01

    This paper presents a novel approach to optimizing the design of phase-only computer-generated holograms (CGH) for the creation of binary images in an optical Fourier transform system. Optimization begins by selecting an image pixel with a temporal change in amplitude. The modulated image function undergoes an inverse Fourier transform followed by the imposition of a CGH constraint and the Fourier transform to yield an image function associated with the change in amplitude of the selected pixel. In iterations where the quality of the image is improved, that image function is adopted as the input for the next iteration. In cases where the image quality is not improved, the image function before the pixel changed is used as the input. Thus, the proposed approach is referred to as the pixelwise hybrid input-output (PHIO) algorithm. The PHIO algorithm was shown to achieve image quality far exceeding that of the Gerchberg-Saxton (GS) algorithm. The benefits were particularly evident when the PHIO algorithm was equipped with a dynamic range of image intensities equivalent to the amplitude freedom of the image signal. The signal variation of images reconstructed from the GS algorithm was 1.0223, but only 0.2537 when using PHIO, i.e., a 75% improvement. Nonetheless, the proposed scheme resulted in a 10% degradation in diffraction efficiency and signal-to-noise ratio.
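The accept/reject pixelwise iteration described above can be sketched roughly as below. This is a simplified interpretation, assuming a Fourier-transform relation between a phase-only hologram and the reconstructed image; details such as the amplitude-freedom mechanism and the actual error metric are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def phio(target, n_iter=200, delta=0.1):
    """Pixelwise hybrid input-output (PHIO) sketch for a phase-only CGH.

    target: desired binary amplitude image (2-D array).
    Each iteration perturbs one signal pixel's amplitude, projects through
    the phase-only hologram constraint, and keeps the change only if the
    reconstruction error over the signal region improves.
    """
    img = target.astype(float).copy()
    signal = np.argwhere(target > 0)

    def reconstruct(field):
        # Impose the phase-only CGH constraint, then transform back to the image plane
        hologram = np.exp(1j * np.angle(np.fft.ifft2(field)))
        return np.abs(np.fft.fft2(hologram))

    def error(field):
        rec = reconstruct(field)
        rec = rec / rec[tuple(signal.T)].mean()  # scale-invariant comparison
        return np.sum((rec[tuple(signal.T)] - target[tuple(signal.T)]) ** 2)

    best = error(img)
    for _ in range(n_iter):
        i, j = signal[rng.integers(len(signal))]
        trial = img.copy()
        trial[i, j] += delta * rng.standard_normal()  # temporal change in amplitude
        e = error(trial)
        if e < best:            # adopt the modified image function
            img, best = trial, e
        # otherwise keep the image function from before the pixel change
    return img, best
```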

  14. Sharpening of Hierarchical Visual Feature Representations of Blurred Images.

    PubMed

    Abdelhack, Mohamed; Kamitani, Yukiyasu

    2018-01-01

    The robustness of the visual system lies in its ability to perceive degraded images. This is achieved through interacting bottom-up, recurrent, and top-down pathways that process the visual input in concordance with stored prior information. The interaction mechanism by which they integrate visual input and prior information is still enigmatic. We present a new approach using deep neural network (DNN) representation to reveal the effects of such integration on degraded visual inputs. We transformed measured human brain activity resulting from viewing blurred images to the hierarchical representation space derived from a feedforward DNN. Transformed representations were found to veer toward the original nonblurred image and away from the blurred stimulus image. This indicated deblurring or sharpening in the neural representation, and possibly in our perception. We anticipate these results will help unravel the interplay mechanism between bottom-up, recurrent, and top-down pathways, leading to more comprehensive models of vision.

  15. Automatic classification of sleep stages based on the time-frequency image of EEG signals.

    PubMed

    Bajaj, Varun; Pachori, Ram Bilas

    2013-12-01

    In this paper, a new method for automatic sleep stage classification based on time-frequency image (TFI) of electroencephalogram (EEG) signals is proposed. Automatic classification of sleep stages is an important part for diagnosis and treatment of sleep disorders. The smoothed pseudo Wigner-Ville distribution (SPWVD) based time-frequency representation (TFR) of EEG signal has been used to obtain the time-frequency image (TFI). The segmentation of TFI has been performed based on the frequency-bands of the rhythms of EEG signals. The features derived from the histogram of segmented TFI have been used as an input feature set to multiclass least squares support vector machines (MC-LS-SVM) together with the radial basis function (RBF), Mexican hat wavelet, and Morlet wavelet kernel functions for automatic classification of sleep stages from EEG signals. The experimental results are presented to show the effectiveness of the proposed method for classification of sleep stages from EEG signals. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
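The feature-extraction pipeline can be sketched as follows. A SciPy spectrogram stands in here for the smoothed pseudo Wigner-Ville distribution used in the paper (SPWVD is not available in SciPy), and the band edges are the conventional EEG rhythm bands; the resulting histogram vector would be the input to the MC-LS-SVM classifier:

```python
import numpy as np
from scipy import signal as sig

def tfi_band_histogram_features(eeg, fs=200, bins=16):
    """Histogram features from a time-frequency image, per EEG rhythm band.

    The TFI is segmented along the frequency axis into the classical EEG
    bands, and a normalized intensity histogram is taken from each segment.
    """
    f, t, tfi = sig.spectrogram(eeg, fs=fs, nperseg=64, noverlap=32)
    bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
             "beta": (13, 30), "gamma": (30, 60)}
    feats = []
    for lo, hi in bands.values():
        seg = tfi[(f >= lo) & (f < hi)]          # frequency-band segment of the TFI
        hist, _ = np.histogram(seg, bins=bins)
        feats.append(hist / max(hist.sum(), 1))  # normalized histogram
    return np.concatenate(feats)                 # feature vector for the classifier
```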

  16. Using model order tests to determine sensory inputs in a motion study

    NASA Technical Reports Server (NTRS)

    Repperger, D. W.; Junker, A. M.

    1977-01-01

    In the study of motion effects on tracking performance, a problem of interest is determining which sensory inputs a human uses in controlling the tracking task. In the approach presented here, a simple canonical model (PID, i.e., a proportional, integral, derivative structure) is used to model the human's input-output time series. A study of significant changes in the reduction of the output-error loss functional is conducted as different permutations of parameters are considered. Since this canonical model includes parameters which are related to inputs to the human (such as the error signal, its derivatives, and its integral), the study of model order is equivalent to the study of which sensory inputs are being used by the tracker. The parameters which have the greatest effect on significantly reducing the loss function are obtained. In this manner the identification procedure converts the problem of testing for model order into the problem of determining sensory inputs.
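The model-order test can be illustrated with a least-squares toy version: fit nested models built from the error signal, its integral, and its derivative, and watch where the loss stops dropping. The paper's identification procedure is more elaborate; this sketch only conveys the idea:

```python
import numpy as np

def sensory_input_test(error, output, dt):
    """Nested least-squares fits of a PID-structure operator model.

    Regressors are the tracking error, its integral, and its derivative;
    comparing the residual loss as terms are added indicates which
    sensory inputs (position, integral, rate) the tracker is using.
    """
    e = np.asarray(error, float)
    regressors = {
        "P": e,
        "I": np.cumsum(e) * dt,       # integrated error
        "D": np.gradient(e, dt),      # error rate
    }
    losses = {}
    X = np.empty((len(e), 0))
    for r in regressors.values():
        X = np.column_stack([X, r])
        coef, *_ = np.linalg.lstsq(X, output, rcond=None)
        pred = X @ coef
        losses["+".join(list(regressors)[:X.shape[1]])] = np.sum((output - pred) ** 2)
    return losses  # loss drops sharply when a genuinely used input is added
```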

  17. A new method of building footprints detection using airborne laser scanning data and multispectral image

    NASA Astrophysics Data System (ADS)

    Luo, Yiping; Jiang, Ting; Gao, Shengli; Wang, Xin

    2010-10-01

    This paper presents a new approach for detecting building footprints from a registered aerial image with multispectral bands combined with airborne laser scanning data, acquired synchronously by a Leica Geosystems ALS40 and an Applanix DACS-301 on the same platform. A two-step method for building detection is presented, consisting of selecting 'building' candidate points and then classifying them. A digital surface model (DSM) derived from last-pulse laser scanning data was first filtered, and the laser points were classified into 'ground' and 'building or tree' classes using a mathematical morphology filter. The 'ground' points were then resampled into a digital elevation model (DEM), and a normalized DSM (nDSM) was generated from the DEM and DSM. Candidate points were selected from the 'building or tree' points by height and area thresholds in the nDSM, and were further classified into building points and tree points using the support vector machine (SVM) classification method. Two classification tests were carried out, one using features only from the laser scanning data and one using associated features from both input data sources. The features included height, height finite differences, RGB band values, and so on; the RGB values of the points were obtained by matching the laser scanning data to the image through the collinearity equations. The features of the training points served as input data for the SVM, and cross-validation was used to select the best classification parameters, from which the decision function was constructed to determine the class of each candidate point. The results showed that the associated features from both input data sources were superior to features from the laser scanning data alone, achieving an accuracy of more than 90% for buildings.
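The candidate-selection step (nDSM height threshold plus area threshold) can be sketched compactly. The threshold values below are illustrative, not the paper's:

```python
import numpy as np
from scipy import ndimage

def building_candidates(dsm, dem, height_min=2.5, area_min=10):
    """Select 'building or tree' candidate cells from laser-scanning rasters.

    nDSM = DSM - DEM isolates off-terrain heights; cells above a height
    threshold, grouped into connected regions larger than an area
    threshold, become candidates. (The SVM step would then separate
    buildings from trees using height/spectral features.)
    """
    ndsm = dsm - dem                         # normalized DSM
    mask = ndsm >= height_min                # height threshold
    labels, n = ndimage.label(mask)          # connected candidate regions
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    keep = np.isin(labels, 1 + np.flatnonzero(sizes >= area_min))  # area threshold
    return keep
```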

  18. Optimum sensitivity derivatives of objective functions in nonlinear programming

    NASA Technical Reports Server (NTRS)

    Barthelemy, J.-F. M.; Sobieszczanski-Sobieski, J.

    1983-01-01

    The feasibility of eliminating second derivatives from the input of optimum sensitivity analyses of optimization problems is demonstrated. This elimination restricts the sensitivity analysis to the first-order sensitivity derivatives of the objective function. It is also shown that when a complete first-order sensitivity analysis is performed, second-order sensitivity derivatives of the objective function are available at little additional cost. An expression is derived whose application to linear programming is presented.
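The first-order result the abstract refers to can be stated compactly. For the problem min_x f(x, p) subject to g(x, p) ≤ 0, with Lagrangian L = f + λᵀg, the sensitivity of the optimum objective f* to a problem parameter p involves only first derivatives (a standard optimum-sensitivity result; the notation here is ours):

```latex
\frac{df^*}{dp}
= \left.\frac{\partial L}{\partial p}\right|_{x^*,\,\lambda^*}
= \frac{\partial f}{\partial p}(x^*, p)
+ \lambda^{*\,T}\,\frac{\partial g}{\partial p}(x^*, p).
```

No second derivatives of f or g appear, which is the elimination the abstract demonstrates.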

  19. Real-Time Adaptive Color Segmentation by Neural Networks

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.

    2004-01-01

    Artificial neural networks that would utilize the cascade error projection (CEP) algorithm have been proposed as a means of autonomous, real-time, adaptive color segmentation of images that change with time. In the original intended application, such a neural network would be used to analyze digitized color video images of terrain on a remote planet as viewed from an uninhabited spacecraft approaching the planet. During descent toward the surface of the planet, information on the segmentation of the images into differently colored areas would be updated adaptively in real time to capture changes in contrast, brightness, and resolution, all in an effort to identify a safe and scientifically productive landing site and provide control feedback to steer the spacecraft toward that site. Potential terrestrial applications include monitoring images of crops to detect insect invasions and monitoring of buildings and other facilities to detect intruders. The CEP algorithm is reliable and is well suited to implementation in very-large-scale integrated (VLSI) circuitry. It was chosen over other neural-network learning algorithms because it is better suited to real-time learning: it provides a self-evolving neural-network structure, requires fewer iterations to converge, and is more tolerant of low resolution (that is, fewer bits) in the quantization of neural-network synaptic weights. Consequently, a CEP neural network learns relatively quickly, and the circuitry needed to implement it is relatively simple. Like other neural networks, a CEP neural network includes an input layer, hidden units, and output units (see figure). As in other neural networks, a CEP network is presented with a succession of input training patterns, giving rise to a set of outputs that are compared with the desired outputs. Also as in other neural networks, the synaptic weights are updated iteratively in an effort to bring the outputs closer to target values.
A distinctive feature of the CEP neural network and algorithm is that each update of synaptic weights takes place in conjunction with the addition of another hidden unit, which then remains in place as still other hidden units are added on subsequent iterations. For a given training pattern, the synaptic weight between (1) the inputs and the previously added hidden units and (2) the newly added hidden unit is updated by an amount proportional to the partial derivative of a quadratic error function with respect to the synaptic weight. The synaptic weight between the newly added hidden unit and each output unit is given by a more complex function that involves the errors between the outputs and their target values, the transfer functions (hyperbolic tangents) of the neural units, and the derivatives of the transfer functions.

  20. Uncertainty importance analysis using parametric moment ratio functions.

    PubMed

    Wei, Pengfei; Lu, Zhenzhou; Song, Jingwen

    2014-02-01

    This article presents a new importance analysis framework, called parametric moment ratio function, for measuring the reduction of model output uncertainty when the distribution parameters of inputs are changed, and the emphasis is put on the mean and variance ratio functions with respect to the variances of model inputs. The proposed concepts efficiently guide the analyst to achieve a targeted reduction on the model output mean and variance by operating on the variances of model inputs. The unbiased and progressive unbiased Monte Carlo estimators are also derived for the parametric mean and variance ratio functions, respectively. Only a set of samples is needed for implementing the proposed importance analysis by the proposed estimators, thus the computational cost is free of input dimensionality. An analytical test example with highly nonlinear behavior is introduced for illustrating the engineering significance of the proposed importance analysis technique and verifying the efficiency and convergence of the derived Monte Carlo estimators. Finally, the moment ratio function is applied to a planar 10-bar structure for achieving a targeted 50% reduction of the model output variance. © 2013 Society for Risk Analysis.
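The variance ratio function can be illustrated with a brute-force Monte Carlo version: rescale the standard deviation of one input and compare the output variance to its nominal value. Note the paper's estimators reuse a single sample set (which is their efficiency advantage); this sketch simply resamples, for clarity:

```python
import numpy as np

def variance_ratio(model, means, stds, i, scale, n=200_000, seed=0):
    """Ratio of model-output variance after rescaling the std of input i
    to the nominal output variance, for independent normal inputs."""
    rng = np.random.default_rng(seed)
    x = rng.normal(means, stds, size=(n, len(means)))
    v0 = model(x).var()                       # nominal output variance
    stds2 = np.array(stds, float)
    stds2[i] *= scale                         # operate on the variance of input i
    x2 = rng.normal(means, stds2, size=(n, len(means)))
    return model(x2).var() / v0
```

For a linear model Y = 2·X0 + X1 with unit input variances, halving the std of X0 gives Var(Y) = 4·0.25 + 1 = 2 against a nominal 5, i.e. a ratio of 0.4.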

  1. Automatic recognition of ship types from infrared images using superstructure moment invariants

    NASA Astrophysics Data System (ADS)

    Li, Heng; Wang, Xinyu

    2007-11-01

    Automatic object recognition is an active area of interest for military and commercial applications. In this paper, a system addressing autonomous recognition of ship types in infrared images is proposed. First, an approach to segmentation based on detection of salient features of the target, with subsequent shadow removal, is proposed; this forms the basis of the subsequent object recognition. Considering that the differences between the shapes of various ships lie mainly in their superstructures, we then use superstructure moment functions invariant to translation, rotation, and scale differences in input patterns, and develop a robust algorithm for extracting the ship superstructure. A back-propagation neural network is then used as the classifier in the recognition stage, with projection images of simulated three-dimensional ship models used as the training sets. Our recognition model was implemented and experimentally validated using both simulated three-dimensional ship model images and real images derived from video of an AN/AAS-44V Forward Looking Infrared (FLIR) sensor.
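The abstract does not give the specific superstructure moment functions; the standard Hu moment invariants illustrate the translation/rotation/scale-invariance property such features rely on. A minimal version for a binary silhouette (first two invariants only):

```python
import numpy as np

def hu_invariants(mask):
    """First two Hu moment invariants of a binary silhouette.

    Invariant to translation and scale (and, for these two, rotation);
    features of this kind would be fed to the classifier.
    """
    y, x = np.nonzero(mask)
    m00 = len(x)                              # zeroth moment (area)
    xc, yc = x.mean(), y.mean()               # centroid

    def mu(p, q):                             # central moment
        return np.sum((x - xc) ** p * (y - yc) ** q)

    def eta(p, q):                            # scale-normalized central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2)

    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2
```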

  2. Statistical linearization for multi-input/multi-output nonlinearities

    NASA Technical Reports Server (NTRS)

    Lin, Ching-An; Cheng, Victor H. L.

    1991-01-01

    Formulas are derived for the computation of the random input-describing functions for MIMO nonlinearities; these straightforward and rigorous derivations are based on the optimal mean square linear approximation. The computations involve evaluations of multiple integrals. It is shown that, for certain classes of nonlinearities, multiple-integral evaluations are obviated and the computations are significantly simplified.
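The optimal mean-square linear approximation the derivation is based on can be checked numerically in the single-input special case: for a zero-mean Gaussian input, the random-input describing function gain minimizing E[(f(x) − Nx)²] is N = E[x·f(x)]/σ². A Monte Carlo sketch (the paper's MIMO formulas evaluate the corresponding multiple integrals analytically):

```python
import numpy as np

def ridf_gain(f, sigma, n=400_000, seed=0):
    """Monte Carlo random-input describing function gain for a static
    nonlinearity f, with zero-mean Gaussian input of std sigma."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, sigma, n)
    return np.mean(x * f(x)) / sigma**2   # N = E[x f(x)] / sigma^2
```

For the ideal relay f(x) = sign(x) with σ = 1, the analytic gain is √(2/π) ≈ 0.798.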

  3. State-space estimation of the input stimulus function using the Kalman filter: a communication system model for fMRI experiments.

    PubMed

    Ward, B Douglas; Mazaheri, Yousef

    2006-12-15

    The blood oxygenation level-dependent (BOLD) signal measured in functional magnetic resonance imaging (fMRI) experiments in response to input stimuli is temporally delayed and distorted due to the blurring effect of the voxel hemodynamic impulse response function (IRF). Knowledge of the IRF, obtained during the same experiment, or as the result of a separate experiment, can be used to dynamically obtain an estimate of the input stimulus function. Reconstruction of the input stimulus function allows the fMRI experiment to be evaluated as a communication system. The input stimulus function may be considered as a "message" which is being transmitted over a noisy "channel", where the "channel" is characterized by the voxel IRF. Following reconstruction of the input stimulus function, the received message is compared with the transmitted message on a voxel-by-voxel basis to determine the transmission error rate. Reconstruction of the input stimulus function provides insight into actual brain activity during task activation with less temporal blurring, and may be considered as a first step toward estimation of the true neuronal input function.
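The state-space estimation idea can be sketched with a small Kalman filter. This is our simplified interpretation, assuming a known FIR impulse response h and a random-walk prior on the stimulus; the paper's state-space model and IRF handling differ in detail:

```python
import numpy as np

def kalman_deconvolve(y, h, q=1.0, r=0.01):
    """Kalman-filter estimate of the input stimulus from a measured series y.

    State: the last len(h) stimulus samples, shifted each step, with a
    random-walk innovation on the newest sample; measurement: FIR
    convolution of the state with the impulse response h.
    """
    L = len(h)
    F = np.eye(L, k=-1)                 # shift older samples down ...
    F[0, 0] = 1.0                       # ... random-walk prediction of the newest
    H = np.asarray(h, float)[None, :]   # y_t = sum_k h[k] * s[t-k]
    Q = np.zeros((L, L)); Q[0, 0] = q
    x = np.zeros(L); P = np.eye(L)
    est = []
    for yt in y:
        x = F @ x; P = F @ P @ F.T + Q            # predict
        S = H @ P @ H.T + r
        K = (P @ H.T) / S                          # Kalman gain
        x = x + (K * (yt - H @ x)).ravel()         # update
        P = P - K @ H @ P
        est.append(x[0])                           # newest stimulus sample
    return np.array(est)
```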

  4. A comparison of individual and population-derived vascular input functions for quantitative DCE-MRI in rats.

    PubMed

    Hormuth, David A; Skinner, Jack T; Does, Mark D; Yankeelov, Thomas E

    2014-05-01

    Dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) can quantitatively and qualitatively assess physiological characteristics of tissue. Quantitative DCE-MRI requires an estimate of the time rate of change of the concentration of the contrast agent in the blood plasma, the vascular input function (VIF). Measuring the VIF in small animals is notoriously difficult as it requires high temporal resolution images limiting the achievable number of slices, field-of-view, spatial resolution, and signal-to-noise. Alternatively, a population-averaged VIF could be used to mitigate the acquisition demands in studies aimed to investigate, for example, tumor vascular characteristics. Thus, the overall goal of this manuscript is to determine how the kinetic parameters estimated by a population based VIF differ from those estimated by an individual VIF. Eight rats bearing gliomas were imaged before, during, and after an injection of Gd-DTPA. K(trans), ve, and vp were extracted from signal-time curves of tumor tissue using both individual and population-averaged VIFs. Extended model voxel estimates of K(trans) and ve in all animals had concordance correlation coefficients (CCC) ranging from 0.69 to 0.98 and Pearson correlation coefficients (PCC) ranging from 0.70 to 0.99. Additionally, standard model estimates resulted in CCCs ranging from 0.81 to 0.99 and PCCs ranging from 0.98 to 1.00, supporting the use of a population based VIF if an individual VIF is not available. Copyright © 2014 Elsevier Inc. All rights reserved.
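The standard Tofts model underlying this comparison convolves the VIF with an exponential kernel, C_t(t) = Ktrans ∫ C_p(τ) exp(−(Ktrans/ve)(t−τ)) dτ. A minimal forward model and fit, assuming a uniformly sampled time grid (the paper's extended model adds the vp term, omitted here):

```python
import numpy as np
from scipy.optimize import curve_fit

def tofts_ct(t, cp, ktrans, ve):
    """Standard Tofts model: tissue CA concentration from a VIF cp(t)."""
    kep = ktrans / ve
    dt = t[1] - t[0]
    kernel = np.exp(-kep * t)
    # Discrete convolution of the VIF with the exponential leakage kernel
    return ktrans * np.convolve(cp, kernel)[:len(t)] * dt

def fit_tofts(t, cp, ct, p0=(0.1, 0.2)):
    """Recover (Ktrans, ve) from a tissue curve given a VIF."""
    f = lambda tt, kt, ve: tofts_ct(tt, cp, kt, ve)
    (kt, ve), _ = curve_fit(f, t, ct, p0=p0, bounds=(1e-6, 5))
    return kt, ve
```

Swapping an individual VIF for a population-averaged one in `cp` and re-fitting is exactly the comparison the paper quantifies with CCC/PCC.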

  5. Synaptic inputs from stroke-injured brain to grafted human stem cell-derived neurons activated by sensory stimuli.

    PubMed

    Tornero, Daniel; Tsupykov, Oleg; Granmo, Marcus; Rodriguez, Cristina; Grønning-Hansen, Marita; Thelin, Jonas; Smozhanik, Ekaterina; Laterza, Cecilia; Wattananit, Somsak; Ge, Ruimin; Tatarishvili, Jemal; Grealish, Shane; Brüstle, Oliver; Skibo, Galina; Parmar, Malin; Schouenborg, Jens; Lindvall, Olle; Kokaia, Zaal

    2017-03-01

    Transplanted neurons derived from stem cells have been proposed to improve function in animal models of human disease by various mechanisms such as neuronal replacement. However, whether the grafted neurons receive functional synaptic inputs from the recipient's brain and integrate into host neural circuitry is unknown. Here we studied the synaptic inputs from the host brain to grafted cortical neurons derived from human induced pluripotent stem cells after transplantation into stroke-injured rat cerebral cortex. Using the rabies virus-based trans-synaptic tracing method and immunoelectron microscopy, we demonstrate that the grafted neurons receive direct synaptic inputs from neurons in different host brain areas located in a pattern similar to that of neurons projecting to the corresponding endogenous cortical neurons in the intact brain. Electrophysiological in vivo recordings from the cortical implants show that physiological sensory stimuli, i.e. cutaneous stimulation of nose and paw, can activate or inhibit spontaneous activity in grafted neurons, indicating that at least some of the afferent inputs are functional. In agreement, we find using patch-clamp recordings that a portion of grafted neurons respond to photostimulation of virally transfected, channelrhodopsin-2-expressing thalamo-cortical axons in acute brain slices. The present study demonstrates, for the first time, that the host brain regulates the activity of grafted neurons, providing strong evidence that transplanted human induced pluripotent stem cell-derived cortical neurons can become incorporated into injured cortical circuitry. Our findings support the idea that these neurons could contribute to functional recovery in stroke and other conditions causing neuronal loss in cerebral cortex. © The Author (2017). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  6. Image-derived input function in PET brain studies: blood-based methods are resistant to motion artifacts.

    PubMed

    Zanotti-Fregonara, Paolo; Liow, Jeih-San; Comtat, Claude; Zoghbi, Sami S; Zhang, Yi; Pike, Victor W; Fujita, Masahiro; Innis, Robert B

    2012-09-01

    Image-derived input function (IDIF) from carotid arteries is an elegant alternative to full arterial blood sampling for brain PET studies. However, a recent study using blood-free IDIFs found that this method is particularly vulnerable to patient motion. The present study used both simulated and clinical [11C](R)-rolipram data to assess the robustness of a blood-based IDIF method (a method that is ultimately normalized with blood samples) with regard to motion artifacts. The impact of motion on the accuracy of IDIF was first assessed with an analytical simulation of a high-resolution research tomograph using a numerical phantom of the human brain, equipped with internal carotids. Different degrees of translational (from 1 to 20 mm) and rotational (from 1 to 15°) motions were tested. The impact of motion was then tested on the high-resolution research tomograph dynamic scans of three healthy volunteers, reconstructed with and without an online motion correction system. IDIFs and Logan-distribution volume (VT) values derived from simulated and clinical scans with motion were compared with those obtained from the scans with motion correction. In the phantom scans, the difference in the area under the curve (AUC) for the carotid time-activity curves was up to 19% for rotations and up to 66% for translations compared with the motionless simulation. However, for the final IDIFs, which were fitted to blood samples, the AUC difference was 11% for rotations and 8% for translations. Logan-VT errors were always less than 10%, except for the maximum translation of 20 mm, in which the error was 18%. Errors in the clinical scans without motion correction appeared to be minor, with differences in AUC and Logan-VT always less than 10% compared with scans with motion correction. When a blood-based IDIF method is used for neurological PET studies, the motion of the patient affects IDIF estimation and kinetic modeling only minimally.
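The "blood-based" normalization (an IDIF ultimately anchored to a few blood samples) can be sketched with a single least-squares scale factor; the paper's normalization procedure may differ in detail:

```python
import numpy as np

def scale_idif_to_blood(t_idif, idif, t_samples, blood_activity):
    """Anchor an image-derived input function to a few blood samples.

    A least-squares scale factor maps the carotid time-activity curve
    onto the measured blood activities at the sampling times.
    """
    pred = np.interp(t_samples, t_idif, idif)    # IDIF values at sample times
    k = (pred @ blood_activity) / (pred @ pred)  # least-squares scale factor
    return k * idif
```

Because the scale comes from the blood samples rather than from the image alone, a motion-induced error in the carotid curve's amplitude is largely absorbed by k, which is consistent with the robustness the study reports.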

  7. Relative sensitivities of DCE-MRI pharmacokinetic parameters to arterial input function (AIF) scaling

    NASA Astrophysics Data System (ADS)

    Li, Xin; Cai, Yu; Moloney, Brendan; Chen, Yiyi; Huang, Wei; Woods, Mark; Coakley, Fergus V.; Rooney, William D.; Garzotto, Mark G.; Springer, Charles S.

    2016-08-01

    Dynamic-Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) has been used widely for clinical applications. Pharmacokinetic modeling of DCE-MRI data that extracts quantitative contrast reagent/tissue-specific model parameters is the most investigated method. One of the primary challenges in pharmacokinetic analysis of DCE-MRI data is accurate and reliable measurement of the arterial input function (AIF), which is the driving force behind all pharmacokinetics. Because of effects such as inflow and partial volume averaging, AIF measured from individual arteries sometimes require amplitude scaling for better representation of the blood contrast reagent (CR) concentration time-courses. Empirical approaches like blinded AIF estimation or reference tissue AIF derivation can be useful and practical, especially when there is no clearly visible blood vessel within the imaging field-of-view (FOV). Similarly, these approaches generally also require magnitude scaling of the derived AIF time-courses. Since the AIF varies among individuals even with the same CR injection protocol and the perfect scaling factor for reconstructing the ground truth AIF often remains unknown, variations in estimated pharmacokinetic parameters due to varying AIF scaling factors are of special interest. In this work, using simulated and real prostate cancer DCE-MRI data, we examined parameter variations associated with AIF scaling. Our results show that, for both the fast-exchange-limit (FXL) Tofts model and the water exchange sensitized fast-exchange-regime (FXR) model, the commonly fitted CR transfer constant (Ktrans) and the extravascular, extracellular volume fraction (ve) scale nearly proportionally with the AIF, whereas the FXR-specific unidirectional cellular water efflux rate constant, kio, and the CR intravasation rate constant, kep, are both AIF scaling insensitive. 
This indicates that, for DCE-MRI of prostate cancer and possibly other cancers, kio and kep may be more suitable imaging biomarkers for cross-platform, multicenter applications. Data from our limited study cohort show that kio correlates with Gleason scores, suggesting that it may be a useful biomarker for prostate cancer disease progression monitoring.

  8. Optimal control of LQR for discrete time-varying systems with input delays

    NASA Astrophysics Data System (ADS)

    Yin, Yue-Zhu; Yang, Zhong-Lian; Yin, Zhi-Xiang; Xu, Feng

    2018-04-01

    In this work, we consider the optimal control problem of linear quadratic regulation for discrete time-varying systems with a single input and multiple input delays. An innovative and simple method to derive the optimal controller is given. The studied problem is first converted into an equivalent problem subject to a constraint condition. Then, using the established duality, the problem is transformed into a static mathematical optimisation problem without input delays. The optimal control input minimising the performance index function is derived by solving this optimisation problem with two methods. A numerical simulation example shows that both approaches are feasible and effective.
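The delay-free problem that the duality transformation reduces to is solved by the standard backward Riccati recursion. A finite-horizon baseline sketch (the paper's contribution is the reduction itself, not this recursion):

```python
import numpy as np

def dlqr_finite(A, B, Q, R, N):
    """Finite-horizon discrete LQR gains via backward Riccati recursion.

    Minimizes sum_k (x'Qx + u'Ru) over horizon N for x_{k+1} = A x_k + B u_k,
    returning the time-varying gains (u_k = -K_k x_k) and the terminal P.
    """
    P = Q.copy()
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # optimal gain
        P = Q + A.T @ P @ (A - B @ K)                        # Riccati update
        gains.append(K)
    return gains[::-1], P
```

For the scalar system A = B = Q = R = 1, the recursion converges to the fixed point P = (1 + √5)/2 with steady-state gain K = P/(1 + P).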

  9. Automatic image equalization and contrast enhancement using Gaussian mixture modeling.

    PubMed

    Celik, Turgay; Tjahjadi, Tardi

    2012-01-01

    In this paper, we propose an adaptive image equalization algorithm that automatically enhances the contrast in an input image. The algorithm uses the Gaussian mixture model to model the image gray-level distribution, and the intersection points of the Gaussian components in the model are used to partition the dynamic range of the image into input gray-level intervals. The contrast equalized image is generated by transforming the pixels' gray levels in each input interval to the appropriate output gray-level interval according to the dominant Gaussian component and the cumulative distribution function of the input interval. To take account of the hypothesis that homogeneous regions in the image represent homogeneous silences (or set of Gaussian components) in the image histogram, the Gaussian components with small variances are weighted with smaller values than the Gaussian components with larger variances, and the gray-level distribution is also used to weight the components in the mapping of the input interval to the output interval. Experimental results show that the proposed algorithm produces better or comparable enhanced images than several state-of-the-art algorithms. Unlike the other algorithms, the proposed algorithm is free of parameter setting for a given dynamic range of the enhanced image and can be applied to a wide range of image types.

  10. Approach for Input Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives

    NASA Technical Reports Server (NTRS)

    Putko, Michele M.; Taylor, Arthur C., III; Newman, Perry A.; Green, Lawrence L.

    2002-01-01

    An implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for quasi 3-D Euler CFD code is presented. Given uncertainties in statistically independent, random, normally distributed input variables, first- and second-order statistical moment procedures are performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, these moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
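The first-order moment method the abstract describes approximates the output mean by f(μ) and the output variance by the sum of squared first-order sensitivities times the input variances. A sketch for independent normal inputs, with finite-difference derivatives standing in for the CFD code's analytic sensitivity derivatives:

```python
import numpy as np

def first_order_moments(f, mu, sigma, eps=1e-6):
    """First-order approximate statistical moments of f(x):
    mean ~ f(mu),  var ~ sum_i (df/dx_i * sigma_i)^2."""
    mu = np.asarray(mu, float)
    grad = np.empty_like(mu)
    for i in range(len(mu)):
        d = np.zeros_like(mu); d[i] = eps
        grad[i] = (f(mu + d) - f(mu - d)) / (2 * eps)  # central difference
    mean = f(mu)
    var = np.sum((grad * np.asarray(sigma)) ** 2)
    return mean, var
```

For f(x) = x0² + 3·x1 at μ = (1, 2) with σ = (0.05, 0.05), the first-order moments are mean 7 and variance (2·0.05)² + (3·0.05)² = 0.0325; comparing such approximations against Monte Carlo is exactly the validity check the paper performs.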

  11. Development of a novel 2D color map for interactive segmentation of histological images.

    PubMed

    Chaudry, Qaiser; Sharma, Yachna; Raza, Syed H; Wang, May D

    2012-05-01

    We present a color segmentation approach based on a two-dimensional color map derived from the input image. Pathologists stain tissue biopsies with various colored dyes to see the expression of biomarkers. In these images, because of color variation due to inconsistencies in experimental procedures and lighting conditions, the segmentation used to analyze biological features is usually ad hoc. Many algorithms, like K-means, use a single metric to segment the image into different color classes and rarely provide users with powerful color control. Our 2D color map interactive segmentation technique, based on human color perception and the color distribution of the input image, enables user control without noticeable delay. Our methodology works for different staining types and different types of cancer tissue images. Our method's results show good accuracy with low response and computational times, making it feasible for user-interactive applications involving segmentation of histological images.

  12. Design and Imaging of Ground-Based Multiple-Input Multiple-Output Synthetic Aperture Radar (MIMO SAR) with Non-Collinear Arrays.

    PubMed

    Hu, Cheng; Wang, Jingyang; Tian, Weiming; Zeng, Tao; Wang, Rui

    2017-03-15

    Multiple-Input Multiple-Output (MIMO) radar provides much more flexibility than the traditional radar thanks to its ability to realize far more observation channels than the actual number of transmit and receive (T/R) elements. In designing the MIMO imaging radar arrays, the commonly used virtual array theory generally assumes that all elements are on the same line. However, due to the physical size of the antennas and coupling effect between T/R elements, a certain height difference between T/R arrays is essential, which will result in the defocusing of edge points of the scene. On the other hand, the virtual array theory implies far-field approximation. Therefore, with a MIMO array designed by this theory, there will exist inevitable high grating lobes in the imaging results of near-field edge points of the scene. To tackle these problems, this paper derives the relationship between target's point spread function (PSF) and pattern of T/R arrays, by which the design criterion is presented for near-field imaging MIMO arrays. Firstly, the proper height between T/R arrays is designed to focus the near-field edge points well. Secondly, the far-field array is modified to suppress the grating lobes in the near-field area. Finally, the validity of the proposed methods is verified by two simulations and an experiment.

  13. Design and Imaging of Ground-Based Multiple-Input Multiple-Output Synthetic Aperture Radar (MIMO SAR) with Non-Collinear Arrays

    PubMed Central

    Hu, Cheng; Wang, Jingyang; Tian, Weiming; Zeng, Tao; Wang, Rui

    2017-01-01

    Multiple-Input Multiple-Output (MIMO) radar provides much more flexibility than the traditional radar thanks to its ability to realize far more observation channels than the actual number of transmit and receive (T/R) elements. In designing the MIMO imaging radar arrays, the commonly used virtual array theory generally assumes that all elements are on the same line. However, due to the physical size of the antennas and coupling effect between T/R elements, a certain height difference between T/R arrays is essential, which will result in the defocusing of edge points of the scene. On the other hand, the virtual array theory implies far-field approximation. Therefore, with a MIMO array designed by this theory, there will exist inevitable high grating lobes in the imaging results of near-field edge points of the scene. To tackle these problems, this paper derives the relationship between target’s point spread function (PSF) and pattern of T/R arrays, by which the design criterion is presented for near-field imaging MIMO arrays. Firstly, the proper height between T/R arrays is designed to focus the near-field edge points well. Secondly, the far-field array is modified to suppress the grating lobes in the near-field area. Finally, the validity of the proposed methods is verified by two simulations and an experiment. PMID:28294996

  14. A hexagonal orthogonal-oriented pyramid as a model of image representation in visual cortex

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.

    1989-01-01

Retinal ganglion cells represent the visual image with a spatial code, in which each cell conveys information about a small region in the image. In contrast, cells of the primary visual cortex use a hybrid space-frequency code in which each cell conveys information about a region that is local in space, spatial frequency, and orientation. A mathematical model for this transformation is described. The hexagonal orthogonal-oriented quadrature pyramid (HOP) transform, which operates on a hexagonal input lattice, uses basis functions that are orthogonal, self-similar, and localized in space, spatial frequency, orientation, and phase. The basis functions, which are generated from seven basic types through a recursive process, form an image code of the pyramid type. The seven basis functions, six bandpass and one low-pass, occupy a point and a hexagon of six nearest neighbors on a hexagonal lattice. The six bandpass basis functions consist of three with even symmetry, and three with odd symmetry. At the lowest level, the inputs are image samples. At each higher level, the input lattice is provided by the low-pass coefficients computed at the previous level. At each level, the output is subsampled in such a way as to yield a new hexagonal lattice with a spacing √7 times larger than that of the previous level, so that the number of coefficients is reduced by a factor of seven at each level. In the biological model, the input lattice is the retinal ganglion cell array. The resulting scheme provides a compact, efficient code of the image and generates receptive fields that resemble those of the primary visual cortex.

  15. Attenuation correction for brain PET imaging using deep neural network based on dixon and ZTE MR images.

    PubMed

    Gong, Kuang; Yang, Jaewon; Kim, Kyungsang; El Fakhri, Georges; Seo, Youngho; Li, Quanzheng

    2018-05-23

Positron Emission Tomography (PET) is a functional imaging modality widely used in neuroscience studies. To obtain meaningful quantitative results from PET images, attenuation correction is necessary during image reconstruction. For PET/MR hybrid systems, attenuation correction is challenging because Magnetic Resonance (MR) images do not reflect attenuation coefficients directly. To address this issue, we present deep neural network methods to derive the continuous attenuation coefficients for brain PET imaging from MR images. With only Dixon MR images as the network input, the existing U-net structure was adopted, and analysis using forty patient data sets shows it is superior to other Dixon-based methods. When both Dixon and zero echo time (ZTE) images are available, we have proposed a modified U-net structure, named GroupU-net, to efficiently make use of both Dixon and ZTE information through group convolution modules as the network goes deeper. Quantitative analysis based on fourteen real patient data sets demonstrates that both network approaches can perform better than the standard methods, and the proposed network structure can further reduce the PET quantification error compared to the U-net structure. © 2018 Institute of Physics and Engineering in Medicine.

  16. Speckle noise reduction in ultrasound images using a discrete wavelet transform-based image fusion technique.

    PubMed

    Choi, Hyun Ho; Lee, Ju Hwan; Kim, Sung Min; Park, Sung Yun

    2015-01-01

Here, speckle noise in ultrasound images is removed using an image fusion-based denoising method. To optimize the denoising performance, each discrete wavelet transform (DWT) and filtering technique was analyzed and compared, and their performances were compared in order to derive the optimal input conditions. To evaluate speckle noise removal performance, the image fusion algorithm was applied to the ultrasound images and comparatively analyzed against the original images without the algorithm. As a result, applying DWT and filtering techniques alone caused information loss and altered noise characteristics, and did not deliver the most significant noise reduction. Conversely, the image fusion method applied to the SRAD (speckle-reducing anisotropic diffusion) output and the original image preserved the key information in the original image while removing the speckle noise. Based on these characteristics, the SRAD-original input condition gave the best denoising performance for the ultrasound images. The best denoising technique proposed on the basis of these results was confirmed to have high potential for clinical application.
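The DWT-based fusion idea can be sketched with a single-level 2D Haar transform: decompose both inputs, average the approximation band, keep the larger-magnitude detail coefficients, and invert. This is a generic illustration of wavelet image fusion, not the paper's exact SRAD-based pipeline.

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar transform (averaging convention); even-sized input."""
    L = (img[:, 0::2] + img[:, 1::2]) / 2   # row-wise average
    H = (img[:, 0::2] - img[:, 1::2]) / 2   # row-wise difference
    LL = (L[0::2, :] + L[1::2, :]) / 2
    LH = (L[0::2, :] - L[1::2, :]) / 2
    HL = (H[0::2, :] + H[1::2, :]) / 2
    HH = (H[0::2, :] - H[1::2, :]) / 2
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Inverse of haar2d (perfect reconstruction)."""
    L = np.empty((LL.shape[0] * 2, LL.shape[1]))
    H = np.empty_like(L)
    L[0::2, :], L[1::2, :] = LL + LH, LL - LH
    H[0::2, :], H[1::2, :] = HL + HH, HL - HH
    img = np.empty((L.shape[0], L.shape[1] * 2))
    img[:, 0::2], img[:, 1::2] = L + H, L - H
    return img

def fuse(img1, img2):
    """Average the approximation bands, keep the larger-magnitude details."""
    c1, c2 = haar2d(img1), haar2d(img2)
    LL = (c1[0] + c2[0]) / 2
    details = [np.where(np.abs(d1) >= np.abs(d2), d1, d2)
               for d1, d2 in zip(c1[1:], c2[1:])]
    return ihaar2d(LL, *details)
```

The max-magnitude rule on detail bands is a common fusion heuristic: it keeps the sharper edges from whichever input preserved them better.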

  17. Distinguishing plant population and variety with UAV-derived vegetation indices

    NASA Astrophysics Data System (ADS)

    Oakes, Joseph; Balota, Maria

    2017-05-01

Variety selection and seeding rate are two important choices that a peanut grower must make. High-yielding varieties can increase profit with no additional input costs, while seeding rate often determines the input cost a grower will incur from seed. The overall purpose of this study was to examine the effect that seeding rate has on different peanut varieties. With the advent of new UAV technology, we now have the possibility of using indices collected with the UAV to measure emergence, seeding rate, and growth rate, and perhaps to make yield predictions. This information could enable growers to make management decisions early in the season based on low plant populations due to poor emergence, and could be a useful tool for estimating plant population and growth rate in order to help achieve desired crop stands. Red-Green-Blue (RGB) and near-infrared (NIR) images were collected from a UAV platform starting two weeks after planting and continuing weekly for the next six weeks. Ground NDVI was also collected each time aerial images were collected. Vegetation indices were derived from both the RGB and NIR images: greener area (GGA, the proportion of green pixels with a hue angle from 80° to 120°) and a* (the average red/green color of the image) were derived from the RGB images, while the Normalized Difference Vegetation Index (NDVI) was derived from the NIR images. Aerial indices were successful in distinguishing seeding rates and determining emergence during the first few weeks after planting, but not later in the season. At this point, however, these aerial indices are not an adequate predictor of yield in peanut.
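NDVI, the NIR-derived index used above, is a simple per-pixel band ratio; a minimal sketch with synthetic reflectance values:

```python
import numpy as np

# NDVI = (NIR - Red) / (NIR + Red): near +1 for dense green vegetation,
# near 0 for bare soil. The 2x2 reflectance bands are synthetic examples.
nir = np.array([[0.60, 0.55],
                [0.20, 0.58]])
red = np.array([[0.08, 0.10],
                [0.18, 0.09]])

ndvi = (nir - red) / (nir + red)   # vegetated pixels score high, soil low
```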

  18. Investigation of dynamic SPECT measurements of the arterial input function in human subjects using simulation, phantom and human studies

    NASA Astrophysics Data System (ADS)

    Winant, Celeste D.; Aparici, Carina Mari; Zelnik, Yuval R.; Reutter, Bryan W.; Sitek, Arkadiusz; Bacharach, Stephen L.; Gullberg, Grant T.

    2012-01-01

Computer simulations, a phantom study and a human study were performed to determine whether a slowly rotating single-photon emission computed tomography (SPECT) system could provide accurate arterial input functions for quantification of myocardial perfusion imaging using kinetic models. The errors induced by data inconsistency associated with imaging with slow camera rotation during tracer injection were evaluated with an approach called SPECT/P (dynamic SPECT from positron emission tomography (PET)) and SPECT/D (dynamic SPECT from database of SPECT phantom projections). SPECT/P simulated SPECT-like dynamic projections using reprojections of reconstructed dynamic 94Tc-methoxyisobutylisonitrile (94Tc-MIBI) PET images acquired in three human subjects (1 min infusion). This approach was used to evaluate the accuracy of estimating myocardial wash-in rate parameters K1 for rotation speeds providing 180° of projection data every 27 or 54 s. Blood input and myocardium tissue time-activity curves (TACs) were estimated using spatiotemporal splines. These were fit to a one-compartment perfusion model to obtain wash-in rate parameters K1. For the second method (SPECT/D), an anthropomorphic cardiac torso phantom was used to create real SPECT dynamic projection data of a tracer distribution derived from 94Tc-MIBI PET scans in the blood pool, myocardium, liver and background. This method introduced attenuation, collimation and scatter into the modeling of dynamic SPECT projections. Both approaches were used to evaluate the accuracy of estimating myocardial wash-in parameters for rotation speeds providing 180° of projection data every 27 and 54 s. Dynamic cardiac SPECT was also performed in a human subject at rest using a hybrid SPECT/CT scanner. Dynamic measurements of 99mTc-tetrofosmin in the myocardium were obtained using an infusion time of 2 min. Blood input, myocardium tissue and liver TACs were estimated using the same spatiotemporal splines.
The spatiotemporal maximum-likelihood expectation-maximization (4D ML-EM) reconstructions gave more accurate reconstructions than did standard frame-by-frame static 3D ML-EM reconstructions. The SPECT/P results showed that 4D ML-EM reconstruction gave higher and more accurate estimates of K1 than did 3D ML-EM, yielding anywhere from a 44% underestimation to 24% overestimation for the three patients. The SPECT/D results showed that 4D ML-EM reconstruction gave an overestimation of 28% and 3D ML-EM gave an underestimation of 1% for K1. For the patient study the 4D ML-EM reconstruction provided continuous images as a function of time of the concentration in both ventricular cavities and myocardium during the 2 min infusion. It is demonstrated that a 2 min infusion with a two-headed SPECT system rotating 180° every 54 s can produce measurements of blood pool and myocardial TACs, though the SPECT simulation studies showed that one must sample at least every 30 s to capture a 1 min infusion input function.

  19. Unsupervised texture image segmentation by improved neural network ART2

    NASA Technical Reports Server (NTRS)

    Wang, Zhiling; Labini, G. Sylos; Mugnuolo, R.; Desario, Marco

    1994-01-01

Here we propose a texture image segmentation algorithm for a computer vision system on a space robot. An improved adaptive resonance theory network (ART2) for analog input patterns is adapted to classify the image based on a set of texture image features extracted by a fast spatial gray level dependence method (SGLDM). The nonlinear thresholding functions in the input layer of the neural network are constructed in two parts: first, to reduce the effects of image noise on the features, a set of sigmoid functions is chosen depending on the type of feature; second, to enhance the contrast of the features, we adopt fuzzy mapping functions. The number of clusters in the output layer can be increased by an auto-growing mechanism whenever a new pattern occurs. Experimental results and original and segmented pictures are shown, including a comparison between this approach and the K-means algorithm. The system, written in C, runs on a SUN-4/330 SPARCstation with an IT-150 image board and a CCD camera.

  20. Quickbird Satellite in-orbit Modulation Transfer Function (MTF) Measurement Using Edge, Pulse and Impulse Methods for Summer 2003

    NASA Technical Reports Server (NTRS)

    Helder, Dennis; Choi, Taeyoung; Rangaswamy, Manjunath

    2005-01-01

The spatial characteristics of an imaging system cannot be expressed by a single number or simple statement. However, the Modulation Transfer Function (MTF) is one approach to measuring the spatial quality of an imaging system. Basically, the MTF is the normalized spatial frequency response of an imaging system. The frequency response of the system can be evaluated by applying an impulse input. The resulting impulse response is termed the Point Spread Function (PSF). This function is a measure of the amount of blurring present in the imaging system and is itself a useful measure of spatial quality. An underlying assumption is that the imaging system is linear and shift-invariant. The Fourier transform of the PSF is called the Optical Transfer Function (OTF), and the normalized magnitude of the OTF is the MTF. In addition to using an impulse input, a knife-edge input technique has also been used in this project. A sharp edge exercises an imaging system at all spatial frequencies. The profile of an edge response from an imaging system is called an Edge Spread Function (ESF). Differentiation of the ESF results in a one-dimensional version of the PSF. Finally, the MTF can be calculated through the Fourier transform of the PSF as stated previously. Every image includes noise to some degree, which makes MTF or PSF estimation more difficult. To mitigate noise effects, many MTF estimation approaches use smooth numerical models. Historically, Gaussian models and Fermi functions were applied to reduce the random noise in the output profiles. The pulse-input method was used to measure the MTF of the Landsat Thematic Mapper (TM) using 8th-order even functions over the San Mateo Bridge in San Francisco, California. Because the bridge width was smaller than the 30-meter ground sample distance (GSD) of the TM, the Nyquist frequency was located before the first zero-crossing point of the sinc function from the Fourier transform of the bridge pulse.
To avoid the zero-crossing points in the frequency domain from a pulse, the pulse width should be less than the width of two pixels (or 2 GSDs), but the short extent of the pulse results in a poor signal-to-noise ratio. Similarly, for a high-resolution satellite imaging system such as Quickbird, the input pulse width was critical because of the zero-crossing points and noise present in the background area. It is important, therefore, that the width of the input pulse be appropriately sized. Finally, the MTF was calculated by taking the ratio of the Fourier transform of the output to the Fourier transform of the input. Regardless of whether the edge, pulse, or impulse target method is used, the orientation of the targets is critical in order to obtain uniformly spaced sub-pixel data points. When the orientation is incorrect, sample data points tend to be located in clusters that result in poor reconstruction of the edge or pulse profiles. Thus, a compromise orientation must be selected so that all spectral bands can be accommodated. This report continues by outlining the objectives in Section 2, procedures followed in Section 3, descriptions of the field campaigns in Section 4, results in Section 5, and a brief summary in Section 6.
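The ESF → PSF → MTF chain described above can be sketched in a few lines, assuming a noise-free Gaussian-blurred edge (real profiles require the smoothing models discussed in the text):

```python
from math import erf, sqrt
import numpy as np

# Knife-edge MTF estimation on a synthetic, noise-free profile:
# ESF (error-function edge) -> differentiate -> LSF (1D PSF) -> FFT -> MTF.
# The Gaussian blur sigma below is illustrative.
x = np.linspace(-5, 5, 512)             # sub-pixel sample positions
sigma = 0.8
esf = np.array([0.5 * (1 + erf(v / (sigma * sqrt(2)))) for v in x])

lsf = np.gradient(esf, x)               # line spread function
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                           # normalize so MTF(0) = 1
```

For a Gaussian blur the resulting MTF is itself Gaussian in frequency, decaying monotonically from 1 at zero frequency.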

  1. Energy-Containing Length Scale at the Base of a Coronal Hole: New Observational Findings

    NASA Astrophysics Data System (ADS)

    Abramenko, V.; Dosch, A.; Zank, G. P.; Yurchyshyn, V.; Goode, P. R.

    2012-12-01

The dynamics of photospheric flux tubes is thought to be a key factor in the generation and propagation of MHD waves and magnetic stress into the corona. Recently, New Solar Telescope (NST, Big Bear Solar Observatory) imaging observations in helium I 10830 Å revealed ultrafine, hot magnetic loops reaching from the photosphere to the corona and originating from intense, compact magnetic field elements. One of the essential input parameters for models of the fast solar wind is a characteristic energy-containing length scale, lambda, of the dynamical structures transverse to the mean magnetic field at the base of the corona in a coronal hole (CH). We used NST time series of solar granulation motions to estimate the velocity fluctuations, as well as NST near-infrared magnetograms to derive the magnetic field fluctuations. The NST adaptive-optics-corrected, speckle-reconstructed images at 10 s cadence were the input for a local correlation tracking (LCT) code used to derive the squared transverse velocity patterns. We found that the characteristic length scale for the energy-carrying structures in the photosphere is about 300 km, which is two orders of magnitude lower than that adopted in previous models. The influence of this result on coronal heating and fast solar wind modeling will be discussed. (Figure: correlation functions calculated from the squared velocities for three data sets: a coronal hole, quiet Sun, and an active-region plage area.)

  2. Initial Navigation Alignment of Optical Instruments on GOES-R

    NASA Astrophysics Data System (ADS)

    Isaacson, P.; DeLuccia, F.; Reth, A. D.; Igli, D. A.; Carter, D.

    2016-12-01

    The GOES-R satellite is the first in NOAA's next-generation series of geostationary weather satellites. In addition to a number of space weather sensors, it will carry two principal optical earth-observing instruments, the Advanced Baseline Imager (ABI) and the Geostationary Lightning Mapper (GLM). During launch, currently scheduled for November of 2016, the alignment of these optical instruments is anticipated to shift from that measured during pre-launch characterization. While both instruments have image navigation and registration (INR) processing algorithms to enable automated geolocation of the collected data, the launch-derived misalignment may be too large for these approaches to function without an initial adjustment to calibration parameters. The parameters that may require adjustment are for Line of Sight Motion Compensation (LMC), and the adjustments will be estimated on orbit during the post-launch test (PLT) phase. We have developed approaches to estimate the initial alignment errors for both ABI and GLM image products. Our approaches involve comparison of ABI and GLM images collected during PLT to a set of reference ("truth") images using custom image processing tools and other software (the INR Performance Assessment Tool Set, or "IPATS") being developed for other INR assessments of ABI and GLM data. IPATS is based on image correlation approaches to determine offsets between input and reference images, and these offsets are the fundamental input to our estimate of the initial alignment errors. Initial testing of our alignment algorithms on proxy datasets lends high confidence that their application will determine the initial alignment errors to within sufficient accuracy to enable the operational INR processing approaches to proceed in a nominal fashion. We will report on the algorithms, implementation approach, and status of these initial alignment tools being developed for the GOES-R ABI and GLM instruments.

  3. High-cut characteristics of the baroreflex neural arc preserve baroreflex gain against pulsatile pressure.

    PubMed

    Kawada, Toru; Zheng, Can; Yanagiya, Yusuke; Uemura, Kazunori; Miyamoto, Tadayoshi; Inagaki, Masashi; Shishido, Toshiaki; Sugimachi, Masaru; Sunagawa, Kenji

    2002-03-01

    A transfer function from baroreceptor pressure input to sympathetic nerve activity (SNA) shows derivative characteristics in the frequency range below 0.8 Hz in rabbits. These derivative characteristics contribute to a quick and stable arterial pressure (AP) regulation. However, if the derivative characteristics hold up to heart rate frequency, the pulsatile pressure input will yield a markedly augmented SNA signal. Such a signal would saturate the baroreflex signal transduction, thereby disabling the baroreflex regulation of AP. We hypothesized that the transfer gain at heart rate frequency would be much smaller than that predicted from extrapolating the derivative characteristics. In anesthetized rabbits (n = 6), we estimated the neural arc transfer function in the frequency range up to 10 Hz. The transfer gain was lost at a rate of -20 dB/decade when the input frequency exceeded 0.8 Hz. A numerical simulation indicated that the high-cut characteristics above 0.8 Hz were effective to attenuate the pulsatile signal and preserve the open-loop gain when the baroreflex dynamic range was finite.
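The high-cut behavior described above can be illustrated with a first-order low-pass stage, whose gain rolls off at −20 dB/decade above the corner frequency; the 0.8 Hz corner comes from the abstract, while the probe frequencies are illustrative (rabbit heart rate is around 5 Hz).

```python
import numpy as np

# First-order high-cut (low-pass) gain |H(f)| = 1 / sqrt(1 + (f/fc)^2),
# rolling off at -20 dB/decade above the corner frequency fc.
fc = 0.8                                 # corner frequency from the abstract, Hz
f = np.array([0.08, 0.8, 5.0])           # illustrative probe frequencies, Hz
gain = 1.0 / np.sqrt(1.0 + (f / fc) ** 2)
gain_db = 20 * np.log10(gain)
```

At the corner the gain is −3 dB, and at heart-rate frequency the pulsatile input is attenuated by more than an order of magnitude in power, which is the mechanism the abstract credits with preserving the open-loop gain.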

  4. Multifunction Imaging and Spectroscopic Instrument

    NASA Technical Reports Server (NTRS)

    Mouroulis, Pantazis

    2004-01-01

A proposed optoelectronic instrument would perform several different spectroscopic and imaging functions that, heretofore, have been performed by separate instruments. The functions would be reflectance, fluorescence, and Raman spectroscopies; variable-color confocal imaging at two different resolutions; and wide-field color imaging. The instrument was conceived for use in examination of minerals on remote planets. It could also be used on Earth to characterize material specimens. The conceptual design of the instrument emphasizes compactness and economy, to be achieved largely through sharing of components among subsystems that perform different imaging and spectrometric functions. The input optics for the various functions would be mounted in a single optical head. With the exception of a targeting lens, the input optics would all be aimed at the same spot on a specimen, thereby both (1) eliminating the need to reposition the specimen to perform different imaging and/or spectroscopic observations and (2) ensuring that data from such observations can be correlated with respect to known positions on the specimen. The figure schematically depicts the principal components and subsystems of the instrument. The targeting lens would collect light into a multimode optical fiber, which would guide the light through a fiber-selection switch to a reflection/fluorescence spectrometer. The switch would have four positions, enabling selection of spectrometer input from the targeting lens, from either of two multimode optical fibers coming from a reflectance/fluorescence-microspectrometer optical head, or from a dark calibration position (no fiber). The switch would be the only moving part within the instrument.

  5. An improved optimization algorithm of the three-compartment model with spillover and partial volume corrections for dynamic FDG PET images of small animal hearts in vivo

    NASA Astrophysics Data System (ADS)

    Li, Yinlin; Kundu, Bijoy K.

    2018-03-01

The three-compartment model with spillover (SP) and partial volume (PV) corrections has been widely used for noninvasive kinetic parameter studies of dynamic 2-[18F]fluoro-2-deoxy-D-glucose (FDG) positron emission tomography images of small animal hearts in vivo. However, the approach still suffers from estimation uncertainty or slow convergence caused by the commonly used optimization algorithms. The aim of this study was to develop an improved optimization algorithm with better estimation performance. Femoral artery blood samples, image-derived input functions from heart ventricles and myocardial time-activity curves (TACs) were derived from data on 16 C57BL/6 mice obtained from the UCLA Mouse Quantitation Program. Parametric equations of the average myocardium and the blood pool TACs with SP and PV corrections in a three-compartment tracer kinetic model were formulated. A hybrid method integrating artificial immune-system and interior-reflective Newton methods was developed to solve the equations. Two penalty functions and one late time-point tail vein blood sample were used to constrain the objective function. The estimation accuracy of the method was validated by comparing results with experimental values using the errors in the areas under curves (AUCs) of the model-corrected input function (MCIF) and the 18F-FDG influx constant Ki. Moreover, the elapsed time was used to measure the convergence speed. The overall AUC error of MCIF for the 16 mice averaged -1.4 ± 8.2%, with a correlation coefficient of 0.9706. Similar results can be seen in the overall Ki error percentage, which was 0.4 ± 5.8% with a correlation coefficient of 0.9912. The t-test P value for both showed no significant difference. The mean and standard deviation of the MCIF AUC and Ki percentage errors have lower values compared to the previously published methods. The computation time of the hybrid method is also several times lower than using just a stochastic algorithm.
The proposed method significantly improved the model estimation performance in terms of the accuracy of the MCIF and K i , as well as the convergence speed.
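The AUC error metric used above compares the areas under the estimated and reference input functions; a minimal sketch with synthetic curves, writing the trapezoidal rule out explicitly:

```python
import numpy as np

# Percentage error in the area under the curve (AUC) of an estimated
# model-corrected input function (MCIF) versus a reference; the curves
# below are synthetic placeholders (estimate deliberately 3% low).
t = np.linspace(0, 60, 200)             # minutes
ref = t * np.exp(-t / 5)                # reference input function
est = 0.97 * ref                        # estimated MCIF

def auc(y, x):
    """Trapezoidal area under y(x)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

auc_error_pct = 100 * (auc(est, t) - auc(ref, t)) / auc(ref, t)
```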

  6. Processing of Visual Imagery by an Adaptive Model of the Visual System: Its Performance and its Significance. Final Report, June 1969-March 1970.

    ERIC Educational Resources Information Center

    Tallman, Oliver H.

    A digital simulation of a model for the processing of visual images is derived from known aspects of the human visual system. The fundamental principle of computation suggested by a biological model is a transformation that distributes information contained in an input stimulus everywhere in a transform domain. Each sensory input contributes under…

  7. GLACiAR, an Open-Source Python Tool for Simulations of Source Recovery and Completeness in Galaxy Surveys

    NASA Astrophysics Data System (ADS)

    Carrasco, D.; Trenti, M.; Mutch, S.; Oesch, P. A.

    2018-06-01

The luminosity function is a fundamental observable for characterising how galaxies form and evolve throughout cosmic history. One key ingredient to derive this measurement from the number counts in a survey is the characterisation of the completeness and redshift selection functions for the observations. In this paper, we present GLACiAR, an open-source Python tool available on GitHub to estimate the completeness and selection functions in galaxy surveys. The code is tailored for multiband imaging surveys aimed at searching for high-redshift galaxies through the Lyman-break technique, but it can be applied broadly. The code generates artificial galaxies that follow Sérsic profiles with different indexes and with customisable size, redshift, and spectral energy distribution properties, adds them to input images, and measures the recovery rate. To illustrate this new software tool, we apply it to quantify the completeness and redshift selection functions for J-dropout sources (redshift z ∼ 10 galaxies) in the Hubble Space Telescope Brightest of Reionizing Galaxies Survey. Our comparison with a previous completeness analysis on the same dataset shows overall agreement, but also highlights how different modelling assumptions for the artificial sources can impact completeness estimates.
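The recovery-rate idea behind GLACiAR can be sketched as inject-and-count: draw artificial source magnitudes, mark each as recovered (here with a hypothetical logistic detection probability, not the code's actual source-extraction criteria), and bin the recovered fraction per magnitude bin.

```python
import numpy as np

# Injection-recovery sketch of a completeness function. The logistic
# detection probability (50% point at mag 28, width 0.5) is an assumed
# stand-in for running real source extraction on images with injected
# Sersic-profile galaxies.
rng = np.random.default_rng(0)
mags = rng.uniform(24, 30, 5000)                 # injected magnitudes
p_detect = 1 / (1 + np.exp((mags - 28) / 0.5))   # hypothetical recovery prob.
recovered = rng.random(5000) < p_detect

bins = np.arange(24, 30.5, 0.5)                  # 0.5-mag bins
n_inj, _ = np.histogram(mags, bins)
n_rec, _ = np.histogram(mags[recovered], bins)
completeness = n_rec / n_inj                     # recovery rate per bin
```

The resulting curve is near 1 for bright sources and falls toward 0 at the faint end, which is the completeness function folded into luminosity-function estimates.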

  8. Diagnostic support for glaucoma using retinal images: a hybrid image analysis and data mining approach.

    PubMed

    Yu, Jin; Abidi, Syed Sibte Raza; Artes, Paul; McIntyre, Andy; Heywood, Malcolm

    2005-01-01

The availability of modern imaging techniques such as Confocal Scanning Laser Tomography (CSLT) for capturing high-quality optic nerve images offers the potential for developing automatic and objective methods for diagnosing glaucoma. We present a hybrid approach that features the analysis of CSLT images using moment methods to derive abstract image-defining features. The features are then used to train classifiers for automatically distinguishing CSLT images of normal and glaucoma patients. As a first step, in this paper we present investigations into feature subset selection methods for reducing the relatively large input space produced by the moment methods. We use neural networks and support vector machines to determine a subset of moments that offer high classification accuracy. We demonstrate the efficacy of our methods to discriminate between healthy and glaucomatous optic disks based on shape information automatically derived from optic disk topography and reflectance images.

  9. Inverse Diffusion Curves Using Shape Optimization.

    PubMed

    Zhao, Shuang; Durand, Fredo; Zheng, Changxi

    2018-07-01

    The inverse diffusion curve problem focuses on automatic creation of diffusion curve images that resemble user provided color fields. This problem is challenging since the 1D curves have a nonlinear and global impact on resulting color fields via a partial differential equation (PDE). We introduce a new approach complementary to previous methods by optimizing curve geometry. In particular, we propose a novel iterative algorithm based on the theory of shape derivatives. The resulting diffusion curves are clean and well-shaped, and the final image closely approximates the input. Our method provides a user-controlled parameter to regularize curve complexity, and generalizes to handle input color fields represented in a variety of formats.

  10. Automated image segmentation using support vector machines

    NASA Astrophysics Data System (ADS)

    Powell, Stephanie; Magnotta, Vincent A.; Andreasen, Nancy C.

    2007-03-01

Neurodegenerative and neurodevelopmental diseases demonstrate problems associated with brain maturation and aging. Automated methods to delineate brain structures of interest are required to analyze large amounts of imaging data, such as that being collected in several ongoing multi-center studies. We have previously reported on using artificial neural networks (ANN) to define subcortical brain structures including the thalamus (0.88), caudate (0.85) and putamen (0.81). In this work, a priori probability information was generated using Thirion's demons registration algorithm. The input vector consisted of the a priori probability, spherical coordinates, and an iris of surrounding signal intensity values. We have applied the support vector machine (SVM) machine learning algorithm to automatically segment subcortical and cerebellar regions using the same input vector information. The SVM architecture was derived from the ANN framework. Training was completed using a radial-basis function kernel with gamma equal to 5.5. Training was performed using 15,000 vectors collected from 15 training images in approximately 10 minutes. The resulting support vectors were applied to delineate 10 images not part of the training set. Relative overlap calculated for the subcortical structures was 0.87 for the thalamus, 0.84 for the caudate, 0.84 for the putamen, and 0.72 for the hippocampus. Relative overlap for the cerebellar lobes ranged from 0.76 to 0.86. The reliability of the SVM-based algorithm was similar to the inter-rater reliability between manual raters and can be achieved without rater intervention.
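Relative overlap, the metric reported above, is the intersection of the two segmentations divided by their union; a minimal sketch on toy binary masks:

```python
import numpy as np

# Relative overlap (intersection over union) between an automated and a
# manual binary segmentation; the tiny masks are purely illustrative.
auto_mask = np.array([[0, 1, 1],
                      [0, 1, 1],
                      [0, 0, 0]], dtype=bool)
manual_mask = np.array([[0, 1, 1],
                        [0, 1, 0],
                        [0, 0, 0]], dtype=bool)

intersection = np.logical_and(auto_mask, manual_mask).sum()
union = np.logical_or(auto_mask, manual_mask).sum()
relative_overlap = intersection / union   # 3 shared voxels / 4 total = 0.75
```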

  11. Point spread function and depth-invariant focal sweep point spread function for plenoptic camera 2.0.

    PubMed

    Jin, Xin; Liu, Li; Chen, Yanqin; Dai, Qionghai

    2017-05-01

    This paper derives a mathematical point spread function (PSF) and a depth-invariant focal sweep point spread function (FSPSF) for plenoptic camera 2.0. Derivation of PSF is based on the Fresnel diffraction equation and image formation analysis of a self-built imaging system which is divided into two sub-systems to reflect the relay imaging properties of plenoptic camera 2.0. The variations in PSF, which are caused by changes of object's depth and sensor position variation, are analyzed. A mathematical model of FSPSF is further derived, which is verified to be depth-invariant. Experiments on the real imaging systems demonstrate the consistency between the proposed PSF and the actual imaging results.

  12. Mid-space-independent deformable image registration.

    PubMed

    Aganj, Iman; Iglesias, Juan Eugenio; Reuter, Martin; Sabuncu, Mert Rory; Fischl, Bruce

    2017-05-15

    Aligning images in a mid-space is a common approach to ensuring that deformable image registration is symmetric - that it does not depend on the arbitrary ordering of the input images. The results are, however, generally dependent on the mathematical definition of the mid-space. In particular, the set of possible solutions is typically restricted by the constraints that are enforced on the transformations to prevent the mid-space from drifting too far from the native image spaces. The use of an implicit atlas has been proposed as an approach to mid-space image registration. In this work, we show that when the atlas is aligned to each image in the native image space, the data term of implicit-atlas-based deformable registration is inherently independent of the mid-space. In addition, we show that the regularization term can be reformulated independently of the mid-space as well. We derive a new symmetric cost function that only depends on the transformation morphing the images to each other, rather than to the atlas. This eliminates the need for anti-drift constraints, thereby expanding the space of allowable deformations. We provide an implementation scheme for the proposed framework, and validate it through diffeomorphic registration experiments on brain magnetic resonance images. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Mid-Space-Independent Deformable Image Registration

    PubMed Central

    Aganj, Iman; Iglesias, Juan Eugenio; Reuter, Martin; Sabuncu, Mert Rory; Fischl, Bruce

    2017-01-01

    Aligning images in a mid-space is a common approach to ensuring that deformable image registration is symmetric – that it does not depend on the arbitrary ordering of the input images. The results are, however, generally dependent on the mathematical definition of the mid-space. In particular, the set of possible solutions is typically restricted by the constraints that are enforced on the transformations to prevent the mid-space from drifting too far from the native image spaces. The use of an implicit atlas has been proposed as an approach to mid-space image registration. In this work, we show that when the atlas is aligned to each image in the native image space, the data term of implicit-atlas-based deformable registration is inherently independent of the mid-space. In addition, we show that the regularization term can be reformulated independently of the mid-space as well. We derive a new symmetric cost function that only depends on the transformation morphing the images to each other, rather than to the atlas. This eliminates the need for anti-drift constraints, thereby expanding the space of allowable deformations. We provide an implementation scheme for the proposed framework, and validate it through diffeomorphic registration experiments on brain magnetic resonance images. PMID:28242316

  14. Constructing general partial differential equations using polynomial and neural networks.

    PubMed

    Zjavka, Ladislav; Pedrycz, Witold

    2016-01-01

    Sum fraction terms can approximate multi-variable functions on the basis of discrete observations, replacing a partial differential equation definition with polynomial elementary data relation descriptions. Artificial neural networks commonly transform the weighted sum of inputs to describe overall similarity relationships of trained and new testing input patterns. Differential polynomial neural networks form a new class of neural networks, which construct and solve an unknown general partial differential equation of a function of interest, with selected substitution relative terms, using non-linear multi-variable composite polynomials. The layers of the network generate simple and composite relative substitution terms whose convergent series combinations can describe partial dependent derivative changes of the input variables. This regression is based on trained generalized partial derivative data relations, decomposed into a multi-layer polynomial network structure. The sigmoidal function, commonly used as a nonlinear activation of artificial neurons, may be applied to some polynomial terms, together with their parameters, to improve the ability of the polynomial derivative-term series to approximate complicated periodic functions, since simple low-order polynomials cannot fully capture complete cycles. The similarity analysis facilitates substitutions for differential equations or can form dimensional units from data samples to describe real-world problems. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Phase pupil functions for focal-depth enhancement derived from a Wigner distribution function.

    PubMed

    Zalvidea, D; Sicre, E E

    1998-06-10

    A method for obtaining phase-retardation functions that give rise to an increased image focal depth is proposed. To this end, the Wigner distribution function corresponding to a specific aperture that has an associated small depth of focus in image space is conveniently sheared in the phase-space domain to generate a new Wigner distribution function. From this new function a more uniform on-axis image irradiance can be achieved. This approach is illustrated by comparing the imaging performance of both the derived phase function and a previously reported logarithmic phase distribution.

  16. Sensing system for detection and control of deposition on pendant tubes in recovery and power boilers

    DOEpatents

    Kychakoff, George [Maple Valley, WA; Afromowitz, Martin A [Mercer Island, WA; Hogle, Richard E [Olympia, WA

    2008-10-14

    A system for detection and control of deposition on pendant tubes in recovery and power boilers includes one or more deposit monitoring sensors operating in infrared regions of about 4 or 8.7 microns, either directly producing images of the interior of the boiler or feeding signals to a data processing system that provides the information enabling the distributed control system, by which the boilers are operated, to run them more efficiently. The data processing system includes an image pre-processing circuit in which a 2-D image formed by the video data input is captured, and a low pass filter for noise filtering of said video input. It also includes an image compensation system for array compensation to correct for pixel variation, dead cells, etc., and for correcting geometric distortion. An image segmentation module receives a cleaned image from the image pre-processing circuit and separates the image of the recovery boiler interior into background, pendant tubes, and deposition. It also performs thresholding/clustering on gray scale/texture, applies morphological transforms to smooth regions, and identifies regions by connected components. An image-understanding unit receives the segmented image from the image segmentation module and matches derived regions to a 3-D model of said boiler. It derives the 3-D structure of the deposition on the pendant tubes in the boiler and provides the information about deposits to the plant distributed control system for more efficient operation of the plant pendant tube cleaning and operating systems.
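    The segmentation stage described (thresholding on gray scale, morphological smoothing, and region identification by connected components) can be sketched generically; this is an illustrative toy pipeline on synthetic data, not the patented system:

```python
import numpy as np
from scipy import ndimage

# Toy "infrared frame": cool background plus two bright deposit-like blobs.
frame = np.full((16, 16), 0.05)
frame[2:5, 2:6] += 0.9
frame[10:14, 9:12] += 0.8

# Thresholding on gray level ...
mask = frame > 0.5

# ... morphological smoothing of the regions ...
mask = ndimage.binary_closing(mask)

# ... and region identification by connected components.
labels, n_regions = ndimage.label(mask)
sizes = ndimage.sum(mask, labels, index=range(1, n_regions + 1))
print(n_regions, sizes)
```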

  17. A design of real time image capturing and processing system using Texas Instrument's processor

    NASA Astrophysics Data System (ADS)

    Wee, Toon-Joo; Chaisorn, Lekha; Rahardja, Susanto; Gan, Woon-Seng

    2007-09-01

    In this work, we developed and implemented an image capturing and processing system that is equipped with the capability of capturing images from an input video in real time. The input video can come from a PC, video camcorder or DVD player. We developed two modes of operation in the system. In the first mode, an input image from the PC is processed on the processing board (a development platform with a digital signal processor) and is displayed on the PC. In the second mode, the current captured image from the video camcorder (or from the DVD player) is processed on the board but is displayed on the LCD monitor. The major difference between our system and other existing conventional systems is that the image-processing functions are performed on the board instead of the PC (so that the functions can be used for further developments on the board). The user can control the operations of the board through the Graphic User Interface (GUI) provided on the PC. In order to have a smooth image data transfer between the PC and the board, we employed Real Time Data Transfer (RTDX TM) technology to create a link between them. For image processing, we developed three main groups of functions: (1) Point Processing; (2) Filtering; and (3) 'Others'. Point Processing includes rotation, negation and mirroring. The Filtering category provides median, adaptive, smooth and sharpen filtering in the time domain. The 'Others' category provides auto-contrast adjustment, edge detection, segmentation and sepia color; these functions either add effects to the image or enhance it. We developed and implemented our system using the C/C# programming languages on the TMS320DM642 (or DM642) board from Texas Instruments (TI). The system was showcased at the College of Engineering (CoE) exhibition 2006 at Nanyang Technological University (NTU), where more than 40 users tried it. It is demonstrated that our system is adequate for real time image capturing. Our system can be used or applied for applications such as medical imaging, video surveillance, etc.
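    The point-processing and filtering groups described above map onto standard array operations; a brief sketch (NumPy/SciPy stand-ins, not the board's C/C# implementation):

```python
import numpy as np
from scipy.ndimage import median_filter

img = np.arange(12, dtype=np.uint8).reshape(3, 4)   # toy 8-bit grayscale image

negated  = 255 - img                   # Point Processing: negation
mirrored = img[:, ::-1]                # Point Processing: horizontal mirroring
rotated  = np.rot90(img)               # Point Processing: 90-degree rotation
smooth   = median_filter(img, size=3)  # Filtering: median filter (time domain)

print(negated[0, 0], mirrored[0, 0], rotated.shape)
```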

  18. Probabilistic Density Function Method for Stochastic ODEs of Power Systems with Uncertain Power Input

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Peng; Barajas-Solano, David A.; Constantinescu, Emil

    Wind and solar power generators are commonly described by a system of stochastic ordinary differential equations (SODEs) in which random input parameters represent uncertainty in wind and solar energy. The existing methods for SODEs are mostly limited to delta-correlated random parameters (white noise). Here we use the Probability Density Function (PDF) method to derive a closed-form deterministic partial differential equation (PDE) for the joint probability density function of the SODEs describing a power generator with time-correlated power input. The resulting PDE is solved numerically. Good agreement with Monte Carlo simulations demonstrates the accuracy of the PDF method.
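    For comparison with the Monte Carlo side of such a study, the sketch below simulates an Ornstein-Uhlenbeck process, a standard model of time-correlated random input, and checks its sample variance against the stationary density that a PDF/Fokker-Planck-type calculation yields in closed form (illustrative parameters, not the paper's generator model):

```python
import numpy as np

# Euler-Maruyama simulation of an Ornstein-Uhlenbeck process,
# dX = -theta*X dt + sigma dW, a standard model of colored random input.
rng = np.random.default_rng(42)
theta, sigma, dt = 1.0, 0.5, 0.01
n_steps, n_paths = 5000, 2000

x = np.zeros(n_paths)
for _ in range(n_steps):
    x += -theta * x * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

# A PDF-method (Fokker-Planck-type) calculation gives the stationary
# density in closed form: Gaussian with variance sigma^2 / (2*theta).
var_mc = x.var()
var_pdf = sigma**2 / (2.0 * theta)
print(var_mc, var_pdf)
```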

  19. Quantitative Functional Imaging Using Dynamic Positron Computed Tomography and Rapid Parameter Estimation Techniques

    NASA Astrophysics Data System (ADS)

    Koeppe, Robert Allen

    Positron computed tomography (PCT) is a diagnostic imaging technique that provides both three dimensional imaging capability and quantitative measurements of local tissue radioactivity concentrations in vivo. This allows the development of non-invasive methods that employ the principles of tracer kinetics for determining physiological properties such as mass specific blood flow, tissue pH, and rates of substrate transport or utilization. A physiologically based, two-compartment tracer kinetic model was derived to mathematically describe the exchange of a radioindicator between blood and tissue. The model was adapted for use with dynamic sequences of data acquired with a positron tomograph. Rapid estimation techniques were implemented to produce functional images of the model parameters by analyzing each individual pixel sequence of the image data. A detailed analysis of the performance characteristics of three different parameter estimation schemes was performed. The analysis included examination of errors caused by statistical uncertainties in the measured data, errors in the timing of the data, and errors caused by violation of various assumptions of the tracer kinetic model. Two specific radioindicators were investigated. (18)F-fluoromethane, an inert freely diffusible gas, was used for local quantitative determinations of both cerebral blood flow and tissue:blood partition coefficient. A method was developed that did not require direct sampling of arterial blood for the absolute scaling of flow values. The arterial input concentration time course was obtained by assuming that the alveolar or end-tidal expired breath radioactivity concentration is proportional to the arterial blood concentration. The scale of the input function was obtained from a series of venous blood concentration measurements. The method of absolute scaling using venous samples was validated in four studies, performed on normal volunteers, in which directly measured arterial concentrations were compared to those predicted from the expired air and venous blood samples. The glucose analog (18)F-3-deoxy-3-fluoro-D-glucose (3-FDG) was used for quantitating the membrane transport rate of glucose. The measured data indicated that the phosphorylation rate of 3-FDG was low enough to allow accurate estimation of the transport rate using a two compartment model.
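    The two-compartment exchange between blood and tissue described above is commonly written as a Kety-type one-tissue model, in which the tissue time-activity curve is the arterial input function convolved with a monoexponential washout at rate F/lambda. A minimal numeric sketch with assumed, illustrative values:

```python
import numpy as np

def tissue_curve(t, c_a, flow, partition):
    """Kety one-tissue model: C_t(t) = F * conv(C_a, exp(-(F/lambda) * t))."""
    dt = t[1] - t[0]
    kernel = np.exp(-(flow / partition) * t)
    return flow * np.convolve(c_a, kernel)[:t.size] * dt

t = np.arange(0.0, 10.0, 0.01)        # minutes
c_a = t * np.exp(-2.0 * t)            # assumed arterial input function shape
c_t = tissue_curve(t, c_a, flow=0.5, partition=0.9)

# Tissue activity lags the input: tracer accumulates while the arterial
# concentration is high, then washes out at rate F/lambda.
print(t[np.argmax(c_a)], t[np.argmax(c_t)])
```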

  20. Event-by-Event Continuous Respiratory Motion Correction for Dynamic PET Imaging.

    PubMed

    Yu, Yunhan; Chan, Chung; Ma, Tianyu; Liu, Yaqiang; Gallezot, Jean-Dominique; Naganawa, Mika; Kelada, Olivia J; Germino, Mary; Sinusas, Albert J; Carson, Richard E; Liu, Chi

    2016-07-01

    Existing respiratory motion-correction methods are applied only to static PET imaging. We have previously developed an event-by-event respiratory motion-correction method with correlations between internal organ motion and external respiratory signals (INTEX). This method is uniquely appropriate for dynamic imaging because it corrects motion for each time point. In this study, we applied INTEX to human dynamic PET studies with various tracers and investigated the impact on kinetic parameter estimation. Three tracers were investigated in a study of 12 human subjects: a myocardial perfusion tracer, (82)Rb (n = 7); a pancreatic β-cell tracer, (18)F-FP(+)DTBZ (n = 4); and a tumor hypoxia tracer, (18)F-fluoromisonidazole ((18)F-FMISO) (n = 1). Both rest and stress studies were performed for (82)Rb. The Anzai belt system was used to record respiratory motion. Three-dimensional internal organ motion at high temporal resolution was calculated by INTEX to guide event-by-event respiratory motion correction of target organs in each dynamic frame. Time-activity curves of regions of interest drawn based on end-expiration PET images were obtained. For (82)Rb studies, K1 was obtained with a 1-tissue model using a left-ventricle input function. Rest-stress myocardial blood flow (MBF) and coronary flow reserve (CFR) were determined. For (18)F-FP(+)DTBZ studies, the total volume of distribution was estimated with arterial input functions using the multilinear analysis 1 method. For the (18)F-FMISO study, the net uptake rate Ki was obtained with a 2-tissue irreversible model using a left-ventricle input function. All parameters were compared with the values derived without motion correction. With INTEX, K1 and MBF increased by 10% ± 12% and 15% ± 19%, respectively, for (82)Rb stress studies. CFR increased by 19% ± 21%. For studies with motion amplitudes greater than 8 mm (n = 3), K1, MBF, and CFR increased by 20% ± 12%, 30% ± 20%, and 34% ± 23%, respectively.
For (82)Rb rest studies, INTEX had minimal effect on parameter estimation. The total volume of distribution of (18)F-FP(+)DTBZ and Ki of (18)F-FMISO increased by 17% ± 6% and 20%, respectively. Respiratory motion can have a substantial impact on dynamic PET in the thorax and abdomen. The INTEX method using continuous external motion data substantially changed parameters in kinetic modeling. More accurate estimation is expected with INTEX. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.

  1. Multi-modality image fusion based on enhanced fuzzy radial basis function neural networks.

    PubMed

    Chao, Zhen; Kim, Dohyeon; Kim, Hee-Joung

    2018-04-01

    In clinical applications, single-modality images do not provide sufficient diagnostic information. Therefore, it is necessary to combine the advantages or complementarities of different imaging modalities. Recently, neural network techniques have been applied to medical image fusion by many researchers, but there are still many deficiencies. In this study, we propose a novel fusion method to combine multi-modality medical images based on the enhanced fuzzy radial basis function neural network (Fuzzy-RBFNN), which includes five layers: input, fuzzy partition, front combination, inference, and output. Moreover, we propose a hybrid of the gravitational search algorithm (GSA) and the error back propagation algorithm (EBPA), termed EBPGSA, to train the network and update its parameters. Two different patterns of images are used as inputs to the neural network, and the output is the fused image. A comparison with conventional fusion methods and another neural network method, through subjective observation and objective evaluation indexes, reveals that the proposed method effectively synthesized the information of the input images and achieved better results. Meanwhile, we also trained the network using the EBPA and GSA individually. The results reveal that EBPGSA not only outperformed both EBPA and GSA, but also trained the neural network more accurately under the same evaluation indexes. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  2. Validation of GOES-9 Satellite-Derived Cloud Properties over the Tropical Western Pacific Region

    NASA Technical Reports Server (NTRS)

    Khaiyer, Mandana M.; Nordeen, Michele L.; Doeling, David R.; Chakrapani, Venkatasan; Minnis, Patrick; Smith, William L., Jr.

    2004-01-01

    Real-time processing of hourly GOES-9 images in the ARM TWP region began operationally in October 2003 and is continuing. The ARM sites provide an excellent source for validating this new satellite-derived cloud and radiation property dataset. Derived cloud amounts, heights, and broadband shortwave fluxes are compared with similar quantities derived from ground-based instrumentation. The results will provide guidance for estimating uncertainties in the GOES-9 products and for developing improvements in the retrieval methodologies and input.

  3. Relative sensitivities of DCE-MRI pharmacokinetic parameters to arterial input function (AIF) scaling.

    PubMed

    Li, Xin; Cai, Yu; Moloney, Brendan; Chen, Yiyi; Huang, Wei; Woods, Mark; Coakley, Fergus V; Rooney, William D; Garzotto, Mark G; Springer, Charles S

    2016-08-01

    Dynamic-Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) has been used widely for clinical applications. Pharmacokinetic modeling of DCE-MRI data that extracts quantitative contrast reagent/tissue-specific model parameters is the most investigated method. One of the primary challenges in pharmacokinetic analysis of DCE-MRI data is accurate and reliable measurement of the arterial input function (AIF), which is the driving force behind all pharmacokinetics. Because of effects such as inflow and partial volume averaging, AIFs measured from individual arteries sometimes require amplitude scaling for better representation of the blood contrast reagent (CR) concentration time-courses. Empirical approaches like blinded AIF estimation or reference tissue AIF derivation can be useful and practical, especially when there is no clearly visible blood vessel within the imaging field-of-view (FOV). Similarly, these approaches generally also require magnitude scaling of the derived AIF time-courses. Since the AIF varies among individuals even with the same CR injection protocol, and the perfect scaling factor for reconstructing the ground truth AIF often remains unknown, variations in estimated pharmacokinetic parameters due to varying AIF scaling factors are of special interest. In this work, using simulated and real prostate cancer DCE-MRI data, we examined parameter variations associated with AIF scaling. Our results show that, for both the fast-exchange-limit (FXL) Tofts model and the water exchange sensitized fast-exchange-regime (FXR) model, the commonly fitted CR transfer constant (K(trans)) and the extravascular, extracellular volume fraction (ve) scale nearly proportionally with the AIF, whereas the FXR-specific unidirectional cellular water efflux rate constant, kio, and the CR intravasation rate constant, kep, are both insensitive to AIF scaling.
This indicates that, for DCE-MRI of prostate cancer and possibly other cancers, kio and kep may be more suitable imaging biomarkers for cross-platform, multicenter applications. Data from our limited study cohort show that kio correlates with Gleason scores, suggesting that it may be a useful biomarker for prostate cancer disease progression monitoring. Copyright © 2016 Elsevier Inc. All rights reserved.
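    The reported scaling behavior can be reproduced with a small simulation of the FXL Tofts model (a synthetic AIF and illustrative parameter values are assumed here): doubling the AIF amplitude is absorbed entirely by K(trans) (and hence ve), while the fitted kep is unchanged.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(0.0, 6.0, 0.02)                 # minutes
aif = 5.0 * t * np.exp(-1.5 * t)              # synthetic AIF (illustrative)

def tofts(t, ktrans, kep, c_a):
    """FXL Tofts model: Ct = Ktrans * conv(Ca, exp(-kep * t))."""
    dt = t[1] - t[0]
    return ktrans * np.convolve(c_a, np.exp(-kep * t))[:t.size] * dt

ct = tofts(t, 0.25, 0.6, aif)                 # "measured" tissue curve

def fit(c_a):
    popt, _ = curve_fit(lambda tt, kt, kp: tofts(tt, kt, kp, c_a),
                        t, ct, p0=[0.1, 0.5])
    return popt                               # [Ktrans, kep]

k_true = fit(aif)          # fit against the true-scale AIF
k_scaled = fit(2.0 * aif)  # fit against an AIF with doubled amplitude
print(k_true, k_scaled)    # Ktrans halves, kep stays the same
```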

  4. A comprehensive analysis of earthquake damage patterns using high dimensional model representation feature selection

    NASA Astrophysics Data System (ADS)

    Taşkin Kaya, Gülşen

    2013-10-01

    Recently, earthquake damage assessment using satellite images has been a very popular ongoing research direction. Especially with the availability of very high resolution (VHR) satellite images, quite detailed damage maps at the building scale have been produced, and various studies have been conducted in the literature. As the spatial resolution of satellite images increases, distinguishing damage patterns becomes more difficult, especially when only spectral information is used during classification. In order to overcome this difficulty, textural information needs to be incorporated into the classification to improve the visual quality and reliability of the damage map. Many kinds of textural information can be derived from VHR satellite images depending on the algorithm used. However, the extraction and evaluation of textural information are generally time consuming, especially for the large areas affected by an earthquake, due to the size of the VHR image. Therefore, in order to produce a quick damage map, the most useful features describing damage patterns, as well as the redundant features, need to be known in advance. In this study, a very high resolution satellite image acquired after the Bam, Iran earthquake was used to identify the earthquake damage. Both spectral and textural information were used during the classification. For textural information, second-order Haralick features were extracted from the panchromatic image for the area of interest using the gray level co-occurrence matrix with different window sizes and directions. In addition to using spatial features in classification, the most useful features representing the damage characteristics were selected with a novel feature selection method based on high dimensional model representation (HDMR), which gives the sensitivity of each feature during classification. HDMR was recently proposed as an efficient tool to capture input-output relationships in high-dimensional systems for many problems in science and engineering, and is designed to improve the efficiency of deducing high-dimensional behaviors. The method is formed by a particular organization of low-dimensional component functions, in which each function represents the contribution of one or more input variables to the output variables.
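    The second-order Haralick texture computation referred to above can be sketched directly; the code below builds a gray level co-occurrence matrix for one offset and evaluates two common Haralick features (a toy image and a single direction, for illustration only):

```python
import numpy as np

def glcm(img, dx, dy, levels):
    """Normalized gray level co-occurrence matrix for pixel offset (dx, dy)."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])

p = glcm(img, dx=1, dy=0, levels=4)           # horizontal neighbor pairs
i, j = np.indices(p.shape)
contrast = np.sum(p * (i - j) ** 2)           # Haralick contrast
homogeneity = np.sum(p / (1.0 + (i - j) ** 2))
print(contrast, homogeneity)
```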

  5. A Web Browsing System by Eye-gaze Input

    NASA Astrophysics Data System (ADS)

    Abe, Kiyohiko; Owada, Kosuke; Ohi, Shoichi; Ohyama, Minoru

    We have developed an eye-gaze input system for people with severe physical disabilities, such as amyotrophic lateral sclerosis (ALS) patients. This system utilizes a personal computer and a home video camera to detect eye-gaze under natural light. The system detects both vertical and horizontal eye-gaze by simple image analysis, and does not require special image processing units or sensors. We also developed a platform for eye-gaze input based on our system. In this paper, we propose a new web browsing system for physically disabled computer users as an application of this platform. The proposed web browsing system uses a method of direct indicator selection, in which indicators are categorized by function and organized hierarchically; users select the desired function by switching between indicator groups. The system also analyzes the locations of selectable objects on a web page, such as hyperlinks, radio buttons, and edit boxes, and stores them so that the mouse cursor can skip directly to a candidate input object. This enables web browsing at a faster pace.

  6. Hypothalamic Projections to the Optic Tectum in Larval Zebrafish

    PubMed Central

    Heap, Lucy A.; Vanwalleghem, Gilles C.; Thompson, Andrew W.; Favre-Bulle, Itia; Rubinsztein-Dunlop, Halina; Scott, Ethan K.

    2018-01-01

    The optic tectum of larval zebrafish is an important model for understanding visual processing in vertebrates. The tectum has been traditionally viewed as dominantly visual, with a majority of studies focusing on the processes by which tectal circuits receive and process retinally-derived visual information. Recently, a handful of studies have shown a much more complex role for the optic tectum in larval zebrafish, and anatomical and functional data from these studies suggest that this role extends beyond the visual system, and beyond the processing of exclusively retinal inputs. Consistent with this evolving view of the tectum, we have used a Gal4 enhancer trap line to identify direct projections from rostral hypothalamus (RH) to the tectal neuropil of larval zebrafish. These projections ramify within the deepest laminae of the tectal neuropil, the stratum album centrale (SAC)/stratum griseum periventriculare (SPV), and also innervate strata distinct from those innervated by retinal projections. Using optogenetic stimulation of the hypothalamic projection neurons paired with calcium imaging in the tectum, we find rebound firing in tectal neurons consistent with hypothalamic inhibitory input. Our results suggest that tectal processing in larval zebrafish is modulated by hypothalamic inhibitory inputs to the deep tectal neuropil. PMID:29403362

  7. Hypothalamic Projections to the Optic Tectum in Larval Zebrafish.

    PubMed

    Heap, Lucy A; Vanwalleghem, Gilles C; Thompson, Andrew W; Favre-Bulle, Itia; Rubinsztein-Dunlop, Halina; Scott, Ethan K

    2017-01-01

    The optic tectum of larval zebrafish is an important model for understanding visual processing in vertebrates. The tectum has been traditionally viewed as dominantly visual, with a majority of studies focusing on the processes by which tectal circuits receive and process retinally-derived visual information. Recently, a handful of studies have shown a much more complex role for the optic tectum in larval zebrafish, and anatomical and functional data from these studies suggest that this role extends beyond the visual system, and beyond the processing of exclusively retinal inputs. Consistent with this evolving view of the tectum, we have used a Gal4 enhancer trap line to identify direct projections from rostral hypothalamus (RH) to the tectal neuropil of larval zebrafish. These projections ramify within the deepest laminae of the tectal neuropil, the stratum album centrale (SAC)/stratum griseum periventriculare (SPV), and also innervate strata distinct from those innervated by retinal projections. Using optogenetic stimulation of the hypothalamic projection neurons paired with calcium imaging in the tectum, we find rebound firing in tectal neurons consistent with hypothalamic inhibitory input. Our results suggest that tectal processing in larval zebrafish is modulated by hypothalamic inhibitory inputs to the deep tectal neuropil.

  8. Functional transformations of odor inputs in the mouse olfactory bulb.

    PubMed

    Adam, Yoav; Livneh, Yoav; Miyamichi, Kazunari; Groysman, Maya; Luo, Liqun; Mizrahi, Adi

    2014-01-01

    Sensory inputs from the nasal epithelium to the olfactory bulb (OB) are organized as a discrete map in the glomerular layer (GL). This map is then modulated by distinct types of local neurons and transmitted to higher brain areas via mitral and tufted cells. Little is known about the functional organization of the circuits downstream of glomeruli. We used in vivo two-photon calcium imaging for large-scale functional mapping of distinct neuronal populations in the mouse OB, at single cell resolution. Specifically, we imaged odor responses of mitral cells (MCs), tufted cells (TCs) and glomerular interneurons (GL-INs). Mitral cell population activity was heterogeneous and only mildly correlated with the olfactory receptor neuron (ORN) inputs, supporting the view that discrete input maps undergo significant transformations at the output level of the OB. In contrast, population activity profiles of TCs were dense, and highly correlated with the odor inputs in both space and time. Glomerular interneurons were also highly correlated with the ORN inputs, but showed higher activation thresholds, suggesting that these neurons are driven by strongly activated glomeruli. Temporally, upon persistent odor exposure, TCs quickly adapted. In contrast, both MCs and GL-INs showed diverse temporal response patterns, suggesting that GL-INs could contribute to the transformations MCs undergo at slow time scales. Our data suggest that sensory odor maps are transformed by TCs and MCs in different ways, forming two distinct and parallel information streams.

  9. Lower bound for LCD image quality

    NASA Astrophysics Data System (ADS)

    Olson, William P.; Balram, Nikhil

    1996-03-01

    The paper presents an objective lower bound for the discrimination of patterns and fine detail in images on a monochrome LCD. In applications such as medical imaging and military avionics the information of interest is often at the highest frequencies in the image. Since LCDs are sampled data systems, their output modulation is dependent on the phase between the input signal and the sampling points. This phase dependence becomes particularly significant at high spatial frequencies. In order to use an LCD for applications such as those mentioned above it is essential to have a lower (worst case) bound on the performance of the display. We address this problem by providing a mathematical model for the worst case output modulation of an LCD in response to a sine wave input. This function can be interpreted as a worst case modulation transfer function (MTF). The intersection of the worst case MTF with the contrast threshold function (CTF) of the human visual system defines the highest spatial frequency that will always be detectable. In addition to providing the worst case limiting resolution, this MTF is combined with the CTF to produce objective worst case image quality values using the modulation transfer function area (MTFA) metric.
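    The phase dependence of a sampled-data display's output modulation can be illustrated numerically. The sketch below uses an idealized point-sampling model (it ignores the pixel aperture and reconstruction, so it is not the paper's full worst-case MTF model): for each spatial frequency it takes the minimum modulation over all phases between the input sine and the sampling grid.

```python
import numpy as np

def modulation(freq, phase, n_pixels=256):
    """Peak modulation of a sine of `freq` cycles/pixel after point sampling."""
    s = np.sin(2 * np.pi * freq * np.arange(n_pixels) + phase)
    return (s.max() - s.min()) / 2.0

def worst_case_mtf(freq, n_phases=360):
    """Minimum modulation over all phases between the sine and the pixel grid."""
    phases = np.linspace(0.0, 2 * np.pi, n_phases, endpoint=False)
    return min(modulation(freq, p) for p in phases)

# Far below Nyquist the worst case stays high; at Nyquist (0.5 cycles/pixel)
# the worst-case phase samples only the zero crossings, so the displayed
# modulation vanishes even though the average-case response there is finite.
print(worst_case_mtf(0.1), worst_case_mtf(0.5))
```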

  10. Automated movement correction for dynamic PET/CT images: evaluation with phantom and patient data.

    PubMed

    Ye, Hu; Wong, Koon-Pong; Wardak, Mirwais; Dahlbom, Magnus; Kepe, Vladimir; Barrio, Jorge R; Nelson, Linda D; Small, Gary W; Huang, Sung-Cheng

    2014-01-01

    Head movement during dynamic brain PET/CT imaging results in mismatch between the CT and dynamic PET images. It can cause artifacts in CT-based attenuation corrected PET images, thus affecting both the qualitative and quantitative aspects of the dynamic PET images and the derived parametric images. In this study, we developed an automated retrospective image-based movement correction (MC) procedure. The MC method first registered the CT image to each dynamic PET frame, then re-reconstructed the PET frames with CT-based attenuation correction, and finally re-aligned all the PET frames to the same position. We evaluated the MC method's performance on the Hoffman phantom and on dynamic FDDNP and FDG PET/CT images of patients with neurodegenerative disease or with poor compliance. Dynamic FDDNP PET/CT images (65 min) were obtained from 12 patients and dynamic FDG PET/CT images (60 min) were obtained from 6 patients. Logan analysis with the cerebellum as the reference region was used to generate the regional distribution volume ratio (DVR) for the FDDNP scans before and after MC. For the FDG studies, the image-derived input function was used to generate parametric images of the FDG uptake constant (Ki) before and after MC. The phantom study showed high registration accuracy between PET and CT and improved PET images after MC. In the patient study, head movement was observed in all subjects, especially in late PET frames, with an average displacement of 6.92 mm. The z-direction translation (average maximum = 5.32 mm) and x-axis rotation (average maximum = 5.19 degrees) occurred most frequently. Image artifacts were significantly diminished after MC. There were significant differences (P<0.05) in the FDDNP DVR and FDG Ki values in the parietal and temporal regions after MC. In conclusion, MC applied to dynamic brain FDDNP and FDG PET/CT scans could improve the qualitative and quantitative aspects of the images of both tracers.
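    The Logan analysis used here can be sketched numerically. For a one-tissue model the Logan relation ∫Ct dτ = Vt ∫Cp dτ − Ct/k2 holds, so the late-time slope of the Logan plot recovers the distribution volume. The sketch below uses an assumed plasma input function and illustrative rate constants (the study itself used a cerebellum reference region for FDDNP and an image-derived input function for FDG):

```python
import numpy as np

t = np.arange(0.05, 90.0, 0.05)                        # minutes
dt = t[1] - t[0]
cp = 10.0 * t * np.exp(-0.3 * t) + np.exp(-0.05 * t)   # assumed plasma input

k1, k2 = 0.1, 0.05                                     # one-tissue rate constants
ct = k1 * np.convolve(cp, np.exp(-k2 * t))[:t.size] * dt

# Logan plot: int(Ct)/Ct versus int(Cp)/Ct; the late-time slope is the
# total distribution volume Vt = K1/k2 (here 2.0).
x = np.cumsum(cp) * dt / ct
y = np.cumsum(ct) * dt / ct
late = t > 60.0
slope, intercept = np.polyfit(x[late], y[late], 1)
print(slope, k1 / k2)
```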

  11. Automated Movement Correction for Dynamic PET/CT Images: Evaluation with Phantom and Patient Data

    PubMed Central

    Ye, Hu; Wong, Koon-Pong; Wardak, Mirwais; Dahlbom, Magnus; Kepe, Vladimir; Barrio, Jorge R.; Nelson, Linda D.; Small, Gary W.; Huang, Sung-Cheng

    2014-01-01

    Head movement during dynamic brain PET/CT imaging results in a mismatch between the CT and dynamic PET images. This mismatch can cause artifacts in CT-based attenuation-corrected PET images, affecting both the qualitative and quantitative aspects of the dynamic PET images and the derived parametric images. In this study, we developed an automated retrospective image-based movement correction (MC) procedure. The MC method first registers the CT image to each dynamic PET frame, then re-reconstructs the PET frames with CT-based attenuation correction, and finally re-aligns all the PET frames to the same position. We evaluated the MC method's performance on the Hoffman phantom and on dynamic FDDNP and FDG PET/CT images of patients with neurodegenerative disease or with poor compliance. Dynamic FDDNP PET/CT images (65 min) were obtained from 12 patients and dynamic FDG PET/CT images (60 min) from 6 patients. Logan analysis with cerebellum as the reference region was used to generate regional distribution volume ratios (DVR) for the FDDNP scans before and after MC. For the FDG studies, the image-derived input function was used to generate parametric images of the FDG uptake constant (Ki) before and after MC. The phantom study showed high registration accuracy between PET and CT and improved PET images after MC. In the patient studies, head movement was observed in all subjects, especially in late PET frames, with an average displacement of 6.92 mm. The z-direction translation (average maximum = 5.32 mm) and x-axis rotation (average maximum = 5.19 degrees) occurred most frequently. Image artifacts were significantly diminished after MC. There were significant differences (P<0.05) in the FDDNP DVR and FDG Ki values in the parietal and temporal regions after MC. In conclusion, MC applied to dynamic brain FDDNP and FDG PET/CT scans could improve the qualitative and quantitative aspects of images of both tracers. PMID:25111700
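
    The frame re-alignment step of such an MC pipeline can be illustrated with a minimal, translation-only sketch (the actual method performs rigid-body registration of the CT to each PET frame; the phase-correlation approach and 2-D frames below are our own simplification, not the authors' code):

```python
import numpy as np

def phase_correlation_shift(ref, mov):
    """Estimate the integer (row, col) shift that aligns `mov` to `ref`
    via phase correlation. Translation-only stand-in for rigid
    registration; real PET/CT MC also needs rotations."""
    F = np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))
    r = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    idx = np.unravel_index(np.argmax(r), r.shape)
    # unwrap circular peak coordinates into signed shifts
    return tuple(s if s <= n // 2 else s - n for s, n in zip(idx, r.shape))

def realign_frames(frames):
    """Shift every frame onto frame 0 (circular shift for simplicity)."""
    ref = frames[0]
    out = [ref]
    for f in frames[1:]:
        dy, dx = phase_correlation_shift(ref, f)
        out.append(np.roll(np.roll(f, dy, axis=0), dx, axis=1))
    return out
```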

  12. Joint statistics of strongly correlated neurons via dimensionality reduction

    NASA Astrophysics Data System (ADS)

    Deniz, Taşkın; Rotter, Stefan

    2017-06-01

    The relative timing of action potentials in neurons recorded from local cortical networks often shows a non-trivial dependence, which is then quantified by cross-correlation functions. Theoretical models emphasize that such spike train correlations are an inevitable consequence of two neurons being part of the same network and sharing some synaptic input. For non-linear neuron models, however, explicit correlation functions are difficult to compute analytically, and perturbative methods work only for weak shared input. In order to treat strong correlations, we suggest here an alternative non-perturbative method. Specifically, we study the case of two leaky integrate-and-fire neurons with strong shared input. Correlation functions derived from simulated spike trains fit our theoretical predictions very accurately. Using our method, we computed the non-linear correlation transfer as well as correlation functions that are asymmetric due to inhomogeneous intrinsic parameters or unequal input.
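
    The cross-correlation functions discussed here can be estimated from recorded spike trains with a simple binned cross-correlogram (a generic sketch; the spike times and bin sizes are hypothetical, not taken from the study):

```python
import numpy as np

def cross_correlogram(spikes_a, spikes_b, bin_ms=1.0, max_lag_ms=50.0):
    """Empirical cross-correlation of two binned spike trains.

    spikes_a, spikes_b: 1-D arrays of spike times in ms.
    Returns lag values (ms) and raw coincidence counts per lag bin.
    """
    t_max = max(spikes_a.max(), spikes_b.max()) + bin_ms
    edges = np.arange(0.0, t_max + bin_ms, bin_ms)
    a = np.histogram(spikes_a, edges)[0].astype(float)
    b = np.histogram(spikes_b, edges)[0].astype(float)
    n_lags = int(max_lag_ms / bin_ms)
    lags = np.arange(-n_lags, n_lags + 1) * bin_ms
    # counts[k] = sum_t a[t] * b[t + k], with slices clipped to valid range
    counts = np.array([np.dot(a[max(0, -k): len(a) - max(0, k)],
                              b[max(0, k): len(b) - max(0, -k)])
                       for k in range(-n_lags, n_lags + 1)])
    return lags, counts
```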

  13. Automated Surgical Approach Planning for Complex Skull Base Targets: Development and Validation of a Cost Function and Semantic Atlas.

    PubMed

    Aghdasi, Nava; Whipple, Mark; Humphreys, Ian M; Moe, Kris S; Hannaford, Blake; Bly, Randall A

    2018-06-01

    Successful multidisciplinary treatment of skull base pathology requires precise preoperative planning. Current surgical approach (pathway) selection for these complex procedures depends on an individual surgeon's experience and background training. Because of anatomical variation in both normal tissue and pathology (eg, tumor), a successful surgical pathway used on one patient is not necessarily the best approach for another patient. The question is how to define and obtain optimized patient-specific surgical approach pathways. In this article, we demonstrate that the surgeon's knowledge and decision making in preoperative planning can be modeled by a multiobjective cost function in a retrospective analysis of actual complex skull base cases. Two different approaches, a weighted-sum approach and Pareto optimality, were used with a defined cost function to derive optimized surgical pathways based on preoperative computed tomography (CT) scans and manually designated pathology. With the first method, the surgeon's preferences were input as a set of weights for each objective before the search. In the second approach, the surgeon's preferences were used to select a surgical pathway from the computed Pareto optimal set. Using preoperative CT and magnetic resonance imaging, the patient-specific surgical pathways derived by these methods were similar (85% agreement) to the actual approaches performed on patients. In one case where the actual surgical approach was different, revision surgery was required and was performed utilizing the computationally derived approach pathway.
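
    The two optimization strategies can be sketched generically over a cost matrix (the candidate costs and weights below are hypothetical; the paper's cost function encodes surgical objectives):

```python
import numpy as np

def weighted_sum_choice(costs, weights):
    """Pick the candidate minimizing the weighted sum of objective costs.
    costs: (n_candidates, n_objectives); weights encode preferences."""
    return int(np.argmin(costs @ np.asarray(weights)))

def pareto_front(costs):
    """Indices of non-dominated candidates (minimization): no other
    candidate is <= in every objective and < in at least one."""
    n = len(costs)
    keep = []
    for i in range(n):
        dominated = any(
            np.all(costs[j] <= costs[i]) and np.any(costs[j] < costs[i])
            for j in range(n) if j != i)
        if not dominated:
            keep.append(i)
    return keep
```

    With the weighted-sum method the preference weights are fixed before the search; with the Pareto method the whole non-dominated set is computed first and a pathway is selected from it afterwards.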

  14. CLASSIFYING MEDICAL IMAGES USING MORPHOLOGICAL APPEARANCE MANIFOLDS.

    PubMed

    Varol, Erdem; Gaonkar, Bilwaj; Davatzikos, Christos

    2013-12-31

    Input features for medical image classification algorithms are extracted from raw images using a series of preprocessing steps. One common preprocessing step in computational neuroanatomy and functional brain mapping is the nonlinear registration of raw images to a common template space. Typically, the registration methods used are parametric, and their output varies greatly with changes in parameters. Most previously reported results perform registration using a fixed parameter setting and use the results as input to the subsequent classification step. The variation in registration results due to the choice of parameters thus translates into variation in the performance of the classifiers that depend on the registration step for input. Analogous issues have been investigated in the computer vision literature, where image appearance varies with pose and illumination, thereby making classification vulnerable to these confounding parameters. The proposed methodology addresses this issue by sampling image appearances as registration parameters vary, and shows that better classification accuracies can be obtained this way, compared to the conventional approach.

  15. Determination of mango fruit from binary image using randomized Hough transform

    NASA Astrophysics Data System (ADS)

    Rizon, Mohamed; Najihah Yusri, Nurul Ain; Abdul Kadir, Mohd Fadzil; bin Mamat, Abd. Rasid; Abd Aziz, Azim Zaliha; Nanaa, Kutiba

    2015-12-01

    A method for detecting mango fruit in an RGB input image is proposed in this research. The input image is first processed into a binary image using texture analysis and morphological operations (dilation and erosion). The Randomized Hough Transform (RHT) method is then used to find the best ellipse fit for each binary region. By using texture analysis, the system can detect mango fruits that partially overlap each other and fruits that are partially occluded by leaves. The combination of texture analysis and morphological operators can isolate partially overlapped fruits and fruits that are partially occluded by leaves. The parameters derived from the RHT method were used to calculate the center of the ellipse, which acts as the gripping point for the fruit-picking robot. As a result, the detection rate was up to 95% for fruit that was partially overlapped or partially covered by leaves.
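
    The ellipse-fitting step can be illustrated with a plain least-squares conic fit that recovers the ellipse center (an algebraic stand-in for the randomized Hough procedure, not the paper's implementation):

```python
import numpy as np

def ellipse_center_lsq(xs, ys):
    """Fit the conic a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1 to boundary
    points by least squares and return the ellipse center (the gripping
    point in the application above)."""
    A = np.column_stack([xs**2, xs * ys, ys**2, xs, ys])
    a, b, c, d, e = np.linalg.lstsq(A, np.ones_like(xs), rcond=None)[0]
    # The conic's gradient vanishes at the center:
    #   [2a  b] [x0]   [-d]
    #   [ b 2c] [y0] = [-e]
    x0, y0 = np.linalg.solve(np.array([[2 * a, b], [b, 2 * c]]), [-d, -e])
    return x0, y0
```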

  16. Cascaded analysis of signal and noise propagation through a heterogeneous breast model.

    PubMed

    Mainprize, James G; Yaffe, Martin J

    2010-10-01

    The detectability of lesions in radiographic images can be impaired by patterns caused by the surrounding anatomic structures. The presence of such patterns is often referred to as anatomic noise. Others have previously extended signal and noise propagation theory to include variable background structure as an additional noise term and used it in simulations for analysis by human and ideal observers. Here, the analytic forms of the signal and noise transfer are derived to obtain an exact expression for any input random distribution and the "power law" filter used to generate the texture of the tissue distribution. A cascaded analysis of propagation through a heterogeneous model is derived for x-ray projection through simulated heterogeneous backgrounds. This is achieved by considering transmission through the breast as a correlated amplification point process. The analytic forms of the cascaded analysis were compared to monoenergetic Monte Carlo simulations of x-ray propagation through power law structured backgrounds. As expected, it was found that although the quantum noise power component scales linearly with the x-ray signal, the anatomic noise will scale with the square of the x-ray signal. There was a good agreement between results obtained using analytic expressions for the noise power and those from Monte Carlo simulations for different background textures, random input functions, and x-ray fluence. Analytic equations for the signal and noise properties of heterogeneous backgrounds were derived. These may be used in direct analysis or as a tool to validate simulations in evaluating detectability.
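
    The scaling result (quantum noise linear in fluence, anatomic noise quadratic) can be checked numerically with a toy transmission model; the Gaussian transmission fluctuation below is a hypothetical stand-in for the structured breast background:

```python
import numpy as np

rng = np.random.default_rng(1)

def total_variance(mean_fluence, n=200_000):
    """Variance of the detected signal when the transmitted fraction t
    fluctuates (anatomic noise) and detection is Poisson (quantum noise).
    By the law of total variance:
        Var[counts] = q0 * E[t]  +  q0**2 * Var[t]
    i.e. a term linear in fluence plus a term quadratic in fluence."""
    t = np.clip(rng.normal(0.5, 0.05, n), 0, 1)  # anatomic fluctuation
    counts = rng.poisson(mean_fluence * t)       # quantum noise
    return counts.var()
```

    For a mean fluence of 100 the prediction is 100*0.5 + 100**2 * 0.05**2 = 75; raising the fluence tenfold raises the predicted variance forty-fold because the quadratic term dominates.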

  17. Are Imaging and Lesioning Convergent Methods for Assessing Functional Specialisation? Investigations Using an Artificial Neural Network

    ERIC Educational Resources Information Center

    Thomas, Michael S. C.; Purser, Harry R. M.; Tomlinson, Simon; Mareschal, Denis

    2012-01-01

    This article presents an investigation of the relationship between lesioning and neuroimaging methods of assessing functional specialisation, using synthetic brain imaging (SBI) and lesioning of a connectionist network of past-tense formation. The model comprised two processing "routes": one was a direct route between layers of input and output…

  18. P-glycoprotein (ABCB1) inhibits the influx and increases the efflux of 11C-metoclopramide across the blood-brain barrier: a PET study on non-human primates.

    PubMed

    Auvity, Sylvain; Caillé, Fabien; Marie, Solène; Wimberley, Catriona; Bauer, Martin; Langer, Oliver; Buvat, Irène; Goutal, Sébastien; Tournier, Nicolas

    2018-05-10

    Rationale: PET imaging using radiolabeled high-affinity substrates of P-glycoprotein (ABCB1) has convincingly revealed the role of this major efflux transporter in limiting the influx of its substrates from blood into the brain across the blood-brain barrier (BBB). Many drugs, such as metoclopramide, are weak ABCB1 substrates and distribute into the brain even when ABCB1 is fully functional. In this study, we used kinetic modeling and validated simplified methods to highlight and quantify the impact of ABCB1 on the BBB influx and efflux of 11C-metoclopramide, as a model weak ABCB1 substrate, in non-human primates. Methods: The regional brain kinetics of a tracer dose of 11C-metoclopramide (298 ± 44 MBq) were assessed in baboons using PET without (n = 4) or with intravenous co-infusion of the ABCB1 inhibitor tariquidar (4 mg/kg/h, n = 4). Metabolite-corrected arterial input functions were generated to estimate the regional volume of distribution (VT) as well as the influx (K1) and efflux (k2) rate constants, using a one-tissue compartment model. Modeling outcome parameters were correlated with image-derived parameters, i.e., the areas under the curve AUC0-30min and AUC30-60min (SUV.min) as well as the elimination slope (kE; min-1) from 30 to 60 min of the regional time-activity curves. Results: Tariquidar significantly increased the brain distribution of 11C-metoclopramide (VT = 4.3 ± 0.5 mL/cm3 and 8.7 ± 0.5 mL/cm3 for baseline and ABCB1 inhibition conditions, respectively, P < 0.001), with a 1.28-fold increase in K1 (P < 0.05) and a 1.64-fold decrease in k2 (P < 0.001). The effect of tariquidar was homogeneous across different brain regions. The most sensitive parameters to ABCB1 inhibition were VT (2.02-fold increase) and AUC30-60min (2.02-fold increase). VT was significantly (P < 0.0001) correlated with AUC30-60min (r2 = 0.95), AUC0-30min (r2 = 0.87) and kE (r2 = 0.62). Conclusion: 11C-metoclopramide PET imaging revealed the relative importance of both the influx hindrance and efflux enhancement components of ABCB1 in a relevant model of the human BBB. The overall impact of ABCB1 on drug delivery to the brain can be non-invasively estimated from image-derived outcome parameters without the need for an arterial input function. Copyright © 2018 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
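
    A quick consistency check on the reported rate constants, using the one-tissue compartment relation VT = K1/k2:

```python
# In a one-tissue compartment model the distribution volume is VT = K1/k2,
# so the reported fold-changes in the rate constants predict the VT change.
K1_fold = 1.28        # reported fold-increase in influx under tariquidar
k2_fold = 1.0 / 1.64  # reported fold-decrease in efflux
VT_fold = K1_fold / k2_fold
print(round(VT_fold, 2))  # ~2.1, consistent with the observed 2.02-fold VT increase
```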

  19. Simultaneous acquisition sequence for improved hepatic pharmacokinetics quantification accuracy (SAHA) for dynamic contrast-enhanced MRI of liver.

    PubMed

    Ning, Jia; Sun, Yongliang; Xie, Sheng; Zhang, Bida; Huang, Feng; Koken, Peter; Smink, Jouke; Yuan, Chun; Chen, Huijun

    2018-05-01

    To propose a simultaneous acquisition sequence for improved hepatic pharmacokinetics quantification accuracy (SAHA) for liver dynamic contrast-enhanced MRI. The proposed SAHA simultaneously acquires high temporal-resolution 2D images for vascular input function extraction using Cartesian sampling and 3D large-coverage, high spatial-resolution liver dynamic contrast-enhanced images using a golden-angle stack-of-stars acquisition, in an interleaved way. Simulations were conducted to investigate the accuracy of SAHA in pharmacokinetic analysis. A healthy volunteer and three patients with cirrhosis or hepatocellular carcinoma were included in the study to investigate the feasibility of SAHA in vivo. Simulation studies showed that SAHA provides results closer to the true values and lower root mean square error of the estimated pharmacokinetic parameters in all of the tested scenarios. The in vivo scans provided fair image quality both for the 2D images used to extract the arterial and portal venous input functions and for the 3D whole-liver images. The in vivo fitting results showed that the perfusion parameters of healthy liver were significantly different from those of cirrhotic liver and HCC. The proposed SAHA can provide improved accuracy in pharmacokinetic modeling and is feasible in human liver dynamic contrast-enhanced MRI, suggesting that SAHA is a potential tool for liver dynamic contrast-enhanced MRI. Magn Reson Med 79:2629-2641, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  20. Adherent Raindrop Modeling, Detection and Removal in Video.

    PubMed

    You, Shaodi; Tan, Robby T; Kawakami, Rei; Mukaigawa, Yasuhiro; Ikeuchi, Katsushi

    2016-09-01

    Raindrops adhered to a windscreen or window glass can significantly degrade the visibility of a scene. Modeling, detecting and removing raindrops will, therefore, benefit many computer vision applications, particularly outdoor surveillance systems and intelligent vehicle systems. In this paper, a method that automatically detects and removes adherent raindrops is introduced. The core idea is to exploit the local spatio-temporal derivatives of raindrops. To accomplish this, we first model adherent raindrops using the laws of physics, and detect raindrops based on these models in combination with the motion and intensity temporal derivatives of the input video. Having detected the raindrops, we remove them and restore the images based on the observation that some raindrop areas completely occlude the scene, while others occlude it only partially. For partially occluding areas, we restore them by retrieving as much information about the scene as possible, namely, by solving a blending function on the detected partially occluding areas using the temporal intensity derivative. For completely occluding areas, we recover them by using a video completion technique. Experimental results using various real videos show the effectiveness of our method.
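
    The temporal-derivative cue can be sketched in a few lines: adherent raindrops tend to stay nearly constant over time while the scene behind them moves. The thresholds and synthetic frames below are illustrative only, not the paper's detector:

```python
import numpy as np

def static_blur_mask(frames, grad_thresh=0.05, motion_thresh=0.02):
    """Flag pixels whose intensity barely changes over time (low temporal
    derivative) while the rest of the scene moves; adherent raindrops
    tend to behave this way. Thresholds are illustrative only."""
    stack = np.stack(frames).astype(float)
    temporal_sd = stack.std(axis=0)               # low where intensity is static
    scene_motion = np.abs(np.diff(stack, axis=0)).mean()
    if scene_motion < motion_thresh:              # static scene: no evidence
        return np.zeros(stack.shape[1:], dtype=bool)
    return temporal_sd < grad_thresh
```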

  1. Approach for Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives

    NASA Technical Reports Server (NTRS)

    Putko, Michele M.; Newman, Perry A.; Taylor, Arthur C., III; Green, Lawrence L.

    2001-01-01

    This paper presents an implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 1-D Euler CFD (computational fluid dynamics) code. Given uncertainties in statistically independent, random, normally distributed input variables, a first- and second-order statistical moment matching procedure is performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, the moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
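
    The first-order moment-matching idea can be sketched for a toy output function and checked against Monte Carlo sampling (the function `f` below is a hypothetical stand-in for the CFD code, not the quasi 1-D Euler solver):

```python
import numpy as np

def f(x1, x2):
    """Hypothetical nonlinear output standing in for the CFD code."""
    return x1**2 + np.sin(x2)

# First-order moment matching for independent normal inputs:
#   mean(f) ~ f(mu),  var(f) ~ sum_i (df/dx_i)^2 * sigma_i^2
mu, sigma = np.array([1.0, 0.5]), np.array([0.05, 0.1])
grad = np.array([2 * mu[0], np.cos(mu[1])])   # analytic sensitivity derivatives
var_approx = np.sum(grad**2 * sigma**2)

# Monte Carlo check, as in the paper's validation of the moment method
rng = np.random.default_rng(0)
x = rng.normal(mu, sigma, size=(100_000, 2))
var_mc = f(x[:, 0], x[:, 1]).var()
```

    For small input standard deviations about the mean, the two variance estimates agree closely, which is the regime in which the approximate method was found to be valid.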

  2. Deep multi-spectral ensemble learning for electronic cleansing in dual-energy CT colonography

    NASA Astrophysics Data System (ADS)

    Tachibana, Rie; Näppi, Janne J.; Hironaka, Toru; Kim, Se Hyung; Yoshida, Hiroyuki

    2017-03-01

    We developed a novel electronic cleansing (EC) method for dual-energy CT colonography (DE-CTC) based on an ensemble deep convolution neural network (DCNN) and multi-spectral multi-slice image patches. In the method, an ensemble DCNN is used to classify each voxel of a DE-CTC image volume into five classes: luminal air, soft tissue, tagged fecal materials, and partial-volume boundaries between air and tagging and those between soft tissue and tagging. Each DCNN acts as a voxel classifier, where an input image patch centered at the voxel is generated as input to the DCNNs. An image patch has three channels that are mapped from a region-of-interest containing the image plane of the voxel and the two adjacent image planes. Six different types of spectral input image datasets were derived using two dual-energy CT images, two virtual monochromatic images, and two material images. An ensemble DCNN was constructed by use of a meta-classifier that combines the output of multiple DCNNs, each of which was trained with a different type of multi-spectral image patches. The electronically cleansed CTC images were calculated by removal of regions classified as other than soft tissue, followed by a colon surface reconstruction. For pilot evaluation, 359 volumes of interest (VOIs) representing sources of subtraction artifacts observed in current EC schemes were sampled from 30 clinical CTC cases. Preliminary results showed that the ensemble DCNN can yield high accuracy in labeling of the VOIs, indicating that deep learning of multi-spectral EC with multi-slice imaging could accurately remove residual fecal materials from CTC images without generating major EC artifacts.
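
    The meta-classifier combination can be sketched as a weighted average of per-voxel class probabilities (a simple stand-in for the trained meta-classifier; the paper trains one DCNN per spectral input type):

```python
import numpy as np

def ensemble_vote(prob_maps, weights=None):
    """Combine per-voxel class probabilities from several base classifiers
    by weighted averaging, then take the argmax class per voxel.

    prob_maps: array-like of shape (n_models, n_voxels, n_classes).
    """
    prob_maps = np.asarray(prob_maps)
    if weights is None:
        weights = np.full(len(prob_maps), 1.0 / len(prob_maps))
    combined = np.tensordot(weights, prob_maps, axes=1)  # (n_voxels, n_classes)
    return combined.argmax(axis=-1)
```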

  3. Dependence of image quality on image operator and noise for optical diffusion tomography

    NASA Astrophysics Data System (ADS)

    Chang, Jenghwa; Graber, Harry L.; Barbour, Randall L.

    1998-04-01

    By applying linear perturbation theory to the radiation transport equation, the inverse problem of optical diffusion tomography can be reduced to a set of linear equations, Wμ = R, where W is the weight function, μ are the cross-section perturbations to be imaged, and R is the vector of detector reading perturbations. We have studied the dependence of image quality on added systematic error and/or random noise in W and R. Tomographic data were collected from cylindrical phantoms, with and without added inclusions, using Monte Carlo methods. Image reconstruction was accomplished using a constrained conjugate gradient descent method. Results show that accurate images containing few artifacts are obtained when W is derived from a reference state whose optical thickness matches that of the unknown test medium. Comparable image quality was also obtained for unmatched W, but the location of the target becomes more inaccurate as the mismatch increases. Results of the noise study show that image quality is much more sensitive to noise in W than in R, and that the impact of noise increases with the number of iterations. Images reconstructed after pure noise was substituted for R consistently contain large peaks clustered about the cylinder axis, an initially unexpected structure. In other words, random input produces a non-random output. This finding suggests that algorithms sensitive to the evolution of this feature could be developed to suppress noise effects.
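
    Solving Wμ = R in the least-squares sense with conjugate gradients can be sketched as follows (an unconstrained simplification; the study uses a constrained variant):

```python
import numpy as np

def cgnr(W, R, n_iter=50, tol=1e-12):
    """Conjugate gradient on the normal equations W^T W mu = W^T R
    (CGLS), minimizing ||W mu - R||^2. Unconstrained sketch only."""
    mu = np.zeros(W.shape[1])
    r = W.T @ (R - W @ mu)    # residual of the normal equations
    p = r.copy()
    for _ in range(n_iter):
        Wp = W @ p
        alpha = (r @ r) / (Wp @ Wp)
        mu = mu + alpha * p
        r_new = r - alpha * (W.T @ Wp)
        if np.linalg.norm(r_new) < tol:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return mu
```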

  4. From free energy to expected energy: Improving energy-based value function approximation in reinforcement learning.

    PubMed

    Elfwing, Stefan; Uchibe, Eiji; Doya, Kenji

    2016-12-01

    Free-energy based reinforcement learning (FERL) was proposed for learning in high-dimensional state and action spaces. However, the FERL method only works well with binary, or close to binary, state input, where the number of active states is fewer than the number of non-active states. In the FERL method, the value function is approximated by the negative free energy of a restricted Boltzmann machine (RBM). In our earlier study, we demonstrated that the performance and robustness of the FERL method can be improved by scaling the free energy by a constant related to the size of the network. In this study, we propose that RBM function approximation can be further improved by approximating the value function by the negative expected energy (EERL), instead of the negative free energy, which also makes it possible to handle continuous state input. We validate our proposed method by demonstrating that EERL: (1) outperforms FERL, as well as standard neural network and linear function approximation, for three versions of a gridworld task with high-dimensional image state input; (2) achieves new state-of-the-art results in stochastic SZ-Tetris in both model-free and model-based learning settings; and (3) significantly outperforms FERL and standard neural network function approximation for a robot navigation task with raw and noisy RGB images as state input and a large number of actions. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.
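
    The quantity FERL uses as its value-function approximation, the negative free energy of an RBM with binary hidden units, can be computed in closed form (a generic sketch; the state vector and parameters below are placeholders):

```python
import numpy as np

def negative_free_energy(s, W, b_vis, b_hid):
    """Negative free energy of an RBM with binary hidden units:
        -F(s) = b_vis . s + sum_j log(1 + exp(b_hid_j + (W^T s)_j))
    where W has shape (n_visible, n_hidden). In FERL this quantity
    serves as the value-function approximation for state s."""
    pre = b_hid + s @ W                      # hidden pre-activations
    return s @ b_vis + np.sum(np.log1p(np.exp(pre)))
```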

  5. Lung nodule malignancy classification using only radiologist-quantified image features as inputs to statistical learning algorithms: probing the Lung Image Database Consortium dataset with two statistical learning methods

    PubMed Central

    Hancock, Matthew C.; Magnan, Jerry F.

    2016-01-01

    Abstract. In the assessment of nodules in CT scans of the lungs, a number of image-derived features are diagnostically relevant. Currently, many of these features are defined only qualitatively, so they are difficult to quantify from first principles. Nevertheless, these features (through their qualitative definitions and interpretations thereof) are often quantified via a variety of mathematical methods for the purpose of computer-aided diagnosis (CAD). To determine the potential usefulness of quantified diagnostic image features as inputs to a CAD system, we investigate the predictive capability of statistical learning methods for classifying nodule malignancy. We utilize the Lung Image Database Consortium dataset and only employ the radiologist-assigned diagnostic feature values for the lung nodules therein, as well as our derived estimates of the diameter and volume of the nodules from the radiologists’ annotations. We calculate theoretical upper bounds on the classification accuracy that are achievable by an ideal classifier that only uses the radiologist-assigned feature values, and we obtain an accuracy of 85.74 (±1.14)%, which is, on average, 4.43% below the theoretical maximum of 90.17%. The corresponding area-under-the-curve (AUC) score is 0.932 (±0.012), which increases to 0.949 (±0.007) when diameter and volume features are included and has an accuracy of 88.08 (±1.11)%. Our results are comparable to those in the literature that use algorithmically derived image-based features, which supports our hypothesis that lung nodules can be classified as malignant or benign using only quantified, diagnostic image features, and indicates the competitiveness of this approach. We also analyze how the classification accuracy depends on specific features and feature subsets, and we rank the features according to their predictive power, statistically demonstrating the top four to be spiculation, lobulation, subtlety, and calcification. PMID:27990453

  6. Lung nodule malignancy classification using only radiologist-quantified image features as inputs to statistical learning algorithms: probing the Lung Image Database Consortium dataset with two statistical learning methods.

    PubMed

    Hancock, Matthew C; Magnan, Jerry F

    2016-10-01

    In the assessment of nodules in CT scans of the lungs, a number of image-derived features are diagnostically relevant. Currently, many of these features are defined only qualitatively, so they are difficult to quantify from first principles. Nevertheless, these features (through their qualitative definitions and interpretations thereof) are often quantified via a variety of mathematical methods for the purpose of computer-aided diagnosis (CAD). To determine the potential usefulness of quantified diagnostic image features as inputs to a CAD system, we investigate the predictive capability of statistical learning methods for classifying nodule malignancy. We utilize the Lung Image Database Consortium dataset and only employ the radiologist-assigned diagnostic feature values for the lung nodules therein, as well as our derived estimates of the diameter and volume of the nodules from the radiologists' annotations. We calculate theoretical upper bounds on the classification accuracy that are achievable by an ideal classifier that only uses the radiologist-assigned feature values, and we obtain an accuracy of 85.74 (±1.14)%, which is, on average, 4.43% below the theoretical maximum of 90.17%. The corresponding area-under-the-curve (AUC) score is 0.932 (±0.012), which increases to 0.949 (±0.007) when diameter and volume features are included and has an accuracy of 88.08 (±1.11)%. Our results are comparable to those in the literature that use algorithmically derived image-based features, which supports our hypothesis that lung nodules can be classified as malignant or benign using only quantified, diagnostic image features, and indicates the competitiveness of this approach. We also analyze how the classification accuracy depends on specific features and feature subsets, and we rank the features according to their predictive power, statistically demonstrating the top four to be spiculation, lobulation, subtlety, and calcification.
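
    The theoretical upper bound on accuracy for a classifier restricted to discrete feature vectors can be computed by majority vote within each distinct feature vector, since no classifier seeing only those features can do better (a generic sketch of the idea, not the authors' code):

```python
from collections import Counter, defaultdict

def ideal_accuracy_bound(feature_rows, labels):
    """Upper bound on accuracy for any classifier that sees only these
    (discrete) feature vectors: on each distinct vector it can do no
    better than predicting that vector's majority label."""
    groups = defaultdict(Counter)
    for row, y in zip(feature_rows, labels):
        groups[tuple(row)][y] += 1
    correct = sum(max(c.values()) for c in groups.values())
    return correct / len(labels)
```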

  7. Models for forecasting energy use in the US farm sector

    NASA Astrophysics Data System (ADS)

    Christensen, L. R.

    1981-07-01

    Econometric models were developed and estimated for the purpose of forecasting electricity and petroleum demand in US agriculture. A structural approach is pursued which takes account of the fact that the quantity demanded of any one input is a decision made in conjunction with other input decisions. Three different functional forms of varying degrees of complexity are specified for the structural cost function, which describes the cost of production as a function of the level of output and factor prices. Demand for materials (all purchased inputs) is derived from these models. A separate model, which breaks this demand up into demand for the four components of materials, is used to produce forecasts of electricity and petroleum demand in a stepwise manner.

  8. Integrated editing system for Japanese text and image information "Linernote"

    NASA Astrophysics Data System (ADS)

    Tanaka, Kazuto

    The integrated Japanese text editing system "Linernote", developed by Toyo Industries Co., is explained. The system was developed on the concept of electronic publishing. It is composed of an NEC PC-9801 VX personal computer and other peripherals. Sentence, drawing, and image data are input and edited under the system's integrated operating environment, and the final text is printed out by a laser printer. The handling efficiency of time-consuming work such as pattern input or page makeup has been improved by a draft-image indication method on the CRT. It is the latest DTP system equipped with three major functions, namely, typesetting for high-quality text editing, easy drawing/tracing, and high-speed image processing.

  9. Analytically-derived sensitivities in one-dimensional models of solute transport in porous media

    USGS Publications Warehouse

    Knopman, D.S.

    1987-01-01

    Analytically derived sensitivities are presented for parameters in one-dimensional models of solute transport in porous media. Sensitivities were derived by direct differentiation of the closed-form solution for each of the models, and by a time-integral method for two of the models. The models are based on the advection-dispersion equation and include adsorption and first-order chemical decay. Boundary conditions considered are a constant step input of solute, a constant-flux input of solute, and an exponentially decaying input of solute at the upstream boundary. Zero flux is assumed at the downstream boundary. Initial conditions include constant and spatially varying distributions of solute. One model simulates the mixing of solute in an observation well from individual layers in a multilayer aquifer system. Computer programs produce output files compatible with graphics software in which sensitivities are plotted as a function of either time or space. (USGS)
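
    Direct differentiation of a closed-form solution can be illustrated with the leading term of the classic step-input solution of the advection-dispersion equation (the boundary-correction term exp(vx/D)*erfc(...) is dropped here for brevity; this is a generic sketch, not the report's programs):

```python
from math import erfc, exp, sqrt, pi

def conc(x, t, v, D):
    """Leading term of the closed-form step-input solution, C0 = 1:
    C(x,t) ~ 0.5 * erfc(u), with u = (x - v*t) / (2*sqrt(D*t))."""
    u = (x - v * t) / (2.0 * sqrt(D * t))
    return 0.5 * erfc(u)

def dconc_dD(x, t, v, D):
    """Analytic sensitivity dC/dD by direct differentiation:
    dC/du = -exp(-u^2)/sqrt(pi),  du/dD = -u/(2D), hence
    dC/dD = u * exp(-u^2) / (2 * D * sqrt(pi))."""
    u = (x - v * t) / (2.0 * sqrt(D * t))
    return u * exp(-u * u) / (2.0 * D * sqrt(pi))
```

    A central finite difference on `conc` confirms the analytic derivative, which is exactly the kind of check such sensitivity programs permit.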

  10. Optimum free energy in the reference functional approach for the integral equations theory

    NASA Astrophysics Data System (ADS)

    Ayadim, A.; Oettel, M.; Amokrane, S.

    2009-03-01

    We investigate the question of determining the bulk properties of liquids, required as input for practical applications of the density functional theory of inhomogeneous systems, using density functional theory itself. By considering the reference functional approach in the test particle limit, we derive an expression of the bulk free energy that is consistent with the closure of the Ornstein-Zernike equations in which the bridge functions are obtained from the reference system bridge functional. By examining the connection between the free energy functional and the formally exact bulk free energy, we obtain an improved expression of the corresponding non-local term in the standard reference hypernetted chain theory derived by Lado. In this way, we also clarify the meaning of the recently proposed criterion for determining the optimum hard-sphere diameter in the reference system. This leads to a theory in which the sole input is the reference system bridge functional both for the homogeneous system and the inhomogeneous one. The accuracy of this method is illustrated with the standard case of the Lennard-Jones fluid and with a Yukawa fluid with very short range attraction.

  11. A curve-fitting approach to estimate the arterial plasma input function for the assessment of glucose metabolic rate and response to treatment.

    PubMed

    Vriens, Dennis; de Geus-Oei, Lioe-Fee; Oyen, Wim J G; Visser, Eric P

    2009-12-01

    For the quantification of dynamic (18)F-FDG PET studies, the arterial plasma time-activity concentration curve (APTAC) needs to be available. This can be obtained using serial sampling of arterial blood or an image-derived input function (IDIF). Arterial sampling is invasive and often not feasible in practice; IDIFs are biased because of partial-volume effects and cannot be used when no large arterial blood pool is in the field of view. We propose a mathematic function, consisting of an initial linear rising activity concentration followed by a triexponential decay, to describe the APTAC. This function was fitted to 80 oncologic patients and verified for 40 different oncologic patients by area-under-the-curve (AUC) comparison, Patlak glucose metabolic rate (MR(glc)) estimation, and therapy response monitoring (Delta MR(glc)). The proposed function was compared with the gold standard (serial arterial sampling) and the IDIF. To determine the free parameters of the function, plasma time-activity curves based on arterial samples in 80 patients were fitted after normalization for administered activity (AA) and initial distribution volume (iDV) of (18)F-FDG. The medians of these free parameters were used for the model. In 40 other patients (20 baseline and 20 follow-up dynamic (18)F-FDG PET scans), this model was validated. The population-based curve, individually calibrated by AA and iDV (APTAC(AA/iDV)), by 1 late arterial sample (APTAC(1 sample)), and by the individual IDIF (APTAC(IDIF)), was compared with the gold standard of serial arterial sampling (APTAC(sampled)) using the AUC. Additionally, these 3 methods of APTAC determination were evaluated with Patlak MR(glc) estimation and with Delta MR(glc) for therapy effects using serial sampling as the gold standard. Excellent individual fits to the function were derived with significantly different decay constants (P < 0.001). 
Correlations between AUC from APTAC(AA/iDV), APTAC(1 sample), and APTAC(IDIF) with the gold standard (APTAC(sampled)) were 0.880, 0.994, and 0.856, respectively. For MR(glc), these correlations were 0.963, 0.994, and 0.966, respectively. In response monitoring, these correlations were 0.947, 0.982, and 0.949, respectively. Additional scaling by 1 late arterial sample showed a significant improvement (P < 0.001). The fitted input function calibrated for AA and iDV performed similarly to IDIF. Performance improved significantly using 1 late arterial sample. The proposed model can be used when an IDIF is not available or when serial arterial sampling is not feasible.
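
    The proposed APTAC shape (an initial linear rise followed by a triexponential decay, continuous at the peak) can be sketched as below; the peak time, amplitudes, and decay constants are made-up placeholders, not the paper's fitted population medians.

```python
import numpy as np

# Hypothetical parameters for illustration only
T_PEAK = 0.6              # min, end of the linear rise
AMPS   = (0.7, 0.2, 0.1)  # relative amplitudes (sum to 1 for continuity)
DECAY  = (4.0, 0.5, 0.01) # decay constants, 1/min
PEAK   = 30.0             # activity concentration at the peak

def aptac(t):
    """Linear rise to the peak followed by a triexponential decay."""
    t = np.asarray(t, dtype=float)
    rise  = PEAK * t / T_PEAK
    decay = PEAK * sum(a * np.exp(-l * (t - T_PEAK)) for a, l in zip(AMPS, DECAY))
    return np.where(t <= T_PEAK, rise, decay)

t = np.linspace(0.0, 60.0, 6001)      # a 60-min dynamic scan
y = aptac(t)
auc = float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2)  # trapezoidal AUC
```

    Because the amplitudes sum to one, the decay branch meets the linear rise exactly at the peak.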

  12. Diagnostic accuracy of dynamic contrast-enhanced MR imaging using a phase-derived vascular input function in the preoperative grading of gliomas.

    PubMed

    Nguyen, T B; Cron, G O; Mercier, J F; Foottit, C; Torres, C H; Chakraborty, S; Woulfe, J; Jansen, G H; Caudrelier, J M; Sinclair, J; Hogan, M J; Thornhill, R E; Cameron, I G

    2012-09-01

    Estimates of tumor plasma volume and K(trans) obtained with DCE MR imaging can be biased by poor estimation of the VIF. In this study, we evaluated the diagnostic accuracy of a novel technique using a phase-derived VIF and "bookend" T1 measurements in the preoperative grading of patients with suspected gliomas. This prospective study included 46 patients with a new pathologically confirmed diagnosis of glioma. Both magnitude and phase images were acquired during DCE MR imaging for estimates of K(trans)_φ and V(p)_φ (calculated from a phase-derived VIF and bookend T1 measurements) as well as K(trans)_SI and V(p)_SI (calculated from a magnitude-derived VIF without T1 measurements). Median K(trans)_φ values were 0.0041 minutes(-1) (95% CI, 0.00062-0.033), 0.031 minutes(-1) (0.011-0.150), and 0.088 minutes(-1) (0.069-0.110) for grade II, III, and IV gliomas, respectively (P ≤ .05 for each). Median V(p)_φ values were 0.64 mL/100 g (0.06-1.40), 0.98 mL/100 g (0.34-2.20), and 2.16 mL/100 g (1.8-3.1), with P = .15 between grade II and III gliomas and P = .015 between grade III and IV gliomas. In differentiating low-grade from high-grade gliomas, AUCs for K(trans)_φ, V(p)_φ, K(trans)_SI, and V(p)_SI were 0.87 (0.73-1), 0.84 (0.69-0.98), 0.81 (0.59-1), and 0.84 (0.66-0.91), respectively. The differences between the AUCs were not statistically significant. K(trans)_φ and V(p)_φ are parameters that can help in differentiating low-grade from high-grade gliomas.
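
    The reported AUCs can be computed without any ROC-curve construction by using the standard equivalence between the ROC AUC and the normalized Mann-Whitney U statistic; a minimal sketch with invented toy values (not the paper's data):

```python
def roc_auc(low_grade, high_grade):
    """ROC AUC as the normalized Mann-Whitney U statistic: the probability
    that a randomly chosen high-grade value exceeds a randomly chosen
    low-grade value (ties count one half)."""
    wins = sum((h > l) + 0.5 * (h == l) for h in high_grade for l in low_grade)
    return wins / (len(high_grade) * len(low_grade))

# Illustrative K(trans)-like values (made up for the example)
low  = [0.004, 0.010, 0.002, 0.008]
high = [0.031, 0.088, 0.150, 0.069]
print(roc_auc(low, high))  # → 1.0 (perfect separation in this toy case)
```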

  13. An orthogonal oriented quadrature hexagonal image pyramid

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.

    1987-01-01

    An image pyramid has been developed with basis functions that are orthogonal, self-similar, and localized in space, spatial frequency, orientation, and phase. The pyramid operates on a hexagonal sample lattice. The set of seven basis functions consist of three even high-pass kernels, three odd high-pass kernels, and one low-pass kernel. The three even kernels are identified when rotated by 60 or 120 deg, and likewise for the odd. The seven basis functions occupy a point and a hexagon of six nearest neighbors on a hexagonal sample lattice. At the lowest level of the pyramid, the input lattice is the image sample lattice. At each higher level, the input lattice is provided by the low-pass coefficients computed at the previous level. At each level, the output is subsampled in such a way as to yield a new hexagonal lattice with a spacing sq rt 7 larger than the previous level, so that the number of coefficients is reduced by a factor of 7 at each level. The relationship between this image code and the processing architecture of the primate visual cortex is discussed.

  14. Incorporating User Input in Template-Based Segmentation

    PubMed Central

    Vidal, Camille; Beggs, Dale; Younes, Laurent; Jain, Sanjay K.; Jedynak, Bruno

    2015-01-01

    We present a simple and elegant method to incorporate user input in a template-based segmentation method for diseased organs. The user provides a partial segmentation of the organ of interest, which is used to guide the template towards its target. The user also highlights some elements of the background that should be excluded from the final segmentation. We derive by likelihood maximization a registration algorithm from a simple statistical image model in which the user labels are modeled as Bernoulli random variables. The resulting registration algorithm minimizes the sum of square differences between the binary template and the user labels, while preventing the template from shrinking, and penalizing for the inclusion of background elements into the final segmentation. We assess the performance of the proposed algorithm on synthetic images in which the amount of user annotation is controlled. We demonstrate our algorithm on the segmentation of the lungs of Mycobacterium tuberculosis infected mice from μCT images. PMID:26146532

  15. Estimation and Simulation of Slow Crack Growth Parameters from Constant Stress Rate Data

    NASA Technical Reports Server (NTRS)

    Salem, Jonathan A.; Weaver, Aaron S.

    2003-01-01

    Closed form, approximate functions for estimating the variances and degrees-of-freedom associated with the slow crack growth parameters n, D, B, and A(sup *) as measured using constant stress rate ('dynamic fatigue') testing were derived by using propagation of errors. Estimates made with the resulting functions and slow crack growth data for a sapphire window were compared to the results of Monte Carlo simulations. The functions for estimation of the variances of the parameters were derived both with and without logarithmic transformation of the initial slow crack growth equations. The transformation was performed to make the functions both more linear and more normal. Comparison of the Monte Carlo results and the closed form expressions derived with propagation of errors indicated that linearization is not required for good estimates of the variances of parameters n and D by the propagation of errors method. However, good estimates of the variances of the parameters B and A(sup *) could only be made when the starting slow crack growth equation was transformed and the coefficients of variation of the input parameters were not too large. This was partially a result of the skewed distributions of B and A(sup *). Parametric variation of the input parameters was used to determine an acceptable range for using closed form approximate equations derived from propagation of errors.
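
    The comparison strategy can be sketched generically: in constant stress rate analysis the log-transformed fit gives slope = 1/(n+1), so n = 1/slope - 1 and propagation of errors yields Var(n) ≈ Var(slope)/slope⁴, which can be checked against a Monte Carlo simulation. The fit values below are hypothetical, not the sapphire data.

```python
import random

# Hypothetical regression results with a small coefficient of variation (2%)
slope, sd_slope = 0.05, 0.001

# Propagation-of-errors (delta-method) variance of n = 1/slope - 1
var_prop = sd_slope**2 / slope**4

# Monte Carlo check: sample the slope, transform, and take the sample variance
random.seed(1)
samples = [1.0 / random.gauss(slope, sd_slope) - 1.0 for _ in range(200_000)]
mean = sum(samples) / len(samples)
var_mc = sum((s - mean)**2 for s in samples) / (len(samples) - 1)
```

    With a small coefficient of variation the two estimates agree closely; as the abstract notes, the agreement degrades when the input scatter grows.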

  16. Using a pseudo-dynamic source inversion approach to improve earthquake source imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Song, S. G.; Dalguer, L. A.; Clinton, J. F.

    2014-12-01

    Imaging a high-resolution spatio-temporal slip distribution of an earthquake rupture is a core research goal in seismology. In general we expect to obtain a higher quality source image by improving the observational input data (e.g. using more, higher-quality near-source stations). However, recent studies show that increasing the surface station density alone does not significantly improve source inversion results (Custodio et al. 2005; Zhang et al. 2014). We introduce correlation structures between the kinematic source parameters: slip, rupture velocity, and peak slip velocity (Song et al. 2009; Song and Dalguer 2013) in the non-linear source inversion. The correlation structures are physical constraints derived from rupture dynamics that effectively regularize the model space and may improve source imaging. We name this approach pseudo-dynamic source inversion. We investigate the effectiveness of this pseudo-dynamic source inversion method by inverting low frequency velocity waveforms from a synthetic dynamic rupture model of a buried vertical strike-slip event (Mw 6.5) in a homogeneous half space. In the inversion, we use a genetic algorithm in a Bayesian framework (Monelli et al. 2008) and a dynamically consistent regularized Yoffe function (Tinti et al. 2005) as a single-window slip velocity function. We search for local rupture velocity directly in the inversion, and calculate the rupture time using a ray-tracing technique. We implement both auto- and cross-correlation of slip, rupture velocity, and peak slip velocity in the prior distribution. Our results suggest that kinematic source model estimates capture the major features of the target dynamic model. The estimated rupture velocity closely matches the target distribution from the dynamic rupture model, and the derived rupture time is smoother than the one we searched directly.
By implementing both auto- and cross-correlation of kinematic source parameters, in comparison to traditional smoothing constraints, we are in effect regularizing the model space in a more physics-based manner without losing resolution of the source image. Further investigation is needed to tune the related parameters of pseudo-dynamic source inversion and the relative weighting between the prior and the likelihood function in the Bayesian inversion.

  17. Human Systems Integration: Requirements and Functional Decomposition

    NASA Technical Reports Server (NTRS)

    Berson, Barry; Gershzohn, Gary; Boltz, Laura; Wolf, Russ; Schultz, Mike

    2005-01-01

    This deliverable was intended as an input to the Access 5 Policy and Simulation Integrated Product Teams. This document contains high-level pilot functionality for operations in the National Airspace System above FL430. Based on the derived pilot functions the associated pilot information and control requirements are given.

  18. Improved patch-based learning for image deblurring

    NASA Astrophysics Data System (ADS)

    Dong, Bo; Jiang, Zhiguo; Zhang, Haopeng

    2015-05-01

    Most recent image deblurring methods use only the valid information found in the input image as the clue for restoring the degraded region. These methods usually suffer from insufficient prior information and relatively poor adaptiveness. The patch-based method not only uses the valid information of the input image itself, but also utilizes the prior information of sample images to improve adaptiveness. However, the cost function of this method is quite time-consuming, and the method may also produce ringing artifacts. In this paper, we propose an improved non-blind deblurring algorithm based on learning patch likelihoods. On one hand, we consider the effect of the Gaussian mixture model components with different weights and normalize the weight values, which optimizes the cost function and reduces running time. On the other hand, a post-processing method is proposed to remove the ringing artifacts produced by the traditional patch-based method. Extensive experiments are performed. Experimental results verify that our method can effectively reduce execution time, suppress ringing artifacts, and preserve the quality of the deblurred image.

  19. Statistical Hypothesis Testing using CNN Features for Synthesis of Adversarial Counterexamples to Human and Object Detection Vision Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raj, Sunny; Jha, Sumit Kumar; Pullum, Laura L.

    Validating the correctness of human detection vision systems is crucial for safety applications such as pedestrian collision avoidance in autonomous vehicles. The enormous space of possible inputs to such an intelligent system makes it difficult to design test cases for such systems. In this report, we present our tool MAYA that uses an error model derived from a convolutional neural network (CNN) to explore the space of images similar to a given input image, and then tests the correctness of a given human or object detection system on such perturbed images. We demonstrate the capability of our tool on the pre-trained Histogram-of-Oriented-Gradients (HOG) human detection algorithm implemented in the popular OpenCV toolset and the Caffe object detection system pre-trained on the ImageNet benchmark. Our tool may serve as a testing resource for the designers of intelligent human and object detection systems.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Brian J.; Marcy, Peter W.

    We will investigate the use of derivative information in complex computer model emulation when the correlation function is of the compactly supported Bohman class. To this end, a Gaussian process model similar to that used by Kaufman et al. (2011) is extended to a situation where first partial derivatives in each dimension are calculated at each input site (i.e. using gradients). A simulation study in the ten-dimensional case is conducted to assess the utility of the Bohman correlation function against strictly positive correlation functions when a high degree of sparsity is induced.
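
    The Bohman class referred to above is a standard compactly supported correlation function: it is positive inside a finite range and identically zero beyond it, which is what induces sparsity in the covariance matrix. A sketch of the usual Bohman form (the paper's exact parameterization may differ):

```python
import math

def bohman(r, range_=1.0):
    """Compactly supported Bohman correlation: equals 1 at r = 0,
    decreases to 0 at r = range_, and is exactly 0 beyond it."""
    u = abs(r) / range_
    if u >= 1.0:
        return 0.0
    return (1.0 - u) * math.cos(math.pi * u) + math.sin(math.pi * u) / math.pi
```

    Entries of a covariance matrix built from this function vanish exactly for input pairs farther apart than the range, so sparse linear algebra can be used.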

  1. Semi-globally input-to-state stable controller design for flexible spacecraft attitude stabilization under bounded disturbances

    NASA Astrophysics Data System (ADS)

    Hu, Qinglei

    2010-02-01

    A semi-globally input-to-state stable (ISS) control law is derived for flexible spacecraft attitude maneuvers in the presence of parameter uncertainties and external disturbances. The modified Rodrigues parameters (MRPs) are used as the kinematic variables since they are nonsingular for all possible rotations. This novel, simple control is a proportional-plus-derivative (PD) type controller plus a sign function, obtained through a special Lyapunov function construction involving the sum of quadratic terms in the angular velocities, kinematic parameters, modal variables and the cross state weighting. A sufficient condition under which this nonlinear PD-type control law can render the system semi-globally input-to-state stable is provided, such that the closed-loop system is robust with respect to any disturbance within a quantifiable restriction on the amplitude, as well as the set of initial conditions, if the control gains are designed appropriately. In addition to detailed derivations of the new controller design and a rigorous sketch of all the associated stability and attitude convergence proofs, extensive simulation studies have been conducted to validate the design, and the results are presented to highlight the ensuing closed-loop performance benefits compared with conventional control schemes.
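
    The PD-plus-sign structure can be illustrated on a single-axis rigid-body toy model, deliberately far simpler than the paper's flexible-spacecraft MRP dynamics; gains, inertia, and the bounded disturbance below are arbitrary choices for the sketch.

```python
import math

# Toy single-axis model: J*w' = u + d(t), q' = w, |d| <= d_max
J, kp, kd, ks = 1.0, 2.0, 3.0, 0.2   # inertia, PD gains, sign-term gain
d_max, dt = 0.1, 0.001
q, w = 0.5, 0.0                       # initial attitude error and rate

for step in range(int(20.0 / dt)):    # simulate 20 s with forward Euler
    t = step * dt
    d = d_max * math.sin(2.0 * t)     # bounded external disturbance
    u = -kp * q - kd * w - ks * math.copysign(1.0, w)  # PD plus sign term
    w += dt * (u + d) / J
    q += dt * w
```

    Consistent with the ISS property, the state does not converge exactly to zero under the persistent disturbance but is driven into a small residual set whose size shrinks as the gains grow.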

  2. Automatic Feature Extraction System.

    DTIC Science & Technology

    1982-12-01

    exploitation. It was used for processing of black and white and multispectral reconnaissance photography, side-looking synthetic aperture radar imagery...the image data and different software modules for image queuing and formatting, the result of the input process will be images in standard AFES file...timely manner. The FFS configuration provides the environment necessary for integrated testing of image processing functions and design and

  3. Illumination normalization of face image based on illuminant direction estimation and improved Retinex.

    PubMed

    Yi, Jizheng; Mao, Xia; Chen, Lijiang; Xue, Yuli; Rovetta, Alberto; Caleanu, Catalin-Daniel

    2015-01-01

    Illumination normalization of face images for face recognition and facial expression recognition is one of the most frequent and difficult problems in image processing. In order to obtain a face image with normal illumination, our method first divides the input face image into sixteen local regions and calculates the edge level percentage in each of them. Second, three local regions, which meet the requirements of lower complexity and larger average gray value, are selected to calculate the final illuminant direction according to the error function between the measured intensity and the calculated intensity, and the constraint function for an infinite light source model. After determining the final illuminant direction of the input face image, the Retinex algorithm is improved in two respects: (1) we optimize the surround function; (2) we intercept the values at both ends of the histogram of the face image, determine the range of gray levels, and stretch that range onto the dynamic range of the display device. Finally, we achieve illumination normalization and obtain the final face image. Unlike previous illumination normalization approaches, the method proposed in this paper does not require any training step or any knowledge of a 3D face and reflective surface model. The experimental results using the extended Yale face database B and CMU-PIE show that our method achieves a better normalization effect compared with existing techniques.
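
    The histogram interception-and-stretch step can be sketched as a percentile clip followed by a linear stretch onto the display range; the percentile cutoffs below are illustrative assumptions, not the paper's values.

```python
import numpy as np

def stretch_to_display(img, lo_pct=1.0, hi_pct=99.0, out_max=255):
    """Intercept both tails of the histogram (percentile clip) and
    linearly stretch the remaining gray-level range onto [0, out_max]."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    out = np.clip(img, lo, hi)
    out = (out - lo) / max(hi - lo, 1e-12) * out_max
    return out.astype(np.uint8)

# Synthetic "face image" stand-in with a narrow gray-level distribution
img = np.random.default_rng(0).normal(128, 40, (64, 64))
norm = stretch_to_display(img)
```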

  4. The Effects of a Change in the Variability of Irrigation Water

    NASA Astrophysics Data System (ADS)

    Lyon, Kenneth S.

    1983-10-01

    This paper examines the short-run effects upon several variables of an increase in the variability of an input. The measure of an increase in the variability is the "mean preserving spread" suggested by Rothschild and Stiglitz (1970). The variables examined are real income (utility), expected profits, expected output, the quantity used of the controllable input, and the shadow price of the stochastic input. Four striking features of the results follow: (1) The concepts that have been useful in summarizing deterministic comparative static results are nearly absent when an input is stochastic. (2) Most of the signs of the partial derivatives depend upon more than concavity of the utility and production functions. (3) If the utility function is not "too" risk averse, then the risk-neutral results hold for the risk-aversion case. (4) If the production function is Cobb-Douglas, then definite results are achieved if the utility function is linear or if the "degree of risk-aversion" is "small."

  5. Digital image processing of vascular angiograms

    NASA Technical Reports Server (NTRS)

    Selzer, R. H.; Blankenhorn, D. H.; Beckenbach, E. S.; Crawford, D. W.; Brooks, S. H.

    1975-01-01

    A computer image processing technique was developed to estimate the degree of atherosclerosis in the human femoral artery. With an angiographic film of the vessel as input, the computer was programmed to estimate vessel abnormality through a series of measurements, some derived primarily from the vessel edge information and others from optical density variations within the lumen shadow. These measurements were combined into an atherosclerosis index, which was found to correlate well with both visual and chemical estimates of atherosclerotic disease.

  6. Display nonlinearity in digital image processing for visual communications

    NASA Astrophysics Data System (ADS)

    Peli, Eli

    1992-11-01

    The luminance emitted from a cathode ray tube (CRT) display is a nonlinear function (the gamma function) of the input video signal voltage. In most analog video systems, compensation for this nonlinear transfer function is implemented in the camera amplifiers. When CRT displays are used to present psychophysical stimuli in vision research, the specific display nonlinearity usually is measured and accounted for to ensure that the luminance of each pixel in the synthetic image properly represents the intended value. However, when using digital image processing, the linear analog-to-digital converters store a digital image that is nonlinearly related to the displayed or recorded image. The effect of this nonlinear transformation on a variety of image-processing applications used in visual communications is described.
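
    A minimal sketch of the gamma nonlinearity and its compensating correction, assuming a nominal exponent of 2.2 (a real display's gamma must be measured, as the abstract emphasizes):

```python
import numpy as np

GAMMA = 2.2  # nominal CRT exponent; an assumption for this sketch

def voltage_to_luminance(v):
    """Displayed luminance as a nonlinear (gamma) function of the
    normalized video signal voltage v in [0, 1]."""
    return np.power(v, GAMMA)

def gamma_correct(v):
    """Inverse mapping applied so that stored digital values end up
    linearly related to displayed luminance."""
    return np.power(v, 1.0 / GAMMA)

v = np.linspace(0.0, 1.0, 11)
roundtrip = voltage_to_luminance(gamma_correct(v))  # linear again
```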

  7. Display nonlinearity in digital image processing for visual communications

    NASA Astrophysics Data System (ADS)

    Peli, Eli

    1991-11-01

    The luminance emitted from a cathode ray tube (CRT) display is a nonlinear function (the gamma function) of the input video signal voltage. In most analog video systems, compensation for this nonlinear transfer function is implemented in the camera amplifiers. When CRT displays are used to present psychophysical stimuli in vision research, the specific display nonlinearity usually is measured and accounted for to ensure that the luminance of each pixel in the synthetic image properly represents the intended value. However, when using digital image processing, the linear analog-to-digital converters store a digital image that is nonlinearly related to the displayed or recorded image. This paper describes the effect of this nonlinear transformation on a variety of image-processing applications used in visual communications.

  8. Two generalizations of Kohonen clustering

    NASA Technical Reports Server (NTRS)

    Bezdek, James C.; Pal, Nikhil R.; Tsao, Eric C. K.

    1993-01-01

    The relationship between the sequential hard c-means (SHCM), learning vector quantization (LVQ), and fuzzy c-means (FCM) clustering algorithms is discussed. LVQ and SHCM suffer from several major problems. For example, they depend heavily on initialization. If the initial values of the cluster centers are outside the convex hull of the input data, such algorithms, even if they terminate, may not produce meaningful results in terms of prototypes for cluster representation. This is due in part to the fact that they update only the winning prototype for every input vector. The impact and interaction of these two families with Kohonen's self-organizing feature mapping (SOFM), which is not a clustering method but which often lends ideas to clustering algorithms, is discussed. Two generalizations of LVQ that are explicitly designed as clustering algorithms are then presented; these algorithms are referred to as generalized LVQ (GLVQ) and fuzzy LVQ (FLVQ). Learning rules are derived to optimize an objective function whose goal is to produce 'good clusters'. GLVQ/FLVQ (may) update every node in the clustering net for each input vector. Neither GLVQ nor FLVQ depends upon a choice for the update neighborhood or learning rate distribution - these are taken care of automatically. Segmentation of a gray tone image is used as a typical application of these algorithms to illustrate the performance of GLVQ/FLVQ.
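
    The winner-only update that makes SHCM/LVQ initialization-sensitive can be sketched in one dimension (a toy reduction, not the GLVQ/FLVQ learning rules derived in the paper):

```python
import numpy as np

def sequential_lvq(data, prototypes, rate=0.1, epochs=50):
    """Sequential competitive learning: for each input, only the winning
    (nearest) prototype is moved toward it - the property criticized above,
    since a badly initialized prototype may never win and never move."""
    protos = np.array(prototypes, dtype=float)
    for _ in range(epochs):
        for x in data:
            winner = np.argmin(np.abs(protos - x))
            protos[winner] += rate * (x - protos[winner])
    return protos

# Two well-separated 1-D clusters and prototypes initialized inside the hull
data = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
protos = sequential_lvq(data, [0.5, 4.5])  # each settles near a cluster mean
```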

  9. Arterial input function of an optical tracer for dynamic contrast enhanced imaging can be determined from pulse oximetry oxygen saturation measurements

    NASA Astrophysics Data System (ADS)

    Elliott, Jonathan T.; Wright, Eric A.; Tichauer, Kenneth M.; Diop, Mamadou; Morrison, Laura B.; Pogue, Brian W.; Lee, Ting-Yim; St. Lawrence, Keith

    2012-12-01

    In many cases, kinetic modeling requires that the arterial input function (AIF)—the time-dependent arterial concentration of a tracer—be characterized. A straightforward method to measure the AIF of red and near-infrared optical dyes (e.g., indocyanine green) using a pulse oximeter is presented. The method is motivated by the ubiquity of pulse oximeters used in both preclinical and clinical applications, as well as the gap in currently available technologies to measure AIFs in small animals. The method is based on quantifying the interference that is observed in the derived arterial oxygen saturation (SaO2) following a bolus injection of a light-absorbing dye. In other words, the change in SaO2 can be converted into dye concentration knowing the chromophore-specific extinction coefficients, the true arterial oxygen saturation, and total hemoglobin concentration. A simple error analysis was performed to highlight potential limitations of the approach, and a validation of the method was conducted in rabbits by comparing the pulse oximetry method with the AIF acquired using a pulse dye densitometer. Considering that determining the AIF is required for performing quantitative tracer kinetics, this method provides a flexible tool for measuring the arterial dye concentration that could be used in a variety of applications.

  10. Arterial input function of an optical tracer for dynamic contrast enhanced imaging can be determined from pulse oximetry oxygen saturation measurements.

    PubMed

    Elliott, Jonathan T; Wright, Eric A; Tichauer, Kenneth M; Diop, Mamadou; Morrison, Laura B; Pogue, Brian W; Lee, Ting-Yim; St Lawrence, Keith

    2012-12-21

    In many cases, kinetic modeling requires that the arterial input function (AIF)--the time-dependent arterial concentration of a tracer--be characterized. A straightforward method to measure the AIF of red and near-infrared optical dyes (e.g., indocyanine green) using a pulse oximeter is presented. The method is motivated by the ubiquity of pulse oximeters used in both preclinical and clinical applications, as well as the gap in currently available technologies to measure AIFs in small animals. The method is based on quantifying the interference that is observed in the derived arterial oxygen saturation (SaO₂) following a bolus injection of a light-absorbing dye. In other words, the change in SaO₂ can be converted into dye concentration knowing the chromophore-specific extinction coefficients, the true arterial oxygen saturation, and total hemoglobin concentration. A simple error analysis was performed to highlight potential limitations of the approach, and a validation of the method was conducted in rabbits by comparing the pulse oximetry method with the AIF acquired using a pulse dye densitometer. Considering that determining the AIF is required for performing quantitative tracer kinetics, this method provides a flexible tool for measuring the arterial dye concentration that could be used in a variety of applications.
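
    The conversion from apparent absorbance changes to dye concentration can be sketched as a generic two-wavelength modified Beer-Lambert unmixing; this is not the authors' exact pulse-oximetry derivation, and the extinction coefficients below are invented placeholders (real values must come from tabulated chromophore spectra).

```python
import numpy as np

# Hypothetical extinction coefficients, rows = chromophores, cols = wavelengths
#               lambda1  lambda2
E = np.array([[0.30,    1.10],    # dye (an ICG-like absorber)
              [0.80,    0.20]])   # hemoglobin term, lumped for illustration

def unmix(delta_mu_a):
    """Solve the 2x2 system delta_mu_a(lambda) = sum_i eps_i(lambda) * dC_i
    for the concentration changes (dye, hemoglobin)."""
    return np.linalg.solve(E.T, np.asarray(delta_mu_a))

true_c = np.array([0.05, 0.02])   # dye bolus + small hemoglobin change
measured = E.T @ true_c           # forward model at the two wavelengths
recovered = unmix(measured)       # recovers the original concentrations
```

    With two wavelengths and two chromophores the system is exactly determined, provided the extinction spectra are not proportional.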

  11. Rotation invariant features for wear particle classification

    NASA Astrophysics Data System (ADS)

    Arof, Hamzah; Deravi, Farzin

    1997-09-01

    This paper investigates the ability of a set of rotation invariant features to classify images of wear particles found in used lubricating oil of machinery. The rotation invariant attribute of the features is derived from the property of the magnitudes of Fourier transform coefficients that do not change with spatial shift of the input elements. By analyzing individual circular neighborhoods centered at every pixel in an image, local and global texture characteristics of an image can be described. A number of input sequences are formed by the intensities of pixels on concentric rings of various radii measured from the center of each neighborhood. Fourier transforming the sequences would generate coefficients whose magnitudes are invariant to rotation. Rotation invariant features extracted from these coefficients were utilized to classify wear particle images that were obtained from a number of different particles captured at different orientations. In an experiment involving images of 6 classes, the circular neighborhood features obtained a 91% recognition rate which compares favorably to a 76% rate achieved by features of a 6 by 6 co-occurrence matrix.
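
    The invariance property underlying these features can be verified directly: rotating the particle circularly shifts each ring's intensity sequence, and a circular shift changes only the Fourier phases, never the magnitudes.

```python
import numpy as np

rng = np.random.default_rng(42)
ring = rng.random(36)            # intensities sampled every 10 degrees
                                 # on one ring of a circular neighborhood
rotated = np.roll(ring, 7)       # the same ring after a 70-degree rotation

features = np.abs(np.fft.fft(ring))
features_rot = np.abs(np.fft.fft(rotated))
# The two magnitude spectra are identical (Fourier shift theorem),
# so features built from them are rotation invariant.
```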

  12. On the way to a microscopic derivation of covariant density functionals in nuclei

    NASA Astrophysics Data System (ADS)

    Ring, Peter

    2018-02-01

    Several methods are discussed to derive covariant density functionals from the microscopic input of bare nuclear forces. In a first step there are semi-microscopic functionals, which are fitted to ab-initio calculations of nuclear matter and depend in addition on very few phenomenological parameters. They are able to describe nuclear properties with the same precision as fully phenomenological functionals. In a second step we present first relativistic Brueckner-Hartree-Fock calculations in finite nuclei in order to study properties of such functionals, which cannot be obtained from nuclear matter calculations.

  13. Feature extraction from high resolution satellite imagery as an input to the development and rapid update of a METRANS geographic information system (GIS).

    DOT National Transportation Integrated Search

    2011-06-01

    This report describes an accuracy assessment of extracted features derived from three : subsets of Quickbird pan-sharpened high resolution satellite image for the area of the : Port of Los Angeles, CA. Visual Learning Systems Feature Analyst and D...

  14. Energy Input Flux in the Global Quiet-Sun Corona

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mac Cormack, Cecilia; Vásquez, Alberto M.; López Fuentes, Marcelo

    We present first results of a novel technique that provides, for the first time, constraints on the energy input flux at the coronal base (r ∼ 1.025 R {sub ⊙}) of the quiet Sun at a global scale. By combining differential emission measure tomography of EUV images with global models of the coronal magnetic field, we estimate the energy input flux at the coronal base that is required to maintain thermodynamically stable structures. The technique is described in detail and first applied to data provided by the Extreme Ultraviolet Imager instrument, on board the Solar TErrestrial RElations Observatory mission, and the Atmospheric Imaging Assembly instrument, on board the Solar Dynamics Observatory mission, for two solar rotations with different levels of activity. Our analysis indicates that the typical energy input flux at the coronal base of magnetic loops in the quiet Sun is in the range ∼0.5–2.0 × 10{sup 5} (erg s{sup −1} cm{sup −2}), depending on the structure size and level of activity. A large fraction of this energy input, or even its totality, could be accounted for by Alfvén waves, as shown by recent independent observational estimates derived from determinations of the non-thermal broadening of spectral lines in the coronal base of quiet-Sun regions. This new tomography product will be useful for the validation of coronal heating models in magnetohydrodynamic simulations of the global corona.

  15. Time-series analysis of energetic electron fluxes (1.2-16 MeV) at geosynchronous altitude. Master's thesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halpin, M.P.

    This project used a Box and Jenkins time-series analysis of energetic electron fluxes measured at geosynchronous orbit in an effort to derive prediction models for the flux in each of five energy channels. In addition, the technique of transfer function modeling described by Box and Jenkins was used in an attempt to derive input-output relationships between the flux channels (viewed as the output) and the solar-wind speed or interplanetary magnetic field (IMF) north-south component, Bz (viewed as the input). The transfer function modeling was done in order to investigate the theoretical dynamic relationship which is believed to exist between the solar wind, the IMF Bz, and the energetic electron flux in the magnetosphere. The models derived from the transfer-function techniques employed were also intended to be used in the prediction of flux values. The results from this study indicate that the energetic electron flux changes in the various channels are dependent on more than simply the solar-wind speed or the IMF Bz.
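
    The core of the transfer-function idea above, i.e. estimating an input-output impulse response between a driver series and a response series, can be sketched with ordinary least squares on lagged inputs. Everything below (series, coefficients, lag count) is a synthetic illustration, not the thesis data or its full Box-Jenkins procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic driver and response: the output is a lagged, weighted copy of the
# input plus noise (values and the one-step delay are hypothetical).
n = 500
u = rng.normal(0.0, 1.0, n)              # input series, e.g. standardized solar-wind speed
h_true = np.array([0.0, 0.5, 0.3, 0.1])  # assumed impulse response with a one-step delay
y = np.convolve(u, h_true)[:n] + rng.normal(0.0, 0.05, n)

# Estimate the impulse response by least squares on lagged inputs, a simple
# stand-in for full Box-Jenkins transfer-function identification.
n_lags = 6
X = np.column_stack([np.concatenate([np.zeros(k), u[:n - k]]) for k in range(n_lags)])
h_est, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.round(h_est[:4], 2))            # should be close to h_true
```

The recovered leading coefficients expose the delay and memory of the response, which is the diagnostic the thesis uses to argue the flux depends on more than one driver.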

  16. Dynamic cardiac PET imaging: extraction of time-activity curves using ICA and a generalized Gaussian distribution model.

    PubMed

    Mabrouk, Rostom; Dubeau, François; Bentabet, Layachi

    2013-01-01

    Kinetic modeling of metabolic and physiologic cardiac processes in small animals requires an input function (IF) and tissue time-activity curves (TACs). In this paper, we present a mathematical method based on independent component analysis (ICA) to extract the IF and the myocardium's TACs directly from dynamic positron emission tomography (PET) images. The method assumes a super-Gaussian distribution model for the blood activity and a sub-Gaussian distribution model for the tissue activity. Our approach was applied to 22 PET measurement sets of small animals, which were obtained with the three most frequently used cardiac radiotracers, namely: desoxy-fluoro-glucose ((18)F-FDG), [(13)N]-ammonia, and [(11)C]-acetate. Our study was extended to human PET measurements obtained with the Rubidium-82 ((82)Rb) radiotracer. The resolved mathematical IF values compare favorably to those derived from curves extracted from regions of interest (ROI), suggesting that the procedure presents a reliable alternative to serial blood sampling for small-animal cardiac PET studies.
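
    The separation principle in this abstract, i.e. one super-Gaussian and one sub-Gaussian source unmixed by ICA, can be sketched with a compact symmetric FastICA in NumPy. The mixture, the mixing matrix, and the Laplace/uniform stand-ins for blood and tissue activity are all synthetic assumptions, not the authors' data or implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins for the two signal classes: a super-Gaussian "blood"
# source (Laplace) and a sub-Gaussian "tissue" source (uniform, unit variance).
n = 5000
S = np.vstack([rng.laplace(size=n),
               rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), n)])
A = np.array([[0.8, 0.4], [0.3, 0.9]])   # unknown spatial mixing
X = A @ S

# Whitening
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# Symmetric FastICA with a tanh contrast function
W = rng.normal(size=(2, 2))
for _ in range(200):
    G = np.tanh(W @ Z)
    W = G @ Z.T / n - np.diag((1.0 - G ** 2).mean(axis=1)) @ W
    U, _, Vt = np.linalg.svd(W)          # symmetric decorrelation: W <- (W W^T)^(-1/2) W
    W = U @ Vt

Y = W @ Z                                # recovered sources (up to sign/order)
kurt = (Y ** 4).mean(axis=1) - 3.0       # excess kurtosis per unit-variance component
print(np.sort(np.round(kurt, 1)))        # one clearly negative (tissue-like), one clearly positive (blood-like)
```

The sign of the excess kurtosis is what lets the method label which recovered component is the blood input function and which is tissue.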

  17. Basic Economic Principles

    NASA Technical Reports Server (NTRS)

    Tideman, T. N.

    1972-01-01

    An economic approach to designing efficient transportation systems involves maximizing an objective function that reflects both goals and costs. A demand curve can be derived by finding the quantities of a good that solve the maximization problem as the price of that good is varied, holding income and the prices of all other goods constant. A supply curve is derived by applying the idea of profit maximization by firms. The production function determines the relationship between input and output.
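
    The demand-curve construction described here can be made concrete with a tiny numeric example. The Cobb-Douglas utility, budget, and parameter values below are hypothetical illustrations chosen so the closed form x* = a*m/p is known:

```python
import numpy as np

# Hypothetical consumer: u(x, y) = 0.3*ln x + 0.7*ln y, budget p*x + y = m,
# with y a numeraire good of price 1. Demand for x solves the maximization
# at each price, holding income m fixed.
a, m = 0.3, 100.0

def demand(p, grid=np.linspace(1e-3, 100.0, 200001)):
    x = grid[grid * p < m]                 # affordable quantities of x
    y = m - p * x                          # remaining budget spent on y
    u = a * np.log(x) + (1 - a) * np.log(y)
    return x[np.argmax(u)]                 # utility-maximizing quantity

for p in (1.0, 2.0, 4.0):
    print(p, round(demand(p), 1))          # closed form predicts x* = a*m/p
```

Varying the price and re-solving traces out the downward-sloping demand curve exactly as the abstract describes.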

  18. JIP: Java image processing on the Internet

    NASA Astrophysics Data System (ADS)

    Wang, Dongyan; Lin, Bo; Zhang, Jun

    1998-12-01

    In this paper, we present JIP - Java Image Processing on the Internet, a new Internet-based application for remote education and software presentation. JIP offers an integrated learning environment on the Internet where remote users not only can share static HTML documents and lecture notes, but also can run and reuse dynamic distributed software components, without having the source code or any extra work of software compilation, installation and configuration. By implementing a platform-independent distributed computational model, local computational resources are consumed instead of the resources on a central server. As an extended Java applet, JIP allows users to select local image files on their computers or specify any image on the Internet using a URL as input. Multimedia lectures such as streaming video/audio and digital images are integrated into JIP and intelligently associated with specific image processing functions. Watching demonstrations and practicing the functions with user-selected input data dramatically encourages learning interest, while promoting the understanding of image processing theory. The JIP framework can be easily applied to other subjects in education or software presentation, such as digital signal processing, business, mathematics, physics, or other areas such as employee training and charged software consumption.

  19. On the sensitivity of complex, internally coupled systems

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1988-01-01

    A method is presented for computing sensitivity derivatives with respect to independent (input) variables for complex, internally coupled systems, while avoiding the cost and inaccuracy of finite differencing performed on the entire system analysis. The method entails two alternative algorithms: the first is based on the classical implicit function theorem formulated on residuals of governing equations, and the second develops the system sensitivity equations in a new form using the partial (local) sensitivity derivatives of the output with respect to the input of each part of the system. A few application examples are presented to illustrate the discussion.
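
    The second algorithm in this abstract, assembling system sensitivity equations from the partial (local) derivatives of each subsystem, can be sketched on a toy two-subsystem example. The coupled functions and coefficients below are hypothetical, chosen only so the result can be checked against finite differencing of the full coupled analysis:

```python
import numpy as np

# Two hypothetical coupled subsystems:
#   y1 = f1(x, y2) = 0.5*x + 0.3*y2
#   y2 = f2(x, y1) = x**2 + 0.2*y1
def solve(x, iters=100):
    y1 = y2 = 0.0
    for _ in range(iters):                 # fixed-point "full system analysis"
        y1 = 0.5 * x + 0.3 * y2
        y2 = x ** 2 + 0.2 * y1
    return np.array([y1, y2])

x0 = 2.0
# Local partial derivatives of each subsystem output w.r.t. the other outputs and x
df_dy = np.array([[0.0, 0.3],              # d f1 / d(y1, y2)
                  [0.2, 0.0]])             # d f2 / d(y1, y2)
df_dx = np.array([0.5, 2 * x0])            # d f / dx at x0

# System sensitivity equations: (I - df_dy) dY/dx = df_dx
dY_dx = np.linalg.solve(np.eye(2) - df_dy, df_dx)

# Check against central finite differencing of the entire coupled analysis
eps = 1e-6
fd = (solve(x0 + eps) - solve(x0 - eps)) / (2 * eps)
print(np.round(dY_dx, 4), np.round(fd, 4))
```

The linear solve touches each subsystem's cheap local derivatives only once, which is the cost advantage over finite differencing the whole coupled analysis that the paper emphasizes.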

  20. Estimation of Enterococci Input from Bathers and Animals on A Recreational Beach Using Camera Images

    PubMed Central

    Wang, John D.; Solo-Gabriele, Helena M.; Abdelzaher, Amir M.; Fleming, Lora E.

    2010-01-01

    Enterococci are used nationwide as a water quality indicator for marine recreational beaches. Prior research has demonstrated that enterococci inputs to the study beach site (located in Miami, FL) are dominated by non-point sources (including humans and animals). We have estimated their respective source functions by developing a counting methodology for individuals in order to better understand their non-point source load impacts. The method utilizes camera images of the beach taken at regular time intervals to determine the number of people and animal visitors. The developed method translates raw image counts for weekdays and weekend days into daily and monthly visitation rates. Enterococci source functions were computed from the observed number of unique individuals for average days of each month of the year, and from average load contributions for humans and for animals. Results indicate that dogs represent a larger source of enterococci than humans and birds. PMID:20381094

  1. Geographic Resources Analysis Support System (GRASS) Version 4.0 User’s Reference Manual

    DTIC Science & Technology

    1992-06-01

    input-image need not be square; before processing, the X and Y dimensions of the input-image are padded with zeroes to the next highest power of two in...structures an input knowledge/control script with an appropriate combination of map layer category values (GRASS raster map layers that contain data on...F cos(x) cosine of x (x is in degrees) F exp(x) exponential function of x F exp(x,y) x to the power y F float(x) convert x to floating point F if

  2. Theory of nonstationary Hawkes processes

    NASA Astrophysics Data System (ADS)

    Tannenbaum, Neta Ravid; Burak, Yoram

    2017-12-01

    We expand the theory of Hawkes processes to the nonstationary case, in which the mutually exciting point processes receive time-dependent inputs. We derive an analytical expression for the time-dependent correlations, which can be applied to networks with arbitrary connectivity, and inputs with arbitrary statistics. The expression shows how the network correlations are determined by the interplay between the network topology, the transfer functions relating units within the network, and the pattern and statistics of the external inputs. We illustrate the correlation structure using several examples in which neural network dynamics are modeled as a Hawkes process. In particular, we focus on the interplay between internally and externally generated oscillations and their signatures in the spike and rate correlation functions.
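
    A self-exciting Hawkes process of the kind analyzed here can be simulated in a few lines with Ogata's thinning algorithm. The exponential kernel and all parameter values below are hypothetical illustrations, not the paper's network model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Univariate Hawkes process with exponential kernel, simulated by thinning.
# Intensity: lam(t) = mu + sum_{t_i < t} a*b*exp(-b*(t - t_i)), branching ratio a < 1.
mu, a, b, T = 1.0, 0.5, 2.0, 1000.0

def intensity(t, events):
    ev = np.asarray(events)
    past = ev[ev < t]
    return mu + np.sum(a * b * np.exp(-b * (t - past)))

events, t = [], 0.0
while t < T:
    # Intensity only decays until the next event, so this bounds lam on [t, next event);
    # the extra a*b covers the jump if t itself is an accepted event.
    lam_bar = intensity(t, events) + a * b
    t += rng.exponential(1.0 / lam_bar)
    if rng.uniform() < intensity(t, events) / lam_bar:
        events.append(t)

# Stationary mean rate of a Hawkes process is mu / (1 - branching ratio)
print(len(events) / T)                      # should be near mu / (1 - a) = 2.0
```

Empirical correlation functions of spike counts from such simulations are what the paper's analytical expressions predict for arbitrary connectivity and input statistics.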

  3. Computer-aided diagnosis of prostate cancer using multi-parametric MRI: comparison between PUN and Tofts models

    NASA Astrophysics Data System (ADS)

    Mazzetti, S.; Giannini, V.; Russo, F.; Regge, D.

    2018-05-01

    Computer-aided diagnosis (CAD) systems are increasingly being used in clinical settings to report multi-parametric magnetic resonance imaging (mp-MRI) of the prostate. Usually, CAD systems automatically highlight cancer-suspicious regions to the radiologist, reducing reader variability and interpretation errors. Nevertheless, implementing this software requires the selection of which mp-MRI parameters can best discriminate between malignant and non-malignant regions. To exploit functional information, some parameters are derived from dynamic contrast-enhanced (DCE) acquisitions. In particular, much CAD software employs pharmacokinetic features, such as K trans and k ep, derived from the Tofts model, to estimate a likelihood map of malignancy. However, non-pharmacokinetic models can be also used to describe DCE-MRI curves, without any requirement for prior knowledge or measurement of the arterial input function, which could potentially lead to large errors in parameter estimation. In this work, we implemented an empirical function derived from the phenomenological universalities (PUN) class to fit DCE-MRI. The parameters of the PUN model are used in combination with T2-weighted and diffusion-weighted acquisitions to feed a support vector machine classifier to produce a voxel-wise malignancy likelihood map of the prostate. The results were all compared to those for a CAD system based on Tofts pharmacokinetic features to describe DCE-MRI curves, using different quality aspects of image segmentation, while also evaluating the number and size of false positive (FP) candidate regions. This study included 61 patients with 70 biopsy-proven prostate cancers (PCa). The metrics used to evaluate segmentation quality between the two CAD systems were not statistically different, although the PUN-based CAD reported a lower number of FP, with reduced size compared to the Tofts-based CAD. 
In conclusion, the CAD software based on PUN parameters is a feasible means with which to detect PCa, without affecting segmentation quality, and hence it could be successfully applied in clinical settings, improving the automated diagnosis process and reducing computational complexity.

  4. Application of snakes and dynamic programming optimisation technique in modeling of buildings in informal settlement areas

    NASA Astrophysics Data System (ADS)

    Rüther, Heinz; Martine, Hagai M.; Mtalo, E. G.

    This paper presents a novel approach to semiautomatic building extraction in informal settlement areas from aerial photographs. The proposed approach uses a strategy of delineating buildings by optimising their approximate building contour position. Approximate building contours are derived automatically by locating elevation blobs in digital surface models. Building extraction is then effected by means of the snakes algorithm and the dynamic programming optimisation technique. With dynamic programming, the building contour optimisation problem is realized through a discrete multistage process and solved by the "time-delayed" algorithm, as developed in this work. The proposed building extraction approach is a semiautomatic process, with user-controlled operations linking fully automated subprocesses. Inputs into the proposed building extraction system are ortho-images and digital surface models, the latter being generated through image matching techniques. Buildings are modeled as "lumps" or elevation blobs in digital surface models, which are derived by altimetric thresholding of digital surface models. Initial windows for building extraction are provided by projecting the elevation blobs centre points onto an ortho-image. In the next step, approximate building contours are extracted from the ortho-image by region growing constrained by edges. Approximate building contours thus derived are inputs into the dynamic programming optimisation process in which final building contours are established. The proposed system is tested on two study areas: Marconi Beam in Cape Town, South Africa, and Manzese in Dar es Salaam, Tanzania. Sixty percent of buildings in the study areas have been extracted and verified and it is concluded that the proposed approach contributes meaningfully to the extraction of buildings in moderately complex and crowded informal settlement areas.

  5. Phase and amplitude beam shaping with two deformable mirrors implementing input plane and Fourier plane phase modifications.

    PubMed

    Wu, Chensheng; Ko, Jonathan; Rzasa, John R; Paulson, Daniel A; Davis, Christopher C

    2018-03-20

    We find that ideas in optical image encryption can be very useful for adaptive optics in achieving simultaneous phase and amplitude shaping of a laser beam. An adaptive optics system with simultaneous phase and amplitude shaping ability is very desirable for atmospheric turbulence compensation. Atmospheric turbulence-induced beam distortions can jeopardize the effectiveness of optical power delivery for directed-energy systems and optical information delivery for free-space optical communication systems. In this paper, a prototype adaptive optics system is proposed based on a famous image encryption structure. The major change is to replace the two random phase plates at the input plane and Fourier plane of the encryption system, respectively, with two deformable mirrors that perform on-demand phase modulations. A Gaussian beam is used as an input to replace the conventional image input. We show through theory, simulation, and experiments that the slightly modified image encryption system can be used to achieve arbitrary phase and amplitude beam shaping within the limits of stroke range and influence function of the deformable mirrors. In application, the proposed technique can be used to perform mode conversion between optical beams, generate structured light signals for imaging and scanning, and compensate atmospheric turbulence-induced phase and amplitude beam distortions.

  6. MARVIN: a medical research application framework based on open source software.

    PubMed

    Rudolph, Tobias; Puls, Marc; Anderegg, Christoph; Ebert, Lars; Broehan, Martina; Rudin, Adrian; Kowal, Jens

    2008-08-01

    This paper describes the open source framework MARVIN for rapid application development in the field of biomedical and clinical research. MARVIN applications consist of modules that can be plugged together in order to provide the functionality required for a specific experimental scenario. Application modules work on a common patient database that is used to store and organize medical data as well as derived data. MARVIN provides a flexible input/output system with support for many file formats including DICOM, various 2D image formats and surface mesh data. Furthermore, it implements an advanced visualization system and interfaces to a wide range of 3D tracking hardware. Since it uses only highly portable libraries, MARVIN applications run on Unix/Linux, Mac OS X and Microsoft Windows.

  7. General equations for optimal selection of diagnostic image acquisition parameters in clinical X-ray imaging.

    PubMed

    Zheng, Xiaoming

    2017-12-01

    The purpose of this work was to examine the effects of relationship functions between diagnostic image quality and radiation dose on the governing equations for image acquisition parameter variations in X-ray imaging. Various equations were derived for the optimal selection of peak kilovoltage (kVp) and exposure parameter (milliAmpere second, mAs) in computed tomography (CT), computed radiography (CR), and direct digital radiography. Logistic, logarithmic, and linear functions were employed to establish the relationship between radiation dose and diagnostic image quality. The radiation dose to the patient, as a function of image acquisition parameters (kVp, mAs) and patient size (d), was used in radiation dose and image quality optimization. Both logistic and logarithmic functions resulted in the same governing equation for optimal selection of image acquisition parameters using a dose efficiency index. For image quality as a linear function of radiation dose, the same governing equation was derived from the linear relationship. The general equations should be used in guiding clinical X-ray imaging through optimal selection of image acquisition parameters. The radiation dose to the patient could be reduced from current levels in medical X-ray imaging.

  8. Motion-gated acquisition for in vivo optical imaging

    PubMed Central

    Gioux, Sylvain; Ashitate, Yoshitomo; Hutteman, Merlijn; Frangioni, John V.

    2009-01-01

    Wide-field continuous wave fluorescence imaging, fluorescence lifetime imaging, frequency domain photon migration, and spatially modulated imaging have the potential to provide quantitative measurements in vivo. However, most of these techniques have not yet been successfully translated to the clinic due to challenging environmental constraints. In many circumstances, cardiac and respiratory motion greatly impair image quality and/or quantitative processing. To address this fundamental problem, we have developed a low-cost, field-programmable gate array–based, hardware-only gating device that delivers a phase-locked acquisition window of arbitrary delay and width that is derived from an unlimited number of pseudo-periodic and nonperiodic input signals. All device features can be controlled manually or via USB serial commands. The working range of the device spans the extremes of mouse electrocardiogram (1000 beats per minute) to human respiration (4 breaths per minute), with timing resolution ⩽0.06%, and jitter ⩽0.008%, of the input signal period. We demonstrate the performance of the gating device, including dramatic improvements in quantitative measurements, in vitro using a motion simulator and in vivo using near-infrared fluorescence angiography of beating pig heart. This gating device should help to enable the clinical translation of promising new optical imaging technologies. PMID:20059276

  9. Gradient-based multiresolution image fusion.

    PubMed

    Petrović, Vladimir S; Xydeas, Costas S

    2004-02-01

    A novel approach to multiresolution signal-level image fusion is presented for accurately transferring visual information from any number of input image signals, into a single fused image without loss of information or the introduction of distortion. The proposed system uses a "fuse-then-decompose" technique realized through a novel, fusion/decomposition system architecture. In particular, information fusion is performed on a multiresolution gradient map representation domain of image signal information. At each resolution, input images are represented as gradient maps and combined to produce new, fused gradient maps. Fused gradient map signals are processed, using gradient filters derived from high-pass quadrature mirror filters to yield a fused multiresolution pyramid representation. The fused output image is obtained by applying, on the fused pyramid, a reconstruction process that is analogous to that of conventional discrete wavelet transform. This new gradient fusion significantly reduces the amount of distortion artefacts and the loss of contrast information usually observed in fused images obtained from conventional multiresolution fusion schemes. This is because fusion in the gradient map domain significantly improves the reliability of the feature selection and information fusion processes. Fusion performance is evaluated through informal visual inspection and subjective psychometric preference tests, as well as objective fusion performance measurements. Results clearly demonstrate the superiority of this new approach when compared to conventional fusion systems.
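
    The selection principle behind gradient-domain fusion can be shown at a single scale: per pixel, keep the input whose local gradient activity is larger, so the stronger feature transfers into the fused image. This is a deliberately simplified sketch; the paper operates on a full multiresolution pyramid with QMF-derived gradient filters, and the toy images and box-averaging radius here are assumptions:

```python
import numpy as np

# Local gradient "activity": box-averaged gradient magnitude, so flat interiors
# of a feature still inherit activity from their edges.
def activity(img, r=2):
    gy, gx = np.gradient(img.astype(float))
    g = np.hypot(gx, gy)
    p = np.pad(g, r)
    h, w = img.shape
    return sum(p[i:i + h, j:j + w]
               for i in range(2 * r + 1) for j in range(2 * r + 1))

def fuse(a, b):
    # Per-pixel choose-max selection driven by gradient activity
    return np.where(activity(a) >= activity(b), a, b)

a = np.zeros((32, 32)); a[8:12, :] = 1.0   # feature present only in "sensor" a
b = np.zeros((32, 32)); b[:, 20:24] = 1.0  # feature present only in "sensor" b
f = fuse(a, b)
print(f[10, 2], f[2, 21])                  # both features survive in the fused image
```

Making this selection on gradient maps at every pyramid level, then reconstructing, is what reduces the contrast loss and artefacts the abstract attributes to conventional schemes.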

  10. Quantum dot-based local field imaging reveals plasmon-based interferometric logic in silver nanowire networks.

    PubMed

    Wei, Hong; Li, Zhipeng; Tian, Xiaorui; Wang, Zhuoxian; Cong, Fengzi; Liu, Ning; Zhang, Shunping; Nordlander, Peter; Halas, Naomi J; Xu, Hongxing

    2011-02-09

    We show that the local electric field distribution of propagating plasmons along silver nanowires can be imaged by coating the nanowires with a layer of quantum dots, held off the surface of the nanowire by a nanoscale dielectric spacer layer. In simple networks of silver nanowires with two optical inputs, control of the optical polarization and phase of the input fields directs the guided waves to a specific nanowire output. The QD-luminescent images of these structures reveal that a complete family of phase-dependent, interferometric logic functions can be performed on these simple networks. These results show the potential for plasmonic waveguides to support compact interferometric logic operations.

  11. A Theoretical Exploration of the Function of the Image in Communication.

    ERIC Educational Resources Information Center

    Schrag, Robert L.

    1974-01-01

    The mass media provide a flood of information about people, ideas, and products. With all this input, the individual is often hard pressed to sort these images into a meaningful framework. This article synthesizes some of the concepts of Kenneth Boulding and Daniel Boorstin concerning the image and its effects on the structure of our lives and…

  12. Spatially resolved assessment of hepatic function using 99mTc-IDA SPECT

    PubMed Central

    Wang, Hesheng; Cao, Yue

    2013-01-01

    Purpose: 99mTc-iminodiacetic acid (IDA) hepatobiliary imaging is usually quantified for hepatic function over the entire liver or regions of interest (ROIs) in the liver. The authors presented a method to estimate the hepatic extraction fraction (HEF) voxel-by-voxel from single-photon emission computed tomography (SPECT)/CT with a 99mTc-labeled IDA agent of mebrofenin and evaluated the spatially resolved HEF measurements with an independent physiological measurement. Methods: Fourteen patients with intrahepatic cancers were treated with radiation therapy (RT) and imaged by 99mTc-mebrofenin SPECT before and 1 month after RT. The dynamic SPECT volumes had a resolution of 3.9 × 3.9 × 2.5 mm3. Voxelwise HEF estimates were computed throughout the whole liver (approximately 50 000 voxels) and compared between an arterial input function (AIF) taken from the heart and a vascular input function (VIF) taken from the spleen. The correlation between the mean of the HEFs over the nontumor liver tissue and the overall liver function measured by indocyanine green clearance half-time (T1/2) was assessed. Variation of the voxelwise estimation was evaluated in ROIs drawn in relatively homogeneous regions of the livers. The authors also examined effects of the time range parameter on the voxelwise HEF quantification. Results: The mean of the HEFs over the liver estimated using the AIF significantly correlated with the physiological measurement T1/2 (r = 0.52, p = 0.0004), and the correlation was greatly improved by using the VIF (r = 0.79, p < 0.0001). The parameter of time range for the retention phase did not lead to a significant difference in the means of the HEFs in the ROIs. Using the VIF and a retention phase time range of 7–30 min, the relative variation of the voxelwise HEF in the ROIs was 10% ± 6% of the respective mean HEF.
Conclusions: The voxelwise HEF derived from 99mTc-IDA SPECT by the deconvolution analysis is feasible to assess the spatial distribution of hepatic function in the liver. PMID:24007177
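
    The deconvolution step underlying a voxelwise analysis like this can be sketched in discrete form: a voxel TAC is the input function convolved with an impulse retention response h(t), and recovering h amounts to inverting a lower-triangular convolution matrix. The curves, sampling, and the identification of h's initial amplitude with the extraction fraction are hypothetical illustrations, not the paper's exact model:

```python
import numpy as np

# Synthetic, noise-free example (clinical data would need regularization,
# e.g. ridge or truncated SVD, before inverting).
t = np.arange(1.0, 61.0)                        # minutes
vif = t * np.exp(-t / 3.0)                      # synthetic vascular input function
h_true = 0.7 * np.exp(-(t - 1.0) / 20.0)        # retention response, h(0) = 0.7 plays the HEF role
tac = np.convolve(vif, h_true)[:t.size]         # tissue curve = vif * h

# Lower-triangular (causal) convolution matrix: A[i, j] = vif[i - j] for i >= j
A = np.array([[vif[i - j] if i >= j else 0.0 for j in range(t.size)]
              for i in range(t.size)])
h = np.linalg.solve(A, tac)                     # discrete deconvolution
print(round(h[0], 2))                           # recovers the assumed extraction amplitude
```

Running this per voxel against a spleen-derived input function is the shape of the computation the paper evaluates, with the input-function choice (AIF vs. VIF) changing only the first column of the matrix.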

  13. Comparison of first pass bolus AIFs extracted from sequential 18F-FDG PET and DSC-MRI of mice

    NASA Astrophysics Data System (ADS)

    Evans, Eleanor; Sawiak, Stephen J.; Ward, Alexander O.; Buonincontri, Guido; Hawkes, Robert C.; Adrian Carpenter, T.

    2014-01-01

    Accurate kinetic modelling of in vivo physiological function using positron emission tomography (PET) requires determination of the tracer time-activity curve in plasma, known as the arterial input function (AIF). The AIF is usually determined by invasive blood sampling methods, which are prohibitive in murine studies due to low total blood volumes. Extracting AIFs from PET images is also challenging due to large partial volume effects (PVE). We hypothesise that in combined PET with magnetic resonance imaging (PET/MR), a co-injected bolus of MR contrast agent and PET ligand can be tracked using fast MR acquisitions. This protocol would allow extraction of a MR AIF from MR contrast agent concentration-time curves, at higher spatial and temporal resolution than an image-derived PET AIF. A conversion factor could then be applied to the MR AIF for use in PET kinetic analysis. This work has compared AIFs obtained from sequential DSC-MRI and PET with separate injections of gadolinium contrast agent and 18F-FDG respectively to ascertain the technique's validity. An automated voxel selection algorithm was employed to improve MR AIF reproducibility. We found that MR and PET AIFs displayed similar character in the first pass, confirmed by gamma variate fits (p<0.02). MR AIFs displayed reduced PVE compared to PET AIFs, indicating their potential use in PET/MR studies.
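
    The gamma variate fit used to compare the first-pass AIFs can be sketched compactly: with the bolus arrival time t0 known, C(t) = A*(t-t0)**alpha * exp(-(t-t0)/beta) becomes linear in (ln A, alpha, 1/beta) after taking logs. The curve, noise level, and parameter values below are synthetic, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic first-pass bolus curve with small multiplicative noise
t0, A, alpha, beta = 2.0, 5.0, 2.5, 1.8
t = np.arange(t0 + 0.5, 30.0, 0.5)
tau = t - t0
c = A * tau ** alpha * np.exp(-tau / beta) * np.exp(rng.normal(0, 0.02, t.size))

# Log-linearized least squares: ln c = ln A + alpha*ln(tau) - tau/beta
X = np.column_stack([np.ones_like(tau), np.log(tau), -tau])
coef, *_ = np.linalg.lstsq(X, np.log(c), rcond=None)
A_e, alpha_e, beta_e = np.exp(coef[0]), coef[1], 1.0 / coef[2]
print(round(A_e, 1), round(alpha_e, 2), round(beta_e, 2))
```

In practice t0 is usually unknown and a nonlinear fit (or a scan over t0) is used; the log-linear version is the standard fast initializer for that fit.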

  15. Performances estimation of a rotary traveling wave ultrasonic motor based on two-dimension analytical model.

    PubMed

    Ming, Y; Peiwen, Q

    2001-03-01

    The understanding of ultrasonic motor performances as a function of input parameters, such as the voltage amplitude, driving frequency, and the preload on the rotor, is key to many applications and to the control of ultrasonic motors. This paper presents performance estimation of the piezoelectric rotary traveling wave ultrasonic motor as a function of input voltage amplitude, driving frequency, and preload. The Love equation is used to derive the traveling wave amplitude on the stator surface. With the contact model of the distributed spring-rigid body between the stator and rotor, a two-dimensional analytical model of the rotary traveling wave ultrasonic motor is constructed. Then the performances of steady rotation speed and stall torque are deduced. With the MATLAB computational language and an iteration algorithm, we estimate the performances of rotation speed and stall torque versus the input parameters respectively. The same experiments are completed with the optoelectronic tachometer and stand weight. Both estimation and experiment results reveal the pattern of performance variation as a function of its input parameters.

  16. Model predictive controller design for boost DC-DC converter using T-S fuzzy cost function

    NASA Astrophysics Data System (ADS)

    Seo, Sang-Wha; Kim, Yong; Choi, Han Ho

    2017-11-01

    This paper proposes a Takagi-Sugeno (T-S) fuzzy method to select cost function weights of finite control set model predictive DC-DC converter control algorithms. The proposed method updates the cost function weights at every sample time by using T-S type fuzzy rules derived from the common optimal control engineering knowledge that a state or input variable with an excessively large magnitude can be penalised by increasing the weight corresponding to the variable. The best control input is determined via the online optimisation of the T-S fuzzy cost function for all the possible control input sequences. This paper implements the proposed model predictive control algorithm in real time on a Texas Instruments TMS320F28335 floating-point Digital Signal Processor (DSP). Some experimental results are given to illuminate the practicality and effectiveness of the proposed control system under several operating conditions. The results verify that our method can yield not only good transient and steady-state responses (fast recovery time, small overshoot, zero steady-state error, etc.) but also insensitiveness to abrupt load or input voltage parameter variations.
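
    The control structure described here, i.e. enumerating the finite control set each sample and scoring predictions with a fuzzily weighted cost, can be sketched for an ideal boost converter. The component values, membership function, candidate weights, and one-step horizon below are hypothetical illustrations, not the paper's T-S rule base or DSP implementation:

```python
# Minimal one-step finite-control-set MPC for an ideal boost converter with a
# T-S-style fuzzy blend of cost weights (all values are assumptions).
Vin, Vref, R, L, C, Ts = 12.0, 24.0, 20.0, 1e-3, 470e-6, 1e-5
i_ref = Vref ** 2 / (R * Vin)             # steady-state inductor current from power balance

def predict(iL, vC, u):
    """Forward-Euler model; u=1 switch closed, u=0 switch open (diode conducts)."""
    iL2 = iL + Ts / L * (Vin - (1 - u) * vC)
    vC2 = vC + Ts / C * ((1 - u) * iL - vC / R)
    return iL2, vC2

def fuzzy_weight(err_i):
    """T-S-style blend: weight grows with the 'current error is large' membership."""
    m = min(1.0, abs(err_i))
    return 5.0 * (1.0 - m) + 50.0 * m

iL, vC, hist = 0.0, Vin, []
for _ in range(20000):                    # 0.2 s of simulated time
    def cost(u):
        i2, v2 = predict(iL, vC, u)
        return (v2 - Vref) ** 2 + fuzzy_weight(i2 - i_ref) * (i2 - i_ref) ** 2
    best = min((0, 1), key=cost)          # exhaustive search over the finite control set
    iL, vC = predict(iL, vC, best)
    hist.append(vC)

print(round(sum(hist[-2000:]) / 2000.0, 1))   # should settle near Vref = 24 V
```

The online weight update keeps the current term dominant during large transients (limiting inductor current) and lets the voltage term take over near steady state, which is the behavior the fuzzy rules in the paper are designed to produce.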

  17. Orientation selectivity and the functional clustering of synaptic inputs in primary visual cortex

    PubMed Central

    Wilson, Daniel E.; Whitney, David E.; Scholl, Benjamin; Fitzpatrick, David

    2016-01-01

    The majority of neurons in primary visual cortex are tuned for stimulus orientation, but the factors that account for the range of orientation selectivities exhibited by cortical neurons remain unclear. To address this issue, we used in vivo 2-photon calcium imaging to characterize the orientation tuning and spatial arrangement of synaptic inputs to the dendritic spines of individual pyramidal neurons in layer 2/3 of ferret visual cortex. The summed synaptic input to individual neurons reliably predicted the neuron’s orientation preference, but did not account for differences in orientation selectivity among neurons. These differences reflected a robust input-output nonlinearity that could not be explained by spike threshold alone, and was strongly correlated with the spatial clustering of co-tuned synaptic inputs within the dendritic field. Dendritic branches with more co-tuned synaptic clusters exhibited greater rates of local dendritic calcium events, supporting a prominent role for functional clustering of synaptic inputs in dendritic nonlinearities that shape orientation selectivity. PMID:27294510

  18. Position Estimation Using Image Derivative

    NASA Technical Reports Server (NTRS)

    Mortari, Daniele; deDilectis, Francesco; Zanetti, Renato

    2015-01-01

    This paper describes an image processing algorithm to process Moon and/or Earth images. The theory presented is based on the fact that Moon hard edge points are characterized by the highest values of the image derivative. Outliers are eliminated by two sequential filters. Moon center and radius are then estimated by nonlinear least-squares using circular sigmoid functions. The proposed image processing has been applied and validated using real and synthetic Moon images.
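
    The final estimation step, i.e. recovering center and radius from limb edge points found at image-derivative maxima, can be sketched with the simple algebraic (Kåsa) circle fit, which is linear in (cx, cy, r² - cx² - cy²). This is a stand-in for the paper's nonlinear least squares with circular sigmoid functions, and the edge points below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic limb: half an arc of noisy edge points, as if detected at
# image-derivative maxima (center, radius, noise level are assumptions).
cx, cy, r = 120.0, 95.0, 40.0
th = rng.uniform(0, np.pi, 200)
x = cx + r * np.cos(th) + rng.normal(0, 0.3, th.size)
y = cy + r * np.sin(th) + rng.normal(0, 0.3, th.size)

# Kasa fit: x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2)
M = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
sol, *_ = np.linalg.lstsq(M, x ** 2 + y ** 2, rcond=None)
cx_e, cy_e = sol[0], sol[1]
r_e = np.sqrt(sol[2] + cx_e ** 2 + cy_e ** 2)
print(round(cx_e, 1), round(cy_e, 1), round(r_e, 1))
```

Note the fit works from a partial limb, which matters for Moon images where only the illuminated hard edge yields reliable derivative maxima; the paper's sigmoid-based nonlinear fit additionally models the edge profile itself.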

  19. Strategies for mapping synaptic inputs on dendrites in vivo by combining two-photon microscopy, sharp intracellular recording, and pharmacology

    PubMed Central

    Levy, Manuel; Schramm, Adrien E.; Kara, Prakash

    2012-01-01

    Uncovering the functional properties of individual synaptic inputs on single neurons is critical for understanding the computational role of synapses and dendrites. Previous studies combined whole-cell patch recording to load neurons with a fluorescent calcium indicator and two-photon imaging to map subcellular changes in fluorescence upon sensory stimulation. By hyperpolarizing the neuron below spike threshold, the patch electrode ensured that changes in fluorescence associated with synaptic events were isolated from those caused by back-propagating action potentials. This technique holds promise for determining whether the existence of unique cortical feature maps across different species may be associated with distinct wiring diagrams. However, the use of whole-cell patch for mapping inputs on dendrites is challenging in large mammals, due to brain pulsations and the accumulation of fluorescent dye in the extracellular milieu. Alternatively, sharp intracellular electrodes have been used to label neurons with fluorescent dyes, but the current passing capabilities of these high impedance electrodes may be insufficient to prevent spiking. In this study, we tested whether sharp electrode recording is suitable for mapping functional inputs on dendrites in the cat visual cortex. We compared three different strategies for suppressing visually evoked spikes: (1) hyperpolarization by intracellular current injection, (2) pharmacological blockade of voltage-gated sodium channels by intracellular QX-314, and (3) GABA iontophoresis from a perisomatic electrode glued to the intracellular electrode. We found that functional inputs on dendrites could be successfully imaged using all three strategies. However, the best method for preventing spikes was GABA iontophoresis with low currents (5–10 nA), which minimally affected the local circuit. Our methods advance the possibility of determining functional connectivity in preparations where whole-cell patch may be impractical. 
PMID:23248588

  20. Quantification of 11C-Laniquidar Kinetics in the Brain.

    PubMed

    Froklage, Femke E; Boellaard, Ronald; Bakker, Esther; Hendrikse, N Harry; Reijneveld, Jaap C; Schuit, Robert C; Windhorst, Albert D; Schober, Patrick; van Berckel, Bart N M; Lammertsma, Adriaan A; Postnov, Andrey

    2015-11-01

    Overexpression of the multidrug efflux transport P-glycoprotein may play an important role in pharmacoresistance. (11)C-laniquidar is a newly developed tracer of P-glycoprotein expression. The aim of this study was to develop a pharmacokinetic model for quantification of (11)C-laniquidar uptake and to assess its test-retest variability. Two (test-retest) dynamic (11)C-laniquidar PET scans were obtained in 8 healthy subjects. Plasma input functions were obtained using online arterial blood sampling with metabolite corrections derived from manual samples. Coregistered T1 MR images were used for region-of-interest definition. Time-activity curves were analyzed using various plasma input compartmental models. (11)C-laniquidar was metabolized rapidly, with a parent plasma fraction of 50% at 10 min after tracer injection. In addition, the first-pass extraction of (11)C-laniquidar was low. (11)C-laniquidar time-activity curves were best fitted to an irreversible single-tissue compartment (1T1K) model using conventional models. Nevertheless, significantly better fits were obtained using 2 parallel single-tissue compartments, one for parent tracer and the other for labeled metabolites (dual-input model). Robust K1 results were also obtained by fitting the first 5 min of PET data to the 1T1K model, at least when 60-min plasma input data were used. For both models, the test-retest variability of (11)C-laniquidar rate constant for transfer from arterial plasma to tissue (K1) was approximately 19%. The accurate quantification of (11)C-laniquidar kinetics in the brain is hampered by its fast metabolism and the likelihood that labeled metabolites enter the brain. Best fits for the entire 60 min of data were obtained using a dual-input model, accounting for uptake of (11)C-laniquidar and its labeled metabolites. Alternatively, K1 could be obtained from a 5-min scan using a standard 1T1K model. In both cases, the test-retest variability of K1 was approximately 19%. 
© 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
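    The irreversible single-tissue (1T1K) model above reduces to dCt/dt = K1·Cp(t), so Ct is K1 times the integrated plasma input and K1 follows from a one-parameter linear fit. A minimal sketch on simulated data (the input-function shape and K1 value are assumptions, not from the study):

```python
import numpy as np

# Irreversible one-tissue (1T1K) model: dCt/dt = K1 * Cp(t),
# i.e. Ct(t) = K1 * (time integral of the plasma input Cp).
t = np.linspace(0.0, 5.0, 301)                  # minutes (illustrative)
cp = t * np.exp(-2.0 * t)                       # synthetic plasma input function
K1_true = 0.15                                  # assumed, for the simulation

# Cumulative trapezoidal integral of Cp.
cumcp = np.concatenate(
    [[0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))])
ct = K1_true * cumcp + np.random.default_rng(0).normal(0.0, 1e-4, t.size)

# One-parameter linear least squares recovers K1.
K1_est = float(cumcp @ ct / (cumcp @ cumcp))
print(round(K1_est, 3))
```

    The study's dual-input variant adds a second parallel compartment driven by a labeled-metabolite input curve, which this sketch omits.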

  1. Average BER analysis of SCM-based free-space optical systems by considering the effect of IM3 with OSSB signals under turbulence channels.

    PubMed

    Lim, Wansu; Cho, Tae-Sik; Yun, Changho; Kim, Kiseon

    2009-11-09

    In this paper, we derive the average bit error rate (BER) of subcarrier multiplexing (SCM)-based free space optics (FSO) systems using a dual-drive Mach-Zehnder modulator (DD-MZM) for optical single-sideband (OSSB) signals under atmospheric turbulence channels. In particular, we consider the third-order intermodulation (IM3), a significant performance degradation factor, in the case of high input signal power systems. The derived average BER, as a function of the input signal power and the scintillation index, is employed to determine the optimum number of SCM users when designing FSO systems. For instance, when the user number doubles, the input signal power decreases by almost 2 dBm under the log-normal and exponential turbulence channels at a given average BER.
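    The averaging step can be illustrated with a generic Q-function BER and a unit-mean log-normal irradiance model; this is a simplified stand-in, not the paper's SCM/IM3 expression:

```python
import numpy as np
from math import erfc, sqrt

def q_func(x):
    """Gaussian Q-function, Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * erfc(x / sqrt(2.0))

def avg_ber_lognormal(snr0, sigma2, n=4001):
    """E_h[ Q(sqrt(snr0) * h) ] for unit-mean log-normal irradiance h:
    h = exp(z) with z ~ N(-sigma2/2, sigma2), sigma2 = scintillation strength."""
    z = np.linspace(-6.0, 6.0, n) * np.sqrt(sigma2) - sigma2 / 2.0
    pdf = np.exp(-(z + sigma2 / 2.0) ** 2 / (2.0 * sigma2)) \
        / np.sqrt(2.0 * np.pi * sigma2)
    ber = np.array([q_func(sqrt(snr0) * np.exp(zi)) for zi in z])
    f = ber * pdf
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z)))  # trapezoid rule

# Stronger scintillation degrades the average BER at fixed SNR.
print(avg_ber_lognormal(25.0, 0.1) < avg_ber_lognormal(25.0, 0.5))
```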

  2. Random phase encoding for optical security

    NASA Astrophysics Data System (ADS)

    Wang, RuiKang K.; Watson, Ian A.; Chatwin, Christopher R.

    1996-09-01

    A new optical encoding method for security applications is proposed. The encoded image (encrypted into the security product) is a purely random phase image, statistically and randomly generated by a computer random number generator; it contains no information from the reference pattern (stored for verification) or from the frequency-plane filter (a phase-only function for decoding). The phase function in the frequency plane is obtained using a modified phase-retrieval algorithm. The proposed method uses two phase-only functions (images), at the input and frequency planes of the optical processor, leading to maximum optical efficiency. Computer simulation shows that the proposed method is robust for optical security applications.

  3. Modeling and Analysis of Power Processing Systems (MAPPS). Volume 2: Appendices

    NASA Technical Reports Server (NTRS)

    Lee, F. C.; Radman, S.; Carter, R. A.; Wu, C. H.; Yu, Y.; Chang, R.

    1980-01-01

    The computer programs and derivations generated in support of the modeling and design optimization program are presented. Programs for the buck regulator, boost regulator, and buck-boost regulator are described. The computer program for the design optimization calculations is presented. Constraints for the boost and buck-boost converters were derived. Derivations of state-space equations and transfer functions are presented. Computer listings for the converters are presented, and the input parameters are justified.
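    As an illustration of the state-space averaging used for such converters, here is a minimal averaged model of an ideal buck regulator; the component values are assumptions for the sketch, not from the report:

```python
# State-space averaged model of an ideal buck regulator:
#   L * di/dt = D*Vin - v      (inductor current state)
#   C * dv/dt = i - v/R        (capacitor voltage state)
# In steady state the output settles at v = D * Vin.
L, C, R = 100e-6, 470e-6, 5.0    # illustrative component values
Vin, D = 12.0, 0.5               # input voltage, duty cycle
dt, i, v = 1e-6, 0.0, 0.0

for _ in range(200_000):         # 0.2 s of simulated time, forward Euler
    di = (D * Vin - v) / L
    dv = (i - v / R) / C
    i += di * dt
    v += dv * dt

print(round(v, 2))  # 6.0 = D * Vin
```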

  4. Uncertainty in Measurement: Procedures for Determining Uncertainty With Application to Clinical Laboratory Calculations.

    PubMed

    Frenkel, Robert B; Farrance, Ian

    2018-01-01

    The "Guide to the Expression of Uncertainty in Measurement" (GUM) is the foundational document of metrology. Its recommendations apply to all areas of metrology including metrology associated with the biomedical sciences. When the output of a measurement process depends on the measurement of several inputs through a measurement equation or functional relationship, the propagation of uncertainties in the inputs to the uncertainty in the output demands a level of understanding of the differential calculus. This review is intended as an elementary guide to the differential calculus and its application to uncertainty in measurement. The review is in two parts. In Part I, Section 3, we consider the case of a single input and introduce the concepts of error and uncertainty. Next we discuss, in the following sections in Part I, such notions as derivatives and differentials, and the sensitivity of an output to errors in the input. The derivatives of functions are obtained using very elementary mathematics. The overall purpose of this review, here in Part I and subsequently in Part II, is to present the differential calculus for those in the medical sciences who wish to gain a quick but accurate understanding of the propagation of uncertainties. © 2018 Elsevier Inc. All rights reserved.
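    The GUM propagation rule described here combines input uncertainties through the partial derivatives: u_y² = Σ (∂f/∂x_i)² u_i². A minimal sketch with central-difference partials, applied to a simple clinical calculation (the anion-gap example and uncertainty values are illustrative):

```python
import math

def propagate(f, inputs, uncertainties, eps=1e-6):
    """First-order (GUM) uncertainty propagation:
    u_y^2 = sum_i (df/dx_i)^2 * u_i^2, partials by central differences."""
    u2 = 0.0
    for idx, (x, u) in enumerate(zip(inputs, uncertainties)):
        hi = list(inputs); hi[idx] = x + eps
        lo = list(inputs); lo[idx] = x - eps
        d = (f(*hi) - f(*lo)) / (2.0 * eps)   # sensitivity coefficient
        u2 += (d * u) ** 2
    return math.sqrt(u2)

# Example: anion gap = Na - Cl - HCO3. A pure sum/difference has unit
# sensitivities, so the input uncertainties combine in quadrature.
u = propagate(lambda na, cl, hco3: na - cl - hco3,
              [140.0, 102.0, 24.0], [1.0, 1.0, 1.0])
print(round(u, 3))  # sqrt(3) ≈ 1.732
```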

  5. SAR image segmentation using skeleton-based fuzzy clustering

    NASA Astrophysics Data System (ADS)

    Cao, Yun Yi; Chen, Yan Qiu

    2003-06-01

    SAR image segmentation can be converted to a clustering problem in which pixels or small patches are grouped together based on local feature information. In this paper, we present a novel framework for segmentation. The segmentation goal is achieved by unsupervised clustering of characteristic descriptors extracted from local patches. A mixture model of the characteristic descriptors, which combines intensity and texture features, is investigated. The unsupervised algorithm is derived from the recently proposed Skeleton-Based Data Labeling method. Skeletons are constructed as prototypes of clusters to represent arbitrary latent structures in image data. Segmentation using Skeleton-Based Fuzzy Clustering is able to detect the types of surfaces appearing in SAR images automatically, without any user input.

  6. Combustion-derived substances in deep basins of Puget Sound: historical inputs from fossil fuel and biomass combustion.

    PubMed

    Kuo, Li-Jung; Louchouarn, Patrick; Herbert, Bruce E; Brandenberger, Jill M; Wade, Terry L; Crecelius, Eric

    2011-04-01

    Reconstructions of 250 years of historical inputs of two distinct types of black carbon (soot/graphitic black carbon (GBC) and char-BC) were conducted on sediment cores from two basins of the Puget Sound, WA. Signatures of polycyclic aromatic hydrocarbons (PAHs) were also used to support the historical reconstructions of BC to this system. Down-core maxima in GBC and combustion-derived PAHs occurred in the 1940s in the cores from the Puget Sound Main Basin, whereas in Hood Canal the peak was observed in the 1970s, showing basin-specific differences in inputs of combustion byproducts. This system showed relatively higher inputs from softwood combustion than the northeastern U.S. The historical variations in char-BC concentrations were consistent with shifts in climate indices, suggesting an influence of climate oscillations on wildfire events. Environmental loading of combustion byproducts thus appears as a complex function of urbanization, fuel usage, combustion technology, environmental policies, and climate conditions. Copyright © 2010 Elsevier Ltd. All rights reserved.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye Jia; Lawrence Berkeley Laboratory, Berkeley, California 94720-8250; Li Youhong

    Theoretical predictions indicate that ordered alloys can spontaneously develop a steady-state nanoscale microstructure when irradiated with energetic particles. This behavior derives from a dynamical competition between disordering in cascades and thermally activated reordering, which leads to self-organization of the chemical order parameter. We test this possibility by combining molecular dynamics (MD) and kinetic Monte Carlo (KMC) simulations. We first generate realistic distributions of disordered zones for Ni3Al irradiated with 70 keV He and 1 MeV Kr ions using MD and then input this data into KMC to obtain predictions of steady state microstructures as a function of the irradiation flux. Nanoscale patterning is observed for Kr ion irradiations but not for He ion irradiations. We illustrate, moreover, using image simulations of these KMC microstructures, that high-resolution transmission electron microscopy can be employed to identify nanoscale patterning. Finally, we indicate how this method could be used to synthesize functional thin films, with potential for magnetic applications.

  8. Dynamic Contrast-enhanced MR Imaging in Renal Cell Carcinoma: Reproducibility of Histogram Analysis on Pharmacokinetic Parameters

    PubMed Central

    Wang, Hai-yi; Su, Zi-hua; Xu, Xiao; Sun, Zhi-peng; Duan, Fei-xue; Song, Yuan-yuan; Li, Lu; Wang, Ying-wei; Ma, Xin; Guo, Ai-tao; Ma, Lin; Ye, Hui-yi

    2016-01-01

    Pharmacokinetic parameters derived from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) have been increasingly used to evaluate the permeability of tumor vessels. Histogram metrics are a recognized promising method of quantitative MR imaging that has been recently introduced in analysis of DCE-MRI pharmacokinetic parameters in oncology due to tumor heterogeneity. In this study, 21 patients with renal cell carcinoma (RCC) underwent paired DCE-MRI studies on a 3.0 T MR system. Extended Tofts model and population-based arterial input function were used to calculate kinetic parameters of RCC tumors. Mean value and histogram metrics (Mode, Skewness and Kurtosis) of each pharmacokinetic parameter were generated automatically using ImageJ software. Intra- and inter-observer reproducibility and scan-rescan reproducibility were evaluated using intra-class correlation coefficients (ICCs) and coefficient of variation (CoV). Our results demonstrated that the histogram method (Mode, Skewness and Kurtosis) was not superior to the conventional Mean value method in reproducibility evaluation on DCE-MRI pharmacokinetic parameters (Ktrans and Ve) in renal cell carcinoma, especially for Skewness and Kurtosis which showed lower intra-, inter-observer and scan-rescan reproducibility than Mean value. Our findings suggest that additional studies are necessary before wide incorporation of histogram metrics in quantitative analysis of DCE-MRI pharmacokinetic parameters. PMID:27380733
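    The histogram metrics and coefficient of variation used in the study can be computed with plain NumPy; the definitions below (histogram-peak mode, excess kurtosis) are common choices and may differ in detail from the ImageJ implementation:

```python
import numpy as np

def histogram_metrics(values, bins=64):
    """Mean, Mode (histogram peak), Skewness and excess Kurtosis of a map."""
    v = np.asarray(values, float)
    m, s = v.mean(), v.std()
    counts, edges = np.histogram(v, bins=bins)
    k = int(np.argmax(counts))
    mode = 0.5 * (edges[k] + edges[k + 1])      # center of the tallest bin
    skew = float(np.mean(((v - m) / s) ** 3))
    kurt = float(np.mean(((v - m) / s) ** 4) - 3.0)
    return float(m), float(mode), skew, kurt

def cov_percent(test, retest):
    """Scan-rescan coefficient of variation (%) from paired maps."""
    pair = np.array([test, retest], float)
    return float(np.mean(pair.std(axis=0, ddof=1) / pair.mean(axis=0)) * 100.0)

rng = np.random.default_rng(1)
ktrans = rng.lognormal(mean=-2.0, sigma=0.5, size=5000)   # right-skewed map
retest = ktrans * rng.lognormal(0.0, 0.05, ktrans.size)   # simulated rescan
m, mode, skew, kurt = histogram_metrics(ktrans)
print(skew > 0.0, mode < m, round(cov_percent(ktrans, retest), 1))
```

    For a right-skewed (lognormal-like) parameter map, Skewness is positive and the Mode falls below the Mean, matching the intuition behind reporting both.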

  9. Imaging performance of annular apertures. IV - Apodization and point spread functions. V - Total and partial energy integral functions

    NASA Technical Reports Server (NTRS)

    Tschunko, H. F. A.

    1983-01-01

    Reference is made to a study by Tschunko (1979) in which it was discussed how apodization modifies the modulation transfer function for various central obstruction ratios. It is shown here how apodization, together with the central obstruction ratio, modifies the point spread function, which is the basic element for the comparison of imaging performance and for the derivation of energy integrals and other functions. At high apodization levels and lower central obstruction (less than 0.1), new extended radial zones are formed in the outer part of the central ring groups. These transmutations of the image functions are of more than theoretical interest, especially if the irradiance levels in the outer ring zones are to be compared to the background irradiance levels. Attention is then given to the energy distribution in point images generated by annular apertures apodized by various transmission functions. The total energy functions are derived; partial energy integrals are determined; and background irradiance functions are discussed.

  10. Effect of torso flexion on the lumbar torso extensor muscle sagittal plane moment arms.

    PubMed

    Jorgensen, Michael J; Marras, William S; Gupta, Purnendu; Waters, Thomas R

    2003-01-01

    Accurate anatomical inputs for biomechanical models are necessary for valid estimates of internal loading. The magnitude of the moment arm of the lumbar erector muscle group is known to vary as a function of such variables as gender. Anatomical evidence indicates that the moment arms decrease during torso flexion. However, moment arm estimates in biomechanical models that account for individual variability have been derived from imaging studies from supine postures. The aims of this study were to quantify the sagittal plane moment arms of the lumbar erector muscle group as a function of torso flexion and to identify individual characteristics associated with the magnitude of the moment arms. A 0.3 Tesla open magnetic resonance imaging (MRI) system was used to image and quantify the moment arm of the right erector muscle group as a function of gender and torso flexion. Axial MRI images through and parallel to each of the lumbar intervertebral discs at four torso flexion angles were obtained from 12 male and 12 female subjects in a lateral recumbent posture. Multivariate analysis of variance was used to investigate the differences in the moment arms at different torso flexion angles, whereas hierarchical linear regression was used to investigate associations with individual anthropometric characteristics and spinal posture. The largest decrease in the lumbar erector muscle group moment arm from neutral to 45-degree flexion occurred at the L5-S1 level (9.7% and 8.9% for men and women, respectively). Measures of spinal curvature (L1-S1 lordosis), body mass and trunk characteristics (depth or circumference) were associated with the varying moment arm at most lumbar levels. The sagittal plane moment arms of the lumbar erector muscle mass decrease as the torso flexes forward. The change in moment arms as a function of torso flexion may have an impact on prediction of spinal loading in biomechanical models.

  11. Unsupervised segmentation with dynamical units.

    PubMed

    Rao, A Ravishankar; Cecchi, Guillermo A; Peck, Charles C; Kozloski, James R

    2008-01-01

    In this paper, we present a novel network to separate mixtures of inputs that have been previously learned. A significant capability of the network is that it segments the components of each input object that most contribute to its classification. The network consists of amplitude-phase units that can synchronize their dynamics, so that separation is determined by the amplitude of units in an output layer, and segmentation by phase similarity between input and output layer units. Learning is unsupervised and based on a Hebbian update, and the architecture is very simple. Moreover, efficient segmentation can be achieved even when there is considerable superposition of the inputs. The network dynamics are derived from an objective function that rewards sparse coding in the generalized amplitude-phase variables. We argue that this objective function can provide a possible formal interpretation of the binding problem and that the implementation of the network architecture and dynamics is biologically plausible.

  12. Artificial neural networks using complex numbers and phase encoded weights.

    PubMed

    Michel, Howard E; Awwal, Abdul Ahad S

    2010-04-01

    The model of a simple perceptron using phase-encoded inputs and complex-valued weights is proposed. The aggregation function, activation function, and learning rule for the proposed neuron are derived and applied to Boolean logic functions and simple computer vision tasks. The complex-valued neuron (CVN) is shown to be superior to traditional perceptrons. An improvement of 135% over the theoretical maximum of 104 linearly separable problems (of three variables) solvable by conventional perceptrons is achieved without additional logic, neuron stages, or higher order terms such as those required in polynomial logic gates. The application of CVN in distortion invariant character recognition and image segmentation is demonstrated. Implementation details are discussed, and the CVN is shown to be very attractive for optical implementation since optical computations are naturally complex. The cost of the CVN is less in all cases than the traditional neuron when implemented optically. Therefore, all the benefits of the CVN can be obtained without additional cost. However, on those implementations dependent on standard serial computers, CVN will be more cost effective only in those applications where its increased power can offset the requirement for additional neurons.
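    To see why complex-valued weights with phase-encoded inputs add power, consider XOR, which no single real-valued perceptron can compute. The encoding and activation below are illustrative choices, not necessarily the paper's exact formulation:

```python
import cmath

def cvn_xor(x1, x2):
    """Single complex-valued unit with phase-encoded inputs.
    Binary inputs {0, 1} map to phases {0, pi}: z = exp(i*pi*x)."""
    z1 = cmath.exp(1j * cmath.pi * x1)
    z2 = cmath.exp(1j * cmath.pi * x2)
    net = 1.0 * z1 + 1j * z2      # complex weights: w1 = 1, w2 = i
    # Phase-sensitive activation: fire when net lies in the second or
    # fourth quadrant (real and imaginary parts of opposite sign).
    return int(net.real * net.imag < 0)

print([cvn_xor(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```

    All four inputs produce a net of the same magnitude, so no magnitude threshold separates them; the phase alone carries the class, which is exactly the extra degree of freedom a real-weight perceptron lacks.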

  13. Resilience to the contralateral visual field bias as a window into object representations

    PubMed Central

    Garcea, Frank E.; Kristensen, Stephanie; Almeida, Jorge; Mahon, Bradford Z.

    2016-01-01

    Viewing images of manipulable objects elicits differential blood oxygen level-dependent (BOLD) contrast across parietal and dorsal occipital areas of the human brain that support object-directed reaching, grasping, and complex object manipulation. However, it is unknown which object-selective regions of parietal cortex receive their principal inputs from the ventral object-processing pathway and which receive their inputs from the dorsal object-processing pathway. Parietal areas that receive their inputs from the ventral visual pathway, rather than from the dorsal stream, will have inputs that are already filtered through object categorization and identification processes. This predicts that parietal regions that receive inputs from the ventral visual pathway should exhibit object-selective responses that are resilient to contralateral visual field biases. To test this hypothesis, adult participants viewed images of tools and animals that were presented to the left or right visual fields during functional magnetic resonance imaging (fMRI). We found that the left inferior parietal lobule showed robust tool preferences independently of the visual field in which tool stimuli were presented. In contrast, a region in posterior parietal/dorsal occipital cortex in the right hemisphere exhibited an interaction between visual field and category: tool-preferences were strongest contralateral to the stimulus. These findings suggest that action knowledge accessed in the left inferior parietal lobule operates over inputs that are abstracted from the visual input and contingent on analysis by the ventral visual pathway, consistent with its putative role in supporting object manipulation knowledge. PMID:27160998

  14. Intelligent robotic tracker

    NASA Technical Reports Server (NTRS)

    Otaguro, W. S.; Kesler, L. O.; Land, K. C.; Rhoades, D. E.

    1987-01-01

    An intelligent tracker capable of robotic applications requiring guidance and control of platforms, robotic arms, and end effectors has been developed. This packaged system, capable of supervised autonomous robotic functions, is partitioned into a multiple processor/parallel processing configuration. The system currently interfaces to cameras but has the capability to also use three-dimensional inputs from scanning laser rangers. The inputs are fed into an image processing and tracking section where the camera inputs are conditioned for the multiple tracker algorithms. An executive section monitors the image processing and tracker outputs and performs all the control and decision processes. The present architecture of the system is presented with discussion of its evolutionary growth for space applications. An autonomous rendezvous demonstration of this system was performed in the preceding year, and more realistic demonstrations now in planning are discussed.

  15. Membrane voltage changes in passive dendritic trees: a tapering equivalent cylinder model.

    PubMed

    Poznański, R R

    1988-01-01

    An exponentially tapering equivalent cylinder model is employed in order to approximate the loss of the dendritic trunk parameter observed from anatomical data on apical and basilar dendrites of CA1 and CA3 hippocampal pyramidal neurons. This model allows dendritic trees with a relative paucity of branching to be treated. In particular, terminal branches are not required to end at the same electrotonic distance. The Laplace transform method is used to obtain analytic expressions for the Green's function corresponding to an instantaneous pulse of current injected at a single point along a tapering equivalent cylinder with sealed ends. The time course of the voltage in response to an arbitrary input is computed using the Green's function in a convolution integral. Examples of current input considered are (1) an infinitesimally brief (Dirac delta function) pulse and (2) a step pulse. It is demonstrated that inputs located on a tapering equivalent cylinder are more effective at the soma than identically placed inputs on a nontapering equivalent cylinder. Asymptotic solutions are derived to enable the voltage response behaviour over both relatively short and long time periods to be analysed. Semilogarithmic plots of these solutions provide a basis for estimating the membrane time constant τm from experimental transients. Transient voltage decrement from a clamped soma reveals that tapering tends to reduce the error associated with inadequate voltage clamping of the dendritic membrane. A formula is derived which shows that tapering tends to increase the estimate of the electrotonic length parameter L.
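    The convolution step can be written compactly (the symbols are assumed here, since the abstract names none): for a current I(t) injected at the point x0 of the tapering cylinder,

```latex
V(x,t) \;=\; \int_{0}^{t} G(x, x_{0}, t - s)\, I(s)\, \mathrm{d}s ,
```

    so the Dirac delta input I(s) = Q δ(s) gives V(x,t) = Q G(x, x0, t) directly, while a step pulse of amplitude I0 gives V(x,t) = I0 ∫₀ᵗ G(x, x0, s) ds.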

  16. Global image analysis to determine suitability for text-based image personalization

    NASA Astrophysics Data System (ADS)

    Ding, Hengzhou; Bala, Raja; Fan, Zhigang; Bouman, Charles A.; Allebach, Jan P.

    2012-03-01

    Image personalization has recently become a topic of growing interest: images with variable elements such as text usually appear much more appealing to recipients. In this paper, we describe a method to pre-analyze the image and automatically suggest to the user the most suitable regions within an image for text-based personalization. The method is based on input gathered from experiments conducted with professional designers. It has been observed that regions that are spatially smooth and regions with existing text (e.g., signage, banners) are the best candidates for personalization. This gives rise to two sets of corresponding algorithms: one for identifying smooth areas, and one for locating text regions. Furthermore, based on the smooth and text regions found in the image, we derive an overall metric to rate the image in terms of its suitability for personalization (SFP).

  17. Image enhancement by non-linear extrapolation in frequency space

    NASA Technical Reports Server (NTRS)

    Anderson, Charles H. (Inventor); Greenspan, Hayit K. (Inventor)

    1998-01-01

    An input image is enhanced to include spatial frequency components higher than those in the input image. To this end, an edge map is generated from the input image using a high-band-pass filtering technique. An enhanced map is subsequently generated from the edge map, having spatial frequencies exceeding the initial maximum spatial frequency of the input image. The enhanced map is generated by applying a non-linear operator to the edge map in a manner which preserves the phase transitions of the edges of the input image. The enhanced map is then added to the input image to achieve a resulting image having spatial frequencies greater than those in the input image. Simplicity of computations and ease of implementation allow for image sharpening after enlargement and for real-time applications such as videophones, advanced definition television, zooming, and restoration of old motion pictures.
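    A minimal sketch of this idea: a high-band-pass edge map, a clipping non-linearity (assumed here as the non-linear operator; clipping adds higher spatial frequencies while preserving edge phase), and addition back to the input. All parameters are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance(img, sigma=2.0, clip_frac=0.3, gain=1.0):
    """Non-linear extrapolation in frequency space: high-band-pass
    edge map -> clipping non-linearity -> add back to the input."""
    edge = img - gaussian_filter(img, sigma)        # high-band-pass edge map
    limit = clip_frac * np.abs(edge).max()
    return img + gain * np.clip(edge, -limit, limit)

# A blurry step edge becomes steeper after enhancement.
x = np.linspace(-1.0, 1.0, 64)
blurry = np.tile(1.0 / (1.0 + np.exp(-4.0 * x)), (16, 1))
sharp = enhance(blurry)
mid = blurry.shape[1] // 2
print((sharp[0, mid + 1] - sharp[0, mid - 1])
      > (blurry[0, mid + 1] - blurry[0, mid - 1]))
```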

  18. Color image enhancement based on particle swarm optimization with Gaussian mixture

    NASA Astrophysics Data System (ADS)

    Kattakkalil Subhashdas, Shibudas; Choi, Bong-Seok; Yoo, Ji-Hoon; Ha, Yeong-Ho

    2015-01-01

    This paper proposes a Gaussian-mixture-based image enhancement method which uses particle swarm optimization (PSO) to gain an edge over other contemporary methods. The proposed method uses a Gaussian mixture model to model the lightness histogram of the input image in CIEL*a*b* space. The intersection points of the Gaussian components in the model are used to partition the lightness histogram. The enhanced lightness image is generated by transforming the lightness values in each interval to the appropriate output interval according to a transformation function that depends on PSO-optimized parameters: the weight and standard deviation of the Gaussian component and the cumulative distribution of the input histogram interval. In addition, chroma compensation is applied to the resulting image to reduce washout appearance. Experimental results show that the proposed method produces a better enhanced image compared to traditional methods. Moreover, the enhanced image is free from several side effects such as washout appearance, information loss, and gradation artifacts.
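    The histogram-partitioning step can be sketched by finding where two weighted Gaussian components cross between their means; the PSO-optimized transformation itself is omitted, and all component parameters below are illustrative:

```python
import numpy as np

def gauss_pdf(x, w, mu, sigma):
    """Weighted Gaussian component of a 1-D mixture model."""
    return w * np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2)) \
        / (sigma * np.sqrt(2.0 * np.pi))

def partition_point(c1, c2, lo=0.0, hi=100.0, n=10001):
    """Histogram partition point: the x between the two component
    means where the weighted pdfs cross (sign change of their difference)."""
    x = np.linspace(lo, hi, n)
    diff = gauss_pdf(x, *c1) - gauss_pdf(x, *c2)
    a = np.searchsorted(x, min(c1[1], c2[1]))
    b = np.searchsorted(x, max(c1[1], c2[1]))
    seg = diff[a:b]
    cross = np.nonzero(np.signbit(seg[:-1]) != np.signbit(seg[1:]))[0]
    return float(x[a + cross[0]])

# Two lightness components, dark vs bright (weight, mean, sigma assumed).
split = partition_point((0.4, 30.0, 8.0), (0.6, 70.0, 12.0))
print(30.0 < split < 70.0)
```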

  19. Fast single image dehazing based on image fusion

    NASA Astrophysics Data System (ADS)

    Liu, Haibo; Yang, Jie; Wu, Zhengping; Zhang, Qingnian

    2015-01-01

    Images captured in foggy weather conditions often exhibit faded colors and reduced contrast of the observed objects. An efficient image fusion method is proposed to remove haze from a single input image. First, the initial medium transmission is estimated based on the dark channel prior. Second, the method adopts an assumption that the degradation level affected by haze of each region is the same, which is similar to the Retinex theory, and uses a simple Gaussian filter to get the coarse medium transmission. Then, pixel-level fusion is achieved between the initial medium transmission and coarse medium transmission. The proposed method can recover a high-quality haze-free image based on the physical model, and the complexity of the proposed method is only a linear function of the number of input image pixels. Experimental results demonstrate that the proposed method can allow a very fast implementation and achieve better restoration for visibility and color fidelity compared to some state-of-the-art methods.
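    A minimal sketch of the dark channel prior transmission estimate the method starts from (patch size, ω, and the atmospheric light A are illustrative; the paper's Gaussian-filter refinement and fusion steps are omitted):

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=7):
    """Minimum over color channels, then over a local patch."""
    return minimum_filter(img.min(axis=2), size=patch)

def transmission(img, A, omega=0.95, patch=7):
    """Initial medium transmission: t = 1 - omega * dark_channel(I / A)."""
    return 1.0 - omega * dark_channel(img / A, patch)

def dehaze(img, A, t0=0.1):
    """Invert the haze model I = J*t + A*(1 - t), with a floor on t."""
    t = np.clip(transmission(img, A), t0, 1.0)[..., None]
    return (img - A) / t + A

# Hazy gray ramp: dehazing should restore contrast.
ramp = np.repeat(np.linspace(0.1, 0.9, 32)[None, :, None], 3, axis=2)
scene = np.repeat(ramp, 32, axis=0)
hazy = scene * 0.6 + 0.9 * (1.0 - 0.6)      # uniform t = 0.6, A = 0.9
out = dehaze(hazy, A=0.9)
print(out.std() > hazy.std())
```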

  20. Empirical mode decomposition-based facial pose estimation inside video sequences

    NASA Astrophysics Data System (ADS)

    Qing, Chunmei; Jiang, Jianmin; Yang, Zhijing

    2010-03-01

    We describe a new pose-estimation algorithm that integrates the strengths of both empirical mode decomposition (EMD) and mutual information. While mutual information is exploited to measure the similarity between facial images to estimate poses, EMD is exploited to decompose input facial images into a number of intrinsic mode function (IMF) components, which redistribute the effects of noise, expression changes, and illumination variations such that, when the input facial image is described by the selected IMF components, all the negative effects can be minimized. Extensive experiments were carried out in comparison to existing representative techniques, and the results show that the proposed algorithm achieves better pose-estimation performance with robustness to noise corruption, illumination variation, and facial expressions.
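    The mutual-information similarity measure between two images can be sketched from their joint gray-level histogram (the bin count is an assumption):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """MI(A;B) = sum p(a,b) * log( p(a,b) / (p(a) * p(b)) ),
    estimated from the joint gray-level histogram of two images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pab = joint / joint.sum()
    pa = pab.sum(axis=1, keepdims=True)
    pb = pab.sum(axis=0, keepdims=True)
    nz = pab > 0
    return float(np.sum(pab[nz] * np.log(pab[nz] / (pa @ pb)[nz])))

rng = np.random.default_rng(0)
face = rng.random((64, 64))
noise = rng.random((64, 64))
# An image is maximally informative about itself.
print(mutual_information(face, face) > mutual_information(face, noise))
```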

  1. Comparison of spatiotemporal interpolators for 4D image reconstruction from 2D transesophageal ultrasound

    NASA Astrophysics Data System (ADS)

    Haak, Alexander; van Stralen, Marijn; van Burken, Gerard; Klein, Stefan; Pluim, Josien P. W.; de Jong, Nico; van der Steen, Antonius F. W.; Bosch, Johan G.

    2012-03-01

    For electrophysiology intervention monitoring, we intend to reconstruct 4D ultrasound (US) of structures in the beating heart from 2D transesophageal US by scanplane rotation. The image acquisition is continuous but unsynchronized to the heart rate, which results in a sparsely and irregularly sampled dataset, and a spatiotemporal interpolation method is desired. Previously, we showed the potential of normalized convolution (NC) for interpolating such datasets. We explored 4D interpolation by 3 different methods: NC, nearest neighbor (NN), and temporal binning followed by linear interpolation (LTB). The test datasets were derived by slicing three 4D echocardiography datasets at random rotation angles (θ, range: 0-180°) and random normalized cardiac phase (τ, range: 0-1). Four different distributions of rotated 2D images with 600, 900, 1350, and 1800 2D input images were created from all TEE sets. A 2D Gaussian kernel was used for NC, and optimal kernel sizes (σθ and στ) were found by performing an exhaustive search. The RMS gray value error (RMSE) of the reconstructed images was computed for all interpolation methods. The estimated optimal kernels were in the range of σθ = 3.24 - 3.69°/ στ = 0.045 - 0.048, σθ = 2.79°/ στ = 0.031 - 0.038, σθ = 2.34°/ στ = 0.023 - 0.026, and σθ = 1.89°/ στ = 0.021 - 0.023 for 600, 900, 1350, and 1800 input images, respectively. We showed that NC outperforms NN and LTB. For a small number of input images the advantage of NC is more pronounced.
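    Normalized convolution reduces, on a regular grid, to the ratio of two convolutions with the applicability function. A 1-D sketch with a Gaussian kernel (the signal, sampling density, and kernel width are illustrative stand-ins for the rotated-slice data):

```python
import numpy as np

def normalized_convolution(samples, certainty, sigma):
    """NC on a regular grid: convolve the certainty-weighted samples and
    the certainty map with a Gaussian applicability, then divide."""
    radius = int(4 * sigma)
    t = np.arange(-radius, radius + 1)
    g = np.exp(-t ** 2 / (2.0 * sigma ** 2))
    num = np.convolve(samples * certainty, g, mode="same")
    den = np.convolve(certainty, g, mode="same")
    return num / np.maximum(den, 1e-12)

# Sparse, irregular samples of a smooth signal (a 1-D stand-in for the
# unsynchronized rotated-slice acquisitions described above).
x = np.linspace(0.0, 2.0 * np.pi, 200)
truth = np.sin(x)
rng = np.random.default_rng(2)
mask = rng.random(200) < 0.25              # ~25% of positions observed
recon = normalized_convolution(np.where(mask, truth, 0.0),
                               mask.astype(float), sigma=4.0)
rmse = float(np.sqrt(np.mean((recon - truth) ** 2)))
print(rmse < 0.3)
```

    Dividing by the convolved certainty is what distinguishes NC from plain smoothing: gaps between samples are filled without the unobserved zeros biasing the estimate toward zero.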

  2. Studies of auroral X-ray imaging from high altitude spacecraft

    NASA Technical Reports Server (NTRS)

    Mckenzie, D. L.; Mizera, P. F.; Rice, C. J.

    1980-01-01

    Results of a study of techniques for imaging the aurora from a high altitude satellite at X-ray wavelengths are summarized. The X-ray observations allow the straightforward derivation of the primary auroral X-ray spectrum and can be made at all local times, day and night. Five candidate imaging systems are identified: X-ray telescope, multiple pinhole camera, coded aperture, rastered collimator, and imaging collimator. Examples of each are specified, subject to common weight and size limits which allow them to be intercompared. The imaging ability of each system is tested using a wide variety of sample spectra which are based on previous satellite observations. The study shows that the pinhole camera and coded aperture are both good auroral imaging systems. The two collimated detectors are significantly less sensitive. The X-ray telescope provides better image quality than the other systems in almost all cases, but a limitation to energies below about 4 keV prevents this system from providing the spectral data essential to deriving electron spectra, energy input to the atmosphere, and atmospheric densities and conductivities. The orbit selection requires a tradeoff between spatial resolution and duty cycle.

  3. Activity and function recognition for moving and static objects in urban environments from wide-area persistent surveillance inputs

    NASA Astrophysics Data System (ADS)

    Levchuk, Georgiy; Bobick, Aaron; Jones, Eric

    2010-04-01

    In this paper, we describe results from experimental analysis of a model designed to recognize activities and functions of moving and static objects from low-resolution wide-area video inputs. Our model is based on representing the activities and functions using three variables: (i) time; (ii) space; and (iii) structures. The activity and function recognition is achieved by imposing lexical, syntactic, and semantic constraints on the lower-level event sequences. In the reported research, we have evaluated the utility and sensitivity of several algorithms derived from natural language processing and pattern recognition domains. We achieved high recognition accuracy for a wide range of activity and function types in the experiments using Electro-Optical (EO) imagery collected by Wide Area Airborne Surveillance (WAAS) platform.

  4. Cerebellum - function (image)

    MedlinePlus

    The cerebellum processes input from other areas of the brain, spinal cord and sensory receptors to provide precise timing ... the skeletal muscular system. A stroke affecting the cerebellum may cause dizziness, nausea, balance and coordination problems.

  5. Image Processing, Coding, and Compression with Multiple-Point Impulse Response Functions.

    NASA Astrophysics Data System (ADS)

    Stossel, Bryan Joseph

    1995-01-01

    Aspects of image processing, coding, and compression with multiple-point impulse response functions are investigated. Topics considered include characterization of the corresponding random-walk transfer function, image recovery for images degraded by the multiple-point impulse response, and the application of the blur function to image coding and compression. It is found that although the zeros of the real and imaginary parts of the random-walk transfer function occur in continuous, closed contours, the zeros of the transfer function occur at isolated spatial frequencies. Theoretical calculations of the average number of zeros per area are in excellent agreement with experimental results obtained from computer counts of the zeros. The average number of zeros per area is proportional to the standard deviations of the real part of the transfer function as well as the first partial derivatives. Statistical parameters of the transfer function are calculated including the mean, variance, and correlation functions for the real and imaginary parts of the transfer function and their corresponding first partial derivatives. These calculations verify the assumptions required in the derivation of the expression for the average number of zeros. Interesting results are found for the correlations of the real and imaginary parts of the transfer function and their first partial derivatives. The isolated nature of the zeros in the transfer function and its characteristics at high spatial frequencies result in largely reduced reconstruction artifacts and excellent reconstructions are obtained for distributions of impulses consisting of 25 to 150 impulses. The multiple-point impulse response obscures original scenes beyond recognition. This property is important for secure transmission of data on many communication systems. The multiple-point impulse response enables the decoding and restoration of the original scene with very little distortion. 
Images prefiltered by the random-walk transfer function yield greater compression ratios than are obtained for the original scene. The multiple-point impulse response decreases the bit rate approximately 40-70% and affords near distortion-free reconstructions. Due to the lossy nature of transform-based compression algorithms, noise reduction measures must be incorporated to yield acceptable reconstructions after decompression.

  6. Quantitative myocardial perfusion from static cardiac and dynamic arterial CT

    NASA Astrophysics Data System (ADS)

    Bindschadler, Michael; Branch, Kelley R.; Alessio, Adam M.

    2018-05-01

    Quantitative myocardial blood flow (MBF) estimation by dynamic contrast enhanced cardiac computed tomography (CT) requires multi-frame acquisition of contrast transit through the blood pool and myocardium to inform the arterial input and tissue response functions. Both the input and the tissue response functions for the entire myocardium are sampled with each acquisition. However, the long breath holds and frequent sampling can result in significant motion artifacts and relatively high radiation dose. To address these limitations, we propose and evaluate a new static cardiac and dynamic arterial (SCDA) quantitative MBF approach where (1) the input function is well sampled using either prediction from pre-scan timing bolus data or measured from dynamic thin slice ‘bolus tracking’ acquisitions, and (2) the whole-heart tissue response data is limited to one contrast enhanced CT acquisition. A perfusion model uses the dynamic arterial input function to generate a family of possible myocardial contrast enhancement curves corresponding to a range of MBF values. Combined with the timing of the single whole-heart acquisition, these curves generate a lookup table relating myocardial contrast enhancement to quantitative MBF. We tested the SCDA approach in 28 patients who underwent a full dynamic CT protocol both at rest and vasodilator stress conditions. Using the measured input function plus single (enhanced CT only) or double (enhanced and contrast-free baseline CTs) myocardial acquisitions yielded MBF estimates with root mean square (RMS) errors of 1.2 ml/min/g and 0.35 ml/min/g, and radiation dose reductions of 90% and 83%, respectively. The prediction of the input function based on timing bolus data and the static acquisition had an RMS error compared to the measured input function of 26.0%, which led to MBF estimation errors more than threefold higher than those obtained using the measured input function.
SCDA presents a new, simplified approach for quantitative perfusion imaging with an acquisition strategy offering substantial radiation dose and computational complexity savings over dynamic CT.
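
    The lookup-table step of the SCDA idea can be illustrated with a deliberately simplified tissue model (pure tracer uptake with no washout, an assumption of this sketch rather than the paper's perfusion model): enhancement at the single acquisition time is then monotone in MBF and can be inverted by interpolation.

```python
import numpy as np

def mbf_lookup(t, aif, t_acq, mbf_grid):
    """Build an enhancement -> MBF lookup from a dynamic arterial input
    function (AIF) and a single whole-heart acquisition time.

    Toy tissue model (assumption of this sketch): pure tracer uptake,
    C_tissue(t_acq) = MBF * integral_0^t_acq AIF dt, so enhancement at
    t_acq is monotone in MBF and invertible by interpolation."""
    dt = t[1] - t[0]                       # assumes a uniform time grid
    auc = np.sum(aif[t <= t_acq]) * dt     # area under the AIF up to t_acq
    enhancement = np.asarray(mbf_grid) * auc
    return lambda e: np.interp(e, enhancement, mbf_grid)

t = np.linspace(0.0, 30.0, 301)                 # seconds
aif = np.exp(-0.5 * ((t - 12.0) / 3.0) ** 2)    # synthetic bolus curve
mbf_grid = np.linspace(0.2, 5.0, 100)           # ml/min/g candidates
invert = mbf_lookup(t, aif, t_acq=25.0, mbf_grid=mbf_grid)
```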

  7. Texture functions in image analysis: A computationally efficient solution

    NASA Technical Reports Server (NTRS)

    Cox, S. C.; Rose, J. F.

    1983-01-01

    A computationally efficient means for calculating texture measurements from digital images by use of the co-occurrence technique is presented. The calculation of the statistical descriptors of image texture and a solution that circumvents the need for calculating and storing a co-occurrence matrix are discussed. The results show that existing efficient algorithms for calculating sums, sums of squares, and cross products can be used to compute complex co-occurrence relationships directly from the digital image input.
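
    A minimal sketch of the idea: pair statistics such as co-occurrence contrast and correlation are moments of the joint distribution of pixel pairs, so they can be accumulated from sums, sums of squares, and cross products over the image without ever materializing a co-occurrence matrix. The offset convention and feature choice below are illustrative.

```python
import numpy as np

def cooccurrence_stats(img, dx=1, dy=0):
    """Contrast and correlation of the pixel-pair distribution at offset
    (dx, dy), accumulated from sums, sums of squares, and cross products --
    no co-occurrence matrix is ever stored."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    a = img[max(-dy, 0):h - max(dy, 0), max(-dx, 0):w - max(dx, 0)]
    b = img[max(dy, 0):h - max(-dy, 0), max(dx, 0):w - max(-dx, 0)]
    n = a.size
    sa, sb = a.sum(), b.sum()
    saa, sbb, sab = (a * a).sum(), (b * b).sum(), (a * b).sum()
    contrast = (saa - 2.0 * sab + sbb) / n        # E[(I(p) - I(p+d))^2]
    ma, mb = sa / n, sb / n
    va, vb = saa / n - ma * ma, sbb / n - mb * mb
    correlation = (sab / n - ma * mb) / np.sqrt(va * vb)
    return contrast, correlation

stripes = np.tile([0, 1], (4, 4))   # vertical stripe pattern, shape (4, 8)
contrast, corr = cooccurrence_stats(stripes, dx=1, dy=0)
```

    For the stripe pattern, every horizontal neighbor pair differs by exactly 1 and is perfectly anti-correlated, so contrast is 1 and correlation is -1.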

  8. Optical switches and switching methods

    DOEpatents

    Doty, Michael

    2008-03-04

    A device and method for collecting subject responses, particularly during magnetic imaging experiments and testing using a method such as functional MRI. The device comprises a non-metallic input device which is coupled via fiber optic cables to a computer or other data collection device. One or more optical switches transmit the subject's responses. The input device keeps the subject's fingers comfortably aligned with the switches by partially immobilizing the forearm, wrist, and/or hand of the subject. Also a robust nonmetallic switch, particularly for use with the input device and methods for optical switching.

  9. Mathematical models of the simplest fuzzy PI/PD controllers with skewed input and output fuzzy sets.

    PubMed

    Mohan, B M; Sinha, Arpita

    2008-07-01

    This paper unveils mathematical models for fuzzy PI/PD controllers which employ two skewed fuzzy sets for each of the two input variables and three skewed fuzzy sets for the output variable. The basic constituents of these models are Gamma-type and L-type membership functions for each input, trapezoidal/triangular membership functions for the output, intersection/algebraic product triangular norm, maximum/drastic sum triangular conorm, Mamdani minimum/Larsen product/drastic product inference method, and center of sums defuzzification method. The existing simplest fuzzy PI/PD controller structures derived via symmetrical fuzzy sets become special cases of the mathematical models revealed in this paper. Finally, a numerical example along with its simulation results is included to demonstrate the effectiveness of the simplest fuzzy PI controllers.

  10. Phase retrieval using regularization method in intensity correlation imaging

    NASA Astrophysics Data System (ADS)

    Li, Xiyu; Gao, Xin; Tang, Jia; Lu, Changming; Wang, Jianli; Wang, Bin

    2014-11-01

    Intensity correlation imaging (ICI) can obtain high-resolution images with ground-based low-precision mirrors; in the imaging process, a phase retrieval algorithm must be used to reconstruct the object's image. But the algorithms now used (such as the hybrid input-output algorithm) are sensitive to noise and prone to stagnation, while the signal-to-noise ratio of intensity interferometry is low, especially when imaging astronomical objects. In this paper, we build the mathematical model of phase retrieval and simplify it into a constrained optimization problem over a multi-dimensional function. A new error function was designed from the noise distribution and prior information using a regularization method. The simulation results show that the regularization method can improve the performance of the phase retrieval algorithm and produce better images, especially in low-SNR conditions.
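
    For context, the simplest member of the iterative family referred to here (error reduction, a close relative of the hybrid input-output algorithm) alternates between imposing the measured Fourier magnitude and the object-domain constraints. A noise-free toy sketch with an illustrative object and support:

```python
import numpy as np

def error_reduction(magnitude, support, n_iter=200, seed=0):
    """Alternate Fourier-magnitude and object-domain projections."""
    rng = np.random.default_rng(seed)
    g = rng.random(magnitude.shape) * support
    errs = []
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        errs.append(np.linalg.norm(np.abs(G) - magnitude)
                    / np.linalg.norm(magnitude))
        G = magnitude * np.exp(1j * np.angle(G))   # impose measured magnitude
        g = np.fft.ifft2(G).real
        g = np.where(support & (g > 0), g, 0.0)    # support + non-negativity
    return g, errs

# Toy non-negative object with known support and noise-free |FFT| data.
obj = np.zeros((32, 32))
obj[12:18, 10:20] = np.arange(60).reshape(6, 10) / 60.0
support = np.zeros((32, 32), dtype=bool)
support[12:18, 10:20] = True
mag = np.abs(np.fft.fft2(obj))
rec, errs = error_reduction(mag, support)
```

    Error reduction is monotonically non-increasing in the Fourier-domain residual but stagnates easily with noisy data, which is the weakness the regularized error function above is designed to address.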

  11. Implicit kernel sparse shape representation: a sparse-neighbors-based object segmentation framework.

    PubMed

    Yao, Jincao; Yu, Huimin; Hu, Roland

    2017-01-01

    This paper introduces a new implicit-kernel-sparse-shape-representation-based object segmentation framework. Given an input object whose shape is similar to some of the elements in the training set, the proposed model can automatically find a cluster of implicit kernel sparse neighbors to approximately represent the input shape and guide the segmentation. A distance-constrained probabilistic definition together with a dualization energy term is developed to connect high-level shape representation and low-level image information. We theoretically prove that our model not only derives from two projected convex sets but is also equivalent to a sparse-reconstruction-error-based representation in the Hilbert space. Finally, a "wake-sleep"-based segmentation framework is applied to drive the evolutionary curve to recover the original shape of the object. We test our model on two public datasets. Numerical experiments on both synthetic images and real applications show the superior capabilities of the proposed framework.

  12. Employing Sensitivity Derivatives for Robust Optimization under Uncertainty in CFD

    NASA Technical Reports Server (NTRS)

    Newman, Perry A.; Putko, Michele M.; Taylor, Arthur C., III

    2004-01-01

    A robust optimization is demonstrated on a two-dimensional inviscid airfoil problem in subsonic flow. Given uncertainties in statistically independent, random, normally distributed flow parameters (input variables), an approximate first-order statistical moment method is employed to represent the Computational Fluid Dynamics (CFD) code outputs as expected values with variances. These output quantities are used to form the objective function and constraints. The constraints are cast in probabilistic terms; that is, the probability that a constraint is satisfied is greater than or equal to some desired target probability. Gradient-based robust optimization of this stochastic problem is accomplished through use of both first and second-order sensitivity derivatives. For each robust optimization, the effect of increasing both input standard deviations and target probability of constraint satisfaction are demonstrated. This method provides a means for incorporating uncertainty when considering small deviations from input mean values.
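
    The approximate first-order statistical moment method can be sketched directly: the output mean is the function evaluated at the input means, and the output variance is the sum of squared sensitivity derivatives times the input variances. The toy two-parameter "output" below stands in for a CFD code and is purely illustrative.

```python
import numpy as np

def first_order_moments(f, mu, sigma, h=1e-6):
    """First-order second-moment propagation for independent normal inputs:
    mean(f) ~ f(mu),  var(f) ~ sum_i (df/dx_i)^2 * sigma_i^2,
    with sensitivity derivatives from central finite differences."""
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    grads = np.empty_like(mu)
    for i in range(mu.size):
        e = np.zeros_like(mu)
        e[i] = h
        grads[i] = (f(mu + e) - f(mu - e)) / (2.0 * h)
    return f(mu), float(np.sum((grads * sigma) ** 2))

# Toy stand-in for a CFD output as a function of two flow parameters.
f = lambda p: p[0] ** 2 + 3.0 * p[1]
mean, var = first_order_moments(f, mu=[2.0, 1.0], sigma=[0.1, 0.2])
# Analytic check: gradient at mu is (4, 3), so var = 0.16 + 0.36 = 0.52
```

    In the paper the sensitivities come from first and second-order derivatives of the CFD code itself rather than finite differences; the propagation formula is the same.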

  13. Command Filtering-Based Fuzzy Control for Nonlinear Systems With Saturation Input.

    PubMed

    Yu, Jinpeng; Shi, Peng; Dong, Wenjie; Lin, Chong

    2017-09-01

    In this paper, command filtering-based fuzzy control is designed for uncertain multi-input multi-output (MIMO) nonlinear systems with saturation nonlinearity input. First, the command filtering method is employed to deal with the explosion of complexity caused by the derivative of virtual controllers. Then, fuzzy logic systems are utilized to approximate the nonlinear functions of MIMO systems. Furthermore, an error compensation mechanism is introduced to overcome the drawback of the dynamic surface approach. The developed method guarantees that all signals of the systems are bounded. The effectiveness and advantages of the theoretical result are demonstrated by a simulation example.

  14. Forecast Modelling via Variations in Binary Image-Encoded Information Exploited by Deep Learning Neural Networks.

    PubMed

    Liu, Da; Xu, Ming; Niu, Dongxiao; Wang, Shoukai; Liang, Sai

    2016-01-01

    Traditional forecasting models fit a function approximation from independent variables to dependent variables. However, they usually get into trouble when data are presented in various formats, such as text, voice and image. This study proposes a novel image-encoded forecasting method in which input and output binary digital two-dimensional (2D) images are transformed from decimal data. Omitting any data analysis or cleansing steps for simplicity, all raw variables were selected and converted to binary digital images as the input of a deep learning model, a convolutional neural network (CNN). Using shared weights, pooling and multiple-layer back-propagation techniques, the CNN was adopted to locate the nexus among variations in local binary digital images. Due to the computing capability that was originally developed for binary digital bitmap manipulation, this model has significant potential for forecasting with vast volumes of data. The model was validated on a power-load forecasting dataset from the Global Energy Forecasting Competition 2012.

  15. Forecast Modelling via Variations in Binary Image-Encoded Information Exploited by Deep Learning Neural Networks

    PubMed Central

    Xu, Ming; Niu, Dongxiao; Wang, Shoukai; Liang, Sai

    2016-01-01

    Traditional forecasting models fit a function approximation from independent variables to dependent variables. However, they usually get into trouble when data are presented in various formats, such as text, voice and image. This study proposes a novel image-encoded forecasting method in which input and output binary digital two-dimensional (2D) images are transformed from decimal data. Omitting any data analysis or cleansing steps for simplicity, all raw variables were selected and converted to binary digital images as the input of a deep learning model, a convolutional neural network (CNN). Using shared weights, pooling and multiple-layer back-propagation techniques, the CNN was adopted to locate the nexus among variations in local binary digital images. Due to the computing capability that was originally developed for binary digital bitmap manipulation, this model has significant potential for forecasting with vast volumes of data. The model was validated on a power-load forecasting dataset from the Global Energy Forecasting Competition 2012. PMID:27281032
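
    One plausible reading of the decimal-to-binary-image encoding (the abstracts do not give the exact layout, so the scheme below is an assumption): each value becomes a column of bits, producing a 2D bitmap a CNN can consume.

```python
import numpy as np

def encode_binary_image(values, n_bits=8):
    """Encode non-negative integers as a binary bitmap: one column per
    value, one row per bit (most significant bit in the top row)."""
    vals = np.asarray(values, dtype=np.int64)
    shifts = np.arange(n_bits)[::-1]              # MSB first
    return ((vals[None, :] >> shifts[:, None]) & 1).astype(np.uint8)

img = encode_binary_image([5, 10, 255], n_bits=8)
weights = 2 ** np.arange(7, -1, -1)
decoded = (img * weights[:, None]).sum(axis=0)    # round-trips to [5, 10, 255]
```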

  16. Single image super-resolution based on approximated Heaviside functions and iterative refinement

    PubMed Central

    Wang, Xin-Yu; Huang, Ting-Zhu; Deng, Liang-Jian

    2018-01-01

    One method of solving the single-image super-resolution problem is to use Heaviside functions. This has been done previously by making a binary classification of image components as “smooth” and “non-smooth”, describing these with approximated Heaviside functions (AHFs), and iteration including l1 regularization. We now introduce a new method in which the binary classification of image components is extended to different degrees of smoothness and non-smoothness, these components being represented by various classes of AHFs. Taking into account the sparsity of the non-smooth components, their coefficients are l1 regularized. In addition, to pick up more image details, the new method uses an iterative refinement for the residuals between the original low-resolution input and the downsampled resulting image. Experimental results showed that the new method is superior to the original AHF method and to four other published methods. PMID:29329298

  17. Direct local building inundation depth determination in 3-D point clouds generated from user-generated flood images

    NASA Astrophysics Data System (ADS)

    Griesbaum, Luisa; Marx, Sabrina; Höfle, Bernhard

    2017-07-01

    In recent years, the number of people affected by flooding caused by extreme weather events has increased considerably. In order to provide support in disaster recovery or to develop mitigation plans, accurate flood information is necessary. Particularly pluvial urban floods, characterized by high temporal and spatial variations, are not well documented. This study proposes a new, low-cost approach to determining local flood elevation and inundation depth of buildings based on user-generated flood images. It first applies close-range digital photogrammetry to generate a geo-referenced 3-D point cloud. Second, based on estimated camera orientation parameters, the flood level captured in a single flood image is mapped to the previously derived point cloud. The local flood elevation and the building inundation depth can then be derived automatically from the point cloud. The proposed method is carried out once for each of 66 different flood images showing the same building façade. An overall accuracy of 0.05 m with an uncertainty of ±0.13 m for the derived flood elevation within the area of interest as well as an accuracy of 0.13 m ± 0.10 m for the determined building inundation depth is achieved. Our results demonstrate that the proposed method can provide reliable flood information on a local scale using user-generated flood images as input. The approach can thus allow inundation depth maps to be derived even in complex urban environments with relatively high accuracies.

  18. Remote sensing of submerged aquatic vegetation in lower Chesapeake Bay - A comparison of Landsat MSS to TM imagery

    NASA Technical Reports Server (NTRS)

    Ackleson, S. G.; Klemas, V.

    1987-01-01

    Landsat MSS and TM imagery, obtained simultaneously over Guinea Marsh, VA, was analyzed and compared for its ability to detect submerged aquatic vegetation (SAV). An unsupervised clustering algorithm was applied to each image, where the input classification parameters are defined as functions of apparent sensor noise. Class confidence and accuracy were computed for all water areas by comparing the classified images, pixel by pixel, to rasterized SAV distributions derived from color aerial photography. To illustrate the effect of water depth on classification error, areas of depth greater than 1.9 m were masked, and class confidence and accuracy were recalculated. A single-scattering radiative-transfer model is used to illustrate how percent canopy cover and water depth affect the volume reflectance from a water column containing SAV. For a submerged canopy that is morphologically and optically similar to the Zostera marina inhabiting lower Chesapeake Bay, dense canopies may be isolated by masking optically deep water. For less dense canopies, the effect of increasing water depth is to increase the apparent percent crown cover, which may result in classification error.

  19. Nonlinear ARMA models for the D(st) index and their physical interpretation

    NASA Technical Reports Server (NTRS)

    Vassiliadis, D.; Klimas, A. J.; Baker, D. N.

    1996-01-01

    Time series models successfully reproduce or predict geomagnetic activity indices from solar wind parameters. A method is presented that converts a type of nonlinear filter, the nonlinear Autoregressive Moving Average (ARMA) model to the nonlinear damped oscillator physical model. The oscillator parameters, the growth and decay, the oscillation frequencies and the coupling strength to the input are derived from the filter coefficients. Mathematical methods are derived to obtain unique and consistent filter coefficients while keeping the prediction error low. These methods are applied to an oscillator model for the Dst geomagnetic index driven by the solar wind input. A data set is examined in two ways: the model parameters are calculated as averages over short time intervals, and a nonlinear ARMA model is calculated and the model parameters are derived as a function of the phase space.
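
    The filter-to-oscillator conversion can be sketched for the AR(2) special case: the characteristic roots of the autoregressive polynomial directly yield the decay rate and oscillation frequency. The sampling interval and parameter values below are illustrative, not those of the Dst study.

```python
import numpy as np

def ar2_to_oscillator(a1, a2, dt=1.0):
    """Map AR(2) coefficients x_t = a1*x_{t-1} + a2*x_{t-2} + (input terms)
    to damped-oscillator parameters: (growth/decay rate, angular frequency)."""
    r = np.roots([1.0, -a1, -a2])[0]       # one of the complex-conjugate roots
    return np.log(np.abs(r)) / dt, abs(np.angle(r)) / dt

# Exact AR(2) coefficients for a sampled oscillator x'' + 2*gamma*x' + w0^2*x = 0
gamma, w0, dt = 0.1, 1.0, 0.1
wd = np.sqrt(w0**2 - gamma**2)             # damped natural frequency
a1 = 2.0 * np.exp(-gamma * dt) * np.cos(wd * dt)
a2 = -np.exp(-2.0 * gamma * dt)
decay, freq = ar2_to_oscillator(a1, a2, dt)   # recovers (-gamma, wd)
```

    A negative decay rate corresponds to ring-down of the index after a driving interval; fitting the coefficients over short windows, as in the paper, lets these parameters vary with activity level.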

  20. Generalized formula for electron emission taking account of the polaron effect

    NASA Astrophysics Data System (ADS)

    Barengolts, Yu A.; Beril, S. I.; Barengolts, S. A.

    2018-01-01

    A generalized formula is derived for the electron emission current as a function of temperature, field, and electron work function in a metal-dielectric system that takes account of the quantum nature of the image forces. In deriving the formula, the Fermi-Dirac distribution for electrons in a metal and the quantum potential of the image obtained in the context of electron polaron theory are used.

  1. Flight-Determined, Subsonic, Lateral-Directional Stability and Control Derivatives of the Thrust-Vectoring F-18 High Angle of Attack Research Vehicle (HARV), and Comparisons to the Basic F-18 and Predicted Derivatives

    NASA Technical Reports Server (NTRS)

    Iliff, Kenneth W.; Wang, Kon-Sheng Charles

    1999-01-01

    The subsonic, lateral-directional, stability and control derivatives of the thrust-vectoring F-18 High Angle of Attack Research Vehicle (HARV) are extracted from flight data using a maximum likelihood parameter identification technique. State noise is accounted for in the identification formulation and is used to model the uncommanded forcing functions caused by unsteady aerodynamics. Preprogrammed maneuvers provided independent control surface inputs, eliminating problems of identifiability related to correlations between the aircraft controls and states. The HARV derivatives are plotted as functions of angles of attack between 10deg and 70deg and compared to flight estimates from the basic F-18 aircraft and to predictions from ground and wind tunnel tests. Unlike maneuvers of the basic F-18 aircraft, the HARV maneuvers were very precise and repeatable, resulting in tightly clustered estimates with small uncertainty levels. Significant differences were found between flight and prediction; however, some of these differences may be attributed to differences in the range of sideslip or input amplitude over which a given derivative was evaluated, and to differences between the HARV external configuration and that of the basic F-18 aircraft, upon which most of the prediction was based. Some HARV derivative fairings have been adjusted using basic F-18 derivatives (with low uncertainties) to help account for differences in variable ranges and the lack of HARV maneuvers at certain angles of attack.

  2. Uncertainty in Measurement: A Review of Monte Carlo Simulation Using Microsoft Excel for the Calculation of Uncertainties Through Functional Relationships, Including Uncertainties in Empirically Derived Constants

    PubMed Central

    Farrance, Ian; Frenkel, Robert

    2014-01-01

    The Guide to the Expression of Uncertainty in Measurement (usually referred to as the GUM) provides the basic framework for evaluating uncertainty in measurement. The GUM however does not always provide clearly identifiable procedures suitable for medical laboratory applications, particularly when internal quality control (IQC) is used to derive most of the uncertainty estimates. The GUM modelling approach requires advanced mathematical skills for many of its procedures, but Monte Carlo simulation (MCS) can be used as an alternative for many medical laboratory applications. In particular, calculations for determining how uncertainties in the input quantities to a functional relationship propagate through to the output can be accomplished using a readily available spreadsheet such as Microsoft Excel. The MCS procedure uses algorithmically generated pseudo-random numbers which are then forced to follow a prescribed probability distribution. When IQC data provide the uncertainty estimates the normal (Gaussian) distribution is generally considered appropriate, but MCS is by no means restricted to this particular case. With input variations simulated by random numbers, the functional relationship then provides the corresponding variations in the output in a manner which also provides its probability distribution. The MCS procedure thus provides output uncertainty estimates without the need for the differential equations associated with GUM modelling. The aim of this article is to demonstrate the ease with which Microsoft Excel (or a similar spreadsheet) can be used to provide an uncertainty estimate for measurands derived through a functional relationship. In addition, we also consider the relatively common situation where an empirically derived formula includes one or more ‘constants’, each of which has an empirically derived numerical value. 
Such empirically derived ‘constants’ must also have associated uncertainties which propagate through the functional relationship and contribute to the combined standard uncertainty of the measurand. PMID:24659835
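
    The spreadsheet procedure described above translates directly into a few lines of vectorized code. A minimal sketch with a hypothetical functional relationship y = a·x/b, where a and b are empirically derived "constants" carrying their own standard uncertainties:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200_000

# Hypothetical functional relationship y = a*x/b.  x is the measured input;
# a and b are empirically derived 'constants' with their own uncertainties.
x = rng.normal(10.0, 0.2, N)      # measured quantity, u(x) = 0.2
a = rng.normal(2.50, 0.05, N)     # empirical constant, u(a) = 0.05
b = rng.normal(1.25, 0.01, N)     # empirical constant, u(b) = 0.01
y = a * x / b

y_mean = y.mean()                 # best estimate of the measurand
u_y = y.std(ddof=1)               # combined standard uncertainty
# First-order GUM propagation predicts
# u_y ~ 20 * sqrt(0.02^2 + 0.02^2 + 0.008^2) ~ 0.59, for comparison.
```

    As the article notes for Excel, no differential equations are needed: the simulated output sample carries the full probability distribution, from which the standard uncertainty (and, if desired, coverage intervals) can be read off directly.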

  3. Uncertainty in measurement: a review of monte carlo simulation using microsoft excel for the calculation of uncertainties through functional relationships, including uncertainties in empirically derived constants.

    PubMed

    Farrance, Ian; Frenkel, Robert

    2014-02-01

    The Guide to the Expression of Uncertainty in Measurement (usually referred to as the GUM) provides the basic framework for evaluating uncertainty in measurement. The GUM however does not always provide clearly identifiable procedures suitable for medical laboratory applications, particularly when internal quality control (IQC) is used to derive most of the uncertainty estimates. The GUM modelling approach requires advanced mathematical skills for many of its procedures, but Monte Carlo simulation (MCS) can be used as an alternative for many medical laboratory applications. In particular, calculations for determining how uncertainties in the input quantities to a functional relationship propagate through to the output can be accomplished using a readily available spreadsheet such as Microsoft Excel. The MCS procedure uses algorithmically generated pseudo-random numbers which are then forced to follow a prescribed probability distribution. When IQC data provide the uncertainty estimates the normal (Gaussian) distribution is generally considered appropriate, but MCS is by no means restricted to this particular case. With input variations simulated by random numbers, the functional relationship then provides the corresponding variations in the output in a manner which also provides its probability distribution. The MCS procedure thus provides output uncertainty estimates without the need for the differential equations associated with GUM modelling. The aim of this article is to demonstrate the ease with which Microsoft Excel (or a similar spreadsheet) can be used to provide an uncertainty estimate for measurands derived through a functional relationship. In addition, we also consider the relatively common situation where an empirically derived formula includes one or more 'constants', each of which has an empirically derived numerical value. 
Such empirically derived 'constants' must also have associated uncertainties which propagate through the functional relationship and contribute to the combined standard uncertainty of the measurand.

  4. Separation of input function for rapid measurement of quantitative CMRO2 and CBF in a single PET scan with a dual tracer administration method

    NASA Astrophysics Data System (ADS)

    Kudomi, Nobuyuki; Watabe, Hiroshi; Hayashi, Takuya; Iida, Hidehiro

    2007-04-01

    Cerebral metabolic rate of oxygen (CMRO2), oxygen extraction fraction (OEF) and cerebral blood flow (CBF) images can be quantified using positron emission tomography (PET) by administering 15O-labelled water (H215O) and oxygen (15O2). Conventionally, those images are measured with separate scans for three tracers: C15O for CBV, H215O for CBF, and 15O2 for CMRO2, with additional waiting times between the scans to minimize the influence of radioactivity from the previous tracers, which results in a relatively long study period. We have proposed a dual tracer autoradiographic (DARG) approach (Kudomi et al 2005), which enables us to measure CBF, OEF and CMRO2 rapidly by sequentially administering H215O and 15O2 within a short time. Because quantitative CBF and CMRO2 values are sensitive to the arterial input function, it is necessary to obtain an accurate input function, and a drawback of this approach is that it requires separation of the measured arterial blood time-activity curve (TAC) into pure water and oxygen input functions in the presence of residual radioactivity from the first injected tracer. For this separation, frequent manual sampling was previously required. The present paper describes two calculation methods, namely a linear and a model-based method, to separate the measured arterial TAC into its water and oxygen components. In order to validate these methods, we first generated a blood TAC for the DARG approach by combining the water and oxygen input functions obtained in a series of PET studies on normal human subjects. The combined data were then separated into water and oxygen components by the present methods. CBF and CMRO2 were calculated using those separated input functions and the tissue TAC.
Errors in the CBF and CMRO2 values obtained by the DARG approach remained within the acceptable range, i.e., within 5%, when the area under the curve of the input function of the second tracer was larger than half that of the first. Bias and deviation in those values were also comparable to those of the conventional method when noise was imposed on the arterial TAC. We conclude that the present calculation-based methods can be of use for quantitatively calculating CBF and CMRO2 with the DARG approach.

  5. Adaptive sigmoid function bihistogram equalization for image contrast enhancement

    NASA Astrophysics Data System (ADS)

    Arriaga-Garcia, Edgar F.; Sanchez-Yanez, Raul E.; Ruiz-Pinales, Jose; Garcia-Hernandez, Ma. de Guadalupe

    2015-09-01

    Contrast enhancement plays a key role in a wide range of applications including consumer electronic applications, such as video surveillance, digital cameras, and televisions. The main goal of contrast enhancement is to increase the quality of images. However, most state-of-the-art methods induce different types of distortion such as intensity shift, wash-out, noise, intensity burn-out, and intensity saturation. In addition, in consumer electronics, simple and fast methods are required in order to be implemented in real time. A bihistogram equalization method based on adaptive sigmoid functions is proposed. It consists of splitting the image histogram into two parts that are equalized independently by using adaptive sigmoid functions. In order to preserve the mean brightness of the input image, the parameter of the sigmoid functions is chosen to minimize the absolute mean brightness metric. Experiments on the Berkeley database have shown that the proposed method improves the quality of images and preserves their mean brightness. An application to improve the colorfulness of images is also presented.
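As a rough illustration of the core idea (not the authors' algorithm: their sigmoid parameter is chosen adaptively to minimize an absolute mean-brightness metric, whereas this sketch uses a fixed parameter), a bihistogram equalization step with sigmoid remapping might look like:

```python
import numpy as np

def sigmoid_bhe(img, alpha=0.05):
    """Sketch of bi-histogram equalization with sigmoid remapping.

    Splits the intensity range at the image mean and remaps each half
    through a logistic (sigmoid) curve centred on that half. The paper's
    adaptive choice of the sigmoid parameter (minimising an absolute
    mean-brightness metric) is replaced by a fixed alpha for brevity.
    """
    img = np.asarray(img, dtype=float)
    m = img.mean()

    def remap(x, lo, hi):
        c = 0.5 * (lo + hi)                      # centre of this half
        s = 1.0 / (1.0 + np.exp(-alpha * (x - c)))
        s_lo = 1.0 / (1.0 + np.exp(-alpha * (lo - c)))
        s_hi = 1.0 / (1.0 + np.exp(-alpha * (hi - c)))
        # Rescale so lo maps to lo and hi maps to hi within each half
        return lo + (hi - lo) * (s - s_lo) / (s_hi - s_lo)

    out = np.where(img <= m, remap(img, 0.0, m), remap(img, m, 255.0))
    return np.clip(out, 0, 255)
```

Because each half maps onto its own sub-range, the mean brightness is disturbed far less than by global histogram equalization, which is the property the paper optimizes for.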

  6. CT Image Sequence Analysis for Object Recognition - A Rule-Based 3-D Computer Vision System

    Treesearch

    Dongping Zhu; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman

    1991-01-01

    Research is now underway to create a vision system for hardwood log inspection using a knowledge-based approach. In this paper, we present a rule-based, 3-D vision system for locating and identifying wood defects using topological, geometric, and statistical attributes. A number of different features can be derived from the 3-D input scenes. These features and evidence...

  7. 3D shape recovery from image focus using gray level co-occurrence matrix

    NASA Astrophysics Data System (ADS)

    Mahmood, Fahad; Munir, Umair; Mehmood, Fahad; Iqbal, Javaid

    2018-04-01

    Recovering a precise and accurate 3-D shape of a target object using a robust 3-D shape recovery algorithm is a long-standing objective of the computer vision community. The focus measure algorithm plays an important role in this architecture: it converts the color values of each pixel of the acquired 2-D image dataset into corresponding focus values. After convolving the focus measure filter with the input 2-D image dataset, a 3-D shape recovery approach is applied to recover the depth map. In this work, we propose the gray level co-occurrence matrix (GLCM), along with its statistical features, for computing the focus information of the image dataset. The GLCM quantifies the texture present in the image using statistical features derived from the joint probability distribution of gray-level pairs in the input image. Finally, we quantify the focus value of the input image using a Gaussian mixture model. Owing to its low computational complexity, sharp focus measure curve, robustness to random noise sources, and accuracy, it is a superior alternative to most recently proposed 3-D shape recovery approaches. The algorithm is investigated in depth on real image sequences and a synthetic image dataset, and its efficiency is compared with state-of-the-art 3-D shape recovery approaches. Finally, by means of two global statistical measures, root mean square error and correlation, we claim that this approach, in spite of its simplicity, generates accurate results.
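A minimal numpy sketch of one GLCM focus statistic (the "contrast" feature for a single horizontal offset; the paper combines several GLCM statistics and a Gaussian mixture model, which are not reproduced here):

```python
import numpy as np

def glcm_contrast(img, levels=8):
    """Sketch of a GLCM-based focus measure: only the 'contrast'
    feature over a horizontal offset of one pixel is shown. Sharper,
    more textured patches give larger values; defocused (smoothed)
    patches give smaller ones."""
    img = np.asarray(img, dtype=float)
    # Quantize intensities to a small number of gray levels
    q = np.clip((img / (img.max() + 1e-12) * levels).astype(int), 0, levels - 1)
    # Co-occurrence counts for pixel pairs at horizontal offset (0, 1)
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    p = glcm / glcm.sum()                       # joint probability
    i, j = np.indices((levels, levels))
    return float(np.sum(p * (i - j) ** 2))      # higher = more texture/focus
```

In a shape-from-focus stack, such a statistic would be evaluated per pixel neighborhood per frame, and the frame maximizing it gives that pixel's depth estimate.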

  8. Altered functional connectivity of the amygdaloid input nuclei in adolescents and young adults with autism spectrum disorder: a resting state fMRI study.

    PubMed

    Rausch, Annika; Zhang, Wei; Haak, Koen V; Mennes, Maarten; Hermans, Erno J; van Oort, Erik; van Wingen, Guido; Beckmann, Christian F; Buitelaar, Jan K; Groen, Wouter B

    2016-01-01

    Amygdala dysfunction is hypothesized to underlie the social deficits observed in autism spectrum disorders (ASD). However, the neurobiological basis of this hypothesis is underspecified because it is unknown whether ASD relates to abnormalities of the amygdaloid input or output nuclei. Here, we investigated the functional connectivity of the amygdaloid social-perceptual input nuclei and emotion-regulation output nuclei in ASD versus controls. We collected resting state functional magnetic resonance imaging (fMRI) data, tailored to provide optimal sensitivity in the amygdala as well as the neocortex, in 20 adolescents and young adults with ASD and 25 matched controls. We performed a regular correlation analysis between the entire amygdala (EA) and the whole brain and used a partial correlation analysis to investigate whole-brain functional connectivity uniquely related to each of the amygdaloid subregions. Between-group comparison of regular EA correlations showed significantly reduced connectivity in visuospatial and superior parietal areas in ASD compared to controls. Partial correlation analysis revealed that this effect was driven by the left superficial and right laterobasal input subregions, but not the centromedial output nuclei. These results indicate reduced connectivity of specifically the amygdaloid sensory input channels in ASD, suggesting that abnormal amygdalo-cortical connectivity can be traced down to the socio-perceptual pathways.

  9. Programmable remapper for image processing

    NASA Technical Reports Server (NTRS)

    Juday, Richard D. (Inventor); Sampsell, Jeffrey B. (Inventor)

    1991-01-01

    A video-rate coordinate remapper includes a memory for storing a plurality of transformations on look-up tables for remapping input images from one coordinate system to another. Such transformations are operator selectable. The remapper includes a collective processor by which certain input pixels of an input image are transformed to a portion of the output image in a many-to-one relationship. The remapper includes an interpolative processor by which the remaining input pixels of the input image are transformed to another portion of the output image in a one-to-many relationship. The invention includes certain specific transforms for creating output images useful for certain defects of visually impaired people. The invention also includes means for shifting input pixels and means for scrolling the output matrix.
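The look-up-table style of remapping described in the patent can be sketched in software. This is only an illustration of the gather-from-LUT idea, not the patented hardware design; the polar-to-cartesian transform is an arbitrary example, not one of the patent's specific transforms:

```python
import numpy as np

# Toy look-up-table coordinate remap: each output pixel stores the
# input coordinate it pulls from (a many-to-one gather, in the spirit
# of the patent's collective processor). The LUT is built once and
# reused, which is what makes video-rate operation feasible.
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]

cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
r = np.hypot(yy - cy, xx - cx) / np.hypot(cy, cx)      # normalized radius
theta = np.arctan2(yy - cy, xx - cx)
src_y = np.clip((r * (h - 1)).astype(int), 0, h - 1)   # radius -> row
src_x = np.clip(((theta + np.pi) / (2 * np.pi) * (w - 1)).astype(int), 0, w - 1)

img = np.random.default_rng(1).random((h, w))
remapped = img[src_y, src_x]                           # single LUT gather
```

The one-to-many (interpolative) path of the patent would instead scatter each input pixel to several output locations with interpolation weights.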

  10. Agro-hydrology and multi-temporal high-resolution remote sensing: toward an explicit spatial processes calibration

    NASA Astrophysics Data System (ADS)

    Ferrant, S.; Gascoin, S.; Veloso, A.; Salmon-Monviola, J.; Claverie, M.; Rivalland, V.; Dedieu, G.; Demarez, V.; Ceschia, E.; Probst, J.-L.; Durand, P.; Bustillo, V.

    2014-12-01

    The growing availability of high-resolution satellite image series offers new opportunities in agro-hydrological research and modeling. We investigated the possibilities offered for improving crop-growth dynamic simulation with the distributed agro-hydrological model: topography-based nitrogen transfer and transformation (TNT2). We used a leaf area index (LAI) map series derived from 105 Formosat-2 (F2) images covering the period 2006-2010. The TNT2 model (Beaujouan et al., 2002), calibrated against discharge and in-stream nitrate fluxes for the period 1985-2001, was tested on the 2005-2010 data set (climate, land use, agricultural practices, and discharge and nitrate fluxes at the outlet). Data from the first year (2005) were used to initialize the hydrological model. A priori agricultural practices obtained from an extensive field survey, such as seeding date, crop cultivar, and amount of fertilizer, were used as input variables. Continuous values of LAI as a function of cumulative daily temperature were obtained at the crop-field level by fitting a double logistic equation against discrete satellite-derived LAI. Model predictions of LAI dynamics using the a priori input parameters displayed temporal shifts from the observed LAI profiles that were irregularly distributed in space (between field crops) and time (between years). By resetting the seeding date at the crop-field level, we developed an optimization method designed to efficiently minimize this temporal shift and better fit the crop growth against both the spatial observations and crop production. This optimization of simulated LAI has a negligible impact on water budgets at the catchment scale (1 mm yr-1 on average) but a noticeable impact on in-stream nitrogen fluxes (around 12%), which is of interest when considering nitrate stream contamination issues and the objectives of TNT2 modeling. 
This study demonstrates the potential contribution of the forthcoming high spatial and temporal resolution products from the Sentinel-2 satellite mission for improving agro-hydrological modeling by constraining the spatial representation of crop productivity.
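The double-logistic fit of LAI against cumulative temperature described above can be sketched as follows. The parameterization and all numerical values are illustrative assumptions, not those of the study, and scipy's generic `curve_fit` stands in for whatever fitting routine the authors used:

```python
import numpy as np
from scipy.optimize import curve_fit

def double_logistic(gdd, lai_max, k1, t1, k2, t2):
    """LAI as a double logistic of cumulative daily temperature (GDD):
    a rising sigmoid (green-up) minus a falling one (senescence).
    Parameter names are illustrative, not those used in the paper."""
    up = 1.0 / (1.0 + np.exp(-k1 * (gdd - t1)))
    down = 1.0 / (1.0 + np.exp(-k2 * (gdd - t2)))
    return lai_max * (up - down)

# Synthetic 'satellite-derived' LAI observations (hypothetical values)
gdd = np.linspace(0, 2000, 25)
true = double_logistic(gdd, 5.0, 0.01, 600.0, 0.008, 1500.0)
rng = np.random.default_rng(0)
obs = true + rng.normal(0, 0.05, gdd.size)

# Fit the continuous curve to the discrete, noisy retrievals
p0 = [4.0, 0.02, 500.0, 0.02, 1400.0]
popt, _ = curve_fit(double_logistic, gdd, obs, p0=p0, maxfev=20000)
```

Shifting `t1` per field is essentially what the seeding-date optimization does: it slides the green-up sigmoid in thermal time until the simulated and observed profiles align.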

  11. Agro-hydrology and multi temporal high resolution remote sensing: toward an explicit spatial processes calibration

    NASA Astrophysics Data System (ADS)

    Ferrant, S.; Gascoin, S.; Veloso, A.; Salmon-Monviola, J.; Claverie, M.; Rivalland, V.; Dedieu, G.; Demarez, V.; Ceschia, E.; Probst, J.-L.; Durand, P.; Bustillo, V.

    2014-07-01

    The recent and forthcoming availability of high-resolution satellite image series offers new opportunities in agro-hydrological research and modeling. We investigated the possibilities offered for improving crop-growth dynamic simulation with the distributed agro-hydrological model Topography-based Nitrogen Transfer and Transformation (TNT2), using LAI map series derived from 105 Formosat-2 (F2) images covering the period 2006-2010. The TNT2 model (Beaujouan et al., 2002), calibrated with discharge and in-stream nitrate fluxes for the period 1985-2001, was tested on the 2006-2010 dataset (climate, land use, agricultural practices, discharge and nitrate fluxes at the outlet). A priori agricultural practices obtained from an extensive field survey, such as seeding date, crop cultivar, and fertilizer amount, were used as input variables. Continuous values of LAI as a function of cumulative daily temperature were obtained at the crop-field level by fitting a double logistic equation against discrete satellite-derived LAI. Model predictions of LAI dynamics with a priori input parameters showed a temporal shift from the observed LAI profiles that was irregularly distributed in space (between field crops) and time (between years). By resetting the seeding date at the crop-field level, we propose an optimization method to efficiently minimize this temporal shift and better fit the crop growth against both the spatial observations and crop production. This optimization of simulated LAI has a negligible impact on the water budget at the catchment scale (1 mm yr-1 on average) but a noticeable impact on in-stream nitrogen fluxes (around 12%), which is of interest considering nitrate stream contamination issues and the objectives of TNT2 modeling. This study demonstrates the contribution of the forthcoming high spatial and temporal resolution products of the Sentinel-2 satellite mission to improving agro-hydrological modeling by constraining the spatial representation of crop productivity.

  12. Functional recovery of odor representations in regenerated sensory inputs to the olfactory bulb

    PubMed Central

    Cheung, Man C.; Jang, Woochan; Schwob, James E.; Wachowiak, Matt

    2014-01-01

    The olfactory system has a unique capacity for recovery from peripheral damage. After injury to the olfactory epithelium (OE), olfactory sensory neurons (OSNs) regenerate and re-converge on target glomeruli of the olfactory bulb (OB). Thus far, this process has been described anatomically for only a few defined populations of OSNs. Here we characterize this regeneration at a functional level by assessing how odor representations carried by OSN inputs to the OB recover after massive loss and regeneration of the sensory neuron population. We used chronic imaging of mice expressing synaptopHluorin in OSNs to monitor odor representations in the dorsal OB before lesion by the olfactotoxin methyl bromide and after a 12 week recovery period. Methyl bromide eliminated functional inputs to the OB, and these inputs recovered to near-normal levels of response magnitude within 12 weeks. We also found that the functional topography of odor representations recovered after lesion, with odorants evoking OSN input to glomerular foci within the same functional domains as before lesion. At a finer spatial scale, however, we found evidence for mistargeting of regenerated OSN axons onto OB targets, with odorants evoking synaptopHluorin signals in small foci that did not conform to a typical glomerular structure but whose distribution was nonetheless odorant-specific. These results indicate that OSNs have a robust ability to reestablish functional inputs to the OB and that the mechanisms underlying the topography of bulbar reinnervation during development persist in the adult and allow primary sensory representations to be largely restored after massive sensory neuron loss. PMID:24431990

  13. Effective Interpolation of Incomplete Satellite-Derived Leaf-Area Index Time Series for the Continental United States

    NASA Technical Reports Server (NTRS)

    Jasinski, Michael F.; Borak, Jordan S.

    2008-01-01

    Many earth science modeling applications employ continuous input data fields derived from satellite data. Environmental factors, sensor limitations and algorithmic constraints lead to data products of inherently variable quality. This necessitates interpolation of one form or another in order to produce high quality input fields free of missing data. The present research tests several interpolation techniques as applied to satellite-derived leaf area index, an important quantity in many global climate and ecological models. The study evaluates and applies a variety of interpolation techniques for the Moderate Resolution Imaging Spectroradiometer (MODIS) Leaf-Area Index Product over the time period 2001-2006 for a region containing the conterminous United States. Results indicate that the accuracy of an individual interpolation technique depends upon the underlying land cover. Spatial interpolation provides better results in forested areas, while temporal interpolation performs more effectively over non-forest cover types. Combination of spatial and temporal approaches offers superior interpolative capabilities to any single method, and in fact, generation of continuous data fields requires a hybrid approach such as this.
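A minimal sketch of such a hybrid scheme, assuming an LAI stack with NaN gaps. The study's actual spatial interpolators are more sophisticated than the whole-scene mean used here as a fallback; this only illustrates the temporal-then-spatial ordering:

```python
import numpy as np

def fill_lai(stack):
    """Sketch of hybrid gap-filling for an LAI time series stack
    (time, y, x) with NaNs for missing retrievals: first linear
    interpolation along time per pixel, then a spatial fallback
    (here simply the scene mean) for pixels still missing."""
    out = stack.astype(float).copy()
    nt = out.shape[0]
    t = np.arange(nt)
    # Temporal pass: per-pixel linear interpolation over valid times
    for iy in range(out.shape[1]):
        for ix in range(out.shape[2]):
            v = out[:, iy, ix]            # view into out
            ok = ~np.isnan(v)
            if 0 < ok.sum() < nt:
                v[~ok] = np.interp(t[~ok], t[ok], v[ok])
    # Spatial pass: fill any remaining gaps per time step
    for it in range(nt):
        frame = out[it]
        if np.isnan(frame).any() and not np.isnan(frame).all():
            frame[np.isnan(frame)] = np.nanmean(frame)
    return out
```

A land-cover-aware version would choose the temporal or spatial pass first per pixel, which is the paper's key finding (temporal for non-forest, spatial for forest).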

  14. Cross-Validation of Suspended Sediment Concentrations Derived from Satellite Imagery and Numerical Modeling of the 1997 New Year's Flood on the Feather River, CA

    NASA Astrophysics Data System (ADS)

    Kilham, N. E.

    2009-12-01

    Image analysis was applied to assess suspended sediment concentrations (SSC) predicted by a numerical model of 2D hydraulics and sediment transport (Telemac-2D), coupled to a solver for the advection-diffusion equation (SISYPHE) and representing 18 days of flooding over 70 kilometers of the lower Feather-Yuba Rivers. SISYPHE treats the suspended load as a tracer, removed from the flow if the bed shear velocity u* is lower than an empirically derived threshold (u*d = 7.8 × 10-3 m s-1). Agreement between model (D50 = 0.03 mm) and image-derived SSC (mg L-1) suggests that image interpretation could prove to be a viable approach for verifying spatially distributed models of floodplain sediment transport if imagery is acquired for a particular flood at sufficient spatial and radiometric resolution. However, remotely derived SSC represents the integrated concentration of suspended sediment at the water surface. Hence, comparing SSC magnitudes derived from imagery and numerical modeling requires that a relationship first be established between the total suspended load and the portion of this load suspended within the optical range of the sensor (e.g., Aalto, 1995). Using the optical depth (0.5 m) determined from radiative transfer modeling, surface SSC measured from a 1/14/97 Landsat TM5 image (30 m) were converted to depth-integrated SSC with the Rouse (1937) equation. Surface concentrations were derived using a look-up table for the sensor to convert endmember fractions obtained from a spectral mixture analysis of the image. A two-endmember model (2.0 and 203 mg L-1) was used, with synthetic endmembers derived from optical and radiative transfer modeling and inversion of field spectra collected from the Sacramento and Feather Rivers and matched to measured SSC values. Remotely sensed SSC patterns were then compared to the Telemac results for the same day and time. 
Modeled concentrations are a function of both the rating-curve boundary conditions and the transport and deposition calculations. At each of three upstream channel boundaries, hourly SSC was derived from instantaneous discharge and SSC records at USGS gages for winter months (December-April) following dam closure on the Feather, Yuba, and Bear Rivers (r2 = 0.61; r2 = 0.81; r2 = 0.55). Modeled channel concentrations declined downstream from about 90 mg L-1 to 40 mg L-1 as sediment input was depleted through decanting of river water overbank, advection through floodplain channels, and deposition onto the floodplain. Similar downstream declines in the image values suggest that bed and bank erosion downstream of the major gages did not contribute much new sediment two weeks after the flood peak. Model-predicted concentrations agree with image-derived concentrations to within 10 mg L-1, although the model predicts a more rapid drawdown of floodplain flow than is apparent from the image.
Aalto, R., 1995. Discordance between suspended sediment diffusion theory and observed sediment concentration profiles in rivers. M.S. thesis, University of Washington, Seattle, WA.
Rouse, H.R., 1937. Modern conceptions of the mechanics of turbulence. Transactions, American Society of Civil Engineers, 102: 463-543.
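The surface-to-depth-integrated conversion via the Rouse profile can be sketched as follows. The Rouse number, reference height, depth, and discretization are hypothetical placeholders, not values from the study:

```python
import numpy as np

def depth_integrated_ssc(c_surf, h, z_opt=0.5, rouse=0.5, z_ref=0.05):
    """Sketch: convert a remotely sensed near-surface SSC (mg/L,
    representative of the top z_opt metres) to a depth-averaged SSC
    using the Rouse (1937) suspended-sediment profile
        c(z)/c_a = [((h - z)/z) * (a/(h - a))]^Z
    with z the height above the bed, a = z_ref, and Z the Rouse number.
    All parameter values are illustrative assumptions."""
    z = np.linspace(z_ref, h - 1e-3, 500)        # heights above bed (m)
    shape = (((h - z) / z) * (z_ref / (h - z_ref))) ** rouse
    # Height corresponding to the optically sampled near-surface layer
    z_s = h - z_opt
    shape_s = (((h - z_s) / z_s) * (z_ref / (h - z_ref))) ** rouse
    profile = c_surf * shape / shape_s           # anchor at the surface value
    return float(profile.mean())                 # uniform-z depth average
```

Because concentration increases toward the bed for a positive Rouse number, the depth-averaged value always exceeds the optically sensed surface value, which is the correction the study applies before comparing image and model SSC.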

  15. Poster - Thur Eve - 05: Safety systems and failure modes and effects analysis for a magnetic resonance image guided radiation therapy system.

    PubMed

    Lamey, M; Carlone, M; Alasti, H; Bissonnette, J P; Borg, J; Breen, S; Coolens, C; Heaton, R; Islam, M; van Proojen, M; Sharpe, M; Stanescu, T; Jaffray, D

    2012-07-01

    An online magnetic resonance guided radiation therapy (MRgRT) system is under development. The system comprises an MRI scanner capable of traveling between and into HDR brachytherapy and external beam radiation therapy vaults. It will provide online MR images immediately prior to radiation therapy; these images will be registered to a planning image and used for image guidance. To address system safety, we performed a failure modes and effects analysis (FMEA). A process tree of the facility function was developed. Using the process tree and an initial design of the facility as guidelines, possible failure modes were identified, and root causes were identified for each. Severity, detectability, and occurrence scores were assigned to each possible failure, and suggestions were developed to reduce the possibility of an event. The process tree consists of nine main inputs, each with 5-10 sub-inputs; tertiary inputs were also defined. The process tree ensures that the overall safety of the system has been considered. Several possible failure modes were identified, relevant to the design, construction, commissioning and operating phases of the facility. The utility of the analysis can be seen in that it has spawned projects prior to installation and has led to suggestions in the design of the facility. © 2012 American Association of Physicists in Medicine.
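The scoring step of an FMEA like this one is commonly summarized as a risk priority number (RPN); a small sketch with hypothetical failure modes and scores, not the ones actually identified in the analysis:

```python
# Sketch of FMEA scoring: each failure mode gets severity (S),
# occurrence (O) and detectability (D) scores on a 1-10 scale; their
# product is the risk priority number (RPN) used to rank mitigation
# effort. All entries below are hypothetical examples.
failure_modes = [
    {"mode": "MRI-to-planning image registration error", "S": 9, "O": 3, "D": 4},
    {"mode": "Incorrect vault interlock state",          "S": 10, "O": 2, "D": 2},
    {"mode": "Magnet transport collision",               "S": 7, "O": 2, "D": 3},
]

for fm in failure_modes:
    fm["RPN"] = fm["S"] * fm["O"] * fm["D"]

# Rank by RPN, highest first, to prioritise design mitigations
ranked = sorted(failure_modes, key=lambda fm: fm["RPN"], reverse=True)
```

Variants weight detectability differently or act on severity alone when it exceeds a threshold; the abstract does not specify which convention was used.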

  16. Emergence of binocular functional properties in a monocular neural circuit

    PubMed Central

    Ramdya, Pavan; Engert, Florian

    2010-01-01

    Sensory circuits frequently integrate converging inputs while maintaining precise functional relationships between them. For example, in mammals with stereopsis, neurons at the first stages of binocular visual processing show a close alignment of receptive-field properties for each eye. Still, basic questions about the global wiring mechanisms that enable this functional alignment remain unanswered, including whether the addition of a second retinal input to an otherwise monocular neural circuit is sufficient for the emergence of these binocular properties. We addressed this question by inducing a de novo binocular retinal projection to the larval zebrafish optic tectum and examining recipient neuronal populations using in vivo two-photon calcium imaging. Notably, neurons in rewired tecta were predominantly binocular and showed matching direction selectivity for each eye. We found that a model based on local inhibitory circuitry that computes direction selectivity using the topographic structure of both retinal inputs can account for the emergence of this binocular feature. PMID:19160507

  17. Optical neural network system for pose determination of spinning satellites

    NASA Technical Reports Server (NTRS)

    Lee, Andrew; Casasent, David

    1990-01-01

    An optical neural network architecture and algorithm based on a Hopfield optimization network are presented for multitarget tracking. This tracker utilizes a neuron for every possible target track, and a quadratic energy function of neural activities which is minimized using gradient descent neural evolution. The neural net tracker is demonstrated as part of a system for determining position and orientation (pose) of spinning satellites with respect to a robotic spacecraft. The input to the system is time sequence video from a single camera. Novelty detection and filtering are utilized to locate and segment novel regions from the input images. The neural net multitarget tracker determines the correspondences (or tracks) of the novel regions as a function of time, and hence the paths of object (satellite) parts. The path traced out by a given part or region is approximately elliptical in image space, and the position, shape and orientation of the ellipse are functions of the satellite geometry and its pose. Having a geometric model of the satellite, and the elliptical path of a part in image space, the three-dimensional pose of the satellite is determined. Digital simulation results using this algorithm are presented for various satellite poses and lighting conditions.
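The Hopfield-style relaxation at the core of such a tracker can be sketched in a few lines. W and b here are small hypothetical values, not derived from real track data, and the continuous-Hopfield update below is a generic form rather than the exact rule used in the paper:

```python
import numpy as np

# Quadratic energy E(v) = -0.5 v'Wv - b'v over neural activities v in
# [0, 1], one neuron per candidate track. Off-diagonal W penalises
# mutually exclusive tracks; b rewards track-to-measurement match.
W = np.array([[ 0.0, -2.0, -2.0],
              [-2.0,  0.0,  0.0],
              [-2.0,  0.0,  0.0]])   # track 1 conflicts with tracks 2, 3
b = np.array([1.0, 1.2, 1.1])

v = np.full(3, 0.5)
u = np.zeros(3)
for _ in range(500):
    u += 0.1 * (W @ v + b - u)       # relax internal state (gradient-like)
    v = 1.0 / (1.0 + np.exp(-4.0 * u))
# At convergence the compatible, well-matched tracks 2 and 3 switch on
# while the conflicting track 1 is suppressed.
```

In the optical implementation this descent is performed by the analog dynamics of the network itself rather than by an explicit loop.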

  18. How the type of input function affects the dynamic response of conducting polymer actuators

    NASA Astrophysics Data System (ADS)

    Xiang, Xingcan; Alici, Gursel; Mutlu, Rahim; Li, Weihua

    2014-10-01

    There has been a growing interest in smart actuators typified by conducting polymer actuators, especially in their (i) fabrication, modeling and control with minimum external data and (ii) applications in bio-inspired devices, robotics and mechatronics. Their control is a challenging research problem due to the complex and nonlinear properties of these actuators, which cannot be predicted accurately. Based on an input-shaping technique, we propose a new method to improve the conducting polymer actuators' command-following ability, while minimizing their electric power consumption. We applied four input functions with smooth characteristics to a trilayer conducting polymer actuator to experimentally evaluate its command-following ability under an open-loop control strategy and a simulated feedback control strategy, and, more importantly, to quantify how the type of input function affects the dynamic response of this class of actuators. We have found that the four smooth inputs consume less electrical power than sharp inputs such as a step input with discontinuous higher-order derivatives. We also obtained an improved transient response performance from the smooth inputs, especially under the simulated feedback control strategy, which we have proposed previously [X. Xiang, R. Mutlu, G. Alici, and W. Li, 2014, "Control of conducting polymer actuators without physical feedback: simulated feedback control approach with particle swarm optimization", Smart Materials and Structures, 23]. The idea of using a smooth input command, which results in lower power consumption and better control performance, can be extended to other smart actuators. Consuming less electrical energy or power will have a direct effect on enhancing the operational life of these actuators.
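The difference between a discontinuous step and a smooth command can be illustrated with a toy first-order electrical model. The model, its values, and the smoothstep choice are illustrative assumptions, unrelated to the specific four inputs tested in the paper:

```python
import numpy as np

# Sketch: a step command versus a smooth ramp with continuous
# higher-order derivatives (a 5th-order smoothstep). For a simple
# RC-like load, current i = C dv/dt + v/R, so the smooth input
# avoids the large current spike the step produces.
t = np.linspace(0.0, 2.0, 2001)
dt = t[1] - t[0]
step = np.where(t >= 0.5, 1.0, 0.0)

s = np.clip((t - 0.5) / 1.0, 0.0, 1.0)          # ramp over 1 s
smooth = s * s * s * (s * (6 * s - 15) + 10)    # 5th-order smoothstep

def peak_current(v, C=1.0, R=1.0):
    """Peak current drawn by an RC-like load driven with voltage v."""
    return np.max(np.abs(C * np.gradient(v, dt) + v / R))
```

The step's peak current is dominated by the near-infinite dv/dt at the discontinuity, whereas the smoothstep's peak is bounded by its maximum slope; this is the intuition behind the paper's power-consumption result.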

  19. High-order motor cortex in rats receives somatosensory inputs from the primary motor cortex via cortico-cortical pathways.

    PubMed

    Kunori, Nobuo; Takashima, Ichiro

    2016-12-01

    The motor cortex of rats contains two forelimb motor areas; the caudal forelimb area (CFA) and the rostral forelimb area (RFA). Although the RFA is thought to correspond to the premotor and/or supplementary motor cortices of primates, which are higher-order motor areas that receive somatosensory inputs, it is unknown whether the RFA of rats receives somatosensory inputs in the same manner. To investigate this issue, voltage-sensitive dye (VSD) imaging was used to assess the motor cortex in rats following a brief electrical stimulation of the forelimb. This procedure was followed by intracortical microstimulation (ICMS) mapping to identify the motor representations in the imaged cortex. The combined use of VSD imaging and ICMS revealed that both the CFA and RFA received excitatory synaptic inputs after forelimb stimulation. Further evaluation of the sensory input pathway to the RFA revealed that the forelimb-evoked RFA response was abolished either by the pharmacological inactivation of the CFA or a cortical transection between the CFA and RFA. These results suggest that forelimb-related sensory inputs would be transmitted to the RFA from the CFA via the cortico-cortical pathway. Thus, the present findings imply that sensory information processed in the RFA may be used for the generation of coordinated forelimb movements, which would be similar to the function of the higher-order motor cortex in primates. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  20. IMPROVED DERIVATION OF INPUT FUNCTION IN DYNAMIC MOUSE [18F]FDG PET USING BLADDER RADIOACTIVITY KINETICS

    PubMed Central

    Wong, Koon-Pong; Zhang, Xiaoli; Huang, Sung-Cheng

    2013-01-01

    Purpose Accurate determination of the plasma input function (IF) is essential for absolute quantification of physiological parameters in positron emission tomography (PET). However, it requires an invasive and tedious procedure of arterial blood sampling that is challenging in mice because of the limited blood volume. In this study, a hybrid modeling approach is proposed to estimate the plasma IF of 2-deoxy-2-[18F]fluoro-D-glucose ([18F]FDG) in mice using accumulated radioactivity in the urinary bladder together with a single late-time blood sample measurement. Methods Dynamic PET scans were performed on nine isoflurane-anesthetized male C57BL/6 mice after a bolus injection of [18F]FDG at the lateral caudal vein. During a 60- or 90-min scan, serial blood samples were taken from the femoral artery. Image data were reconstructed using filtered backprojection with CT-based attenuation correction. Total accumulated radioactivity in the urinary bladder was fitted to a renal compartmental model with the last blood sample and a 1-exponential function that described the [18F]FDG clearance in blood. Multiple late-time blood sample estimates were calculated by the blood [18F]FDG clearance equation. A sum of four exponentials was assumed for the plasma IF, which served as a forcing function to all tissues. The estimated plasma IF was obtained by simultaneously fitting the [18F]FDG model to the time-activity curves (TACs) of liver and muscle and the forcing function to early (0–1 min) left-ventricle data (corrected for delay, dispersion, partial-volume effects and erythrocyte uptake) and the late-time blood estimates. Using only the blood sample acquired at the end of the study to estimate the IF and the use of the liver TAC as an alternative IF were also investigated. Results The area under the plasma TACs calculated for all studies using the hybrid approach was not significantly different from that using all blood samples. 
[18F]FDG uptake constants in brain, myocardium, skeletal muscle and liver computed by Patlak analysis using estimated and measured plasma TACs were in excellent agreement (slope ~ 1; R2 > 0.938). The IF estimated using only the last blood sample acquired at the end of the study and the use of the liver TAC as the plasma IF provided less reliable results. Conclusions The estimated plasma IFs obtained with the hybrid model agreed well with those derived from arterial blood sampling. Importantly, the proposed method obviates the need for arterial catheterization, making it possible to perform repeated dynamic [18F]FDG PET studies on the same animal. The liver TAC is unsuitable as an input function for absolute quantification of [18F]FDG PET data. PMID:23322346
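The Patlak step mentioned in the Results can be sketched with synthetic curves. The plasma input and parameter values below are hypothetical; real data would use the measured or estimated IF:

```python
import numpy as np

# Sketch of Patlak graphical analysis: for an irreversibly trapped
# tracer like [18F]FDG, past an equilibration time the tissue TAC obeys
#   Ct(t)/Cp(t) = Ki * Int_0^t Cp(s) ds / Cp(t) + V0
# so regressing Ct/Cp on Int(Cp)/Cp yields the uptake constant Ki as
# the slope. Curves below are synthetic and ideal (no noise).
t = np.linspace(0.0, 60.0, 601)                           # minutes
cp = 100.0 * np.exp(-0.2 * t) + 5.0 * np.exp(-0.01 * t)   # plasma IF
int_cp = np.cumsum(cp) * (t[1] - t[0])                    # running integral
ki_true, v0_true = 0.02, 0.3
ct = ki_true * int_cp + v0_true * cp                      # ideal tissue TAC

late = t >= 20.0                                          # late, linear part
x = int_cp[late] / cp[late]
y = ct[late] / cp[late]
ki_est, v0_est = np.polyfit(x, y, 1)                      # slope = Ki
```

With noisy data the recovered slope is only approximate, which is why the paper reports agreement between estimated- and measured-IF Patlak slopes rather than exact equality.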

  1. Cortical Plasticity and Olfactory Function in Early Blindness

    PubMed Central

    Araneda, Rodrigo; Renier, Laurent A.; Rombaux, Philippe; Cuevas, Isabel; De Volder, Anne G.

    2016-01-01

    Over the last decade, functional brain imaging has provided insight to the maturation processes and has helped elucidate the pathophysiological mechanisms involved in brain plasticity in the absence of vision. In case of congenital blindness, drastic changes occur within the deafferented “visual” cortex that starts receiving and processing non visual inputs, including olfactory stimuli. This functional reorganization of the occipital cortex gives rise to compensatory perceptual and cognitive mechanisms that help blind persons achieve perceptual tasks, leading to superior olfactory abilities in these subjects. This view receives support from psychophysical testing, volumetric measurements and functional brain imaging studies in humans, which are presented here. PMID:27625596

  2. Transcranial Assessment and Visualization of Acoustic Cavitation: Modeling and Experimental Validation

    PubMed Central

    Clement, Gregory T.; McDannold, Nathan

    2015-01-01

    The interaction of ultrasonically-controlled microbubble oscillations (acoustic cavitation) with tissues and biological media has been shown to induce a wide range of bioeffects that may have significant impact on the therapy and diagnosis of central nervous system diseases and disorders. However, the inherently nonlinear microbubble oscillations, combined with the micrometer and microsecond scales involved in these interactions and the limited methods to assess and visualize them transcranially, hinder both their optimal use and their translation to the clinic. To overcome these challenges, we present a noninvasive and clinically relevant framework that combines numerical simulations with multimodality imaging to assess and visualize the microbubble oscillations transcranially. In the present work, acoustic cavitation was studied with an integrated US- and MR-imaging-guided clinical FUS system in non-human primates. This multimodality imaging system allowed us to concurrently induce and visualize acoustic cavitation transcranially. A high-resolution brain CT scan, which allowed us to determine the acoustic properties of the head (density, speed of sound, and absorption), was also co-registered to the US and MR images. The derived acoustic properties and the locations of the targets, determined by the 3D CT scans and the post-sonication MRI respectively, were then used as inputs to two- and three-dimensional finite difference time domain (2D, 3D-FDTD) simulations that matched the experimental conditions and geometry. At the experimentally determined target locations, synthetic point sources with pressure amplitude traces derived from either a Gaussian function or the output of a microbubble dynamics model were numerically excited and propagated through the skull towards a virtual US imaging array. 
Then, using passive acoustic mapping that was refined to incorporate a variable speed of sound, we assessed the losses and aberrations induced by the skull as a function of the acoustic emissions recorded by the virtual US imaging array. Next, the simulated passive acoustic maps (PAMs) were compared to experimental PAMs. Finally, using clinical CT and MR imaging as input to the numerical simulations, we evaluated the clinical utility of the proposed framework. The simulations indicated that the diverging pressure waves propagating through the skull lose 95% of their intensity as compared to propagation in water only. Further, the incorporation of a variable speed of sound into the PAM back-projection algorithm indeed corrected the aberrations introduced by the skull and substantially improved the resolution. More than 94% agreement in the FWHM of the axial and transverse line profiles between the simulations incorporating microbubble emissions and the experimentally determined PAMs was observed. Finally, the results of the 2D simulations that used clinical datasets are promising for the prospective use of transcranial PAM in humans with an 82 mm aperture broadband linear array. Incorporation of a variable speed of sound into the PAM back-projection algorithm appeared capable of correcting the aberrations introduced by the human skull. These results suggest that this integrated approach can provide a physically accurate and clinically relevant framework for developing comprehensive treatment guidance for therapeutic applications of acoustic cavitation in the brain. Ultimately, it may enable the quantification of the emissions and provide more control over this nonlinear process. PMID:25546857
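
    The delay-and-sum back-projection with a variable speed of sound can be sketched in a toy form. Everything below (geometry, burst waveform, per-path speeds, grid) is invented for illustration; a uniform water speed stands in for the skull-dependent per-path average the abstract describes.

```python
import numpy as np

def simulate_traces(src, sensors, c_path, t, f0=1.0e6):
    """Each sensor records a short tone burst delayed by its path's travel time."""
    traces = []
    for sensor, c in zip(sensors, c_path):
        tau = np.linalg.norm(src - sensor) / c
        traces.append(np.exp(-((t - tau) / 2e-6) ** 2) * np.sin(2 * np.pi * f0 * (t - tau)))
    return np.array(traces)

def passive_map(traces, sensors, c_path, t, grid):
    """Back-project: advance each trace by its pixel-to-sensor travel time and sum."""
    dt = t[1] - t[0]
    image = np.zeros(len(grid))
    for i, pix in enumerate(grid):
        acc = np.zeros_like(t)
        for trace, sensor, c in zip(traces, sensors, c_path):
            shift = int(round(np.linalg.norm(pix - sensor) / c / dt))
            acc += np.roll(trace, -shift)      # align traces emitted at this pixel
        image[i] = float((acc ** 2).sum())     # beamformed source energy
    return image

t = np.arange(0.0, 80e-6, 1e-7)
sensors = np.array([[x, 0.0] for x in np.linspace(-0.02, 0.02, 16)])
c_path = np.full(16, 1500.0)   # per-path speeds; a skull model would vary these
src = np.array([0.0, 0.04])
grid = [np.array([0.0, z]) for z in (0.03, 0.04, 0.05)]

traces = simulate_traces(src, sensors, c_path, t)
image = passive_map(traces, sensors, c_path, t, grid)
print(int(np.argmax(image)))   # brightest pixel sits at the true source depth
```

    Because `c_path` is per sensor path, replacing the constant 1500 m/s with a CT-derived average along each skull path is exactly the "variable speed of sound" refinement the abstract credits with correcting the aberrations.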

  3. Star Classification for the Kepler Input Catalog: From Images to Stellar Parameters

    NASA Astrophysics Data System (ADS)

    Brown, T. M.; Everett, M.; Latham, D. W.; Monet, D. G.

    2005-12-01

    The Stellar Classification Project is a ground-based effort to screen stars within the Kepler field of view, to allow removal of stars with large radii (and small potential transit signals) from the target list. Important components of this process are: (1) An automated photometry pipeline estimates observed magnitudes both for target stars and for stars in several calibration fields. (2) Data from calibration fields yield extinction-corrected AB magnitudes (with g, r, i, z magnitudes transformed to the SDSS system). We merge these with 2MASS J, H, K magnitudes. (3) The Basel grid of stellar atmosphere models yields synthetic colors, which are transformed to our photometric system by calibration against observations of stars in M67. (4) We combine the r magnitude and stellar galactic latitude with a simple model of interstellar extinction to derive a relation connecting {Teff, luminosity} to distance and reddening. For models satisfying this relation, we compute a chi-squared statistic describing the match between each model and the observed colors. (5) We create a merit function based on the chi-squared statistic, and on a Bayesian prior probability distribution which gives probability as a function of Teff, luminosity, log(Z), and height above the galactic plane. The stellar parameters ascribed to a star are those of the model that maximizes this merit function. (6) Parameter estimates are merged with positional and other information from extant catalogs to yield the Kepler Input Catalog, from which targets will be chosen. Testing and validation of this procedure are underway, with encouraging initial results.
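
    Steps (4)-(5) can be illustrated with a toy chi-squared and merit computation. The observed colors, uncertainties, model grid, and prior probabilities below are all invented; the real pipeline evaluates the Basel grid against calibrated photometry.

```python
import numpy as np

def chi_squared(observed, sigma, model):
    """Chi-squared match between observed colors and one model's synthetic colors."""
    return np.sum(((observed - model) / sigma) ** 2)

def merit(observed, sigma, model, log_prior):
    """Merit: prior-penalized chi-squared (lower is better, i.e. higher posterior)."""
    return chi_squared(observed, sigma, model) - 2.0 * log_prior

# Observed colors with uncertainties (made-up numbers).
obs = np.array([0.45, 0.18, 0.05])
sig = np.array([0.02, 0.02, 0.03])

# Tiny "model grid": synthetic colors plus a prior probability for each model.
models = np.array([[0.40, 0.15, 0.03],
                   [0.46, 0.19, 0.06],
                   [0.60, 0.30, 0.12]])
log_priors = np.log(np.array([0.3, 0.5, 0.2]))

best = min(range(len(models)),
           key=lambda k: merit(obs, sig, models[k], log_priors[k]))
print(best)  # -> 1: the middle model fits the colors and has a favorable prior
```

    The stellar parameters ascribed to the star would then be those of `models[best]`.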

  4. A general prediction model for the detection of ADHD and Autism using structural and functional MRI.

    PubMed

    Sen, Bhaskar; Borle, Neil C; Greiner, Russell; Brown, Matthew R G

    2018-01-01

    This work presents a novel method for learning a model that can diagnose Attention Deficit Hyperactivity Disorder (ADHD), as well as Autism, using structural texture and functional connectivity features obtained from 3-dimensional structural magnetic resonance imaging (MRI) and 4-dimensional resting-state functional magnetic resonance imaging (fMRI) scans of subjects. We explore a series of three learners: (1) The LeFMS learner first extracts features from the structural MRI images using the texture-based filters produced by a sparse autoencoder. These filters are then convolved with the original MRI image using an unsupervised convolutional network. The resulting features are used as input to a linear support vector machine (SVM) classifier. (2) The LeFMF learner produces a diagnostic model by first computing spatially non-stationary independent components of the fMRI scans, which it uses to decompose each subject's fMRI scan into the time courses of these common spatial components. These features can then be used with a learner by themselves or in combination with other features to produce the model. Regardless of which approach is used, the final set of features is input to a linear SVM classifier. (3) Finally, the overall LeFMSF learner uses the combined features obtained from the two feature extraction processes in (1) and (2) above as input to an SVM classifier, achieving an accuracy of 0.673 on the ADHD-200 holdout data and 0.643 on the ABIDE holdout data. Both of these results, obtained with the same LeFMSF framework, are the best known hold-out accuracies on these datasets when using only imaging data, exceeding previously published results by 0.012 for ADHD and 0.042 for Autism. Our results show that combining multi-modal features can yield good classification accuracy for the diagnosis of ADHD and Autism, an important step towards computer-aided diagnosis of these psychiatric disorders and perhaps others as well.
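
    The LeFMSF combination step, reduced to its essence, is a per-subject concatenation of the two feature matrices before classification. The sketch below uses random stand-ins for the structural and functional features and a nearest-centroid rule standing in for the linear SVM; every number in it is invented.

```python
import numpy as np

rng = np.random.default_rng(4)

def make(n):
    """Synthetic subjects: label-dependent shift in both feature modalities."""
    y = rng.integers(0, 2, size=n)
    structural = rng.standard_normal((n, 50)) + y[:, None] * 0.3   # LeFMS stand-in
    functional = rng.standard_normal((n, 80)) + y[:, None] * 0.3   # LeFMF stand-in
    return np.hstack([structural, functional]), y                  # -> LeFMSF

X, y = make(200)
Xt, yt = make(100)

# Nearest-centroid classifier on the concatenated features (SVM stand-in).
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((Xt[:, None, :] - centroids) ** 2).sum(axis=2), axis=1)
accuracy = float((pred == yt).mean())
print(accuracy)
```

    The point is purely structural: both modalities contribute columns to one design matrix, so any linear classifier can weigh structural and functional evidence jointly.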

  5. Precision linear ramp function generator

    DOEpatents

    Jatko, W.B.; McNeilly, D.R.; Thacker, L.H.

    1984-08-01

    A ramp function generator is provided which produces a precise linear ramp function which is repeatable and highly stable. A derivative feedback loop is used to stabilize the output of an integrator in the forward loop and control the ramp rate. The ramp may be started from a selected baseline voltage level and the desired ramp rate is selected by applying an appropriate constant voltage to the input of the integrator.
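
    The control idea can be caricatured in discrete time: an integrator driven by a constant input produces the ramp, while a derivative feedback term nudges the measured ramp rate toward the commanded rate. This is a toy numerical sketch of an analog circuit, with invented gain and step size.

```python
def ramp(rate_cmd, baseline=0.0, gain=5.0, dt=0.001, steps=2000):
    """Integrate a constant input; derivative feedback regulates the ramp rate."""
    out, prev = baseline, baseline
    history = []
    for _ in range(steps):
        measured_rate = (out - prev) / dt          # derivative feedback signal
        prev = out
        # Integrator input: commanded rate plus a correction on the rate error.
        out += dt * (rate_cmd + gain * (rate_cmd - measured_rate) * dt)
        history.append(out)
    return history

h = ramp(rate_cmd=2.0)   # 2 V/s ramp starting from a 0 V baseline
print(round(h[-1], 3))   # after 2 s the output is very close to 4 V
```

    Changing `baseline` shifts the starting voltage and changing `rate_cmd` sets the slope, mirroring the two adjustments the patent describes.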

  6. Stereo and IMU-Assisted Visual Odometry for Small Robots

    NASA Technical Reports Server (NTRS)

    2012-01-01

    This software performs two functions: (1) taking stereo image pairs as input, it computes stereo disparity maps from them by cross-correlation to achieve 3D (three-dimensional) perception; (2) taking a sequence of stereo image pairs as input, it tracks features in the image sequence to estimate the motion of the cameras between successive image pairs. A real-time stereo vision system with IMU (inertial measurement unit)-assisted visual odometry was implemented on a single 750 MHz/520 MHz OMAP3530 SoC (system on chip) from TI (Texas Instruments). Frame rates of 46 fps (frames per second) were achieved at QVGA (Quarter Video Graphics Array, i.e., 320 × 240) resolution, or 8 fps at VGA (Video Graphics Array, 640 × 480) resolution, while simultaneously tracking up to 200 features, taking full advantage of the OMAP3530's integer DSP (digital signal processor) and floating-point ARM processors. This is a substantial advancement over previous work, as the stereo implementation produces 146 Mde/s (millions of disparities evaluated per second) in 2.5 W, yielding a stereo energy efficiency of 58.8 Mde/J, which is 3.75× better than prior DSP stereo while providing more functionality.
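
    Function (1), disparity by correlating stereo pairs, can be sketched as scanline block matching: for each pixel the disparity is the horizontal shift minimizing a window cost (here sum of absolute differences rather than cross-correlation). The images and parameters below are synthetic.

```python
import numpy as np

def disparity_scanline(left, right, window=3, max_disp=8):
    """Pick, per pixel, the horizontal shift minimizing SAD over a small window."""
    h, w = left.shape
    half = window // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.int64)
            costs = [np.abs(patch - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1].astype(np.int64)).sum()
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# Synthetic rectified pair: the right view sees every feature 2 px to the left.
rng = np.random.default_rng(0)
left = rng.integers(0, 255, size=(12, 24))
right = np.roll(left, -2, axis=1)

disp = disparity_scanline(left, right)
print(disp[6, 15])  # -> 2, recovering the simulated shift
```

    With calibrated cameras, each recovered disparity converts to a depth via depth = focal_length × baseline / disparity, which is the 3D perception the record refers to.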

  7. Image compression system and method having optimized quantization tables

    NASA Technical Reports Server (NTRS)

    Ratnakar, Viresh (Inventor); Livny, Miron (Inventor)

    1998-01-01

    A digital image compression preprocessor for use in a discrete cosine transform-based digital image compression device is provided. The preprocessor includes a gathering mechanism for determining discrete cosine transform statistics from input digital image data. A computing mechanism is operatively coupled to the gathering mechanism to calculate an image distortion array and a rate of image compression array based upon the discrete cosine transform statistics for each possible quantization value. A dynamic programming mechanism is operatively coupled to the computing mechanism to optimize the rate of image compression array against the image distortion array such that a rate-distortion-optimal quantization table is derived. In addition, a discrete cosine transform-based digital image compression device and a discrete cosine transform-based digital image compression and decompression system are provided. Also provided are methods for generating a rate-distortion-optimal quantization table, for performing discrete cosine transform-based digital image compression, and for operating a discrete cosine transform-based digital image compression and decompression system.
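
    The distortion-array/rate-array trade-off can be illustrated with a simplified Lagrangian sweep over candidate quantization steps for one coefficient position; this is not the patent's dynamic program, and the coefficient statistics, step set, and rate weight are invented.

```python
import numpy as np

def entropy_bits(symbols):
    """Empirical entropy of the quantized symbols, as a proxy for coding rate."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def best_step(coeffs, steps, lam):
    """Pick the quantization step minimizing distortion + lam * rate."""
    best_q, best_cost = None, np.inf
    for q in steps:
        quantized = np.round(coeffs / q)
        distortion = float(np.mean((coeffs - quantized * q) ** 2))
        rate = entropy_bits(quantized.astype(int))
        cost = distortion + lam * rate
        if cost < best_cost:
            best_q, best_cost = q, cost
    return best_q

rng = np.random.default_rng(1)
coeffs = rng.laplace(scale=20.0, size=4096)   # DCT-like coefficient statistics
q = best_step(coeffs, steps=[1, 2, 4, 8, 16, 32], lam=50.0)
print(q)  # step chosen for this rate weight
```

    Repeating the sweep per coefficient position fills a quantization table; the patent's dynamic program instead optimizes all 64 entries jointly against a rate budget.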

  8. Medical image integrity control and forensics based on watermarking--approximating local modifications and identifying global image alterations.

    PubMed

    Huang, H; Coatrieux, G; Shu, H Z; Luo, L M; Roux, Ch

    2011-01-01

    In this paper we present a medical image integrity verification system that not only allows detecting and approximating malevolent local image alterations (e.g. removal or addition of findings) but is also capable of identifying the nature of global image processing applied to the image (e.g. lossy compression, filtering …). For that purpose, we propose an image signature derived from the geometric moments of pixel blocks. Such a signature is computed over regions of interest of the image and then watermarked in regions of non-interest. Image integrity analysis is conducted by comparing embedded and recomputed signatures. Local modifications, if any, are approximated through the determination of the parameters of the nearest generalized 2D Gaussian. Image moments are taken as image features and serve as inputs to a classifier trained to discriminate the type of global image processing. Experimental results with both local and global modifications illustrate the overall performance of our approach.
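
    The signature's building block, geometric moments of a pixel block, is a short computation: m_pq = Σ_x Σ_y x^p y^q I(x, y). The block size, order, and values below are illustrative only.

```python
import numpy as np

def geometric_moments(block, max_order=2):
    """Vector of raw geometric moments m_pq with p + q <= max_order."""
    h, w = block.shape
    y, x = np.mgrid[0:h, 0:w]
    return np.array([(x ** p * y ** q * block).sum()
                     for p in range(max_order + 1)
                     for q in range(max_order + 1 - p)])

block = np.array([[1., 2.],
                  [3., 4.]])
sig = geometric_moments(block, max_order=1)
print(sig)  # m00, m01, m10 = [10.  7.  6.]
```

    Recomputing this vector per block and comparing it with the watermarked copy localizes any block whose content has changed.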

  9. Image scale measurement with correlation filters in a volume holographic optical correlator

    NASA Astrophysics Data System (ADS)

    Zheng, Tianxiang; Cao, Liangcai; He, Qingsheng; Jin, Guofan

    2013-08-01

    A search engine containing various target images or different parts of a large scene is of great use for many applications, including object detection, biometric recognition, and image registration. The input image, captured in real time, is compared with all the template images in the search engine. A volume holographic correlator is one such search engine. It performs thousands of comparisons among the images at very high speed, with the correlation task accomplished mainly in optics. However, the input target image always contains some scale variation relative to the template images, in which case the correlation values cannot properly reflect the similarity of the images. It is therefore essential to estimate and eliminate the scale variation of the input target image. Scale measurement can be performed in three domains: spatial, spectral, and time. Most methods dealing with the scale factor are based on the spatial or spectral domains. In this paper, a time-domain method is proposed to measure the scale factor of the input image, called the time-sequential scaled method. The method utilizes the relationship between the scale variation and the correlation value of two images: it sends a few artificially scaled input images to be compared with the template images. The correlation value increases with the scale factor over the interval 0.8~1 and decreases over the interval 1~1.2. The original scale of the input image can thus be measured by finding the largest correlation value obtained when correlating the artificially scaled input images with the template images. The measurement range for the scale is 0.8~4.8. Scale factors beyond 1.2 are measured by scaling the input image by factors of 1/2, 1/3, and 1/4, correlating the artificially scaled input image with the template images, and estimating the new corresponding scale factor within 0.8~1.2.
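
    The time-sequential idea can be mimicked entirely in software: rescale the input by a few candidate factors, correlate each version with the template, and take the factor giving the largest correlation as the scale estimate. The images, candidate set, and nearest-neighbor resampler below are toy stand-ins for the optical system.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-sized images."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

def rescale(img, s):
    """Nearest-neighbor rescale by factor s onto the same grid."""
    h, w = img.shape
    yy = np.clip(np.round(np.arange(h) / s).astype(int), 0, h - 1)
    xx = np.clip(np.round(np.arange(w) / s).astype(int), 0, w - 1)
    return img[np.ix_(yy, xx)]

rng = np.random.default_rng(2)
template = rng.standard_normal((64, 64))
observed = rescale(template, 1.1)          # input image with an "unknown" scale

candidates = [0.8, 0.9, 1.0, 1.1, 1.2]
scores = [ncc(rescale(observed, 1.0 / s), template) for s in candidates]
estimated = candidates[int(np.argmax(scores))]
print(estimated)  # -> 1.1
```

    The peak-then-fall of correlation around the true scale is exactly the behavior the abstract exploits over 0.8~1.2, and pre-shrinking by 1/2, 1/3, or 1/4 extends the range upward.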

  10. Translating PI observing proposals into ALMA observing scripts

    NASA Astrophysics Data System (ADS)

    Liszt, Harvey S.

    2014-08-01

    The ALMA telescope is a complex 66-antenna array working in the specialized domain of mm- and sub-mm aperture synthesis imaging. To make ALMA accessible to technically inexperienced but scientifically expert users, the ALMA Observing Tool (OT) has been developed. Using the OT, scientifically oriented user input is formatted as observing proposals that are packaged for peer-review and assessment of technical feasibility. If accepted, the proposal's scientifically oriented inputs are translated by the OT into scheduling blocks, which function as input to observing scripts for the telescope's online control system. Here I describe the processes and practices by which this translation from PI scientific goals to online control input and schedule block execution actually occurs.

  11. Identifying the arterial input function from dynamic contrast-enhanced magnetic resonance images using an apex-seeking technique

    NASA Astrophysics Data System (ADS)

    Martel, Anne L.

    2004-04-01

    In order to extract quantitative information from dynamic contrast-enhanced MR images (DCE-MRI) it is usually necessary to identify an arterial input function. This is not a trivial problem if there are no major vessels present in the field of view. Most existing techniques rely on operator intervention or use various curve parameters to identify suitable pixels, but these are often specific to the anatomical region or the acquisition method used. They also require the signal from several pixels to be averaged in order to improve the signal-to-noise ratio; however, this introduces errors due to partial volume effects. We have described previously how factor analysis can be used to automatically separate arterial and venous components from DCE-MRI studies of the brain, but although that method works well for single-slice images through the brain when the blood-brain barrier is intact, it runs into problems for multi-slice images with more complex dynamics. This paper describes a factor analysis method that is more robust in such situations and is relatively insensitive to the number of physiological components present in the data set. The technique is very similar to that used to identify spectral end-members from multispectral remote sensing images.
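
    The core factor-analysis step, separating mixed time-intensity curves, can be illustrated with a plain SVD standing in for the paper's method. The arterial and venous curve shapes, mixing fractions, and noise level below are invented.

```python
import numpy as np

t = np.linspace(0, 60, 120)
arterial = np.exp(-((t - 10.0) / 4.0) ** 2)     # early, sharp bolus (invented shape)
venous = np.exp(-((t - 25.0) / 10.0) ** 2)      # later, dispersed peak (invented shape)

rng = np.random.default_rng(3)
mixing = rng.uniform(0.0, 1.0, size=(500, 2))   # each pixel mixes the two factors
data = mixing @ np.vstack([arterial, venous])
data += 0.01 * rng.standard_normal(data.shape)  # acquisition noise

# The effective rank of the centered data reveals the number of temporal factors:
_, s, vt = np.linalg.svd(data - data.mean(axis=0), full_matrices=False)
print(np.round(s[:3], 1))  # two dominant singular values, then the noise floor
```

    The paper's contribution is precisely the harder part this sketch skips: rotating such abstract components into physiologically meaningful arterial and venous curves when more than two components and multiple slices are involved.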

  12. GREAT: a gradient-based color-sampling scheme for Retinex.

    PubMed

    Lecca, Michela; Rizzi, Alessandro; Serapioni, Raul Paolo

    2017-04-01

    Modeling the local color spatial distribution is a crucial step for the algorithms of the Milano Retinex family. Here we present GREAT, a novel, noise-free Milano Retinex implementation based on an image-aware spatial color sampling. For each channel of a color input image, GREAT computes a 2D set of edges whose magnitude exceeds a pre-defined threshold. Then GREAT re-scales the channel intensity of each image pixel, called target, by the average of the intensities of the selected edges weighted by a function of their positions, gradient magnitudes, and intensities relative to the target. In this way, GREAT enhances the input image, adjusting its brightness, contrast and dynamic range. The use of the edges as pixels relevant to color filtering is justified by the importance that edges play in human color sensation. The name GREAT comes from the expression "Gradient RElevAnce for ReTinex," which refers to the threshold-based definition of a gradient relevance map for edge selection and thus for image color filtering.
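
    The edge-based sampling can be sketched per channel as: threshold the gradient magnitude, then rescale each pixel by a weighted average of the selected edge intensities. The weighting below (magnitude over squared distance) and the test ramp are invented simplifications of the paper's function of position, gradient magnitude, and relative intensity.

```python
import numpy as np

def great_channel(ch, grad_thresh=0.05):
    """Rescale each pixel by a weighted average of intensities at strong edges."""
    gy, gx = np.gradient(ch)
    mag = np.hypot(gx, gy)
    ey, ex = np.nonzero(mag > grad_thresh)      # edge pixels kept for sampling
    out = np.empty_like(ch)
    for y in range(ch.shape[0]):
        for x in range(ch.shape[1]):
            d2 = (ey - y) ** 2 + (ex - x) ** 2 + 1.0
            w = mag[ey, ex] / d2                # invented weight: magnitude / distance^2
            local_white = (w * ch[ey, ex]).sum() / w.sum()
            out[y, x] = min(1.0, ch[y, x] / max(local_white, 1e-6))
    return out

img = np.linspace(0.0, 0.8, 64).reshape(8, 8)   # a dim intensity ramp
enhanced = great_channel(img)
print(enhanced.mean() > img.mean())  # the dim ramp is brightened overall
```

    Applying the same rescaling independently to each RGB channel yields the brightness, contrast, and dynamic-range adjustment described above.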

  13. A new variant of Petri net controlled grammars

    NASA Astrophysics Data System (ADS)

    Jan, Nurhidaya Mohamad; Turaev, Sherzod; Fong, Wan Heng; Sarmin, Nor Haniza

    2015-10-01

    A Petri net controlled grammar is a Petri net paired with a context-free grammar such that the successful derivations of the grammar can be simulated by the occurrence sequences of the net. In this paper, we introduce a new variant of Petri net controlled grammars, called a place-labeled Petri net controlled grammar, which is a context-free grammar equipped with a Petri net and a function that maps places of the net to productions of the grammar. The language consists of all terminal strings that can be obtained by applying, in parallel, multisets of the rules that are the images of the sets of input places of transitions in a successful occurrence sequence of the Petri net. We study the effect of different labeling strategies on the computational power and establish lower and upper bounds for the generative capacity of place-labeled Petri net controlled grammars.

  14. Cortical plasticity and preserved function in early blindness

    PubMed Central

    Renier, Laurent; De Volder, Anne G.; Rauschecker, Josef P.

    2013-01-01

    The “neural Darwinism” theory predicts that when one sensory modality is lacking, as in congenital blindness, the target structures are taken over by the afferent inputs from other senses that will promote and control their functional maturation (Edelman, 1993). This view receives support from both cross-modal plasticity experiments in animal models and functional imaging studies in man, which are presented here. PMID:23453908

  15. Ultramap v3 - a Revolution in Aerial Photogrammetry

    NASA Astrophysics Data System (ADS)

    Reitinger, B.; Sormann, M.; Zebedin, L.; Schachinger, B.; Hoefler, M.; Tomasi, R.; Lamperter, M.; Gruber, B.; Schiester, G.; Kobald, M.; Unger, M.; Klaus, A.; Bernoegger, S.; Karner, K.; Wiechert, A.; Ponticelli, M.; Gruber, M.

    2012-07-01

    In recent years, Microsoft has driven innovation in the aerial photogrammetry community. Besides the market-leading camera technology, UltraMap has grown into an outstanding photogrammetric workflow system that enables users to work effectively with large digital aerial image blocks in a highly automated way. A prime example is the project-based color balancing approach, which automatically balances images to a homogeneous block. UltraMap v3 continues this innovation and offers a revolution in terms of ortho processing. A fully automated dense matching module produces high-precision digital surface models (DSMs), which are calculated either on CPUs or on GPUs using a distributed processing framework. By applying constrained filtering algorithms, a digital terrain model can be derived, which in turn can be used for fully automated traditional ortho texturing. With knowledge of the underlying geometry, seamlines can be generated automatically by applying cost functions that minimize visually disturbing artifacts. By exploiting the generated DSM information, a DSMOrtho is created using the balanced input images. Again, seamlines are detected automatically, resulting in an automatically balanced ortho mosaic. Interactive block-based radiometric adjustments lead to a high-quality ortho product based on UltraCam imagery. UltraMap v3 is the first fully integrated and interactive solution for making the best use of UltraCam images to deliver DSM and ortho imagery.

  16. An input-to-state stability approach to verify almost global stability of a synchronous-machine-infinite-bus system.

    PubMed

    Schiffer, Johannes; Efimov, Denis; Ortega, Romeo; Barabanov, Nikita

    2017-08-13

    Conditions for almost global stability of an operating point of a realistic model of a synchronous generator with constant field current connected to an infinite bus are derived. The analysis is conducted by employing the recently proposed concept of input-to-state stability (ISS)-Leonov functions, which is an extension of the powerful cell structure principle developed by Leonov and Noldus to the ISS framework. Compared with the original ideas of Leonov and Noldus, the ISS-Leonov approach has the advantage of providing additional robustness guarantees. The efficiency of the derived sufficient conditions is illustrated via numerical experiments. This article is part of the themed issue 'Energy management: flexibility, risk and optimization'. © 2017 The Author(s).

  17. Scene segmentation of natural images using texture measures and back-propagation

    NASA Technical Reports Server (NTRS)

    Sridhar, Banavar; Phatak, Anil; Chatterji, Gano

    1993-01-01

    Knowledge of the three-dimensional world is essential for many guidance and navigation applications. A sequence of images from an electro-optical sensor can be processed using optical flow algorithms to provide a sparse set of ranges as a function of azimuth and elevation. A natural way to enhance the range map is by interpolation. However, this should be undertaken with care, since interpolation assumes continuity of range. The range is continuous in certain parts of the image and can jump at object boundaries. In such situations, the ability to detect homogeneous object regions by scene segmentation can be used to determine regions in the range map that can be enhanced by interpolation. The use of scalar features derived from the spatial gray-level dependence matrix for texture segmentation is explored. Thresholding of histograms of scalar texture features is done for several images to select scalar features that result in a meaningful segmentation of the images. Next, the selected scalar features are used with a neural net to automate the segmentation procedure. Back-propagation is used to train the feedforward neural network. The generalization of the network approach to subsequent images in the sequence is examined. It is shown that the use of multiple scalar features as input to the neural network results in a superior segmentation compared with a single scalar feature. It is also shown that scalar features that are not useful individually can result in a good segmentation when used together. The methodology is applied to both indoor and outdoor images.
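
    The spatial gray-level dependence matrix and its scalar features can be sketched directly; the offset, number of gray levels, and the two toy textures below are chosen only to show how the features separate a flat region from a striped one.

```python
import numpy as np

def glcm(img, levels=4, dx=1):
    """Normalized spatial gray-level dependence matrix for a horizontal offset."""
    m = np.zeros((levels, levels))
    for y in range(img.shape[0]):
        for x in range(img.shape[1] - dx):
            m[img[y, x], img[y, x + dx]] += 1
    return m / m.sum()

def energy(p):
    return float((p ** 2).sum())

def contrast(p):
    i, j = np.mgrid[0:p.shape[0], 0:p.shape[1]]
    return float((p * (i - j) ** 2).sum())

flat = np.zeros((8, 8), dtype=int)              # perfectly uniform region
stripes = np.tile([0, 3], (8, 4)).astype(int)   # alternating vertical stripes

p_flat, p_str = glcm(flat), glcm(stripes)
print(energy(p_flat), contrast(p_flat))  # 1.0 0.0
print(energy(p_str), contrast(p_str))    # lower energy, high contrast
```

    Histograms of such scalar features over image windows are what get thresholded, and the same feature vectors can serve as inputs to the back-propagation network.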

  18. Coherent active polarization control without loss

    NASA Astrophysics Data System (ADS)

    Ye, Yuqian; Hay, Darrick; Shi, Zhimin

    2017-11-01

    We propose a lossless active polarization control mechanism utilizing an anisotropic dielectric medium with two coherent inputs. Using scattering matrix analysis, we derive analytically the required optical properties of the anisotropic medium that can behave as a switchable polarizing beam splitter. We also show that such a designed anisotropic medium can produce linearly polarized light at any azimuthal direction through coherent control of two inputs with a specific polarization state. Furthermore, we present a straightforward design-on-demand procedure of a subwavelength-thick metastructure that can possess the desired optical anisotropy at a flexible working wavelength. Our lossless coherent polarization control technique may lead to fast, broadband and integrated polarization control elements for applications in imaging, spectroscopy, and telecommunication.

  19. Bayesian image reconstruction for improving detection performance of muon tomography.

    PubMed

    Wang, Guobao; Schultz, Larry J; Qi, Jinyi

    2009-05-01

    Muon tomography is a novel technology that is being developed for detecting high-Z materials in vehicles or cargo containers. Maximum likelihood methods have been developed for reconstructing the scattering density image from muon measurements. However, the instability of maximum likelihood estimation often results in noisy images and low detectability of high-Z targets. In this paper, we propose using regularization to improve the image quality of muon tomography. We formulate the muon reconstruction problem in a Bayesian framework by introducing a prior distribution on scattering density images. An iterative shrinkage algorithm is derived to maximize the log posterior distribution. At each iteration, the algorithm obtains the maximum a posteriori update by shrinking an unregularized maximum likelihood update. Inverse quadratic shrinkage functions are derived for generalized Laplacian priors and inverse cubic shrinkage functions are derived for generalized Gaussian priors. Receiver operating characteristic studies using simulated data demonstrate that the Bayesian reconstruction can greatly improve the detection performance of muon tomography.
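
    The shrinkage idea in its simplest form (not the paper's inverse quadratic or inverse cubic rules): with a Gaussian data-fit term and a Laplacian prior, the MAP estimate is the soft-thresholded maximum-likelihood value, so each iteration "shrinks" the unregularized update toward zero. Values below are illustrative.

```python
import numpy as np

def soft_shrink(x_ml, thresh):
    """MAP solution of argmin_x (x - x_ml)^2 / 2 + thresh * |x| (soft threshold)."""
    return np.sign(x_ml) * np.maximum(np.abs(x_ml) - thresh, 0.0)

# An unregularized ML update for a few scattering-density voxels (invented):
ml_update = np.array([-3.0, -0.2, 0.0, 0.4, 2.5])
shrunk = soft_shrink(ml_update, 0.5)
print(shrunk)  # small noisy values are zeroed; large values are pulled in by 0.5
```

    Suppressing small noisy updates while retaining strong ones is what stabilizes the reconstruction and improves detectability of high-Z targets.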

  20. Tensor voting for image correction by global and local intensity alignment.

    PubMed

    Jia, Jiaya; Tang, Chi-Keung

    2005-01-01

    This paper presents a voting method to perform image correction by global and local intensity alignment. The key to our modeless approach is the estimation of global and local replacement functions by reducing the complex estimation problem to the robust 2D tensor voting in the corresponding voting spaces. No complicated model for the replacement function (curve) is assumed. Subject to the monotonic constraint only, we vote for an optimal replacement function by propagating the curve smoothness constraint using a dense tensor field. Our method effectively infers missing curve segments and rejects image outliers. Applications using our tensor voting approach are proposed and described. The first application consists of image mosaicking of static scenes, where the voted replacement functions are used in our iterative registration algorithm for computing the best warping matrix. In the presence of occlusion, our replacement function can be employed to construct a visually acceptable mosaic by detecting occlusions, which appear as large regions of piecewise constant color. Furthermore, by the simultaneous consideration of color matches and spatial constraints in the voting space, we perform image intensity compensation and high contrast image correction using our voting framework, when only two defective input images are given.

  1. Transfer functions for protein signal transduction: application to a model of striatal neural plasticity.

    PubMed

    Scheler, Gabriele

    2013-01-01

    We present a novel formulation for biochemical reaction networks in the context of protein signal transduction. The model consists of input-output transfer functions, which are derived from differential equations, using stable equilibria. We select a set of "source" species, which are interpreted as input signals. Signals are transmitted to all other species in the system (the "target" species) with a specific delay and with a specific transmission strength. The delay is computed as the maximal reaction time until a stable equilibrium for the target species is reached, in the context of all other reactions in the system. The transmission strength is the concentration change of the target species. The computed input-output transfer functions can be stored in a matrix, fitted with parameters, and even recalled to build dynamical models on the basis of state changes. By separating the temporal and the magnitudinal domain we can greatly simplify the computational model, circumventing typical problems of complex dynamical systems. The transfer function transformation of biochemical reaction systems can be applied to mass-action kinetic models of signal transduction. The paper shows that this approach yields significant novel insights while remaining a fully testable and executable dynamical model for signal transduction. In particular we can deconstruct the complex system into local transfer functions between individual species. As an example, we examine modularity and signal integration using a published model of striatal neural plasticity. The modularizations that emerge correspond to a known biological distinction between calcium-dependent and cAMP-dependent pathways. Remarkably, we found that overall interconnectedness depends on the magnitude of inputs, with higher connectivity at low input concentrations and significant modularization at moderate to high input concentrations. 
This general result, which follows directly from the properties of the individual transfer functions, contradicts notions of ubiquitous complexity by showing input-dependent inactivation of signal transmission.
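
    The transfer-function construction can be sketched on a single first-order reaction (source → target): the transmission strength is the target's concentration change, and the delay is the time to come within a tolerance of the new stable equilibrium. The rate constants, tolerance, and integration step below are invented for illustration.

```python
def transfer(source_level, k_on=2.0, k_off=1.0, dt=0.001, t_max=20.0, tol=1e-3):
    """Return (delay, transmission strength) for one source -> target reaction."""
    target = 0.0
    equilibrium = k_on * source_level / k_off    # fixed point of the ODE below
    for step in range(int(t_max / dt)):
        # d[target]/dt = k_on * source - k_off * target  (forward Euler)
        target += dt * (k_on * source_level - k_off * target)
        if abs(target - equilibrium) < tol:
            return step * dt, target             # delay, concentration change
    return t_max, target

delay, strength = transfer(source_level=1.0)
print(round(strength, 2))  # -> 2.0, the stable equilibrium for this input
```

    Tabulating `(delay, strength)` over all source-target pairs fills exactly the kind of transfer-function matrix the paper proposes, and evaluating it at different `source_level` values exposes the input dependence discussed above.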

  2. Assessment of liver function in primary biliary cirrhosis using Gd-EOB-DTPA-enhanced liver MRI.

    PubMed

    Nilsson, Henrik; Blomqvist, Lennart; Douglas, Lena; Nordell, Anders; Jonas, Eduard

    2010-10-01

    Gd-EOB-DTPA (gadolinium ethoxybenzyl diethylenetriaminepentaacetic acid) is a gadolinium-based hepatocyte-specific contrast agent for magnetic resonance imaging (MRI). The aim of this study was to determine whether the hepatic uptake and excretion of Gd-EOB-DTPA differ between patients with primary biliary cirrhosis (PBC) and healthy controls, and whether differences could be quantified. Gd-EOB-DTPA-enhanced liver MRI was performed in 20 healthy volunteers and 12 patients with PBC. The uptake of Gd-EOB-DTPA was assessed using traditional semi-quantitative parameters (C(max), T(max), and T(1/2)), as well as model-free parameters derived after deconvolutional analysis (hepatic extraction fraction [HEF], input-relative blood flow [irBF], and mean transit time [MTT]). In each individual, all parameters were calculated for each liver segment and the median of the segmental values was used to define a global liver median (GLM). Although the PBC patients had relatively mild disease according to their Model for End-stage Liver Disease (MELD), Child-Pugh and Mayo risk scores, they had significantly lower HEF and shorter MTT values compared with the healthy controls. These differences significantly increased with increasing MELD and Child-Pugh scores. Dynamic hepatocyte-specific contrast-enhanced MRI (DHCE-MRI) has a potential role as an imaging-based liver function test. The high spatial resolution of MRI enables hepatic function to be assessed on segmental and sub-segmental levels. © 2010 International Hepato-Pancreato-Biliary Association.

  3. Improved Software to Browse the Serial Medical Images for Learning

    PubMed Central

    2017-01-01

    The thousands of serial images used for medical pedagogy cannot be included in a printed book; they also cannot be efficiently handled by ordinary image viewer software. The purpose of this study was to provide browsing software for grasping serial medical images efficiently. The primary function of the newly programmed software was to select images using 3 types of interfaces: buttons or a horizontal scroll bar, a vertical scroll bar, and a checkbox. The secondary function was to show the names of the structures that had been outlined on the images. To confirm the functions of the software, 3 different types of image data of cadavers (sectioned and outlined images, volume models of the stomach, and photos of the dissected knees) were loaded. The browsing software was downloadable for free from the homepage (anatomy.co.kr) and available off-line. The data sets provided can be replaced by other developers for their own educational purposes. We anticipate that the software will contribute to medical education by allowing users to browse a variety of images. PMID:28581279

  4. Improved Software to Browse the Serial Medical Images for Learning.

    PubMed

    Kwon, Koojoo; Chung, Min Suk; Park, Jin Seo; Shin, Byeong Seok; Chung, Beom Sun

    2017-07-01

    The thousands of serial images used for medical pedagogy cannot be included in a printed book; they also cannot be efficiently handled by ordinary image viewer software. The purpose of this study was to provide browsing software for grasping serial medical images efficiently. The primary function of the newly programmed software was to select images using 3 types of interfaces: buttons or a horizontal scroll bar, a vertical scroll bar, and a checkbox. The secondary function was to show the names of the structures that had been outlined on the images. To confirm the functions of the software, 3 different types of image data of cadavers (sectioned and outlined images, volume models of the stomach, and photos of the dissected knees) were loaded. The browsing software was downloadable for free from the homepage (anatomy.co.kr) and available off-line. The data sets provided can be replaced by other developers for their own educational purposes. We anticipate that the software will contribute to medical education by allowing users to browse a variety of images. © 2017 The Korean Academy of Medical Sciences.

  5. A novel bioreactor and culture method drives high yields of platelets from stem cells.

    PubMed

    Avanzi, Mauro P; Oluwadara, Oluwasijibomi E; Cushing, Melissa M; Mitchell, Maxwell L; Fischer, Stephen; Mitchell, W Beau

    2016-01-01

    Platelet (PLT) transfusion is the primary treatment for thrombocytopenia. PLTs are obtained exclusively from volunteer donors, and the PLT product has only a 5-day shelf life, which can limit supply and result in PLT shortages. PLTs derived from stem cells could help to fill this clinical need. However, current culture methods yield far too few PLTs for clinical application. To address this need, a defined, serum-free culture method was designed using a novel bioreactor to increase the yield of PLTs from stem cell-derived megakaryocytes. CD34 cells isolated from umbilical cord blood were expanded with a variety of reagents and on a nanofiber membrane using serum-free medium. These cells were then differentiated into the megakaryocytic lineage by culturing with thrombopoietin and stem cell factor in serum-free conditions. Polyploidy was induced by addition of a Rho kinase inhibitor or an actin polymerization inhibitor to the CD41 cells. A novel bioreactor was developed that recapitulated aspects of the marrow vascular niche. Polyploid megakaryocytes that were subjected to flow in the bioreactor extended proPLTs and shed PLTs, as confirmed by light microscopy, fluorescence imaging, and flow cytometry. CD34 cells were expanded 100-fold. CD41 cells were expanded 100-fold. Up to 100 PLTs per input megakaryocyte were produced from the bioreactor, for an overall yield of 10^6 PLTs per input CD34 cell. The PLTs externalized P-selectin after activation. Functional PLTs can be produced ex vivo on a clinically relevant scale using serum-free culture conditions with a novel stepwise approach and an innovative bioreactor. © 2015 AABB.
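    The quoted yields compose multiplicatively; a quick sanity check of the overall figure (the variable names below are just labels for the reported fold-expansions, not terms from the paper):

    ```python
    # Sanity check: the overall platelet yield per input CD34 cell quoted in the
    # abstract is consistent with composing the three reported fold-expansions.
    cd34_expansion = 100   # CD34 cells expanded 100-fold
    cd41_expansion = 100   # CD41 (megakaryocyte-lineage) cells expanded 100-fold
    plt_per_mk = 100       # up to 100 platelets per input megakaryocyte

    overall_yield = cd34_expansion * cd41_expansion * plt_per_mk
    print(overall_yield)   # 1000000, i.e. 10**6 platelets per input CD34 cell
    ```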

  6. A system for endobronchial video analysis

    NASA Astrophysics Data System (ADS)

    Byrnes, Patrick D.; Higgins, William E.

    2017-03-01

    Image-guided bronchoscopy is a critical component in the treatment of lung cancer and other pulmonary disorders. During bronchoscopy, a high-resolution endobronchial video stream facilitates guidance through the lungs and allows for visual inspection of a patient's airway mucosal surfaces. Despite the detailed information it contains, little effort has been made to incorporate recorded video into the clinical workflow. Follow-up procedures often required in cancer assessment or asthma treatment could significantly benefit from effectively parsed and summarized video. Tracking diagnostic regions of interest (ROIs) could potentially better equip physicians to detect early airway-wall cancer or improve asthma treatments, such as bronchial thermoplasty. To address this need, we have developed a system for the postoperative analysis of recorded endobronchial video. The system first parses an input video stream into endoscopic shots, derives motion information, and selects salient representative key frames. Next, a semi-automatic method for CT-video registration creates data linkages between a CT-derived airway-tree model and the input video. These data linkages then enable the construction of a CT-video chest model comprising a bronchoscopy path history (BPH) - defining all airway locations visited during a procedure - and texture-mapping information for rendering registered video frames onto the airway-tree model. A suite of analysis tools is included to visualize and manipulate the extracted data. Video browsing and retrieval are facilitated through a video table of contents (TOC) and a search query interface. The system provides a variety of operational modes and additional functionality, including the ability to define regions of interest. We demonstrate the potential of our system using two human case study examples.

  7. Model free approach to kinetic analysis of real-time hyperpolarized 13C magnetic resonance spectroscopy data.

    PubMed

    Hill, Deborah K; Orton, Matthew R; Mariotti, Erika; Boult, Jessica K R; Panek, Rafal; Jafar, Maysam; Parkes, Harold G; Jamin, Yann; Miniotis, Maria Falck; Al-Saffar, Nada M S; Beloueche-Babari, Mounia; Robinson, Simon P; Leach, Martin O; Chung, Yuen-Li; Eykyn, Thomas R

    2013-01-01

    Real-time detection of the rates of metabolic flux, or exchange rates of endogenous enzymatic reactions, is now feasible in biological systems using Dynamic Nuclear Polarization Magnetic Resonance. Derivation of reaction rate kinetics from this technique typically requires multi-compartmental modeling of dynamic data, and results are therefore model-dependent and prone to misinterpretation. We present a model-free formalism based on the ratio of the total areas under the curve (AUC) of the injected and product metabolites, for example pyruvate and lactate. A theoretical framework to support this novel analysis approach is described, and demonstrates that the AUC ratio is proportional to the forward rate constant k. We show that the model-free approach strongly correlates with k for whole-cell in vitro experiments across a range of cancer cell lines, and detects response in cells treated with the pan-class I PI3K inhibitor GDC-0941 with comparable or greater sensitivity. The same result is seen in vivo with tumor xenograft-bearing mice, in control tumors and following drug treatment with dichloroacetate. An important finding is that the area under the curve is independent of both the input function and any other metabolic pathways arising from the injected metabolite. This model-free approach provides a robust and clinically relevant alternative to kinetic model-based rate measurements in the clinical translation of hyperpolarized 13C metabolic imaging in humans, where measurement of the input function can be problematic.
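    The AUC-ratio idea can be illustrated numerically. The sketch below assumes a simple irreversible two-site exchange (pyruvate to lactate) with a common relaxation rate rho; the rates are synthetic, not the paper's data. For that model the total-AUC ratio equals k/rho, i.e. it is proportional to the forward rate constant k:

    ```python
    import numpy as np

    # Model-free AUC-ratio analysis (sketch): irreversible label exchange
    # pyruvate -> lactate with a common decay rate rho. For this model the
    # ratio AUC(lactate)/AUC(pyruvate) equals k/rho. All rates are synthetic.
    k, rho = 0.05, 0.02                 # s^-1, hypothetical exchange and decay rates
    t = np.linspace(0.0, 600.0, 6001)   # long window so both curves return to ~0

    pyr = np.exp(-(k + rho) * t)                      # injected metabolite
    lac = np.exp(-rho * t) - np.exp(-(k + rho) * t)   # product metabolite

    def auc(y, t):
        """Total area under the curve by the trapezoidal rule."""
        return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

    ratio = auc(lac, t) / auc(pyr, t)
    print(ratio, k / rho)   # the two agree up to truncation of the time window
    ```

    Note the ratio needs no arterial input function: it is formed entirely from the two measured time courses.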

  8. Model Free Approach to Kinetic Analysis of Real-Time Hyperpolarized 13C Magnetic Resonance Spectroscopy Data

    PubMed Central

    Mariotti, Erika; Boult, Jessica K. R.; Panek, Rafal; Jafar, Maysam; Parkes, Harold G.; Jamin, Yann; Miniotis, Maria Falck; Al-Saffar, Nada M. S.; Beloueche-Babari, Mounia; Robinson, Simon P.; Leach, Martin O.; Chung, Yuen-Li; Eykyn, Thomas R.

    2013-01-01

    Real-time detection of the rates of metabolic flux, or exchange rates of endogenous enzymatic reactions, is now feasible in biological systems using Dynamic Nuclear Polarization Magnetic Resonance. Derivation of reaction rate kinetics from this technique typically requires multi-compartmental modeling of dynamic data, and results are therefore model-dependent and prone to misinterpretation. We present a model-free formalism based on the ratio of the total areas under the curve (AUC) of the injected and product metabolites, for example pyruvate and lactate. A theoretical framework to support this novel analysis approach is described, and demonstrates that the AUC ratio is proportional to the forward rate constant k. We show that the model-free approach strongly correlates with k for whole-cell in vitro experiments across a range of cancer cell lines, and detects response in cells treated with the pan-class I PI3K inhibitor GDC-0941 with comparable or greater sensitivity. The same result is seen in vivo with tumor xenograft-bearing mice, in control tumors and following drug treatment with dichloroacetate. An important finding is that the area under the curve is independent of both the input function and any other metabolic pathways arising from the injected metabolite. This model-free approach provides a robust and clinically relevant alternative to kinetic model-based rate measurements in the clinical translation of hyperpolarized 13C metabolic imaging in humans, where measurement of the input function can be problematic. PMID:24023724

  9. Extrapolation of sonic boom pressure signatures by the waveform parameter method

    NASA Technical Reports Server (NTRS)

    Thomas, C. L.

    1972-01-01

    The waveform parameter method of sonic boom extrapolation is derived and shown to be equivalent to the F-function method. A computer program based on the waveform parameter method is presented and discussed, with a sample case demonstrating program input and output.

  10. Modeling the atmospheric chemistry of TICs

    NASA Astrophysics Data System (ADS)

    Henley, Michael V.; Burns, Douglas S.; Chynwat, Veeradej; Moore, William; Plitz, Angela; Rottmann, Shawn; Hearn, John

    2009-05-01

    An atmospheric chemistry model that describes the behavior and disposition of environmentally hazardous compounds discharged into the atmosphere was coupled with the transport and diffusion model, SCIPUFF. The atmospheric chemistry model was developed by reducing a detailed atmospheric chemistry mechanism to a simple empirical effective degradation rate term (keff) that is a function of important meteorological parameters such as solar flux, temperature, and cloud cover. Empirically derived keff functions that describe the degradation of target toxic industrial chemicals (TICs) were derived by statistically analyzing data generated from the detailed chemistry mechanism run over a wide range of (typical) atmospheric conditions. To assess and identify areas to improve the developed atmospheric chemistry model, sensitivity and uncertainty analyses were performed to (1) quantify the sensitivity of the model output (TIC concentrations) with respect to changes in the input parameters and (2) improve, where necessary, the quality of the input data based on sensitivity results. The model predictions were evaluated against experimental data. Chamber data were used to remove the complexities of dispersion in the atmosphere.
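    The reduction described amounts to first-order decay with a meteorology-dependent rate. A minimal sketch, with a purely hypothetical k_eff(T, flux) form standing in for the study's statistically fitted functions:

    ```python
    import math

    # Sketch of an empirical effective degradation rate k_eff(T, flux). The
    # functional form and coefficients below are hypothetical placeholders,
    # not the fitted k_eff functions from the study.
    def k_eff(temp_K, solar_flux):
        # illustrative: loss scales with solar flux, Arrhenius-like in temperature
        return 1e-5 * solar_flux * math.exp(-2000.0 * (1.0 / temp_K - 1.0 / 298.0))

    def concentration(c0, temp_K, solar_flux, t_s):
        """First-order loss: C(t) = C0 * exp(-k_eff * t)."""
        return c0 * math.exp(-k_eff(temp_K, solar_flux) * t_s)

    c = concentration(1.0, 298.0, 100.0, 3600.0)
    print(c)   # fraction of the initial concentration remaining after one hour
    ```

    A transport model such as SCIPUFF can then apply this single scalar rate per puff instead of integrating the full chemical mechanism.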

  11. Flight-Determined Subsonic Longitudinal Stability and Control Derivatives of the F-18 High Angle of Attack Research Vehicle (HARV) with Thrust Vectoring

    NASA Technical Reports Server (NTRS)

    Iliff, Kenneth W.; Wang, Kon-Sheng Charles

    1997-01-01

    The subsonic longitudinal stability and control derivatives of the F-18 High Angle of Attack Research Vehicle (HARV) are extracted from dynamic flight data using a maximum likelihood parameter identification technique. The technique uses the linearized aircraft equations of motion in their continuous/discrete form and accounts for state and measurement noise as well as thrust-vectoring effects. State noise is used to model the uncommanded forcing function caused by unsteady aerodynamics over the aircraft, particularly at high angles of attack. Thrust vectoring was implemented using electrohydraulically-actuated nozzle postexit vanes and a specialized research flight control system. During maneuvers, a control system feature provided independent aerodynamic control surface inputs and independent thrust-vectoring vane inputs, thereby eliminating correlations between the aircraft states and controls. Substantial variations in control excitation and dynamic response were exhibited for maneuvers conducted at different angles of attack. Opposing vane interactions caused most thrust-vectoring inputs to experience some exhaust plume interference and thus reduced effectiveness. The estimated stability and control derivatives are plotted, and a discussion relates them to predicted values and maneuver quality.

  12. Actor-critic-based optimal tracking for partially unknown nonlinear discrete-time systems.

    PubMed

    Kiumarsi, Bahare; Lewis, Frank L

    2015-01-01

    This paper presents a partially model-free adaptive optimal control solution to the deterministic nonlinear discrete-time (DT) tracking control problem in the presence of input constraints. The tracking error dynamics and reference trajectory dynamics are first combined to form an augmented system. Then, a new discounted performance function based on the augmented system is presented for the optimal nonlinear tracking problem. In contrast to the standard solution, which finds the feedforward and feedback terms of the control input separately, minimization of the proposed discounted performance function gives both the feedback and feedforward parts of the control input simultaneously. This enables us to encode the input constraints into the optimization problem using a nonquadratic performance function. The DT tracking Bellman equation and the tracking Hamilton-Jacobi-Bellman (HJB) equation are derived. An actor-critic-based reinforcement learning algorithm is used to learn the solution to the tracking HJB equation online without requiring knowledge of the system drift dynamics. That is, two neural networks (NNs), namely an actor NN and a critic NN, are tuned online and simultaneously to generate the optimal bounded control policy. A simulation example is given to show the effectiveness of the proposed method.
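    For a linear plant, the augmented-system construction and the "feedback plus feedforward from one gain" property can be sketched directly. Everything below is illustrative: scalar dynamics and weights, a constant reference, and discounted Riccati value iteration standing in for the paper's actor-critic learner:

    ```python
    import numpy as np

    # Augmented-system discounted LQ tracking (linear special case, sketch).
    a, b = 0.9, 1.0              # scalar plant x_{k+1} = a*x + b*u
    Q, R, gamma = 1.0, 0.1, 0.95

    # Augmented state X = [x, r]; constant reference r_{k+1} = r_k.
    A = np.array([[a, 0.0],
                  [0.0, 1.0]])
    B = np.array([[b], [0.0]])
    # Tracking cost (x - r)^2 expressed on the augmented state.
    Qa = Q * np.array([[1.0, -1.0],
                       [-1.0, 1.0]])

    # Discounted Riccati value iteration; one gain K covers feedback AND feedforward.
    P = Qa.copy()
    for _ in range(2000):
        S = R + gamma * (B.T @ P @ B)
        K = gamma * np.linalg.solve(S, B.T @ P @ A)
        P = Qa + gamma * A.T @ P @ A - gamma * A.T @ P @ B @ K

    x, r = 0.0, 1.0
    for _ in range(100):
        u = -(K @ np.array([x, r]))[0]   # u = -K1*x - K2*r: both parts at once
        x = a * x + b * u
    print(abs(x - r))                    # small residual tracking error
    ```

    The discount factor leaves a small steady-state error, which is the usual trade-off of discounted tracking formulations.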

  13. An Image Processing Algorithm Based On FMAT

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Pal, Sankar K.

    1995-01-01

    Information is deleted in ways that minimize adverse effects on reconstructed images. A new grey-scale generalization of the medial axis transformation (MAT), called FMAT (short for Fuzzy MAT), is proposed. It is formulated by making a natural extension to fuzzy-set theory of all the definitions and conditions (e.g., the characteristic function of a disk, the subset condition of a disk, and redundancy checking) used in defining the MAT of a crisp set. It does not require the image to have any kind of a priori segmentation, and it allows the medial axis (and skeleton) to be a fuzzy subset of the input image. The resulting FMAT (consisting of maximal fuzzy disks) is capable of reconstructing the original image exactly.

  14. Holography and noncommutative Yang-Mills theory

    PubMed

    Li; Wu

    2000-03-06

    In this Letter, a recently proposed gravity dual of noncommutative Yang-Mills theory is derived from the relations between closed string moduli and open string moduli recently suggested by Seiberg and Witten. The only new input one needs is a simple form of the running string tension as a function of energy. This derivation provides convincing evidence that string theory integrates with the holographic principle and demonstrates a direct link between noncommutative Yang-Mills theory and holography.

  15. Concurrent Image Processing Executive (CIPE). Volume 3: User's guide

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Cooper, Gregory T.; Groom, Steven L.; Mazer, Alan S.; Williams, Winifred I.; Kong, Mih-Seh

    1990-01-01

    CIPE (the Concurrent Image Processing Executive) is both an executive, which organizes the parameter inputs for hypercube applications, and an environment, which provides temporary data workspace and simple real-time function definition facilities for image analysis. CIPE provides two types of user interface. The Command Line Interface (CLI) provides a simple command-driven environment allowing interactive function definition and evaluation of algebraic expressions. The menu interface employs a hierarchical screen-oriented menu system in which the user is led through a menu tree to a specific application and is then given a formatted panel screen for parameter entry. How to initialize the system through the setup function, how to read data into CIPE symbols, how to manipulate and display data through the use of executive functions, and how to run an application in either user-interface mode are described.

  16. Optical image encryption method based on incoherent imaging and polarized light encoding

    NASA Astrophysics Data System (ADS)

    Wang, Q.; Xiong, D.; Alfalou, A.; Brosseau, C.

    2018-05-01

    We propose an incoherent encoding system for image encryption based on a polarized encoding method combined with incoherent imaging. Incoherent imaging is the core component of this proposal: the incoherent point-spread function (PSF) of the imaging system serves as the main key, encoding the input intensity distribution through a convolution operation. An array of retarders and polarizers is placed on the input plane of the imaging structure to encrypt the polarized state of light based on Mueller polarization calculus. The proposal makes full use of the randomness of the polarization parameters and the incoherent PSF, so that a multidimensional key space is generated to resist illegal attacks. Mueller polarization calculus and incoherent illumination of the imaging structure ensure that only intensity information is manipulated. Another key advantage is that complicated processing and recording of a complex-valued signal are avoided. The encoded information is just an intensity distribution, which is advantageous for data storage and transmission because the information expansion accompanying conventional encryption methods is also avoided. The decryption procedure can be performed digitally or using optoelectronic devices. Numerical simulation tests demonstrate the validity of the proposed scheme.
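    The core encryption step, convolving the input intensity with the incoherent PSF, can be sketched as follows. The PSF is synthetic, the polarization (Mueller-calculus) stage is omitted, and the Wiener-style inverse filter is one plausible digital decryption route rather than the paper's exact procedure:

    ```python
    import numpy as np

    # Intensity-only encoding sketch: the incoherent PSF of the imaging system
    # is the key, and encryption is convolution of the input intensity with it.
    rng = np.random.default_rng(0)
    n = 64
    image = rng.random((n, n))   # stand-in input intensity distribution
    psf = rng.random((n, n))
    psf /= psf.sum()             # non-negative, energy-normalized key

    # Circular convolution via FFT (incoherent imaging is linear in intensity).
    H = np.fft.fft2(psf)
    cipher = np.real(np.fft.ifft2(np.fft.fft2(image) * H))

    # Digital decryption with the known key: regularized (Wiener-style) inverse.
    eps = 1e-6
    recovered = np.real(np.fft.ifft2(np.fft.fft2(cipher) * np.conj(H)
                                     / (np.abs(H) ** 2 + eps)))
    corr = np.corrcoef(recovered.ravel(), image.ravel())[0, 1]
    print(corr)   # close to 1: the keyed inverse restores the input intensity
    ```

    Without the PSF key, the cipher is a heavily smeared intensity map that bears little resemblance to the input.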

  17. Optimal design and uncertainty quantification in blood flow simulations for congenital heart disease

    NASA Astrophysics Data System (ADS)

    Marsden, Alison

    2009-11-01

    Recent work has demonstrated substantial progress in capabilities for patient-specific cardiovascular flow simulations. Recent advances include increasingly complex geometries, physiological flow conditions, and fluid-structure interaction. However, inputs to these simulations, including medical image data, catheter-derived pressures and material properties, can have significant uncertainties associated with them. For simulations to predict clinically useful and reliable output information, it is necessary to quantify the effects of input uncertainties on outputs of interest. In addition, blood flow simulation tools can now be efficiently coupled to shape optimization algorithms for surgery design applications, and these tools should incorporate uncertainty information. We present a unified framework to systematically and efficiently account for uncertainties in simulations using adaptive stochastic collocation. In addition, we present a framework for derivative-free optimization of cardiovascular geometries, and we combine these tools to perform optimization under uncertainty. These methods are demonstrated using simulations and surgery optimization to improve hemodynamics in pediatric cardiology applications.
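    Non-intrusive stochastic collocation of this kind can be sketched for a single Gaussian input using Gauss-Hermite quadrature: the model is run at a handful of deterministic nodes and the output statistics are assembled from weighted sums. The "model" below is a hypothetical stand-in, not a haemodynamic solver:

    ```python
    import numpy as np

    # Stochastic collocation sketch: propagate an uncertain input X ~ N(mu, sigma^2)
    # through a deterministic model using Gauss-Hermite nodes and weights.
    def model(x):
        return np.exp(0.1 * x)   # hypothetical scalar output of one simulation run

    mu, sigma = 2.0, 0.5
    nodes, weights = np.polynomial.hermite.hermgauss(8)   # physicists' Hermite

    # Change of variables: E[f(X)] = (1/sqrt(pi)) * sum_i w_i * f(mu + sqrt(2)*sigma*x_i)
    samples = model(mu + np.sqrt(2.0) * sigma * nodes)
    mean = np.sum(weights * samples) / np.sqrt(np.pi)
    second = np.sum(weights * samples**2) / np.sqrt(np.pi)
    var = second - mean**2
    print(mean, var)   # output mean and variance from only 8 model evaluations
    ```

    Adaptive collocation refines the node set only along the stochastic dimensions that matter, which is what keeps the cost tractable for expensive 3-D flow solvers.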

  18. Nanoparticles based on quantum dots and a luminol derivative: implications for in vivo imaging of hydrogen peroxide by chemiluminescence resonance energy transfer.

    PubMed

    Lee, Eun Sook; Deepagan, V G; You, Dong Gil; Jeon, Jueun; Yi, Gi-Ra; Lee, Jung Young; Lee, Doo Sung; Suh, Yung Doug; Park, Jae Hyung

    2016-03-18

    Overproduction of hydrogen peroxide is involved in the pathogenesis of inflammatory diseases such as cancer and arthritis. To image hydrogen peroxide via chemiluminescence resonance energy transfer in the near-infrared wavelength range, we prepared quantum dots functionalized with a luminol derivative.

  19. Homeostasis in a feed forward loop gene regulatory motif.

    PubMed

    Antoneli, Fernando; Golubitsky, Martin; Stewart, Ian

    2018-05-14

    The internal state of a cell is affected by inputs from the extra-cellular environment, such as external temperature. If some output, such as the concentration of a target protein, remains approximately constant as inputs vary, the system exhibits homeostasis. Special sub-networks called motifs are unusually common in gene regulatory networks (GRNs), suggesting that they may have a significant biological function. Potentially, one such function is homeostasis. In support of this hypothesis, we show that the feed-forward loop GRN produces homeostasis. Here the inputs are subsumed into a single parameter that affects only the first node in the motif, and the output is the concentration of a target protein. The analysis uses the notion of infinitesimal homeostasis, which occurs when the input-output map has a critical point (zero derivative). In model equations such points can be located using implicit differentiation. If the second derivative of the input-output map also vanishes, the critical point is a chair: the output rises roughly linearly, then flattens out (the homeostasis region or plateau), and then starts to rise again. Chair points are a common cause of homeostasis. In more complicated equations or networks, numerical exploration would have to augment analysis; thus, in terms of finding chairs, this paper presents a proof of concept. We apply this method to a standard family of differential equations modeling the feed-forward loop GRN, in which the input determines the production of a particular mRNA, and the resulting chair points are found analytically. The same method can potentially be used to find homeostasis regions in other GRNs. In the discussion and conclusion section, we also examine why homeostasis in the motif may persist even when the rest of the network is taken into account. Copyright © 2018 Elsevier Ltd. All rights reserved.
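    The chair-point condition, simultaneous vanishing of the first and second derivatives of the input-output map, can be checked symbolically on a toy map (this is not the paper's feed-forward-loop system, just an illustration of the definition):

    ```python
    import sympy as sp

    # Toy chair point: infinitesimal homeostasis holds where z'(I) = 0, and a
    # chair additionally requires z''(I) = 0, so locally z(I) ~ const + (I - I0)^3.
    I = sp.symbols('I', real=True)
    z = 1 + (I - 2)**3            # hypothetical input-output map, plateau at I = 2

    z1 = sp.diff(z, I)            # first derivative of the input-output map
    z2 = sp.diff(z, I, 2)         # second derivative
    crit = sp.solve(sp.Eq(z1, 0), I)              # critical points (z' = 0)
    chair = [c for c in crit if z2.subs(I, c) == 0]
    print(chair)                  # [2]: the chair sits at I0 = 2
    ```

    Near the chair the output behaves like a cubic, which produces exactly the rise-plateau-rise profile described in the abstract.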

  20. Using MATLAB software with Tomcat server and Java platform for remote image analysis in pathology.

    PubMed

    Markiewicz, Tomasz

    2011-03-30

    The Matlab software is one of the most advanced development tools for applications in engineering practice. From our point of view the most important part is the image processing toolbox, which offers many built-in functions, including mathematical morphology, and implementations of many artificial neural networks. It is a very popular platform for creating specialized programs for image analysis, including in pathology. Based on the latest version of the Matlab Builder Java toolbox, it is possible to create software serving as a remote system for image analysis in pathology via internet communication. The internet platform can be realized based on JavaServer Pages (JSP) with the Tomcat server as the servlet container. In the presented software implementation we propose remote image analysis realized by Matlab algorithms. These algorithms can be compiled to an executable jar file with the help of the Matlab Builder Java toolbox. The Matlab function must be declared with a set of input data, an output structure with numerical results, and a Matlab web figure. Any function prepared in that manner can be used as a Java function in JSP. The graphical user interface providing the input data and displaying the results (also in graphical form) must be implemented in JSP. Additionally, data storage to a database can be implemented within the algorithm written in Matlab with the help of the Matlab Database Toolbox, directly alongside the image processing. The complete JSP page can be run by the Tomcat server. The proposed tool for remote image analysis was tested on the Computerized Analysis of Medical Images (CAMI) software developed by the author. The user provides the image and case information (diagnosis, staining, image parameters, etc.). When analysis is initialized, the input data and image are sent to the servlet on Tomcat. When analysis is done, the client obtains the graphical results as an image with recognized cells marked, along with the quantitative output. Additionally, the results are stored in a server database. The internet platform was tested on a PC Intel Core2 Duo T9600 2.8 GHz, 4 GB RAM server with 768x576 pixel, 1.28 MB tiff images referring to meningioma tumour (x400, Ki-67/MIB-1). The time consumption was as follows: analysis by CAMI locally on the server took 3.5 seconds; remote analysis took 26 seconds, of which 22 seconds were used for data transfer via the internet connection. For a jpg image (102 KB) the time was reduced to 14 seconds. The results confirmed that the designed remote platform can be useful for pathology image analysis. The time consumption depends mainly on the image size and the speed of the internet connection. The presented implementation can be used for many types of analysis with different staining, tissue, and morphometry approaches. A significant remaining problem is the implementation of the JSP page in multithreaded form, so that it can be used in parallel by many users. The presented platform for image analysis in pathology can be especially useful for small laboratories without their own image analysis system.

  1. Using MATLAB software with Tomcat server and Java platform for remote image analysis in pathology

    PubMed Central

    2011-01-01

    Background The Matlab software is one of the most advanced development tools for applications in engineering practice. From our point of view the most important part is the image processing toolbox, which offers many built-in functions, including mathematical morphology, and implementations of many artificial neural networks. It is a very popular platform for creating specialized programs for image analysis, including in pathology. Based on the latest version of the Matlab Builder Java toolbox, it is possible to create software serving as a remote system for image analysis in pathology via internet communication. The internet platform can be realized based on JavaServer Pages (JSP) with the Tomcat server as the servlet container. Methods In the presented software implementation we propose remote image analysis realized by Matlab algorithms. These algorithms can be compiled to an executable jar file with the help of the Matlab Builder Java toolbox. The Matlab function must be declared with a set of input data, an output structure with numerical results, and a Matlab web figure. Any function prepared in that manner can be used as a Java function in JSP. The graphical user interface providing the input data and displaying the results (also in graphical form) must be implemented in JSP. Additionally, data storage to a database can be implemented within the algorithm written in Matlab with the help of the Matlab Database Toolbox, directly alongside the image processing. The complete JSP page can be run by the Tomcat server. Results The proposed tool for remote image analysis was tested on the Computerized Analysis of Medical Images (CAMI) software developed by the author. The user provides the image and case information (diagnosis, staining, image parameters, etc.). When analysis is initialized, the input data and image are sent to the servlet on Tomcat. When analysis is done, the client obtains the graphical results as an image with recognized cells marked, along with the quantitative output. Additionally, the results are stored in a server database. The internet platform was tested on a PC Intel Core2 Duo T9600 2.8 GHz, 4 GB RAM server with 768x576 pixel, 1.28 MB tiff images referring to meningioma tumour (x400, Ki-67/MIB-1). The time consumption was as follows: analysis by CAMI locally on the server took 3.5 seconds; remote analysis took 26 seconds, of which 22 seconds were used for data transfer via the internet connection. For a jpg image (102 KB) the time was reduced to 14 seconds. Conclusions The results confirmed that the designed remote platform can be useful for pathology image analysis. The time consumption depends mainly on the image size and the speed of the internet connection. The presented implementation can be used for many types of analysis with different staining, tissue, and morphometry approaches. A significant remaining problem is the implementation of the JSP page in multithreaded form, so that it can be used in parallel by many users. The presented platform for image analysis in pathology can be especially useful for small laboratories without their own image analysis system. PMID:21489188

  2. Linear and quadratic models of point process systems: contributions of patterned input to output.

    PubMed

    Lindsay, K A; Rosenberg, J R

    2012-08-01

    In the 1880s, Volterra characterised a nonlinear system using a functional series connecting continuous input and continuous output. In the 1940s, Norbert Wiener circumvented problems associated with the application of Volterra series to physical problems by deriving from them a new series of terms that are mutually uncorrelated with respect to Gaussian processes. Subsequently, in the 1970s, Brillinger introduced a point-process analogue of Volterra's series connecting point-process inputs to the instantaneous rate of point-process output. We derive here a new series from this analogue whose terms are mutually uncorrelated with respect to Poisson processes. This new series expresses how patterned input in a spike train, represented by third-order cross-cumulants, is converted into the instantaneous rate of an output point process. Given experimental records of suitable duration, the contribution of arbitrary patterned input to an output process can, in principle, be determined. Solutions for linear and quadratic point-process models with one and two inputs and a single output are investigated. Our theoretical results are applied to isolated muscle spindle data in which the spike trains from the primary and secondary endings of the same muscle spindle are recorded in response to stimulation of one and then two static fusimotor axons, in the absence and presence of a random length change imposed on the parent muscle. For a fixed mean rate of input spikes, the analysis of the experimental data makes explicit which patterns of two input spikes contribute to an output spike. Copyright © 2012 Elsevier Ltd. All rights reserved.

  3. Adapting radiotherapy to hypoxic tumours

    NASA Astrophysics Data System (ADS)

    Malinen, Eirik; Søvik, Åste; Hristov, Dimitre; Bruland, Øyvind S.; Rune Olsen, Dag

    2006-10-01

    In the current work, the concepts of biologically adapted radiotherapy of hypoxic tumours were presented in a framework encompassing functional tumour imaging, tumour control predictions, inverse treatment planning and intensity modulated radiotherapy (IMRT). Dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) of a spontaneous sarcoma in the nasal region of a dog was employed. The tracer concentration in the tumour was assumed to be related to the oxygen tension and was compared to Eppendorf histograph measurements. Based on the pO2-related images derived from the MR analysis, the tumour was divided into four compartments by a segmentation procedure, from which DICOM structure sets for IMRT planning could be derived. In order to display the possible advantages of non-uniform tumour doses, dose redistribution among the four tumour compartments was introduced. The dose redistribution was constrained by keeping the average dose to the tumour equal to a conventional target dose. The compartmental doses yielding the optimum tumour control probability (TCP) were used as input in an inverse planning system, where the planning basis was the pO2-related tumour images from the MR analysis. Uniform (conventional) and non-uniform IMRT plans were scored both physically and biologically. The consequences of random and systematic errors in the compartmental images were evaluated. The normalized frequency distributions of the tracer concentration and the pO2 Eppendorf measurements were not significantly different. According to the MR analysis, 28% of the tumour had pO2 values of less than 5 mm Hg. The optimum TCP following a non-uniform dose prescription was about four times higher than that following a uniform dose prescription. The non-uniform IMRT dose distribution resulting from the inverse planning gave a three times higher TCP than that of the uniform distribution.
The TCP and the dose-based plan quality depended on IMRT parameters defined in the inverse planning procedure (fields and step-and-shoot intensity levels). Simulated random and systematic errors in the pO2-related images reduced the TCP for the non-uniform dose prescription. In conclusion, improved tumour control of hypoxic tumours by dose redistribution may be expected following hypoxia imaging, tumour control predictions, inverse treatment planning and IMRT.
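    The dose-redistribution idea can be sketched with a simple Poisson TCP model, TCP = exp(-sum_i N_i * exp(-alpha_i * d_i)), maximized over compartment doses under a fixed mean tumour dose. All radiobiological numbers below are illustrative, not the study's dog-sarcoma values:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Dose redistribution among pO2-derived tumour compartments at fixed mean
    # dose, with lower radiosensitivity (alpha) in the hypoxic compartment.
    v = np.array([0.28, 0.24, 0.24, 0.24])       # compartment volume fractions
    alpha = np.array([0.15, 0.20, 0.28, 0.35])   # Gy^-1, lowest in the hypoxic part
    N = 1e5 * v                                  # clonogens per compartment
    D_mean = 70.0                                # conventional mean tumour dose, Gy

    def neg_log_tcp(d):
        return float(np.sum(N * np.exp(-alpha * d)))   # -ln(TCP), to be minimized

    mean_dose = {"type": "eq", "fun": lambda d: float(v @ d) - D_mean}
    res = minimize(neg_log_tcp, x0=np.full(4, D_mean), method="SLSQP",
                   bounds=[(0.0, 120.0)] * 4, constraints=[mean_dose])

    tcp_uniform = np.exp(-neg_log_tcp(np.full(4, D_mean)))
    tcp_redistributed = np.exp(-neg_log_tcp(res.x))
    print(tcp_uniform, tcp_redistributed)   # redistribution raises the predicted TCP
    ```

    The optimizer shifts dose toward the radioresistant (hypoxic) compartment, which is the qualitative behaviour the study exploits.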

  4. Reconstruction of Missing Pixels in Satellite Images Using the Data Interpolating Empirical Orthogonal Function (DINEOF)

    NASA Astrophysics Data System (ADS)

    Liu, X.; Wang, M.

    2016-02-01

    For coastal and inland waters, spatially complete and frequent satellite measurements are important for monitoring and understanding coastal biological and ecological processes and phenomena, such as diurnal variations. High-frequency images of the water diffuse attenuation coefficient at the wavelength of 490 nm (Kd(490)) derived from the Korean Geostationary Ocean Color Imager (GOCI) provide a unique opportunity to study diurnal variation of water turbidity in coastal regions of the Bohai Sea, Yellow Sea, and East China Sea. However, many pixels are missing from the original GOCI-derived Kd(490) images because of clouds and various other reasons. The Data Interpolating Empirical Orthogonal Function (DINEOF) method reconstructs missing data in geophysical datasets using Empirical Orthogonal Functions (EOFs). In this study, DINEOF is applied to GOCI-derived Kd(490) data in the Yangtze River mouth and Yellow River mouth regions; the DINEOF-reconstructed Kd(490) data are used to fill in the missing pixels, and the spatial patterns and temporal functions of the first three EOF modes are used to investigate the sub-diurnal variation due to tidal forcing. In addition, the DINEOF method is applied to data from the Visible Infrared Imaging Radiometer Suite (VIIRS) on board the Suomi National Polar-orbiting Partnership (SNPP) satellite to reconstruct missing pixels in the daily Kd(490) and chlorophyll-a concentration images, and some application examples in the Chesapeake Bay and the Gulf of Mexico will be presented.
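DINEOF's core iteration — initialize the gaps, reconstruct the field from a truncated EOF (SVD) basis, update only the missing entries, and repeat — can be sketched on a synthetic field. This is a minimal illustration under assumed toy data; the real method also cross-validates the number of retained modes, which is omitted here.

```python
import numpy as np

def dineof(field, n_modes=3, n_iter=50):
    """EOF-based gap filling in the spirit of DINEOF.

    field: (time x space) array with NaNs at missing pixels. Gaps are
    initialised to the temporal mean (zero anomaly) and repeatedly
    replaced by a truncated-SVD reconstruction of the anomaly field.
    """
    mask = np.isnan(field)
    col_mean = np.nanmean(field, axis=0)       # per-pixel temporal mean
    anom = field - col_mean
    anom[mask] = 0.0                           # first guess: zero anomaly
    for _ in range(n_iter):
        u, s, vt = np.linalg.svd(anom, full_matrices=False)
        recon = (u[:, :n_modes] * s[:n_modes]) @ vt[:n_modes]
        anom[mask] = recon[mask]               # update only the gaps
    return anom + col_mean

# synthetic rank-2 field with 20% of the pixels missing
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 40)
truth = (np.outer(np.sin(t), rng.normal(size=100))
         + np.outer(np.cos(t), rng.normal(size=100)))
obs = truth.copy()
holes = rng.random(truth.shape) < 0.2
obs[holes] = np.nan
filled = dineof(obs, n_modes=2)
err = np.sqrt(np.mean((filled[holes] - truth[holes]) ** 2))
```

Because the synthetic field really is low-rank, the recovered values at the holes converge close to the truth after a few dozen iterations.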

  5. Two-dimensional phase unwrapping using robust derivative estimation and adaptive integration.

    PubMed

    Strand, Jarle; Taxt, Torfinn

    2002-01-01

    The adaptive integration (ADI) method for two-dimensional (2-D) phase unwrapping is presented. The method uses an algorithm for noise robust estimation of partial derivatives, followed by a noise robust adaptive integration process. The ADI method can easily unwrap phase images with moderate noise levels, and the resulting images are congruent modulo 2π with the observed, wrapped, input images. In a quantitative evaluation, both the ADI and the BLS methods (Strand et al.) were better than the least-squares methods of Ghiglia and Romero (GR), and of Marroquin and Rivera (MRM). In a qualitative evaluation, the ADI, the BLS, and a conjugate gradient version of the MRM method (MRMCG), were all compared using a synthetic image with shear, using 115 magnetic resonance images, and using 22 fiber-optic interferometry images. For the synthetic image and the interferometry images, the ADI method gave consistently visually better results than the other methods. For the MR images, the MRMCG method was best, and the ADI method second best. The ADI method was less sensitive to the mask definition and the block size than the BLS method, and successfully unwrapped images with shears that were not marked in the masks. The computational requirements of the ADI method for images of nonrectangular objects were comparable to only two iterations of many least-squares-based methods (e.g., GR). We believe the ADI method provides a powerful addition to the ensemble of tools available for 2-D phase unwrapping.
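The estimate-derivatives-then-integrate idea underlying such methods is easiest to see in one dimension: wrapping the finite differences back into the principal interval gives the true phase increments (as long as the phase changes by less than π per sample), and cumulative summation recovers the unwrapped phase up to a constant. The 2-D ADI method adds noise-robust derivative estimation and adaptive integration, which this sketch does not attempt.

```python
import numpy as np

def unwrap_1d(wrapped):
    """Unwrap by integrating wrapped phase differences.

    The wrapped difference of neighbouring samples equals the true phase
    increment whenever the true phase changes by less than pi per sample;
    cumulative summation then recovers the phase up to a constant.
    """
    d = np.diff(wrapped)
    d_wrapped = (d + np.pi) % (2 * np.pi) - np.pi   # wrap differences into [-pi, pi)
    return np.concatenate(([wrapped[0]], wrapped[0] + np.cumsum(d_wrapped)))

x = np.linspace(0, 6 * np.pi, 200)
true_phase = 0.5 * x ** 1.3                  # smooth phase, < pi change per sample
wrapped = np.angle(np.exp(1j * true_phase))  # wrapped into (-pi, pi]
recovered = unwrap_1d(wrapped)
```

On this clean signal the recovery is exact to floating-point precision and matches `np.unwrap`; the hard part of 2-D unwrapping is precisely that noise and shear break the "< π per sample" assumption.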

  6. Optimization of the reference region method for dual pharmacokinetic modeling using Gd-DTPA/MRI and (18) F-FDG/PET.

    PubMed

    Poulin, Éric; Lebel, Réjean; Croteau, Étienne; Blanchette, Marie; Tremblay, Luc; Lecomte, Roger; Bentourkia, M'hamed; Lepage, Martin

    2015-02-01

    The combination of MRI and positron emission tomography (PET) offers new possibilities for the development of novel methodologies. In pharmacokinetic image analysis, the blood concentration of the imaging compound as a function of time [i.e., the arterial input function (AIF)] is required for both MRI and PET. In this study, we tested whether an AIF extracted from a reference region (RR) in MRI can be used as a surrogate for the manually sampled (18)F-FDG AIF for pharmacokinetic modeling. An MRI contrast agent, gadolinium-diethylenetriaminepentaacetic acid (Gd-DTPA), and a radiotracer, (18)F-fluorodeoxyglucose ((18)F-FDG), were simultaneously injected in a F98 glioblastoma rat model. A correction to the RR AIF for Gd-DTPA is proposed to adequately represent the manually sampled AIF. A previously published conversion method was applied to convert this AIF into an (18)F-FDG AIF. The tumor metabolic rate of glucose (TMRGlc) calculated with the manually sampled (18)F-FDG AIF, the (18)F-FDG AIF converted from the RR AIF, and the (18)F-FDG AIF converted from the corrected RR AIF were not found to be statistically different (P > 0.05). An AIF derived from an RR in MRI can be accurately converted into an (18)F-FDG AIF and used in PET pharmacokinetic modeling. © 2014 Wiley Periodicals, Inc.

  7. Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms

    NASA Technical Reports Server (NTRS)

    Kurdila, Andrew J.; Sharpley, Robert C.

    1999-01-01

    This paper presents a final report on Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms. The focus of this research is to derive and implement: 1) wavelet-based methodologies for the compression, transmission, decoding, and visualization of three-dimensional finite element geometry and simulation data in a network environment; 2) methodologies for interactive algorithm monitoring and tracking in computational mechanics; and 3) methodologies for interactive algorithm steering for the acceleration of large-scale finite element simulations. Also included in this report are appendices describing the derivation of wavelet-based Particle Image Velocimetry algorithms and reduced-order input-output models for nonlinear systems obtained by utilizing wavelet approximations.

  8. Convergence and objective functions of some fault/noise-injection-based online learning algorithms for RBF networks.

    PubMed

    Ho, Kevin I-J; Leung, Chi-Sing; Sum, John

    2010-06-01

    In the last two decades, many online fault/noise injection algorithms have been developed to attain a fault tolerant neural network. However, little theoretical work related to their convergence and objective functions has been reported. This paper studies six common fault/noise-injection-based online learning algorithms for radial basis function (RBF) networks, namely 1) injecting additive input noise, 2) injecting additive/multiplicative weight noise, 3) injecting multiplicative node noise, 4) injecting multiweight fault (random disconnection of weights), 5) injecting multinode fault during training, and 6) weight decay with injecting multinode fault. Based on the Gladyshev theorem, we show that the convergence of these six online algorithms is almost sure. Moreover, their true objective functions being minimized are derived. For injecting additive input noise during training, the objective function is identical to that of the Tikhonov regularizer approach. For injecting additive/multiplicative weight noise during training, the objective function is the simple mean square training error. Thus, injecting additive/multiplicative weight noise during training cannot improve the fault tolerance of an RBF network. Similar to injecting additive input noise, the objective functions of other fault/noise-injection-based online algorithms contain a mean square error term and a specialized regularization term.
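The equivalence between additive input-noise injection and Tikhonov regularization can be checked numerically in the simplest (linear) case: LMS training with fresh noise of variance σ² injected into each input presentation converges near the ridge solution with penalty nσ². The data, gains and noise level below are illustrative assumptions, and a linear model stands in for the paper's RBF network.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))                    # toy regression inputs
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + 0.1 * rng.normal(size=200)

sigma = 0.3                                      # std of injected additive input noise
w = np.zeros(5)
lr = 0.01
for epoch in range(100):
    for i in rng.permutation(200):
        x_noisy = X[i] + sigma * rng.normal(size=5)   # inject fresh noise each presentation
        w -= lr * (x_noisy @ w - y[i]) * x_noisy      # plain LMS step on the noisy input

# expected objective: summed squared error + n * sigma^2 * ||w||^2,
# i.e. ridge (Tikhonov) regression with lambda = n * sigma^2
lam = 200 * sigma ** 2
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)
gap = np.linalg.norm(w - w_ridge)
```

Up to residual stochastic-gradient jitter, the noise-injected weights land on the Tikhonov solution rather than the least-squares one, which is the linear analogue of the paper's result for additive input noise.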

  9. Determining A Purely Symbolic Transfer Function from Symbol Streams: Theory and Algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Griffin, Christopher H

    Transfer function modeling is a standard technique in classical Linear Time Invariant and Statistical Process Control. The work of Box and Jenkins was seminal in developing methods for identifying parameters associated with classical (r, s, k) transfer functions. Discrete event systems are often used for modeling hybrid control structures and high-level decision problems; examples include discrete-time, discrete-strategy repeated games. For these games, a discrete transfer function in the form of an accurate hidden Markov model of input-output relations could be used to derive optimal response strategies. In this paper, we develop an algorithm for creating probabilistic Mealy machines that act as transfer function models for discrete event dynamic systems (DEDS). Our models are defined by three parameters, (l1, l2, k), just as the Box-Jenkins transfer function models. Here l1 is the maximal input history length to consider, l2 is the maximal output history length to consider, and k is the response lag. Using related results, we show that our Mealy machine transfer functions are optimal in the sense that they maximize the mutual information between the current known state of the DEDS and the next observed input/output pair.
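A count-based sketch of such a symbolic transfer model: estimate P(y_t | last l1 inputs, last l2 outputs) from paired symbol streams by relative frequencies. The state encoding and the toy "echo" system below are assumptions for illustration only; they are not the paper's (l1, l2, k) construction or its mutual-information optimality machinery.

```python
from collections import Counter, defaultdict

def fit_symbolic_transfer(inputs, outputs, l1=2, l2=1):
    """Count-based estimate of P(y_t | last l1 inputs, last l2 outputs)."""
    counts = defaultdict(Counter)
    for t in range(max(l1, l2), len(inputs)):
        state = (tuple(inputs[t - l1:t]), tuple(outputs[t - l2:t]))
        counts[state][outputs[t]] += 1
    return {s: {y: n / sum(c.values()) for y, n in c.items()}
            for s, c in counts.items()}

# toy DEDS: the output echoes the input from two steps earlier
ins = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0]
outs = [0, 0] + ins[:-2]                     # y_t = u_{t-2}
model = fit_symbolic_transfer(ins, outs, l1=2, l2=0)
# each state (u_{t-2}, u_{t-1}) deterministically predicts y_t = u_{t-2}
```

With l1 = 2 the estimated conditional distributions become degenerate (probability 1 on the echoed symbol), i.e. the fitted machine captures the input-output relation exactly; a shorter history would leave the output looking random.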

  10. Validation of the Five-Phase Method for Simulating Complex Fenestration Systems with Radiance against Field Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geisler-Moroder, David; Lee, Eleanor S.; Ward, Gregory J.

    2016-08-29

    The Five-Phase Method (5-pm) for simulating complex fenestration systems with Radiance is validated against field measurements. The capability of the method to predict workplane illuminances, vertical sensor illuminances, and glare indices derived from captured and rendered high dynamic range (HDR) images is investigated. To be able to accurately represent the direct sun part of the daylight not only in sensor point simulations, but also in renderings of interior scenes, the 5-pm calculation procedure was extended. The validation shows that the 5-pm is superior to the Three-Phase Method for predicting horizontal and vertical illuminance sensor values as well as glare indices derived from rendered images. Even with input data from global and diffuse horizontal irradiance measurements only, daylight glare probability (DGP) values can be predicted within 10% error of measured values for most situations.

  11. BOREAS RSS-8 BIOME-BGC SSA Simulation of Annual Water and Carbon Fluxes

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Nickeson, Jaime (Editor); Kimball, John

    2000-01-01

    The BOREAS RSS-8 team performed research to evaluate the effect of seasonal weather and landcover heterogeneity on boreal forest regional water and carbon fluxes using a process-level ecosystem model, BIOME-BGC, coupled with remote sensing-derived parameter maps of key state variables. This data set contains derived maps of landcover type and crown and stem biomass as model inputs to determine annual evapotranspiration, gross primary production, autotrophic respiration, and net primary productivity within the BOREAS SSA-MSA, at a 30-m spatial resolution. Model runs were conducted over a 3-year period from 1994-1996; images are provided for each of those years. The data are stored in binary image format. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).

  12. The use of remote sensing and linear wave theory to model local wave energy around Alphonse Atoll, Seychelles

    NASA Astrophysics Data System (ADS)

    Hamylton, S.

    2011-12-01

    This paper demonstrates a practical step-wise method for modelling wave energy at the landscape scale using GIS and remote sensing techniques at Alphonse Atoll, Seychelles. Inputs are a map of the benthic surface (seabed) cover, a detailed bathymetric model derived from remotely sensed Compact Airborne Spectrographic Imager (CASI) data and information on regional wave heights. Incident energy at the reef crest around the atoll perimeter is calculated as a function of its deepwater value with wave parameters (significant wave height and period) hindcast in the offshore zone using the WaveWatch III application developed by the National Oceanographic and Atmospheric Administration. Energy modifications are calculated at constant intervals as waves transform over the forereef platform along a series of reef profile transects running into the atoll centre. Factors for shoaling, refraction and frictional attenuation are calculated at each interval for given changes in bathymetry and benthic coverage type and a nominal reduction in absolute energy is incorporated at the reef crest to account for wave breaking. Overall energy estimates are derived for a period of 5 years and related to spatial patterning of reef flat surface cover (sand and seagrass patches).
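The shoaling step of such a wave-transformation calculation follows directly from linear wave theory: solve the dispersion relation ω² = gk·tanh(kh) for the wavenumber at the local depth, form the group velocity, and conserve energy flux relative to deep water. A minimal sketch with illustrative wave parameters (the refraction, friction and breaking factors described above are omitted):

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def wavenumber(T, h, iters=50):
    """Solve the linear dispersion relation omega^2 = G k tanh(k h) for k (Newton)."""
    omega = 2 * np.pi / T
    k = omega ** 2 / G                       # deep-water first guess
    for _ in range(iters):
        f = G * k * np.tanh(k * h) - omega ** 2
        df = G * (np.tanh(k * h) + k * h / np.cosh(k * h) ** 2)
        k -= f / df                          # Newton update
    return k

def group_velocity(T, h):
    k = wavenumber(T, h)
    c = 2 * np.pi / (T * k)                  # phase speed omega / k
    n = 0.5 * (1 + 2 * k * h / np.sinh(2 * k * h))
    return n * c

def shoaling_coefficient(T, h):
    """Height ratio H/H0 from conservation of energy flux (no refraction/friction)."""
    cg_deep = G * T / (4 * np.pi)            # deep-water group velocity
    return np.sqrt(cg_deep / group_velocity(T, h))

Ks = shoaling_coefficient(T=10.0, h=5.0)     # 10 s swell over an assumed 5 m deep flat
```

Evaluating this at successive depth intervals along a reef profile, and multiplying in the refraction and friction factors, reproduces the step-wise energy bookkeeping the paper describes.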

  13. Characterization of Modified Tapioca Starch Solutions and Their Sprays for High Temperature Coating Applications

    PubMed Central

    Naz, M. Y.; Sulaiman, S. A.; Ariwahjoedi, B.; Shaari, Ku Zilati Ku

    2014-01-01

    The objective of the research was to understand and improve the unusual physical and atomization properties of the complexes/adhesives derived from tapioca starch by the addition of borate and urea. The characterization of physical properties of the synthesized adhesives was carried out by determining the effect of temperature, shear rate, and mass concentration of thickener/stabilizer on the complex viscosity, density, and surface tension. In a later stage, phenomenological analyses of the spray jet breakup of the heated complexes were performed in still air. Using a high-speed digital camera, the jet breakup dynamics were visualized as a function of the system input parameters. Further analysis of the captured images confirmed the strong influence of the input processing parameters on full-cone spray patternation. It was also predicted that the heated starch adhesive solutions generate a dispersed spray pattern by utilizing the partial evaporation of the spraying medium. Below a heating temperature of 40°C, the radial spray cone width and angle did not vary significantly with increasing Reynolds and Weber numbers at early injection phases, leading to increased macroscopic spray propagation. The discharge coefficient, mean flow rate, and mean flow velocity were significantly influenced by the load pressure but less affected by the temperature. PMID:24592165

  14. Covariant density functional theory: predictive power and first attempts of a microscopic derivation

    NASA Astrophysics Data System (ADS)

    Ring, Peter

    2018-05-01

    We discuss systematic global investigations with modern covariant density functionals. The number of their phenomenological parameters can be reduced considerably by using microscopic input from ab-initio calculations in nuclear matter. The size of the tensor force is still an open problem. Therefore we use the first full relativistic Brueckner-Hartree-Fock calculations in finite nuclear systems in order to study properties of such functionals, which cannot be obtained from nuclear matter calculations.

  15. Photoacoustic image reconstruction from ultrasound post-beamformed B-mode image

    NASA Astrophysics Data System (ADS)

    Zhang, Haichong K.; Guo, Xiaoyu; Kang, Hyun Jae; Boctor, Emad M.

    2016-03-01

    A requirement to reconstruct photoacoustic (PA) images is to have channel data acquisition synchronized with the laser firing. Unfortunately, most clinical ultrasound (US) systems do not offer an interface to obtain synchronized channel data. To broaden the impact of clinical PA imaging, we propose a PA image reconstruction algorithm utilizing the US B-mode image, which is readily available from clinical scanners. A US B-mode image involves a series of signal processing steps: beamforming, followed by envelope detection, ending with log compression. Yet it will be defocused when PA signals are the input, owing to the incorrect delay function. Our approach is to reverse the order of the image processing steps and recover the original US post-beamformed radio-frequency (RF) data, to which a synthetic-aperture-based PA rebeamforming algorithm can then be applied. Taking the B-mode image as the input, we first recovered the US post-beamformed RF data by applying log decompression and convolving an acoustic impulse response to reintroduce carrier frequency information. Then, the US post-beamformed RF data were utilized as pre-beamformed RF data for the adaptive PA beamforming algorithm, and the new delay function was applied, taking into account that the focus depth in US beamforming is at half the depth of the PA case. The feasibility of the proposed method was validated through simulation, and was experimentally demonstrated using an acoustic point source. The point source was successfully beamformed from a US B-mode image, and the full width at half maximum (FWHM) of the point improved by a factor of 3.97. Compared with the ground-truth reconstruction using channel data, the FWHM was slightly degraded, by a factor of 1.28, caused by information loss during envelope detection and convolution of the RF information.
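The log-decompression step of the proposed reversal can be sketched on a single synthetic scan line: build a toy B-mode trace from an RF pulse (envelope detection plus log compression), then undo the compression and remodulate a carrier. The sampling rate, carrier frequency and 60 dB display range are assumptions; the actual impulse-response convolution and the adaptive rebeamforming are not reproduced here.

```python
import numpy as np

def envelope(x):
    """Envelope via the analytic signal (FFT-based Hilbert transform)."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0
    h[n // 2] = 1.0                      # n is even here
    return np.abs(np.fft.ifft(np.fft.fft(x) * h))

fs, fc = 40e6, 5e6                       # assumed sampling rate and carrier frequency
t = np.arange(2048) / fs
rf = np.exp(-((t - 25e-6) ** 2) / (2 * (0.4e-6) ** 2)) * np.sin(2 * np.pi * fc * t)

# forward B-mode chain: beamformed RF -> envelope detection -> log compression
env = envelope(rf)
bmode = 20 * np.log10(np.maximum(env / env.max(), 1e-3))   # 60 dB display range

# reversal sketch: log decompression, then re-insert carrier-frequency information
env_rec = 10 ** (bmode / 20)                               # undo log compression
rf_rec = env_rec * np.sin(2 * np.pi * fc * t)              # remodulate an assumed carrier
```

The recovered RF trace preserves the pulse location and shape up to the dynamic-range clipping and the lost phase, which matches the paper's observation that the reversal is slightly lossy compared with true channel data.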

  16. Sustained synchronized neuronal network activity in a human astrocyte co-culture system

    PubMed Central

    Kuijlaars, Jacobine; Oyelami, Tutu; Diels, Annick; Rohrbacher, Jutta; Versweyveld, Sofie; Meneghello, Giulia; Tuefferd, Marianne; Verstraelen, Peter; Detrez, Jan R.; Verschuuren, Marlies; De Vos, Winnok H.; Meert, Theo; Peeters, Pieter J.; Cik, Miroslav; Nuydens, Rony; Brône, Bert; Verheyen, An

    2016-01-01

    Impaired neuronal network function is a hallmark of neurodevelopmental and neurodegenerative disorders such as autism, schizophrenia, and Alzheimer’s disease and is typically studied using genetically modified cellular and animal models. Weak predictive capacity and poor translational value of these models urge for better human derived in vitro models. The implementation of human induced pluripotent stem cells (hiPSCs) allows studying pathologies in differentiated disease-relevant and patient-derived neuronal cells. However, the differentiation process and growth conditions of hiPSC-derived neurons are non-trivial. In order to study neuronal network formation and (mal)function in a fully humanized system, we have established an in vitro co-culture model of hiPSC-derived cortical neurons and human primary astrocytes that recapitulates neuronal network synchronization and connectivity within three to four weeks after final plating. Live cell calcium imaging, electrophysiology and high content image analyses revealed an increased maturation of network functionality and synchronicity over time for co-cultures compared to neuronal monocultures. The cells express GABAergic and glutamatergic markers and respond to inhibitors of both neurotransmitter pathways in a functional assay. The combination of this co-culture model with quantitative imaging of network morphofunction is amenable to high throughput screening for lead discovery and drug optimization for neurological diseases. PMID:27819315

  17. Biometric Data Safeguarding Technologies Analysis and Best Practices

    DTIC Science & Technology

    2011-12-01

    fuzzy vault” scheme proposed by Juels and Sudan. The scheme was designed to encrypt data such that it could be unlocked by similar but inexact matches... designed transform functions. Multifactor Key Generation Multifactor key generation combines a biometric with one or more other inputs, such as a...cooperative, off-angle iris images.  Since the commercialized system is designed for images acquired from a specific, paired acquisition system

  18. Using connectome-based predictive modeling to predict individual behavior from brain connectivity

    PubMed Central

    Shen, Xilin; Finn, Emily S.; Scheinost, Dustin; Rosenberg, Monica D.; Chun, Marvin M.; Papademetris, Xenophon; Constable, R Todd

    2017-01-01

    Neuroimaging is a fast-developing research area in which anatomical and functional images of human brains are collected using techniques such as functional magnetic resonance imaging (fMRI), diffusion tensor imaging (DTI), and electroencephalography (EEG). Technical advances and large-scale datasets have allowed for the development of models capable of predicting individual differences in traits and behavior using brain connectivity measures derived from neuroimaging data. Here, we present connectome-based predictive modeling (CPM), a data-driven protocol for developing predictive models of brain-behavior relationships from connectivity data using cross-validation. This protocol includes the following steps: 1) feature selection, 2) feature summarization, 3) model building, and 4) assessment of prediction significance. We also include suggestions for visualizing the most predictive features (i.e., brain connections). The final result should be a generalizable model that takes brain connectivity data as input and generates predictions of behavioral measures in novel subjects, accounting for a significant amount of the variance in these measures. It has been demonstrated that the CPM protocol performs equivalently or better than most of the existing approaches in brain-behavior prediction. Moreover, because CPM focuses on linear modeling and a purely data-driven approach, neuroscientists with limited or no experience in machine learning or optimization will find it easy to implement the protocols. Depending on the volume of data to be processed, the protocol can take 10–100 minutes for model building, 1–48 hours for permutation testing, and 10–20 minutes for visualization of results. PMID:28182017

  19. Holographic Associative Memory Employing Phase Conjugation

    NASA Astrophysics Data System (ADS)

    Soffer, B. H.; Marom, E.; Owechko, Y.; Dunning, G.

    1986-12-01

    The principle of information retrieval by association has been suggested as a basis for parallel computing and as the process by which human memory functions.1 Various associative processors have been proposed that use electronic or optical means. Optical schemes,2-7 in particular those based on holographic principles,8 are well suited to associative processing because of their high parallelism and information throughput. Previous workers8 demonstrated that holographically stored images can be recalled by using relatively complicated reference images but did not utilize nonlinear feedback to reduce the large cross talk that results when multiple objects are stored and a partial or distorted input is used for retrieval. These earlier approaches were limited in their ability to reconstruct the output object faithfully from a partial input.

  20. apART: system for the acquisition, processing, archiving, and retrieval of digital images in an open, distributed imaging environment

    NASA Astrophysics Data System (ADS)

    Schneider, Uwe; Strack, Ruediger

    1992-04-01

    apART reflects the structure of an open, distributed environment. According to the general trend in the area of imaging, network-capable, general purpose workstations with capabilities of open system image communication and image input are used. Several heterogeneous components like CCD cameras, slide scanners, and image archives can be accessed. The system is driven by an object-oriented user interface where devices (image sources and destinations), operators (derived from a commercial image processing library), and images (of different data types) are managed and presented uniformly to the user. Browsing mechanisms are used to traverse devices, operators, and images. An audit trail mechanism is offered to record interactive operations on low-resolution image derivatives. These operations are processed off-line on the original image. Thus, the processing of extremely high-resolution raster images is possible, and the performance of resolution dependent operations is enhanced significantly during interaction. An object-oriented database system (APRIL), which can be browsed, is integrated into the system. Attribute retrieval is supported by the user interface. Other essential features of the system include: implementation on top of the X Window System (X11R4) and the OSF/Motif widget set; a SUN4 general purpose workstation, including Ethernet, magneto-optical disc, etc., as the hardware platform for the user interface; complete graphical-interactive parametrization of all operators; support of different image interchange formats (GIF, TIFF, IIF, etc.); consideration of current IPI standard activities within ISO/IEC for further refinement and extensions.

  1. OP-Yield Version 1.00 user's guide

    Treesearch

    Martin W. Ritchie; Jianwei Zhang

    2018-01-01

    OP-Yield is a Microsoft Excel™ spreadsheet with 14 specified user inputs to derive custom yield estimates using the original Oliver and Powers (1978) functions as the foundation. It presents yields for ponderosa pine (Pinus ponderosa Lawson & C. Lawson) plantations in northern California. The basic model forms for dominant and...

  2. Diffusion magnetic resonance imaging: A molecular imaging tool caught between hope, hype and the real world of “personalized oncology”

    PubMed Central

    Mahajan, Abhishek; Deshpande, Sneha S; Thakur, Meenakshi H

    2017-01-01

    “Personalized oncology” is a multi-disciplinary science, which requires inputs from various streams for optimal patient management. Humongous progress in the treatment modalities available and the increasing need to provide functional information in addition to the morphological data has led to leaping progress in the field of imaging. Magnetic resonance imaging has undergone tremendous progress, with various newer MR techniques providing vital functional information, and is becoming the cornerstone of “radiomics/radiogenomics”. Diffusion-weighted imaging is one such technique, which capitalizes on the tendency of water protons to diffuse randomly in a given system. This technique has revolutionized oncological imaging by giving vital qualitative and quantitative information regarding tumor biology, which helps in detection, characterization and post-treatment surveillance of the lesions, challenging the notion that “one size fits all”. It has been applied at various sites with different clinical experience. We hereby present a brief review of this novel functional imaging tool, with its application in “personalized oncology”. PMID:28717412

  3. Multimodal Diffuse Optical Imaging

    NASA Astrophysics Data System (ADS)

    Intes, Xavier; Venugopal, Vivek; Chen, Jin; Azar, Fred S.

    Diffuse optical imaging, particularly diffuse optical tomography (DOT), is an emerging clinical modality capable of providing unique functional information, at a relatively low cost, and with nonionizing radiation. Multimodal diffuse optical imaging has enabled a synergistic combination of functional and anatomical information: the quality of DOT reconstructions has been significantly improved by incorporating the structural information derived by the combined anatomical modality. In this chapter, we will review the basic principles of diffuse optical imaging, including instrumentation and reconstruction algorithm design. We will also discuss the approaches for multimodal imaging strategies that integrate DOI with clinically established modalities. The merit of the multimodal imaging approaches is demonstrated in the context of optical mammography, but the techniques described herein can be translated to other clinical scenarios such as brain functional imaging or muscle functional imaging.

  4. Forecasting tidal marsh elevation and habitat change through fusion of Earth observations and a process model

    USGS Publications Warehouse

    Byrd, Kristin B.; Windham-Myers, Lisamarie; Leeuw, Thomas; Downing, Bryan D.; Morris, James T.; Ferner, Matthew C.

    2016-01-01

    Reducing uncertainty in data inputs at relevant spatial scales can improve tidal marsh forecasting models, and their usefulness in coastal climate change adaptation decisions. The Marsh Equilibrium Model (MEM), a one-dimensional mechanistic elevation model, incorporates feedbacks of organic and inorganic inputs to project elevations under sea-level rise scenarios. We tested the feasibility of deriving two key MEM inputs—average annual suspended sediment concentration (SSC) and aboveground peak biomass—from remote sensing data in order to apply MEM across a broader geographic region. We analyzed the precision and representativeness (spatial distribution) of these remote sensing inputs to improve understanding of our study region, a brackish tidal marsh in San Francisco Bay, and to test the applicable spatial extent for coastal modeling. We compared biomass and SSC models derived from Landsat 8, DigitalGlobe WorldView-2, and hyperspectral airborne imagery. Landsat 8-derived inputs were evaluated in a MEM sensitivity analysis. Biomass models were comparable although peak biomass from Landsat 8 best matched field-measured values. The Portable Remote Imaging Spectrometer SSC model was most accurate, although a Landsat 8 time series provided annual average SSC estimates. Landsat 8-measured peak biomass values were randomly distributed, and annual average SSC (30 mg/L) was well represented in the main channels (IQR: 29–32 mg/L), illustrating the suitability of these inputs across the model domain. Trend response surface analysis identified significant diversion between field and remote sensing-based model runs at 60 yr due to model sensitivity at the marsh edge (80–140 cm NAVD88), although at 100 yr, elevation forecasts differed less than 10 cm across 97% of the marsh surface (150–200 cm NAVD88). 
Results demonstrate the utility of Landsat 8 for landscape-scale tidal marsh elevation projections due to its comparable performance with the other sensors, temporal frequency, and cost. Integration of remote sensing data with MEM should advance regional projections of marsh vegetation change by better parameterizing MEM inputs spatially. Improving information for coastal modeling will support planning for ecosystem services, including habitat, carbon storage, and flood protection.

  5. A second-order frequency-aided digital phase-locked loop for Doppler rate tracking

    NASA Astrophysics Data System (ADS)

    Chie, C. M.

    1980-08-01

    A second-order digital phase-locked loop (DPLL) has a finite lock range which is a function of the frequency of the incoming signal to be tracked. For this reason, it is not capable of tracking an input with Doppler rate for an indefinite period of time. In this correspondence, an analytical expression for the hold-in time is derived. In addition, an all-digital scheme to alleviate this problem is proposed based on the information obtained from estimating the input signal frequency.
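The behaviour described, a second-order loop tracking a Doppler ramp with a bounded, constant steady-state phase error, can be simulated directly. The loop gains and Doppler rate below are assumed values, not the paper's; for this proportional-plus-integral structure the steady-state phase error is R/K2, which the simulation reproduces.

```python
import numpy as np

K1, K2 = 0.2, 0.01          # proportional and integral loop gains (assumed)
R = 1e-3                    # Doppler rate: input frequency grows by R rad/sample per sample
N = 4000

theta_in = 0.5 * R * np.arange(N) ** 2      # phase of a constant-Doppler-rate input
theta_nco = 0.0
integ = 0.0
err = np.empty(N)
for n in range(N):
    e = np.angle(np.exp(1j * (theta_in[n] - theta_nco)))   # wrapped phase error
    integ += K2 * e                                        # second-order (integral) branch
    theta_nco += K1 * e + integ                            # NCO phase update
    err[n] = e

steady_err = err[-500:].mean()              # settles at R / K2 for this loop structure
```

Lock is lost once the required steady-state error R/K2 approaches π, i.e. the tolerable Doppler rate is capped by the integral gain; this is the finite lock-range limitation that motivates the frequency-aiding scheme.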

  6. Imaging Faults in Carbonate Reservoir using Full Waveform Inversion and Reverse Time Migration of Walkaway VSP Data

    NASA Astrophysics Data System (ADS)

    Takam Takougang, E. M.; Bouzidi, Y.

    2016-12-01

    Multi-offset Vertical Seismic Profile (walkaway VSP) data were collected in an oil field located in a shallow water environment dominated by carbonate rocks, offshore the United Arab Emirates. The purpose of the survey was to provide structural information about the reservoir, around and away from the borehole. Five parallel lines were collected using an air gun at a 25 m shot interval and 4 m source depth. A typical recording tool with 20 receivers spaced every 15.1 m, located in a deviated borehole with an angle varying between 0 and 24 degrees from the vertical direction, was used to record the data. The recording tool was deployed at different depths for each line, from 521 m to 2742 m depth. Smaller offsets were used for shallow receivers and larger offsets for deeper receivers. The lines were merged to form the input dataset for waveform tomography. The total length of the combined lines was 9 km, containing 1344 shots and 100 receivers in the borehole located half-way down. Acoustic full waveform inversion was applied in the frequency domain to derive a high resolution velocity model. The final velocity model, derived after inversion using frequencies of 5-40 Hz, showed good correlation with velocities estimated from vertical-incidence VSP and sonic logs, confirming the success of the inversion. The velocity model showed anomalously low values in areas that correlate with the known location of the hydrocarbon reservoir. Pre-stack depth reverse time migration was then applied using the final velocity model from waveform inversion and the up-going wavefield from the input data. The final estimated source signature from waveform inversion was used as the input source for reverse time migration. To save computational memory and time, every third shot was used during reverse time migration and the data were low-pass filtered to 30 Hz. Migration artifacts were attenuated using a second-order derivative filter. 
The final migration image shows a good correlation with the waveform tomography velocity model and highlights a complex network of faults in the reservoir that could be useful in understanding fluid and hydrocarbon movements. This study shows that the combination of full waveform tomography and reverse time migration can provide high resolution images that can enhance interpretation and characterization of oil reservoirs.

  7. Low-rate image coding using vector quantization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makur, A.

    1990-01-01

    This thesis deals with the development and analysis of a computationally simple vector quantization image compression system for coding monochrome images at low bit rate. Vector quantization has been known to be an effective compression scheme when a low bit rate is desirable, but the intensive computation required in a vector quantization encoder has been a handicap in using it for low rate image coding. The present work shows that, without substantially increasing the coder complexity, it is indeed possible to achieve acceptable picture quality while attaining a high compression ratio. Several modifications to the conventional vector quantization coder are proposed in the thesis. These modifications are shown to offer better subjective quality when compared to the basic coder. Distributed blocks are used instead of spatial blocks to construct the input vectors. A class of input-dependent weighted distortion functions is used to incorporate psychovisual characteristics in the distortion measure. Computationally simple filtering techniques are applied to further improve the decoded image quality. Finally, unique designs of the vector quantization coder using electronic neural networks are described, so that the coding delay is reduced considerably.

  8. Digital Biomass Accumulation Using High-Throughput Plant Phenotype Data Analysis.

    PubMed

    Rahaman, Md Matiur; Ahsan, Md Asif; Gillani, Zeeshan; Chen, Ming

    2017-09-01

    Biomass is an important phenotypic trait in functional ecology and growth analysis. The typical methods for measuring biomass are destructive, and they require numerous individuals to be cultivated for repeated measurements. With the advent of image-based high-throughput plant phenotyping facilities, non-destructive biomass measuring methods have attempted to overcome this problem. Thus, the estimation of the biomass of individual plants from their digital images is becoming more important. In this paper, we propose an approach to biomass estimation based on image-derived phenotypic traits. Several image-based biomass studies assume that plant biomass is simply a linear function of the projected plant area in images. Here, however, we modeled plant volume as a function of plant area, plant compactness, and plant age to generalize the linear biomass model. The obtained results confirm the proposed model, which explains most of the observed variance in image-derived biomass estimation. Moreover, only a small difference was observed between actual and estimated digital biomass, which indicates that our proposed approach can estimate digital biomass accurately.
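    The generalized linear biomass model described above (volume as a function of projected area, compactness, and age) can be sketched as an ordinary least-squares fit. The coefficients and synthetic data below are illustrative assumptions, not values from the study.

```python
import numpy as np

# Synthetic "plants": projected area, compactness, and age (illustrative ranges).
rng = np.random.default_rng(0)
n = 200
area = rng.uniform(10.0, 100.0, n)
compactness = rng.uniform(0.2, 0.9, n)
age = rng.uniform(5.0, 40.0, n)

# Assumed ground-truth model: volume = b0 + b1*area + b2*compactness + b3*age + noise.
true_coefs = np.array([1.5, 0.8, 12.0, 0.3])
X = np.column_stack([np.ones(n), area, compactness, age])
volume = X @ true_coefs + rng.normal(0.0, 0.5, n)

# Ordinary least squares recovers the model coefficients from the "measurements".
coefs, *_ = np.linalg.lstsq(X, volume, rcond=None)
```

In practice the area and compactness columns would come from image segmentation rather than random draws; the fitting step is unchanged.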

  9. Precision linear ramp function generator

    DOEpatents

    Jatko, W. Bruce; McNeilly, David R.; Thacker, Louis H.

    1986-01-01

    A ramp function generator is provided which produces a precise, repeatable, and highly stable linear ramp function. A derivative feedback loop is used to stabilize the output of an integrator in the forward loop and control the ramp rate. The ramp may be started from a selected baseline voltage level, and the desired ramp rate is selected by applying an appropriate constant voltage to the input of the integrator.
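    A minimal discrete-time sketch of the behaviour described: an integrator driven by a constant input voltage, with derivative feedback setting the steady ramp rate. The gains, time step, and loop arrangement are illustrative assumptions, not values from the patent.

```python
# Integrator with derivative feedback (illustrative gains, Euler stepping).
Ki, Kd, dt = 10.0, 0.05, 1e-4      # integrator gain, feedback gain, time step (s)
v_in, baseline = 2.0, 1.0          # constant input sets the rate; ramp starts at baseline
y, rate = baseline, 0.0
for _ in range(20000):             # simulate 2 s
    rate = Ki * (v_in - Kd * rate) # integrator input = constant minus derivative feedback
    y += rate * dt

# Steady ramp rate implied by the loop equations: Ki*v_in / (1 + Ki*Kd).
expected_rate = Ki * v_in / (1.0 + Ki * Kd)
```

Changing `v_in` changes only the slope, which is the behaviour the abstract attributes to the constant input voltage.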

  10. A Method Based on Wavelet Transforms for Source Detection in Photon-counting Detector Images. II. Application to ROSAT PSPC Images

    NASA Astrophysics Data System (ADS)

    Damiani, F.; Maggio, A.; Micela, G.; Sciortino, S.

    1997-07-01

    We apply to the specific case of images taken with the ROSAT PSPC detector our wavelet-based X-ray source detection algorithm presented in a companion paper. Such images are characterized by the presence of detector ``ribs,'' strongly varying point-spread function, and vignetting, so that their analysis provides a challenge for any detection algorithm. First, we apply the algorithm to simulated images of a flat background, as seen with the PSPC, in order to calibrate the number of spurious detections as a function of significance threshold and to ascertain that the spatial distribution of spurious detections is uniform, i.e., unaffected by the ribs; this goal was achieved using the exposure map in the detection procedure. Then, we analyze simulations of PSPC images with a realistic number of point sources; the results are used to determine the efficiency of source detection and the accuracy of output quantities such as source count rate, size, and position, upon a comparison with input source data. It turns out that sources with 10 photons or less may be confidently detected near the image center in medium-length (~10⁴ s), background-limited PSPC exposures. The positions of sources detected near the image center (off-axis angles < 15') are accurate to within a few arcseconds. Output count rates and sizes are in agreement with the input quantities, within a factor of 2 in 90% of the cases. The errors on position, count rate, and size increase with off-axis angle and for detections of lower significance. We have also checked that the upper limits computed with our method are consistent with the count rates of undetected input sources. Finally, we have tested the algorithm by applying it on various actual PSPC images, among the most challenging for automated detection procedures (crowded fields, extended sources, and nonuniform diffuse emission). 
The performance of our method in these images is satisfactory and outperforms those of other current X-ray detection techniques, such as those employed to produce the MPE and WGA catalogs of PSPC sources, in terms of both detection reliability and efficiency. We have also investigated the theoretical limit for point-source detection, with the result that even sources with only 2-3 photons may be reliably detected using an efficient method in images with sufficiently high resolution and low background.

  11. A programmable CCD driver circuit for multiphase CCD operation

    NASA Technical Reports Server (NTRS)

    Ewin, Audrey J.; Reed, Kenneth V.

    1989-01-01

    A programmable CCD (charge-coupled device) driver circuit was designed to drive CCDs in multiphased modes. The purpose of the drive electronics is to operate developmental CCD imaging arrays for NASA's tiltable moderate resolution imaging spectrometer (MODIS-T). Several objectives for the driver were considered during its design: (1) the circuit drives CCD electrode voltages between 0 V and +30 V to produce reasonable potential wells; (2) the driving sequence is started with a single input signal; (3) the circuit allows programming of the frame sequences required by arrays of any size; and (4) it produces interfacing signals for the CCD and the DTF (detector test facility). Simulation of the driver verified its function with the master clock running up to 10 MHz. This suggests a maximum rate of 400,000 pixels/s. Timing and packaging parameters were verified. The design uses 54 TTL (transistor-transistor logic) chips. Two versions of hardware were fabricated: wirewrap and printed circuit board. Both were verified functionally with a logic analyzer.

  12. Analysis of digital images into energy-angular momentum modes.

    PubMed

    Vicent, Luis Edgar; Wolf, Kurt Bernardo

    2011-05-01

    The measurement of continuous wave fields by a digital (pixellated) screen of sensors can be used to assess the quality of a beam by finding its formant modes. A generic continuous field F(x, y) sampled at an N × N Cartesian grid of point sensors on a plane yields a matrix of values F(q(x), q(y)), where (q(x), q(y)) are integer coordinates. When the approximate rotational symmetry of the input field is important, one may use the sampled Laguerre-Gauss functions, with radial and angular modes (n, m), to analyze them into their corresponding coefficients F(n, m) of energy and angular momentum (E-AM). The sampled E-AM modes span an N²-dimensional space, but are not orthogonal--except for parity. In this paper, we propose the properly orthonormal "Laguerre-Kravchuk" discrete functions Λ(n, m)(q(x), q(y)) as a convenient basis to analyze the sampled beams into their E-AM polar modes, and with them synthesize the input image exactly.
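    The exact analysis/synthesis property claimed for an orthonormal discrete basis can be illustrated with any orthonormal basis on the N × N grid. The orthonormal DCT-II matrix below merely stands in for the Laguerre-Kravchuk functions, whose actual construction is given in the paper.

```python
import numpy as np

N = 8
k = np.arange(N)
# Orthonormal DCT-II matrix: each row is one basis function sampled on the grid.
phi = np.cos(np.pi * (2.0 * k[None, :] + 1.0) * k[:, None] / (2.0 * N))
phi[0] *= 1.0 / np.sqrt(2.0)
phi *= np.sqrt(2.0 / N)

F = np.random.default_rng(2).normal(size=(N, N))  # sampled field F(qx, qy)
coeffs = phi @ F @ phi.T       # analysis: expansion coefficients
recon = phi.T @ coeffs @ phi   # synthesis: exact reconstruction from coefficients
```

Orthonormality (phi @ phi.T = I) is what makes the synthesis exact, which is the property the Laguerre-Kravchuk functions supply for the E-AM polar modes.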

  13. Image Edge Extraction via Fuzzy Reasoning

    NASA Technical Reports Server (NTRS)

    Dominquez, Jesus A. (Inventor); Klinko, Steve (Inventor)

    2008-01-01

    A computer-based technique for detecting edges in gray-level digital images employs fuzzy reasoning to analyze whether each pixel in an image is likely to lie on an edge. The image is analyzed on a pixel-by-pixel basis by examining the gradient levels of pixels in a square window surrounding the pixel being analyzed. The edge path passing through the pixel with the greatest intensity gradient is used as input to a fuzzy membership function, which employs fuzzy singletons and inference rules to assign a new gray-level value to the pixel that reflects the pixel's degree of edginess.
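    The gradient-to-membership idea can be sketched as follows. A plain gradient magnitude mapped through a sigmoid membership function stands in for the patented window-based edge-path analysis; the steepness and midpoint parameters are illustrative assumptions.

```python
import numpy as np

def fuzzy_edge_map(img, steep=8.0, mid=0.25):
    """Map each pixel's gradient magnitude to a fuzzy 'edginess' degree in [0, 1]."""
    gy, gx = np.gradient(img.astype(float))   # central-difference gradients
    mag = np.hypot(gx, gy)
    if mag.max() > 0:
        mag = mag / mag.max()                 # normalize to [0, 1]
    # Sigmoid fuzzy membership: high membership = strong edge evidence.
    return 1.0 / (1.0 + np.exp(-steep * (mag - mid)))

# A vertical step edge: membership is high along the step, low elsewhere.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = fuzzy_edge_map(img)
```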

  14. Validation of a rapid, semiautomatic image analysis tool for measurement of gastric accommodation and emptying by magnetic resonance imaging

    PubMed Central

    Dixit, Sudeepa; Fox, Mark; Pal, Anupam

    2014-01-01

    Magnetic resonance imaging (MRI) has advantages for the assessment of gastrointestinal structures and functions; however, processing MRI data is time consuming, and this has limited uptake to a few specialist centers. This study introduces a semiautomatic image processing system for rapid analysis of gastrointestinal MRI. For assessment of simpler regions of interest (ROI), such as the stomach, the system generates virtual images along arbitrary planes that intersect the ROI edges in the original images. This generates seed points that are joined automatically to form contours on each adjacent two-dimensional image and reconstructed in three dimensions (3D). An alternative thresholding approach is available for rapid assessment of complex structures like the small intestine. For assessment of dynamic gastrointestinal function, such as gastric accommodation and emptying, the initial 3D reconstruction is used as a reference to process adjacent image stacks automatically. This generates four-dimensional (4D) reconstructions of dynamic volume change over time. Compared with manual processing, this semiautomatic system reduced the user input required to analyze an MRI gastric emptying study (estimated 100 vs. 10,000 mouse clicks). The analysis was not subject to the variation in volume measurements observed between three human observers. In conclusion, the image processing platform presented processes large volumes of MRI data, such as those produced by gastric accommodation and emptying studies, with minimal user input. 3D and 4D reconstructions of the stomach and, potentially, other gastrointestinal organs are produced faster and more accurately than with manual methods. This system will facilitate the application of MRI in gastrointestinal research and clinical practice. PMID:25540229

  15. Neural networks for data compression and invariant image recognition

    NASA Technical Reports Server (NTRS)

    Gardner, Sheldon

    1989-01-01

    An approach to invariant image recognition (I2R), based upon a model of biological vision in the mammalian visual system (MVS), is described. The complete I2R model incorporates several biologically inspired features: exponential mapping of retinal images, Gabor spatial filtering, and a neural network associative memory. In the I2R model, exponentially mapped retinal images are filtered by a hierarchical set of Gabor spatial filters (GSF) which provide compression of the information contained within a pixel-based image. A neural network associative memory (AM) is used to process the GSF-coded images. We describe a 1-D shape function method for coding scale- and rotation-invariant shape information. This method reduces image shape information to a periodic waveform suitable for coding as an input vector to a neural network AM. The shape function method is suitable for near-term applications on conventional computing architectures equipped with VLSI FFT chips to provide a rapid image search capability.

  16. Synthesis of feedback systems with large plant ignorance for prescribed time domain tolerances

    NASA Technical Reports Server (NTRS)

    Horowitz, I. M.; Sidi, M.

    1971-01-01

    A minimum-phase plant transfer function is given, with prescribed bounds on its parameter values. The plant is embedded in a two-degree-of-freedom feedback system, which is to be designed such that the system time response to a deterministic input lies within specified boundaries. Subject to the above, the design should minimize the effect of sensor noise at the input to the plant. This report presents a design procedure for this purpose, based on frequency-response concepts. The time-domain tolerances are translated into equivalent frequency-response tolerances. The latter lead to bounds on the loop transmission function in the form of continuous curves on the Nichols chart. The properties of the loop transmission function which satisfy these bounds with minimum effect of sensor noise are derived.

  17. Grafting polyethylenimine with quinoline derivatives for targeted imaging of intracellular Zn(2+) and logic gate operations.

    PubMed

    Pan, Yi; Shi, Yupeng; Chen, Junying; Wong, Chap-Mo; Zhang, Heng; Li, Mei-Jin; Li, Cheuk-Wing; Yi, Changqing

    2016-12-01

    In this study, a highly sensitive and selective fluorescent Zn(2+) probe exhibiting excellent biocompatibility, water solubility, and cell-membrane permeability was facilely synthesized in a single step by grafting polyethyleneimine (PEI) with quinoline derivatives. The primary amino groups in the branched PEI increase the water solubility and cell permeability of the probe PEIQ, while the quinoline derivatives specifically recognize Zn(2+) and reduce the potential cytotoxicity of PEI. Based on a fluorescence off-on mechanism, PEIQ demonstrated excellent sensing capability towards Zn(2+) in fully aqueous solution, achieving a high sensitivity, with a detection limit as low as 38.1 nM, and a high selectivity over competing metal ions and potentially interfering amino acids. Building on these results, elementary logic operations (YES, NOT and INHIBIT) were constructed by employing PEIQ as the gate and Zn(2+) and EDTA as chemical inputs. Together with its low cytotoxicity and good cell permeability, the practical application of PEIQ in living-cell imaging was satisfactorily demonstrated, underlining its broad applicability in fundamental biology research. Copyright © 2016. Published by Elsevier B.V.
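    The molecular logic described, where Zn(2+) switches fluorescence on and EDTA chelation switches it off, reduces to the following Boolean abstraction. The function names are ours, not the paper's.

```python
def yes_gate(zn):
    """YES: fluorescence output follows the Zn2+ input."""
    return zn

def not_gate(edta):
    """NOT: EDTA quenches a Zn2+-preloaded probe."""
    return not edta

def inhibit_gate(zn, edta):
    """INHIBIT: output only when Zn2+ is present and EDTA is absent."""
    return zn and not edta
```

The INHIBIT gate captures the key experiment: adding EDTA to a Zn(2+)-bound probe extinguishes the fluorescence regardless of the Zn(2+) input.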

  18. Automatic SAR/optical cross-matching for GCP monograph generation

    NASA Astrophysics Data System (ADS)

    Nutricato, Raffaele; Morea, Alberto; Nitti, Davide Oscar; La Mantia, Claudio; Agrimano, Luigi; Samarelli, Sergio; Chiaradia, Maria Teresa

    2016-10-01

    Ground Control Points (GCP), automatically extracted from Synthetic Aperture Radar (SAR) images through 3D stereo analysis, can be effectively exploited for the automatic orthorectification of optical imagery, provided they can be robustly located in the basic optical images. The present study outlines a SAR/optical cross-matching procedure that allows a robust alignment of radar and optical images and, consequently, automatic derivation of the corresponding sub-pixel position of the GCPs in the input optical image, expressed as fractional pixel/line image coordinates. The cross-matching is performed in two successive steps, in order to gradually reach a better precision. The first step is based on Mutual Information (MI) maximization between optical and SAR chips, while the second uses the Normalized Cross-Correlation as similarity metric. This work outlines the designed algorithmic solution and discusses results derived over the urban area of Pisa (Italy), where more than ten COSMO-SkyMed Enhanced Spotlight stereo images with different beams and passes are available. The experimental analysis involves different satellite images in order to evaluate the performance of the algorithm with respect to the optical spatial resolution. An assessment of the algorithm's performance has been carried out, with errors computed by measuring the distance between the GCP pixel/line position in the optical image, as automatically estimated by the tool, and the "true" position of the GCP, visually identified by an expert user in the optical images.
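    The final refinement step can be sketched as a brute-force normalized cross-correlation search. The MI-maximization step, sub-pixel interpolation, and all real imagery details are omitted; the arrays below are synthetic.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    d = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / d) if d > 0 else 0.0

def match_chip(image, chip):
    """Exhaustive NCC search: return the best (row, col) and its score."""
    h, w = chip.shape
    best, best_rc = -2.0, (0, 0)
    for r in range(image.shape[0] - h + 1):
        for c in range(image.shape[1] - w + 1):
            s = ncc(image[r:r + h, c:c + w], chip)
            if s > best:
                best, best_rc = s, (r, c)
    return best_rc, best

rng = np.random.default_rng(3)
image = rng.normal(size=(40, 40))     # stand-in for the optical image
chip = image[12:20, 25:33].copy()     # a known "GCP chip" to relocate
loc, score = match_chip(image, chip)
```

Production implementations compute this in the Fourier domain and interpolate the correlation peak to reach the fractional pixel/line coordinates mentioned above.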

  19. Added clinical value of applying myocardial deformation imaging to assess right ventricular function.

    PubMed

    Sokalskis, Vladislavs; Peluso, Diletta; Jagodzinski, Annika; Sinning, Christoph

    2017-06-01

    Right heart dysfunction has been found to be a strong prognostic factor predicting adverse outcome in various cardiopulmonary diseases. Conventional echocardiographic measurements can be limited by geometrical assumptions and impaired reproducibility. Speckle tracking-derived strain provides a robust quantification of right ventricular function. It explicitly evaluates myocardial deformation, as opposed to tissue Doppler-derived strain, which is computed from tissue velocity gradients. Right ventricular longitudinal strain provides a sensitive tool for detecting right ventricular dysfunction, even at subclinical levels. Moreover, the longitudinal strain can be applied for prognostic stratification of patients with pulmonary hypertension, pulmonary embolism, and congestive heart failure. Speckle tracking-derived right atrial strain, right ventricular longitudinal strain-derived mechanical dyssynchrony, and three-dimensional echocardiography-derived strain are emerging imaging parameters and methods. Their application in research is paving the way for their clinical use. © 2017, Wiley Periodicals, Inc.

  20. Advanced Connectivity Analysis (ACA): a Large Scale Functional Connectivity Data Mining Environment.

    PubMed

    Chen, Rong; Nixon, Erika; Herskovits, Edward

    2016-04-01

    Using resting-state functional magnetic resonance imaging (rs-fMRI) to study functional connectivity is of great importance to understand normal development and function as well as a host of neurological and psychiatric disorders. Seed-based analysis is one of the most widely used rs-fMRI analysis methods. Here we describe a freely available large scale functional connectivity data mining software package called Advanced Connectivity Analysis (ACA). ACA enables large-scale seed-based analysis and brain-behavior analysis. It can seamlessly examine a large number of seed regions with minimal user input. ACA has a brain-behavior analysis component to delineate associations among imaging biomarkers and one or more behavioral variables. We demonstrate applications of ACA to rs-fMRI data sets from a study of autism.

  1. Additivity of nonsimultaneous masking for short Gaussian-shaped sinusoids.

    PubMed

    Laback, Bernhard; Balazs, Peter; Necciari, Thibaud; Savel, Sophie; Ystad, Solvi; Meunier, Sabine; Kronland-Martinet, Richard

    2011-02-01

    The additivity of nonsimultaneous masking was studied using Gaussian-shaped tone pulses (referred to as Gaussians) as masker and target stimuli. Combinations of up to four temporally separated Gaussian maskers with an equivalent rectangular bandwidth of 600 Hz and an equivalent rectangular duration of 1.7 ms were tested. Each masker was level-adjusted to produce approximately 8 dB of masking. Excess masking (exceeding linear additivity) was generally stronger than reported in the literature for longer maskers and comparable target levels. A model incorporating a compressive input/output function, followed by a linear summation stage, underestimated excess masking when using an input/output function derived from literature data for longer maskers and comparable target levels. The data could be predicted with a more compressive input/output function. Stronger compression may be explained by assuming that the Gaussian stimuli were too short to evoke the medial olivocochlear reflex (MOCR), whereas for longer maskers tested previously the MOCR caused reduced compression. Overall, the interpretation of the data suggests strong basilar membrane compression for very short stimuli.
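    The model structure described, a compressive input/output function followed by linear summation of the maskers' internal effects, can be sketched with a power-law compressor. The exponent value is an illustrative assumption, not the paper's fitted parameter.

```python
import math

def combined_masking_db(individual_db, p=0.25):
    """Predicted masking for combined maskers: compress each masker's
    individual masking effect (power-law exponent p), sum the internal
    effects linearly, then map the sum back to dB."""
    total = sum(10.0 ** (p * m / 10.0) for m in individual_db)
    return 10.0 * math.log10(total) / p

# Four maskers, each producing 8 dB of masking alone: compression (p < 1)
# predicts excess masking relative to linear additivity (p = 1).
compressed = combined_masking_db([8.0] * 4, p=0.25)
linear = combined_masking_db([8.0] * 4, p=1.0)
```

A single masker is predicted unchanged for any p, while stronger compression (smaller p) inflates the combined prediction, which is the direction of the excess masking reported above.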

  2. Application of Model Based Parameter Estimation for Fast Frequency Response Calculations of Input Characteristics of Cavity-Backed Aperture Antennas Using Hybrid FEM/MoM Technique

    NASA Technical Reports Server (NTRS)

    Reddy, C. J.

    1998-01-01

    Model Based Parameter Estimation (MBPE) is presented in conjunction with the hybrid Finite Element Method (FEM)/Method of Moments (MoM) technique for fast computation of the input characteristics of cavity-backed aperture antennas over a frequency range. The hybrid FEM/MoM technique is used to form an integro-partial-differential equation to compute the electric field distribution of a cavity-backed aperture antenna. In MBPE, the electric field is expanded as a rational function of two polynomials. The coefficients of the rational function are obtained using the frequency derivatives of the integro-partial-differential equation formed by the hybrid FEM/MoM technique. Using the rational function approximation, the electric field is obtained over a frequency range. Using the electric field at different frequencies, the input characteristics of the antenna are obtained over a wide frequency range. Numerical results for an open coaxial line, a probe-fed coaxial cavity, and cavity-backed microstrip patch antennas are presented. Good agreement between MBPE and solutions computed at individual frequencies is observed.
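    The rational-function step at the heart of MBPE can be illustrated with a Padé approximant built from derivative (Taylor) coefficients. Here exp(x) stands in for the field's frequency dependence, and the polynomial orders are arbitrary; this is a sketch of the expansion idea, not the FEM/MoM formulation.

```python
import numpy as np
from math import exp, factorial

def pade_coeffs(c, L, M):
    """[L/M] Pade numerator/denominator from Taylor coefficients c[0..L+M]."""
    A = np.array([[c[L + k - j] if L + k - j >= 0 else 0.0
                   for j in range(1, M + 1)]
                  for k in range(1, M + 1)])
    rhs = -np.array([c[L + k] for k in range(1, M + 1)])
    b = np.concatenate([[1.0], np.linalg.solve(A, rhs)])    # denominator, b0 = 1
    a = np.array([sum(b[j] * c[i - j] for j in range(min(i, M) + 1))
                  for i in range(L + 1)])                    # numerator
    return a, b

# Taylor coefficients of exp(x) stand in for the frequency derivatives.
c = [1.0 / factorial(k) for k in range(7)]
a, b = pade_coeffs(c, 3, 3)          # [3/3] rational approximation
x = 1.5
approx = np.polyval(a[::-1], x) / np.polyval(b[::-1], x)
```

As in MBPE, one set of derivatives at a single expansion point yields an approximation that remains accurate well away from that point, which is what allows a wide frequency sweep from a single solve.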

  3. Method and apparatus for calibrating a tiled display

    NASA Technical Reports Server (NTRS)

    Chen, Chung-Jen (Inventor); Johnson, Michael J. (Inventor); Chandrasekhar, Rajesh (Inventor)

    2001-01-01

    A display system that can be calibrated and re-calibrated with a minimal amount of manual intervention. To accomplish this, one or more cameras are provided to capture an image of the display screen. The resulting captured image is processed to identify any non-desirable characteristics, including visible artifacts such as seams, bands, rings, etc. Once the non-desirable characteristics are identified, an appropriate transformation function is determined. The transformation function is used to pre-warp the input video signal that is provided to the display such that the non-desirable characteristics are reduced or eliminated from the display. The transformation function preferably compensates for spatial non-uniformity, color non-uniformity, luminance non-uniformity, and other visible artifacts.

  4. GLACiAR: GaLAxy survey Completeness AlgoRithm

    NASA Astrophysics Data System (ADS)

    Carrasco, Daniela; Trenti, Michele; Mutch, Simon; Oesch, Pascal

    2018-05-01

    GLACiAR (GaLAxy survey Completeness AlgoRithm) estimates the completeness and selection functions in galaxy surveys. Tailored for multiband imaging surveys aimed at searching for high-redshift galaxies through the Lyman Break technique, the code can nevertheless be applied broadly. GLACiAR generates artificial galaxies that follow Sérsic profiles with different indexes and with customizable size, redshift and spectral energy distribution properties, adds them to input images, and measures the recovery rate.
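    The injection-recovery principle behind completeness estimation can be sketched with a toy one-pixel experiment: inject sources of a given flux into noise, count how many are "detected" above a significance threshold. This loop is illustrative only and is not the GLACiAR API (which injects full Sérsic profiles into real survey images).

```python
import numpy as np

rng = np.random.default_rng(1)

def completeness(flux, n_trials=500, noise_sigma=1.0, threshold=5.0):
    """Fraction of injected sources recovered above threshold*sigma
    (toy injection-recovery loop; parameters are illustrative)."""
    recovered = 0
    for _ in range(n_trials):
        measured = flux + rng.normal(0.0, noise_sigma)  # source + background noise
        recovered += measured > threshold * noise_sigma  # simple detection criterion
    return recovered / n_trials
```

Sweeping `flux` traces the completeness curve: near zero for faint sources, ~50% where flux equals the threshold, and near unity for bright sources.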

  5. Measurements of striae in CR+ doped YAG laser crystals

    NASA Astrophysics Data System (ADS)

    Cady, Fredrick M.

    1994-12-01

    Striations in Czochralski (CZ) grown crystals have been observed in materials such as GaAs, silicon, photorefractive crystals used for data storage, potassium titanyl phosphate crystals and LiNbO3. Several techniques have been used for investigating these defects, including electron microscopy, laser scanning tomography, selective photoetching, X-ray diffuse scattering, interference orthoscopy, laser interferometry and micro-Fourier transform infrared spectroscopy mapping. A 2 mm thick sample of the material to be investigated is illuminated with light that is absorbed and not absorbed by the ion concentration to be observed. The back surface of the sample is focused onto a solid-state image detector, and images of the input beam and absorbed (and diffracted) beams are captured at two wavelengths. The variation of the coefficient of absorption as a function of distance on the sample can be derived from these measurements. A Big Sky Software Beamcode system is used to capture and display images. Software has been written to convert the Beamcode data files to a format that can be imported into a spreadsheet program such as Quattro Pro. The spreadsheet is then used to manipulate and display data. A model of the intensity map of the striae collected by the imaging system has been proposed and a data analysis procedure derived. From this, the variability of the attenuation coefficient alpha can be generated. Preliminary results show that alpha may vary by a factor of four or five over distances of 100 µm. Potential errors and problems have been discovered, and additional experiments and improvements to the experimental setup are in progress; we must now show that the measurement techniques and data analysis procedures provide 'real' information. Striae are clearly visible at all wavelengths, including white light. Their basic spatial frequency does not change radically, at least when changing from blue to green to white light. 
Further experimental and theoretical work can be done to improve the data collection techniques and to verify the data analysis procedures.
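    The recovery of the absorption coefficient from the captured input and transmitted images follows Beer-Lambert attenuation. This sketch assumes uniform illumination and uses the 2 mm sample thickness stated above; the arrays are synthetic stand-ins for the Beamcode images.

```python
import numpy as np

def absorption_map(i_in, i_out, d=0.002):
    """Absorption coefficient alpha (1/m) per pixel from input and
    transmitted intensity images of a sample of thickness d (m),
    via Beer-Lambert: I_out = I_in * exp(-alpha * d)."""
    return -np.log(i_out / i_in) / d

# Synthetic check: a known uniform alpha should be recovered exactly.
alpha_true = 50.0
i_in = np.full((4, 4), 100.0)
i_out = i_in * np.exp(-alpha_true * 0.002)
alpha = absorption_map(i_in, i_out)
```

With real striae the recovered map would vary spatially, giving the alpha-versus-distance profiles discussed above.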

  6. Fast template matching with polynomials.

    PubMed

    Omachi, Shinichiro; Omachi, Masako

    2007-08-01

    Template matching is widely used for many applications in image and signal processing. This paper proposes a novel template matching algorithm, called algebraic template matching. Given a template and an input image, algebraic template matching efficiently calculates similarities between the template and the partial images of the input image, for various widths and heights. The partial image most similar to the template image is detected from the input image for any location, width, and height. In the proposed algorithm, a polynomial that approximates the template image is used to match the input image instead of the template image itself. The proposed algorithm is especially effective when the width and height of the template image differ from those of the partial image to be matched. An algorithm using the Legendre polynomial is proposed for efficient approximation of the template image. This algorithm not only reduces computational costs, but also improves the quality of the approximated image. It is shown theoretically and experimentally that the computational cost of the proposed algorithm is much smaller than that of existing methods.
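    The role of the Legendre approximation can be sketched in 1-D: once a template is fitted by a low-order Legendre expansion, it can be evaluated at any width without resampling the raw pixels. The degree and signals below are illustrative; the paper's 2-D algebra and matching cost analysis are not reproduced.

```python
import numpy as np
from numpy.polynomial import legendre

template = np.sin(np.linspace(0.0, np.pi, 32))   # a 1-D "template image"
x = np.linspace(-1.0, 1.0, template.size)
coef = legendre.legfit(x, template, deg=6)        # Legendre approximation

def resampled(width):
    """Evaluate the fitted template at an arbitrary width (no pixel resampling)."""
    return legendre.legval(np.linspace(-1.0, 1.0, width), coef)
```

Matching a candidate patch of any width then compares the patch against `resampled(width)`, which is what makes variable-size search cheap.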

  7. Multi-scale pixel-based image fusion using multivariate empirical mode decomposition.

    PubMed

    Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P; McDonald-Maier, Klaus D

    2015-05-08

    A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences.
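    After decomposition, the pixel-level fusion itself reduces to a simple per-scale rule. Assuming the per-image multiscale components (e.g., matched IMFs from MEMD) are already computed and aligned, a common choice, illustrative here and not necessarily the paper's exact rule, keeps the larger-magnitude component at each pixel and scale:

```python
import numpy as np

def fuse(scales_a, scales_b):
    """Fuse two stacks of matched scale components, shape (n_scales, H, W):
    keep the larger-magnitude coefficient per pixel and scale, then sum
    the fused scales to synthesize the output image."""
    fused = np.where(np.abs(scales_a) >= np.abs(scales_b), scales_a, scales_b)
    return fused.sum(axis=0)

# Tiny example: two scales of two 1x2 "images".
a = np.array([[[3.0, 0.0]], [[0.0, 1.0]]])
b = np.array([[[1.0, 0.0]], [[0.0, -2.0]]])
fused_img = fuse(a, b)
```

The mode-alignment property of MEMD is what licenses this comparison: the same-indexed components of the two inputs carry the same frequency scale, so a coefficient-wise rule is meaningful.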

  8. Multi-Scale Pixel-Based Image Fusion Using Multivariate Empirical Mode Decomposition

    PubMed Central

    Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P.; McDonald-Maier, Klaus D.

    2015-01-01

    A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences. PMID:26007714

  9. An imaging-based stochastic model for simulation of tumour vasculature

    NASA Astrophysics Data System (ADS)

    Adhikarla, Vikram; Jeraj, Robert

    2012-10-01

    A mathematical model which reconstructs the structure of existing vasculature using patient-specific anatomical, functional and molecular imaging as input was developed. The vessel structure is modelled according to empirical vascular parameters, such as the mean vessel branching angle. The model is calibrated such that the resultant oxygen map modelled from the simulated microvasculature stochastically matches the input oxygen map to a high degree of accuracy (R2 ≈ 1). The calibrated model was successfully applied to preclinical imaging data. Starting from the anatomical vasculature image (obtained from contrast-enhanced computed tomography), a representative map of the complete vasculature was stochastically simulated as determined by the oxygen map (obtained from hypoxia [64Cu]Cu-ATSM positron emission tomography). The simulated microscopic vasculature and the calculated oxygenation map successfully represent the imaged hypoxia distribution (R2 = 0.94). The model elicits the parameters required to simulate vasculature consistent with imaging and provides a key mathematical relationship relating the vessel volume to the tissue oxygen tension. Apart from providing an excellent framework for visualizing the gap between microscopic and macroscopic imaging, the model has the potential to be extended as a tool to study the dynamics between the tumour and the vasculature in a patient-specific manner and has an application in the simulation of anti-angiogenic therapies.

  10. A new analytical method for estimating lumped parameter constants of linear viscoelastic models from strain rate tests

    NASA Astrophysics Data System (ADS)

    Mattei, G.; Ahluwalia, A.

    2018-04-01

    We introduce a new function, the apparent elastic modulus strain-rate spectrum E_app(ε̇), for deriving the lumped parameter constants of Generalized Maxwell (GM) linear viscoelastic models from stress-strain data obtained at various compressive strain rates ε̇. The E_app(ε̇) function was derived from the tangent modulus function obtained from the GM model stress-strain response to a constant-ε̇ input. Material viscoelastic parameters can be rapidly derived by fitting experimental E_app data obtained at different strain rates to the E_app(ε̇) function. This single-curve fitting returns viscoelastic constants similar to those of the original epsilon-dot method, which is based on a multi-curve global fitting procedure with shared parameters. Its low computational cost permits quick and robust identification of viscoelastic constants even when a large number of strain rates or replicates per strain rate is considered. This method is particularly suited to the analysis of bulk compression and nano-indentation data of soft (bio)materials.
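
    As a sketch of the single-curve fitting idea, the following assumes a one-arm Generalized Maxwell solid and reads the apparent (secant) modulus at a fixed strain ε* = 0.1; the published E_app(ε̇) spectrum may carry more Maxwell arms and a different readout strain:

```python
import numpy as np
from scipy.optimize import curve_fit

EPS_STAR = 0.1   # fixed strain at which the apparent modulus is read out

def e_app(rate, e_inf, e1, tau):
    """Apparent (secant) modulus vs. strain rate for a one-arm GM solid
    loaded at constant strain rate: sigma(eps*)/eps*. Tends to e_inf at
    slow rates and to e_inf + e1 at fast rates, as expected."""
    x = rate * tau / EPS_STAR
    return e_inf + e1 * x * (1.0 - np.exp(-1.0 / x))

# synthetic "experiment": E_inf = 5, E1 = 20 (kPa), tau = 0.5 s
rates = np.logspace(-3, 2, 30)
data = e_app(rates, 5.0, 20.0, 0.5)

# single-curve fit over all strain rates at once
popt, _ = curve_fit(e_app, rates, data, p0=(1.0, 10.0, 0.1),
                    bounds=(1e-6, np.inf))
```

    The fit recovers the three constants from one curve instead of globally fitting every stress-strain trace.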

  11. TIA Software User's Manual

    NASA Technical Reports Server (NTRS)

    Cramer, K. Elliott; Syed, Hazari I.

    1995-01-01

    This user's manual describes the installation and operation of TIA, the Thermal-Imaging acquisition and processing Application, developed by the Nondestructive Evaluation Sciences Branch at NASA Langley Research Center, Hampton, Virginia. TIA is a user-friendly graphical interface application for the Macintosh II and higher series computers. The software has been developed to interface with the Perceptics/Westinghouse Pixelpipe(TM) and PixelStore(TM) NuBus cards and the GW Instruments MacADIOS(TM) input-output (I/O) card for the Macintosh for imaging thermal data. The software is also capable of performing generic image-processing functions.

  12. Description of the IV + V System Software Package.

    ERIC Educational Resources Information Center

    Microcomputers for Information Management: An International Journal for Library and Information Services, 1984

    1984-01-01

    Describes the IV + V System, a software package designed by the Institut fur Maschinelle Dokumentation for the United Nations General Information Programme and UNISIST to support automation of local information and documentation services. Principal program features and functions outlined include input/output, databank, text image, output, and…

  13. Analytical model for advective-dispersive transport involving flexible boundary inputs, initial distributions and zero-order productions

    NASA Astrophysics Data System (ADS)

    Chen, Jui-Sheng; Li, Loretta Y.; Lai, Keng-Hsin; Liang, Ching-Ping

    2017-11-01

    A novel solution method is presented which leads to an analytical model for advective-dispersive transport in a semi-infinite domain involving a wide spectrum of boundary inputs, initial distributions and zero-order productions. The solution method applies the Laplace transform in combination with the generalized integral transform technique (GITT) to obtain the generalized analytical solution. Based on this generalized analytical expression, we derive a comprehensive set of special-case solutions for some time-dependent boundary distributions and zero-order productions, described by the Dirac delta, constant, Heaviside, exponentially-decaying or periodically sinusoidal functions, as well as some position-dependent initial conditions and zero-order productions specified by the Dirac delta, constant, Heaviside or exponentially-decaying functions. The developed solutions are tested against an analytical solution from the literature. The excellent agreement between the analytical solutions confirms that the new model can serve as an effective tool for investigating transport behaviors under different scenarios. Several application examples are given to explore transport behaviors which are rarely noted in the literature. The results show that the concentration waves resulting from the periodically sinusoidal input are sensitive to the dispersion coefficient. The implication of this new finding is that a tracer test with a periodic input may provide additional information for identifying the dispersion coefficient. Moreover, the solution strategy presented in this study can be extended to derive analytical models for handling more complicated problems of solute transport in multi-dimensional media subject to sequential decay chain reactions, for which analytical solutions are not currently available.
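
    For one classical special case covered by such models (constant Heaviside boundary input, zero initial concentration, no zero-order production), the closed-form Ogata-Banks solution offers a quick numerical check; parameter values below are illustrative:

```python
import numpy as np
from scipy.special import erfc

def ogata_banks(x, t, v=1.0, D=0.1, c0=1.0):
    """Advection-dispersion in a semi-infinite column with a constant
    (Heaviside) inlet concentration c0, zero initial condition and no
    production: the classic Ogata-Banks closed form."""
    a = 2.0 * np.sqrt(D * t)
    return 0.5 * c0 * (erfc((x - v * t) / a)
                       + np.exp(v * x / D) * erfc((x + v * t) / a))

x = np.linspace(0.0, 5.0, 200)
c = ogata_banks(x, t=2.0)       # concentration profile at t = 2
```

    At the inlet the boundary value c0 is recovered exactly (erfc(-z) + erfc(z) = 2), and the profile decays monotonically downstream.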

  14. Fractional cable model for signal conduction in spiny neuronal dendrites

    NASA Astrophysics Data System (ADS)

    Vitali, Silvia; Mainardi, Francesco

    2017-06-01

    The cable model is widely used in several fields of science to describe the propagation of signals. A relevant medical and biological example is the anomalous subdiffusion in spiny neuronal dendrites observed in several studies of the last decade. Anomalous subdiffusion can be modelled in several ways by introducing a fractional component into the classical cable model. The Cauchy problem associated with these kinds of models has been investigated by many authors, but to our knowledge an explicit solution for the signalling problem has not yet been published. Here we show how this solution can be derived by applying the generalized convolution theorem (known as the Efros theorem) for Laplace transforms. The fractional cable model considered in this paper is defined by replacing the first-order time derivative with a fractional derivative of order α ∈ (0, 1) of Caputo type. The signalling problem is solved for any input function applied to the accessible end of a semi-infinite cable which satisfies the requirements of the Efros theorem. The solutions corresponding to the simple cases of impulsive and step inputs are explicitly calculated in integral form containing Wright functions. Thanks to the variability of the parameter α, the corresponding solutions are expected to adapt to the qualitative behaviour of the membrane potential observed in experiments better than in the standard case α = 1.
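
    In the standard (α = 1) limit, the step-input signalling problem has the classical closed-form solution below, written in units of the membrane length and time constants; the fractional (α < 1) solutions involve Wright functions and are not reproduced here:

```python
import numpy as np
from scipy.special import erfc

def cable_step(x, t, v0=1.0):
    """Signalling (step-input) solution of the standard cable equation,
    the alpha = 1 limit of the fractional model, in dimensionless
    units:  V_t = V_xx - V,  V(0, t) = v0,  V(x, 0) = 0."""
    s = np.sqrt(t)
    return 0.5 * v0 * (np.exp(-x) * erfc(x / (2 * s) - s)
                       + np.exp(x) * erfc(x / (2 * s) + s))

x = np.linspace(0.0, 3.0, 100)
v = cable_step(x, t=5.0)        # voltage profile along the cable
```

    The boundary value is held exactly at v0, and for large t the profile relaxes to the steady state v0·exp(-x), the familiar electrotonic decay.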

  15. High Spatiotemporal Resolution Dynamic Contrast-Enhanced MR Enterography in Crohn Disease Terminal Ileitis Using Continuous Golden-Angle Radial Sampling, Compressed Sensing, and Parallel Imaging.

    PubMed

    Ream, Justin M; Doshi, Ankur; Lala, Shailee V; Kim, Sooah; Rusinek, Henry; Chandarana, Hersh

    2015-06-01

    The purpose of this article was to assess the feasibility of golden-angle radial acquisition with compressed sensing reconstruction (Golden-angle RAdial Sparse Parallel [GRASP]) for acquiring high temporal resolution data for pharmacokinetic modeling while maintaining high image quality in patients with Crohn disease terminal ileitis. Fourteen patients with biopsy-proven Crohn terminal ileitis were scanned using both contrast-enhanced GRASP and Cartesian breath-hold (volume-interpolated breath-hold examination [VIBE]) acquisitions. GRASP data were reconstructed with 2.4-second temporal resolution and fitted to the generalized kinetic model using an individualized arterial input function to derive the volume transfer coefficient (K(trans)) and interstitial volume (v(e)). Reconstructions, including data from the entire GRASP acquisition and Cartesian VIBE acquisitions, were rated for image quality, artifact, and detection of typical Crohn ileitis features. Inflamed loops of ileum had significantly higher K(trans) (3.36 ± 2.49 vs 0.86 ± 0.49 min(-1), p < 0.005) and v(e) (0.53 ± 0.15 vs 0.20 ± 0.11, p < 0.005) compared with normal bowel loops. There were no significant differences between GRASP and Cartesian VIBE for overall image quality (p = 0.180) or detection of Crohn ileitis features, although streak artifact was worse with the GRASP acquisition (p = 0.001). High temporal resolution data for pharmacokinetic modeling and high spatial resolution data for morphologic image analysis can be achieved in the same acquisition using GRASP.
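
    The pharmacokinetic fitting rests on the generalized kinetic (Tofts) model, which can be sketched as a discrete convolution; the AIF below is a hypothetical bolus shape, not a measured individualized input function:

```python
import numpy as np

def tofts(cp, dt, ktrans, ve):
    """Standard Tofts model: tissue curve Ct(t) as the convolution of
    the arterial input cp (sampled every dt minutes) with the residue
    kernel ktrans * exp(-(ktrans/ve) * t). ktrans in 1/min."""
    t = np.arange(len(cp)) * dt
    kernel = ktrans * np.exp(-(ktrans / ve) * t)
    return np.convolve(cp, kernel)[: len(cp)] * dt

dt = 2.4 / 60.0                       # 2.4-s frames, in minutes
t = np.arange(0, 5, dt)               # 5-min acquisition
cp = (t / 0.5) * np.exp(1 - t / 0.5)  # hypothetical bolus-shaped AIF
ct = tofts(cp, dt, ktrans=3.36, ve=0.53)   # "inflamed bowel" values
```

    With a constant input the tissue curve plateaus at ve times the plasma level, a useful sanity check on the kernel normalization.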

  16. Imaging light responses of foveal ganglion cells in the living macaque eye.

    PubMed

    Yin, Lu; Masella, Benjamin; Dalkara, Deniz; Zhang, Jie; Flannery, John G; Schaffer, David V; Williams, David R; Merigan, William H

    2014-05-07

    The fovea dominates primate vision, and its anatomy and perceptual abilities are well studied, but its physiology has been little explored because of limitations of current physiological methods. In this study, we adapted a novel in vivo imaging method, originally developed in mouse retina, to explore foveal physiology in the macaque, which permits the repeated imaging of the functional response of many retinal ganglion cells (RGCs) simultaneously. A genetically encoded calcium indicator, G-CaMP5, was inserted into foveal RGCs, followed by calcium imaging of the displacement of foveal RGCs from their receptive fields, and their intensity-response functions. The spatial offset of foveal RGCs from their cone inputs makes this method especially appropriate for fovea by permitting imaging of RGC responses without excessive light adaptation of cones. This new method will permit the tracking of visual development, progression of retinal disease, or therapeutic interventions, such as insertion of visual prostheses.

  17. Marine Mammal Habitat in Ecuador: Seasonal Abundance and Environmental Distribution

    DTIC Science & Technology

    2010-06-01

    derived macronutrients) is enhanced by iron inputs derived from the island platform. The confluence of the Equatorial Undercurrent and Peru Current...is initiated by the subsurface

  18. Performing Repeated Quantitative Small-Animal PET with an Arterial Input Function Is Routinely Feasible in Rats.

    PubMed

    Huang, Chi-Cheng; Wu, Chun-Hu; Huang, Ya-Yao; Tzen, Kai-Yuan; Chen, Szu-Fu; Tsai, Miao-Ling; Wu, Hsiao-Ming

    2017-04-01

    Performing quantitative small-animal PET with an arterial input function has been considered technically challenging. Here, we introduce a catheterization procedure that keeps a rat physiologically stable for 1.5 mo. We demonstrated the feasibility of quantitative small-animal 18F-FDG PET in rats by performing it repeatedly to monitor the time course of variations in the cerebral metabolic rate of glucose (CMRglc). Methods: Aseptic surgery was performed on 2 rats. Each rat underwent catheterization of the right femoral artery and left femoral vein. The catheters were sealed with microinjection ports and then implanted subcutaneously. Over the next 3 wk, each rat underwent 18F-FDG quantitative small-animal PET 6 times. The CMRglc of each brain region was calculated using a 3-compartment model and an operational equation that included a k4*. Results: On 6 mornings, we completed 12 18F-FDG quantitative small-animal PET studies on 2 rats. The rats grew steadily before and after the 6 quantitative small-animal PET studies. The CMRglc of the conscious brain (e.g., right parietal region, 99.6 ± 10.2 μmol/100 g/min; n = 6) was comparable to that for 14C-deoxyglucose autoradiographic methods. Conclusion: Maintaining good blood patency in catheterized rats is not difficult. Longitudinal quantitative small-animal PET imaging with an arterial input function can be performed routinely. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
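
    The CMRglc computation follows the Sokoloff-type operational equation; the sketch below omits the k4* term the study included, and the rate constants, plasma glucose level and lumped constant are illustrative values, not the study's:

```python
def cmr_glc(k1, k2, k3, plasma_glucose, lumped_constant=0.48):
    """Cerebral metabolic rate of glucose from FDG rate constants via
    the irreversible 3-compartment operational equation (k4* omitted
    here for brevity). plasma_glucose in umol/mL, rate constants in
    1/min; returns CMRglc in umol/100 g/min."""
    ki = k1 * k3 / (k2 + k3)          # net uptake constant Ki (mL/g/min)
    return 100.0 * ki * plasma_glucose / lumped_constant

# hypothetical rat values: k's in 1/min, plasma glucose in umol/mL
rate = cmr_glc(k1=0.102, k2=0.13, k3=0.062, plasma_glucose=9.0)
```

    The arterial input function enters upstream, in the fitting that produces k1-k3; the operational equation then converts them to a metabolic rate.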

  19. Stylus/tablet user input device for MRI heart wall segmentation: efficiency and ease of use.

    PubMed

    Taslakian, Bedros; Pires, Antonio; Halpern, Dan; Babb, James S; Axel, Leon

    2018-05-02

    To determine whether use of a stylus user input device (UID) would be superior to a mouse for CMR segmentation. Twenty-five consecutive clinical cardiac magnetic resonance (CMR) examinations were selected. Image analysis was independently performed by four observers. Manual tracing of left (LV) and right (RV) ventricular endocardial contours was performed twice in 10 randomly assigned sessions, each session using only one UID. Segmentation time and the ventricular function variables were recorded. The mean segmentation time and time reduction were calculated for each method. Intraclass correlation coefficients (ICC) and Bland-Altman plots of function variables were used to assess intra- and interobserver variability and agreement between methods. Observers completed a Likert-type questionnaire. The mean segmentation time (in seconds) was significantly less with the stylus than with the mouse, averaging 206±108 versus 308±125 (p<0.001) and 225±140 versus 353±162 (p<0.001) for LV and RV segmentation, respectively. The intra- and interobserver agreement rates were excellent (ICC≥0.75) regardless of the UID. There was excellent agreement between measurements derived from manual segmentation using different UIDs (ICC≥0.75), with few exceptions. Observers preferred the stylus. The study shows a significant reduction in segmentation time using the stylus, a subjective preference for it, and excellent agreement between the methods. • Using a stylus for MRI ventricular segmentation is faster than using a mouse • A stylus is easier to use and results in less fatigue • There is excellent agreement between stylus and mouse UIDs.
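
    The Bland-Altman agreement analysis reduces to a bias and 95% limits of agreement on the paired differences; the values below are hypothetical, not the study's measurements:

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman bias and 95% limits of agreement between two sets
    of paired measurements (e.g. a function variable traced with the
    stylus vs. the mouse)."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)     # half-width of the 95% limits
    return bias, bias - loa, bias + loa

# hypothetical paired ejection fractions (%) from the two input devices
stylus = np.array([55.0, 60.2, 48.7, 62.1, 51.3])
mouse = np.array([54.6, 61.0, 48.1, 62.5, 50.8])
bias, lo, hi = bland_altman(stylus, mouse)
```

    Agreement is judged by whether the limits (lo, hi) are narrow enough to be clinically acceptable, not by the bias alone.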

  20. Incorporation of MRI-AIF Information For Improved Kinetic Modelling of Dynamic PET Data

    NASA Astrophysics Data System (ADS)

    Sari, Hasan; Erlandsson, Kjell; Thielemans, Kris; Atkinson, David; Ourselin, Sebastien; Arridge, Simon; Hutton, Brian F.

    2015-06-01

    In the analysis of dynamic PET data, compartmental kinetic analysis methods require accurate knowledge of the arterial input function (AIF). Although arterial blood sampling is the gold standard among the methods used to measure the AIF, it is usually not preferred, as it is invasive. An alternative is the simultaneous estimation method (SIME), in which the physiological parameters and the AIF are estimated together, using information from different anatomical regions. Owing to the large number of parameters to estimate in its optimisation, SIME is a computationally complex method and may sometimes fail to give accurate estimates. In this work, we try to improve SIME by utilising an input function derived from a simultaneously obtained DSC-MRI scan. Under the assumption that the true value of one of the six parameters of the PET-AIF model can be derived from an MRI-AIF, the method is tested using simulated data. The results indicate that SIME can yield more robust results when the MRI information is included, with a significant reduction in the absolute bias of Ki estimates.
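
    A common choice of six-parameter PET-AIF model is Feng's tri-exponential form, sketched below; the parameter values are of the order reported in Feng's original work and are assumed here for illustration, not taken from this study:

```python
import numpy as np

def feng_aif(t, a1, a2, a3, l1, l2, l3):
    """Feng's six-parameter arterial input function model:
    Cp(t) = (a1*t - a2 - a3)*exp(l1*t) + a2*exp(l2*t) + a3*exp(l3*t).
    Fixing one of the six parameters from an MRI-derived AIF shrinks
    the SIME search space by one dimension."""
    return ((a1 * t - a2 - a3) * np.exp(l1 * t)
            + a2 * np.exp(l2 * t) + a3 * np.exp(l3 * t))

t = np.linspace(0, 60, 601)        # minutes
cp = feng_aif(t, 851.1, 21.9, 20.8, -4.1, -0.12, -0.01)  # assumed values
```

    The form guarantees Cp(0) = 0, a sharp bolus peak driven by the a1*t term, and a slow bi-exponential tail.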

  1. Design Of Feedforward Controllers For Multivariable Plants

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun

    1989-01-01

    Controllers based on simple low-order transfer functions. Mathematical criteria derived for design of feedforward controllers for class of multiple-input/multiple-output linear plants. Represented by simple low-order transfer functions, obtained without reconstruction of states of commands and disturbances. Enables plant to track command while remaining unresponsive to disturbance in steady state. Feedback controller added independently to stabilize plant or to make control system less susceptible to variations in parameters of plant.

  2. A SAR Observation and Numerical Study on Ocean Surface Imprints of Atmospheric Vortex Streets.

    PubMed

    Li, Xiaofeng; Zheng, Weizhong; Zou, Cheng-Zhi; Pichel, William G

    2008-05-21

    The sea surface imprints of an Atmospheric Vortex Street (AVS) off the Aleutian volcanic islands, Alaska, were observed in two RADARSAT-1 Synthetic Aperture Radar (SAR) images separated by about 11 hours. In both images, three pairs of distinctive vortices shedding in the lee of two volcanic mountains can be clearly seen. The length and width of the vortex street are about 60-70 km and 20 km, respectively. Although the AVSs in the two SAR images have similar shapes, the structure of vortices within the AVS is highly asymmetrical. The sea surface wind speed is estimated from the SAR images with wind direction input from the Navy NOGAPS model. In this paper we present a complete MM5 model simulation of the observed AVS. The surface wind simulated by the MM5 model is in good agreement with the SAR-derived wind. The vortex shedding period calculated from the model run is about 1 hour and 50 minutes. Other basic characteristics of the AVS, including the propagation speed of the vortices and the Strouhal and Reynolds numbers favorable for AVS generation, are also derived. The wind associated with the AVS modifies the cloud structure in the marine atmospheric boundary layer. The AVS cloud pattern is also observed in a MODIS visible-band image taken between the two RADARSAT SAR images. An ENVISAT Advanced SAR image taken 4 hours after the second RADARSAT SAR image shows that the AVS had almost vanished.
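
    The Strouhal number follows directly from the shedding interval: St = fD/U. The 1 h 50 min interval is the value from the MM5 run; the island diameter and wind speed below are illustrative, not values from the paper:

```python
def strouhal(period_s, diameter_m, wind_speed_ms):
    """Strouhal number St = f*D/U, with shedding frequency f = 1/T,
    obstacle (island) diameter D and ambient wind speed U."""
    return diameter_m / (period_s * wind_speed_ms)

# 1 h 50 min shedding interval; D and U are assumed example values
st = strouhal(period_s=110 * 60, diameter_m=15e3, wind_speed_ms=10.0)
```

    Values near 0.2, the classic bluff-body range, are the regime in which vortex streets are expected to form.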

  3. Novel Descattering Approach for Stereo Vision in Dense Suspended Scatterer Environments

    PubMed Central

    Nguyen, Chanh D. Tr.; Park, Jihyuk; Cho, Kyeong-Yong; Kim, Kyung-Soo; Kim, Soohyun

    2017-01-01

    In this paper, we propose a model-based scattering removal method for stereo vision for robot manipulation in indoor scattering media where the commonly used ranging sensors are unable to work. Stereo vision is an inherently ill-posed and challenging problem. It is even more difficult in the case of images of dense fog or dense steam scenes illuminated by active light sources. Images taken in such environments suffer attenuation of object radiance and scattering of the active light sources. To solve this problem, we first derive the imaging model for images taken in a dense scattering medium with a single active illumination close to the cameras. Based on this physical model, the non-uniform backscattering signal is efficiently removed. The descattered images are then utilized as the input images of stereo vision. The performance of the method is evaluated based on the quality of the depth map from stereo vision. We also demonstrate the effectiveness of the proposed method by carrying out the real robot manipulation task. PMID:28629139

  4. Associative Memory In A Phase Conjugate Resonator Cavity Utilizing A Hologram

    NASA Astrophysics Data System (ADS)

    Owechko, Y.; Marom, E.; Soffer, B. H.; Dunning, G.

    1987-01-01

    The principle of information retrieval by association has been suggested as a basis for parallel computing and as the process by which human memory functions.1 Various associative processors have been proposed that use electronic or optical means. Optical schemes,2-7 in particular, those based on holographic principles,3,6,7 are well suited to associative processing because of their high parallelism and information throughput. Previous workers8 demonstrated that holographically stored images can be recalled by using relatively complicated reference images but did not utilize nonlinear feedback to reduce the large cross talk that results when multiple objects are stored and a partial or distorted input is used for retrieval. These earlier approaches were limited in their ability to reconstruct the output object faithfully from a partial input.

  5. Attention model of binocular rivalry

    PubMed Central

    Rankin, James; Rinzel, John; Carrasco, Marisa; Heeger, David J.

    2017-01-01

    When the corresponding retinal locations in the two eyes are presented with incompatible images, a stable percept gives way to perceptual alternations in which the two images compete for perceptual dominance. As perceptual experience evolves dynamically under constant external inputs, binocular rivalry has been used for studying intrinsic cortical computations and for understanding how the brain regulates competing inputs. Converging behavioral and EEG results have shown that binocular rivalry and attention are intertwined: binocular rivalry ceases when attention is diverted away from the rivalry stimuli. In addition, the competing image in one eye suppresses the target in the other eye through a pattern of gain changes similar to those induced by attention. These results require a revision of the current computational theories of binocular rivalry, in which the role of attention is ignored. Here, we provide a computational model of binocular rivalry. In the model, competition between two images in rivalry is driven by both attentional modulation and mutual inhibition, which have distinct selectivity (feature vs. eye of origin) and dynamics (relatively slow vs. relatively fast). The proposed model explains a wide range of phenomena reported in rivalry, including the three hallmarks: (i) binocular rivalry requires attention; (ii) various perceptual states emerge when the two images are swapped between the eyes multiple times per second; (iii) the dominance duration as a function of input strength follows Levelt’s propositions. With a bifurcation analysis, we identified the parameter space in which the model’s behavior was consistent with experimental results. PMID:28696323
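
    A reduced cousin of such models, mutual inhibition plus slow adaptation but without the attentional layer, already alternates under constant input; all parameter values below are illustrative, not the paper's:

```python
import numpy as np

def simulate_rivalry(tmax=40.0, dt=1e-3, inp=1.0, beta=3.0, g=2.0,
                     tau=0.02, tau_a=1.0):
    """Two populations inhibit each other (strength beta, fast time
    constant tau) and slowly adapt (strength g, slow time constant
    tau_a), producing alternating dominance under constant input."""
    def f(x):                       # steep sigmoid firing-rate function
        return 1.0 / (1.0 + np.exp(-10.0 * (x - 0.2)))
    n = int(tmax / dt)
    u = np.zeros((n, 2))
    a = np.zeros(2)                 # adaptation variables
    u[0] = [0.6, 0.1]               # slight initial asymmetry
    for i in range(1, n):
        u1, u2 = u[i - 1]
        du1 = (-u1 + f(inp - beta * u2 - g * a[0])) / tau
        du2 = (-u2 + f(inp - beta * u1 - g * a[1])) / tau
        a += dt * (u[i - 1] - a) / tau_a
        u[i] = u[i - 1] + dt * np.array([du1, du2])
    return u

u = simulate_rivalry()
dominant = (u[:, 0] > u[:, 1]).astype(int)
switches = np.abs(np.diff(dominant)).sum()   # number of alternations
```

    The proposed model adds a feature-selective attentional modulation on top of this eye-of-origin competition, which is what the sketch deliberately leaves out.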

  6. Comparison of glomerular activity patterns by fMRI and wide-field calcium imaging: implications for principles underlying odor mapping

    PubMed Central

    Sanganahalli, Basavaraju G.; Rebello, Michelle R.; Herman, Peter; Papademetris, Xenophon; Shepherd, Gordon M.; Verhagen, Justus V.; Hyder, Fahmeed

    2015-01-01

    Functional imaging signals arise from distinct metabolic and hemodynamic events at the neuropil, but how these processes are influenced by pre- and post-synaptic activities need to be understood for quantitative interpretation of stimulus-evoked mapping data. The olfactory bulb (OB) glomeruli, spherical neuropil regions with well-defined neuronal circuitry, can provide insights into this issue. Optical calcium-sensitive fluorescent dye imaging (OICa2+) reflects dynamics of pre-synaptic input to glomeruli, whereas high-resolution functional magnetic resonance imaging (fMRI) using deoxyhemoglobin contrast reveals neuropil function within the glomerular layer where both pre- and post-synaptic activities contribute. We imaged odor-specific activity patterns of the dorsal OB in the same anesthetized rats with fMRI and OICa2+ and then co-registered the respective maps to compare patterns in the same space. Maps by each modality were very reproducible as trial-to-trial patterns for a given odor, overlapping by ~80%. Maps evoked by ethyl butyrate and methyl valerate for a given modality overlapped by ~80%, suggesting activation of similar dorsal glomerular networks by these odors. Comparison of maps generated by both methods for a given odor showed ~70% overlap, indicating similar odor-specific maps by each method. These results suggest that odor-specific glomerular patterns by high-resolution fMRI primarily tracks pre-synaptic input to the OB. Thus combining OICa2+ and fMRI lays the framework for studies of OB processing over a range of spatiotemporal scales, where OICa2+ can feature the fast dynamics of dorsal glomerular clusters and fMRI can map the entire glomerular sheet in the OB. PMID:26631819
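
    A Dice-style overlap fraction is one way to quantify such map agreement; whether the study used exactly this overlap definition is an assumption:

```python
import numpy as np

def overlap_fraction(map_a, map_b, thresh=0.5):
    """Dice-style overlap of two activation maps after binarizing at a
    threshold: 2*|A intersect B| / (|A| + |B|). Values near 0.8 would
    correspond to the ~80% overlaps quoted for the co-registered maps."""
    a, b = map_a > thresh, map_b > thresh
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```

    For example, two 4-pixel maps that share one active pixel out of two each give an overlap of 0.5.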

  7. Model, analysis, and evaluation of the effects of analog VLSI arithmetic on linear subspace-based image recognition.

    PubMed

    Carvajal, Gonzalo; Figueroa, Miguel

    2014-07-01

    Typical image recognition systems operate in two stages: feature extraction to reduce the dimensionality of the input space, and classification based on the extracted features. Analog Very Large Scale Integration (VLSI) is an attractive technology to achieve compact and low-power implementations of these computationally intensive tasks for portable embedded devices. However, device mismatch limits the resolution of the circuits fabricated with this technology. Traditional layout techniques to reduce the mismatch aim to increase the resolution at the transistor level, without considering the intended application. Relating mismatch parameters to specific effects at the application level would allow designers to apply focalized mismatch compensation techniques according to predefined performance/cost tradeoffs. This paper models, analyzes, and evaluates the effects of mismatched analog arithmetic in both feature extraction and classification circuits. For the feature extraction, we propose analog adaptive linear combiners with on-chip learning for both the Least Mean Square (LMS) and Generalized Hebbian Algorithm (GHA). Using mathematical abstractions of analog circuits, we identify mismatch parameters that are naturally compensated during the learning process, and propose cost-effective guidelines to reduce the effect of the rest. For the classification, we derive analog models for the circuits necessary to implement the Nearest Neighbor (NN) approach and Radial Basis Function (RBF) networks, and use them to emulate analog classifiers with standard databases of faces and handwritten digits. Formal analysis and experiments show how we can exploit adaptive structures and properties of the input space to compensate for the effects of device mismatch at the application level, thus reducing the design overhead of traditional layout techniques. The results are also directly extensible to multiple application domains using linear subspace methods. Copyright © 2014 Elsevier Ltd. All rights reserved.
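
    The adaptive linear combiner with LMS learning can be sketched in a few lines; this is the textbook digital LMS rule, not the paper's analog circuit model, but it is the same update whose convergence absorbs part of the device mismatch:

```python
import numpy as np

def lms_train(x, d, mu=0.05, epochs=20):
    """Least-Mean-Square adaptive linear combiner: for each sample the
    weights move along the instantaneous error gradient, w += mu*e*x."""
    w = np.zeros(x.shape[1])
    for _ in range(epochs):
        for xi, di in zip(x, d):
            e = di - w @ xi          # instantaneous error
            w += mu * e * xi         # stochastic gradient step
    return w

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 3))
w_true = np.array([0.5, -1.0, 2.0])
d = x @ w_true                       # noiseless desired outputs
w = lms_train(x, d)
```

    Because the rule only ever sees the output error, a constant multiplicative gain error on the weight path is transparently compensated, which is the kind of mismatch the learning absorbs on-chip.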

  8. The Post-Processing Approach in the Finite Element Method. Part 1. Calculation of Displacements, Stresses, and other Higher Derivatives of the Displacements.

    DTIC Science & Technology

    1982-12-01

    Were the influence function (Green’s function) known for this point, then we could take i=O and 0 would be expressible in terms of the input data...alone. So (1.1) would take the form 4=R . Of course, the influence function is not in general available. At the other extreme, if we take to be the Dirac...where n is some integer, which, for the moment, will remain arbitrary. If we select for the influence function (Green’s function), then (2.5a) and

  9. Integration of prior knowledge into dense image matching for video surveillance

    NASA Astrophysics Data System (ADS)

    Menze, M.; Heipke, C.

    2014-08-01

    Three-dimensional information from dense image matching is a valuable input for a broad range of vision applications. While reliable approaches exist for dedicated stereo setups they do not easily generalize to more challenging camera configurations. In the context of video surveillance the typically large spatial extent of the region of interest and repetitive structures in the scene render the application of dense image matching a challenging task. In this paper we present an approach that derives strong prior knowledge from a planar approximation of the scene. This information is integrated into a graph-cut based image matching framework that treats the assignment of optimal disparity values as a labelling task. Introducing the planar prior heavily reduces ambiguities together with the search space and increases computational efficiency. The results provide a proof of concept of the proposed approach. It allows the reconstruction of dense point clouds in more general surveillance camera setups with wider stereo baselines.

  10. An ice-motion tracking system at the Alaska SAR facility

    NASA Technical Reports Server (NTRS)

    Kwok, Ronald; Curlander, John C.; Pang, Shirley S.; Mcconnell, Ross

    1990-01-01

    An operational system for extracting ice-motion information from synthetic aperture radar (SAR) imagery is being developed as part of the Alaska SAR Facility. This geophysical processing system (GPS) will derive ice-motion information by automated analysis of image sequences acquired by radars on the European ERS-1, Japanese ERS-1, and Canadian RADARSAT remote sensing satellites. The algorithm consists of a novel combination of feature-based and area-based techniques for the tracking of ice floes that undergo translation and rotation between imaging passes. The system performs automatic selection of the image pairs for input to the matching routines using an ice-motion estimator. It is designed to have a daily throughput of ten image pairs. A description is given of the GPS system, including an overview of the ice-motion-tracking algorithm, the system architecture, and the ice-motion products that will be available for distribution to geophysical data users.

  11. Image-Based Reverse Engineering and Visual Prototyping of Woven Cloth.

    PubMed

    Schroder, Kai; Zinke, Arno; Klein, Reinhard

    2015-02-01

    Realistic visualization of cloth has many applications in computer graphics. An ongoing research problem is how to best represent and capture cloth models, specifically when considering computer aided design of cloth. Previous methods produce highly realistic images, however, they are either difficult to edit or require the measurement of large databases to capture all variations of a cloth sample. We propose a pipeline to reverse engineer cloth and estimate a parametrized cloth model from a single image. We introduce a geometric yarn model, integrating state-of-the-art textile research. We present an automatic analysis approach to estimate yarn paths, yarn widths, their variation and a weave pattern. Several examples demonstrate that we are able to model the appearance of the original cloth sample. Properties derived from the input image give a physically plausible basis that is fully editable using a few intuitive parameters.

  12. Insights into Brown Adipose Tissue Physiology as Revealed by Imaging Studies

    PubMed Central

    Izzi-Engbeaya, Chioma; Salem, Victoria; Atkar, Rajveer S; Dhillo, Waljit S

    2014-01-01

    There has been resurgence in interest in brown adipose tissue (BAT) following radiological and histological identification of metabolically active BAT in adult humans. Imaging enables BAT to be studied non-invasively and therefore imaging studies have contributed a significant amount to what is known about BAT function in humans. In this review the current knowledge (derived from imaging studies) about the prevalence, function, activity and regulation of BAT in humans (as well as relevant rodent studies), will be summarized. PMID:26167397

  13. Steerable Principal Components for Space-Frequency Localized Images

    PubMed Central

    Landa, Boris; Shkolnisky, Yoel

    2017-01-01

    As modern scientific image datasets typically consist of a large number of images of high resolution, devising methods for their accurate and efficient processing is a central research task. In this paper, we consider the problem of obtaining the steerable principal components of a dataset, a procedure termed “steerable PCA” (steerable principal component analysis). The output of the procedure is the set of orthonormal basis functions which best approximate the images in the dataset and all of their planar rotations. To derive such basis functions, we first expand the images in an appropriate basis, for which the steerable PCA reduces to the eigen-decomposition of a block-diagonal matrix. If we assume that the images are well localized in space and frequency, then such an appropriate basis is the prolate spheroidal wave functions (PSWFs). We derive a fast method for computing the PSWF expansion coefficients from the images' equally spaced samples, via a specialized quadrature integration scheme, and show that the number of required quadrature nodes is similar to the number of pixels in each image. We then establish that our PSWF-based steerable PCA is both faster and more accurate than existing methods and, more importantly, provides rigorous error bounds on the entire procedure. PMID:29081879
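
    The block-diagonal structure is the heart of the method: once images are expanded in a steerable basis, PCA splits into one eigendecomposition per angular frequency. A minimal sketch with random stand-in coefficients (a real pipeline would first compute PSWF coefficients from the image samples):

```python
import numpy as np

def steerable_pca(coeffs):
    """Per-block PCA: coeffs[k] is an (n_images x n_radial) matrix of
    expansion coefficients with angular frequency k. Planar rotations
    act diagonally per angular block, so the covariance is
    block-diagonal and each block is diagonalized separately."""
    out = {}
    for k, c in coeffs.items():
        cov = c.conj().T @ c / c.shape[0]        # Hermitian block
        vals, vecs = np.linalg.eigh(cov)
        out[k] = (vals[::-1], vecs[:, ::-1])     # descending variance
    return out

rng = np.random.default_rng(1)
coeffs = {0: rng.normal(size=(50, 4)),
          1: rng.normal(size=(50, 4)) + 1j * rng.normal(size=(50, 4))}
pcs = steerable_pca(coeffs)
```

    Each small eigenproblem replaces one giant eigendecomposition over all pixels, which is where the speed of steerable PCA comes from.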

  14. Virtual three-dimensional blackboard: three-dimensional finger tracking with a single camera

    NASA Astrophysics Data System (ADS)

    Wu, Andrew; Hassan-Shafique, Khurram; Shah, Mubarak; da Vitoria Lobo, N.

    2004-01-01

    We present a method for three-dimensional (3D) tracking of a human finger from a monocular sequence of images. To recover the third dimension from the two-dimensional images, we use the fact that the motion of the human arm is highly constrained owing to the dependencies between elbow and forearm and the physical constraints on joint angles. We use these anthropometric constraints to derive a 3D trajectory of a gesticulating arm. The system is fully automated and does not require human intervention. The system presented can be used as a visualization tool, as a user-input interface, or as part of some gesture-analysis system in which 3D information is important.

  15. Training feed-forward neural networks with gain constraints

    PubMed

    Hartman

    2000-04-01

    Inaccurate input-output gains (partial derivatives of outputs with respect to inputs) are common in neural network models when input variables are correlated or when data are incomplete or inaccurate. Accurate gains are essential for optimization, control, and other purposes. We develop and explore a method for training feedforward neural networks subject to inequality or equality-bound constraints on the gains of the learned mapping. Gain constraints are implemented as penalty terms added to the objective function, and training is done using gradient descent. Adaptive and robust procedures are devised for balancing the relative strengths of the various terms in the objective function, which is essential when the constraints are inconsistent with the data. The approach has the virtue that the model domain of validity can be extended via extrapolation training, which can dramatically improve generalization. The algorithm is demonstrated here on artificial and real-world problems with very good results and has been advantageously applied to dozens of models currently in commercial use.
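The penalty-term idea can be sketched on a deliberately simple case. Below is a toy sketch, not the paper's algorithm: a one-parameter linear model whose gain dy/dx is its weight, trained by gradient descent with a quadratic penalty for violating a hypothetical gain bound (the bound, penalty strength, and data are all invented for illustration).

```python
import numpy as np

# Toy sketch of the penalty idea (not the paper's exact algorithm): fit
# y = w*x + b by gradient descent while penalizing violations of an
# inequality gain bound w <= w_max. For a linear model the input-output
# gain dy/dx is simply w, so the constraint is easy to read off.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 200)
y = 3.0 * x + 0.1 * rng.normal(size=200)   # data whose true gain is 3.0

w, b = 0.0, 0.0
w_max = 2.0     # hypothetical gain bound, deliberately inconsistent with the data
lam = 50.0      # penalty strength balancing data fit vs. constraint
lr = 0.005
for _ in range(5000):
    err = w * x + b - y
    # gradient of mean(err^2) + lam * max(0, w - w_max)^2
    grad_w = 2.0 * np.mean(err * x) + 2.0 * lam * max(0.0, w - w_max)
    grad_b = 2.0 * np.mean(err)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2))   # pulled down near w_max rather than the data's gain 3.0
```

Because the constraint here contradicts the data, the converged gain sits just above the bound, which is exactly the inconsistent-constraint regime the abstract says requires careful balancing of the objective terms.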

  16. Face recognition via Gabor and convolutional neural network

    NASA Astrophysics Data System (ADS)

    Lu, Tongwei; Wu, Menglu; Lu, Tao

    2018-04-01

    In recent years, the powerful feature learning and classification ability of convolutional neural networks has attracted wide attention. Compared with deep learning, traditional machine learning algorithms have an interpretability that deep learning lacks. Thus, in this paper, we propose a method that uses features extracted by a traditional algorithm as the input of a convolutional neural network. In order to reduce the complexity of the network, the kernel function of the Gabor wavelet is used to extract features from different positions, frequencies, and directions of the target image. It is sensitive to image edges and provides good direction and scale selectivity. The features extracted from eight directions at a single scale serve as the input of the proposed network. The network has the advantages of weight sharing and local connectivity, and the texture features of the input image reduce the influence of facial expression, gesture, and illumination. At the same time, we introduce a layer that combines the results of pooling and convolution to extract deeper features. The network is trained with the open-source Caffe framework, which is beneficial to feature extraction. Experimental results show that the proposed network structure effectively overcomes the barrier of illumination, is robust, and is more accurate and faster than the traditional algorithm.
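As a concrete illustration of the fixed filter bank described above, the following sketch builds a bank of real Gabor kernels at eight orientations with NumPy; all parameter values (kernel size, wavelength, aspect ratio) are hypothetical, not taken from the paper.

```python
import numpy as np

# Illustrative construction of a real Gabor kernel (parameters hypothetical):
# a Gaussian envelope modulated by a cosine carrier, rotated by theta.
def gabor_kernel(ksize=11, sigma=2.0, theta=0.0, lam=5.0, gamma=0.5, psi=0.0):
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)     # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xr / lam + psi))

# A bank of eight orientations, as in the paper:
bank = [gabor_kernel(theta=k * np.pi / 8) for k in range(8)]
print(len(bank), bank[0].shape)
```

Convolving the input face image with each kernel in the bank yields the eight directional feature maps that feed the network.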

  17. Receptive Field Vectors of Genetically-Identified Retinal Ganglion Cells Reveal Cell-Type-Dependent Visual Functions

    PubMed Central

    Katz, Matthew L.; Viney, Tim J.; Nikolic, Konstantin

    2016-01-01

    Sensory stimuli are encoded by diverse kinds of neurons but the identities of the recorded neurons that are studied are often unknown. We explored in detail the firing patterns of eight previously defined genetically-identified retinal ganglion cell (RGC) types from a single transgenic mouse line. We first introduce a new technique of deriving receptive field vectors (RFVs) which utilises a modified form of mutual information (“Quadratic Mutual Information”). We analysed the firing patterns of RGCs during presentation of short duration (~10 second) complex visual scenes (natural movies). We probed the high dimensional space formed by the visual input for a much smaller dimensional subspace of RFVs that give the most information about the response of each cell. The new technique is very efficient and fast and the derivation of novel types of RFVs formed by the natural scene visual input was possible even with limited numbers of spikes per cell. This approach enabled us to estimate the 'visual memory' of each cell type and the corresponding receptive field area by calculating Mutual Information as a function of the number of frames and radius. Finally, we made predictions of biologically relevant functions based on the RFVs of each cell type. RGC class analysis was complemented with results for the cells’ response to simple visual input in the form of black and white spot stimulation, and their classification on several key physiological metrics. Thus RFVs lead to predictions of biological roles based on limited data and facilitate analysis of sensory-evoked spiking data from defined cell types. PMID:26845435

  18. Finger-Vein Image Enhancement Using a Fuzzy-Based Fusion Method with Gabor and Retinex Filtering

    PubMed Central

    Shin, Kwang Yong; Park, Young Ho; Nguyen, Dat Tien; Park, Kang Ryoung

    2014-01-01

    Because of the advantages of finger-vein recognition systems such as live detection and usage as bio-cryptography systems, they can be used to authenticate individual people. However, images of finger-vein patterns are typically unclear because of light scattering by the skin, optical blurring, and motion blurring, which can degrade the performance of finger-vein recognition systems. In response to these issues, a new enhancement method for finger-vein images is proposed. Our method is novel compared with previous approaches in four respects. First, the local and global features of the vein lines of an input image are amplified using Gabor filters in four directions and Retinex filtering, respectively. Second, the means and standard deviations in the local windows of the images produced after Gabor and Retinex filtering are used as inputs for the fuzzy rule and fuzzy membership function, respectively. Third, the optimal weights required to combine the two Gabor and Retinex filtered images are determined using a defuzzification method. Fourth, the use of a fuzzy-based method means that image enhancement does not require additional training data to determine the optimal weights. Experimental results using two finger-vein databases showed that the proposed method enhanced the accuracy of finger-vein recognition compared with previous methods. PMID:24549251

  19. Performance of real time associative memory using a photorefractive crystal and liquid crystal electrooptic switches

    NASA Astrophysics Data System (ADS)

    Xu, Haiying; Yuan, Yang; Yu, Youlong; Xu, Kebin; Xu, Yuhuan

    1990-08-01

    This paper presents a real time holographic associative memory implemented with photorefractive KNSBN:Co crystal as the memory element and a liquid crystal electrooptic switch array as the reflective thresholding device. The experiment stores and recalls two images and shows that the system has real-time multiple-image storage and recall functions. An associative memory with a dynamic threshold level to decide the closest match of an incomplete input is proposed.

  20. Integrative image segmentation optimization and machine learning approach for high quality land-use and land-cover mapping using multisource remote sensing data

    NASA Astrophysics Data System (ADS)

    Gibril, Mohamed Barakat A.; Idrees, Mohammed Oludare; Yao, Kouame; Shafri, Helmi Zulhaidi Mohd

    2018-01-01

    The growing use of optimization for geographic object-based image analysis and the possibility to derive a wide range of information about the image in textual form make machine learning (data mining) a versatile tool for information extraction from multiple data sources. This paper presents an application of data mining for land-cover classification by fusing SPOT-6, RADARSAT-2, and derived datasets. First, the images and other derived indices (normalized difference vegetation index, normalized difference water index, and soil adjusted vegetation index) were combined and subjected to a segmentation process with optimal segmentation parameters obtained using a combination of spatial and Taguchi statistical optimization. The image objects, which carry all the attributes of the input datasets, were extracted and related to the target land-cover classes through data mining algorithms (decision tree) for classification. To evaluate the performance, the result was compared with two nonparametric classifiers: support vector machine (SVM) and random forest (RF). Furthermore, the decision tree classification result was evaluated against six unoptimized trials segmented using arbitrary parameter combinations. The result shows that the optimized process produces better land-use/land-cover classification, with an overall classification accuracy of 91.79% for the decision tree, compared with 87.25% and 88.69% for SVM and RF, respectively, while the results of the six unoptimized classifications yield overall accuracies between 84.44% and 88.08%. The higher accuracy of the optimized data mining classification approach compared to the unoptimized results indicates that the optimization process has a significant impact on classification quality.

  1. Neural Network Optimization of Ligament Stiffnesses for the Enhanced Predictive Ability of a Patient-Specific, Computational Foot/Ankle Model.

    PubMed

    Chande, Ruchi D; Wayne, Jennifer S

    2017-09-01

    Computational models of diarthrodial joints serve to inform the biomechanical function of these structures, and as such, must be supplied appropriate inputs for performance that is representative of actual joint function. Inputs for these models are sourced from both imaging modalities and the literature. The latter is often the source of mechanical properties for soft tissues, like ligament stiffnesses; however, such data are not always available for all the soft tissues, nor are they known for patient-specific work. In the current research, a method to improve the ligament stiffness definition for a computational foot/ankle model was sought with the greater goal of improving the predictive ability of the computational model. Specifically, the stiffness values were optimized using artificial neural networks (ANNs); both feedforward and radial basis function networks (RBFNs) were considered. Optimal networks of each type were determined and subsequently used to predict stiffnesses for the foot/ankle model. Ultimately, the predicted stiffnesses were considered reasonable and resulted in enhanced performance of the computational model, suggesting that artificial neural networks can be used to optimize stiffness inputs.

  2. Beating the odds: The poisson distribution of all input cells during limiting dilution grossly underestimates whether a cell line is clonally-derived or not.

    PubMed

    Zhou, Yizhou; Shaw, David; Lam, Cynthia; Tsukuda, Joni; Yim, Mandy; Tang, Danming; Louie, Salina; Laird, Michael W; Snedecor, Brad; Misaghi, Shahram

    2017-09-23

    Establishing that a cell line was derived from a single cell progenitor, and can therefore be defined as clonally-derived, for the production of clinical and commercial therapeutic protein drugs has received increased emphasis in cell line development (CLD). Several regulatory agencies have expressed that the prospective probability of clonality for CHO cell lines is assumed to follow the Poisson distribution based on the input cell count. The probability of obtaining monoclonal progenitors based on the Poisson distribution of all cells suggests that one round of limiting dilution may not be sufficient to assure the resulting cell lines are clonally-derived. We experimentally analyzed clonal derivatives originating from single cell cloning (SCC) via one round of limiting dilution, following our standard legacy cell line development practice. Two cell populations with stably integrated DNA spacers were mixed and subjected to SCC via limiting dilution. Cells were cultured in the presence of selection agent, screened, and ranked based on product titer. Post-SCC, the growing cell lines were screened by PCR analysis for the presence of identifying spacers. We observed that the percentage of nonclonal populations was below 9%, which is considerably lower than the determined probability based on the Poisson distribution of all cells. These results were further confirmed using fluorescence imaging of clonal derivatives originating from SCC via limiting dilution of mixed cell populations expressing GFP or RFP. Our results demonstrate that in the presence of selection agent, the Poisson distribution of all cells clearly underestimates the probability of obtaining clonally-derived cell lines. © 2017 American Institute of Chemical Engineers Biotechnol. Prog., 2017.
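The Poisson reasoning the abstract refers to can be made concrete. Assuming cells land in wells independently with a mean of λ cells per well, the chance that a well that grows (i.e., received at least one cell) started from exactly one cell is P(1)/P(≥1); the seeding density below is illustrative, not the study's.

```python
import math

def p_monoclonal_given_growth(lam: float) -> float:
    # Poisson: P(k) = lam^k e^-lam / k!; condition on >= 1 cell (well grows)
    p_exactly_one = lam * math.exp(-lam)
    p_at_least_one = 1.0 - math.exp(-lam)
    return p_exactly_one / p_at_least_one

# e.g. seeding at an average of 0.5 cells/well (illustrative density):
print(round(p_monoclonal_given_growth(0.5), 3))  # ≈ 0.771
```

At 0.5 cells/well this predicts roughly 23% of growing wells are nonclonal, noticeably higher than the sub-9% the authors observed under selection, which is the gap the paper highlights.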

  3. Registration-based assessment of regional lung function via volumetric CT images of normal subjects vs. severe asthmatics

    PubMed Central

    Choi, Sanghun; Hoffman, Eric A.; Wenzel, Sally E.; Tawhai, Merryn H.; Yin, Youbing; Castro, Mario

    2013-01-01

    The purpose of this work was to explore the use of image registration-derived variables associated with computed tomographic (CT) imaging of the lung acquired at multiple volumes. As an evaluation of the utility of such an imaging approach, we explored two groups at the extremes of population ranging from normal subjects to severe asthmatics. A mass-preserving image registration technique was employed to match CT images at total lung capacity (TLC) and functional residual capacity (FRC) for assessment of regional air volume change and lung deformation between the two states. Fourteen normal subjects and thirty severe asthmatics were analyzed via image registration-derived metrics together with their pulmonary function test (PFT) and CT-based air-trapping. Relative to the normal group, the severely asthmatic group demonstrated reduced air volume change (consistent with air trapping) and more isotropic deformation in the basal lung regions while demonstrating increased air volume change associated with increased anisotropic deformation in the apical lung regions. These differences were found despite the fact that both PFT-derived TLC and FRC in the two groups were nearly 100% of predicted values. Data suggest that reduced basal-lung air volume change in severe asthmatics was compensated by increased apical-lung air volume change and that relative increase in apical-lung air volume change in severe asthmatics was accompanied by enhanced anisotropic deformation. These data suggest that CT-based deformation, assessed via inspiration vs. expiration scans, provides a tool for distinguishing differences in lung mechanics when applied to the extreme ends of a population range. PMID:23743399
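As a toy illustration of how a registration-derived deformation can be turned into a regional volume-change map (our assumptions, not the authors' pipeline), the determinant of the deformation Jacobian gives the local volume ratio; a uniform 10% stretch along one axis should give det(J) ≈ 1.1 at every voxel.

```python
import numpy as np

# Toy sketch: local volume change from a displacement field via the
# Jacobian determinant of the deformation x -> x + u(x).
def jacobian_det(disp):
    # disp: (3, nz, ny, nx) displacement field in voxel units
    grads = np.stack([np.stack(np.gradient(disp[i])) for i in range(3)])
    J = np.eye(3)[:, :, None, None, None] + grads   # J[i,j] = d_ij + du_i/dx_j
    return np.linalg.det(np.moveaxis(J, (0, 1), (-2, -1)))

disp = np.zeros((3, 6, 6, 6))
disp[0] = 0.1 * np.arange(6)[:, None, None]         # u_z = 0.1 * z
det = jacobian_det(disp)
print(float(det.mean()))   # ≈ 1.1, a 10% local volume increase
```

In the study's terms, regions with det(J) near 1 between FRC and TLC correspond to reduced air volume change (air trapping), while anisotropy of the deformation is read from how unevenly the stretch is distributed across the three axes.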

  4. Soil C dynamics under intensive oil palm plantations in poor tropical soils

    NASA Astrophysics Data System (ADS)

    Guillaume, Thomas; Ruegg, Johanna; Quezada, Juan Carlos; Buttler, Alexandre

    2017-04-01

    Oil palm cultivation mainly takes place on heavily-weathered tropical soils where nutrients are limiting factors for plant growth and microbial activity. Intensive fertilization and changes of C input by oil palms strongly affect soil C and nutrient dynamics, challenging long-term soil fertility. Oil palm plantation management offers unique opportunities to study soil C and nutrient interactions in field conditions because 1) plantations can be considered long-term litter manipulation experiments, since all aboveground C inputs are concentrated in frond pile areas, and 2) mineral fertilizers are only applied in specific areas, i.e. the weeded circle around the tree and interrows, but not in harvest paths. Here, we determined the impacts of mineral fertilizer and organic matter input on soil organic carbon dynamics and microbial activity in a mature oil palm plantation established on savanna grasslands. Rates of savanna-derived soil organic carbon (SOC) decomposition and oil palm-derived SOC net stabilization were determined using changes in the isotopic signature of C input following a shift from C4 (savanna) to C3 (oil palm) vegetation. Application of mineral fertilizer alone did not affect savanna-derived SOC decomposition or oil palm-derived SOC stabilization rates, but fertilization associated with higher C input led to an increase of oil palm-derived SOC stabilization rates, with about 50% of topsoil SOC derived from oil palm after 9 years. High carbon and nutrient inputs did not increase microbial biomass, but microorganisms were more active per unit of biomass and SOC. In conclusion, soil organic matter decomposition was limited by C rather than nutrients in the studied heavily-weathered soils. Fresh C and nutrient inputs did not lead to priming of old savanna-derived SOC but increased turnover and stabilization of new oil palm-derived SOC.
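The C4-to-C3 bookkeeping behind such estimates is a two-pool isotopic mixing calculation. The sketch below uses typical end-member δ13C values (about −12‰ for C4 savanna and −28‰ for C3 oil palm; illustrative values, not the study's measurements):

```python
# Two-pool delta-13C mixing: fraction of SOC derived from the C3 source
# (oil palm) given a measured sample signature and two end-members.
# End-member values are typical literature-style numbers, for illustration.
def frac_oil_palm_c(delta_sample, delta_c4=-12.0, delta_c3=-28.0):
    return (delta_sample - delta_c4) / (delta_c3 - delta_c4)

print(frac_oil_palm_c(-20.0))  # 0.5: midway between the two end-members
```

A topsoil sample at −20‰ would thus be attributed half to new oil palm C and half to old savanna C, the kind of partitioning used to separate decomposition of the old pool from stabilization of the new one.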

  5. Fast computation of derivative based sensitivities of PSHA models via algorithmic differentiation

    NASA Astrophysics Data System (ADS)

    Leövey, Hernan; Molkenthin, Christian; Scherbaum, Frank; Griewank, Andreas; Kuehn, Nicolas; Stafford, Peter

    2015-04-01

    Probabilistic seismic hazard analysis (PSHA) is the preferred tool for estimation of potential ground-shaking hazard due to future earthquakes at a site of interest. A modern PSHA represents a complex framework which combines different models with possibly many inputs. Sensitivity analysis is a valuable tool for quantifying changes of a model output as inputs are perturbed, identifying critical input parameters and obtaining insight into the model behavior. Differential sensitivity analysis relies on calculating first-order partial derivatives of the model output with respect to its inputs. Moreover, derivative based global sensitivity measures (Sobol' & Kucherenko '09) can be practically used to detect non-essential inputs of the models, thus restricting the focus of attention to a possibly much smaller set of inputs. Nevertheless, obtaining first-order partial derivatives of complex models with traditional approaches can be very challenging, and the computation complexity usually increases linearly with the number of inputs appearing in the models. In this study we show how Algorithmic Differentiation (AD) tools can be used in a complex framework such as PSHA to successfully estimate derivative based sensitivities, as is done in various other domains such as meteorology or aerodynamics, with no significant increase in the computation complexity required for the original computations. First we demonstrate the feasibility of the AD methodology by comparing AD derived sensitivities to analytically derived sensitivities for a basic case of PSHA using a simple ground-motion prediction equation. In a second step, we derive sensitivities via AD for a more complex PSHA study using a ground motion attenuation relation based on a stochastic method to simulate strong motion. The presented approach is general enough to accommodate more advanced PSHA studies of higher complexity.
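The core AD idea, carrying a derivative alongside every value, can be shown in a few lines. The ground-motion relation and its coefficients below are hypothetical, purely to illustrate forward-mode AD with dual numbers:

```python
import math

# Minimal forward-mode algorithmic differentiation via dual numbers,
# applied to a toy ground-motion relation (coefficients hypothetical):
# ln(PGA) = a + b*M - c*ln(R)
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot   # value and derivative travel together
    def _lift(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._lift(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __sub__(self, o):
        o = self._lift(o)
        return Dual(self.val - o.val, self.dot - o.dot)
    def __mul__(self, o):
        o = self._lift(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def dlog(x):
    # log with the chain rule applied to the derivative slot
    return Dual(math.log(x.val), x.dot / x.val)

def ln_pga(M, R, a=-3.5, b=0.9, c=1.2):
    return a + b * M - c * dlog(R)

# Sensitivity of ln(PGA) to distance at M = 6, R = 20 km (seed R with dot=1):
sens = ln_pga(Dual(6.0), Dual(20.0, 1.0)).dot
print(round(sens, 4))   # analytic answer is -c/R = -1.2/20 = -0.06
```

One evaluation yields the derivative with respect to whichever input is seeded, which is why the cost does not grow with model complexity the way finite differencing does.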

  6. Homeostasis, singularities, and networks.

    PubMed

    Golubitsky, Martin; Stewart, Ian

    2017-01-01

    Homeostasis occurs in a biological or chemical system when some output variable remains approximately constant as an input parameter [Formula: see text] varies over some interval. We discuss two main aspects of homeostasis, both related to the effect of coordinate changes on the input-output map. The first is a reformulation of homeostasis in the context of singularity theory, achieved by replacing 'approximately constant over an interval' by 'zero derivative of the output with respect to the input at a point'. Unfolding theory then classifies all small perturbations of the input-output function. In particular, the 'chair' singularity, which is especially important in applications, is discussed in detail. Its normal form and universal unfolding [Formula: see text] is derived and the region of approximate homeostasis is deduced. The results are motivated by data on thermoregulation in two species of opossum and the spiny rat. We give a formula for finding chair points in mathematical models by implicit differentiation and apply it to a model of lateral inhibition. The second asks when homeostasis is invariant under appropriate coordinate changes. This is false in general, but for network dynamics there is a natural class of coordinate changes: those that preserve the network structure. We characterize those nodes of a given network for which homeostasis is invariant under such changes. This characterization is determined combinatorially by the network topology.

  7. A two-stage Monte Carlo approach to the expression of uncertainty with finite sample sizes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crowder, Stephen Vernon; Moyer, Robert D.

    2005-05-01

    Proposed supplement I to the GUM outlines a 'propagation of distributions' approach to deriving the distribution of a measurand for any non-linear function and for any set of random inputs. The supplement's proposed Monte Carlo approach assumes that the distributions of the random inputs are known exactly. This implies that the sample sizes are effectively infinite. In this case, the mean of the measurand can be determined precisely using a large number of Monte Carlo simulations. In practice, however, the distributions of the inputs will rarely be known exactly, but must be estimated using possibly small samples. If these approximated distributions are treated as exact, the uncertainty in estimating the mean is not properly taken into account. In this paper, we propose a two-stage Monte Carlo procedure that explicitly takes into account the finite sample sizes used to estimate parameters of the input distributions. We will illustrate the approach with a case study involving the efficiency of a thermistor mount power sensor. The performance of the proposed approach will be compared to the standard GUM approach for finite samples using simple non-linear measurement equations. We will investigate performance in terms of coverage probabilities of derived confidence intervals.
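A minimal sketch of the two-stage structure, under our own simplifying assumptions (a toy measurand y = x1·x2, tiny samples, and a normal approximation for the estimated means; none of this is the paper's case study):

```python
import numpy as np

# Two-stage Monte Carlo sketch: the outer stage draws plausible input
# parameters given the finite samples; the inner stage propagates the
# resulting input distributions through the measurement model.
rng = np.random.default_rng(1)
n = 5                                        # small sample per input
x1_obs = rng.normal(2.0, 0.1, n)
x2_obs = rng.normal(3.0, 0.2, n)
s1, s2 = x1_obs.std(ddof=1), x2_obs.std(ddof=1)

draws = []
for _ in range(1000):                        # stage 1: parameter uncertainty
    m1 = rng.normal(x1_obs.mean(), s1 / np.sqrt(n))   # plausible true means
    m2 = rng.normal(x2_obs.mean(), s2 / np.sqrt(n))
    # stage 2: propagate the input distributions through y = x1 * x2
    y = rng.normal(m1, s1, 500) * rng.normal(m2, s2, 500)
    draws.append(y)
draws = np.concatenate(draws)
lo, hi = np.percentile(draws, [2.5, 97.5])
print(lo < 2.0 * 3.0 < hi)                   # interval covers the true value
```

Treating the estimated distributions as exact would skip the outer loop and produce intervals that are too narrow, which is precisely the coverage-probability issue the paper investigates.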

  8. The behavior of quantization spectra as a function of signal-to-noise ratio

    NASA Technical Reports Server (NTRS)

    Flanagan, M. J.

    1991-01-01

    An expression for the spectrum of quantization error in a discrete-time system whose input is a sinusoid plus white Gaussian noise is derived. This quantization spectrum consists of two components: a white-noise floor and spurious harmonics. The dithering effect of the input Gaussian noise in both components of the spectrum is considered. Quantitative results in a discrete Fourier transform (DFT) example show the behavior of spurious harmonics as a function of the signal-to-noise ratio (SNR). These results have strong implications for digital reception and signal analysis systems. At low SNRs, spurious harmonics decay exponentially on a log-log scale, and the resulting spectrum is white. As the SNR increases, the spurious harmonics figure prominently in the output spectrum. A useful expression is given that roughly bounds the magnitude of a spurious harmonic as a function of the SNR.
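The dithering effect described above is easy to reproduce numerically. The following experiment (illustrative, not the paper's analytic derivation) quantizes a sinusoid with and without additive Gaussian noise and compares how peaky the quantization-error spectrum is:

```python
import numpy as np

# Quantize a sinusoid with and without Gaussian noise; the noise dithers
# the quantizer, flattening spurious harmonics into a white-ish floor.
def qerror_spectrum(noise_sigma, n=4096, bits=4, f0=37):
    rng = np.random.default_rng(0)
    t = np.arange(n)
    x = np.sin(2 * np.pi * f0 * t / n) + noise_sigma * rng.normal(size=n)
    lsb = 2.0 / 2 ** bits                 # full scale taken as [-1, 1)
    e = np.round(x / lsb) * lsb - x       # uniform mid-tread quantizer error
    return np.abs(np.fft.rfft(e)) / n

def peakiness(s):
    return s.max() / s.mean()             # strong spectral lines -> large ratio

clean, dithered = qerror_spectrum(0.0), qerror_spectrum(0.3)
print(peakiness(clean) > peakiness(dithered))   # harmonics suppressed by noise
```

At low SNR (large input noise) the error spectrum is nearly white; as the SNR rises the spurious harmonics re-emerge, matching the behavior the abstract describes.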

  9. Target dependence of orientation and direction selectivity of corticocortical projection neurons in the mouse V1

    PubMed Central

    Matsui, Teppei; Ohki, Kenichi

    2013-01-01

    Higher order visual areas that receive input from the primary visual cortex (V1) are specialized for the processing of distinct features of visual information. However, it is still incompletely understood how this functional specialization is acquired. Here we used in vivo two-photon calcium imaging in the mouse visual cortex to investigate whether this functional distinction already exists at the level of projections from V1 to two higher order visual areas, AL and LM. Specifically, we examined whether the sharpness of orientation and direction selectivity and the optimal spatial and temporal frequency of projection neurons from V1 to higher order visual areas match those of the target areas. We found that the V1 inputs to higher order visual areas were indeed functionally distinct: AL preferentially received inputs from V1 that were more orientation and direction selective and tuned for lower spatial frequency compared with the projection from V1 to LM, consistent with functional differences between AL and LM. The present findings suggest that selective projections from V1 to higher order visual areas initiate parallel processing of sensory information in the visual cortical network. PMID:24068987

  10. Experimental Optoelectronic Associative Memory

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin

    1992-01-01

    Optoelectronic associative memory responds to input image by displaying one of M remembered images. Which image to display determined by optoelectronic analog computation of resemblance between input image and each remembered image. Does not rely on precomputation and storage of outer-product synapse matrix. Size of memory needed to store and process images reduced.

  11. Material appearance acquisition from a single image

    NASA Astrophysics Data System (ADS)

    Zhang, Xu; Cui, Shulin; Cui, Hanwen; Yang, Lin; Wu, Tao

    2017-01-01

    The scope of this paper is to present a method of material appearance acquisition (MAA) from a single image. In this paper, material appearance is represented by a spatially varying bidirectional reflectance distribution function (SVBRDF). Therefore, MAA can be reduced to the problem of recovering each pixel's BRDF parameters from an original input image, which include the diffuse coefficient, specular coefficient, normal and glossiness based on the Blinn-Phong model. In our method, the workflow of MAA includes five main phases: highlight removal, estimation of intrinsic images, shape from shading (SFS), initialization of glossiness and refining SVBRDF parameters based on IPOPT. The results indicate that the proposed technique can effectively extract the material appearance from a single image.
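For reference, the Blinn-Phong model the four per-pixel parameters feed into combines a diffuse lobe and a specular lobe; the sketch below evaluates it for a single pixel (variable names and test vectors are ours, not the paper's):

```python
import numpy as np

# Per-pixel Blinn-Phong evaluation: I = kd * max(N.L, 0) + ks * max(N.H, 0)^g
# kd: diffuse coefficient, ks: specular coefficient, g: glossiness,
# n: surface normal, l: light direction, v: view direction.
def blinn_phong(kd, ks, g, n, l, v):
    n, l, v = (np.asarray(a, float) / np.linalg.norm(a) for a in (n, l, v))
    h = (l + v) / np.linalg.norm(l + v)           # half vector
    diffuse = kd * max(float(n @ l), 0.0)
    specular = ks * max(float(n @ h), 0.0) ** g
    return diffuse + specular

# Light and view both along the normal: N.L = N.H = 1, so I = kd + ks
print(blinn_phong(kd=0.6, ks=0.4, g=32, n=[0, 0, 1], l=[0, 0, 1], v=[0, 0, 1]))
```

Recovering material appearance then amounts to inverting this forward model per pixel, which is what the IPOPT refinement stage does.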

  12. The prospects of Jerusalem artichoke in functional food ingredients and bioenergy production.

    PubMed

    Yang, Linxi; He, Quan Sophia; Corscadden, Kenneth; Udenigwe, Chibuike C

    2015-03-01

    Jerusalem artichoke, a plant native to North America, has recently been recognized as a promising biomass for bioeconomy development, with a number of advantages over conventional crops such as low-input cultivation, high crop yield, wide adaptation to climatic and soil conditions and strong resistance to pests and plant diseases. A variety of bioproducts can be derived from Jerusalem artichoke, including inulin, fructose, natural fungicides, antioxidants and bioethanol. This paper provides an overview of the cultivation of Jerusalem artichoke, the derivation of bioproducts and applicable production technologies, with the expectation of drawing more attention to this valuable crop for its applications as a biofuel, functional food and bioactive ingredient source.

  13. Image Segmentation for Improvised Explosive Devices

    DTIC Science & Technology

    2012-12-01

    us to generate color models for IEDs without user input that labels parts of the IED. All graph cut algorithms we analyze define the undirected network G(V, E) as a set of nodes V, edges E, and capacities C: E → R. For the algorithms we study, this objective function is the sum of the two functions U and V, where the function U is a region property which evaluates the

  14. Use of collateral information to improve LANDSAT classification accuracies

    NASA Technical Reports Server (NTRS)

    Strahler, A. H. (Principal Investigator)

    1981-01-01

    Methods to improve LANDSAT classification accuracies were investigated including: (1) the use of prior probabilities in maximum likelihood classification as a methodology to integrate discrete collateral data with continuously measured image density variables; (2) the use of the logit classifier as an alternative to multivariate normal classification that permits mixing both continuous and categorical variables in a single model and fits empirical distributions of observations more closely than the multivariate normal density function; and (3) the use of collateral data in a geographic information system as exercised to model a desired output information layer as a function of input layers of raster format collateral and image data base layers.
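Point (1) amounts to adding log prior probabilities to the per-class log likelihoods. A minimal univariate Gaussian sketch (class parameters and priors are invented for illustration) shows how a strong prior from collateral data can flip a decision:

```python
import numpy as np

# Maximum likelihood classification with class priors folded in:
# discriminant g_i(x) = ln p(x | i) + ln P(i), Gaussian class densities.
def classify(x, means, sigmas, priors):
    g = [-0.5 * ((x - m) / s) ** 2 - np.log(s) + np.log(p)
         for m, s, p in zip(means, sigmas, priors)]
    return int(np.argmax(g))

means, sigmas = [0.3, 0.5], [0.1, 0.1]
x = 0.41   # spectral value slightly nearer class 1's mean
print(classify(x, means, sigmas, [0.5, 0.5]))   # equal priors: class 1 wins
print(classify(x, means, sigmas, [0.9, 0.1]))   # strong prior flips it to 0
```

This is the mechanism by which discrete collateral layers (terrain, land ownership, and the like) can be integrated with continuously measured image density variables: they reshape P(i) per location without touching the spectral likelihoods.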

  15. Characterizing the Siple Coast Ice Stream System using Satellite Images, Improved Topography, and Integrated Aerogeophysical Measurements

    NASA Technical Reports Server (NTRS)

    Scambos, Ted

    2003-01-01

    A technique for improving elevation maps of the polar ice sheets has been developed using AVHRR images. The technique is based on 'photoclinometry' or 'shape from shading', a technique used in the past for mapping planetary surfaces where little elevation information was available. The fundamental idea behind photoclinometry is using the brightness of imaged areas to infer their surface slope in the sun-illuminated direction. Our version of the method relies on a calibration of the images based on an existing lower-resolution digital elevation model (DEM), and then uses the images to improve the input DEM resolution to the scale of the image data. Most current DEMs covering the ice sheets are based on radar altimetry data and have an inherent resolution of 10 to 25 km at best, although the grid scale of the DEM is often finer. These DEMs are highly accurate (to less than 1 meter), but they report the mean elevation of a broad area, thus erasing smaller features of glaciological interest. AVHRR image data, when accurately geolocated and calibrated, provide surface slope measurements (based on the pixel brightness under known lighting conditions) approximately every 1.1 km. The limitations of the technique are noisiness in the image data, small variations in the albedo of the snow surface, and the integration technique used to create an elevation field from the image-derived slopes. Our study applied the technique to several ice sheet areas having some elevation data: Greenland, the Amery Ice Shelf, the Institute Ice Stream, and the Siple Coast. For the latter, the input data set was laser-altimetry data collected under NSF's SOAR Facility (Support Office for Aerogeophysical Research) over the onset area of the Siple Coast. Over the course of the grant, the technique was greatly improved and modified, significantly improving accuracy and reducing noise from the images.
Several publications resulted from the work, and a follow-on proposal to NASA has been submitted to apply the same method to MODIS data using ICESat and other elevation input information. This follow-on grant will explore two applications that are facilitated by the improved surface morphology characterizations of the ice sheets: accumulation and temperature variations near small undulations in the ice.
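The brightness-to-slope-to-height chain of photoclinometry can be sketched in one dimension (all numbers are illustrative): a Lambertian surface's brightness is the cosine of the solar incidence angle, so each pixel's brightness yields the along-sun slope, and integrating the slopes rebuilds the height profile up to a constant.

```python
import numpy as np

# 1-D photoclinometry sketch: forward-model brightness from a known
# profile, then invert brightness -> slope -> integrated heights.
sun_elev = np.deg2rad(30.0)
x = np.linspace(0.0, 10_000.0, 201)                  # metres along-track
h = 5.0 * np.sin(2 * np.pi * x / 2500.0)             # gentle undulations
slope = np.gradient(h, x)

# Forward model: incidence angle = (90 deg - sun elevation) - surface tilt
bright = np.cos(np.pi / 2 - sun_elev - np.arctan(slope))

# Inversion: brightness -> incidence -> slope, then trapezoidal integration
slope_rec = np.tan(np.pi / 2 - sun_elev - np.arccos(bright))
h_rec = np.concatenate(([0.0], np.cumsum(
    (slope_rec[1:] + slope_rec[:-1]) / 2 * np.diff(x))))
print(np.corrcoef(h - h[0], h_rec)[0, 1] > 0.999)    # profile recovered
```

The real method's extra machinery, calibration against a coarse DEM, albedo variation, and image noise, addresses exactly the places where this idealized inversion breaks down.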

  16. Optical resonance imaging: An optical analog to MRI with sub-diffraction-limited capabilities.

    PubMed

    Allodi, Marco A; Dahlberg, Peter D; Mazuski, Richard J; Davis, Hunter C; Otto, John P; Engel, Gregory S

    2016-12-21

    We propose here optical resonance imaging (ORI), a direct optical analog to magnetic resonance imaging (MRI). The proposed pulse sequence for ORI maps space to time and recovers an image from a heterodyne-detected third-order nonlinear photon echo measurement. As opposed to traditional photon echo measurements, the third pulse in the ORI pulse sequence has significant pulse-front tilt that acts as a temporal gradient. This gradient couples space to time by stimulating the emission of a photon echo signal from different lateral spatial locations of the sample at different times, providing widefield ultrafast microscopy. We circumvent the diffraction limit of the optics by mapping the lateral spatial coordinate of the sample to the emission time of the signal, which can be measured to high precision using interferometric heterodyne detection. This technique is thus an optical analog of MRI, where magnetic-field gradients are used to localize the spin-echo emission to a point below the diffraction limit of the radio-frequency wave used. We calculate the expected ORI signal using 15 fs pulses and 87° of pulse-front tilt, collected using f/2 optics, and find a two-point resolution of 275 nm using 800 nm light that satisfies the Rayleigh criterion. We also derive a general equation for resolution in optical resonance imaging that indicates the possibility of superresolution imaging using this technique. The photon echo sequence also enables spectroscopic determination of the input and output energy. The technique thus correlates the input energy with the final position and energy of the exciton.

  17. Image based SAR product simulation for analysis

    NASA Technical Reports Server (NTRS)

    Domik, G.; Leberl, F.

    1987-01-01

    SAR product simulation serves to predict SAR image gray values for various flight paths. Input typically consists of a digital elevation model and backscatter curves. A new product simulation method is described that also employs a real SAR input image; this can be denoted 'image-based simulation'. Different methods to perform this SAR prediction are presented, and their advantages and disadvantages discussed. Ascending and descending orbit images from NASA's SIR-B experiment were used for verification of the concept: input images from ascending orbits were converted into images from a descending orbit, and the results were compared to the available real imagery to verify that the prediction technique produces meaningful image data.

  18. Cortical connective field estimates from resting state fMRI activity.

    PubMed

    Gravel, Nicolás; Harvey, Ben; Nordhjem, Barbara; Haak, Koen V; Dumoulin, Serge O; Renken, Remco; Curčić-Blake, Branislava; Cornelissen, Frans W

    2014-01-01

    One way to study connectivity in visual cortical areas is by examining spontaneous neural activity. In the absence of visual input, such activity remains shaped by the underlying neural architecture and, presumably, may still reflect visuotopic organization. Here, we applied population connective field (CF) modeling to estimate the spatial profile of functional connectivity in the early visual cortex during resting state functional magnetic resonance imaging (RS-fMRI). This model-based analysis estimates the spatial integration between blood-oxygen-level-dependent (BOLD) signals in distinct cortical visual field maps using fMRI. Just as population receptive field (pRF) mapping predicts the collective neural activity in a voxel as a function of response selectivity to stimulus position in visual space, CF modeling predicts the activity of voxels in one visual area as a function of the aggregate activity in voxels in another visual area. In combination with pRF mapping, CF locations on the cortical surface can be interpreted in visual space, thus enabling reconstruction of visuotopic maps from resting state data. We demonstrate that V1 ➤ V2 and V1 ➤ V3 CF maps estimated from resting state fMRI data show visuotopic organization. Therefore, we conclude that, despite some variability in CF estimates between RS scans, neural properties such as CF maps and CF size can be derived from resting state data.
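As an illustration of the CF idea (not the authors' implementation), a toy fit can predict one target-area voxel's timecourse as a Gaussian-weighted sum of source-area timecourses over cortical distance, grid-searching the Gaussian's center and size. The 1-D position coding and all names here are hypothetical simplifications.

```python
import numpy as np

# Illustrative connective-field (CF) sketch: a V2 voxel's timecourse is
# modeled as a Gaussian-weighted sum of V1 timecourses, with the Gaussian
# defined over (toy, 1-D) positions on the V1 cortical surface.

def fit_cf(v1_ts, v1_pos, v2_ts, sigmas):
    """Grid-search the CF center (a V1 vertex) and size sigma that best
    predict one V2 voxel's timecourse.

    v1_ts  : (n_v1, n_time) V1 timecourses
    v1_pos : (n_v1,) 1-D cortical positions of the V1 vertices
    v2_ts  : (n_time,) target V2 timecourse
    sigmas : candidate CF sizes
    """
    best = (None, None, -np.inf)
    for c in range(len(v1_pos)):
        d2 = (v1_pos - v1_pos[c]) ** 2
        for s in sigmas:
            w = np.exp(-d2 / (2 * s ** 2))       # Gaussian CF profile
            pred = w @ v1_ts                     # weighted sum of V1 signals
            r = np.corrcoef(pred, v2_ts)[0, 1]   # goodness of fit
            if r > best[2]:
                best = (c, s, r)
    return best  # (center index, sigma, correlation)
```

With a pRF map in hand, the recovered center index can then be re-expressed as a visual-field position, which is what allows visuotopic maps to be reconstructed from resting state data.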

  19. Enhanced Imaging of Building Interior for Portable MIMO Through-the-wall Radar

    NASA Astrophysics Data System (ADS)

    Song, Yongping; Zhu, Jiahua; Hu, Jun; Jin, Tian; Zhou, Zhimin

    2018-01-01

    A portable multi-input multi-output (MIMO) radar system is able to image a building interior through aperture synthesis. However, significant grating lobes appear in the direct imaging results, which may degrade the imaging quality of other targets and hinder the extraction of detailed information from the imaged scene. In this paper, a two-stage coherence factor (CF) weighting method is proposed to enhance the imaging quality. After obtaining the sub-images of each spatial sampling position using the conventional CF approach, a window function is employed to calculate the proposed "enhanced CF", adaptive to the spatially varying effects behind the wall, for the combination of these sub-images. A real-data experiment illustrates the better performance of the proposed method in grating lobe suppression and imaging quality enhancement compared to the traditional radar imaging approach.
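The conventional first-stage CF referenced above has a standard closed form: the ratio of coherent to incoherent power across channels, which is near 1 where the channel signals add in phase (a true target) and small for grating lobes and clutter. A minimal sketch (the paper's "enhanced CF" window stage is not reproduced here):

```python
import numpy as np

# Conventional coherence-factor (CF) weighting for array imaging:
# CF = |coherent sum|^2 / (N * incoherent power sum), in [0, 1].

def coherence_factor(channel_signals):
    """channel_signals : complex back-projected samples for one image
    pixel, one per transmit-receive channel."""
    s = np.asarray(channel_signals, dtype=complex)
    n = len(s)
    coherent = np.abs(s.sum()) ** 2
    incoherent = n * np.sum(np.abs(s) ** 2)
    return coherent / incoherent

def cf_weighted_pixel(channel_signals):
    # Weight the conventional delay-and-sum pixel value by its CF.
    s = np.asarray(channel_signals, dtype=complex)
    return coherence_factor(s) * np.abs(s.sum())
```

Applying this per pixel suppresses grating lobes, where channel phases disagree, while leaving focused targets nearly unchanged.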

  20. A multiscale Markov random field model in wavelet domain for image segmentation

    NASA Astrophysics Data System (ADS)

    Dai, Peng; Cheng, Yu; Wang, Shengchun; Du, Xinyu; Wu, Dan

    2017-07-01

    The human vision system is capable of feature detection, learning, and selective attention, with properties of hierarchy and bidirectional connection in the form of neural populations. In this paper, a multiscale Markov random field (MRF) model in the wavelet domain is proposed by mimicking some image processing functions of the vision system. For an input scene, our model provides sparse representations using wavelet transforms and extracts its topological organization using the MRF. In addition, the hierarchy property of the vision system is simulated using a pyramid framework in our model. There are two information flows in our model, i.e., a bottom-up procedure to extract input features and a top-down procedure to provide feedback control. The two procedures are controlled simply by two pyramidal parameters, and some Gestalt laws are also integrated implicitly. Equipped with such biologically inspired properties, our model can be used to accomplish different image segmentation tasks, such as edge detection and region segmentation.

  1. Multilayer modal actuator-based piezoelectric transformers.

    PubMed

    Huang, Yao-Tien; Wu, Wen-Jong; Wang, Yen-Chieh; Lee, Chih-Kung

    2007-02-01

    An innovative multilayer piezoelectric transformer equipped with a full modal filtering input electrode is reported herein. This modal-shaped electrode, based on the orthogonal property of structural vibration modes, is characterized by full modal filtering to ensure that only the desired vibration mode is excited during operation. The newly developed piezoelectric transformer comprises three layers: a multilayered input layer, an insulation layer, and a single output layer. The electrode shape of the input layer is derived from its structural vibration modal shape, which takes advantage of the orthogonal property of the vibration modes to achieve a full modal filtering effect. The insulation layer serves two functions: first, to couple the mechanical vibration energy between the input and output, and second, to provide electrical insulation between the two layers. To fulfill both functions, a low-temperature co-fired ceramic (LTCC) was used to provide the high mechanical rigidity and high electrical insulation. It can be shown that this newly developed piezoelectric transformer has the advantage of possessing a more efficient energy transfer and a wider optimal working frequency range when compared to traditional piezoelectric transformers. A multilayer piezoelectric transformer-based inverter applicable for use in LCD monitors or portable displays is presented as well.

  2. Using spatial information about recurrence risk for robust optimization of dose-painting prescription functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bender, Edward T.

    Purpose: To develop a robust method for deriving dose-painting prescription functions using spatial information about the risk for disease recurrence. Methods: Spatial distributions of radiobiological model parameters are derived from distributions of recurrence risk after uniform irradiation. These model parameters are then used to derive optimal dose-painting prescription functions given a constant mean biologically effective dose. Results: An estimate for the optimal dose distribution can be derived based on spatial information about recurrence risk. Dose painting based on imaging markers that are moderately or poorly correlated with recurrence risk is predicted to potentially result in inferior disease control when compared to the same mean biologically effective dose delivered uniformly. A robust optimization approach may partially mitigate this issue. Conclusions: The methods described here can be used to derive an estimate for a robust, patient-specific prescription function for use in dose painting. Two approximate scaling relationships were observed: First, the optimal choice for the maximum dose differential when using either a linear or two-compartment prescription function is proportional to R, where R is the Pearson correlation coefficient between a given imaging marker and recurrence risk after uniform irradiation. Second, the predicted maximum possible gain in tumor control probability for any robust optimization technique is nearly proportional to the square of R.

  3. Strategies for the generation of parametric images of [11C]PIB with plasma input functions considering discriminations and reproducibility.

    PubMed

    Edison, Paul; Brooks, David J; Turkheimer, Federico E; Archer, Hilary A; Hinz, Rainer

    2009-11-01

    Pittsburgh compound B, or [11C]PIB, is an amyloid imaging agent which shows a clear differentiation between subjects with Alzheimer's disease (AD) and controls. However, the observed signal difference in other forms of dementia such as dementia with Lewy bodies (DLB) is smaller, and mild cognitively impaired (MCI) subjects and some healthy elderly subjects may show intermediate levels of [11C]PIB binding. The cerebellum, a commonly used reference region for non-specific tracer uptake in [11C]PIB studies in AD, may not be valid in prion disorders or monogenic forms of AD. The aims of this work were: (1) to compare methods for generating parametric maps of [11C]PIB retention in tissue using a plasma input function with respect to their ability to discriminate between AD subjects and controls, and (2) to estimate the test-retest reproducibility in AD subjects. Twelve AD subjects (5 of whom underwent a repeat scan within 6 weeks) and 10 control subjects had 90-minute [11C]PIB dynamic PET scans, and arterial plasma input functions were measured. Parametric maps were generated with graphical analysis of reversible binding (Logan plot), irreversible binding (Patlak plot), and spectral analysis. Between-group differentiation was calculated using Student's t-test, and comparisons between different methods were made using p values. Reproducibility was assessed by intraclass correlation coefficients (ICC). We found that the 75 min value of the impulse response function showed the best group differentiation and had a higher ICC than volume of distribution maps generated from Logan and spectral analysis. Patlak analysis of [11C]PIB binding was the least reproducible.
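For context, Logan graphical analysis with a plasma input, one of the compared methods, fits a line to the late-time portion of the plot of integral(Ct)/Ct versus integral(Cp)/Ct; the slope estimates the total volume of distribution VT. A minimal sketch with illustrative variable names (real pipelines handle frame weighting, metabolite correction, and voxelwise fitting):

```python
import numpy as np

# Logan graphical analysis sketch for a reversible tracer with an arterial
# plasma input function: after time t*, the Logan plot becomes linear with
# slope equal to VT (total volume of distribution).

def logan_vt(t, ct, cp, t_star):
    """t      : frame mid-times (min), increasing
    ct     : tissue time-activity curve
    cp     : metabolite-corrected plasma input function
    t_star : start of the linear segment (min)"""
    # Running integrals of the tissue and plasma curves (trapezoidal rule).
    int_ct = np.concatenate(([0.0], np.cumsum((ct[1:] + ct[:-1]) / 2.0 * np.diff(t))))
    int_cp = np.concatenate(([0.0], np.cumsum((cp[1:] + cp[:-1]) / 2.0 * np.diff(t))))
    m = t >= t_star
    x = int_cp[m] / ct[m]
    y = int_ct[m] / ct[m]
    slope, intercept = np.polyfit(x, y, 1)
    return slope  # estimate of VT
```

For a one-tissue compartment model the relation integral(Ct)/Ct = VT * integral(Cp)/Ct - 1/k2 holds exactly, which is why the slope recovers VT once the plot linearizes.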

  4. Estimation of parameters in Shot-Noise-Driven Doubly Stochastic Poisson processes using the EM algorithm--modeling of pre- and postsynaptic spike trains.

    PubMed

    Mino, H

    2007-01-01

    This work estimates the parameters, namely the impulse response (IR) functions of the linear time-invariant systems generating the intensity processes, in Shot-Noise-Driven Doubly Stochastic Poisson Processes (SND-DSPPs), under the assumption that multivariate presynaptic spike trains and postsynaptic spike trains can be modeled by SND-DSPPs. An explicit formula for estimating the IR functions from observations of the multivariate input processes of the linear systems and the corresponding counting process (the output process) is derived utilizing the expectation-maximization (EM) algorithm. The validity of the estimation formula was verified through Monte Carlo simulations in which two presynaptic spike trains and one postsynaptic spike train were assumed to be observable. The IR functions estimated on the basis of the proposed identification method were close to the true IR functions. The proposed method will play an important role in identifying the input-output relationship of pre- and postsynaptic neural spike trains in practical situations.

  5. Comparison of SOM point densities based on different criteria.

    PubMed

    Kohonen, T

    1999-11-15

    Point densities of model (codebook) vectors in self-organizing maps (SOMs) are evaluated in this article. For a few one-dimensional SOMs with finite grid lengths and a given probability density function of the input, the numerically exact point densities have been computed. The point density derived from the SOM algorithm turned out to be different from that minimizing the SOM distortion measure, showing that the model vectors produced by the basic SOM algorithm in general do not exactly coincide with the optimum of the distortion measure. A new computing technique based on the calculus of variations has been introduced. It was applied to the computation of point densities derived from the distortion measure for both the classical vector quantization and the SOM with general but equal dimensionality of the input vectors and the grid, respectively. The power laws in the continuum limit obtained in these cases were found to be identical.
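For readers unfamiliar with the basic SOM algorithm whose point densities are analyzed above, a minimal 1-D scalar version is sketched below; the grid size, learning rate, and neighborhood radius are illustrative choices, not those of the article.

```python
import numpy as np

# Minimal 1-D SOM sketch (scalar inputs, linear grid). The "point density"
# question asks how the trained model vectors m_i distribute relative to the
# input probability density p(x).

def train_som_1d(samples, n_units, n_epochs=20, lr=0.1, radius=1):
    # Initialize model vectors uniformly over the data range, sorted.
    m = np.sort(np.random.uniform(samples.min(), samples.max(), n_units))
    for _ in range(n_epochs):
        for x in samples:
            c = np.argmin(np.abs(m - x))                 # best-matching unit
            lo, hi = max(0, c - radius), min(n_units, c + radius + 1)
            m[lo:hi] += lr * (x - m[lo:hi])              # neighborhood update
    return m
```

Comparing a histogram of the trained `m` with the input density is the empirical counterpart of the article's exact computations: the SOM's point density follows a power of p(x) that differs from the one minimizing the distortion measure.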

  6. Forest above ground biomass estimation and forest/non-forest classification for Odisha, India, using L-band Synthetic Aperture Radar (SAR) data

    NASA Astrophysics Data System (ADS)

    Suresh, M.; Kiran Chand, T. R.; Fararoda, R.; Jha, C. S.; Dadhwal, V. K.

    2014-11-01

    Tropical forests account for approximately 40% of the total carbon found in terrestrial biomass. In this context, forest/non-forest classification and estimation of forest above ground biomass over tropical regions are very important and relevant to understanding the contribution of tropical forests to global biogeochemical cycles, especially in terms of carbon pools and fluxes. Information on the spatio-temporal biomass distribution acts as a key input to Reducing Emissions from Deforestation and forest Degradation Plus (REDD+) action plans. This necessitates precise and reliable methods to estimate forest biomass and to reduce uncertainties in existing biomass quantification scenarios. The use of backscatter information from a host of all-weather-capable Synthetic Aperture Radar (SAR) systems during the recent past has demonstrated the potential of SAR data in forest above ground biomass estimation and forest/non-forest classification. In the present study, Advanced Land Observing Satellite (ALOS) / Phased Array L-band Synthetic Aperture Radar (PALSAR) data along with field inventory data have been used for forest above ground biomass estimation and forest/non-forest classification over Odisha state, India. The ALOS-PALSAR 50 m spatial resolution orthorectified and radiometrically corrected HH/HV dual-polarization data (digital numbers) for the year 2010 were converted to backscattering coefficient images (Shimada et al., 2009). The tree-level measurements collected during the field inventory (2009-10) on girth at breast height (GBH, at 1.3 m above ground) and height of all individual trees at the plot level (plot size 0.1 ha) were converted to biomass density using species-specific allometric equations and wood densities. The field-inventory-based biomass estimates were empirically integrated with ALOS-PALSAR backscatter coefficients to derive spatial forest above ground biomass estimates for the study area. 
Further, the Support Vector Machine (SVM) radial basis function classification technique was employed to carry out binary (forest/non-forest) classification using the ALOS-PALSAR HH and HV backscatter coefficient images and field inventory data. Haralick's grey-level co-occurrence matrix (GLCM) texture measures were determined on the HV backscatter image of Odisha for the year 2010. The PALSAR HH and HV backscatter coefficient images, their difference (HH-HV), and eight HV-backscatter-based textural parameters (Mean, Variance, Dissimilarity, Contrast, Angular Second Moment, Homogeneity, Correlation and Contrast) were used as input parameters for the SVM tool. Ground-based inputs for forest/non-forest were taken from field inventory data and high-resolution Google maps. Results suggested a significant relationship between the HV backscatter coefficient and field-based biomass (R2 = 0.508, p = 0.55) compared to HH, with biomass values ranging from 5 to 365 t/ha. The spatial variability of biomass with reference to different forest types is in good agreement. The forest/non-forest classified map suggested a total forest cover of 50214 km2 with an overall accuracy of 92.54%. The forest/non-forest information derived from the present study showed good spatial agreement with the standard forest cover map of the Forest Survey of India (FSI) and the corresponding published area of 50575 km2. Results are discussed in the paper.
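The DN-to-backscatter conversion cited above is commonly written as sigma0(dB) = 10 * log10(DN^2) + CF, with CF = -83.0 dB for these PALSAR products; the subsequent empirical integration with field plots can then be sketched as a simple regression. The regression form and any fitted coefficients below are illustrative, not the study's.

```python
import numpy as np

# DN-to-backscatter conversion commonly used for calibrated ALOS-PALSAR
# orthorectified products (calibration factor -83.0 dB), followed by a
# hypothetical linear biomass regression of the kind fitted in the study.

def dn_to_sigma0_db(dn, cal_factor_db=-83.0):
    """Convert PALSAR digital numbers to sigma-naught in dB."""
    dn = np.asarray(dn, dtype=float)
    return 10.0 * np.log10(dn ** 2) + cal_factor_db

def fit_biomass_model(sigma0_hv_db, agb_t_ha):
    """Fit AGB = a * sigma0_HV + b by least squares, mirroring the
    empirical integration of field-plot biomass with backscatter."""
    a, b = np.polyfit(sigma0_hv_db, agb_t_ha, 1)
    return a, b
```

Applying the fitted (a, b) to the full sigma0 image yields the spatial above ground biomass map described in the record.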

  7. The 3D modeling of high numerical aperture imaging in thin films

    NASA Technical Reports Server (NTRS)

    Flagello, D. G.; Milster, Tom

    1992-01-01

    A modeling technique is described which is used to explore three-dimensional (3D) image irradiance distributions formed by high numerical aperture (NA > 0.5) lenses in homogeneous, linear films. This work uses a 3D modeling approach that is based on a plane-wave decomposition in the exit pupil. Each plane wave component is weighted by factors due to polarization, aberration, and input amplitude and phase terms. This is combined with a modified thin-film matrix technique to derive the total field amplitude at each point in a film by a coherent vector sum over all plane waves. Then the total irradiance is calculated. The model is used to show how asymmetries present in the polarized image change with the influence of a thin film through varying degrees of focus.

  8. Applicability of common measures in multifocus image fusion comparison

    NASA Astrophysics Data System (ADS)

    Vajgl, Marek

    2017-11-01

    Image fusion is an image processing area aimed at fusing multiple input images to achieve an output image that is in some respect better than each of the input ones. In the case of "multifocus fusion", the input images capture the same scene but differ in focus distance, and the aim is to obtain an image that is sharp in all of its areas. There are several different approaches and methods used to solve this problem; however, which one is best remains an open question. This work describes research covering the field of common measures, asking whether some of them can be used as a quality measure for evaluating fusion results.
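One widely used baseline to which such quality measures are applied is per-pixel selection by a local sharpness measure: each output pixel is taken from whichever input is locally sharpest. A minimal pure-NumPy sketch (not a method from the paper), using local variance as the focus measure:

```python
import numpy as np

# Multifocus fusion baseline: pick, per pixel, the input image whose local
# k x k neighborhood has the highest variance (i.e. is sharpest there).

def local_variance(img, k=3):
    """Local variance in a k x k window via shifted views (zero-padded)."""
    pad = k // 2
    p = np.pad(img.astype(float), pad)
    h, w = img.shape
    win = sum(p[i:i + h, j:j + w] for i in range(k) for j in range(k))
    win2 = sum(p[i:i + h, j:j + w] ** 2 for i in range(k) for j in range(k))
    n = k * k
    return win2 / n - (win / n) ** 2  # E[x^2] - E[x]^2

def fuse_multifocus(images, k=3):
    """Select, per pixel, the input image with the highest local variance."""
    stack = np.stack(images)
    focus = np.stack([local_variance(im, k) for im in images])
    choice = focus.argmax(axis=0)
    return np.take_along_axis(stack, choice[None], axis=0)[0]
```

Evaluating such a fused result without a ground-truth all-in-focus image is exactly where the reference-free quality measures studied in the paper come in.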

  9. Two Unipolar Terminal-Attractor-Based Associative Memories

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang; Wu, Chwan-Hwa

    1995-01-01

    Two unipolar mathematical models of electronic neural networks functioning as terminal-attractor-based associative memories (TABAMs) have been developed. The models comprise sets of equations describing interactions between time-varying inputs and outputs of the neural-network memory, regarded as a dynamical system. They simplify the design and operation of an optoelectronic processor implementing a TABAM that performs associative recall of images. The TABAM concept is described in "Optoelectronic Terminal-Attractor-Based Associative Memory" (NPO-18790). An experimental optoelectronic apparatus that performed associative recall of binary images is described in "Optoelectronic Inner-Product Neural Associative Memory" (NPO-18491).

  10. Network compensation for missing sensors

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Mulligan, Jeffrey B.

    1991-01-01

    A network learning translation invariance algorithm to compute interpolation functions is presented. This algorithm, with one fixed receptive field, can construct a linear transformation compensating for gain changes, sensor position jitter, and sensor loss when there are enough remaining sensors to adequately sample the input images. However, when the images are undersampled and complete compensation is not possible, the algorithm needs to be modified. For moderate sensor losses, the algorithm works if the transformation weight adjustment is restricted to the weights of output units affected by the loss.

  11. Dual-input two-compartment pharmacokinetic model of dynamic contrast-enhanced magnetic resonance imaging in hepatocellular carcinoma.

    PubMed

    Yang, Jian-Feng; Zhao, Zhen-Hua; Zhang, Yu; Zhao, Li; Yang, Li-Ming; Zhang, Min-Ming; Wang, Bo-Yin; Wang, Ting; Lu, Bao-Chun

    2016-04-07

    To investigate the feasibility of a dual-input two-compartment tracer kinetic model for evaluating tumorous microvascular properties in advanced hepatocellular carcinoma (HCC). From January 2014 to April 2015, we prospectively measured and analyzed pharmacokinetic parameters [transfer constant (Ktrans), plasma flow (Fp), permeability surface area product (PS), efflux rate constant (kep), extravascular extracellular space volume ratio (ve), blood plasma volume ratio (vp), and hepatic perfusion index (HPI)] using dual-input two-compartment tracer kinetic models [a dual-input extended Tofts model and a dual-input 2-compartment exchange model (2CXM)] in 28 consecutive HCC patients. The well-known consensus that HCC is a hypervascular tumor supplied by the hepatic artery and the portal vein was used as a reference standard. A paired Student's t-test and a nonparametric paired Wilcoxon rank sum test were used to compare the equivalent pharmacokinetic parameters derived from the two models, and Pearson correlation analysis was also applied to observe the correlations among all equivalent parameters. The tumor size and pharmacokinetic parameters were tested by Pearson correlation analysis, while correlations among stage, tumor size, and all pharmacokinetic parameters were assessed by Spearman correlation analysis. The Fp value was greater than the PS value (Fp = 1.07 mL/mL per minute, PS = 0.19 mL/mL per minute) in the dual-input 2CXM; HPI was 0.66 and 0.63 in the dual-input extended Tofts model and the dual-input 2CXM, respectively. There were no significant differences in the kep, vp, or HPI between the dual-input extended Tofts model and the dual-input 2CXM (P = 0.524, 0.569, and 0.622, respectively). 
All equivalent pharmacokinetic parameters, except for ve, were correlated in the two dual-input two-compartment pharmacokinetic models; both Fp and PS in the dual-input 2CXM were correlated with Ktrans derived from the dual-input extended Tofts model (P = 0.002, r = 0.566; P = 0.002, r = 0.570); kep, vp, and HPI between the two kinetic models were positively correlated (P = 0.001, r = 0.594; P = 0.0001, r = 0.686; P = 0.04, r = 0.391, respectively). In the dual-input extended Tofts model, ve was significantly less than that in the dual-input 2CXM (P = 0.004), and no significant correlation was seen between the two tracer kinetic models (P = 0.156, r = 0.276). Neither tumor size nor tumor stage was significantly correlated with any of the pharmacokinetic parameters obtained from the two models (P > 0.05). A dual-input two-compartment pharmacokinetic model (a dual-input extended Tofts model or a dual-input 2CXM) can be used in assessing the microvascular physiopathological properties before the treatment of advanced HCC. The dual-input extended Tofts model may be more stable in measuring ve; however, the dual-input 2CXM may be more detailed and accurate in measuring microvascular permeability.
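The forward model behind the dual-input extended Tofts fit can be sketched as follows: the liver input is an HPI-weighted mix of the arterial and portal-venous input functions, and tissue concentration follows the extended Tofts form (vascular term plus a convolution with an exponential residue). All curves and parameter values here are illustrative assumptions of this sketch.

```python
import numpy as np

# Dual-input extended Tofts sketch:
#   C_in(t) = HPI * C_a(t) + (1 - HPI) * C_pv(t)
#   C_t(t)  = vp * C_in(t) + Ktrans * [C_in * exp(-kep * t)](t)

def dual_input_extended_tofts(t, ca, cpv, ktrans, kep, vp, hpi):
    """t       : time points (min), uniformly spaced
    ca, cpv : arterial and portal-venous input concentration curves
    ktrans  : transfer constant (1/min)
    kep     : efflux rate constant (1/min)
    vp      : plasma volume fraction
    hpi     : hepatic perfusion index (arterial fraction of the input)"""
    cin = hpi * ca + (1.0 - hpi) * cpv            # dual vascular input
    dt = t[1] - t[0]
    residue = np.exp(-kep * t)                    # tissue residue function
    conv = np.convolve(cin, residue)[:len(t)] * dt
    return vp * cin + ktrans * conv
```

Fitting (Ktrans, kep, vp, HPI) to a measured tissue curve given the two inputs is then a standard nonlinear least-squares problem; the 2CXM replaces the exponential residue with a two-compartment exchange response.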

  12. Design of a reading test for low-vision image warping

    NASA Astrophysics Data System (ADS)

    Loshin, David S.; Wensveen, Janice; Juday, Richard D.; Barton, R. Shane

    1993-08-01

    NASA and the University of Houston College of Optometry are examining the efficacy of image warping as a possible prosthesis for at least two forms of low vision -- maculopathy and retinitis pigmentosa. Before incurring the expense of reducing the concept to practice, one would wish to have confidence that a worthwhile improvement in visual function would result. NASA's Programmable Remapper (PR) can warp an input image onto arbitrary geometric coordinate systems at full video rate, and it has recently been upgraded to accept computer- generated video text. We have integrated the Remapper with an SRI eye tracker to simulate visual malfunction in normal observers. A reading performance test has been developed to determine if the proposed warpings yield an increase in visual function; i.e., reading speed. We describe the preliminary experimental results of this reading test with a simulated central field defect with and without remapped images.

  13. Design of a reading test for low vision image warping

    NASA Technical Reports Server (NTRS)

    Loshin, David S.; Wensveen, Janice; Juday, Richard D.; Barton, R. S.

    1993-01-01

    NASA and the University of Houston College of Optometry are examining the efficacy of image warping as a possible prosthesis for at least two forms of low vision - maculopathy and retinitis pigmentosa. Before incurring the expense of reducing the concept to practice, one would wish to have confidence that a worthwhile improvement in visual function would result. NASA's Programmable Remapper (PR) can warp an input image onto arbitrary geometric coordinate systems at full video rate, and it has recently been upgraded to accept computer-generated video text. We have integrated the Remapper with an SRI eye tracker to simulate visual malfunction in normal observers. A reading performance test has been developed to determine if the proposed warpings yield an increase in visual function; i.e., reading speed. We will describe the preliminary experimental results of this reading test with a simulated central field defect with and without remapped images.

  14. Effect of basal forebrain stimulation on extracellular acetylcholine release and blood flow in the olfactory bulb.

    PubMed

    Uchida, Sae; Kagitani, Fusako

    2017-05-12

    The olfactory bulb receives cholinergic basal forebrain input, as does the neocortex; however, the in vivo physiological functions regarding the release of extracellular acetylcholine and regulation of regional blood flow in the olfactory bulb are unclear. We used in vivo microdialysis to measure the extracellular acetylcholine levels in the olfactory bulb of urethane-anesthetized rats. Focal chemical stimulation by microinjection of L-glutamate into the horizontal limb of the diagonal band of Broca (HDB) in the basal forebrain, which is the main source of cholinergic input to the olfactory bulb, increased extracellular acetylcholine release in the ipsilateral olfactory bulb. When the regional cerebral blood flow was measured using laser speckle contrast imaging, the focal chemical stimulation of the HDB did not significantly alter the blood flow in the olfactory bulb, while increases were observed in the neocortex. Our results suggest a functional difference between the olfactory bulb and neocortex regarding cerebral blood flow regulation through the release of acetylcholine by cholinergic basal forebrain input.

  15. Image registration with auto-mapped control volumes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schreibmann, Eduard; Xing Lei

    2006-04-15

    Many image registration algorithms rely on the use of homologous control points on the two input image sets to be registered. In reality, the interactive identification of the control points on both images is tedious, difficult, and often a source of error. We propose a two-step algorithm to automatically identify homologous regions that are used as a priori information during the image registration procedure. First, a number of small control volumes having distinct anatomical features are identified on the model image in a somewhat arbitrary fashion. Instead of attempting to find their correspondences in the reference image through user interaction, in the proposed method each of the control regions is mapped to the corresponding part of the reference image by using an automated image registration algorithm. A normalized cross-correlation (NCC) function or mutual information was used as the auto-mapping metric, and a limited-memory Broyden-Fletcher-Goldfarb-Shanno algorithm (L-BFGS) was employed to optimize the function to find the optimal mapping. For rigid registration, the transformation parameters of the system are obtained by averaging those derived from the individual control volumes. In our deformable calculation, the mapped control volumes are treated as the nodes or control points with known positions on the two images. If the number of control volumes is not enough to cover the whole image to be registered, additional nodes are placed on the model image and then located on the reference image in a manner similar to the conventional B-spline deformable calculation. For deformable registration, the established correspondence by the auto-mapped control volumes provides valuable guidance for the registration calculation and greatly reduces the dimensionality of the problem. 
The performance of the two-step registration was evaluated on three rigid registration cases (two PET-CT registrations and a brain MRI-CT registration) and one deformable registration of inhale and exhale phases of a lung 4D CT. Algorithm convergence was confirmed by starting the registration calculations from a large number of initial transformation parameters. An accuracy of ~2 mm was achieved for both deformable and rigid registration. The proposed image registration method greatly reduces the complexity involved in the determination of homologous control points and allows us to minimize the subjectivity and uncertainty associated with the current manual interactive approach. Patient studies have indicated that the two-step registration technique is fast, reliable, and provides a valuable tool to facilitate both rigid and nonrigid image registrations.
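The auto-mapping step (an NCC metric optimized by L-BFGS) can be sketched for a single control volume under a pure 2-D translation; the paper's full method also handles rotation and B-spline deformation, which are omitted here, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.ndimage import shift as nd_shift

# Auto-mapping sketch for one control volume: maximize normalized
# cross-correlation (NCC) between a model-image patch and the reference
# image under a 2-D translation, using the L-BFGS-B optimizer.

def ncc(a, b):
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return (a * b).mean()

def map_control_volume(patch, reference, origin, x0=(0.0, 0.0)):
    """Find the translation (dy, dx) aligning `patch` (taken from the
    model image at `origin`) with the reference image."""
    sy, sx = patch.shape
    oy, ox = origin

    def neg_ncc(p):
        # Resample the reference under the candidate translation.
        moved = nd_shift(reference, -np.asarray(p), order=1)
        return -ncc(patch, moved[oy:oy + sy, ox:ox + sx])

    res = minimize(neg_ncc, x0, method="L-BFGS-B")
    return res.x
```

Repeating this for each control volume and averaging the recovered translations gives the rigid transformation described in the record; the per-volume results serve as known node correspondences in the deformable case.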

  16. Comparison of CT-derived Ventilation Maps with Deposition Patterns of Inhaled Microspheres in Rats

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacob, Rick E.; Lamm, W. J.; Einstein, Daniel R.

    2015-04-01

    Purpose: Computer models for inhalation toxicology and drug-aerosol delivery studies rely on ventilation pattern inputs for predictions of particle deposition and vapor uptake. However, changes in lung mechanics due to disease can impact airflow dynamics and model results. It has been demonstrated that non-invasive, in vivo, 4DCT imaging (3D imaging at multiple time points in the breathing cycle) can be used to map heterogeneities in ventilation patterns under healthy and disease conditions. The purpose of this study was to validate ventilation patterns measured from CT imaging by exposing the same rats to an aerosol of fluorescent microspheres (FMS) and examining particle deposition patterns using cryomicrotome imaging. Materials and Methods: Six male Sprague-Dawley rats were intratracheally instilled with elastase in a single lobe to induce a heterogeneous disease. After four weeks, rats were imaged over the breathing cycle by CT, then immediately exposed to an aerosol of ~1 µm FMS for ~5 minutes. After the exposure, the lungs were excised and prepared for cryomicrotome imaging, where a 3D image of FMS deposition was acquired using serial sectioning. Cryomicrotome images were spatially registered to match the live CT images to facilitate direct quantitative comparisons of FMS signal intensity with the CT-based ventilation maps. Results: Comparisons of fractional ventilation in contiguous, non-overlapping, 3D regions between CT-based ventilation maps and FMS images showed strong correlations in fractional ventilation (r = 0.888, p < 0.0001). Conclusion: We conclude that ventilation maps derived from CT imaging are predictive of the 1 µm aerosol deposition used in ventilation-perfusion heterogeneity inhalation studies.

  17. Image Processing Research

    DTIC Science & Technology

    1975-09-30

    systems, a linear model results in an object f being mapped into an image g by a point spread function matrix H. Thus, with additive noise n, g = Hf + n (1). The simplest... linear models for imaging systems are given by space-invariant point spread functions (SIPSF), in which case H is block circulant. If the linear model is... I1,...,IM is a set of two-dimensional indices, each distinct and prior to k. Modeling Procedure: To derive the linear predictor (block LP of figure
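    The snippet above describes the standard discrete imaging model g = Hf + n. A small sketch (hypothetical 8×8 object and PSF; the noise term is omitted) shows why a space-invariant PSF makes H block circulant: applying H is then a 2-D circular convolution, which the 2-D DFT diagonalizes.

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.random((8, 8))            # object
h = np.zeros((8, 8))              # point spread function (small blur, wrapped)
h[:2, :2] = 0.2
h[-1, :2] = 0.1
h[:2, -1] = 0.05

# For a block-circulant H, g = H f is a 2-D circular convolution,
# computable in the Fourier domain (convolution theorem):
g_fft = np.real(np.fft.ifft2(np.fft.fft2(h) * np.fft.fft2(f)))

# Explicit circular convolution for comparison
g_direct = np.zeros_like(f)
for i in range(8):
    for j in range(8):
        for k in range(8):
            for l in range(8):
                g_direct[i, j] += h[k, l] * f[(i - k) % 8, (j - l) % 8]
```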

  18. Assessing stream bank condition using airborne LiDAR and high spatial resolution image data in temperate semirural areas in Victoria, Australia

    NASA Astrophysics Data System (ADS)

    Johansen, Kasper; Grove, James; Denham, Robert; Phinn, Stuart

    2013-01-01

    Stream bank condition is an important physical form indicator for streams related to the environmental condition of riparian corridors. This research developed and applied an approach for mapping bank condition from airborne light detection and ranging (LiDAR) and high-spatial resolution optical image data in a temperate forest/woodland/urban environment. Field observations of bank condition were related to LiDAR and optical image-derived variables, including bank slope, plant projective cover, bank-full width, valley confinement, bank height, bank top crenulation, and ground vegetation cover. Image-based variables, showing correlation with the field measurements of stream bank condition, were used as input to a cumulative logistic regression model to estimate and map bank condition. The highest correlation was achieved between field-assessed bank condition and image-derived average bank slope (R2=0.60, n=41), ground vegetation cover (R=0.43, n=41), bank width/height ratio (R=0.41, n=41), and valley confinement (producer's accuracy=100%, n=9). Cross-validation showed an average misclassification error of 0.95 on an ordinal scale from 0 to 4 using the developed model. This approach was developed to support the remotely sensed mapping of stream bank condition for 26,000 km of streams in Victoria, Australia, from 2010 to 2012.

  19. Wavelength feature mapping as a proxy to mineral chemistry for investigating geologic systems: An example from the Rodalquilar epithermal system

    NASA Astrophysics Data System (ADS)

    van der Meer, Freek; Kopačková, Veronika; Koucká, Lucie; van der Werff, Harald M. A.; van Ruitenbeek, Frank J. A.; Bakker, Wim H.

    2018-02-01

    The final product of a geologic remote sensing data analysis using multispectral and hyperspectral images is a mineral (abundance) map. Multispectral data, such as ASTER, Landsat, SPOT, and Sentinel-2, typically allow qualitative estimates of which minerals are present in a pixel, while hyperspectral data allow these estimates to be quantified. Most image classification or spectral processing approaches require endmembers as input. An alternative approach to classification is to derive absorption feature characteristics from hyperspectral data, such as the wavelength position of the deepest absorption, the depth of the absorption, and the symmetry of the absorption feature. Two approaches are presented, tested and compared in this paper: the 'Wavelength Mapper' and the 'QuanTools'. Although these algorithms use different mathematical solutions to derive absorption feature wavelength and depth, and use different image post-processing, the results are consistent, comparable and reproducible. The wavelength images can be directly linked to mineral type and abundance, but more importantly also to mineral chemical composition and subtle changes thereof. This in turn allows hyperspectral data to be interpreted in terms of changes in mineral chemistry, a proxy for the pressure and temperature of mineral formation. We show the case of the Rodalquilar epithermal system of the southern Spanish Cabo de Gata volcanic area using HyMAP airborne hyperspectral images.
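    A minimal sketch of deriving absorption-feature characteristics (wavelength position, depth, symmetry) from a single spectrum, in the spirit of the wavelength-mapping approach described above. The simple endpoint continuum removal and the synthetic spectrum are assumptions for illustration, not the Wavelength Mapper or QuanTools algorithms.

```python
import numpy as np

def absorption_feature(wl, refl):
    """Continuum removal (straight line between the spectrum endpoints),
    then wavelength position, depth, and symmetry of the deepest absorption."""
    continuum = np.interp(wl, [wl[0], wl[-1]], [refl[0], refl[-1]])
    cr = refl / continuum                      # continuum-removed spectrum
    i = int(np.argmin(cr))
    depth = 1.0 - cr[i]
    dx = wl[1] - wl[0]                         # uniform sampling assumed
    area = (1.0 - cr).sum() * dx               # total feature area
    left = (1.0 - cr[:i + 1]).sum() * dx       # area left of the minimum
    return wl[i], depth, left / area           # symmetry ~0.5 if symmetric

# Synthetic reflectance spectrum with a Gaussian absorption centred at 2200 nm
wl = np.linspace(2000.0, 2400.0, 401)
refl = 1.0 - 0.3 * np.exp(-0.5 * ((wl - 2200.0) / 30.0) ** 2)
pos, depth, sym = absorption_feature(wl, refl)
```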

  20. Extending existing structural identifiability analysis methods to mixed-effects models.

    PubMed

    Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D

    2018-01-01

    The concept of structural identifiability for state-space models is expanded to cover mixed-effects state-space models. Two methods applicable for the analytical study of the structural identifiability of mixed-effects models are presented. The two methods are based on previously established techniques for non-mixed-effects models; namely the Taylor series expansion and the input-output form approach. By generating an exhaustive summary, and by assuming an infinite number of subjects, functions of random variables can be derived which in turn determine the distribution of the system's observation function(s). By considering the uniqueness of the analytical statistical moments of the derived functions of the random variables, the structural identifiability of the corresponding mixed-effects model can be determined. The two methods are applied to a set of examples of mixed-effects models to illustrate how they work in practice. Copyright © 2017 Elsevier Inc. All rights reserved.
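    As a minimal illustration (not an example from the paper) of how moments of Taylor coefficients yield structural identifiability for a mixed-effects model, consider a one-compartment model with a random effect on the rate constant:

```latex
\dot{x}(t) = -(a+\eta)\,x(t), \qquad x(0)=1, \qquad y(t)=b\,x(t), \qquad \eta \sim \mathcal{N}(0,\omega^{2}),
```

```latex
y(0) = b, \qquad \dot{y}(0) = -(a+\eta)\,b, \qquad
\mathbb{E}\!\left[\dot{y}(0)\right] = -ab, \qquad
\operatorname{Var}\!\left[\dot{y}(0)\right] = b^{2}\omega^{2}.
```

    Assuming an infinite number of subjects, the exhaustive summary {b, ab, b²ω²} determines (a, b, ω²) uniquely, so this toy mixed-effects model is globally structurally identifiable.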

  1. Application of a rat hindlimb model: a prediction of force spaces reachable through stimulation of nerve fascicles.

    PubMed

    Johnson, Will L; Jindrich, Devin L; Zhong, Hui; Roy, Roland R; Edgerton, V Reggie

    2011-12-01

    A device to generate standing or locomotion through chronically placed electrodes has not been fully developed due in part to limitations of clinical experimentation and the high number of muscle activation inputs of the leg. We investigated the feasibility of functional electrical stimulation paradigms that minimize the input dimensions for controlling the limbs by stimulating at nerve fascicles, utilizing a model of the rat hindlimb, which combined previously collected morphological data with muscle physiological parameters presented herein. As validation of the model, we investigated the suitability of a lumped-parameter model for the prediction of muscle activation during dynamic tasks. Using the validated model, we found that the space of forces producible through activation of muscle groups sharing common nerve fascicles was nonlinearly dependent on the number of discrete muscle groups that could be individually activated (equivalently, the neuroanatomical level of activation). Seven commonly innervated muscle groups were sufficient to produce 78% of the force space producible through individual activation of the 42 modeled hindlimb muscles. This novel, neuroanatomically derived reduction in input dimension emphasizes the potential to simplify controllers for functional electrical stimulation to improve functional recovery after a neuromuscular injury.
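    The force-space comparison in this record can be illustrated in 2-D: the set of endpoint forces reachable with activations a_i in [0,1] is a zonotope (the convex hull of the {0,1}-corner activation sums), and grouping muscles that share a nerve fascicle shrinks it. The muscle force vectors and groupings below are hypothetical, not the paper's 42-muscle model.

```python
from itertools import product

def hull_area(points):
    """Area of the convex hull of 2-D points (Andrew's monotone chain + shoelace)."""
    pts = sorted(set(points))
    if len(pts) < 3:
        return 0.0
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    hull = []
    for chain in (pts, pts[::-1]):          # lower then upper chain
        part = []
        for p in chain:
            while len(part) >= 2 and cross(part[-2], part[-1], p) <= 0:
                part.pop()
            part.append(p)
        hull += part[:-1]
    return abs(sum(x1*y2 - x2*y1
                   for (x1, y1), (x2, y2) in zip(hull, hull[1:] + hull[:1]))) / 2

def force_space(vectors):
    """Corner sums of sum(a_i * f_i) for a_i in {0,1}; their convex hull is the
    full reachable force set (a zonotope) for continuous 0 <= a_i <= 1."""
    return [tuple(sum(a*v for a, v in zip(act, comp))
                  for comp in zip(*vectors))
            for act in product((0, 1), repeat=len(vectors))]

# Hypothetical 2-D endpoint force vectors for four "muscles"
muscles = [(2.0, 0.0), (0.0, 1.0), (1.0, 1.0), (-1.0, 1.0)]
area_individual = hull_area(force_space(muscles))

# One shared activation per fascicle: group (muscle 0 + muscle 1), (muscle 2 + muscle 3)
groups = [(2.0, 1.0), (0.0, 2.0)]
area_grouped = hull_area(force_space(groups))
fraction = area_grouped / area_individual
```

    With these vectors the grouped activation reaches 40% of the individually reachable force-space area, illustrating the nonlinear dependence on the number of independently activatable groups.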

  2. Application of a Rat Hindlimb Model: A Prediction of Force Spaces Reachable Through Stimulation of Nerve Fascicles

    PubMed Central

    Johnson, Will L.; Jindrich, Devin L.; Zhong, Hui; Roy, Roland R.

    2011-01-01

    A device to generate standing or locomotion through chronically placed electrodes has not been fully developed due in part to limitations of clinical experimentation and the high number of muscle activation inputs of the leg. We investigated the feasibility of functional electrical stimulation paradigms that minimize the input dimensions for controlling the limbs by stimulating at nerve fascicles, utilizing a model of the rat hindlimb which combined previously collected morphological data with muscle physiological parameters presented herein. As validation of the model we investigated the suitability of a lumped-parameter model for prediction of muscle activation during dynamic tasks. Using the validated model we found that the space of forces producible through activation of muscle groups sharing common nerve fascicles was nonlinearly dependent on the number of discrete muscle groups that could be individually activated (equivalently, the neuroanatomical level of activation). Seven commonly innervated muscle groups were sufficient to produce 78% of the force space producible through individual activation of the 42 modeled hindlimb muscles. This novel, neuroanatomically derived reduction in input dimension emphasizes the potential to simplify controllers for functional electrical stimulation to improve functional recovery after a neuromuscular injury. PMID:21244999

  3. Radar scattering functions using Itokawa as ground truth

    NASA Astrophysics Data System (ADS)

    Nolan, M.; Bramson, A.; Magri, C.

    2014-07-01

    Determining shape models from radar and lightcurve data is an inverse problem that involves computing the expected radar image that would result from a given shape and viewing geometry. The original work of Hudson [1] used models of radar scattering derived from observations of the terrestrial planets. Hudson verified his results using a laboratory simulation of delay-Doppler imaging. Here we compare radar data to synthetic data using the Hayabusa-derived shape model of Itokawa [2] to model Arecibo and Goldstone radar images [3,4]. The synthetic images match the observations well (see figure), but the observed data sometimes have bright pixels on the leading edge (top) that are not reproduced in the synthetic images. We model the scattering dependence on incidence angle as a function tabulated every 0.1 degrees of incidence angle. The resulting fit is a good match to a cos^n θ distribution, but with a strong spike near (but not exactly at) zero incidence. We are studying the details of the low-angle scattering.
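    The cos^n θ comparison mentioned above can be reproduced on synthetic data by linear least squares in log space, since log σ = log A + n·log cos θ. The tabulated scattering function and the near-zero spike are not modeled here; the angles, amplitude, and noise level are illustrative assumptions.

```python
import numpy as np

# Fit the exponent n of a cos^n(theta) scattering law by least squares in log space
rng = np.random.default_rng(2)
theta = np.radians(np.linspace(1.0, 60.0, 60))    # incidence angles (rad)
n_true, A_true = 1.8, 0.7
sigma = A_true * np.cos(theta) ** n_true * (1 + 0.01 * rng.standard_normal(60))

# log(sigma) = log(A) + n * log(cos(theta))  ->  linear regression
X = np.column_stack([np.ones_like(theta), np.log(np.cos(theta))])
coef, *_ = np.linalg.lstsq(X, np.log(sigma), rcond=None)
A_fit, n_fit = float(np.exp(coef[0])), float(coef[1])
```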

  4. Optoelectronic associative memory

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin (Inventor)

    1993-01-01

    An associative optical memory including an input spatial light modulator (SLM) in the form of an edge-enhanced liquid crystal light valve (LCLV) and a pair of memory SLMs in the form of liquid crystal televisions (LCTVs) forms a matrix array of an input image which is cross-correlated with a matrix array of stored images. The correlation product is detected and nonlinearly amplified to illuminate a replica of the stored image array to select the stored image correlating with the input image. The LCLV is edge enhanced by reducing the bias frequency and voltage and rotating its orientation. The edge enhancement and nonlinearity of the photodetection improve the orthogonality of the stored images. The illumination of the replicate stored image provides a clean stored image, uncontaminated by the image comparison process.

  5. Assessing the Utility of Uav-Borne Hyperspectral Image and Photogrammetry Derived 3d Data for Wetland Species Distribution Quick Mapping

    NASA Astrophysics Data System (ADS)

    Li, Q. S.; Wong, F. K. K.; Fung, T.

    2017-08-01

    A lightweight unmanned aerial vehicle (UAV) loaded with novel sensors offers a low-cost, minimum-risk solution for data acquisition in complex environments. This study assessed the performance of UAV-based hyperspectral imagery and a digital surface model (DSM) derived from photogrammetric point clouds for classifying 13 species in a wetland area of Hong Kong. Multiple feature reduction methods and different classifiers were compared. The best result was obtained when transformed components from minimum noise fraction (MNF) and the DSM were combined in a support vector machine (SVM) classifier. Wavelength regions at the chlorophyll absorption green peak, red, red edge, and oxygen absorption in the near infrared were identified for better species discrimination. In addition, input of DSM data reduced overestimation of low plant species and misclassification due to shadow effects and inter-species morphological variation. This study establishes a framework for quick surveys and updates of wetland environments using a UAV system. The findings indicate that UAV-borne hyperspectral imagery and derived tree height information provide a solid foundation for further research such as biological invasion monitoring and bio-parameter modelling in wetlands.

  6. Integration of low level and ontology derived features for automatic weapon recognition and identification

    NASA Astrophysics Data System (ADS)

    Sirakov, Nikolay M.; Suh, Sang; Attardo, Salvatore

    2011-06-01

    This paper presents a further step in research toward the development of a quick and accurate weapon identification methodology and system. A basic stage of this methodology is the automatic acquisition and updating of a weapons ontology as a source for deriving high-level weapons information. The present paper outlines the main ideas used to approach this goal. In the next stage, a clustering approach is suggested on the basis of a hierarchy of concepts. An inherent slot of every node of the proposed ontology is a low-level features vector (LLFV), which facilitates the search through the ontology. Part of the LLFV is information about the object's parts. To partition an object, a new approach is presented that is capable of defining the object's concavities, used to mark the end points of weapon parts, which are considered convexities. Further, an existing matching approach is optimized to determine whether an ontological object matches the objects from an input image. Objects from derived ontological clusters are considered for the matching process. Image resizing is studied and applied to decrease the runtime of the matching approach and to investigate its rotational and scaling invariance. A set of experiments was performed to validate the theoretical concepts.

  7. Estimating spatially distributed soil texture using time series of thermal remote sensing - a case study in central Europe

    NASA Astrophysics Data System (ADS)

    Müller, Benjamin; Bernhardt, Matthias; Jackisch, Conrad; Schulz, Karsten

    2016-09-01

    For understanding water and solute transport processes, knowledge about the respective hydraulic properties is necessary. Commonly, hydraulic parameters are estimated via pedo-transfer functions using soil texture data to avoid cost-intensive measurements of hydraulic parameters in the laboratory. However, current soil texture information is typically only available at a coarse spatial resolution of 250 to 1000 m. Here, a method is presented to derive high-resolution (15 m) spatial topsoil texture patterns for the meso-scale Attert catchment (Luxembourg, 288 km2) from 28 images of ASTER (advanced spaceborne thermal emission and reflection radiometer) thermal remote sensing. A principal component analysis of the images reveals the most dominant thermal patterns (principal components, PCs), which are related to 212 fractional soil texture samples. Within a multiple linear regression framework, distributed soil texture information is estimated and related uncertainties are assessed. An overall root mean squared error (RMSE) of 12.7 percentage points (pp) lies well within and even below the range of recent studies on soil texture estimation, while requiring sparser sample setups and a less diverse set of basic spatial input. This approach will improve the generation of spatially distributed topsoil maps, particularly for hydrologic modeling purposes, and will expand the usage of thermal remote sensing products.
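    A toy version of the workflow in this record: principal components of an image stack, a multiple linear regression against point samples, and an RMSE assessment over all pixels. All array sizes, the linear ground truth, and the noise level are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
n_pix, n_img = 500, 28                 # pixels x thermal images (illustrative)
stack = rng.random((n_pix, n_img))

# Principal components of the thermal time series (via SVD of centred data)
Xc = stack - stack.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pcs = Xc @ Vt[:3].T                    # keep the 3 most dominant patterns

# Hypothetical texture fraction linearly related to the PCs, plus noise
truth = 40 + pcs @ np.array([5.0, -3.0, 2.0]) + rng.normal(0, 1, n_pix)
idx = rng.choice(n_pix, 212, replace=False)   # 212 "field sample" locations

# Multiple linear regression on the samples, prediction for every pixel
A = np.column_stack([np.ones(len(idx)), pcs[idx]])
beta, *_ = np.linalg.lstsq(A, truth[idx], rcond=None)
pred = np.column_stack([np.ones(n_pix), pcs]) @ beta
rmse = float(np.sqrt(np.mean((pred - truth) ** 2)))
```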

  8. Coupled Dictionary Learning for the Detail-Enhanced Synthesis of 3-D Facial Expressions.

    PubMed

    Liang, Haoran; Liang, Ronghua; Song, Mingli; He, Xiaofei

    2016-04-01

    The desire to reconstruct 3-D face models with expressions from 2-D face images fosters increasing interest in addressing the problem of face modeling. This task is important and challenging in the field of computer animation. Facial contours and wrinkles are essential to generate a face with a certain expression; however, these details are generally ignored or are not seriously considered in previous studies on face model reconstruction. Thus, we employ coupled radial basis function networks to derive an intermediate 3-D face model from a single 2-D face image. To optimize the 3-D face model further through landmarks, a coupled dictionary that is related to 3-D face models and their corresponding 3-D landmarks is learned from the given training set through local coordinate coding. Another coupled dictionary is then constructed to bridge the 2-D and 3-D landmarks for the transfer of vertices on the face model. As a result, the final 3-D face can be generated with the appropriate expression. In the testing phase, the 2-D input faces are converted into 3-D models that display different expressions. Experimental results indicate that the proposed approach to facial expression synthesis can obtain model details more effectively than previous methods can.

  9. Integration of ALOS/PALSAR backscatter with a LiDAR-derived canopy height map to quantify forest fragmentation

    NASA Astrophysics Data System (ADS)

    Pinto, N.; Dubayah, R.; Simard, M.; Fatoyinbo, T. E.

    2011-12-01

    Habitat loss is the main predictor of species extinctions and must be characterized in high-biodiversity ecosystems where land cover change is pervasive. Forests' ability to support viable animal populations is typically modeled as a function of the presence of linkages or corridors, and quantified with fragmentation metrics. In this scenario, small forest patches and linear (e.g. riparian) zones can act as keystone structures. Fine-resolution, all-weather Synthetic Aperture Radar (SAR) data from ALOS/PALSAR is well-suited to resolve forest fragments in tropical sites. This study summarizes a technique for integrating fragmentation metrics from ALOS/PALSAR with vertical structure data from ICESat/GLAS to produce fine-resolution (30 m) forest habitat metrics that capture both local quality (canopy height) as well as spatial context and multi-scale connectivity. We illustrate our approach with backscatter images acquired over the Brazilian Atlantic Forest, a biodiversity hotspot. ALOS/PALSAR 1.1 images acquired over the dry season were calibrated to calculate gamma naught and map forest cover via thresholding. We employ network algorithms to locate dispersal bottlenecks between conservation units. The location of keystone structures is compared against a model that uses coarse (500 m) percent tree cover as an input.

  10. Regulation of Brain-Derived Neurotrophic Factor Exocytosis and Gamma-Aminobutyric Acidergic Interneuron Synapse by the Schizophrenia Susceptibility Gene Dysbindin-1.

    PubMed

    Yuan, Qiang; Yang, Feng; Xiao, Yixin; Tan, Shawn; Husain, Nilofer; Ren, Ming; Hu, Zhonghua; Martinowich, Keri; Ng, Julia S; Kim, Paul J; Han, Weiping; Nagata, Koh-Ichi; Weinberger, Daniel R; Je, H Shawn

    2016-08-15

    Genetic variations in dystrobrevin binding protein 1 (DTNBP1 or dysbindin-1) have been implicated as risk factors in the pathogenesis of schizophrenia. The encoded protein dysbindin-1 functions in the regulation of synaptic activity and synapse development. Intriguingly, a loss of function mutation in Dtnbp1 in mice disrupted both glutamatergic and gamma-aminobutyric acidergic transmission in the cerebral cortex; pyramidal neurons displayed enhanced excitability due to reductions in inhibitory synaptic inputs. However, the mechanism by which reduced dysbindin-1 activity causes inhibitory synaptic deficits remains unknown. We investigated the role of dysbindin-1 in the exocytosis of brain-derived neurotrophic factor (BDNF) from cortical excitatory neurons, organotypic brain slices, and acute slices from dysbindin-1 mutant mice and determined how this change in BDNF exocytosis transsynaptically affected the number of inhibitory synapses formed on excitatory neurons via whole-cell recordings, immunohistochemistry, and live-cell imaging using total internal reflection fluorescence microscopy. A decrease in dysbindin-1 reduces the exocytosis of BDNF from cortical excitatory neurons, and this reduction in BDNF exocytosis transsynaptically resulted in reduced inhibitory synapse numbers formed on excitatory neurons. Furthermore, application of exogenous BDNF rescued the inhibitory synaptic deficits caused by the reduced dysbindin-1 level in both cultured cortical neurons and slice cultures. Taken together, our results demonstrate that these two genes linked to risk for schizophrenia (BDNF and dysbindin-1) function together to regulate interneuron development and cortical network activity. This evidence supports the investigation of the association between dysbindin-1 and BDNF in humans with schizophrenia. Copyright © 2016 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  11. Towards automatic lithological classification from remote sensing data using support vector machines

    NASA Astrophysics Data System (ADS)

    Yu, Le; Porwal, Alok; Holden, Eun-Jung; Dentith, Michael

    2010-05-01

    Remote sensing data can be effectively used as a means of building geological knowledge for poorly mapped terrains. Spectral remote sensing data from space- and air-borne sensors have been widely used for geological mapping, especially in areas of high outcrop density in arid regions. However, spectral remote sensing information by itself cannot be efficiently used for a comprehensive lithological classification of an area because (1) the diagnostic spectral response of a rock within an image pixel is conditioned by several factors, including atmospheric effects, the spectral and spatial resolution of the image, sub-pixel heterogeneity in the chemical and mineralogical composition of the rock, and the presence of soil and vegetation cover; and (2) spectral data capture only surface information and are therefore highly sensitive to noise due to weathering, soil cover, and vegetation. Consequently, for efficient lithological classification, spectral remote sensing data need to be supplemented with other remote sensing datasets that provide geomorphological and subsurface geological information, such as a digital elevation model (DEM) and aeromagnetic data. Each of these datasets contains significant information about geology that, in conjunction, can potentially be used for automated lithological classification using supervised machine learning algorithms. In this study, a support vector machine (SVM), which is a kernel-based supervised learning method, was applied to automated lithological classification of a study area in northwestern India using remote sensing data, namely ASTER, DEM and aeromagnetic data. Several digital image processing techniques were used to produce derivative datasets that contained enhanced information relevant to lithological discrimination. 
A series of SVMs (trained using k-fold cross-validation with grid search) were tested using various combinations of input datasets selected from among 50 datasets, including the original 14 ASTER bands and 36 derivative datasets (14 principal component bands, 14 independent component bands, 3 band ratios, 3 DEM derivatives: slope, curvature and roughness, and 2 aeromagnetic derivatives: mean and variance of susceptibility) extracted from the ASTER, DEM and aeromagnetic data, in order to determine the optimal inputs that provide the highest classification accuracy. It was found that a combination of ASTER-derived independent components, principal components and band ratios, DEM-derived slope, curvature and roughness, and aeromagnetic-derived mean and variance of magnetic susceptibility provides the highest classification accuracy of 93.4% on independent test samples. A comparison of the classification results of the SVM with those of maximum likelihood (84.9%) and minimum distance (38.4%) classifiers clearly shows that the SVM algorithm returns much higher classification accuracy. Therefore, the SVM method can be used to produce quick and reliable geological maps from scarce geological information, which is still the case with many under-developed frontier regions of the world.
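    The model-selection protocol used here (k-fold cross-validation with grid search over candidate settings) can be sketched with a simple k-NN classifier standing in for the SVM; the synthetic two-band "lithology" data and the grid over k are illustrative assumptions, not the study's setup.

```python
import numpy as np

def knn_predict(Xtr, ytr, Xte, k):
    """Predict binary labels for Xte by majority vote of the k nearest
    training points (squared Euclidean distance)."""
    d = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    nn = np.argsort(d, axis=1)[:, :k]
    return (ytr[nn].mean(axis=1) > 0.5).astype(int)

def kfold_grid_search(X, y, grid, folds=5):
    """Pick the grid value with the best mean k-fold validation accuracy."""
    idx = np.arange(len(y))
    best = (None, -1.0)
    for k in grid:
        accs = []
        for f in range(folds):
            te = idx[f::folds]
            tr = np.setdiff1d(idx, te)
            accs.append((knn_predict(X[tr], y[tr], X[te], k) == y[te]).mean())
        if np.mean(accs) > best[1]:
            best = (k, float(np.mean(accs)))
    return best

# Two synthetic "lithology" classes in a 2-band feature space (illustrative)
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0.0, 1, (100, 2)), rng.normal(2.5, 1, (100, 2))])
y = np.repeat([0, 1], 100)
best_k, best_acc = kfold_grid_search(X, y, grid=[1, 3, 5, 7])
```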

  12. Improving the accuracy in detection of clustered microcalcifications with a context-sensitive classification model.

    PubMed

    Wang, Juan; Nishikawa, Robert M; Yang, Yongyi

    2016-01-01

    In computer-aided detection of microcalcifications (MCs), the detection accuracy is often compromised by frequent occurrence of false positives (FPs), which can be attributed to a number of factors, including imaging noise, inhomogeneity in tissue background, linear structures, and artifacts in mammograms. In this study, the authors investigated a unified classification approach for combating the adverse effects of these heterogeneous factors for accurate MC detection. To accommodate FPs caused by different factors in a mammogram image, the authors developed a classification model to which the input features were adapted according to the image context at a detection location. For this purpose, the input features were defined in two groups, of which one group was derived from the image intensity pattern in a local neighborhood of a detection location, and the other group was used to characterize how an MC differs from its structural background. Owing to the distinctive effect of linear structures in the detector response, the authors introduced a dummy variable into the unified classifier model, which allowed the input features to be adapted according to the image context at a detection location (i.e., presence or absence of linear structures). To suppress the effect of inhomogeneity in tissue background, the input features were extracted from different domains aimed at enhancing MCs in a mammogram image. To demonstrate the flexibility of the proposed approach, the authors implemented the unified classifier model by two widely used machine learning algorithms, namely, a support vector machine (SVM) classifier and an Adaboost classifier. In the experiment, the proposed approach was tested for two representative MC detectors in the literature [difference-of-Gaussians (DoG) detector and SVM detector]. 
The detection performance was assessed using free-response receiver operating characteristic (FROC) analysis on a set of 141 screen-film mammogram (SFM) images (66 cases) and a set of 188 full-field digital mammogram (FFDM) images (95 cases). The FROC analysis results show that the proposed unified classification approach can significantly improve the detection accuracy of two MC detectors on both SFM and FFDM images. Despite the difference in performance between the two detectors, the unified classifiers can reduce their FP rate to a similar level in the output of the two detectors. In particular, with true-positive rate at 85%, the FP rate on SFM images for the DoG detector was reduced from 1.16 to 0.33 clusters/image (unified SVM) and 0.36 clusters/image (unified Adaboost), respectively; similarly, for the SVM detector, the FP rate was reduced from 0.45 clusters/image to 0.30 clusters/image (unified SVM) and 0.25 clusters/image (unified Adaboost), respectively. Similar FP reduction results were also achieved on FFDM images for the two MC detectors. The proposed unified classification approach can be effective for discriminating MCs from FPs caused by different factors (such as MC-like noise patterns and linear structures) in MC detection. The framework is general and can be applicable for further improving the detection accuracy of existing MC detectors.
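    The dummy-variable idea above, letting one unified model adapt its input features to image context, can be illustrated with a linear least-squares toy: interacting the features with a binary context indicator reproduces exactly what separate per-context fits would give, because the normal equations decouple. The features, responses, and contexts below are hypothetical, not the paper's classifiers.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
x = rng.normal(size=(n, 2))                  # two generic input features
d = (rng.random(n) > 0.5).astype(float)      # 1 if a "linear structure" present
# Context-dependent response (hypothetical): different weights per context
y = np.where(d == 1, 2*x[:, 0] - x[:, 1], -x[:, 0] + 3*x[:, 1]) \
    + rng.normal(0, 0.1, n)

# One unified model whose features are adapted by the dummy via interactions
Z = np.column_stack([1 - d, (1 - d)[:, None] * x, d, d[:, None] * x])
w, *_ = np.linalg.lstsq(Z, y, rcond=None)

# Equivalent separate fit on the d == 0 context alone
m0 = np.linalg.lstsq(np.column_stack([np.ones(int((d == 0).sum())), x[d == 0]]),
                     y[d == 0], rcond=None)[0]
```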

  13. Support Routines for In Situ Image Processing

    NASA Technical Reports Server (NTRS)

    Deen, Robert G.; Pariser, Oleg; Yeates, Matthew C.; Lee, Hyun H.; Lorre, Jean

    2013-01-01

    This software consists of a set of application programs that support ground-based image processing for in situ missions. These programs represent a collection of utility routines that perform miscellaneous functions in the context of the ground data system. Each one fulfills some specific need as determined via operational experience. The most unique aspect of these programs is that they are integrated into the large, in situ image processing system via the PIG (Planetary Image Geometry) library. They work directly with in situ mission data, understanding the appropriate image meta-data fields and updating them properly. The programs themselves are completely multimission; all mission dependencies are handled by PIG. This suite of programs consists of: (1) marscahv: Generates a linearized, epipolar-aligned image given a stereo pair of images; these images are optimized for 1-D stereo correlations. (2) marscheckcm: Compares the camera model in an image label with one derived via kinematics modeling on the ground. (3) marschkovl: Checks the overlaps between a list of images in order to determine which might be stereo pairs; this is useful for non-traditional stereo images such as long-baseline pairs or those from an articulating arm camera. (4) marscoordtrans: Translates mosaic coordinates from one form into another. (5) marsdispcompare: Checks a left-to-right stereo disparity image against a right-to-left disparity image to ensure they are consistent with each other. (6) marsdispwarp: Takes one image of a stereo pair and warps it through a disparity map to create a synthetic opposite-eye image; for example, a right-eye image could be transformed to look like it was taken from the left eye. (7) marsfidfinder: Finds fiducial markers in an image by projecting their approximate locations and then using correlation to locate the markers to subpixel accuracy; these fiducial markers are small targets attached to the spacecraft surface and help verify, or improve, the pointing of in situ cameras. (8) marsinvrange: Inverse of marsrange; given a range file, re-computes an XYZ file that closely matches the original. (9) marsproj: Projects an XYZ coordinate through the camera model, and reports the line/sample coordinates of the point in the image. (10) marsprojfid: Given the output of marsfidfinder, projects the XYZ locations and compares them to the found locations, creating a report showing the fiducial errors in each image. (11) marsrad: Radiometrically corrects an image. (12) marsrelabel: Updates coordinate system or camera model labels in an image. (13) marstiexyz: Given a stereo pair, allows the user to interactively pick a point in each image and reports the XYZ value corresponding to that pair of locations. (14) marsunmosaic: Extracts a single frame from a mosaic, created such that it could have been an input to the original mosaic; useful for creating simulated input frames using different camera models than the original mosaic used. (15) merinverter: Uses an inverse lookup table to convert 8-bit telemetered data to its 12-bit original form; can be used in other missions despite the name.

  14. Junctional and nonjunctional effects of heptanol and glycyrrhetinic acid derivatives in rat mesenteric small arteries

    PubMed Central

    Matchkov, Vladimir V; Rahman, Awahan; Peng, Hongli; Nilsson, Holger; Aalkjær, Christian

    2004-01-01

    Heptanol, 18α-glycyrrhetinic acid (18αGA) and 18β-glycyrrhetinic acid (18βGA) are known blockers of gap junctions, and are often used in vascular studies. However, actions unrelated to gap junction block have been repeatedly suggested in the literature for these compounds. We report here the findings from a comprehensive study of these compounds in the arterial wall. Rat isolated mesenteric small arteries were studied with respect to isometric tension (myography), [Ca2+]i (Ca2+-sensitive dyes), membrane potential and – as a measure of intercellular coupling – input resistance (sharp intracellular glass electrodes). Also, membrane currents (patch-clamp) were measured in isolated smooth muscle cells (SMCs). Confocal imaging was used for visualisation of [Ca2+]i events in single SMCs in the arterial wall. Heptanol (150 μM) activated potassium currents, hyperpolarised the membrane, inhibited the Ca2+ current, and reduced [Ca2+]i and tension, but had little effect on input resistance. Only at concentrations above 200 μM did heptanol elevate input resistance, desynchronise SMCs and abolish vasomotion. 18βGA (30 μM) not only increased input resistance and desynchronised SMCs but also had nonjunctional effects on membrane currents. 18αGA (100 μM) had no significant effects on tension, [Ca2+]i, total membrane current and synchronisation in vascular smooth muscle. We conclude that in mesenteric small arteries, heptanol and 18βGA have important nonjunctional effects at concentrations where they have little or no effect on intercellular communication. Thus, the effects of heptanol and 18βGA on vascular function cannot be interpreted as being caused only by effects on gap junctions. 18αGA apparently does not block communication between SMCs in these arteries, although an effect on myoendothelial gap junctions cannot be excluded. PMID:15210581

  15. Optimization of DSC MRI Echo Times for CBV Measurements Using Error Analysis in a Pilot Study of High-Grade Gliomas.

    PubMed

    Bell, L C; Does, M D; Stokes, A M; Baxter, L C; Schmainda, K M; Dueck, A C; Quarles, C C

    2017-09-01

    The optimal TE must be calculated to minimize the variance in CBV measurements made with DSC MR imaging. Simulations can be used to determine the influence of the TE on CBV, but they may not adequately recapitulate the in vivo heterogeneity of precontrast T2*, contrast agent kinetics, and the biophysical basis of contrast agent-induced T2* changes. The purpose of this study was to combine quantitative multiecho DSC MRI T2* time curves with error analysis in order to compute the optimal TE for a traditional single-echo acquisition. Eleven subjects with high-grade gliomas were scanned at 3T with a dual-echo DSC MR imaging sequence to quantify contrast agent-induced T2* changes in this retrospective study. Optimized TEs were calculated with propagation of error analysis for high-grade glial tumors, normal-appearing white matter, and arterial input function estimation. The optimal TE is a weighted average of the T2* values that occur as a contrast agent bolus traverses a voxel. The mean optimal TEs were 30.0 ± 7.4 ms for high-grade glial tumors, 36.3 ± 4.6 ms for normal-appearing white matter, and 11.8 ± 1.4 ms for arterial input function estimation (repeated-measures ANOVA, P < .001). Greater heterogeneity was observed in the optimal TE values for high-grade gliomas, and the differences among the mean optimal TEs of the 3 ROIs were statistically significant. The much shorter optimal TE for arterial input function estimation implies that quantitative DSC MR imaging acquisitions would benefit from multiecho acquisitions. In the case of a single-echo acquisition, the optimal TE prescribed should be 30-35 ms (without a preload) and 20-30 ms (with a standard full-dose preload). © 2017 by American Journal of Neuroradiology.
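The quoted relationship (the variance-minimizing single-echo TE tracks the tissue T2* during bolus passage) can be sketched numerically. The baseline T2* of 50 ms, the Gaussian bolus, and the ΔR2*-weighted averaging below are all illustrative assumptions, not the study's measured data:

```python
import numpy as np

# Hypothetical dynamic T2* in one voxel during bolus passage
# (all parameter values here are illustrative assumptions).
t = np.arange(0.0, 90.0, 1.0)                          # time (s)
r2s_base = 1000.0 / 50.0                               # baseline R2* (1/s), T2* = 50 ms
d_r2s = 20.0 * np.exp(-0.5 * ((t - 40.0) / 6.0) ** 2)  # contrast-induced delta-R2* (1/s)
t2star_ms = 1000.0 / (r2s_base + d_r2s)                # dynamic T2* (ms)

# For a single noisy gradient-echo sample, the variance of the delta-R2*
# estimate is minimized at TE = T2*; over the whole bolus the optimal single
# TE is a weighted average of T2*(t) (weighted here by delta-R2*, an
# assumed choice of weighting).
te_opt = np.sum(d_r2s * t2star_ms) / np.sum(d_r2s)
print(round(te_opt, 1))                                # about 30 ms
```

With these assumed parameters the sketch lands near the ~30 ms the study reports for tumor, though that agreement is by construction of the synthetic curve.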

  16. Airborne camera and spectrometer experiments and data evaluation

    NASA Astrophysics Data System (ADS)

    Lehmann, F. F.; Bucher, T.; Pless, S.; Wohlfeil, J.; Hirschmüller, H.

    2009-09-01

    New stereo push broom camera systems have been developed at the German Aerospace Centre (DLR). The new small multispectral systems (Multi Functional Camerahead - MFC, Advanced Multispectral Scanner - AMS) are lightweight and compact and provide three or five RGB stereo lines of 8000, 10 000 or 14 000 pixels, which are used for stereo processing and the generation of Digital Surface Models (DSM) and near True Orthoimage Mosaics (TOM). Simultaneous acquisition with different types of MFC cameras for infrared and RGB data has been successfully tested. All spectral channels record the image data in full resolution, so pan-sharpening is not necessary. Analogous to the line scanner data, an automatic processing chain for UltraCamD and UltraCamX exists. The different systems have been flown for different types of applications; the main fields of interest, among others, are environmental applications (flooding simulations, monitoring tasks, classification) and 3D modelling (e.g., city mapping). From the DSM and TOM data, Digital Terrain Models (DTM) and 3D city models are derived. Textures for the facades are taken from oblique orthoimages, which are created from the same input data as the TOM and the DSM. The resulting models are characterised by high geometric accuracy and the perfect fit of image data and DSM. The DLR is permanently developing and testing a wide range of sensor types and imaging platforms for terrestrial and space applications. The MFC sensors have been flown in combination with laser systems and imaging spectrometers, and special data fusion products have been developed. These products include hyperspectral orthoimages and 3D hyperspectral data.

  17. The Design of Feedback Control Systems Containing a Saturation Type Nonlinearity

    NASA Technical Reports Server (NTRS)

    Schmidt, Stanley F.; Harper, Eleanor V.

    1960-01-01

    This report presents a derivation of the optimum response to a step input for plant transfer functions that have an unstable pole, together with further data on plants with a single zero in the left half of the s-plane. The calculated data are presented in tabulated, normalized form. Optimum control systems are considered. The optimum system is defined as one which keeps the error as small as possible regardless of the input, under the constraint that the input to the plant (or controlled system) is limited. Intuitive arguments show that, in the case where only the error can be sensed directly, the optimum system is obtained from the optimum relay (on-off) solution. References to known solutions are presented. For the case when the system is of the sampled-data type, arguments are presented which indicate that the optimum sampled-data system may be extremely difficult, if not impossible, to realize practically except for very simple plant transfer functions. Two examples of aircraft attitude autopilots are presented, one for a statically stable and the other for a statically unstable airframe. The rate of change of elevator motion is assumed limited in these examples. It is shown that by use of the nonlinear design techniques described in NASA TN D-20 one can obtain near-optimum response to step inputs and reasonable response to sine-wave inputs for either case. Also, the nonlinear design prevents inputs from driving the system unstable in either case.

  18. Comparison of active-set method deconvolution and matched-filtering for derivation of an ultrasound transit time spectrum.

    PubMed

    Wille, M-L; Zapf, M; Ruiter, N V; Gemmeke, H; Langton, C M

    2015-06-21

    The quality of ultrasound computed tomography imaging is primarily determined by the accuracy of ultrasound transit time measurement. A major problem in analysis is the overlap of signals, making it difficult to detect the correct transit time. The current standard is to apply a matched-filtering approach to the input and output signals. This study compares the matched-filtering technique with active-set deconvolution to derive a transit time spectrum from a coded-excitation chirp signal and the measured output signal. The ultrasound wave travels in a direct and a reflected path to the receiver, resulting in an overlap in the recorded output signal. The matched-filtering and deconvolution techniques were applied to determine the transit times associated with the two signal paths. Both techniques were able to detect the two different transit times; while matched filtering has better accuracy (standard deviations of 0.13 μs versus 0.18 μs), deconvolution has a 3.5 times better side-lobe to main-lobe ratio. Higher side-lobe suppression is important to further improve image fidelity. These results suggest that a future combination of both techniques would provide improved signal detection and hence improved image fidelity.
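The matched-filtering baseline described here can be sketched with a synthetic coded-excitation chirp and two overlapping arrival paths. The sampling rate, chirp band, path delays, and amplitudes below are all assumed for illustration:

```python
import numpy as np

fs = 10e6                                    # sampling rate (assumed): 10 MHz
t = np.arange(0.0, 20e-6, 1.0 / fs)          # 20 us chirp support
# Linear chirp sweeping 1 -> 2 MHz (assumed coded-excitation parameters)
chirp = np.sin(2 * np.pi * (1e6 * t + 0.5 * (1e6 / 20e-6) * t ** 2))

# Received signal: direct path plus a weaker reflected path (assumed delays)
n1, n2 = int(5e-6 * fs), int(8e-6 * fs)
out = np.zeros(2 * len(chirp))
out[n1:n1 + len(chirp)] += chirp
out[n2:n2 + len(chirp)] += 0.5 * chirp

# Matched filter = cross-correlation with the transmitted chirp;
# the strongest peak marks the direct-path transit time.
mf = np.correlate(out, chirp, mode='full')
lags = (np.arange(len(mf)) - (len(chirp) - 1)) / fs
transit = lags[np.argmax(mf)]
print(round(transit * 1e6, 2))               # 5.0 (microseconds)
```

The weaker second peak near 8 μs is what an active-set deconvolution would resolve with lower side lobes, per the abstract's comparison.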

  19. Disaggregated seismic hazard and the elastic input energy spectrum: An approach to design earthquake selection

    NASA Astrophysics Data System (ADS)

    Chapman, Martin Colby

    1998-12-01

    The design earthquake selection problem is fundamentally probabilistic. Disaggregation of a probabilistic model of the seismic hazard offers a rational and objective approach that can identify the most likely earthquake scenario(s) contributing to hazard. An ensemble of time series can be selected on the basis of the modal earthquakes derived from the disaggregation. This gives a useful time-domain realization of the seismic hazard, to the extent that a single motion parameter captures the important time-domain characteristics. A possible limitation to this approach arises because most currently available motion prediction models for peak ground motion or oscillator response are essentially independent of duration, and modal events derived using the peak motions for the analysis may not represent the optimal characterization of the hazard. The elastic input energy spectrum is an alternative to the elastic response spectrum for these types of analyses. The input energy combines the elements of amplitude and duration into a single parameter description of the ground motion that can be readily incorporated into standard probabilistic seismic hazard analysis methodology. This use of the elastic input energy spectrum is examined. Regression analysis is performed using strong motion data from Western North America and consistent data processing procedures for both the absolute input energy equivalent velocity (V_ea) and the elastic pseudo-relative velocity response (PSV) in the frequency range 0.5 to 10 Hz. The results show that the two parameters can be successfully fit with identical functional forms. The dependence of V_ea and PSV upon (NEHRP) site classification is virtually identical. The variance of V_ea is uniformly less than that of PSV, indicating that V_ea can be predicted with slightly less uncertainty as a function of magnitude, distance and site classification. The effects of site class are important at frequencies less than a few Hertz. The regression modeling does not resolve significant effects due to site class at frequencies greater than approximately 5 Hz. Disaggregation of general seismic hazard models using V_ea indicates that the modal magnitudes for the higher frequency oscillators tend to be larger, and vary less with oscillator frequency, than those derived using PSV. Insofar as the elastic input energy may be a better parameter for quantifying the damage potential of ground motion, its use in probabilistic seismic hazard analysis could provide an improved means for selecting earthquake scenarios and establishing design earthquakes for many types of engineering analyses.

  20. A Consistent Treatment of Microwave Emissivity and Radar Backscatter for Retrieval of Precipitation over Water Surfaces

    NASA Technical Reports Server (NTRS)

    Munchak, S. Joseph; Meneghini, Robert; Grecu, Mircea; Olson, William S.

    2016-01-01

    The Global Precipitation Measurement satellite's Microwave Imager (GMI) and Dual-frequency Precipitation Radar (DPR) are designed to provide the most accurate instantaneous precipitation estimates currently available from space. The GPM Combined Algorithm (CORRA) plays a key role in this process by retrieving precipitation profiles that are consistent with GMI and DPR measurements; therefore, it is desirable that the forward models in CORRA use the same geophysical input parameters. This study explores the feasibility of using internally consistent emissivity and surface backscatter cross-section (σ0) models for water surfaces in CORRA. An empirical model for DPR Ku and Ka σ0 as a function of 10 m wind speed and incidence angle is derived from GMI-only wind retrievals under clear-sky conditions. This allows the σ0 measurements, which are also influenced by path-integrated attenuation (PIA) from precipitation, to be used as input to CORRA and wind speed to be retrieved as output. Comparisons to buoy data give a wind rmse of 3.7 m/s for Ku+GMI and 3.2 m/s for Ku+Ka+GMI retrievals under precipitation (compared to 1.3 m/s for clear-sky GMI-only), and there is a reduction in bias from the GANAL background data (-10%) to the Ku+GMI (-3%) and Ku+Ka+GMI (-5%) retrievals. Ku+GMI retrievals of precipitation increase slightly in light (less than 1 mm/h) and decrease in moderate to heavy precipitation (greater than 1 mm/h). The Ku+Ka+GMI retrievals, being additionally constrained by the Ka reflectivity, increase only slightly in moderate and heavy precipitation at low wind speeds (less than 5 m/s) relative to retrievals using the surface reference estimate of PIA as input.

  1. Content-aware photo collage using circle packing.

    PubMed

    Yu, Zongqiao; Lu, Lin; Guo, Yanwen; Fan, Rongfei; Liu, Mingming; Wang, Wenping

    2014-02-01

    In this paper, we present a novel approach for automatically creating a photo collage that naturally assembles the interest regions of a given group of images. Previous methods for photo collage are generally built upon a well-defined optimization framework, which computes all the geometric parameters and layer indices for the input photos on the given canvas by optimizing a unified objective function. The complex nonlinear form of the optimization function limits their scalability and efficiency. From the geometric point of view, we recast the generation of a collage as a region partition problem such that each image is displayed in its corresponding region partitioned from the canvas. The core of our approach is an efficient power-diagram-based circle packing algorithm that compactly arranges a series of circles, assigned to the input photos, in the given canvas. To favor important photos, the circles are associated with image importances determined by an image ranking process. A heuristic search process is developed to ensure that the salient information of each photo is displayed in the polygonal area resulting from circle packing. With our new formulation, each factor influencing the state of a photo is optimized in an independent stage, and the computation of the optimal states for neighboring photos is completely decoupled. This improves the scalability of collage results and ensures their diversity. We also devise a saliency-based image fusion scheme to generate seamless composite collages. Our approach can generate collages on nonrectangular canvases and supports interactive collage, allowing the user to refine collage results according to his/her personal preferences. We conduct extensive experiments and show the superiority of our algorithm by comparison against previous methods.
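The importance-to-circle step can be sketched as follows; the square-root area rule, the `fill` fraction, and the function name are our assumptions, and the paper's power-diagram packing would then arrange these circles on the canvas:

```python
import numpy as np

def radii_from_importance(importance, canvas_w, canvas_h, fill=0.6):
    """Map photo importance scores to circle radii so the circles jointly
    claim a `fill` fraction of the canvas area, with per-photo area
    proportional to importance (an assumed scheme; the paper's
    power-diagram circle packing then arranges the circles compactly)."""
    imp = np.asarray(importance, dtype=float)
    areas = fill * canvas_w * canvas_h * imp / imp.sum()
    return np.sqrt(areas / np.pi)   # area = pi * r^2  ->  r = sqrt(area / pi)

# Three photos ranked 3:2:1 in importance on an 800x600 canvas
r = radii_from_importance([3.0, 2.0, 1.0], 800, 600)
```

More important photos receive larger circles and hence larger polygonal display regions after partitioning.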

  2. Hessian-based norm regularization for image restoration with biomedical applications.

    PubMed

    Lefkimmiatis, Stamatios; Bourquard, Aurélien; Unser, Michael

    2012-03-01

    We present nonquadratic Hessian-based regularization methods that can be effectively used for image restoration problems in a variational framework. Motivated by the great success of the total-variation (TV) functional, we extend it to also include second-order differential operators. Specifically, we derive second-order regularizers that involve matrix norms of the Hessian operator. The definition of these functionals is based on an alternative interpretation of TV that relies on mixed norms of directional derivatives. We show that the resulting regularizers retain some of the most favorable properties of TV, i.e., convexity, homogeneity, and rotation and translation invariance, while dealing effectively with the staircase effect. We further develop an efficient minimization scheme for the corresponding objective functions. The proposed algorithm is of the iteratively reweighted least-squares type and results from a majorization-minimization approach. It relies on a problem-specific preconditioned conjugate gradient method, which makes the overall minimization scheme very attractive, since it can be applied effectively to large images in a reasonable computational time. We validate the overall proposed regularization framework through deblurring experiments under additive Gaussian noise on standard and biomedical images.
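One of the matrix norms the abstract mentions, the Frobenius norm of the Hessian, can be sketched with a finite-difference discretization (the discretization and function name are ours, not the paper's). Unlike first-order TV, it vanishes on affine (ramp) images, which is why it avoids the staircase effect:

```python
import numpy as np

def hessian_frobenius_penalty(u):
    """Sum over pixels of the Frobenius norm of a finite-difference
    Hessian (an illustrative discretization of one of the paper's
    second-order regularizers)."""
    ux = np.gradient(u, axis=0)
    uy = np.gradient(u, axis=1)
    uxx = np.gradient(ux, axis=0)
    uxy = np.gradient(ux, axis=1)
    uyy = np.gradient(uy, axis=1)
    return np.sqrt(uxx ** 2 + 2.0 * uxy ** 2 + uyy ** 2).sum()

# An affine image incurs zero penalty (no pressure toward piecewise-constant
# "staircased" solutions), whereas TV penalizes any nonzero slope.
x, y = np.meshgrid(np.arange(32.0), np.arange(32.0), indexing='ij')
ramp = 0.5 * x + 2.0 * y
print(hessian_frobenius_penalty(ramp))   # 0.0
```

A curved image (e.g. `x**2`) does incur a positive penalty, so genuine second-order structure is still regularized.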

  3. Predictive capabilities of statistical learning methods for lung nodule malignancy classification using diagnostic image features: an investigation using the Lung Image Database Consortium dataset

    NASA Astrophysics Data System (ADS)

    Hancock, Matthew C.; Magnan, Jerry F.

    2017-03-01

    To determine the potential usefulness of quantified diagnostic image features as inputs to a CAD system, we investigate the predictive capabilities of statistical learning methods for classifying nodule malignancy, utilizing the Lung Image Database Consortium (LIDC) dataset. We employ only the radiologist-assigned diagnostic feature values for the lung nodules therein, as well as our estimates of nodule diameter and volume derived from the radiologists' annotations. We calculate theoretical upper bounds on the classification accuracy achievable by an ideal classifier that uses only the radiologist-assigned feature values, and we obtain an accuracy of 85.74 (±1.14)%, which is, on average, 4.43% below the theoretical maximum of 90.17%. The corresponding area-under-the-curve (AUC) score is 0.932 (±0.012); when the diameter and volume features are included, the AUC increases to 0.949 (±0.007) and the accuracy to 88.08 (±1.11)%. Our results are comparable to those in the literature that use algorithmically derived image-based features, which supports our hypothesis that lung nodules can be classified as malignant or benign using only quantified diagnostic image features, and indicates the competitiveness of this approach. We also analyze how the classification accuracy depends on specific features and feature subsets, and we rank the features according to their predictive power, statistically demonstrating the top four to be spiculation, lobulation, subtlety, and calcification.

  4. Space-variant restoration of images degraded by camera motion blur.

    PubMed

    Sorel, Michal; Flusser, Jan

    2008-02-01

    We examine the problem of restoration from multiple images degraded by camera motion blur. We consider scenes with significant depth variations resulting in space-variant blur. The proposed algorithm can be applied if the camera moves along an arbitrary curve parallel to the image plane, without any rotations. The knowledge of camera trajectory and camera parameters is not necessary. At the input, the user selects a region where depth variations are negligible. The algorithm belongs to the group of variational methods that estimate simultaneously a sharp image and a depth map, based on the minimization of a cost functional. To initialize the minimization, it uses an auxiliary window-based depth estimation algorithm. Feasibility of the algorithm is demonstrated by three experiments with real images.

  5. Estimation of crown closure from AVIRIS data using regression analysis

    NASA Technical Reports Server (NTRS)

    Staenz, K.; Williams, D. J.; Truchon, M.; Fritz, R.

    1993-01-01

    Crown closure is one of the input parameters used for forest growth and yield modelling. Preliminary work by Staenz et al. indicates that imaging spectrometer data acquired with sensors such as the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) have some potential for estimating crown closure on a stand level. The objectives of this paper are: (1) to establish a relationship between AVIRIS data and the crown closure derived from aerial photography of a forested test site within the Interior Douglas Fir biogeoclimatic zone in British Columbia, Canada; (2) to investigate the impact of atmospheric effects and the forest background on the correlation between AVIRIS data and crown closure estimates; and (3) to improve this relationship using multiple regression analysis.

  6. Multitask visual learning using genetic programming.

    PubMed

    Jaśkowski, Wojciech; Krawiec, Krzysztof; Wieloch, Bartosz

    2008-01-01

    We propose a multitask learning method of visual concepts within the genetic programming (GP) framework. Each GP individual is composed of several trees that process visual primitives derived from input images. Two trees solve two different visual tasks and are allowed to share knowledge with each other by commonly calling the remaining GP trees (subfunctions) included in the same individual. The performance of a particular tree is measured by its ability to reproduce the shapes contained in the training images. We apply this method to visual learning tasks of recognizing simple shapes and compare it to a reference method. The experimental verification demonstrates that such multitask learning often leads to performance improvements in one or both solved tasks, without extra computational effort.

  7. Analysis of Students' Aptitude to Provide Meaning to Images that Represent Cellular Components at the Molecular Level

    PubMed Central

    Dahmani, Hassen-Reda; Schneeberger, Patricia

    2009-01-01

    The number of experimentally derived structures of cellular components is rapidly expanding, and this phenomenon is accompanied by the development of a new semiotic system for teaching. The infographic approach is shifting from a schematic toward a more realistic representation of cellular components. By realistic we mean artist-prepared or computer graphic images that closely resemble experimentally derived structures and are characterized by a low level of styling and simplification. This change brings about a new challenge for teachers: designing course instructions that allow students to interpret these images in a meaningful way. To determine how students deal with this change, we designed several image-based, in-course assessments. The images were highly relevant for the cell biology course but did not resemble any of the images in the teaching documents. We asked students to label the cellular components, describe their function, or both. What we learned from these tests is that realistic images, with a higher apparent level of complexity, do not deter students from investigating their meaning. When given a choice, the students do not necessarily choose the most simplified representation, and they were sensitive to functional indications embedded in realistic images. PMID:19723817

  8. On the effect of velocity gradients on the depth of correlation in μPIV

    NASA Astrophysics Data System (ADS)

    Mustin, B.; Stoeber, B.

    2016-03-01

    The present work revisits the effect of velocity gradients on the depth of the measurement volume (depth of correlation) in microscopic particle image velocimetry (μPIV). General relations between the μPIV weighting functions and the local correlation function are derived from the original definition of the weighting functions. These relations are used to investigate under which circumstances the weighting functions are related to the curvature of the local correlation function. Furthermore, this work proposes a modified definition of the depth of correlation that leads to more realistic results than previous definitions for the case when flow gradients are taken into account. Dimensionless parameters suitable to describe the effect of velocity gradients on μPIV cross correlation are derived and visual interpretations of these parameters are proposed. We then investigate the effect of the dimensionless parameters on the weighting functions and the depth of correlation for different flow fields with spatially constant flow gradients and with spatially varying gradients. Finally this work demonstrates that the results and dimensionless parameters are not strictly bound to a certain model for particle image intensity distributions but are also meaningful when other models for particle images are used.

  9. Parametrically defined cerebral blood vessels as non-invasive blood input functions for brain PET studies

    NASA Astrophysics Data System (ADS)

    Asselin, Marie-Claude; Cunningham, Vincent J.; Amano, Shigeko; Gunn, Roger N.; Nahmias, Claude

    2004-03-01

    A non-invasive alternative to arterial blood sampling for the generation of a blood input function for brain positron emission tomography (PET) studies is presented. The method aims to extract the dimensions of the blood vessel directly from PET images and to simultaneously correct the radioactivity concentration for partial volume and spillover. This involves simulating the tomographic imaging process to generate images of different blood vessel and background geometries and selecting the one that best fits, in a least-squares sense, the acquired PET image. A phantom experiment was conducted to validate the method, which was then applied to eight subjects injected with 6-[18F]fluoro-L-DOPA and one subject injected with [11C]CO-labelled red blood cells. In the phantom study, the diameters of syringes filled with an 11C solution and inserted into a water-filled cylinder were estimated with an accuracy of half a pixel (1 mm). The radioactivity concentration was recovered to 100 ± 4% in the 8.7 mm diameter syringe, the one that most closely approximated the superior sagittal sinus. In the human studies, the method systematically overestimated the calibre of the superior sagittal sinus by 2-3 mm compared to measurements made in magnetic resonance venograms of the same subjects. Sources of discrepancy related to the anatomy of the blood vessel were found not to be fundamental limitations to the applicability of the method to human subjects. This method has the potential to provide accurate quantification of blood radioactivity concentration from PET images without the need for blood samples, corrections for delay and dispersion, co-registered anatomical images, or manually defined regions of interest.
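The partial-volume and spillover correction being solved for can be illustrated with the standard two-component recovery model. The function name and the closed-form inversion below are our sketch; the paper instead estimates the recovery jointly with the vessel diameter by simulating the scanner's response to candidate geometries:

```python
def correct_partial_volume(measured, background, recovery):
    """Invert the common two-component model
        measured = RC * true + (1 - RC) * background,
    where RC is the recovery coefficient of the vessel ROI.
    (Illustrative sketch; the paper fits RC and the vessel diameter
    jointly by simulating the tomographic imaging process.)"""
    return (measured - (1.0 - recovery) * background) / recovery

# A vessel with true activity 10 kBq/mL, background 2 kBq/mL, and RC = 0.7
# is measured at 0.7*10 + 0.3*2 = 7.6; the correction recovers 10 kBq/mL.
print(correct_partial_volume(7.6, 2.0, 0.7))
```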

  10. Pushing spatial and temporal resolution for functional and diffusion MRI in the Human Connectome Project

    PubMed Central

    Uğurbil, Kamil; Xu, Junqian; Auerbach, Edward J.; Moeller, Steen; Vu, An; Duarte-Carvajalino, Julio M.; Lenglet, Christophe; Wu, Xiaoping; Schmitter, Sebastian; Van de Moortele, Pierre Francois; Strupp, John; Sapiro, Guillermo; De Martino, Federico; Wang, Dingxin; Harel, Noam; Garwood, Michael; Chen, Liyong; Feinberg, David A.; Smith, Stephen M.; Miller, Karla L.; Sotiropoulos, Stamatios N; Jbabdi, Saad; Andersson, Jesper L; Behrens, Timothy EJ; Glasser, Matthew F.; Van Essen, David; Yacoub, Essa

    2013-01-01

    The Human Connectome Project (HCP) relies primarily on three complementary magnetic resonance (MR) methods. These are: 1) resting-state functional MR imaging (rfMRI), which uses correlations in the temporal fluctuations in an fMRI time series to deduce ‘functional connectivity’; 2) diffusion imaging (dMRI), which provides the input for tractography algorithms used for the reconstruction of the complex axonal fiber architecture; and 3) task-based fMRI (tfMRI), which is employed to identify functional parcellation in the human brain in order to assist analyses of data obtained with the first two methods. We describe technical improvements and optimization of these methods, as well as instrumental choices, that impact the speed of acquisition of fMRI and dMRI images at 3 Tesla, leading to whole-brain coverage with 2 mm isotropic resolution in 0.7 seconds for fMRI, and 1.25 mm isotropic resolution dMRI data for tractography analysis with a three-fold reduction in total data acquisition time. Ongoing technical developments and optimization for the acquisition of similar data at a 7 Tesla magnetic field are also presented, targeting higher resolution, specificity of functional imaging signals, and mitigation of the inhomogeneous radio frequency (RF) fields and power deposition. Results demonstrate that, overall, these approaches represent a significant advance in MR imaging of the human brain to investigate brain function and structure. PMID:23702417

  11. Classification of footwear outsole patterns using Fourier transform and local interest points.

    PubMed

    Richetelli, Nicole; Lee, Mackenzie C; Lasky, Carleen A; Gump, Madison E; Speir, Jacqueline A

    2017-06-01

    Successful classification of questioned footwear has tremendous evidentiary value; the result can minimize the potential suspect pool and link a suspect to a victim, a crime scene, or even multiple crime scenes to each other. With this in mind, several different automated and semi-automated classification models have been applied to the forensic footwear recognition problem, with superior performance commonly associated with two approaches: correlation of image power (magnitude) or phase, and the use of local interest points transformed using the Scale Invariant Feature Transform (SIFT) and compared using Random Sample Consensus (RANSAC). Despite the distinction associated with each of these methods, the three have not been cross-compared using a single dataset of limited quality (i.e., characteristic of crime scene-like imagery) created from a wide combination of image inputs. To address this question, the research presented here examines the classification performance of the Fourier-Mellin transform (FMT), phase-only correlation (POC), and local interest points (transformed using SIFT and compared using RANSAC) as a function of inputs that include mixed media (blood and dust), transfer mechanisms (gel lifters), enhancement techniques (digital and chemical), and variations in print substrate (ceramic tiles, vinyl tiles, and paper). Results indicate that POC outperforms both FMT and SIFT+RANSAC regardless of image input (type, quality, and totality), and that the difference in stochastic dominance detected for POC is significant across all image comparison scenarios evaluated in this study. Copyright © 2017 Elsevier B.V. All rights reserved.
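Phase-only correlation, the best performer in this study, is compact enough to sketch: normalize the cross-power spectrum to unit magnitude and inverse-transform, so a translated copy of an image yields a sharp peak at the shift (the random test image below is our own illustration):

```python
import numpy as np

def phase_only_correlation(a, b):
    """POC surface of two equal-size grayscale arrays: cross-power
    spectrum normalized to unit magnitude, then inverse FFT."""
    cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    return np.real(np.fft.ifft2(cross / (np.abs(cross) + 1e-12)))

# A circularly shifted copy produces a near-delta peak at the shift.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, (5, 9), axis=(0, 1))
surface = phase_only_correlation(shifted, img)
peak = np.unravel_index(np.argmax(surface), surface.shape)
print(peak[0], peak[1])   # 5 9
```

Discarding magnitude and keeping only phase is what makes the match robust to the contrast and media variations the study tests.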

  12. [¹⁸F]fluorothymidine-positron emission tomography in patients with locally advanced breast cancer under bevacizumab treatment: usefulness of different quantitative methods of tumor proliferation.

    PubMed

    Marti-Climent, J M; Dominguez-Prado, I; Garcia-Velloso, M J; Boni, V; Peñuelas, I; Toledo, I; Richter, J A

    2014-01-01

    To investigate quantitative methods of tumor proliferation using 3'-[(18)F]fluoro-3'-deoxythymidine ([(18)F]FLT) PET in patients with breast cancer (BC), studied before and after one bevacizumab administration, and to correlate the [(18)F]FLT-PET uptake with the Ki67 index. Thirty patients with newly diagnosed, untreated BC underwent a [(18)F]FLT-PET before and 14 days after bevacizumab treatment. A dynamic scan centered over the tumor began simultaneously with the injection of [(18)F]FLT (385 ± 56 MBq). Image-derived input functions were obtained using regions of interest drawn on the left ventricle (LV) and descending aorta (DA). Metabolite-corrected blood curves were used as input functions to obtain the kinetic Ki constant using the Patlak graphical analysis (time interval 10-60 min after injection). Maximum SUV values were derived for the intervals 40-60 min (SUV40) and 50-60 min (SUV50). PET parameters were correlated with the Ki67 index obtained by staining tumor biopsies. [(18)F]FLT uptake parameters decreased significantly (p<0.001) after treatment: SUV50=3.09 ± 1.21 vs 2.22 ± 0.96; SUV40=3.00 ± 1.18 vs 2.14 ± 0.95, Ki_LV(10-3)=52[22-116] vs 38[13-80] and Ki_DA(10-3)=49[15-129] vs 33[11-98]. Consistency interclass correlation coefficients within SUV and within Ki were high. Changes of SUV50 and Ki_DA between the baseline PET and the PET after one bevacizumab dose correlated with changes in the Ki67 index (r-Pearson=0.35 and 0.26, p=0.06 and 0.16, respectively). [(18)F]FLT-PET is useful to demonstrate proliferative changes after a dose of bevacizumab in patients with BC. Quantification of tumor proliferation by means of SUV and Ki yielded similar results, with SUV50 performing somewhat better. A correlation between [(18)F]FLT changes and the Ki67 index was observed. Copyright © 2013 Elsevier España, S.L.U. and SEMNIM. All rights reserved.
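The Patlak graphical analysis used here for Ki can be sketched as a linear fit in transformed coordinates. The function name and the synthetic irreversible-tracer demonstration are ours; the abstract fits the 10-60 min post-injection interval:

```python
import numpy as np

def patlak_ki(t, c_tissue, c_plasma, t_start=10.0):
    """Patlak graphical analysis: Ki is the slope of C_t(t)/C_p(t)
    against the normalized time integral(0..t) C_p dtau / C_p(t),
    fit over the late, linear portion of the scan (t >= t_start,
    mirroring the abstract's 10-60 min window)."""
    cum = np.concatenate(([0.0],
        np.cumsum(0.5 * (c_plasma[1:] + c_plasma[:-1]) * np.diff(t))))
    x = cum / c_plasma
    y = c_tissue / c_plasma
    sel = t >= t_start
    slope, _ = np.polyfit(x[sel], y[sel], 1)
    return slope

# Synthetic irreversible tracer with Ki = 0.05 and a blood-volume term 0.3:
# C_t(t) = Ki * integral(C_p) + V0 * C_p, so the Patlak slope recovers Ki.
t = np.linspace(0.0, 60.0, 121)
c_p = np.exp(-0.05 * t) + 0.2
cum = np.concatenate(([0.0], np.cumsum(0.5 * (c_p[1:] + c_p[:-1]) * np.diff(t))))
c_t = 0.05 * cum + 0.3 * c_p
print(round(patlak_ki(t, c_t, c_p), 4))   # 0.05
```

In practice `c_plasma` would be the image-derived, metabolite-corrected input function from the LV or DA ROI.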

  13. Modeling UV-B Effects on Primary Production Throughout the Southern Ocean Using Multi-Sensor Satellite Data

    NASA Technical Reports Server (NTRS)

    Lubin, Dan

    2001-01-01

    This study has used a combination of ocean color, backscattered ultraviolet, and passive microwave satellite data to investigate the impact of the springtime Antarctic ozone depletion on the base of the Antarctic marine food web - primary production by phytoplankton. Spectral ultraviolet (UV) radiation fields derived from the satellite data are propagated into the water column where they force physiologically-based numerical models of phytoplankton growth. This large-scale study has been divided into two components: (1) the use of Total Ozone Mapping Spectrometer (TOMS) and Special Sensor Microwave Imager (SSM/I) data in conjunction with radiative transfer theory to derive the surface spectral UV irradiance throughout the Southern Ocean; and (2) the merging of these UV irradiances with the climatology of chlorophyll derived from SeaWiFS data to specify the input data for the physiological models.

  14. Rotation invariant deep binary hashing for fast image retrieval

    NASA Astrophysics Data System (ADS)

    Dai, Lai; Liu, Jianming; Jiang, Aiwen

    2017-07-01

    In this paper, we study how to compactly represent an image's characteristics for fast image retrieval. We propose supervised, rotation-invariant, compact, discriminative binary descriptors obtained by combining a convolutional neural network with hashing. In the proposed network, binary codes are learned by employing a hidden layer that represents latent concepts dominating the class labels. A loss function is proposed to minimize the difference between the binary descriptors of a reference image and of its rotated version. Compared with other supervised methods, the proposed network does not require pair-wise inputs for binary code learning. Experimental results show that our method is effective and achieves state-of-the-art results on the CIFAR-10 and MNIST datasets.
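    Retrieval with such binary codes comes down to ranking database items by Hamming distance to the query code, which is what makes compact hashes fast to search. A minimal sketch with toy 4-bit codes (not the learned descriptors from the paper):

```python
import numpy as np

def hamming_rank(query_code, db_codes):
    """Rank database items by Hamming distance to the query code,
    nearest first (stable sort keeps ties in database order)."""
    dists = np.count_nonzero(db_codes != query_code, axis=1)
    return np.argsort(dists, kind="stable")

db = np.array([[0, 1, 1, 0],
               [1, 1, 1, 1],
               [0, 1, 0, 0]], dtype=np.uint8)
q = np.array([0, 1, 1, 0], dtype=np.uint8)
print(hamming_rank(q, db))   # → [0 2 1]
```

    In practice the codes are packed into machine words so the distance is a popcount of an XOR, but the ranking logic is the same.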

  15. Heart imaging method

    DOEpatents

    Collins, H. Dale; Gribble, R. Parks; Busse, Lawrence J.

    1991-01-01

    A method for providing an image of the human heart's electrical system derives time-of-flight data from an array of EKG electrodes and transforms these data into phase information. The phase information, treated as a hologram, is reconstructed to provide an image, in one or two dimensions, of the electrical system of the functioning heart.

  16. Bilinearity in Spatiotemporal Integration of Synaptic Inputs

    PubMed Central

    Li, Songting; Liu, Nan; Zhang, Xiao-hui; Zhou, Douglas; Cai, David

    2014-01-01

    Neurons process information via the integration of synaptic inputs from dendrites. Many experimental results demonstrate that dendritic integration can be highly nonlinear, yet few theoretical analyses have been performed to obtain a precise quantitative characterization analytically. Based on asymptotic analysis of a two-compartment passive cable model, given a pair of time-dependent synaptic conductance inputs, we derive a bilinear spatiotemporal dendritic integration rule. The summed somatic potential can be well approximated by the linear summation of the two postsynaptic potentials elicited separately, plus an additional bilinear term proportional to their product. The rule is valid for a pair of synaptic inputs of all types, including excitation-inhibition, excitation-excitation, and inhibition-inhibition. In addition, the rule is valid during the whole dendritic integration process for a pair of synaptic inputs with arbitrary input time differences and input locations. The proportionality coefficient is demonstrated to be nearly independent of the input strengths but dependent on input times and input locations. This rule is then verified through simulation of a realistic pyramidal neuron model and through electrophysiological experiments on rat hippocampal CA1 neurons. The rule is further generalized to describe the spatiotemporal dendritic integration of multiple excitatory and inhibitory synaptic inputs. The integration of multiple inputs can be decomposed into the sum of all possible pairwise integrations, where each paired integration obeys the bilinear rule. This decomposition leads to a graph representation of dendritic integration, which can be viewed as functionally sparse. PMID:25521832
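    The bilinear rule is easy to state numerically: the summed potential is approximately the linear sum of the two individual postsynaptic potentials plus a product term. A sketch with illustrative alpha-function waveforms (the coefficient value and waveform parameters here are hypothetical, not fitted from the paper's model):

```python
import numpy as np

def alpha_psp(t, onset, tau, amp):
    """Alpha-function postsynaptic potential (illustrative waveform)."""
    s = np.clip(t - onset, 0.0, None)
    return amp * (s / tau) * np.exp(1.0 - s / tau)

t = np.linspace(0.0, 50.0, 501)                     # time in ms
v1 = alpha_psp(t, onset=5.0, tau=5.0, amp=2.0)      # EPSP (mV)
v2 = alpha_psp(t, onset=8.0, tau=5.0, amp=-1.5)     # IPSP (mV)
k = -0.05                                           # illustrative shunting coefficient

# bilinear spatiotemporal integration rule:
# summed response = linear sum + k * product term
v_sum = v1 + v2 + k * v1 * v2
```

    The product term is nonzero only where the two responses overlap in time, which is why the correction matters most for near-coincident inputs.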

  17. Interactive digital image manipulation system

    NASA Technical Reports Server (NTRS)

    Henze, J.; Dezur, R.

    1975-01-01

    The system is designed for manipulation, analysis, interpretation, and processing of a wide variety of image data. LANDSAT (ERTS) and other data in digital form can be input directly into the system. Photographic prints and transparencies are first converted to digital form with an on-line high-resolution microdensitometer. The system is implemented on a Hewlett-Packard 3000 computer with 128 K bytes of core memory and a 47.5 megabyte disk. It includes a true color display monitor, with processing memories, graphics overlays, and a movable cursor. Image data formats are flexible so that there is no restriction to a given set of remote sensors. Conversion between data types is available to provide a basis for comparison of the various data. Multispectral data is fully supported, and there is no restriction on the number of dimensions. In this way multispectral data collected at more than one point in time may simply be treated as data collected with twice (three times, etc.) the number of sensors. There are various libraries of functions available to the user: processing functions, display functions, system functions, and earth resources applications functions.

  18. Manipulation-free cultures of human iPSC-derived cardiomyocytes offer a novel screening method for cardiotoxicity.

    PubMed

    Rajasingh, Sheeja; Isai, Dona Greta; Samanta, Saheli; Zhou, Zhi-Gang; Dawn, Buddhadeb; Kinsey, William H; Czirok, Andras; Rajasingh, Johnson

    2018-04-05

    Induced pluripotent stem cell (iPSC)-based cardiac regenerative medicine requires the efficient generation, structural soundness, and proper functioning of mature cardiomyocytes derived from the patient's somatic cells. The most important functional property of cardiomyocytes is the ability to contract. Currently available methods routinely used to test and quantify cardiomyocyte function involve techniques that are labor-intensive, invasive, require sophisticated instruments, or can adversely affect cell vitality. We recently developed an optical flow imaging method that analyzes and quantifies cardiomyocyte contractile kinetics from video microscopic recordings without compromising cell quality. Specifically, our automated particle image velocimetry (PIV) analysis of phase-contrast video images captured at a high frame rate yields statistical measures characterizing the beating frequency, amplitude, average waveform, and beat-to-beat variations. Thus, it can be a powerful assessment tool to monitor cardiomyocyte quality and maturity. Here we demonstrate the ability of our analysis to characterize the chronotropic responses of human iPSC-derived cardiomyocytes to a panel of ion channel modulators and to doxorubicin, a chemotherapy agent with known cardiotoxic side effects. We conclude that the PIV-derived beat patterns can identify the elongation or shortening of specific phases in the contractility cycle, and that the obtained chronotropic responses are in accord with known clinical outcomes. Hence, this system can serve as a powerful tool to screen new and currently available pharmacological compounds for cardiotoxic effects.
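    Downstream of the PIV step, beating frequency and beat-to-beat variation can be read off a per-frame contraction signal by simple peak detection. A simplified sketch on a synthetic ~1.2 Hz trace (this stands in for the authors' pipeline; the signal and threshold rule are illustrative):

```python
import numpy as np

def beat_stats(signal, fps):
    """Estimate beating frequency (Hz) and beat-to-beat variability
    (coefficient of variation of inter-beat intervals) from a 1-D
    contraction signal, e.g. mean PIV speed per video frame."""
    thr = signal.mean()                       # crude beat threshold
    peaks = [i for i in range(1, len(signal) - 1)
             if signal[i] > thr
             and signal[i] >= signal[i - 1]
             and signal[i] > signal[i + 1]]   # local maxima above threshold
    intervals = np.diff(peaks) / fps          # seconds between beats
    freq = 1.0 / intervals.mean()
    cv = intervals.std() / intervals.mean()   # beat-to-beat variability
    return freq, cv

fps = 30.0
t = np.arange(0.0, 10.0, 1.0 / fps)
sig = np.maximum(np.sin(2 * np.pi * 1.2 * t), 0.0) ** 4  # ~1.2 Hz "beats"
freq, cv = beat_stats(sig, fps)
```

    On real recordings a dedicated peak finder with refractory-period logic would replace the threshold test, but the derived quantities (rate and interval variability) are the same.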

  19. Phase contrast imaging X-ray computed tomography: quantitative characterization of human patellar cartilage matrix with topological and geometrical features

    NASA Astrophysics Data System (ADS)

    Nagarajan, Mahesh B.; Coan, Paola; Huber, Markus B.; Diemoz, Paul C.; Wismüller, Axel

    2014-03-01

    Current assessment of cartilage is primarily based on identification of indirect markers such as joint space narrowing and increased subchondral bone density on x-ray images. In this context, phase contrast CT imaging (PCI-CT) has recently emerged as a novel imaging technique that allows a direct examination of chondrocyte patterns and their correlation to osteoarthritis through visualization of cartilage soft tissue. This study investigates the use of topological and geometrical approaches for characterizing chondrocyte patterns in the radial zone of the knee cartilage matrix in the presence and absence of osteoarthritic damage. For this purpose, topological features derived from Minkowski Functionals and geometric features derived from the Scaling Index Method (SIM) were extracted from 842 regions of interest (ROI) annotated on PCI-CT images of healthy and osteoarthritic specimens of human patellar cartilage. The extracted features were then used in a machine learning task involving support vector regression to classify ROIs as healthy or osteoarthritic. Classification performance was evaluated using the area under the receiver operating characteristic (ROC) curve (AUC). The best classification performance was observed with high-dimensional geometrical feature vectors derived from SIM (AUC: 0.95 ± 0.06), which outperformed all Minkowski Functionals (p < 0.001). These results suggest that such quantitative analysis of chondrocyte patterns in human patellar cartilage matrix involving SIM-derived geometrical features can distinguish between healthy and osteoarthritic tissue with high accuracy.
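    The AUC reported above has a direct rank-based reading: it is the probability that a randomly chosen osteoarthritic ROI scores higher than a randomly chosen healthy one (the Mann-Whitney statistic). A small sketch with made-up scores:

```python
import numpy as np

def auc_mann_whitney(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the fraction of positive-negative pairs ranked correctly,
    counting ties as half."""
    pos = np.asarray(scores_pos, dtype=float)[:, None]
    neg = np.asarray(scores_neg, dtype=float)[None, :]
    wins = (pos > neg).sum() + 0.5 * (pos == neg).sum()
    return wins / (pos.size * neg.size)

# hypothetical classifier scores for osteoarthritic vs healthy ROIs
auc = auc_mann_whitney([0.9, 0.8, 0.7, 0.4], [0.6, 0.3, 0.2])
print(auc)   # 11 of the 12 pairs are ordered correctly
```

    This pairwise formulation is exactly equivalent to integrating the empirical ROC curve, which is why the Mann-Whitney U test and AUC comparisons go hand in hand.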

  20. A reference skeletal dosimetry model for an adult male radionuclide therapy patient based on three-dimensional imaging and paired-image radiation transport

    NASA Astrophysics Data System (ADS)

    Shah, Amish P.

    The need for improved patient-specificity of skeletal dose estimates is widely recognized in radionuclide therapy. Current clinical models for marrow dose are based on skeletal mass estimates from a variety of sources and on linear chord-length distributions that do not account for particle escape into cortical bone. To predict marrow dose, these clinical models use a scheme that requires separate calculations of cumulated activity and radionuclide S values. Selection of an appropriate S value is generally limited to one of only three sources, all of which use as input the trabecular microstructure of an individual measured 25 years ago and tissue masses derived from different individuals measured 75 years ago. Our study proposed a new modeling approach to marrow dosimetry, the Paired Image Radiation Transport (PIRT) model, which properly accounts for both the trabecular microstructure and the cortical macrostructure of each skeletal site in a reference male radionuclide patient. The PIRT model, as applied within EGSnrc, requires two sets of input geometry: (1) an infinite voxel array of segmented microimages of the spongiosa acquired via microCT; and (2) a segmented ex-vivo CT image of the bone site macrostructure defining both the spongiosa (marrow, endosteum, and trabeculae) and the cortical bone cortex. Our study also proposed revised reference skeletal dosimetry models for the adult male cancer patient. Skeletal site-specific radionuclide S values were obtained for a 66-year-old male reference patient. The derivation of total skeletal S values was unique in that the necessary skeletal mass and electron dosimetry calculations were formulated from the same source bone site over the entire skeleton. 
We conclude that paired-image radiation transport techniques provide an adoptable method by which the intricate, anisotropic trabecular microstructure of a skeletal site and the physical size and shape of the bone can be handled together, for improved compilation of reference radionuclide S values. We also conclude that this comprehensive model for the adult male cancer patient should be implemented for use in patient-specific calculations for radionuclide dosimetry of the skeleton.

  1. Transform methods for precision continuum and control models of flexible space structures

    NASA Technical Reports Server (NTRS)

    Lupi, Victor D.; Turner, James D.; Chun, Hon M.

    1991-01-01

    An open loop optimal control algorithm is developed for general flexible structures, based on Laplace transform methods. A distributed parameter model of the structure is first presented, followed by a derivation of the optimal control algorithm. The control inputs are expressed in terms of their Fourier series expansions, so that a numerical solution can be easily obtained. The algorithm deals directly with the transcendental transfer functions from control inputs to outputs of interest, and structural deformation penalties, as well as penalties on control effort, are included in the formulation. The algorithm is applied to several structures of increasing complexity to show its generality.

  2. Supervised learning of tools for content-based search of image databases

    NASA Astrophysics Data System (ADS)

    Delanoy, Richard L.

    1996-03-01

    A computer environment, called the Toolkit for Image Mining (TIM), is being developed with the goal of enabling users with diverse interests and varied computer skills to create search tools for content-based image retrieval and other pattern matching tasks. Search tools are generated using a simple paradigm of supervised learning that is based on the user pointing at mistakes of classification made by the current search tool. As mistakes are identified, a learning algorithm uses the identified mistakes to build up a model of the user's intentions, construct a new search tool, apply the search tool to a test image, display the match results as feedback to the user, and accept new inputs from the user. Search tools are constructed in the form of functional templates, which are generalized matched filters capable of knowledge-based image processing. The ability of this system to learn the user's intentions from experience contrasts with other existing approaches to content-based image retrieval that base searches on the characteristics of a single input example or on a predefined and semantically-constrained textual query. Currently, TIM is capable of learning spectral and textural patterns, but should be adaptable to the learning of shapes, as well. Possible applications of TIM include not only content-based image retrieval, but also quantitative image analysis, the generation of metadata for annotating images, data prioritization or data reduction in bandwidth-limited situations, and the construction of components for larger, more complex computer vision algorithms.

  3. Mathematical prediction of core body temperature from environment, activity, and clothing: The heat strain decision aid (HSDA).

    PubMed

    Potter, Adam W; Blanchard, Laurie A; Friedl, Karl E; Cadarette, Bruce S; Hoyt, Reed W

    2017-02-01

    Physiological models provide useful summaries of complex interrelated regulatory functions. These can often be reduced to simple input requirements and simple predictions for pragmatic applications. This paper demonstrates this modeling efficiency by tracing the development of one such simple model, the Heat Strain Decision Aid (HSDA), originally developed to address Army needs. The HSDA, which derives from the Givoni-Goldman equilibrium body core temperature prediction model, uses 16 inputs from four elements: individual characteristics, physical activity, clothing biophysics, and environmental conditions. These inputs are used to mathematically predict core temperature (Tc) rise over time and can estimate water turnover from sweat loss. Based on a history of military applications such as derivation of training and mission planning tools, we conclude that the HSDA model is a robust integration of physiological rules that can guide a variety of useful predictions. The HSDA model is limited to generalized predictions of thermal strain and does not provide individualized predictions that could be obtained from physiological sensor data-driven predictive models. This fully transparent physiological model should be improved and extended with new findings and new challenging scenarios. Published by Elsevier Ltd.
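    In the Givoni-Goldman lineage that the HSDA derives from, the predicted core temperature rises toward an equilibrium value determined by the inputs. A deliberately simplified first-order sketch of that behavior (the time constant and temperatures below are illustrative placeholders, not the calibrated HSDA coefficients or its actual functional form):

```python
import math

def core_temp(t_min, tc0, tc_eq, tau=30.0):
    """Illustrative exponential approach of core temperature (deg C)
    toward an equilibrium value tc_eq over time t_min (minutes).
    tau is a hypothetical time constant, not an HSDA parameter."""
    return tc_eq + (tc0 - tc_eq) * math.exp(-t_min / tau)

# resting core temp 37.0 C rising toward an assumed equilibrium of 38.5 C
for t in (0, 30, 60, 120):
    print(t, round(core_temp(t, 37.0, 38.5), 2))
```

    In the full HSDA, the equilibrium temperature and the rise dynamics are themselves functions of the 16 inputs (metabolic rate, clothing biophysics, environment, and individual characteristics).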

  4. Aeroservoelastic Uncertainty Model Identification from Flight Data

    NASA Technical Reports Server (NTRS)

    Brenner, Martin J.

    2001-01-01

    Uncertainty modeling is a critical element in the estimation of robust stability margins for stability boundary prediction and robust flight control system development. There has been a serious deficiency to date in aeroservoelastic data analysis with attention to uncertainty modeling. Uncertainty can be estimated from flight data using both parametric and nonparametric identification techniques. The model validation problem addressed in this paper is to identify aeroservoelastic models with associated uncertainty structures from a limited amount of controlled excitation inputs over an extensive flight envelope. The challenge to this problem is to update analytical models from flight data estimates while also deriving non-conservative uncertainty descriptions consistent with the flight data. Multisine control surface command inputs and control system feedbacks are used as signals in a wavelet-based modal parameter estimation procedure for model updates. Transfer function estimates are incorporated in a robust minimax estimation scheme to get input-output parameters and error bounds consistent with the data and model structure. Uncertainty estimates derived from the data in this manner provide an appropriate and relevant representation for model development and robust stability analysis. This model-plus-uncertainty identification procedure is applied to aeroservoelastic flight data from the NASA Dryden Flight Research Center F-18 Systems Research Aircraft.

  5. Reinforcement-Learning-Based Robust Controller Design for Continuous-Time Uncertain Nonlinear Systems Subject to Input Constraints.

    PubMed

    Liu, Derong; Yang, Xiong; Wang, Ding; Wei, Qinglai

    2015-07-01

    The design of stabilizing controllers for uncertain nonlinear systems with control constraints is a challenging problem. The input constraints, coupled with the inability to accurately identify the uncertainties, motivate the design of stabilizing controllers based on reinforcement-learning (RL) methods. In this paper, a novel RL-based robust adaptive control algorithm is developed for a class of continuous-time uncertain nonlinear systems subject to input constraints. The robust control problem is converted to a constrained optimal control problem by appropriately selecting value functions for the nominal system. Distinct from the typical actor-critic dual networks employed in RL, only one critic neural network (NN) is constructed to derive the approximate optimal control. Meanwhile, unlike the initial stabilizing control often indispensable in RL, no special requirement is imposed on the initial control. By utilizing Lyapunov's direct method, the closed-loop optimal control system and the estimated weights of the critic NN are proved to be uniformly ultimately bounded. In addition, the derived approximate optimal control is verified to guarantee that the uncertain nonlinear system is stable in the sense of uniform ultimate boundedness. Two simulation examples are provided to illustrate the effectiveness and applicability of the present approach.

  6. Personal identification based on blood vessels of retinal fundus images

    NASA Astrophysics Data System (ADS)

    Fukuta, Keisuke; Nakagawa, Toshiaki; Hayashi, Yoshinori; Hatanaka, Yuji; Hara, Takeshi; Fujita, Hiroshi

    2008-03-01

    Biometric techniques have been implemented in place of conventional identification methods such as passwords in computers, automatic teller machines (ATMs), and entrance and exit management systems. We propose a personal identification (PI) system using color retinal fundus images, which are unique to each individual. The proposed identification procedure is based on comparison of an input fundus image with reference fundus images in the database. In the first step, registration between the input image and the reference image is performed; this step includes translational and rotational movement. The PI is based on a measure of similarity between blood vessel images generated from the input and reference images. The similarity measure is defined as the cross-correlation coefficient calculated from the pixel values. When the similarity is greater than a predetermined threshold, the input image is identified, meaning both the input and the reference images belong to the same person. Four hundred sixty-two fundus images, including forty-one same-person image pairs, were used for the evaluation of the proposed technique. The false rejection rate and the false acceptance rate were 9.9×10^-5% and 4.3×10^-5%, respectively. The results indicate that the proposed method has a higher performance than other biometrics except for DNA. For practical public use, a device that can easily capture retinal fundus images is needed. The proposed method is applicable not only to PI but also to a system that warns about misfiling of fundus images in medical facilities.
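    The similarity measure described (a cross-correlation coefficient over pixel values, compared against a threshold) can be sketched as follows. Random arrays stand in for the registered vessel images, and the threshold value is illustrative, not the system's calibrated one:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation coefficient between two images
    (flattened); 1.0 means identical intensity patterns."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

rng = np.random.default_rng(0)
ref = rng.random((64, 64))                  # stand-in reference vessel image
same = ref + 0.05 * rng.random((64, 64))    # same person, slight noise
other = rng.random((64, 64))                # different person

threshold = 0.9                             # illustrative decision threshold
print(ncc(ref, same) > threshold, ncc(ref, other) > threshold)  # → True False
```

    A correlated pair scores near 1.0 and is accepted; an unrelated pair scores near 0.0 and is rejected, which is the accept/reject logic the abstract describes after registration.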

  7. Integrated software environment based on COMKAT for analyzing tracer pharmacokinetics with molecular imaging.

    PubMed

    Fang, Yu-Hua Dean; Asthana, Pravesh; Salinas, Cristian; Huang, Hsuan-Ming; Muzic, Raymond F

    2010-01-01

    An integrated software package, Compartment Model Kinetic Analysis Tool (COMKAT), is presented in this report. COMKAT is an open-source software package with many functions for incorporating pharmacokinetic analysis in molecular imaging research and has both command-line and graphical user interfaces. With COMKAT, users may load and display images, draw regions of interest, load input functions, select kinetic models from a predefined list, or create a novel model and perform parameter estimation, all without having to write any computer code. For image analysis, COMKAT image tool supports multiple image file formats, including the Digital Imaging and Communications in Medicine (DICOM) standard. Image contrast, zoom, reslicing, display color table, and frame summation can be adjusted in COMKAT image tool. It also displays and automatically registers images from 2 modalities. Parametric imaging capability is provided and can be combined with the distributed computing support to enhance computation speeds. For users without MATLAB licenses, a compiled, executable version of COMKAT is available, although it currently has only a subset of the full COMKAT capability. Both the compiled and the noncompiled versions of COMKAT are free for academic research use. Extensive documentation, examples, and COMKAT itself are available on its wiki-based Web site, http://comkat.case.edu. Users are encouraged to contribute, sharing their experience, examples, and extensions of COMKAT. With integrated functionality specifically designed for imaging and kinetic modeling analysis, COMKAT can be used as a software environment for molecular imaging and pharmacokinetic analysis.

  8. Using LiDAR and quickbird data to model plant production and quantify uncertainties associated with wetland detection and land cover generalizations

    USGS Publications Warehouse

    Cook, B.D.; Bolstad, P.V.; Naesset, E.; Anderson, R. Scott; Garrigues, S.; Morisette, J.T.; Nickeson, J.; Davis, K.J.

    2009-01-01

    Spatiotemporal data from satellite remote sensing and surface meteorology networks have made it possible to continuously monitor global plant production, and to identify global trends associated with land cover/use and climate change. Gross primary production (GPP) and net primary production (NPP) are routinely derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard satellites Terra and Aqua, and estimates generally agree with independent measurements at validation sites across the globe. However, the accuracy of GPP and NPP estimates in some regions may be limited by the quality of model input variables and heterogeneity at fine spatial scales. We developed new methods for deriving model inputs (i.e., land cover, leaf area, and photosynthetically active radiation absorbed by plant canopies) from airborne laser altimetry (LiDAR) and Quickbird multispectral data at resolutions ranging from about 30 m to 1 km. In addition, LiDAR-derived biomass was used as a means for computing carbon-use efficiency. Spatial variables were used with temporal data from ground-based monitoring stations to compute a six-year GPP and NPP time series for a 3600 ha study site in the Great Lakes region of North America. Model results compared favorably with independent observations from a 400 m flux tower and a process-based ecosystem model (BIOME-BGC), but only after removing vapor pressure deficit as a constraint on photosynthesis from the MODIS global algorithm. Fine-resolution inputs captured more of the spatial variability, but estimates were similar to coarse-resolution data when integrated across the entire landscape. Failure to account for wetlands had little impact on landscape-scale estimates, because vegetation structure, composition, and conversion efficiencies were similar to upland plant communities. Plant productivity estimates were noticeably improved using LiDAR-derived variables, while uncertainties associated with land cover generalizations and wetlands in this largely forested landscape were considered less important.

  9. Using LIDAR and Quickbird Data to Model Plant Production and Quantify Uncertainties Associated with Wetland Detection and Land Cover Generalizations

    NASA Technical Reports Server (NTRS)

    Cook, Bruce D.; Bolstad, Paul V.; Naesset, Erik; Anderson, Ryan S.; Garrigues, Sebastian; Morisette, Jeffrey T.; Nickeson, Jaime; Davis, Kenneth J.

    2009-01-01

    Spatiotemporal data from satellite remote sensing and surface meteorology networks have made it possible to continuously monitor global plant production, and to identify global trends associated with land cover/use and climate change. Gross primary production (GPP) and net primary production (NPP) are routinely derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard satellites Terra and Aqua, and estimates generally agree with independent measurements at validation sites across the globe. However, the accuracy of GPP and NPP estimates in some regions may be limited by the quality of model input variables and heterogeneity at fine spatial scales. We developed new methods for deriving model inputs (i.e., land cover, leaf area, and photosynthetically active radiation absorbed by plant canopies) from airborne laser altimetry (LiDAR) and Quickbird multispectral data at resolutions ranging from about 30 m to 1 km. In addition, LiDAR-derived biomass was used as a means for computing carbon-use efficiency. Spatial variables were used with temporal data from ground-based monitoring stations to compute a six-year GPP and NPP time series for a 3600 ha study site in the Great Lakes region of North America. Model results compared favorably with independent observations from a 400 m flux tower and a process-based ecosystem model (BIOME-BGC), but only after removing vapor pressure deficit as a constraint on photosynthesis from the MODIS global algorithm. Fine-resolution inputs captured more of the spatial variability, but estimates were similar to coarse-resolution data when integrated across the entire landscape. Failure to account for wetlands had little impact on landscape-scale estimates, because vegetation structure, composition, and conversion efficiencies were similar to upland plant communities. Plant productivity estimates were noticeably improved using LiDAR-derived variables, while uncertainties associated with land cover generalizations and wetlands in this largely forested landscape were considered less important.

  10. Dynamics of Female Pelvic Floor Function Using Urodynamics, Ultrasound and Magnetic Resonance Imaging (MRI)

    PubMed Central

    Constantinou, Christos E.

    2009-01-01

    In this review the diagnostic potential of evaluating female pelvic floor muscle (PFM) function using magnetic resonance and ultrasound imaging, in the context of urodynamic observations, is considered in terms of determining the mechanisms of urinary continence. A new approach is used to consider the dynamics of PFM activity by introducing new parameters derived from imaging. Novel image processing techniques are applied to illustrate the static anatomy and dynamic PFM function of stress-incontinent women pre- and postoperatively, as compared to asymptomatic subjects. Function was evaluated from the dynamics of organ displacement produced during voluntary and reflex activation. Technical innovations include the use of ultrasound analysis of the movement of structures during maneuvers associated with external stimuli. Enabling this approach is the development of criteria and new parameters that define the kinematics of PFM function. Principal among these parameters are displacement, velocity, acceleration, and the trajectory of pelvic floor landmarks. To accomplish this objective, movement detection, including motion tracking and segmentation algorithms, was developed to derive new parameters of trajectory, displacement, velocity, acceleration, and strain of pelvic structures during different maneuvers. Results highlight the importance of the timing of movement and deformation during fast and stressful maneuvers, which is important for understanding the neuromuscular control and function of the PFM. Furthermore, observations suggest that the timing of responses is a significant factor separating continent from incontinent subjects. PMID:19303690

  11. Incremental online learning in high dimensions.

    PubMed

    Vijayakumar, Sethu; D'Souza, Aaron; Schaal, Stefan

    2005-12-01

    Locally weighted projection regression (LWPR) is a new algorithm for incremental nonlinear function approximation in high-dimensional spaces with redundant and irrelevant input dimensions. At its core, it employs nonparametric regression with locally linear models. In order to stay computationally efficient and numerically robust, each local model performs its regression analysis with a small number of univariate regressions in selected directions in input space, in the spirit of partial least squares regression. We discuss when and how local learning techniques can successfully work in high-dimensional spaces and review the various techniques for local dimensionality reduction before finally deriving the LWPR algorithm. The properties of LWPR are that it (1) learns rapidly with second-order learning methods based on incremental training, (2) uses statistically sound stochastic leave-one-out cross-validation for learning without the need to memorize training data, (3) adjusts its weighting kernels based only on local information in order to minimize the danger of negative interference of incremental learning, (4) has a computational complexity that is linear in the number of inputs, and (5) can deal with a large number of (possibly redundant) inputs, as shown in various empirical evaluations with up to 90-dimensional data sets. For a probabilistic interpretation, predictive variance and confidence intervals are derived. To our knowledge, LWPR is the first truly incremental, spatially localized learning method that can successfully and efficiently operate in very high-dimensional spaces.
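    At its core, each LWPR local model is a weighted linear fit around a query point. A much-reduced sketch of that idea, plain locally weighted regression in one dimension, without LWPR's projection directions, incremental updates, or kernel adaptation:

```python
import numpy as np

def lwr_predict(x_query, X, y, bandwidth=0.3):
    """Locally weighted linear regression for a single query point:
    fit a linear model with Gaussian weights centered on x_query,
    then evaluate it at x_query."""
    w = np.exp(-0.5 * ((X - x_query) / bandwidth) ** 2)   # Gaussian kernel weights
    A = np.stack([np.ones_like(X), X], axis=1)            # [1, x] design matrix
    W = np.diag(w)
    beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)      # weighted least squares
    return beta[0] + beta[1] * x_query

rng = np.random.default_rng(1)
X = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.sin(X) + 0.1 * rng.standard_normal(200)
pred = lwr_predict(np.pi / 2, X, y)    # close to sin(pi/2) = 1
```

    LWPR makes this scheme incremental and high-dimensional by replacing the full weighted solve with a few univariate regressions along learned projection directions, so cost stays linear in the number of inputs.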

  12. Ultrasound breast imaging using frequency domain reverse time migration

    NASA Astrophysics Data System (ADS)

    Roy, O.; Zuberi, M. A. H.; Pratt, R. G.; Duric, N.

    2016-04-01

    Conventional ultrasonography reconstruction techniques, such as B-mode, are based on a simple wave propagation model derived from a high frequency approximation. Therefore, to minimize model mismatch, the central frequency of the input pulse is typically chosen between 3 and 15 megahertz. Despite the increase in theoretical resolution, operating at higher frequencies comes at the cost of lower signal-to-noise ratio. This ultimately degrades the image contrast and overall quality at higher imaging depths. To address this issue, we investigate a reflection imaging technique, known as reverse time migration, which uses a more accurate propagation model for reconstruction. We present preliminary simulation results as well as physical phantom image reconstructions obtained using data acquired with a breast imaging ultrasound tomography prototype. The original reconstructions are filtered to remove low-wavenumber artifacts that arise due to the inclusion of the direct arrivals. We demonstrate the advantage of using an accurate sound speed model in the reverse time migration process. We also explain how the increase in computational complexity can be mitigated using a frequency domain approach and a parallel computing platform.

  13. Restoration of STORM images from sparse subset of localizations (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Moiseev, Alexander A.; Gelikonov, Grigory V.; Gelikonov, Valentine M.

    2016-02-01

    To construct a Stochastic Optical Reconstruction Microscopy (STORM) image, one should collect a sufficient number of localized fluorophores to satisfy the Nyquist criterion. This requirement limits the time resolution of the method. In this work we propose a probabilistic approach to construct STORM images from a subset of localized fluorophores 3-4 times sparser than the Nyquist criterion requires. Using a set of STORM images constructed from a number of localizations sufficient for the Nyquist criterion, we derive a model that predicts, for every location, the probability of being occupied by a fluorophore at the end of a hypothetical acquisition, taking as input the distribution of already-localized fluorophores in the proximity of that location. We show that the probability map obtained from a number of fluorophores 3-4 times smaller than the Nyquist criterion requires may itself be used as a superresolution image. Thus we are able to construct a STORM image from a subset of localized fluorophores 3-4 times sparser than the Nyquist criterion requires, proportionally decreasing STORM data acquisition time. This method may be used in combination with other approaches designed to increase STORM time resolution.

  14. Analysis of MODIS snow cover time series over the alpine regions as input for hydrological modeling

    NASA Astrophysics Data System (ADS)

    Notarnicola, Claudia; Rastner, Philipp; Irsara, Luca; Moelg, Nico; Bertoldi, Giacomo; Dalla Chiesa, Stefano; Endrizzi, Stefano; Zebisch, Marc

    2010-05-01

    Snow extent and related physical properties are key parameters in hydrology, weather forecasting and hazard warning as well as in climatological models. Satellite sensors offer a unique advantage in monitoring snow cover due to their temporal and spatial synoptic view. The Moderate Resolution Imaging Spectroradiometer (MODIS) from NASA is especially useful for this purpose due to its high revisit frequency. However, in order to evaluate the role of snow in the water cycle of a catchment, such as runoff generation due to snowmelt, remote sensing data need to be assimilated into hydrological models. This study presents a comparison on a multi-temporal basis between snow cover data derived from (1) MODIS images, (2) LANDSAT images, and (3) predictions by the hydrological model GEOtop [1,3]. The test area is located in the catchment of the Matscher Valley (South Tyrol, Northern Italy). The snow cover maps derived from MODIS images are obtained using a newly developed algorithm taking into account the specific requirements of mountain regions, with a focus on the Alps [2]. This algorithm requires the standard MODIS products MOD09 and MOD02 as input data and generates snow cover maps at a spatial resolution of 250 m. The final output is a combination of MODIS AQUA and MODIS TERRA snow cover maps, thus reducing the presence of cloudy pixels and no-data values due to topography. Using these maps, daily time series have been created for the winter seasons (November - May) from 2002 to 2008/2009. Along with the snow maps from MODIS images, snow cover maps derived from LANDSAT images have also been used. Due to their high resolution (< 30 m), they have been considered an evaluation tool. The snow cover maps are then compared with the hydrological GEOtop model outputs. The main objectives of this work are: 1. Evaluation of the MODIS snow cover algorithm using LANDSAT data 2. Investigation of snow cover and snow cover duration for the area of interest in South Tyrol 3. Derivation and interpretation of the snow line for the seven winter seasons 4. An evaluation of the model outputs in order to determine the situations in which the remotely sensed data can be used to improve the model prediction of snow coverage and related variables References [1] Rigon R., Bertoldi G. and Over T.M. 2006. GEOtop: A Distributed Hydrological Model with Coupled Water and Energy Budgets, Journal of Hydrometeorology, 7: 371-388. [2] Rastner P., Irsara L., Schellenberger T., Della Chiesa S., Bertoldi G., Endrizzi S., Notarnicola C., Steurer C., Zebisch M. 2009. Monitoraggio del manto nevoso in aree alpine con dati MODIS multi-temporali e modelli idrologici, 13th ASITA National Conference, 1-4.12.2009, Bari, Italy. [3] Zanotti F., Endrizzi S., Bertoldi G. and Rigon R. 2004. The GEOtop snow module. Hydrological Processes, 18: 3667-3679. DOI:10.1002/hyp.5794.

  15. Image processing tool for automatic feature recognition and quantification

    DOEpatents

    Chen, Xing; Stoddard, Ryan J.

    2017-05-02

    A system for defining structures within an image is described. The system includes reading an input file, preprocessing the input file while preserving metadata such as scale information, and then detecting features of the input file. In one version, the detection first applies an edge detector, followed by identification of features using a Hough transform. The output of the process is the set of identified elements within the image.
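    The edge-detect-then-Hough step can be sketched in a few lines. This is a generic straight-line Hough vote in NumPy, not the patented implementation, and it assumes the edge map has already been binarized:

```python
import numpy as np

def hough_lines(edges, n_theta=180):
    """Vote each edge pixel into a (rho, theta) accumulator; the peak
    gives the dominant line rho = x*cos(theta) + y*sin(theta)."""
    ys, xs = np.nonzero(edges)
    thetas = np.deg2rad(np.arange(n_theta))
    diag = int(np.ceil(np.hypot(*edges.shape)))          # max |rho|
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    rhos = np.outer(xs, np.cos(thetas)) + np.outer(ys, np.sin(thetas))
    rhos = np.round(rhos).astype(int) + diag             # shift to array index
    for col in range(n_theta):
        np.add.at(acc[:, col], rhos[:, col], 1)          # accumulate votes
    r, t = np.unravel_index(acc.argmax(), acc.shape)
    return r - diag, np.rad2deg(thetas[t])

# synthetic edge image containing a vertical line at x = 20
img = np.zeros((64, 64), dtype=bool)
img[:, 20] = True
rho, theta_deg = hough_lines(img)
```

    A vertical line at x = 20 accumulates all of its votes in the single bin (rho = 20, theta = 0), which is exactly the peak the detector returns.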

  16. Effect of random phase mask on input plane in photorefractive authentic memory with two-wave encryption method

    NASA Astrophysics Data System (ADS)

    Mita, Akifumi; Okamoto, Atsushi; Funakoshi, Hisatoshi

    2004-06-01

    We have proposed an all-optical authentic memory based on a two-wave encryption method. In the recording process, the image data are encrypted into white noise by random phase masks added to the input beam carrying the image data and to the reference beam. Only a reading beam with the phase-conjugate distribution of the reference beam can decrypt the encrypted data. If the encrypted data are read out with an incorrect phase distribution, the output data are transformed into white noise. Moreover, during readout, reconstructions of the encrypted data interfere destructively, resulting in zero intensity. Our memory therefore has the merit that unlawful accesses can be detected easily by measuring the output beam intensity. In our encryption method, the random phase mask on the input plane plays two important roles: transforming the input image into white noise, and preventing decryption of that noise back to the input image by blind deconvolution. Without this mask, when unauthorized users observe the output beam with a CCD during readout with a plane wave, they obtain exactly the intensity distribution of the Fourier transform of the input image, so the encrypted image can be decrypted easily by the blind deconvolution method. With the mask, even if unauthorized users observe the output beam in the same way, the encrypted image cannot be decrypted, because the observed intensity distribution is randomly dispersed by the mask. The mask thus increases the robustness of the method. In this report, we compare the correlation coefficients between the output image and the input image, which quantify how close the output image is to white noise, with and without the mask. We show that the robustness of the encryption method is increased, as the correlation coefficient improves from 0.3 to 0.1 when the mask is used.
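    The role of the two phase masks is easiest to see in the classic double random phase encoding scheme, of which this two-wave method is an optical-memory variant. A minimal NumPy simulation, assuming an idealized 4f Fourier geometry rather than the photorefractive recording itself:

```python
import numpy as np

rng = np.random.default_rng(1)
img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0       # plaintext amplitude image

phi1 = np.exp(2j * np.pi * rng.random(img.shape))     # input-plane random phase mask
phi2 = np.exp(2j * np.pi * rng.random(img.shape))     # Fourier-plane random phase mask

enc = np.fft.ifft2(np.fft.fft2(img * phi1) * phi2)    # encrypt: noise-like field

# decrypt with the conjugate Fourier mask; the input mask drops out in intensity
dec = np.fft.ifft2(np.fft.fft2(enc) * np.conj(phi2))
recovered = np.abs(dec)

# readout with a wrong key yields only noise
bad_key = np.exp(2j * np.pi * rng.random(img.shape))
bad = np.abs(np.fft.ifft2(np.fft.fft2(enc) * np.conj(bad_key)))
```

    Because |phi1| = 1 everywhere, the correct key recovers the image intensity exactly, while a wrong key leaves an essentially uncorrelated noise field.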

  17. The transfer functions of cardiac tissue during stochastic pacing.

    PubMed

    de Lange, Enno; Kucera, Jan P

    2009-01-01

    The restitution properties of cardiac action potential duration (APD) and conduction velocity (CV) are important factors in arrhythmogenesis. They determine alternans, wavebreak, and the patterns of reentrant arrhythmias. We developed a novel approach to characterize restitution using transfer functions. Transfer functions relate an input and an output quantity in terms of gain and phase shift in the complex frequency domain. We derived an analytical expression for the transfer function of interbeat intervals (IBIs) during conduction from one site (input) to another site downstream (output). Transfer functions can be efficiently obtained using a stochastic pacing protocol. Using simulations of conduction and extracellular mapping of strands of neonatal rat ventricular myocytes, we show that transfer functions permit the quantification of APD and CV restitution slopes when it is difficult to measure APD directly. We find that the normally positive CV restitution slope attenuates IBI variations. In contrast, a negative CV restitution slope (induced by decreasing extracellular [K(+)]) amplifies IBI variations with a maximum at the frequency of alternans. Hence, it potentiates alternans and renders conduction unstable, even in the absence of APD restitution. Thus, stochastic pacing and transfer function analysis represent a powerful strategy to evaluate restitution and the stability of conduction.
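    The core idea, estimating a transfer function from stochastic input/output records by averaging cross- and auto-spectra, can be sketched as follows. This is a generic Welch-style estimate on a synthetic first-order system, not the authors' interbeat-interval data:

```python
import numpy as np

def transfer_function(x, y, nseg=32):
    """Estimate H(f) = Pxy / Pxx by averaging FFT spectra over segments."""
    xs, ys = np.array_split(x, nseg), np.array_split(y, nseg)
    Pxx = Pxy = 0
    for xseg, yseg in zip(xs, ys):
        X, Y = np.fft.rfft(xseg), np.fft.rfft(yseg)
        Pxx = Pxx + X * np.conj(X)       # input auto-spectrum
        Pxy = Pxy + Y * np.conj(X)       # input-output cross-spectrum
    return Pxy / Pxx

rng = np.random.default_rng(2)
x = rng.standard_normal(4096)            # stochastic "pacing" perturbations
a = 0.6
y = np.empty_like(x)                     # one-pole low-pass: y[n] = a*y[n-1] + (1-a)*x[n]
prev = 0.0
for n, xn in enumerate(x):
    prev = a * prev + (1 - a) * xn
    y[n] = prev

H = transfer_function(x, y)              # gain ~1 at DC, (1-a)/(1+a) = 0.25 at Nyquist
```

    The magnitude and phase of H at each frequency play the role of the gain and phase shift that, in the paper, encode the APD and CV restitution properties of the tissue.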

  18. Microchannel cross load array with dense parallel input

    DOEpatents

    Swierkowski, Stefan P.

    2004-04-06

    An architecture or layout for microchannel arrays using T or Cross (+) loading for electrophoresis or other injection and separation chemistry that are performed in microfluidic configurations. This architecture enables a very dense layout of arrays of functionally identical shaped channels and it also solves the problem of simultaneously enabling efficient parallel shapes and biasing of the input wells, waste wells, and bias wells at the input end of the separation columns. One T load architecture uses circular holes with common rows, but not columns, which allows the flow paths for each channel to be identical in shape, using multiple mirror image pieces. Another T load architecture enables the access hole array to be formed on a biaxial, collinear grid suitable for EDM micromachining (square holes), with common rows and columns.

  19. Pulse pileup statistics for energy discriminating photon counting x-ray detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Adam S.; Harrison, Daniel; Lobastov, Vladimir

    Purpose: Energy discriminating photon counting x-ray detectors can be subject to a wide range of flux rates if applied in clinical settings. Even when the incident rate is a small fraction of the detector's maximum periodic rate N{sub 0}, pulse pileup leads to count rate losses and spectral distortion. Although the deterministic effects can be corrected, the detrimental effect of pileup on image noise is not well understood and may limit the performance of photon counting systems. Therefore, the authors devise a method to determine the detector count statistics and imaging performance. Methods: The detector count statistics are derived analytically for an idealized pileup model with delta pulses of a nonparalyzable detector. These statistics are then used to compute the performance (e.g., contrast-to-noise ratio) for both single material and material decomposition contrast detection tasks via the Cramer-Rao lower bound (CRLB) as a function of the detector input count rate. With more realistic unipolar and bipolar pulse pileup models of a nonparalyzable detector, the imaging task performance is determined by Monte Carlo simulations and also approximated by a multinomial method based solely on the mean detected output spectrum. Photon counting performance at different count rates is compared with ideal energy integration, which is unaffected by count rate. Results: The authors found that an ideal photon counting detector with perfect energy resolution outperforms energy integration for our contrast detection tasks, but when the input count rate exceeds 20% of N{sub 0}, many of these benefits disappear. The benefit with iodine contrast falls rapidly with increased count rate, while water contrast is not as sensitive to count rate. The performance with a delta pulse model is overoptimistic when compared to the more realistic bipolar pulse model. The multinomial approximation predicts imaging performance very close to the prediction from Monte Carlo simulations. 
The monoenergetic image with maximum contrast-to-noise ratio from dual energy imaging with ideal photon counting is only slightly better than with dual kVp energy integration, and with a bipolar pulse model, energy integration outperforms photon counting for this particular metric because of the count rate losses. However, the material resolving capability of photon counting can be superior to energy integration with dual kVp even in the presence of pileup because of the energy information available to photon counting. Conclusions: A computationally efficient multinomial approximation of the count statistics that is based on the mean output spectrum can accurately predict imaging performance. This enables photon counting system designers to directly relate the effect of pileup to its impact on imaging statistics and how best to take advantage of the benefits of energy discriminating photon counting detectors, such as material separation with spectral imaging.
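    The deterministic count-rate loss of the idealized nonparalyzable model can be checked with a small Monte Carlo simulation. Parameters here are illustrative; with delta pulses, each accepted count simply blocks the detector for a fixed dead time tau, giving the classic detected rate m = n/(1 + n*tau):

```python
import numpy as np

def detected_rate(rate, tau, t_total=2000.0, seed=3):
    """Poisson arrivals at `rate`; a nonparalyzable detector ignores
    photons arriving within dead time `tau` after each accepted count."""
    rng = np.random.default_rng(seed)
    t, counts, dead_until = 0.0, 0, -1.0
    while t < t_total:
        t += rng.exponential(1.0 / rate)   # next Poisson arrival
        if t >= dead_until:                # detector live: accept and go dead
            counts += 1
            dead_until = t + tau
    return counts / t_total

rate, tau = 5.0, 0.1                       # input rate and dead time (arb. units)
m = detected_rate(rate, tau)
m_theory = rate / (1 + rate * tau)         # nonparalyzable dead-time formula
```

    At 5 counts per dead-time-unit-free second with tau = 0.1, a third of the input counts are lost, which is the regime where the paper's noise analysis matters.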

  20. On the Spectrum of the Plenoptic Function.

    PubMed

    Gilliam, Christopher; Dragotti, Pier-Luigi; Brookes, Mike

    2014-02-01

    The plenoptic function is a powerful tool to analyze the properties of multi-view image data sets. In particular, the understanding of the spectral properties of the plenoptic function is essential in many computer vision applications, including image-based rendering. In this paper, we derive for the first time an exact closed-form expression of the plenoptic spectrum of a slanted plane with finite width and use this expression as the elementary building block to derive the plenoptic spectrum of more sophisticated scenes. This is achieved by approximating the geometry of the scene with a set of slanted planes and evaluating the closed-form expression for each plane in the set. We then use this closed-form expression to revisit uniform plenoptic sampling. In this context, we derive a new Nyquist rate for the plenoptic sampling of a slanted plane and a new reconstruction filter. Through numerical simulations, on both real and synthetic scenes, we show that the new filter outperforms alternative existing filters.

  1. Near infrared spatial frequency domain fluorescence imaging of tumor phantoms containing erythrocyte-derived optical nanoplatforms

    NASA Astrophysics Data System (ADS)

    Burns, Joshua M.; Schaefer, Elise; Anvari, Bahman

    2018-02-01

    Light-activated theranostic constructs provide a multi-functional platform for optical imaging and phototherapeutic applications. Our group has engineered nano-sized vesicles derived from erythrocytes that encapsulate the FDA-approved near infrared (NIR) absorber indocyanine green (ICG). We refer to these constructs as NIR erythrocyte-mimicking transducers (NETs). Once photo-excited by NIR light, these constructs can transduce the photon energy to emit fluorescence, generate heat, or induce chemical reactions. In this study, we investigated fluorescence imaging of NETs embedded within tumor phantoms using spatial frequency domain imaging (SFDI). Using SFDI, we were able to fluorescently image simulated tumors doped with different concentrations of NETs. These preliminary results suggest that NETs can be used in conjunction with SFDI for potential tumor imaging applications.

  2. An improved artifact removal in exposure fusion with local linear constraints

    NASA Astrophysics Data System (ADS)

    Zhang, Hai; Yu, Mali

    2018-04-01

    In exposure fusion, it is challenging to remove artifacts caused by camera motion and moving objects in the scene. An improved artifact removal method is proposed in this paper, which performs local linear adjustment during the artifact removal process. After determining a reference image, we first perform high-dynamic-range (HDR) deghosting to generate an intermediate image stack from the input image stack. Then, a linear Intensity Mapping Function (IMF) is extracted in each window, based on the intensities of the intermediate and reference images and on the intensity mean and variance of the reference image. Finally, with the extracted local linear constraints, we reconstruct a target image stack, which can be directly used to fuse a single HDR-like image. Experimental results demonstrate that the proposed method is robust and effective in removing artifacts, especially in the saturated regions of the reference image.
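    The local linear constraint can be pictured as fitting, in each window, coefficients a and b so that a*reference + b approximates the intermediate image, using only window means and variances. A block-wise NumPy sketch; the window size and exact statistics are assumptions for illustration, not the paper's formulation:

```python
import numpy as np

def local_linear_imf(ref, inter, win=8, eps=1e-4):
    """Per-window linear intensity mapping a*ref + b ~ inter, with
    coefficients from window covariance/variance (guided-filter style)."""
    H, W = ref.shape
    out = np.empty_like(ref)
    for i in range(0, H, win):
        for j in range(0, W, win):
            r = ref[i:i+win, j:j+win]
            t = inter[i:i+win, j:j+win]
            a = ((r * t).mean() - r.mean() * t.mean()) / (r.var() + eps)
            b = t.mean() - a * r.mean()
            out[i:i+win, j:j+win] = a * r + b
    return out

rng = np.random.default_rng(4)
ref = rng.random((32, 32))
inter = 0.7 * ref + 0.2              # a globally linear exposure change
mapped = local_linear_imf(ref, inter)
```

    When the intermediate image really is a linear re-exposure of the reference, every window recovers the same (a, b) and the mapping reproduces it almost exactly; eps only guards against flat windows.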

  3. Quantitative Image Restoration in Bright Field Optical Microscopy.

    PubMed

    Gutiérrez-Medina, Braulio; Sánchez Miranda, Manuel de Jesús

    2017-11-07

    Bright field (BF) optical microscopy is regarded as a poor method to observe unstained biological samples due to intrinsically low image contrast. We introduce quantitative image restoration in bright field (QRBF), a digital image processing method that restores out-of-focus BF images of unstained cells. Our procedure is based on deconvolution, using a point spread function modeled from theory. By comparing with reference images of bacteria observed in fluorescence, we show that QRBF faithfully recovers shape and enables quantification of the size of individual cells, even from a single input image. We applied QRBF in a high-throughput image cytometer to assess shape changes in Escherichia coli during hyperosmotic shock, finding size heterogeneity. We demonstrate that QRBF is also applicable to eukaryotic cells (yeast). Altogether, digital restoration emerges as a straightforward alternative to methods designed to generate contrast in BF imaging for quantitative analysis. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.
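    The restoration step rests on deconvolution with a theoretically modeled PSF. A generic Wiener-style FFT deconvolution conveys the principle; the Gaussian PSF and regularization constant below are illustrative stand-ins, not the authors' PSF model:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=1e-3):
    """Fourier-domain division, regularized by k where the PSF has no power."""
    P = np.fft.fft2(np.fft.ifftshift(psf))   # PSF centered at the origin
    B = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(B * np.conj(P) / (np.abs(P) ** 2 + k)))

# synthetic test: blur a bright square with a centered Gaussian PSF
x = np.arange(64) - 32
g = np.exp(-0.5 * (x[:, None] ** 2 + x[None, :] ** 2) / 2.0 ** 2)
psf = g / g.sum()
img = np.zeros((64, 64)); img[24:40, 24:40] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf)
```

    The regularizer k suppresses frequencies the PSF has destroyed, trading a small residual blur for noise stability, which is the same compromise any single-image restoration must make.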

  4. Field theory of pattern identification

    NASA Astrophysics Data System (ADS)

    Agu, Masahiro

    1988-06-01

    Based on the psychological experimental fact that images in mental space are transformed into other images for pattern identification, a field theory of pattern identification of geometrical patterns is developed with the use of gauge field theory in Euclidean space. Here, the ``image'' or state function ψ[χ] of the brain reacting to a geometrical pattern χ is made to correspond to the electron's wave function in Minkowski space. The pattern identification of the pattern χ with the modified pattern χ+Δχ is assumed to be such that their images ψ[χ] and ψ[χ+Δχ] in the brain are transformable with each other through suitable transformation groups such as parallel transformation, dilatation, or rotation. The transformation group is called the ``image potential'' which corresponds to the vector potential of the gauge field. An ``image field'' derived from the image potential is found to be induced in the brain when the two images ψ[χ] and ψ[χ+Δχ] are not transformable through suitable transformation groups or gauge transformations. It is also shown that, when the image field exists, the final state of the image ψ[χ] is expected to be different, depending on the paths of modifications of the pattern χ leading to a final pattern. The above fact is interpreted as a version of the Aharonov and Bohm effect of the electron's wave function [Y. Aharonov and D. Bohm, Phys. Rev. 115, 485 (1959)]. An excitation equation of the image field is also derived by postulating that patterns are identified maximally for the purpose of minimizing the number of memorized standard patterns.

  5. Input design for identification of aircraft stability and control derivatives

    NASA Technical Reports Server (NTRS)

    Gupta, N. K.; Hall, W. E., Jr.

    1975-01-01

    An approach for designing inputs to identify stability and control derivatives from flight test data is presented. This approach is based on finding inputs which provide the maximum possible accuracy of derivative estimates. Two techniques of input specification are implemented for this objective - a time domain technique and a frequency domain technique. The time domain technique gives the control input time history and can be used for any allowable duration of test maneuver, including those where data lengths can only be of short duration. The frequency domain technique specifies the input frequency spectrum, and is best applied for tests where extended data lengths, much longer than the time constants of the modes of interest, are possible. These techniques are used to design inputs to identify parameters in longitudinal and lateral linear models of conventional aircraft. The constraints of aircraft response limits, such as on structural loads, are realized indirectly through a total energy constraint on the input. Tests with simulated data and theoretical predictions show that the new approaches give input signals which can provide more accurate parameter estimates than can conventional inputs of the same total energy. Results obtained indicate that the approach has been brought to the point where it should be used on flight tests for further evaluation.

  6. Spatially Pooled Contrast Responses Predict Neural and Perceptual Similarity of Naturalistic Image Categories

    PubMed Central

    Groen, Iris I. A.; Ghebreab, Sennay; Lamme, Victor A. F.; Scholte, H. Steven

    2012-01-01

    The visual world is complex and continuously changing. Yet, our brain transforms patterns of light falling on our retina into a coherent percept within a few hundred milliseconds. Possibly, low-level neural responses already carry substantial information to facilitate rapid characterization of the visual input. Here, we computationally estimated low-level contrast responses to computer-generated naturalistic images, and tested whether spatial pooling of these responses could predict image similarity at the neural and behavioral level. Using EEG, we show that statistics derived from pooled responses explain a large amount of variance between single-image evoked potentials (ERPs) in individual subjects. Dissimilarity analysis on multi-electrode ERPs demonstrated that large differences between images in pooled response statistics are predictive of more dissimilar patterns of evoked activity, whereas images with little difference in statistics give rise to highly similar evoked activity patterns. In a separate behavioral experiment, images with large differences in statistics were judged as different categories, whereas images with little differences were confused. These findings suggest that statistics derived from low-level contrast responses can be extracted in early visual processing and can be relevant for rapid judgment of visual similarity. We compared our results with two other, well-known contrast statistics: Fourier power spectra and higher-order properties of contrast distributions (skewness and kurtosis). Interestingly, whereas these statistics allow for accurate image categorization, they do not predict ERP response patterns or behavioral categorization confusions. These converging computational, neural and behavioral results suggest that statistics of pooled contrast responses contain information that corresponds with perceived visual similarity in a rapid, low-level categorization task. PMID:23093921

  7. 3D-templated, fully automated microfluidic input/output multiplexer for endocrine tissue culture and secretion sampling.

    PubMed

    Li, Xiangpeng; Brooks, Jessica C; Hu, Juan; Ford, Katarena I; Easley, Christopher J

    2017-01-17

    A fully automated, 16-channel microfluidic input/output multiplexer (μMUX) has been developed for interfacing to primary cells and to improve understanding of the dynamics of endocrine tissue function. The device utilizes pressure driven push-up valves for precise manipulation of nutrient input and hormone output dynamics, allowing time resolved interrogation of the cells. The ability to alternate any of the 16 channels from input to output, and vice versa, provides for high experimental flexibility without the need to alter microchannel designs. 3D-printed interface templates were custom designed to sculpt the above-channel polydimethylsiloxane (PDMS) in microdevices, creating millimeter scale reservoirs and confinement chambers to interface primary murine islets and adipose tissue explants to the μMUX sampling channels. This μMUX device and control system was first programmed for dynamic studies of pancreatic islet function to collect ∼90 minute insulin secretion profiles from groups of ∼10 islets. The automated system was also operated in temporal stimulation and cell imaging mode. Adipose tissue explants were exposed to a temporal mimic of post-prandial insulin and glucose levels, while simultaneous switching between labeled and unlabeled free fatty acid permitted fluorescent imaging of fatty acid uptake dynamics in real time over a ∼2.5 hour period. Application with varying stimulation and sampling modes on multiple murine tissue types highlights the inherent flexibility of this novel, 3D-templated μMUX device. The tissue culture reservoirs and μMUX control components presented herein should be adaptable as individual modules in other microfluidic systems, such as organ-on-a-chip devices, and should be translatable to different tissues such as liver, heart, skeletal muscle, and others.

  8. Digital data from the Questa-San Luis and Santa Fe East helicopter magnetic surveys in Santa Fe and Taos Counties, New Mexico, and Costilla County, Colorado

    USGS Publications Warehouse

    Bankey, Viki; Grauch, V.J.S.; Drenth, B.J.; ,

    2006-01-01

    This report contains digital data, image files, and text files describing data formats and survey procedures for aeromagnetic data collected during high-resolution aeromagnetic surveys in southern Colorado and northern New Mexico in December, 2005. One survey covers the eastern edge of the San Luis basin, including the towns of Questa, New Mexico and San Luis, Colorado. A second survey covers the mountain front east of Santa Fe, New Mexico, including the town of Chimayo and portions of the Pueblos of Tesuque and Nambe. Several derivative products from these data are also presented as grids and images, including reduced-to-pole data and data continued to a reference surface. Images are presented in various formats and are intended to be used as input to geographic information systems, standard graphics software, or map plotting packages.

  9. Fusion of Local Statistical Parameters for Buried Underwater Mine Detection in Sonar Imaging

    NASA Astrophysics Data System (ADS)

    Maussang, F.; Rombaut, M.; Chanussot, J.; Hétet, A.; Amate, M.

    2008-12-01

    Detection of buried underwater objects, and especially mines, is currently a crucial strategic task. Images provided by sonar systems capable of penetrating the sea floor, such as synthetic aperture sonars (SASs), are of great interest for the detection and classification of such objects. However, the signal-to-noise ratio is fairly low, and advanced information processing is required for a correct and reliable detection of the echoes generated by the objects. The detection method proposed in this paper is based on a data-fusion architecture using belief theory. The input data of this architecture are local statistical characteristics extracted from SAS data, corresponding to the first-, second-, third-, and fourth-order statistical properties of the sonar images. The interest of these parameters is derived from a statistical model of the sonar data. Numerical criteria are also proposed to estimate the detection performances and to validate the method.
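    The four inputs to the fusion architecture are local first- to fourth-order statistics; blockwise moments of this kind can be computed directly. The non-overlapping blocks and the synthetic "echo" below are assumptions for illustration, not the paper's SAS processing chain:

```python
import numpy as np

def local_moments(img, win=8):
    """Mean, variance, skewness, and kurtosis over non-overlapping
    win x win blocks (the first- to fourth-order local statistics)."""
    H, W = img.shape
    blocks = img[:H - H % win, :W - W % win].reshape(
        H // win, win, W // win, win).transpose(0, 2, 1, 3).reshape(
        H // win, W // win, -1)
    mu = blocks.mean(-1)
    sd = blocks.std(-1)
    z = (blocks - mu[..., None]) / sd[..., None]   # standardized pixels
    return mu, sd ** 2, (z ** 3).mean(-1), (z ** 4).mean(-1)

rng = np.random.default_rng(5)
img = rng.standard_normal((64, 64))    # background speckle-like noise
img[16:24, 16:24] += 4.0               # a bright "echo" region
mu, var, skew, kurt = local_moments(img)
```

    A block containing the echo stands out strongly in the mean map, while pure-noise blocks stay near zero mean and near-Gaussian higher moments; the fusion step then combines such maps into belief masses.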

  10. Phase-and-amplitude recovery from a single phase-contrast image using partially spatially coherent x-ray radiation

    NASA Astrophysics Data System (ADS)

    Beltran, Mario A.; Paganin, David M.; Pelliccia, Daniele

    2018-05-01

    A simple method of phase-and-amplitude extraction is derived that corrects for image blurring induced by partially spatially coherent incident illumination using only a single intensity image as input. The method is based on Fresnel diffraction theory for the case of high Fresnel number, merged with the space-frequency description formalism used to quantify partially coherent fields, and assumes the object under study is composed of a single material. A priori knowledge of the object’s complex refractive index and information obtained by characterizing the spatial coherence of the source are required. The algorithm was applied to propagation-based phase-contrast data measured with a laboratory-based micro-focus x-ray source. The blurring due to the finite spatial extent of the source is embedded within the algorithm as a simple correction term to the so-called Paganin algorithm and is also numerically stable in the presence of noise.
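    The correction term is grafted onto the single-material Paganin algorithm, which is itself a one-line Fourier filter. A sketch of that uncorrected baseline with a self-consistent forward simulation; the material and geometry constants are illustrative, and the paper's partial-coherence correction is omitted:

```python
import numpy as np

def paganin_retrieve(I, I0, delta, beta, z, lam, pix):
    """Single-material phase retrieval: low-pass the normalized image
    with 1/(1 + z*delta*k^2/mu), then -log/mu gives thickness."""
    mu = 4 * np.pi * beta / lam
    ky = 2 * np.pi * np.fft.fftfreq(I.shape[0], pix)
    kx = 2 * np.pi * np.fft.fftfreq(I.shape[1], pix)
    k2 = ky[:, None] ** 2 + kx[None, :] ** 2
    lowpass = 1.0 / (1.0 + z * delta * k2 / mu)
    contact = np.real(np.fft.ifft2(np.fft.fft2(I / I0) * lowpass))
    return -np.log(np.clip(contact, 1e-8, None)) / mu

# forward-simulate a propagated image of a Gaussian thickness profile
delta, beta, z, lam, pix = 1e-7, 1e-8, 0.5, 1e-10, 1e-6   # illustrative constants
mu = 4 * np.pi * beta / lam
xy = np.arange(64) - 32
T0 = 1e-5 * np.exp(-(xy[:, None] ** 2 + xy[None, :] ** 2) / (2 * 8.0 ** 2))
ky = 2 * np.pi * np.fft.fftfreq(64, pix)
k2 = ky[:, None] ** 2 + ky[None, :] ** 2
I = np.real(np.fft.ifft2(np.fft.fft2(np.exp(-mu * T0)) * (1 + z * delta * k2 / mu)))
T = paganin_retrieve(I, 1.0, delta, beta, z, lam, pix)
```

    Because the forward model here applies exactly the inverse filter, the retrieved thickness T matches T0 to machine precision; on measured data, the low-pass also acts as the noise-stabilizing step the abstract mentions.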

  11. Using aerial images for establishing a workflow for the quantification of water management measures

    NASA Astrophysics Data System (ADS)

    Leuschner, Annette; Merz, Christoph; van Gasselt, Stephan; Steidl, Jörg

    2017-04-01

    Quantified landscape characteristics, such as morphology, land use or hydrological conditions, play an important role in hydrological investigations, as landscape parameters directly control the overall water balance. A powerful assimilation and geospatial analysis of remote sensing datasets in combination with hydrological modeling allows landscape parameters and water balances to be quantified efficiently. This study focuses on the development of a workflow to extract hydrologically relevant data from aerial image datasets and derived products in order to allow an effective parametrization of a hydrological model. Consistent and self-contained data sources are indispensable for achieving reasonable modeling results. In order to minimize uncertainties and inconsistencies, input parameters for modeling should, if possible, be extracted mainly from one remote-sensing dataset. Here, aerial images have been chosen because of their high spatial and spectral resolution, which permits the extraction of various model-relevant parameters, such as morphology, land use or artificial drainage systems. The methodological repertoire to extract environmental parameters ranges from analyses of digital terrain models, through multispectral classification and segmentation of land use distribution maps, to mapping of artificial drainage systems based on spectral and visual inspection. The workflow has been tested for a mesoscale catchment area which forms a characteristic hydrological system of a young moraine landscape located in the state of Brandenburg, Germany. These datasets were used as input for multi-temporal hydrological modelling of water balances to detect and quantify anthropogenic and meteorological impacts. ArcSWAT, a GIS-implemented extension and graphical user input interface for the Soil and Water Assessment Tool (SWAT), was chosen. 
The results of this modeling approach provide the basis for anticipating future development of the hydrological system and for adapting water resource management decisions to system changes.

  12. Phenomenological constraints on A N in p ↑ p → π X from Lorentz invariance relations

    DOE PAGES

    Gamberg, Leonard; Kang, Zhong-Bo; Pitonyak, Daniel; ...

    2017-04-27

    Here, we present a new analysis of A N in p ↑ p → πX within the collinear twist-3 factorization formalism. We incorporate recently derived Lorentz invariance relations into our calculation and focus on input from the kinematical twist-3 functions, which are weighted integrals of transverse momentum dependent (TMD) functions. In particular, we use the latest extractions of the Sivers and Collins functions with TMD evolution to compute certain terms in AN. Consequently, we are able to constrain the remaining contributions from the lesser-known dynamical twist-3 correlators.

  13. AMPS definition study on Optical Band Imager and Photometer System (OBIPS)

    NASA Technical Reports Server (NTRS)

    Davis, T. N.; Deehr, C. S.; Hallinan, T. J.; Wescott, E. M.

    1975-01-01

    A study was conducted to define the characteristics of a modular optical diagnostic system (OBIPS) for AMPS, to provide input to Phase B studies, and to give information useful for experiment planning and design of other instrumentation. The system described consists of visual and UV-band imagers and visual and UV-band photometers; of these the imagers are most important because of their ability to measure intensity as a function of two spatial dimensions and time with high resolution. The various subsystems of OBIPS are in themselves modular with modules having a high degree of interchangeability for versatility, economy, and redundancy.

  14. A new optimal seam method for seamless image stitching

    NASA Astrophysics Data System (ADS)

    Xue, Jiale; Chen, Shengyong; Cheng, Xu; Han, Ying; Zhao, Meng

    2017-07-01

    A novel optimal seam method that aims to stitch images with overlapping areas more seamlessly has been proposed. Because the traditional gradient-domain optimal seam method yields poor color-difference measurement and the fusion algorithm is time-consuming, the input images are converted to HSV space and a new energy function is designed to seek the optimal stitching path. To smooth the optimal stitching path, a simplified pixel correction and a weighted average method are applied individually. The proposed method exhibits better performance in eliminating the stitching seam than the traditional gradient optimal seam and higher efficiency than the multi-band blending algorithm.
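An optimal stitching path over an energy map is typically found by dynamic programming; the sketch below finds a minimal-cost vertical seam through a per-pixel energy array. This is a generic formulation, not the paper's HSV-based energy function:

```python
def optimal_seam(energy):
    """Minimal-cost vertical seam through an energy map (list of rows)."""
    h, w = len(energy), len(energy[0])
    cost = [row[:] for row in energy]
    # Forward pass: accumulate the cheapest path cost reaching each pixel.
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(0, x - 1), min(w - 1, x + 1)
            cost[y][x] += min(cost[y - 1][lo:hi + 1])
    # Backtrack from the cheapest pixel in the bottom row.
    x = min(range(w), key=lambda i: cost[h - 1][i])
    seam = [x]
    for y in range(h - 1, 0, -1):
        lo, hi = max(0, x - 1), min(w - 1, x + 1)
        x = min(range(lo, hi + 1), key=lambda i: cost[y - 1][i])
        seam.append(x)
    return list(reversed(seam))
```

The seam follows the low-energy valley of the map, so when the energy encodes inter-image difference, the cut passes where the two images agree best.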

  15. Small Interactive Image Processing System (SMIPS) system description

    NASA Technical Reports Server (NTRS)

    Moik, J. G.

    1973-01-01

    The Small Interactive Image Processing System (SMIPS) operates under control of the IBM-OS/MVT operating system and uses an IBM-2250 model 1 display unit as interactive graphic device. The input language in the form of character strings or attentions from keys and light pen is interpreted and causes processing of built-in image processing functions as well as execution of a variable number of application programs kept on a private disk file. A description of design considerations is given and characteristics, structure and logic flow of SMIPS are summarized. Data management and graphic programming techniques used for the interactive manipulation and display of digital pictures are also discussed.

  16. Performance of SEM scintillation detector evaluated by modulation transfer function and detective quantum efficiency function.

    PubMed

    Bok, Jan; Schauer, Petr

    2014-01-01

    In the paper, the SEM detector is evaluated by the modulation transfer function (MTF), which expresses the detector's influence on the SEM image contrast. This is a novel approach, since the MTF was previously used to describe only area imaging detectors or whole imaging systems. The measurement technique and calculation of the MTF for the SEM detector are presented. In addition, the measurement and calculation of the detective quantum efficiency (DQE) as a function of spatial frequency for the SEM detector are described. In this technique, a time-modulated e-beam is used to create a well-defined input signal for the detector. The MTF and DQE measurements are demonstrated on the Everhart-Thornley scintillation detector. The detector was fitted alternately with YAG:Ce, YAP:Ce, and CRY18 single-crystal scintillators. The presented MTF and DQE characteristics show good imaging properties of the detectors with the YAP:Ce or CRY18 scintillator, especially for a specific type of e-beam scan. The results demonstrate the great benefit of describing SEM detectors using the MTF and DQE. In addition, point-by-point and continual-sweep e-beam scans in SEM were discussed, and their influence on image quality was revealed using the MTF. © 2013 Wiley Periodicals, Inc.
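The MTF of a detector is commonly computed as the normalized magnitude of the Fourier transform of its line spread function (LSF). The sketch below shows that standard relation with a naive DFT, not the paper's modulated e-beam measurement procedure:

```python
import math

def mtf_from_lsf(lsf):
    """MTF as the normalized DFT magnitude of the line spread function."""
    n = len(lsf)
    mags = []
    for k in range(n // 2 + 1):   # frequencies up to Nyquist
        re = sum(v * math.cos(2 * math.pi * k * i / n) for i, v in enumerate(lsf))
        im = -sum(v * math.sin(2 * math.pi * k * i / n) for i, v in enumerate(lsf))
        mags.append(math.hypot(re, im))
    return [m / mags[0] for m in mags]   # normalize so MTF(0) = 1
```

An ideal (delta-like) LSF gives unit MTF at every frequency, while any broadening of the LSF suppresses contrast at high spatial frequencies.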

  17. Robust reinforcement learning.

    PubMed

    Morimoto, Jun; Doya, Kenji

    2005-02-01

    This letter proposes a new reinforcement learning (RL) paradigm that explicitly takes into account input disturbance as well as modeling errors. The use of environmental models in RL is quite popular for both offline learning using simulations and for online action planning. However, the difference between the model and the real environment can lead to unpredictable, and often unwanted, results. Based on the theory of H(infinity) control, we consider a differential game in which a "disturbing" agent tries to make the worst possible disturbance while a "control" agent tries to make the best control input. The problem is formulated as finding a min-max solution of a value function that takes into account the amount of the reward and the norm of the disturbance. We derive online learning algorithms for estimating the value function and for calculating the worst disturbance and the best control in reference to the value function. We tested the paradigm, which we call robust reinforcement learning (RRL), on the control task of an inverted pendulum. In the linear domain, the policy and the value function learned by online algorithms coincided with those derived analytically by the linear H(infinity) control theory. For a fully nonlinear swing-up task, RRL achieved robust performance with changes in the pendulum weight and friction, while a standard reinforcement learning algorithm could not deal with these changes. We also applied RRL to the cart-pole swing-up task, and a robust swing-up policy was acquired.
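The min-max fixed point at the heart of robust RL can be illustrated with a toy single-state discrete game in which the control agent maximizes and the disturbing agent minimizes the discounted return. This is a drastic simplification of the letter's continuous H-infinity formulation, for illustration only:

```python
def minimax_value(rewards, gamma=0.9, iters=200):
    """Worst-case value of a single-state game.

    rewards[a][d] is the immediate reward for control action a
    under disturbance d; the control picks a, the disturbance picks d.
    """
    v = 0.0
    for _ in range(iters):
        # Control maximizes over its actions the worst case over disturbances.
        v = max(min(r + gamma * v for r in row) for row in rewards)
    return v
```

The converged value satisfies v = max_a min_d [r(a, d) + gamma * v], the discrete analogue of the min-max value function the letter solves with online learning.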

  18. Correspondence of the brain's functional architecture during activation and rest.

    PubMed

    Smith, Stephen M; Fox, Peter T; Miller, Karla L; Glahn, David C; Fox, P Mickle; Mackay, Clare E; Filippini, Nicola; Watkins, Kate E; Toro, Roberto; Laird, Angela R; Beckmann, Christian F

    2009-08-04

    Neural connections, providing the substrate for functional networks, exist whether or not they are functionally active at any given moment. However, it is not known to what extent brain regions are continuously interacting when the brain is "at rest." In this work, we identify the major explicit activation networks by carrying out an image-based activation network analysis of thousands of separate activation maps derived from the BrainMap database of functional imaging studies, involving nearly 30,000 human subjects. Independently, we extract the major covarying networks in the resting brain, as imaged with functional magnetic resonance imaging in 36 subjects at rest. The sets of major brain networks, and their decompositions into subnetworks, show close correspondence between the independent analyses of resting and activation brain dynamics. We conclude that the full repertoire of functional networks utilized by the brain in action is continuously and dynamically "active" even when at "rest."

  19. Optimization of a hardware implementation for pulse coupled neural networks for image applications

    NASA Astrophysics Data System (ADS)

    Gimeno Sarciada, Jesús; Lamela Rivera, Horacio; Warde, Cardinal

    2010-04-01

    Pulse coupled neural networks (PCNNs) are a very useful tool for image processing and visual applications, since they are invariant to image changes such as rotation, scale, or certain distortions. Among other characteristics, a PCNN converts a given image input into a temporal representation that can easily be analyzed later for pattern recognition. The structure of a PCNN, though, makes it necessary to determine all of its parameters very carefully for it to function optimally, so that the responses to the kinds of inputs it will face are clearly discriminated, allowing easy and fast post-processing that yields useful results. This tweaking of the system is a taxing process. In this paper we analyze and compare two methods for modeling PCNNs. A purely mathematical model is programmed, and a similar circuit-level model is also designed. Both are then used to determine the optimal values of the several parameters of a PCNN: gain, threshold, and time constants for feed-in, threshold, and linking, leading to an optimal design for image recognition. The results are compared for usefulness, accuracy, and speed, as well as for performance and time requirements for fast and easy design, thus providing a tool for easier future management of a PCNN for different tasks.
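The pulse dynamics that turn input intensity into a temporal firing pattern can be sketched with a single simplified PCNN neuron. Linking and feeding decays are omitted, and the threshold decay constant `a_t` and threshold gain `v_t` are illustrative values, not parameters from the paper:

```python
import math

def pcnn_neuron(stimulus, steps=30, a_t=0.3, v_t=20.0):
    """Firing times of one simplified PCNN neuron (no linking input)."""
    theta, fires = 0.0, []
    for n in range(steps):
        theta *= math.exp(-a_t)      # dynamic threshold decays...
        if stimulus > theta:         # ...until the input exceeds it
            fires.append(n)
            theta += v_t             # firing pulse raises the threshold again
    return fires
```

Stronger stimuli overtake the decaying threshold sooner, so pixel intensity is encoded in the firing period, which is the temporal representation the abstract refers to.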

  20. 40 CFR 60.44c - Compliance and performance test methods and procedures for sulfur dioxide.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... = Fraction of the total heat input from fuel combustion derived from coal and oil, as determined by... total heat input from fuel combustion derived from coal and oil, as determined by applicable procedures... generating unit load during the 30-day period does not have to be the maximum design heat input capacity, but...

  1. Computing and analyzing the sensitivity of MLP due to the errors of the i.i.d. inputs and weights based on CLT.

    PubMed

    Yang, Sheng-Sung; Ho, Chia-Lu; Siu, Sammy

    2010-12-01

    In this paper, we propose an algorithm based on the central limit theorem to compute the sensitivity of the multilayer perceptron (MLP) due to the errors of the inputs and weights. For simplicity and practicality, all inputs and weights studied here are independently identically distributed (i.i.d.). The theoretical results derived from the proposed algorithm show that the sensitivity of the MLP is affected by the number of layers and the number of neurons adopted in each layer. To prove the reliability of the proposed algorithm, some experimental results of the sensitivity are also presented, and they match the theoretical ones. The good agreement between the theoretical results and the experimental results verifies the reliability and feasibility of the proposed algorithm. Furthermore, the proposed algorithm can also be applied to compute precisely the sensitivity of the MLP with any available activation functions and any types of i.i.d. inputs and weights.
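The sensitivity studied here can also be approximated empirically by Monte Carlo: perturb the i.i.d. inputs and measure the resulting output deviation. The sketch below uses a tiny tanh MLP with made-up weights as a stand-in; it is not the paper's CLT-based algorithm:

```python
import math
import random

def mlp_out(x, w):
    """Tiny 1-hidden-layer MLP with tanh activations (illustrative shapes)."""
    h = [math.tanh(sum(xi * wij for xi, wij in zip(x, unit))) for unit in w[0]]
    return math.tanh(sum(hi * wi for hi, wi in zip(h, w[1])))

def sensitivity(x, w, eps=0.01, trials=2000, seed=1):
    """RMS output deviation under i.i.d. Gaussian input errors of scale eps."""
    rng = random.Random(seed)
    base = mlp_out(x, w)
    dev = 0.0
    for _ in range(trials):
        xp = [xi + rng.gauss(0, eps) for xi in x]   # perturb every input i.i.d.
        dev += (mlp_out(xp, w) - base) ** 2
    return math.sqrt(dev / trials)
```

For small error scales the deviation grows roughly linearly with eps, which is the regime in which a CLT-based analytical sensitivity is expected to match such an empirical estimate.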

  2. Digital Data from the Great Sand Dunes and Poncha Springs Aeromagnetic Surveys, South-Central Colorado

    USGS Publications Warehouse

    Drenth, B.J.; Grauch, V.J.S.; Bankey, Viki; New Sense Geophysics, Ltd.

    2009-01-01

    This report contains digital data, image files, and text files describing data formats and survey procedures for two high-resolution aeromagnetic surveys in south-central Colorado: one in the eastern San Luis Valley, Alamosa and Saguache Counties, and the other in the southern Upper Arkansas Valley, Chaffee County. In the San Luis Valley, the Great Sand Dunes survey covers a large part of Great Sand Dunes National Park and Preserve and extends south along the mountain front to the foot of Mount Blanca. In the Upper Arkansas Valley, the Poncha Springs survey covers the town of Poncha Springs and vicinity. The digital files include grids, images, and flight-line data. Several derivative products from these data are also presented as grids and images, including two grids of reduced-to-pole aeromagnetic data and data continued to a reference surface. Images are presented in various formats and are intended to be used as input to geographic information systems, standard graphics software, or map plotting packages.

  3. Convolution Operation of Optical Information via Quantum Storage

    NASA Astrophysics Data System (ADS)

    Li, Zhixiang; Liu, Jianji; Fan, Hongming; Zhang, Guoquan

    2017-06-01

    We proposed a novel method to achieve optical convolution of two input images via quantum storage based on electromagnetically induced transparency (EIT) effect. By placing an EIT media in the confocal Fourier plane of the 4f-imaging system, the optical convolution of the two input images can be achieved in the image plane.
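The optical arrangement exploits the convolution theorem: a pointwise product in the Fourier plane of the 4f system corresponds to convolution in the image plane. A numerical sketch of that identity (circular convolution via a naive DFT, not a model of the EIT medium itself):

```python
import cmath

def dft(v, inverse=False):
    """Naive discrete Fourier transform (O(n^2), fine for tiny examples)."""
    n, s = len(v), (1 if inverse else -1)
    out = [sum(v[j] * cmath.exp(s * 2j * cmath.pi * k * j / n) for j in range(n))
           for k in range(n)]
    return [x / n for x in out] if inverse else out

def fourier_convolve(a, b):
    """Circular convolution computed as a pointwise product in the Fourier plane."""
    fa, fb = dft(a), dft(b)
    return [round(x.real, 6) for x in dft([p * q for p, q in zip(fa, fb)], inverse=True)]
```

Convolving with a shifted delta simply shifts the input, mirroring how the 4f system forms the convolution of the two input images in its image plane.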

  4. Online image classification under monotonic decision boundary constraint

    NASA Astrophysics Data System (ADS)

    Lu, Cheng; Allebach, Jan; Wagner, Jerry; Pitta, Brandi; Larson, David; Guo, Yandong

    2015-01-01

    Image classification is a prerequisite for copy quality enhancement in an all-in-one (AIO) device that comprises a printer and scanner and can be used to scan, copy and print. Different processing pipelines are provided in an AIO printer. Each of the processing pipelines is designed specifically for one type of input image to achieve the optimal output image quality. A typical approach to this problem is to apply a Support Vector Machine (SVM) to classify the input image and feed it to the corresponding processing pipeline. Online SVM training can help users improve classification performance as input images accumulate. At the same time, we want to make a quick decision on the input image to speed up classification, which means that sometimes the AIO device does not need to scan the entire image before making a final decision. These two constraints, online SVM training and quick decision-making, raise questions regarding: 1) what features are suitable for classification; 2) how the decision boundary should be controlled in online SVM training. This paper discusses the compatibility of online SVM training with quick decision capability.
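Online SVM training is commonly realized as stochastic gradient descent on the hinge loss, updating the decision boundary one sample at a time. The sketch below shows one such per-sample update with illustrative learning rate and regularization; it is a generic formulation, not the AIO system's actual classifier:

```python
def sgd_hinge_step(w, b, x, y, lr=0.1, lam=0.01):
    """One online linear-SVM update (SGD on the hinge loss); y in {-1, +1}."""
    margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
    w = [wi - lr * lam * wi for wi in w]          # regularization shrinks w
    if margin < 1:                                # sample inside the margin:
        w = [wi + lr * y * xi for wi, xi in zip(w, x)]   # push boundary away
        b += lr * y
    return w, b
```

Because each update touches only the current sample, the boundary can be refined as scanned images accumulate, which is the online property the paper relies on.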

  5. Slow feature analysis: unsupervised learning of invariances.

    PubMed

    Wiskott, Laurenz; Sejnowski, Terrence J

    2002-04-01

    Invariant features of temporally varying signals are useful for analysis and classification. Slow feature analysis (SFA) is a new method for learning invariant or slowly varying features from a vectorial input signal. It is based on a nonlinear expansion of the input signal and application of principal component analysis to this expanded signal and its time derivative. It is guaranteed to find the optimal solution within a family of functions directly and can learn to extract a large number of decorrelated features, which are ordered by their degree of invariance. SFA can be applied hierarchically to process high-dimensional input signals and extract complex features. SFA is applied first to complex cell tuning properties based on simple cell output, including disparity and motion. Then more complicated input-output functions are learned by repeated application of SFA. Finally, a hierarchical network of SFA modules is presented as a simple model of the visual system. The same unstructured network can learn translation, size, rotation, contrast, or, to a lesser degree, illumination invariance for one-dimensional objects, depending on only the training stimulus. Surprisingly, only a few training objects suffice to achieve good generalization to new objects. The generated representation is suitable for object recognition. Performance degrades if the network is trained to learn multiple invariances simultaneously.
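The SFA objective can be illustrated in miniature: among unit-variance candidate features, pick the one whose time derivative has the smallest mean square. This is only the selection criterion, not the full nonlinear-expansion-plus-PCA algorithm described in the abstract:

```python
def slowness(signal):
    """Delta value: mean squared temporal difference of the normalized signal."""
    n = len(signal)
    mean = sum(signal) / n
    var = sum((v - mean) ** 2 for v in signal) / n
    z = [(v - mean) / var ** 0.5 for v in signal]          # unit variance
    return sum((z[i + 1] - z[i]) ** 2 for i in range(n - 1)) / (n - 1)

def slowest_feature(candidates):
    """Index of the candidate feature with the smallest Delta (the 'slowest')."""
    return min(range(len(candidates)), key=lambda i: slowness(candidates[i]))
```

Normalizing to unit variance prevents the trivial solution of a constant signal, which is why SFA orders its decorrelated output features by exactly this Delta measure.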

  6. New generation of hydraulic pedotransfer functions for Europe

    PubMed Central

    Tóth, B; Weynants, M; Nemes, A; Makó, A; Bilas, G; Tóth, G

    2015-01-01

    A range of continental-scale soil datasets exists in Europe with different spatial representation and based on different principles. We developed comprehensive pedotransfer functions (PTFs) for applications principally on spatial datasets with continental coverage. The PTF development included the prediction of soil water retention at various matric potentials and prediction of parameters to characterize soil moisture retention and the hydraulic conductivity curve (MRC and HCC) of European soils. We developed PTFs with a hierarchical approach, determined by the input requirements. The PTFs were derived by using three statistical methods: (i) linear regression where there were quantitative input variables, (ii) a regression tree for qualitative, quantitative and mixed types of information and (iii) mean statistics of developer-defined soil groups (class PTF) when only qualitative input parameters were available. Data of the recently established European Hydropedological Data Inventory (EU-HYDI), which holds the most comprehensive geographical and thematic coverage of hydro-pedological data in Europe, were used to train and test the PTFs. The applied modelling techniques and the EU-HYDI allowed the development of hydraulic PTFs that are more reliable and applicable for a greater variety of input parameters than those previously available for Europe. Therefore the new set of PTFs offers tailored advanced tools for a wide range of applications in the continent. PMID:25866465

  7. Flight data identification of six degree-of-freedom stability and control derivatives of a large crane type helicopter

    NASA Technical Reports Server (NTRS)

    Tomaine, R. L.

    1976-01-01

    Flight test data from a large 'crane' type helicopter were collected and processed for the purpose of identifying vehicle rigid body stability and control derivatives. The process consisted of using digital and Kalman filtering techniques for state estimation and Extended Kalman filtering for parameter identification, utilizing a least squares algorithm for initial derivative and variance estimates. Data were processed for indicated airspeeds from 0 m/sec to 152 m/sec. Pulse, doublet and step control inputs were investigated. Digital filter frequency did not have a major effect on the identification process, while the initial derivative estimates and the estimated variances had an appreciable effect on many derivative estimates. The major derivatives identified agreed fairly well with analytical predictions and engineering experience. Doublet control inputs provided better results than pulse or step inputs.

  8. Joint image and motion reconstruction for PET using a B-spline motion model.

    PubMed

    Blume, Moritz; Navab, Nassir; Rafecas, Magdalena

    2012-12-21

    We present a novel joint image and motion reconstruction method for PET. The method is based on gated data and reconstructs an image together with a motion function. The motion function can be used to transform the reconstructed image to any of the input gates. All available events (from all gates) are used in the reconstruction. The presented method uses a B-spline motion model, together with a novel motion regularization procedure that does not need a regularization parameter (which is usually extremely difficult to adjust). Several image and motion grid levels are used in order to reduce the reconstruction time. In a simulation study, the presented method is compared to a recently proposed joint reconstruction method. While the presented method provides comparable reconstruction quality, it is much easier to use since no regularization parameter has to be chosen. Furthermore, since the B-spline discretization of the motion function depends on fewer parameters than a displacement field, the presented method is considerably faster and consumes less memory than its counterpart. The method is also applied to clinical data, for which a novel purely data-driven gating approach is presented.
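A B-spline motion model represents the deformation with far fewer parameters than a dense displacement field, which is the source of the memory and speed advantage noted above. The 1-D sketch below evaluates a uniform cubic B-spline displacement from control-point coefficients; the actual method operates on 3-D gated volumes:

```python
def bspline_basis(u):
    """The four uniform cubic B-spline basis weights at local coordinate u in [0, 1)."""
    return [(1 - u) ** 3 / 6.0,
            (3 * u ** 3 - 6 * u ** 2 + 4) / 6.0,
            (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6.0,
            u ** 3 / 6.0]

def displacement(x, coeffs, spacing):
    """1-D B-spline displacement at position x from control-point coefficients."""
    i = int(x / spacing)          # index of the spanning control-point window
    u = x / spacing - i           # local coordinate within the span
    w = bspline_basis(u)
    return sum(w[k] * coeffs[i + k] for k in range(4))
```

Because the basis weights sum to one, the model reproduces rigid shifts exactly and varies smoothly between control points, a property that suits respiratory and cardiac motion.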

  9. Low-noise and high-speed photodetection system using optical feedback with a current amplification function

    NASA Astrophysics Data System (ADS)

    Akiba, M.

    2015-09-01

    A photodetection system with an optical-feedback circuit accompanied by current amplification was fabricated to minimize the drawbacks associated with a transimpedance amplifier (TIA) with a very high resistance feedback resistor. Current amplification was implemented by extracting an output light from the same light source that emitted the feedback light. The current gain corresponds to the ratio of the photocurrent created by the output light to that created by the feedback light because the feedback current value is identical to the input photocurrent value generated by an input light to be measured. The current gain has no theoretical limit. The output light was detected by a photodiode with a TIA having a small feedback resistance. The expression for the input-referred noise current of the optical-feedback photodetection system was derived, and the trade-off between sensitivity and response, which is a characteristic of TIA, was found to considerably improve. An optical-feedback photodetection system with an InGaAs pin photodiode was fabricated. The measured noise equivalent power of the system was 1.7 fW/Hz^(1/2) at 10 Hz and 1.3 μm, which is consistent with the derived expression. The time response of the system was found to deteriorate with decreasing photocurrent. The 50% rise time for a light pulse input increased from 3.1 μs at a photocurrent of 10 nA to 15 μs at photocurrents below 10 pA. The bandwidth of the input-referred noise current was 7 kHz, which is consistent with rise times below 10 pA.

  10. Low-noise and high-speed photodetection system using optical feedback with a current amplification function.

    PubMed

    Akiba, M

    2015-09-01

    A photodetection system with an optical-feedback circuit accompanied by current amplification was fabricated to minimize the drawbacks associated with a transimpedance amplifier (TIA) with a very high resistance feedback resistor. Current amplification was implemented by extracting an output light from the same light source that emitted the feedback light. The current gain corresponds to the ratio of the photocurrent created by the output light to that created by the feedback light because the feedback current value is identical to the input photocurrent value generated by an input light to be measured. The current gain has no theoretical limit. The output light was detected by a photodiode with a TIA having a small feedback resistance. The expression for the input-referred noise current of the optical-feedback photodetection system was derived, and the trade-off between sensitivity and response, which is a characteristic of TIA, was found to considerably improve. An optical-feedback photodetection system with an InGaAs pin photodiode was fabricated. The measured noise equivalent power of the system was 1.7 fW/Hz(1/2) at 10 Hz and 1.3 μm, which is consistent with the derived expression. The time response of the system was found to deteriorate with decreasing photocurrent. The 50% rise time for a light pulse input increased from 3.1 μs at a photocurrent of 10 nA to 15 μs at photocurrents below 10 pA. The bandwidth of the input-referred noise current was 7 kHz, which is consistent with rise times below 10 pA.

  11. Incorporating spatial context into statistical classification of multidimensional image data

    NASA Technical Reports Server (NTRS)

    Bauer, M. E. (Principal Investigator); Tilton, J. C.; Swain, P. H.

    1981-01-01

    Compound decision theory is employed to develop a general statistical model for classifying image data using spatial context. The classification algorithm developed from this model exploits the tendency of certain ground-cover classes to occur more frequently in some spatial contexts than in others. A key input to this contextual classifier is a quantitative characterization of this tendency: the context function. Several methods for estimating the context function are explored, and two complementary methods are recommended. The contextual classifier is shown to produce substantial improvements in classification accuracy compared to the accuracy produced by a non-contextual uniform-priors maximum likelihood classifier when these methods of estimating the context function are used. An approximate algorithm, which cuts computational requirements by over one-half, is presented. The search for an optimal implementation is furthered by an exploration of the relative merits of using spectral classes or information classes for classification and/or context function estimation.
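The context function enters the classifier as a neighbourhood-dependent prior combined with the per-pixel class likelihoods. A minimal Bayes-rule sketch of that combination (the numbers are hypothetical; the paper's estimation methods are not reproduced here):

```python
def contextual_posterior(likelihoods, context_prior):
    """Posterior class probabilities from per-pixel likelihoods and a context
    function (class priors conditioned on the spatial neighbourhood)."""
    scores = [l * p for l, p in zip(likelihoods, context_prior)]
    total = sum(scores)
    return [s / total for s in scores]
```

A class that is only slightly less likely spectrally can still win once its neighbourhood makes it much more probable a priori, which is the mechanism behind the accuracy gains over the uniform-priors classifier.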

  12. The effect of input data transformations on object-based image analysis

    PubMed Central

    LIPPITT, CHRISTOPHER D.; COULTER, LLOYD L.; FREEMAN, MARY; LAMANTIA-BISHOP, JEFFREY; PANG, WYSON; STOW, DOUGLAS A.

    2011-01-01

    The effect of using spectral transform images as input data on segmentation quality and its potential effect on products generated by object-based image analysis are explored in the context of land cover classification in Accra, Ghana. Five image data transformations are compared to untransformed spectral bands in terms of their effect on segmentation quality and final product accuracy. The relationship between segmentation quality and product accuracy is also briefly explored. Results suggest that input data transformations can aid in the delineation of landscape objects by image segmentation, but the effect is idiosyncratic to the transformation and object of interest. PMID:21673829

  13. Establishing the resting state default mode network derived from functional magnetic resonance imaging tasks as an endophenotype: A twins study.

    PubMed

    Korgaonkar, Mayuresh S; Ram, Kaushik; Williams, Leanne M; Gatt, Justine M; Grieve, Stuart M

    2014-08-01

    The resting state default mode network (DMN) has been shown to characterize a number of neurological and psychiatric disorders. Evidence suggests an underlying genetic basis for this network and hence could serve as potential endophenotype for these disorders. Heritability is a defining criterion for endophenotypes. The DMN is measured either using a resting-state functional magnetic resonance imaging (fMRI) scan or by extracting resting state activity from task-based fMRI. The current study is the first to evaluate heritability of this task-derived resting activity. 250 healthy adult twins (79 monozygotic and 46 dizygotic same sex twin pairs) completed five cognitive and emotion processing fMRI tasks. Resting state DMN functional connectivity was derived from these five fMRI tasks. We validated this approach by comparing connectivity estimates from task-derived resting activity for all five fMRI tasks, with those obtained using a dedicated task-free resting state scan in an independent cohort of 27 healthy individuals. Structural equation modeling using the classic twin design was used to estimate the genetic and environmental contributions to variance for the resting-state DMN functional connectivity. About 9-41% of the variance in functional connectivity between the DMN nodes was attributed to genetic contribution with the greatest heritability found for functional connectivity between the posterior cingulate and right inferior parietal nodes (P<0.001). Our data provide new evidence that functional connectivity measures from the intrinsic DMN derived from task-based fMRI datasets are under genetic control and have the potential to serve as endophenotypes for genetically predisposed psychiatric and neurological disorders. Copyright © 2014 Wiley Periodicals, Inc.

  14. Performance of an artificial neural network for vertical root fracture detection: an ex vivo study.

    PubMed

    Kositbowornchai, Suwadee; Plermkamon, Supattra; Tangkosol, Tawan

    2013-04-01

    To develop an artificial neural network for vertical root fracture detection. A probabilistic neural network design was used to classify whether a tooth root was sound or had a vertical root fracture. Two hundred images (50 sound and 150 with vertical root fractures) derived from digital radiography, used to train and test the artificial neural network, were divided into three groups according to the number of training and test data sets: 80/120, 105/95 and 130/70, respectively. Both training and test data were evaluated using grey-scale values along a line passing through the root. These data were normalized to reduce the grey-scale variance and fed as input to the neural network. The variance parameter of the recognition function was varied between 0 and 1 to select the best-performing network. The performance of the neural network was evaluated using a diagnostic test. After testing the data under several values of this variance parameter, we found the highest sensitivity (98%), specificity (90.5%) and accuracy (95.7%) in Group three, for which the variance parameter lay between 0.005 and 0.025. The neural network designed in this study has sufficient sensitivity, specificity and accuracy to serve as a model for vertical root fracture detection. © 2012 John Wiley & Sons A/S.
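A probabilistic neural network is essentially a Parzen-window classifier: each class score is a sum of Gaussian kernels centred on that class's training samples, and the kernel variance is the smoothing parameter tuned in studies like this one. A minimal sketch with made-up feature vectors:

```python
import math

def pnn_classify(sample, classes, sigma):
    """Parzen-window PNN: pick the class with the highest kernel density.

    classes maps a label to its list of training feature vectors;
    sigma is the smoothing (variance) parameter of the Gaussian kernels.
    """
    def density(train):
        return sum(math.exp(-sum((s - t) ** 2 for s, t in zip(sample, x))
                            / (2 * sigma ** 2)) for x in train) / len(train)
    scores = {label: density(train) for label, train in classes.items()}
    return max(scores, key=scores.get)
```

Small sigma makes the classifier memorize training points; large sigma oversmooths the class densities, which is why a sweep over the smoothing parameter, as done in this study, is needed.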

  15. Deletion of Ten-m3 Induces the Formation of Eye Dominance Domains in Mouse Visual Cortex

    PubMed Central

    Merlin, Sam; Horng, Sam; Marotte, Lauren R.; Sur, Mriganka; Sawatari, Atomu

    2013-01-01

    The visual system is characterized by precise retinotopic mapping of each eye, together with exquisitely matched binocular projections. In many species, the inputs that represent the eyes are segregated into ocular dominance columns in primary visual cortex (V1), whereas in rodents, this does not occur. Ten-m3, a member of the Ten-m/Odz/Teneurin family, regulates axonal guidance in the retinogeniculate pathway. Significantly, ipsilateral projections are expanded in the dorsal lateral geniculate nucleus and are not aligned with contralateral projections in Ten-m3 knockout (KO) mice. Here, we demonstrate the impact of altered retinogeniculate mapping on the organization and function of V1. Transneuronal tracing and c-fos immunohistochemistry demonstrate that the subcortical expansion of ipsilateral input is conveyed to V1 in Ten-m3 KOs: Ipsilateral inputs are widely distributed across V1 and are interdigitated with contralateral inputs into eye dominance domains. Segregation is confirmed by optical imaging of intrinsic signals. Single-unit recording shows ipsilateral, and contralateral inputs are mismatched at the level of single V1 neurons, and binocular stimulation leads to functional suppression of these cells. These findings indicate that the medial expansion of the binocular zone together with an interocular mismatch is sufficient to induce novel structural features, such as eye dominance domains in rodent visual cortex. PMID:22499796

  16. Functional imaging of cortical feedback projections to the olfactory bulb

    PubMed Central

    Rothermel, Markus; Wachowiak, Matt

    2014-01-01

    Processing of sensory information is substantially shaped by centrifugal, or feedback, projections from higher cortical areas, yet the functional properties of these projections are poorly characterized. Here, we used genetically-encoded calcium sensors (GCaMPs) to functionally image activation of centrifugal projections targeting the olfactory bulb (OB). The OB receives massive centrifugal input from cortical areas but there has been as yet no characterization of their activity in vivo. We focused on projections to the OB from the anterior olfactory nucleus (AON), a major source of cortical feedback to the OB. We expressed GCaMP selectively in AON projection neurons using a mouse line expressing Cre recombinase (Cre) in these neurons and Cre-dependent viral vectors injected into AON, allowing us to image GCaMP fluorescence signals from their axon terminals in the OB. Electrical stimulation of AON evoked large fluorescence signals that could be imaged from the dorsal OB surface in vivo. Surprisingly, odorants also evoked large signals that were transient and coupled to odorant inhalation both in the anesthetized and awake mouse, suggesting that feedback from AON to the OB is rapid and robust across different brain states. The strength of AON feedback signals increased during wakefulness, suggesting a state-dependent modulation of cortical feedback to the OB. Two-photon GCaMP imaging revealed that different odorants activated different subsets of centrifugal AON axons and could elicit both excitation and suppression in different axons, indicating a surprising richness in the representation of odor information by cortical feedback to the OB. Finally, we found that activating neuromodulatory centers such as basal forebrain drove AON inputs to the OB independent of odorant stimulation. Our results point to the AON as a multifunctional cortical area that provides ongoing feedback to the OB and also serves as a descending relay for other neuromodulatory systems. 
PMID:25071454

  17. Iterative Structural and Functional Synergistic Resolution Recovery (iSFS-RR) Applied to PET-MR Images in Epilepsy

    NASA Astrophysics Data System (ADS)

    Silva-Rodríguez, J.; Cortés, J.; Rodríguez-Osorio, X.; López-Urdaneta, J.; Pardo-Montero, J.; Aguiar, P.; Tsoumpas, C.

    2016-10-01

    Structural Functional Synergistic Resolution Recovery (SFS-RR) is a technique that uses supplementary structural information from MR or CT to improve the spatial resolution of PET or SPECT images. This wavelet-based method may have a potential impact on the clinical decision-making of focal brain disorders such as refractory epilepsy, since it can produce images with better quantitative accuracy and enhanced detectability. In this work, a method for the iterative application of SFS-RR (iSFS-RR) was first developed and optimized in terms of convergence and input voxel size, and the corrected images were used for the diagnosis of 18 patients with refractory epilepsy. To this end, PET/MR images were clinically evaluated through visual inspection, atlas-based asymmetry indices (AIs) and SPM (Statistical Parametric Mapping) analysis, using uncorrected images and images corrected with SFS-RR and iSFS-RR. Our results showed that the sensitivity could be increased from 78% for uncorrected images, to 84% for SFS-RR and 94% for the proposed iSFS-RR. Thus, the proposed methodology has demonstrated the potential to improve the management of refractory epilepsy patients in routine clinical practice.
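
    Atlas-based asymmetry indices of the kind used here are commonly defined as the left-right uptake difference normalized by the mean regional uptake. A minimal sketch under that assumption (the 200× percent convention and the example uptake values are illustrative, not taken from the paper):

    ```python
    def asymmetry_index(left_uptake, right_uptake):
        """Percent asymmetry: 200 * (L - R) / (L + R).
        Negative values indicate lower uptake (hypometabolism) on the left."""
        return 200.0 * (left_uptake - right_uptake) / (left_uptake + right_uptake)

    # Illustrative mean regional PET uptake values (arbitrary units)
    ai = asymmetry_index(left_uptake=0.9, right_uptake=1.1)  # -20.0 %
    ```

    In an atlas-based pipeline, each homologous region pair from the anatomical atlas yields one AI, and regions exceeding a threshold are flagged as candidate epileptogenic foci.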

  18. A portable high-definition electronic endoscope based on embedded system

    NASA Astrophysics Data System (ADS)

    Xu, Guang; Wang, Liqiang; Xu, Jin

    2012-11-01

    This paper presents a low-power, portable high-definition (HD) electronic endoscope based on a Cortex-A8 embedded system. A 1/6 inch CMOS image sensor is used to acquire HD images with 1280×800 pixels. The camera interface of the A8 is designed to support images of various sizes and multiple video input formats such as the ITU-R BT.601/656 standard. Image rotation (90 degrees clockwise) and image-processing functions are achieved by CAMIF. The decode engine of the processor plays back or records HD videos at 30 frames per second, and a built-in HDMI interface transmits high-definition images to an external display. Image-processing procedures such as demosaicking, color correction and auto white balance are realized on the A8 platform. Other functions are selected through OSD settings. An LCD panel displays the real-time images. The snapshot pictures or compressed videos are saved on an SD card or transmitted to a computer through a USB interface. The size of the camera head is 4×4.8×15 mm with a working distance of more than 3 meters. The whole endoscope system can be powered by a lithium battery, with the advantages of miniaturization, low cost and portability.
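
    Of the image-processing steps listed, auto white balance is the simplest to illustrate. The paper does not describe its algorithm, so the following is only an assumed gray-world variant, sketched in Python rather than the embedded C of the A8 platform:

    ```python
    import numpy as np

    def gray_world_white_balance(rgb):
        """Gray-world auto white balance: scale each color channel so its
        mean matches the global mean intensity. Input is an HxWx3 float array."""
        channel_means = rgb.reshape(-1, 3).mean(axis=0)
        gains = channel_means.mean() / channel_means
        return rgb * gains

    # A uniform bluish frame is pulled back to neutral gray
    frame = np.full((4, 4, 3), [0.4, 0.5, 0.6])
    balanced = gray_world_white_balance(frame)  # every pixel -> 0.5
    ```

    The gray-world assumption (that the average scene color is neutral) is a common default in low-cost camera pipelines because it needs only per-channel means, which is cheap enough for an embedded processor.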

  19. Developing a hippocampal neural prosthetic to facilitate human memory encoding and recall.

    PubMed

    Hampson, Robert E; Song, Dong; Robinson, Brian S; Fetterhoff, Dustin; Dakos, Alexander S; Roeder, Brent M; She, Xiwei; Wicks, Robert T; Witcher, Mark R; Couture, Daniel E; Laxton, Adrian W; Munger-Clary, Heidi; Popli, Gautam; Sollman, Myriam J; Whitlow, Christopher T; Marmarelis, Vasilis Z; Berger, Theodore W; Deadwyler, Sam A

    2018-06-01

    We demonstrate here the first successful implementation in humans of a proof-of-concept system for restoring and improving memory function via facilitation of memory encoding using the patient's own hippocampal spatiotemporal neural codes for memory. Memory in humans is subject to disruption by drugs, disease and brain injury, yet previous attempts to restore or rescue memory function in humans typically involved only nonspecific modulation of brain areas and neural systems related to memory retrieval. We have constructed a model of processes by which the hippocampus encodes memory items via spatiotemporal firing of neural ensembles that underlie the successful encoding of short-term memory. A nonlinear multi-input, multi-output (MIMO) model of hippocampal CA3 and CA1 neural firing is computed that predicts activation patterns of CA1 neurons during the encoding (sample) phase of a delayed match-to-sample (DMS) human short-term memory task. MIMO model-derived electrical stimulation delivered to the same CA1 locations during the sample phase of DMS trials facilitated short-term/working memory by 37% during the task. Longer term memory retention was also tested in the same human subjects with a delayed recognition (DR) task that utilized images from the DMS task, along with images that were not from the task. Across the subjects, the stimulated trials exhibited significant improvement (35%) in both short-term and long-term retention of visual information. These results demonstrate the facilitation of memory encoding, which is an important feature for the construction of an implantable neural prosthetic to improve human memory.
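
    The MIMO mapping from CA3 input spike trains to CA1 output firing can be sketched, at its simplest, as a bank of linear temporal filters followed by a static threshold per output channel. This is only a toy stand-in for the Volterra-kernel model the authors actually use; all filter values and spike trains below are illustrative:

    ```python
    import numpy as np

    def mimo_predict(inputs, kernels, thresholds):
        """Toy MIMO predictor: each output spike train is a thresholded sum
        of temporally filtered input spike trains.
        inputs:     (n_in, T) binary spike trains (e.g. CA3 units)
        kernels:    (n_out, n_in, L) linear temporal filters
        thresholds: (n_out,) firing thresholds
        Returns a (n_out, T) binary array of predicted output spikes."""
        n_out, n_in, _ = kernels.shape
        T = inputs.shape[1]
        drive = np.zeros((n_out, T))
        for o in range(n_out):
            for i in range(n_in):
                # full convolution truncated to T keeps the filter causal
                drive[o] += np.convolve(inputs[i], kernels[o, i])[:T]
        return (drive > thresholds[:, None]).astype(int)

    # Two input units, one output unit that fires only when both inputs
    # were active within its 2-bin filter window
    inputs = np.array([[1, 0, 0, 0, 0],
                       [0, 1, 0, 0, 0]])
    kernels = np.ones((1, 2, 2))  # 2-bin memory, unit weight
    out = mimo_predict(inputs, kernels, thresholds=np.array([1.5]))
    ```

    The actual model fits the kernels and a nonlinear threshold-crossing stage to recorded CA3/CA1 ensemble data; the stimulation patterns delivered during the sample phase are derived from the model's predicted CA1 output.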

  20. Developing a hippocampal neural prosthetic to facilitate human memory encoding and recall

    NASA Astrophysics Data System (ADS)

    Hampson, Robert E.; Song, Dong; Robinson, Brian S.; Fetterhoff, Dustin; Dakos, Alexander S.; Roeder, Brent M.; She, Xiwei; Wicks, Robert T.; Witcher, Mark R.; Couture, Daniel E.; Laxton, Adrian W.; Munger-Clary, Heidi; Popli, Gautam; Sollman, Myriam J.; Whitlow, Christopher T.; Marmarelis, Vasilis Z.; Berger, Theodore W.; Deadwyler, Sam A.

    2018-06-01

    Objective. We demonstrate here the first successful implementation in humans of a proof-of-concept system for restoring and improving memory function via facilitation of memory encoding using the patient’s own hippocampal spatiotemporal neural codes for memory. Memory in humans is subject to disruption by drugs, disease and brain injury, yet previous attempts to restore or rescue memory function in humans typically involved only nonspecific modulation of brain areas and neural systems related to memory retrieval. Approach. We have constructed a model of processes by which the hippocampus encodes memory items via spatiotemporal firing of neural ensembles that underlie the successful encoding of short-term memory. A nonlinear multi-input, multi-output (MIMO) model of hippocampal CA3 and CA1 neural firing is computed that predicts activation patterns of CA1 neurons during the encoding (sample) phase of a delayed match-to-sample (DMS) human short-term memory task. Main results. MIMO model-derived electrical stimulation delivered to the same CA1 locations during the sample phase of DMS trials facilitated short-term/working memory by 37% during the task. Longer term memory retention was also tested in the same human subjects with a delayed recognition (DR) task that utilized images from the DMS task, along with images that were not from the task. Across the subjects, the stimulated trials exhibited significant improvement (35%) in both short-term and long-term retention of visual information. Significance. These results demonstrate the facilitation of memory encoding, which is an important feature for the construction of an implantable neural prosthetic to improve human memory.
