Evidential Reasoning in Expert Systems for Image Analysis.
1985-02-01
...techniques to image analysis (IA). There is growing evidence that these techniques offer significant improvements in image analysis, particularly in the... (2) to provide a common framework for analysis, (3) to structure the ER process for major expert-system tasks in image analysis, and (4) to identify... approaches to three important tasks for expert systems in the domain of image analysis. This segment concluded with an assessment of the strengths...
Parallel processing considerations for image recognition tasks
NASA Astrophysics Data System (ADS)
Simske, Steven J.
2011-01-01
Many image recognition tasks are well-suited to parallel processing. The most obvious example is that many imaging tasks require the analysis of multiple images. From this standpoint, then, parallel processing need be no more complicated than assigning individual images to individual processors. However, there are three less trivial categories of parallel processing that will be considered in this paper: parallel processing (1) by task; (2) by image region; and (3) by meta-algorithm. Parallel processing by task allows the assignment of multiple workflows (as diverse as optical character recognition [OCR], document classification, and barcode reading) to parallel pipelines. This can substantially decrease time to completion for the document tasks. For this approach, each parallel pipeline is generally performing a different task. Parallel processing by image region allows a larger imaging task to be sub-divided into a set of parallel pipelines, each performing the same task but on a different data set. This type of image analysis is readily addressed by a map-reduce approach. Examples include document skew detection and multiple face detection and tracking. Finally, parallel processing by meta-algorithm allows different algorithms to be deployed on the same image simultaneously. This approach may result in improved accuracy.
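The region-based (map-reduce) category above can be sketched in a few lines: the map step runs the same analysis on each image strip, and the reduce step combines the partial results. The dark-pixel count and strip size below are illustrative stand-ins for a real imaging task, not part of the paper.

```python
from concurrent.futures import ThreadPoolExecutor

def dark_pixel_count(region, threshold=128):
    """Map step: count dark pixels in one image region (a list of rows)."""
    return sum(1 for row in region for px in row if px < threshold)

def parallel_dark_pixels(image, n_workers=4):
    """Split the image into horizontal strips, analyze each strip in
    parallel, then reduce the partial counts into a single result."""
    n = max(1, len(image) // n_workers)
    strips = [image[i:i + n] for i in range(0, len(image), n)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = pool.map(dark_pixel_count, strips)
    return sum(partials)  # reduce step
```

For a CPU-bound analysis in CPython, a process pool would be needed for a real speedup; the thread pool is used here only to keep the sketch self-contained.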
Thread concept for automatic task parallelization in image analysis
NASA Astrophysics Data System (ADS)
Lueckenhaus, Maximilian; Eckstein, Wolfgang
1998-09-01
Parallel processing of image analysis tasks is an essential method to speed up image processing and helps to exploit the full capacity of distributed systems. However, writing parallel code is a difficult and time-consuming process and often leads to an architecture-dependent program that has to be re-implemented when the hardware changes. It is therefore highly desirable to perform the parallelization automatically. To this end we have developed a special kind of thread concept for image analysis tasks. Threads derived from one subtask may share objects and run in the same context, but follow different threads of execution and work on different data in parallel. In this paper we describe the basics of our thread concept and show how it can be used as the basis of automatic task parallelization to speed up image processing. We further illustrate the design and implementation of an agent-based system that uses image analysis threads for generating and processing parallel programs while taking the available hardware into account. Tests with our system prototype show that the thread concept combined with the agent paradigm is suitable for speeding up image processing by automatic parallelization of image analysis tasks.
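As a rough illustration of the thread concept described above (not the authors' implementation), the sketch below derives several worker threads from one subtask: they share the same image object and job queue but each follows its own thread of execution over different tiles. The tile size and analysis callback are illustrative.

```python
import queue
import threading

def run_tile_workers(image, tile_rows, analyze, n_threads=3):
    """Worker threads derived from one subtask share `image` and a job
    queue, but each processes different tiles in parallel."""
    jobs = queue.Queue()
    for start in range(0, len(image), tile_rows):
        jobs.put((start, min(start + tile_rows, len(image))))
    results = {}
    lock = threading.Lock()

    def worker():
        while True:
            try:
                start, stop = jobs.get_nowait()
            except queue.Empty:
                return  # no tiles left; this thread is done
            value = analyze(image[start:stop])
            with lock:  # results dict is shared, so guard writes
                results[start] = value

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

An agent-based scheduler, as in the paper, could generate the tile list and thread count from the available hardware instead of taking them as fixed arguments.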
Uses of software in digital image analysis: a forensic report
NASA Astrophysics Data System (ADS)
Sharma, Mukesh; Jha, Shailendra
2010-02-01
Forensic image analysis requires expertise to interpret the content of an image, or the image itself, in legal matters. Major sub-disciplines of forensic image analysis with law enforcement applications include photogrammetry, photographic comparison, content analysis, and image authentication. Its applications in forensic science range widely, from documenting crime scenes to enhancing faint or indistinct patterns such as partial fingerprints. The process of forensic image analysis can involve several different tasks, regardless of the type of image analysis performed. In this paper the authors explain these tasks, which fall into three categories: image compression, image enhancement and restoration, and measurement extraction, with the help of examples such as signature comparison, counterfeit currency comparison, and footwear sole impression analysis using the software Canvas and CorelDRAW.
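The measurement-extraction category lends itself to a worked example. The sketch below is illustrative, not from the report: it converts a pixel measurement (say, the length of a footwear sole impression) into real units using a reference object of known size visible in the same image, the basic move of forensic photogrammetry.

```python
def pixels_to_mm(measured_px, reference_px, reference_mm):
    """Measurement extraction: convert a pixel distance to millimetres
    using a scale reference (e.g. a ruler) captured in the same image,
    assuming the object and reference lie in the same plane."""
    scale = reference_mm / reference_px  # mm per pixel
    return measured_px * scale
```

For example, if a 30 mm scale bar spans 120 pixels, an impression spanning 300 pixels measures 75 mm.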
Recognizable or Not: Towards Image Semantic Quality Assessment for Compression
NASA Astrophysics Data System (ADS)
Liu, Dong; Wang, Dandan; Li, Houqiang
2017-12-01
Traditionally, image compression was optimized for the pixel-wise fidelity or the perceptual quality of the compressed images given a bit-rate budget. But recently, compressed images are more and more utilized for automatic semantic analysis tasks such as recognition and retrieval. For these tasks, we argue that the optimization target of compression is no longer perceptual quality, but the utility of the compressed images in the given automatic semantic analysis task. Accordingly, we propose to evaluate the quality of the compressed images neither at pixel level nor at perceptual level, but at semantic level. In this paper, we make preliminary efforts towards image semantic quality assessment (ISQA), focusing on the task of optical character recognition (OCR) from compressed images. We propose a full-reference ISQA measure by comparing the features extracted from text regions of original and compressed images. We then propose to integrate the ISQA measure into an image compression scheme. Experimental results show that our proposed ISQA measure is much better than PSNR and SSIM in evaluating the semantic quality of compressed images; accordingly, adopting our ISQA measure to optimize compression for OCR leads to significant bit-rate saving compared to using PSNR or SSIM. Moreover, we perform subjective test about text recognition from compressed images, and observe that our ISQA measure has high consistency with subjective recognizability. Our work explores new dimensions in image quality assessment, and demonstrates promising direction to achieve higher compression ratio for specific semantic analysis tasks.
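A full-reference semantic quality measure of the kind described can be illustrated with a toy stand-in. The paper's actual text-region features are not reproduced here; the sketch below uses horizontal gradient energy as an assumed proxy for how well character edges survive compression.

```python
def gradient_energy(region):
    """A simple text-region feature: mean horizontal gradient magnitude
    (sharp character edges contribute large gradients)."""
    total, count = 0.0, 0
    for row in region:
        for a, b in zip(row, row[1:]):
            total += abs(b - a)
            count += 1
    return total / count if count else 0.0

def isqa_score(original_region, compressed_region):
    """Full-reference semantic-quality proxy: fraction of edge energy
    preserved by compression (1.0 = text edges fully preserved)."""
    ref = gradient_energy(original_region)
    if ref == 0:
        return 1.0
    return min(1.0, gradient_energy(compressed_region) / ref)
```

A score near 1.0 suggests OCR should perform about as well on the compressed region as on the original; a low score flags semantically damaging compression even when PSNR is high.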
Slide Set: Reproducible image analysis and batch processing with ImageJ.
Nanes, Benjamin A
2015-11-01
Most imaging studies in the biological sciences rely on analyses that are relatively simple. However, manual repetition of analysis tasks across multiple regions in many images can complicate even the simplest analysis, making record keeping difficult, increasing the potential for error, and limiting reproducibility. While fully automated solutions are necessary for very large data sets, they are sometimes impractical for the small- and medium-sized data sets common in biology. Here we present the Slide Set plugin for ImageJ, which provides a framework for reproducible image analysis and batch processing. Slide Set organizes data into tables, associating image files with regions of interest and other relevant information. Analysis commands are automatically repeated over each image in the data set, and multiple commands can be chained together for more complex analysis tasks. All analysis parameters are saved, ensuring transparency and reproducibility. Slide Set includes a variety of built-in analysis commands and can be easily extended to automate other ImageJ plugins, reducing the manual repetition of image analysis without the set-up effort or programming expertise required for a fully automated solution.
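The batch-processing pattern that Slide Set automates can be sketched in plain Python (illustrative pseudocode for the workflow, not the plugin's API): a data table associates each image with a region of interest, one analysis command is repeated over every row, and the parameters are recorded alongside the results for reproducibility.

```python
def run_batch(table, command, params):
    """Apply one analysis command to every row of a Slide-Set-style data
    table, recording the parameters used so the run is reproducible."""
    results = []
    for row in table:
        # Each row pairs an image with a region of interest (here a slice).
        region = row["image"][row["roi"][0]:row["roi"][1]]
        results.append({"id": row["id"], "value": command(region, **params)})
    # Saving the parameters with the output makes the analysis transparent.
    return {"params": dict(params), "results": results}
```

Chaining commands, as Slide Set allows, amounts to feeding one batch's results table into the next command.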
Task-based modeling and optimization of a cone-beam CT scanner for musculoskeletal imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prakash, P.; Zbijewski, W.; Gang, G. J.
2011-10-15
Purpose: This work applies a cascaded systems model for cone-beam CT imaging performance to the design and optimization of a system for musculoskeletal extremity imaging. The model provides a quantitative guide to the selection of system geometry, source and detector components, acquisition techniques, and reconstruction parameters. Methods: The model is based on cascaded systems analysis of the 3D noise-power spectrum (NPS) and noise-equivalent quanta (NEQ) combined with factors of system geometry (magnification, focal spot size, and scatter-to-primary ratio) and anatomical background clutter. The model was extended to task-based analysis of detectability index (d') for tasks ranging in contrast and frequency content, and d' was computed as a function of system magnification, detector pixel size, focal spot size, kVp, dose, electronic noise, voxel size, and reconstruction filter to examine trade-offs and optima among such factors in multivariate analysis. The model was tested quantitatively versus the measured NPS and qualitatively in cadaver images as a function of kVp, dose, pixel size, and reconstruction filter under conditions corresponding to the proposed scanner. Results: The analysis quantified trade-offs among factors of spatial resolution, noise, and dose. System magnification (M) was a critical design parameter with strong effect on spatial resolution, dose, and x-ray scatter, and a fairly robust optimum was identified at M ~ 1.3 for the imaging tasks considered. The results suggested kVp selection in the range of ~65-90 kVp, the lower end (65 kVp) maximizing subject contrast and the upper end (90 kVp) maximizing NEQ. The analysis quantified fairly intuitive results, e.g., ~0.1-0.2 mm pixel size (and a sharp reconstruction filter) optimal for high-frequency tasks (bone detail) compared to ~0.4 mm pixel size (and a smooth reconstruction filter) for low-frequency (soft-tissue) tasks. This result suggests a specific protocol of 1 x 1 (full-resolution) projection data acquisition followed by full-resolution reconstruction with a sharp filter for high-frequency tasks, along with 2 x 2 binning reconstruction with a smooth filter for low-frequency tasks. The analysis guided selection of the specific source and detector components implemented on the proposed scanner. The analysis also quantified the potential benefits and points of diminishing return in focal spot size, reduced electronic noise, finer detector pixels, and low-dose limits of detectability. Theoretical results agreed quantitatively with the measured NPS and qualitatively with evaluation of cadaver images by a musculoskeletal radiologist. Conclusions: A fairly comprehensive model for 3D imaging performance in cone-beam CT combines factors of quantum noise, system geometry, anatomical background, and imaging task. The analysis provided a valuable, quantitative guide to design, optimization, and technique selection for a musculoskeletal extremities imaging system under development.
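The task-based figure of merit used throughout this analysis can be written compactly. The sketch below computes a discrete form of the detectability index, d'^2 = sum over f of NEQ(f) * |W_task(f)|^2 * df, for an NEQ and task weighting function sampled on a frequency grid; this is a simplification of the full cascaded-systems expression, and the sample values are illustrative.

```python
def detectability_index(neq, task_weight, df):
    """Discrete detectability index for a prewhitening-style observer:
    d'^2 = sum_f NEQ(f) * |W_task(f)|^2 * df, returned as d'."""
    d2 = sum(n * (w ** 2) * df for n, w in zip(neq, task_weight))
    return d2 ** 0.5
```

A high-frequency (bone-detail) task weights W_task toward large f, so reconstruction settings that preserve NEQ at high frequencies (small pixels, sharp filter) raise d', matching the trade-offs quantified above.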
NASA Astrophysics Data System (ADS)
Wunderlich, Adam; Goossens, Bart
2014-03-01
The majority of the literature on task-based image quality assessment has focused on lesion detection tasks, using the receiver operating characteristic (ROC) curve, or related variants, to measure performance. However, since many clinical image evaluation tasks involve both detection and estimation (e.g., estimation of kidney stone composition, estimation of tumor size), there is a growing interest in performance evaluation for joint detection and estimation tasks. To evaluate observer performance on such tasks, Clarkson introduced the estimation ROC (EROC) curve, and the area under the EROC curve as a summary figure of merit. In the present work, we propose nonparametric estimators for practical EROC analysis from experimental data, including estimators for the area under the EROC curve and its variance. The estimators are illustrated with a practical example comparing MRI images reconstructed from different k-space sampling trajectories.
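A nonparametric estimator of the area under the EROC curve, in the spirit of the one proposed here (the paper's exact estimator and its variance are not reproduced), averages the utility of the observer's estimate over all signal-absent/signal-present rating pairs in which the signal-present image is rated higher, with ties contributing half weight.

```python
def eroc_area(absent_scores, present):
    """Nonparametric estimate of the area under the EROC curve.
    `absent_scores`: ratings on signal-absent images.
    `present`: (rating, utility) pairs for signal-present images, where
    utility scores the parameter estimate (e.g. tumor size) on that image."""
    total = 0.0
    for lam0 in absent_scores:
        for lam1, u in present:
            if lam1 > lam0:
                total += u
            elif lam1 == lam0:
                total += 0.5 * u  # ties count half, as in the Wilcoxon AUC
    return total / (len(absent_scores) * len(present))
```

With utility fixed at 1 for every correct detection, this reduces to the usual Wilcoxon estimate of the ROC area, which is a useful sanity check.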
Cognitive Task Analysis of the HALIFAX-Class Operations Room Officer
1999-03-10
Image Cover Sheet, Classification System Number 510918, UNCLASSIFIED. Cognitive Task Analysis of the HALIFAX-Class Operations Room Officer; PWGSC Contract No. W7711-7-7404/001/SV; Humansystems Incorporated, Guelph, Ontario. Dates covered: 00-00-1999 to 00-00-1999.
Sensor image prediction techniques
NASA Astrophysics Data System (ADS)
Stenger, A. J.; Stone, W. R.; Berry, L.; Murray, T. J.
1981-02-01
The preparation of prediction imagery is a complex, costly, and time-consuming process. Image prediction systems which produce a detailed replica of the image area require the extensive Defense Mapping Agency data base. The purpose of this study was to analyze the use of image predictions in order to determine whether a reduced set of more compact image features contains enough information to produce acceptable navigator performance. A job analysis of the navigator's mission tasks was performed. It showed that the cognitive and perceptual tasks performed during navigation are identical to those performed for the targeting mission function. In addition, the results of the analysis of navigator performance when using a particular sensor can be extended to the analysis of the same mission tasks using any sensor. An experimental approach was used to determine the relationship between navigator performance and the type and amount of information in the prediction image. A number of subjects were given image predictions containing varying levels of scene detail and different image features, and then asked to identify the predicted targets in corresponding dynamic flight sequences over scenes of cultural, terrain, and mixed (both cultural and terrain) content.
Performance characteristics of a visual-search human-model observer with sparse PET image data
NASA Astrophysics Data System (ADS)
Gifford, Howard C.
2012-02-01
As predictors of human performance in detection-localization tasks, statistical model observers can have problems with tasks that are primarily limited by target contrast or structural noise. Model observers with a visual-search (VS) framework may provide a more reliable alternative. This framework provides for an initial holistic search that identifies suspicious locations for analysis by a statistical observer. A basic VS observer for emission tomography focuses on hot "blobs" in an image and uses a channelized nonprewhitening (CNPW) observer for analysis. In [1], we investigated this model for a contrast-limited task with SPECT images; herein, a statistical-noise-limited task involving PET images is considered. An LROC study used 2D image slices with liver, lung and soft-tissue tumors. Human and model observers read the images in coronal, sagittal and transverse display formats. The study thus measured the detectability of tumors in a given organ as a function of display format. The model observers were applied under several task variants that tested their response to structural noise both at the organ boundaries alone and over the organs as a whole. As measured by correlation with the human data, the VS observer outperformed the CNPW scanning observer.
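The scanning half of such an observer can be sketched in simplified 1D form (channelization, noise prewhitening, and the VS candidate search are all omitted): slide a signal template across the data, take the maximum response as the detection statistic, and its position as the localization. The profile and template below are toy values.

```python
def scanning_observer(image_row, template):
    """Scanning-observer sketch on a 1D profile: cross-correlate a signal
    template with the data at every position; return the maximum response
    (detection statistic) and its position (localization estimate)."""
    best, best_pos = float("-inf"), -1
    w = len(template)
    for i in range(len(image_row) - w + 1):
        resp = sum(t * x for t, x in zip(template, image_row[i:i + w]))
        if resp > best:
            best, best_pos = resp, i
    return best, best_pos
```

A VS observer would restrict the candidate positions to the suspicious "blobs" found by the holistic search instead of scanning exhaustively.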
Warbrick, Tracy; Reske, Martina; Shah, N Jon
2014-09-22
As cognitive neuroscience methods develop, established experimental tasks are used with emerging brain imaging modalities. Here we consider transferring a paradigm with a long history of behavioral and electroencephalography (EEG) experiments, the visual oddball task, to a functional magnetic resonance imaging (fMRI) experiment. The aims of this paper are to briefly describe fMRI and when its use is appropriate in cognitive neuroscience; to illustrate how task design can influence the results of an fMRI experiment, particularly when that task is borrowed from another imaging modality; and to explain the practical aspects of performing an fMRI experiment. It is demonstrated that manipulating the task demands in the visual oddball task results in different patterns of blood oxygen level dependent (BOLD) activation. The nature of the fMRI BOLD measure means that many brain regions are found to be active in a particular task. Determining the functions of these areas of activation depends heavily on task design and analysis. The complex nature of many fMRI tasks means that the details of the task and its requirements need careful consideration when interpreting data. The data show that this is particularly important in tasks relying on a motor response as well as cognitive elements, and that covert and overt responses should be considered where possible. Furthermore, the data show that transferring an EEG paradigm to an fMRI experiment needs careful consideration: it cannot be assumed that the same paradigm will work equally well across imaging modalities. It is therefore recommended that the design of an fMRI study be pilot tested behaviorally to establish the effects of interest, and then pilot tested in the fMRI environment to ensure appropriate design, implementation and analysis for the effects of interest.
Genotype-phenotype association study via new multi-task learning model
Huo, Zhouyuan; Shen, Dinggang
2018-01-01
Research on the associations between genetic variations and imaging phenotypes is developing with advances in high-throughput genotyping and brain imaging techniques. Regression analysis of single nucleotide polymorphisms (SNPs) and imaging measures as quantitative traits (QTs) has been proposed to identify quantitative trait loci (QTL) via multi-task learning models. Recent studies account for the interlinked structures within SNPs and imaging QTs through group lasso, e.g. the ℓ2,1-norm, leading to better predictive results and new insights into SNPs. However, group sparsity alone does not capture the correlation between multiple tasks, and ℓ2,1-norm regularization is not robust. In this paper, we propose a new multi-task learning model to analyze the associations between SNPs and QTs. We posit that low-rank structure is also beneficial for uncovering the correlation between genetic variations and imaging phenotypes. Finally, we conduct regression analysis of SNPs and QTs. Experimental results show that our model is more accurate in prediction than the compared methods and offers new insights into SNPs. PMID:29218896
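The group-sparse ingredient of such models has a simple closed-form building block. The sketch below (generic, not the paper's algorithm) implements the proximal operator of the ℓ2,1-norm: each row of the weight matrix (one SNP across all imaging QTs) is shrunk by its ℓ2 norm, so weakly associated SNPs are zeroed out entirely. The low-rank (trace-norm) component the paper adds requires an SVD and is omitted here.

```python
import math

def prox_l21(W, tau):
    """Proximal operator of tau * ||W||_{2,1}: row-wise soft thresholding.
    Rows with l2 norm below tau collapse to zero (the SNP is dropped)."""
    out = []
    for row in W:
        norm = math.sqrt(sum(v * v for v in row))
        scale = max(0.0, 1.0 - tau / norm) if norm > 0 else 0.0
        out.append([scale * v for v in row])
    return out
```

Inside a proximal-gradient loop, this step alternates with a gradient step on the least-squares loss over all imaging tasks.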
Pilot Task Profiles, Human Factors, And Image Realism
NASA Astrophysics Data System (ADS)
McCormick, Dennis
1982-06-01
Computer Image Generation (CIG) visual systems provide real time scenes for state-of-the-art flight training simulators. The visual system requires a greater understanding of training tasks, human factors, and the concept of image realism to produce an effective and efficient training scene than is required by other types of visual systems. Image realism must be defined in terms of pilot visual information requirements. Human factors analysis of training and perception is necessary to determine the pilot's information requirements. System analysis then determines how the CIG and display device can best provide essential information to the pilot. This analysis procedure ensures optimum training effectiveness and system performance.
Dedicated tool to assess the impact of a rhetorical task on human body temperature.
Koprowski, Robert; Wilczyński, Sławomir; Martowska, Katarzyna; Gołuch, Dominik; Wrocławska-Warchala, Emilia
2017-10-01
Functional infrared thermal imaging is a method widely used in medicine, including analysis of the mechanisms relating emotions to physiological processes. The article shows how body temperature may change during stress associated with performing a rhetorical task and proposes new parameters useful for dynamic thermal imaging measurements. Materials and methods: 29 healthy male subjects were examined. They were given a rhetorical task that induced stress. Analysis and processing of collected body temperature data at a spatial resolution of 256×512 pixels and a temperature resolution of 0.1°C made it possible to show the dynamics of temperature changes. This analysis was preceded by dedicated image analysis and processing methods. Results: The presented dedicated algorithm for image analysis and processing allows fully automated, reproducible, and quantitative assessment of temperature changes and time constants in a sequence of thermal images of the patient. When performing the rhetorical task, the temperature rose by 0.47±0.19°C in 72.41% of the subjects, including 20.69% in whom the temperature subsequently decreased by 0.49±0.14°C after 237±141 s. For another 20.69% of the subjects only a drop in temperature was registered; for the remaining 6.89% of cases, no temperature changes were registered. Conclusions: Performance of the rhetorical task causes body temperature changes. The ambiguous temperature response to the given stress factor indicates complex mechanisms for regulating stressful situations. Stress associated with the examination itself induces body temperature changes, and these changes should always be taken into account in the analysis of infrared data.
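Time constants like those reported above can be recovered from a per-pixel temperature sequence by a standard log-linear fit, assuming an exponential relaxation T(t) = baseline + A*exp(-t/tau). This is a generic sketch under that assumption, not the article's algorithm.

```python
import math

def time_constant(times, temps, baseline):
    """Estimate tau for T(t) = baseline + A*exp(-t/tau) by least-squares
    fitting a line to log(T - baseline) versus t; tau = -1/slope."""
    ys = [math.log(t - baseline) for t in temps]
    n = len(times)
    mx, my = sum(times) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(times, ys))
             / sum((x - mx) ** 2 for x in times))
    return -1.0 / slope
```

In practice the baseline would be taken from the pre-stimulus frames, and pixels whose temperature does not exceed the baseline would be excluded before taking logarithms.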
Validating Retinal Fundus Image Analysis Algorithms: Issues and a Proposal
Trucco, Emanuele; Ruggeri, Alfredo; Karnowski, Thomas; Giancardo, Luca; Chaum, Edward; Hubschman, Jean Pierre; al-Diri, Bashir; Cheung, Carol Y.; Wong, Damon; Abràmoff, Michael; Lim, Gilbert; Kumar, Dinesh; Burlina, Philippe; Bressler, Neil M.; Jelinek, Herbert F.; Meriaudeau, Fabrice; Quellec, Gwénolé; MacGillivray, Tom; Dhillon, Bal
2013-01-01
This paper concerns the validation of automatic retinal image analysis (ARIA) algorithms. For reasons of space and consistency, we concentrate on the validation of algorithms processing color fundus camera images, currently the largest section of the ARIA literature. We sketch the context (imaging instruments and target tasks) of ARIA validation, summarizing the main image analysis and validation techniques. We then present a list of recommendations focusing on the creation of large repositories of test data created by international consortia, easily accessible via moderated Web sites, including multicenter annotations by multiple experts, specific to clinical tasks, and capable of running submitted software automatically on the data stored, with clear and widely agreed-on performance criteria, to provide a fair comparison. PMID:23794433
Cognitive Task Analysis of the HALIFAX-Class Operations Room Officer: Data Sheets. Annexes
1999-03-10
Image Cover Sheet, Classification System Number 510920, UNCLASSIFIED. Annexes to: Cognitive Task Analysis of the HALIFAX-Class Operations Room Officer; Humansystems Incorporated, Guelph, Ontario. Dates covered: 00-00-1999 to 00-00-1999.
1976-03-01
This report summarizes the results of the research program on Image Analysis and Modeling supported by the Defense Advanced Research Projects Agency... The objective is to achieve a better understanding of image structure and to use this knowledge to develop improved image models for use in image analysis and processing tasks such as information extraction, image enhancement and restoration, and coding. The ultimate objective of this research is...
Rostral and caudal prefrontal contribution to creativity: a meta-analysis of functional imaging data
Gonen-Yaacovi, Gil; de Souza, Leonardo Cruz; Levy, Richard; Urbanski, Marika; Josse, Goulven; Volle, Emmanuelle
2013-01-01
Creativity is of central importance for human civilization, yet its neurocognitive bases are poorly understood. The aim of the present study was to integrate existing functional imaging data by using the meta-analysis approach. We reviewed 34 functional imaging studies that reported activation foci during tasks assumed to engage creative thinking in healthy adults. A coordinate-based meta-analysis using Activation Likelihood Estimation (ALE) first showed a set of predominantly left-hemispheric regions shared by the various creativity tasks examined. These regions included the caudal lateral prefrontal cortex (PFC), the medial and lateral rostral PFC, and the inferior parietal and posterior temporal cortices. Further analyses showed that tasks involving the combination of remote information (combination tasks) activated more anterior areas of the lateral PFC than tasks involving the free generation of unusual responses (unusual generation tasks), although both types of tasks shared caudal prefrontal areas. In addition, verbal and non-verbal tasks involved the same regions in the left caudal prefrontal, temporal, and parietal areas, but also distinct domain-oriented areas. Taken together, these findings suggest that several frontal and parieto-temporal regions may support cognitive processes shared by diverse creativity tasks, and that some regions may be specialized for distinct types of processes. In particular, the lateral PFC appeared to be organized along a rostro-caudal axis, with rostral regions involved in combining ideas creatively and more posterior regions involved in freely generating novel ideas. PMID:23966927
A CCA+ICA based model for multi-task brain imaging data fusion and its application to schizophrenia.
Sui, Jing; Adali, Tülay; Pearlson, Godfrey; Yang, Honghui; Sponheim, Scott R; White, Tonya; Calhoun, Vince D
2010-05-15
Collection of multiple-task brain imaging data from the same subject has now become common practice in medical imaging studies. In this paper, we propose a simple yet effective model, "CCA+ICA", as a powerful tool for multi-task data fusion. This joint blind source separation (BSS) model takes advantage of two multivariate methods: canonical correlation analysis and independent component analysis, to achieve both high estimation accuracy and to provide the correct connection between two datasets in which sources can have either common or distinct between-dataset correlation. In both simulated and real fMRI applications, we compare the proposed scheme with other joint BSS models and examine the different modeling assumptions. The contrast images of two tasks: sensorimotor (SM) and Sternberg working memory (SB), derived from a general linear model (GLM), were chosen to contribute real multi-task fMRI data, both of which were collected from 50 schizophrenia patients and 50 healthy controls. When examining the relationship with duration of illness, CCA+ICA revealed a significant negative correlation with temporal lobe activation. Furthermore, CCA+ICA located sensorimotor cortex as the group-discriminative regions for both tasks and identified the superior temporal gyrus in SM and prefrontal cortex in SB as task-specific group-discriminative brain networks. In summary, we compared the new approach to some competitive methods with different assumptions, and found consistent results regarding each of their hypotheses on connecting the two tasks. Such an approach fills a gap in existing multivariate methods for identifying biomarkers from brain imaging data.
NASA Technical Reports Server (NTRS)
Kossakovski, D. A.; Bearman, G. H.; Kirschvink, J. L.
2000-01-01
A variety of in-situ planetary exploration tasks such as particulate analysis or life detection require a tool with a capability for combined imaging and chemical analysis with sub-micron spatial resolution.
Interactive Scene Analysis Module - A sensor-database fusion system for telerobotic environments
NASA Technical Reports Server (NTRS)
Cooper, Eric G.; Vazquez, Sixto L.; Goode, Plesent W.
1992-01-01
Accomplishing a task with telerobotics typically involves a combination of operator control/supervision and a 'script' of preprogrammed commands. These commands usually assume that the locations of various objects in the task space conform to some internal representation (database) of that task space. The ability to quickly and accurately verify the task environment against the internal database would improve the robustness of these preprogrammed commands. In addition, the on-line initialization and maintenance of a task-space database is difficult for operators using Cartesian coordinates alone. This paper describes the Interactive Scene Analysis Module (ISAM), developed to provide task-space database initialization and verification utilizing 3-D graphic overlay modelling, video imaging, and laser-radar-based range imaging. Through the fusion of task-space database information and image sensor data, a verifiable task-space model is generated, providing location and orientation data for objects in a task space. This paper also describes applications of the ISAM in the Intelligent Systems Research Laboratory (ISRL) at NASA Langley Research Center, and discusses its performance relative to representation accuracy and operator interface efficiency.
Task-based statistical image reconstruction for high-quality cone-beam CT
NASA Astrophysics Data System (ADS)
Dang, Hao; Webster Stayman, J.; Xu, Jennifer; Zbijewski, Wojciech; Sisniega, Alejandro; Mow, Michael; Wang, Xiaohui; Foos, David H.; Aygun, Nafi; Koliatsos, Vassilis E.; Siewerdsen, Jeffrey H.
2017-11-01
Task-based analysis of medical imaging performance underlies many ongoing efforts in the development of new imaging systems. In statistical image reconstruction, regularization is often formulated in terms that encourage smoothness and/or sharpness (e.g. a linear, quadratic, or Huber penalty) but without explicit formulation of the task. We propose an alternative regularization approach in which a spatially varying penalty is determined that maximizes task-based imaging performance at every location in a 3D image. We apply the method to model-based image reconstruction (MBIR, viz. penalized weighted least-squares, PWLS) in cone-beam CT (CBCT) of the head, focusing on the task of detecting a small, low-contrast intracranial hemorrhage (ICH), and we test the performance of the algorithm in the context of a recently developed CBCT prototype for point-of-care imaging of brain injury. Theoretical predictions of local spatial resolution and noise are computed via an optimization by which regularization (specifically, the quadratic penalty strength) is allowed to vary throughout the image to maximize the local task-based detectability index (d'). Simulation studies and test-bench experiments were performed using an anthropomorphic head phantom. Three PWLS implementations were tested: conventional (constant) penalty; a certainty-based penalty derived to enforce a constant point-spread function (PSF); and the task-based penalty derived to maximize local detectability at each location. Conventional (constant) regularization exhibited a fairly strong degree of spatial variation in d', and the certainty-based method achieved uniform PSF, but each exhibited a reduction in detectability compared to the task-based method, which improved detectability by up to ~15%. The improvement was strongest in areas of high attenuation (skull base), where the conventional and certainty-based methods tended to over-smooth the data.
The task-driven reconstruction approach presents a promising regularization method in MBIR by explicitly incorporating task-based imaging performance as the objective. The results demonstrate improved ICH conspicuity and support the development of high-quality CBCT systems.
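The PWLS objective with a spatially varying quadratic penalty can be made concrete in a toy 1D setting (a sketch, not the authors' optimizer): one gradient step on Phi(x) = (y - Ax)^T W (y - Ax) + sum_j beta[j] * (x[j+1] - x[j])^2, where beta varies with location and would be set by the task-based detectability optimization described above.

```python
def pwls_step(x, y, A, W, beta, step):
    """One gradient-descent update of 1D penalized weighted least squares
    with a spatially varying quadratic roughness penalty beta[j]."""
    n = len(x)
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(len(y))]
    # Data-fidelity gradient: -2 A^T W (y - Ax)
    grad = [-2.0 * sum(A[i][j] * W[i] * (y[i] - Ax[i])
                       for i in range(len(y))) for j in range(n)]
    # Penalty gradient: beta[j] * (x[j+1] - x[j])^2 per neighbor pair
    for j in range(n - 1):
        d = x[j + 1] - x[j]
        grad[j] += -2.0 * beta[j] * d
        grad[j + 1] += 2.0 * beta[j] * d
    return [xj - step * g for xj, g in zip(x, grad)]
```

With constant beta this is the conventional penalty; the task-driven method of the paper replaces the uniform beta with a map tuned to maximize local d'.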
Deep and Structured Robust Information Theoretic Learning for Image Analysis.
Deng, Yue; Bao, Feng; Deng, Xuesong; Wang, Ruiping; Kong, Youyong; Dai, Qionghai
2016-07-07
This paper presents a robust information theoretic (RIT) model to reduce uncertainties, i.e. missing and noisy labels, in general discriminative data representation tasks. The fundamental pursuit of our model is to simultaneously learn a transformation function and a discriminative classifier that maximize the mutual information of data and their labels in the latent space. Within this general paradigm, we discuss three types of RIT implementations: linear subspace embedding, deep transformation, and structured sparse learning. In practice, the RIT and deep RIT models are exploited to solve the image categorization task, with performance verified on various benchmark datasets. The structured sparse RIT is further applied to a medical image analysis task, brain MRI segmentation, that allows group-level feature selection on the brain tissues.
Khan, Bilal; Chand, Pankaj; Alexandrakis, George
2011-01-01
Functional near infrared (fNIR) imaging was used to identify spatiotemporal relations between spatially distinct cortical regions activated during various hand and arm motion protocols. Imaging was performed over a field of view (FOV, 12 x 8.4 cm) including the secondary motor, primary sensorimotor, and the posterior parietal cortices over a single brain hemisphere. This is a more extended FOV than typically used in current fNIR studies. Three subjects performed four motor tasks that induced activation over this extended FOV. The tasks included card flipping (pronation and supination) that, to our knowledge, has not been performed in previous functional magnetic resonance imaging (fMRI) or fNIR studies. An earlier rise and a longer duration of the hemodynamic activation response were found in tasks requiring increased physical or mental effort. Additionally, analysis of activation images by cluster component analysis (CCA) demonstrated that cortical regions can be grouped into clusters, which can be adjacent or distant from each other, that have similar temporal activation patterns depending on whether the performed motor task is guided by visual or tactile feedback. These analyses highlight the future potential of fNIR imaging to tackle clinically relevant questions regarding the spatiotemporal relations between different sensorimotor cortex regions, e.g. ones involved in the rehabilitation response to motor impairments. PMID:22162826
Multiplex Quantitative Histologic Analysis of Human Breast Cancer Cell Signaling and Cell Fate
2010-05-01
Breast cancer, cell signaling, cell proliferation, histology, image analysis...revealed by individual stains in multiplex combinations; and (3) software (FARSIGHT) for automated multispectral image analysis that (i) segments...Task 3. Develop computational algorithms for multispectral immunohistological image analysis FARSIGHT software was developed to quantify intrinsic
Deep learning for digital pathology image analysis: A comprehensive tutorial with selected use cases
Janowczyk, Andrew; Madabhushi, Anant
2016-01-01
Background: Deep learning (DL) is a representation learning approach ideally suited for image analysis challenges in digital pathology (DP). The variety of image analysis tasks in the context of DP includes detection and counting (e.g., mitotic events), segmentation (e.g., nuclei), and tissue classification (e.g., cancerous vs. non-cancerous). Unfortunately, issues with slide preparation, variations in staining and scanning across sites and vendor platforms, as well as biological variance, such as the presentation of different grades of disease, make these image analysis tasks particularly challenging. Traditional approaches, wherein domain-specific cues are manually identified and developed into task-specific "handcrafted" features, can require extensive tuning to accommodate these variances. However, DL takes a more domain-agnostic approach, combining feature discovery and implementation to maximally discriminate between the classes of interest. While DL approaches have performed well in a few DP-related image analysis tasks, such as detection and tissue classification, the currently available open source tools and tutorials do not provide guidance on challenges such as (a) selecting appropriate magnification, (b) managing errors in annotations in the training (or learning) dataset, and (c) identifying a suitable training set containing information-rich exemplars. These foundational concepts, which are needed to successfully translate the DL paradigm to DP tasks, are non-trivial for (i) DL experts with minimal digital histology experience, and (ii) DP and image processing experts with minimal DL experience, to derive on their own, thus meriting a dedicated tutorial. Aims: This paper investigates these concepts through seven unique DP tasks as use cases to elucidate techniques needed to produce results comparable, and in many cases superior, to those from state-of-the-art handcrafted feature-based classification approaches.
Results: Specifically, in this tutorial on DL for DP image analysis, we show how an open source framework (Caffe), with a singular network architecture, can be used to address: (a) nuclei segmentation (F-score of 0.83 across 12,000 nuclei), (b) epithelium segmentation (F-score of 0.84 across 1735 regions), (c) tubule segmentation (F-score of 0.83 from 795 tubules), (d) lymphocyte detection (F-score of 0.90 across 3064 lymphocytes), (e) mitosis detection (F-score of 0.53 across 550 mitotic events), (f) invasive ductal carcinoma detection (F-score of 0.7648 on 50 k testing patches), and (g) lymphoma classification (classification accuracy of 0.97 across 374 images). Conclusion: This paper represents the largest comprehensive study of DL approaches in DP to date, with over 1200 DP images used during evaluation. The supplemental online material that accompanies this paper consists of step-by-step instructions for the usage of the supplied source code, trained models, and input data. PMID:27563488
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gang, G; Stayman, J; Ouadah, S
2015-06-15
Purpose: This work introduces a task-driven imaging framework that utilizes a patient-specific anatomical model, a mathematical definition of the imaging task, and a model of the imaging system to prospectively design acquisition and reconstruction techniques that maximize task-based imaging performance. Utility of the framework is demonstrated in the joint optimization of tube current modulation and view-dependent reconstruction kernel in filtered-backprojection reconstruction, and in non-circular orbit design in model-based reconstruction. Methods: The system model is based on a cascaded systems analysis of cone-beam CT capable of predicting the spatially varying noise and resolution characteristics as a function of the anatomical model and a wide range of imaging parameters. Detectability index for a non-prewhitening observer model is used as the objective function in a task-driven optimization. The combination of tube current and reconstruction kernel modulation profiles was identified through an alternating optimization algorithm in which tube current was updated analytically, followed by a gradient-based optimization of the reconstruction kernel. The non-circular orbit was first parameterized as a linear combination of basis functions, and the coefficients were then optimized using an evolutionary algorithm. The task-driven strategy was compared with conventional acquisitions without modulation, using automatic exposure control, and in a circular orbit. Results: The task-driven strategy outperformed conventional techniques in all tasks investigated, improving the detectability of a spherical lesion detection task by an average of 50% in the interior of a pelvis phantom. The non-circular orbit design successfully mitigated photon starvation effects arising from a dense embolization coil in a head phantom, improving the conspicuity of an intracranial hemorrhage proximal to the coil.
Conclusion: The task-driven imaging framework leverages knowledge of the imaging task within a patient-specific anatomical model to optimize image acquisition and reconstruction techniques, thereby improving imaging performance beyond that achievable with conventional approaches. 2R01-CA-112163; R01-EB-017226; U01-EB-018758; Siemens Healthcare (Forchheim, Germany)
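The orbit-design step above parameterizes the orbit as a linear combination of basis functions and optimizes the coefficients with an evolutionary algorithm. A minimal (1+1) evolution-strategy sketch of that optimization pattern is below; the objective function and the "optimal" coefficient vector are invented for illustration (the real framework evaluates detectability through the system model).

```python
import numpy as np

# Toy (1+1) evolution strategy over basis-function coefficients c.
rng = np.random.default_rng(0)

target = np.array([0.5, -0.3, 0.8])        # hypothetical best coefficients
def objective(c):
    return -np.sum((c - target) ** 2)      # stand-in for detectability; peaks at target

c = np.zeros(3)
step = 0.5
for i in range(2000):
    candidate = c + step * rng.standard_normal(3)
    if objective(candidate) > objective(c):
        c = candidate                      # keep only improving mutations
    step *= 0.998                          # anneal the mutation size
print(np.round(c, 1))
```

Evolutionary search is attractive here because the objective is evaluated through a full system model and gradients are not readily available.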
Memory-Augmented Cellular Automata for Image Analysis.
1978-11-01
case in which each cell has memory size proportional to the logarithm of the input size, showing the increased capabilities of these machines for executing a variety of basic image analysis and recognition tasks. (Author)
Yu, Guan; Liu, Yufeng; Thung, Kim-Han; Shen, Dinggang
2014-01-01
Accurately identifying mild cognitive impairment (MCI) individuals who will progress to Alzheimer's disease (AD) is very important for making early interventions. Many classification methods focus on integrating multiple imaging modalities such as magnetic resonance imaging (MRI) and fluorodeoxyglucose positron emission tomography (FDG-PET). However, the main challenge for MCI classification using multiple imaging modalities is that many subjects have missing data. For example, in the Alzheimer's Disease Neuroimaging Initiative (ADNI) study, almost half of the subjects do not have PET images. In this paper, we propose a new and flexible binary classification method, namely Multi-task Linear Programming Discriminant (MLPD) analysis, for incomplete multi-source feature learning. Specifically, we decompose the classification problem into different classification tasks, i.e., one for each combination of available data sources. To solve all the classification tasks jointly, our proposed MLPD method links them together by constraining them to achieve a similar estimated mean difference between the two classes (under classification) for the shared features. Unlike the state-of-the-art incomplete Multi-Source Feature (iMSF) learning method, which constrains the different classification tasks to choose a common feature subset for the shared features, MLPD can flexibly and adaptively choose different feature subsets for different classification tasks. Furthermore, MLPD can be efficiently implemented by linear programming. To validate our MLPD method, we performed experiments on the ADNI baseline dataset with the incomplete MRI and PET images from 167 progressive MCI (pMCI) subjects and 226 stable MCI (sMCI) subjects.
We further compared our method with the iMSF method (using incomplete MRI and PET images) and also the single-task classification method (using only MRI or only subjects with both MRI and PET images). Experimental results show very promising performance of our proposed MLPD method.
PMID:24820966
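The decomposition step described above, one classification task per combination of available data sources, can be sketched concretely. The toy subject records and field names below are illustrative, not the ADNI data layout.

```python
# Minimal sketch of the MLPD decomposition: group subjects into classification
# tasks, one per combination of available sources (here MRI and PET).

subjects = [
    {"id": 1, "MRI": True, "PET": True},
    {"id": 2, "MRI": True, "PET": False},
    {"id": 3, "MRI": True, "PET": True},
    {"id": 4, "MRI": True, "PET": False},
]

def decompose_into_tasks(subjects, sources=("MRI", "PET")):
    tasks = {}
    for s in subjects:
        key = tuple(src for src in sources if s[src])
        tasks.setdefault(key, []).append(s["id"])
    return tasks

tasks = decompose_into_tasks(subjects)
print(tasks)  # {('MRI', 'PET'): [1, 3], ('MRI',): [2, 4]}
```

Each resulting group is then fit jointly, with the shared-feature constraint linking the per-group discriminants.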
A scene-analysis approach to remote sensing. [San Francisco, California
NASA Technical Reports Server (NTRS)
Tenenbaum, J. M. (Principal Investigator); Fischler, M. A.; Wolf, H. C.
1978-01-01
The author has identified the following significant results. Geometric correspondence between a sensed image and a symbolic map is established in an initial stage of processing by adjusting parameters of a sensed model so that the image features predicted from the map optimally match corresponding features extracted from the sensed image. Information in the map is then used to constrain where to look in an image, what to look for, and how to interpret what is seen. For simple monitoring tasks involving multispectral classification, these constraints significantly reduce computation, simplify interpretation, and improve the utility of the resulting information. Previously intractable tasks requiring spatial and textural analysis may become straightforward in the context established by the map knowledge. The use of map-guided image analysis in monitoring the volume of water in a reservoir, the number of boxcars in a railyard, and the number of ships in a harbor is demonstrated.
Blackboard architecture for medical image interpretation
NASA Astrophysics Data System (ADS)
Davis, Darryl N.; Taylor, Christopher J.
1991-06-01
There is a growing interest in using sophisticated knowledge-based systems for biomedical image interpretation. We present a principled attempt to use artificial intelligence methodologies in interpreting lateral skull x-ray images. Such radiographs are routinely used in cephalometric analysis to provide quantitative measurements useful to clinical orthodontists. Manual and interactive methods of analysis are known to be error prone and previous attempts to automate this analysis typically fail to capture the expertise and adaptability required to cope with the variability in biological structure and image quality. An integrated model-based system has been developed which makes use of a blackboard architecture and multiple knowledge sources. A model definition interface allows quantitative models, of feature appearance and location, to be built from examples as well as more qualitative modelling constructs. Visual task definition and blackboard control modules allow task-specific knowledge sources to act on information available to the blackboard in a hypothesise and test reasoning cycle. Further knowledge-based modules include object selection, location hypothesis, intelligent segmentation, and constraint propagation systems. Alternative solutions to given tasks are permitted.
Yiu, Edwin M-L; Wang, Gaowu; Lo, Andy C Y; Chan, Karen M-K; Ma, Estella P-M; Kong, Jiangping; Barrett, Elizabeth Ann
2013-11-01
The present study aimed to determine whether there were physiological differences in the vocal fold vibration between nonfatigued and fatigued voices using high-speed laryngoscopic imaging and quantitative analysis. Twenty participants aged from 18 to 23 years (mean, 21.2 years; standard deviation, 1.3 years) with normal voice were recruited to participate in an extended singing task. Vocal fatigue was induced using a singing task. High-speed laryngoscopic image recordings of /i/ phonation were taken before and after the singing task. The laryngoscopic images were semiautomatically analyzed with the quantitative high-speed video processing program to extract indices related to the anteroposterior dimension (length), transverse dimension (width), and the speed of opening and closing. Significant reduction in the glottal length-to-width ratio index was found after vocal fatigue. Physiologically, this indicated either a significantly shorter (anteroposteriorly) or a wider (transversely) glottis after vocal fatigue. The high-speed imaging technique using quantitative analysis has the potential for early identification of vocally fatigued voice. Copyright © 2013 The Voice Foundation. All rights reserved.
GOATS Image Projection Component
NASA Technical Reports Server (NTRS)
Haber, Benjamin M.; Green, Joseph J.
2011-01-01
When doing mission analysis and design of an imaging system in orbit around the Earth, answering the fundamental question of imaging performance requires an understanding of the image products that will be produced by the imaging system. GOATS comprises a series of MATLAB functions for performing geometric image projections. Unique features of the software include function modularity, a standard MATLAB interface, easy-to-understand first-principles-based analysis, and the ability to perform geometric image projections of framing-type imaging systems. The software modules are designed for maximum analysis utility: each can be used independently for many varied analysis tasks, or in conjunction with other orbit analysis tools.
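A geometric image projection of the kind described, mapping world points through a framing (pinhole) camera, can be sketched in a few lines. The pose, altitude, and focal length below are invented for illustration; a real analysis would use the mission's orbit geometry, and GOATS itself is MATLAB, so this Python sketch only mirrors the idea.

```python
import numpy as np

# Pinhole projection sketch: world points -> camera frame -> image plane.
def project(points_w, cam_pos, R, focal):
    """Project Nx3 world points to image-plane coordinates (pinhole model)."""
    p_cam = (R @ (points_w - cam_pos).T).T       # world -> camera frame
    return focal * p_cam[:, :2] / p_cam[:, 2:3]  # perspective divide

# Nadir-pointing camera 500 units above the origin; camera z-axis looks down.
R = np.diag([1.0, -1.0, -1.0])                   # proper rotation, det = +1
cam_pos = np.array([0.0, 0.0, 500.0])
ground = np.array([[0.0, 0.0, 0.0],              # sub-satellite point
                   [1000.0, 0.0, 0.0]])          # off-nadir ground point
uv = project(ground, cam_pos, R, focal=0.5)
print(uv)  # sub-satellite point maps to the principal point (0, 0)
```

Because the projection is a pure function of geometry, modules like this compose naturally with separate orbit-propagation and sensor-model functions, which is the modularity the abstract emphasizes.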
Cascaded systems analysis of noise and detectability in dual-energy cone-beam CT
Gang, Grace J.; Zbijewski, Wojciech; Webster Stayman, J.; Siewerdsen, Jeffrey H.
2012-01-01
Purpose: Dual-energy computed tomography and dual-energy cone-beam computed tomography (DE-CBCT) are promising modalities for applications ranging from vascular to breast, renal, hepatic, and musculoskeletal imaging. Accordingly, the optimization of imaging techniques for such applications would benefit significantly from a general theoretical description of image quality that properly incorporates factors of acquisition, reconstruction, and tissue decomposition in DE tomography. This work reports a cascaded systems analysis model that includes the Poisson statistics of x rays (quantum noise), detector model (flat-panel detectors), anatomical background, image reconstruction (filtered backprojection), DE decomposition (weighted subtraction), and simple observer models to yield a task-based framework for DE technique optimization. Methods: The theoretical framework extends previous modeling of DE projection radiography and CBCT. Signal and noise transfer characteristics are propagated through physical and mathematical stages of image formation and reconstruction. Dual-energy decomposition was modeled according to weighted subtraction of low- and high-energy images to yield the 3D DE noise-power spectrum (NPS) and noise-equivalent quanta (NEQ), which, in combination with observer models and the imaging task, yields the dual-energy detectability index (d′). Model calculations were validated with NPS and NEQ measurements from an experimental imaging bench simulating the geometry of a dedicated musculoskeletal extremities scanner. Imaging techniques, including kVp pair and dose allocation, were optimized using d′ as an objective function for three example imaging tasks: (1) kidney stone discrimination; (2) iodine vs bone in a uniform, soft-tissue background; and (3) soft tissue tumor detection on power-law anatomical background. 
Results: Theoretical calculations of DE NPS and NEQ demonstrated good agreement with experimental measurements over a broad range of imaging conditions. Optimization results suggest a lower fraction of total dose imparted by the low-energy acquisition, a finding consistent with previous literature. The selection of optimal kVp pair reveals the combined effect of both quantum noise and contrast in the kidney stone discrimination and soft-tissue tumor detection tasks, whereas the K-edge effect of iodine was the dominant factor in determining kVp pairs in the iodine vs bone task. The soft-tissue tumor task illustrated the benefit of dual-energy imaging in eliminating anatomical background noise and improving detectability beyond that achievable by single-energy scans. Conclusions: This work established a task-based theoretical framework that is predictive of DE image quality. The model can be utilized in optimizing a broad range of parameters in image acquisition, reconstruction, and decomposition, providing a useful tool for maximizing DE-CBCT image quality and reducing dose. PMID:22894440
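The figure of merit used throughout the framework above is a task-based detectability index computed from the system's transfer and noise properties. A numerical sketch for a non-prewhitening observer (1D radial form) is below; the MTF, NPS, and task-function models are illustrative stand-ins, not measurements from the paper.

```python
import numpy as np

# d'^2 = [∫|MTF·W|² df]² / ∫|MTF·W|² NPS df  (non-prewhitening observer, 1D).
f = np.linspace(0.01, 2.0, 400)            # spatial frequency (mm^-1)
df = f[1] - f[0]
mtf = np.exp(-f / 0.8)                     # toy system MTF
nps = 1e-4 * (0.2 + f) / (1.0 + f ** 2)    # toy noise-power spectrum
w_task = np.exp(-(2.0 * f) ** 2)           # low-frequency (large, low-contrast) task

num = (np.sum((mtf * w_task) ** 2) * df) ** 2
den = np.sum((mtf * w_task) ** 2 * nps) * df
d_prime = np.sqrt(num / den)

# Quadrupling the NPS everywhere halves d' exactly under this observer model:
d_prime_noisy = np.sqrt(num / (4.0 * den))
print(round(d_prime / d_prime_noisy, 6))   # 2.0
```

Using d′ as an objective function, as in the abstract, amounts to re-evaluating expressions like this while sweeping acquisition parameters (kVp pair, dose allocation) that reshape the MTF and NPS.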
Biomedical image analysis and processing in clouds
NASA Astrophysics Data System (ADS)
Bednarz, Tomasz; Szul, Piotr; Arzhaeva, Yulia; Wang, Dadong; Burdett, Neil; Khassapov, Alex; Chen, Shiping; Vallotton, Pascal; Lagerstrom, Ryan; Gureyev, Tim; Taylor, John
2013-10-01
The Cloud-based Image Analysis and Processing Toolbox project runs on the Australian National eResearch Collaboration Tools and Resources (NeCTAR) cloud infrastructure and gives researchers access to biomedical image processing and analysis services via remotely accessible user interfaces. By providing user-friendly access to cloud computing resources and new workflow-based interfaces, our solution enables researchers to carry out various challenging image analysis and reconstruction tasks. Several case studies will be presented during the conference.
Long, Zhiying; Chen, Kewei; Wu, Xia; Reiman, Eric; Peng, Danling; Yao, Li
2009-02-01
Spatial independent component analysis (sICA) has been widely used to analyze functional magnetic resonance imaging (fMRI) data. sICA implicitly assumes that the intrinsic sources it identifies are spatially statistically independent, which makes it difficult to apply to data containing interdependent sources and confounding factors. This interdependency can arise, for instance, from fMRI studies investigating two tasks in a single session. In this study, we introduced a linear projection approach and considered its utilization as a tool to separate task-related components from two-task fMRI data. The robustness and feasibility of the method are substantiated through simulations on synthetic data and real resting-state fMRI data. Both simulated and real two-task fMRI experiments demonstrated that sICA in combination with the projection method succeeded in separating spatially dependent components and had better detection power than a purely model-based method when estimating activation induced by each task as well as by both tasks.
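The separation problem and the projection remedy can be illustrated on synthetic data: two overlapping block designs make the task-related sources dependent, and projecting out one task's time course before estimating the other recovers its spatial map. This toy linear projection is a stand-in for the paper's sICA + projection pipeline; all designs and maps below are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)
n_t, n_vox = 120, 50
t = np.arange(n_t)
task1 = (t % 40 < 20).astype(float)            # block design for task 1
task2 = np.roll(task1, 10)                     # overlapping design for task 2
maps = rng.standard_normal((2, n_vox))         # true spatial maps
data = (np.outer(task1, maps[0]) + np.outer(task2, maps[1])
        + 0.1 * rng.standard_normal((n_t, n_vox)))

# Orthogonal projector onto the complement of task2's time course:
X2 = task2[:, None]
P = np.eye(n_t) - X2 @ np.linalg.pinv(X2)
reg1 = P @ task1                                # task1 regressor with task2 removed
est_map1 = (reg1 @ (P @ data)) / (reg1 @ reg1)  # least-squares map estimate
corr = np.corrcoef(est_map1, maps[0])[0, 1]
print(corr > 0.9)
```

Because P annihilates task2's time course exactly, the estimate of map 1 is uncontaminated by source 2 despite their temporal overlap.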
Image patch-based method for automated classification and detection of focal liver lesions on CT
NASA Astrophysics Data System (ADS)
Safdari, Mustafa; Pasari, Raghav; Rubin, Daniel; Greenspan, Hayit
2013-03-01
We developed a method for automated classification and detection of liver lesions in CT images based on image patch representation and bag-of-visual-words (BoVW). BoVW analysis has been extensively used in the computer vision domain to analyze scenery images. In the current work we discuss how it can be used for liver lesion classification and detection. The methodology includes building a dictionary for a training set using local descriptors and representing a region in the image using a visual word histogram. Two tasks are described: a classification task, for lesion characterization, and a detection task, in which a scan window moves across the image and is determined to be normal liver tissue or a lesion. Data: In the classification task 73 CT images of liver lesions were used, 25 images having cysts, 24 having metastases, and 24 having hemangiomas. A radiologist circumscribed the lesions, creating a region of interest (ROI) in each of the images. He then provided the diagnosis, which was established either by biopsy or clinical follow-up. Thus our data set comprises 73 images and 73 ROIs. In the detection task, a radiologist drew ROIs around each liver lesion and two regions of normal liver, for a total of 159 liver lesion ROIs and 146 normal liver ROIs. The radiologist also demarcated the liver boundary. Results: Classification accuracy of more than 95% was obtained. In the detection task, an F1 score of 0.76 was obtained, with 84% recall and 73% precision. The results show the ability to detect lesions regardless of shape.
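The BoVW pipeline described, learn a dictionary from local descriptors, then represent a region as a histogram of visual-word counts, can be sketched minimally. Random toy descriptors stand in for real patch features (e.g. raw intensities or SIFT), and the tiny k-means here is only illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    # Plain Lloyd's algorithm: assign to nearest center, recompute means.
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def bovw_histogram(descriptors, centers):
    # Quantize each descriptor to its nearest visual word, then count.
    words = np.argmin(((descriptors[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    hist = np.bincount(words, minlength=len(centers)).astype(float)
    return hist / hist.sum()

train_desc = rng.standard_normal((500, 8))   # descriptors from training images
dictionary = kmeans(train_desc, k=16)
roi_desc = rng.standard_normal((60, 8))      # descriptors from one region of interest
h = bovw_histogram(roi_desc, dictionary)
print(h.shape, round(float(h.sum()), 6))
```

The fixed-length histogram `h` is what a downstream classifier (or the sliding detection window) consumes, regardless of how many descriptors the region contained.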
Using object-based image analysis to guide the selection of field sample locations
USDA-ARS?s Scientific Manuscript database
One of the most challenging tasks for resource management and research is designing field sampling schemes to achieve unbiased estimates of ecosystem parameters as efficiently as possible. This study focused on the potential of fine-scale image objects from object-based image analysis (OBIA) to be u...
Filtering and left ventricle segmentation of the fetal heart in ultrasound images
NASA Astrophysics Data System (ADS)
Vargas-Quintero, Lorena; Escalante-Ramírez, Boris
2013-11-01
In this paper, we propose to use filtering methods and a segmentation algorithm for the analysis of the fetal heart in ultrasound images. Since speckle noise hampers the analysis of ultrasound images, filtering is a useful preprocessing step in these applications. The filtering techniques considered in this work assume that the speckle noise is a random variable with a Rayleigh distribution. We use two multiresolution methods: one based on wavelet decomposition and another based on the Hermite transform. The filtering process is used as a way to strengthen the performance of the segmentation tasks. The wavelet-based approach employs a Bayesian estimator at the subband level for pixel classification. The Hermite method computes a mask to find those pixels that are corrupted by speckle. Finally, we selected a method based on a deformable model, or "snake", to evaluate the influence of the filtering techniques on the segmentation of the left ventricle in fetal echocardiographic images.
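The speckle model assumed by these filters, the observed amplitude as the clean image modulated by Rayleigh-distributed noise, is easy to simulate. In the sketch below a simple local-mean filter stands in for the wavelet/Hermite denoisers, just to show the effect of filtering before segmentation; the toy "ventricle" image is invented.

```python
import numpy as np

rng = np.random.default_rng(2)

clean = np.ones((64, 64))
clean[16:48, 16:48] = 2.0                               # bright square as a toy structure
sigma = 1.0
speckle = rng.rayleigh(scale=sigma, size=clean.shape)
noisy = clean * speckle / (sigma * np.sqrt(np.pi / 2))  # unit-mean multiplicative noise

def local_mean(img, r=2):
    # Naive (2r+1)x(2r+1) box filter with edge clamping.
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = img[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1].mean()
    return out

mse_noisy = np.mean((noisy - clean) ** 2)
mse_filtered = np.mean((local_mean(noisy) - clean) ** 2)
print(mse_filtered < mse_noisy)
```

The multiresolution methods in the paper aim for the same error reduction while preserving the edges that the snake needs, which a plain box filter blurs.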
Functional Heterogeneity and Convergence in the Right Temporoparietal Junction
Lee, Su Mei; McCarthy, Gregory
2016-01-01
The right temporoparietal junction (rTPJ) is engaged by tasks that manipulate biological motion processing, Theory of Mind attributions, and attention reorienting. The proximity of activations elicited by these tasks raises the question of whether these tasks share common cognitive component processes that are subserved by common neural substrates. Here, we used high-resolution whole-brain functional magnetic resonance imaging in a within-subjects design to determine whether these tasks activate common regions of the rTPJ. Each participant was presented with the 3 tasks in the same imaging session. In a whole-brain analysis, we found that only the right and left TPJs were activated by all 3 tasks. Multivoxel pattern analysis revealed that the regions of overlap could still discriminate the 3 tasks. Notably, we found significant cross-task classification in the right TPJ, which suggests a shared neural process between the 3 tasks. Taken together, these results support prior studies that have indicated functional heterogeneity within the rTPJ but also suggest a convergence of function within a region of overlap. These results also call for further investigation into the nature of the function subserved in this overlap region. PMID:25477367
Can Distributed Volunteers Accomplish Massive Data Analysis Tasks?
NASA Technical Reports Server (NTRS)
Kanefsky, B.; Barlow, N. G.; Gulick, V. C.
2001-01-01
We argue that many image analysis tasks can be performed by distributed amateurs. Our pilot study, with crater surveying and classification, has produced encouraging results in terms of both quantity (100,000 crater entries in 2 months) and quality. Additional information is contained in the original extended abstract.
Improving accuracy and power with transfer learning using a meta-analytic database.
Schwartz, Yannick; Varoquaux, Gaël; Pallier, Christophe; Pinel, Philippe; Poline, Jean-Baptiste; Thirion, Bertrand
2012-01-01
Typical cohorts in brain imaging studies are not large enough for systematic testing of all the information contained in the images. To build testable working hypotheses, investigators thus rely on analysis of previous work, sometimes formalized in a so-called meta-analysis. In brain imaging, this approach underlies the specification of regions of interest (ROIs) that are usually selected on the basis of the coordinates of previously detected effects. In this paper, we propose to use a database of images, rather than coordinates, and frame the problem as transfer learning: learning a discriminant model on a reference task to apply it to a different but related new task. To facilitate statistical analysis of small cohorts, we use a sparse discriminant model that selects predictive voxels on the reference task and thus provides a principled procedure to define ROIs. The benefits of our approach are twofold. First it uses the reference database for prediction, i.e., to provide potential biomarkers in a clinical setting. Second it increases statistical power on the new task. We demonstrate on a set of 18 pairs of functional MRI experimental conditions that our approach gives good prediction. In addition, on a specific transfer situation involving different scanners at different locations, we show that voxel selection based on transfer learning leads to higher detection power on small cohorts.
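The transfer idea above, select predictive voxels on a reference task and reuse them as an ROI on a new, related task, can be sketched on synthetic data. Simple univariate correlation screening stands in for the paper's sparse discriminant model; all dimensions and effect sizes below are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

n, p, k = 80, 1000, 20
informative = np.arange(k)                   # voxels that truly carry the effect

def make_task(effect):
    y = rng.integers(0, 2, size=n).astype(float)
    X = rng.standard_normal((n, p))
    X[:, informative] += effect * y[:, None]  # class effect on informative voxels
    return X, y

X_ref, y_ref = make_task(effect=1.5)         # reference (database) cohort
corr = np.abs(np.corrcoef(X_ref.T, y_ref)[-1, :-1])
roi = np.argsort(corr)[-k:]                  # voxels selected on the reference task

overlap = len(set(roi) & set(informative)) / k
print(overlap)
```

A new, smaller cohort would then be tested only on the `roi` voxels, which is where the gain in statistical power comes from: far fewer comparisons than whole-brain testing.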
Encoding processes during retrieval tasks.
Buckner, R L; Wheeler, M E; Sheridan, M A
2001-04-01
Episodic memory encoding is pervasive across many kinds of task and often arises as a secondary processing effect in tasks that do not require intentional memorization. To illustrate the pervasive nature of information processing that leads to episodic encoding, a form of incidental encoding was explored based on the "Testing" phenomenon: The incidental-encoding task was an episodic memory retrieval task. Behavioral data showed that performing a memory retrieval task was as effective as intentional instructions at promoting episodic encoding. During fMRI imaging, subjects viewed old and new words and indicated whether they remembered them. Relevant to encoding, the fate of the new words was examined using a second, surprise test of recognition after the imaging session. fMRI analysis of those new words that were later remembered revealed greater activity in left frontal regions than those that were later forgotten - the same pattern of results as previously observed for traditional incidental and intentional episodic encoding tasks. This finding may offer a partial explanation for why repeated testing improves memory performance. Furthermore, the observation of correlates of episodic memory encoding during retrieval tasks challenges some interpretations that arise from direct comparisons between "encoding tasks" and "retrieval tasks" in imaging data. Encoding processes and their neural correlates may arise in many tasks, even those nominally labeled as retrieval tasks by the experimenter.
Anwar, A R; Muthalib, M; Perrey, S; Galka, A; Granert, O; Wolff, S; Deuschl, G; Raethjen, J; Heute, U; Muthuraman, M
2012-01-01
Directionality analysis of signals originating from different parts of brain during motor tasks has gained a lot of interest. Since brain activity can be recorded over time, methods of time series analysis can be applied to medical time series as well. Granger Causality is a method to find a causal relationship between time series. Such causality can be referred to as a directional connection and is not necessarily bidirectional. The aim of this study is to differentiate between different motor tasks on the basis of activation maps and also to understand the nature of connections present between different parts of the brain. In this paper, three different motor tasks (finger tapping, simple finger sequencing, and complex finger sequencing) are analyzed. Time series for each task were extracted from functional magnetic resonance imaging (fMRI) data, which have a very good spatial resolution and can look into the sub-cortical regions of the brain. Activation maps based on fMRI images show that, in case of complex finger sequencing, most parts of the brain are active, unlike finger tapping during which only limited regions show activity. Directionality analysis on time series extracted from contralateral motor cortex (CMC), supplementary motor area (SMA), and cerebellum (CER) show bidirectional connections between these parts of the brain. In case of simple finger sequencing and complex finger sequencing, the strongest connections originate from SMA and CMC, while connections originating from CER in either direction are the weakest ones in magnitude during all paradigms.
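Granger causality between two extracted time series reduces to comparing a restricted autoregression (the target's own past) against a full one that also includes the source's past. A minimal least-squares sketch, with a synthetic coupling standing in for real fMRI time series (the AR order and coupling strength are arbitrary choices, not values from the study):

```python
import numpy as np

def granger_f(x, y, p=2):
    """F-statistic for 'x Granger-causes y' using order-p least-squares autoregressions."""
    n = len(y)
    Y = y[p:]
    lags = lambda v: np.column_stack([v[p - k : n - k] for k in range(1, p + 1)])
    def rss(X):
        X1 = np.column_stack([np.ones(len(Y)), X])
        beta, *_ = np.linalg.lstsq(X1, Y, rcond=None)
        r = Y - X1 @ beta
        return r @ r
    rss_r = rss(lags(y))                               # restricted: y's own past only
    rss_f = rss(np.column_stack([lags(y), lags(x)]))   # full: plus x's past
    m = len(Y)
    return ((rss_r - rss_f) / p) / (rss_f / (m - 2 * p - 1))

rng = np.random.default_rng(1)
n = 500
x = rng.standard_normal(n)                  # driver series
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

f_xy = granger_f(x, y)   # large: x's past helps predict y
f_yx = granger_f(y, x)   # small: y's past does not help predict x
print(f_xy, f_yx)
```

The asymmetry of the two F-statistics is what makes the inferred connection directional rather than merely correlational.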
Should I Stop or Should I Go? The Role of Associations and Expectancies
2015-01-01
Following exposure to consistent stimulus–stop mappings, response inhibition can become automatized with practice. What is learned is less clear, even though this has important theoretical and practical implications. A recent analysis indicates that stimuli can become associated with a stop signal or with a stop goal. Furthermore, expectancy may play an important role. Previous studies that have used stop or no-go signals to manipulate stimulus–stop learning cannot distinguish between stimulus-signal and stimulus-goal associations, and expectancy has not been measured properly. In the present study, participants performed a task that combined features of the go/no-go task and the stop-signal task in which the stop-signal rule changed at the beginning of each block. The go and stop signals were superimposed over 40 task-irrelevant images. Our results show that participants can learn direct associations between images and the stop goal without mediation via the stop signal. Exposure to the image-stop associations influenced task performance during training, as well as expectancies measured following task completion or within the task. Yet despite this, we found an effect of stimulus–stop learning on test performance only when the task increased the task-relevance of the images. This could indicate that the influence of stimulus–stop learning on go performance is strongly influenced by attention to both task-relevant and task-irrelevant stimulus features. More generally, our findings suggest a strong interplay between automatic and controlled processes. PMID:26322688
TWave: High-Order Analysis of Functional MRI
Barnathan, Michael; Megalooikonomou, Vasileios; Faloutsos, Christos; Faro, Scott; Mohamed, Feroze B.
2011-01-01
The traditional approach to functional image analysis models images as matrices of raw voxel intensity values. Although such a representation is widely utilized and heavily entrenched both within neuroimaging and in the wider data mining community, the strong interactions among space, time, and categorical modes such as subject and experimental task inherent in functional imaging yield a dataset with “high-order” structure, which matrix models are incapable of exploiting. Reasoning across all of these modes of data concurrently requires a high-order model capable of representing relationships between all modes of the data in tandem. We thus propose to model functional MRI data using tensors, which are high-order generalizations of matrices equivalent to multidimensional arrays or data cubes. However, several unique challenges exist in the high-order analysis of functional medical data: naïve tensor models are incapable of exploiting spatiotemporal locality patterns, standard tensor analysis techniques exhibit poor efficiency, and mixtures of numeric and categorical modes of data are very often present in neuroimaging experiments. Formulating the problem of image clustering as a form of Latent Semantic Analysis and using the WaveCluster algorithm as a baseline, we propose a comprehensive hybrid tensor and wavelet framework for clustering, concept discovery, and compression of functional medical images which successfully addresses these challenges. Our approach reduced runtime and dataset size on a 9.3 GB finger opposition motor task fMRI dataset by up to 98% while exhibiting improved spatiotemporal coherence relative to standard tensor, wavelet, and voxel-based approaches. 
Our clustering technique was capable of automatically differentiating between the frontal areas of the brain responsible for task-related habituation and the motor regions responsible for executing the motor task, in contrast to a widely used fMRI analysis program, SPM, which only detected the latter region. Furthermore, our approach discovered latent concepts suggestive of subject handedness nearly 100x faster than standard approaches. These results suggest that a high-order model is an integral component to accurate scalable functional neuroimaging. PMID:21729758
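The full WaveCluster/tensor hybrid is beyond a short example, but the underlying tensor idea can be sketched with a plain truncated higher-order SVD (Tucker-style) compression of a small three-mode array. The data and ranks below are synthetic stand-ins, not the 9.3 GB fMRI dataset:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: the chosen mode becomes rows, everything else columns."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_mult(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T, ranks):
    """Truncated higher-order SVD: one factor matrix per mode plus a small core."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for m, U in enumerate(factors):
        core = mode_mult(core, U.T, m)
    return core, factors

def reconstruct(core, factors):
    T = core
    for m, U in enumerate(factors):
        T = mode_mult(T, U, m)
    return T

rng = np.random.default_rng(0)
# synthetic low-rank "x * y * time" block plus noise
A, B, C = (rng.standard_normal((d, 3)) for d in (16, 16, 20))
T = np.einsum('ir,jr,kr->ijk', A, B, C) + 0.01 * rng.standard_normal((16, 16, 20))

core, factors = hosvd(T, ranks=(3, 3, 3))
err = np.linalg.norm(T - reconstruct(core, factors)) / np.linalg.norm(T)
print(round(err, 4))   # small: a 3x3x3 core captures nearly all structure
```

The compression ratio is what the tensor view buys: here a 16×16×20 array is summarized by a 3×3×3 core plus three thin factor matrices.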
A Factor Analysis of Learning Data and Selected Ability Test Scores
ERIC Educational Resources Information Center
Jones, Dorothy L.
1976-01-01
A verbal concept-learning task permitting the externalizing and quantifying of learning behavior and 16 ability tests were administered to female graduate students. Data were analyzed by alpha factor analysis and incomplete image analysis. Six alpha factors and 12 image factors were extracted and orthogonally rotated. Four areas of cognitive…
Gomez-Cardona, Daniel; Hayes, John W; Zhang, Ran; Li, Ke; Cruz-Bastida, Juan Pablo; Chen, Guang-Hong
2018-05-01
Different low-signal correction (LSC) methods have been shown to efficiently reduce noise streaks and noise level in CT to provide acceptable images at low-radiation dose levels. These methods usually result in CT images with highly shift-variant and anisotropic spatial resolution and noise, which makes the parameter optimization process highly nontrivial. The purpose of this work was to develop a local task-based parameter optimization framework for LSC methods. Two well-known LSC methods, the adaptive trimmed mean (ATM) filter and the anisotropic diffusion (AD) filter, were used as examples to demonstrate how to use the task-based framework to optimize filter parameter selection. Two parameters, denoted by the set P, for each LSC method were included in the optimization problem. For the ATM filter, these parameters are the low- and high-signal threshold levels p_l and p_h; for the AD filter, the parameters are the exponents δ and γ in the brightness gradient function. The detectability index d' under the non-prewhitening (NPW) mathematical observer model was selected as the metric for parameter optimization. The optimization problem was formulated as an unconstrained optimization problem that consisted of maximizing an objective function d'_ij(P), where i and j correspond to the i-th imaging task and j-th spatial location, respectively. Since there is no explicit mathematical function to describe the dependence of d' on the set of parameters P for each LSC method, the optimization problem was solved via an experimentally measured d' map over a densely sampled parameter space. In this work, three high-contrast-high-frequency discrimination imaging tasks were defined to explore the parameter space of each of the LSC methods: a vertical bar pattern (task I), a horizontal bar pattern (task II), and a multidirectional feature (task III).
Two spatial locations were considered for the analysis: a posterior region-of-interest (ROI) located within the noise streaks region and an anterior ROI located farther from the noise streaks region. Optimal results derived from the task-based detectability index metric were compared to other operating points in the parameter space with different noise and spatial resolution trade-offs. The optimal operating points determined through the d' metric depended on the interplay between the major spatial frequency components of each imaging task and the highly shift-variant and anisotropic noise and spatial resolution properties associated with each operating point in the LSC parameter space. This interplay influenced imaging performance the most when the major spatial frequency component of a given imaging task coincided with the direction of spatial resolution loss or with the dominant noise spatial frequency component; this was the case for imaging task II. The performance of imaging tasks I and III was influenced by this interplay on a smaller scale than that of imaging task II, since the major frequency component of task I was perpendicular to that of task II, and because imaging task III did not have strong directional dependence. For both LSC methods, there was a strong dependence of the overall d' magnitude and the shape of the contours on the spatial location within the phantom, particularly for imaging tasks II and III. The d' value obtained at the optimal operating point for each spatial location and imaging task was similar between the LSC methods studied in this work. A local task-based detectability framework to optimize the selection of parameters for LSC methods was developed. The framework takes into account the potentially shift-variant and anisotropic spatial resolution and noise properties to maximize the imaging performance of the CT system. Optimal parameters for a given LSC method depend strongly on the spatial location within the image object.
© 2018 American Association of Physicists in Medicine.
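The NPW detectability index used as the optimization metric above can be computed numerically once the task function, MTF, and NPS are known. In the sketch below, the anisotropic MTF, streak-like NPS, and bar-pattern task spectra are invented shapes chosen only to reproduce the qualitative interplay the authors describe (a task aligned with the dominant noise direction scores differently from one aligned with the resolution loss), not measured CT data:

```python
import numpy as np

def d_prime_npw(W2, MTF, NPS, df):
    """NPW observer: template = expected signal; d'^2 = (sum W^2 MTF^2 df)^2 / sum W^2 MTF^2 NPS df."""
    num = (np.sum(W2 * MTF**2) * df) ** 2
    den = np.sum(W2 * MTF**2 * NPS) * df
    return float(np.sqrt(num / den))

f = np.linspace(-1.0, 1.0, 129)                  # spatial frequency axis (cycles/mm)
fx, fy = np.meshgrid(f, f)
df = (f[1] - f[0]) ** 2                          # 2-D frequency-bin area

MTF = np.exp(-2.0 * fx**2 - 6.0 * fy**2)         # hypothetical anisotropic blur
NPS = 0.2 + np.exp(-8.0 * (np.abs(fx) - 0.5) ** 2)  # hypothetical streak noise along fx

def bar_task(u, v, f0):                          # |W|^2 of a bar pattern at frequency f0
    return np.exp(-40 * ((u - f0)**2 + v**2)) + np.exp(-40 * ((u + f0)**2 + v**2))

d_v = d_prime_npw(bar_task(fx, fy, 0.5), MTF, NPS, df)   # bars varying along x
d_h = d_prime_npw(bar_task(fy, fx, 0.5), MTF, NPS, df)   # bars varying along y
print(d_v, d_h)
```

Sweeping such a computation over a sampled parameter grid is what produces the measured d' maps the authors maximize.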
Reinventing Image Detective: An Evidence-Based Approach to Citizen Science Online
NASA Astrophysics Data System (ADS)
Romano, C.; Graff, P. V.; Runco, S.
2017-12-01
Usability studies demonstrate that web users are notoriously impatient, spending as little as 15 seconds on a home page. How do you get users to stay long enough to understand a citizen science project? How do you get users to complete complex citizen science tasks online? Image Detective, a citizen science project originally developed by scientists and science engagement specialists at the NASA Johnson Space Center to engage the public in the analysis of images taken from space by astronauts and to help enhance NASA's online database of astronaut imagery, partnered with the CosmoQuest citizen science platform to modernize the project, offering new and improved options for participation. The challenge: to create a web interface that builds users' skills and knowledge, creating engagement while users learn complex concepts essential to the accurate completion of tasks. The project team turned to usability testing for an objective understanding of how users perceived Image Detective and the steps required to complete required tasks. A group of six users was recruited online for unmoderated initial testing. The users followed a think-aloud protocol while attempting tasks, and were recorded on video and audio.
The usability test examined users' perception of four broad areas: the purpose of and context for Image Detective; the steps required to successfully complete the analysis (differentiating images of Earth's surface from those showing outer space and identifying common surface features); locating the image center point on a map of Earth; and finally, naming geographic locations or natural events seen in the image. Usability test findings demonstrated that the following best practices can increase participation in Image Detective and can be applied to the successful implementation of any citizen science project:
• Concise explanation of the project, its context, and its purpose;
• Including a mention of the funding agency (in this case, NASA);
• A preview of the specific tasks required of participants;
• A dedicated user interface for the actual citizen science interaction.
In addition, testing revealed that users may require additional context when a task is complex, difficult, or unusual (locating a specific image and its center point on a map of Earth). Video evidence will be made available with this presentation.
Functional MRI Preprocessing in Lesioned Brains: Manual Versus Automated Region of Interest Analysis
Garrison, Kathleen A.; Rogalsky, Corianne; Sheng, Tong; Liu, Brent; Damasio, Hanna; Winstein, Carolee J.; Aziz-Zadeh, Lisa S.
2015-01-01
Functional magnetic resonance imaging (fMRI) has significant potential in the study and treatment of neurological disorders and stroke. Region of interest (ROI) analysis in such studies allows for testing of strong a priori clinical hypotheses with improved statistical power. A commonly used automated approach to ROI analysis is to spatially normalize each participant’s structural brain image to a template brain image and define ROIs using an atlas. However, in studies of individuals with structural brain lesions, such as stroke, the gold standard approach may be to manually hand-draw ROIs on each participant’s non-normalized structural brain image. Automated approaches to ROI analysis are faster and more standardized, yet are susceptible to preprocessing error (e.g., normalization error) that can be greater in lesioned brains. The manual approach to ROI analysis has high demand for time and expertise, but may provide a more accurate estimate of brain response. In this study, commonly used automated and manual approaches to ROI analysis were directly compared by reanalyzing data from a previously published hypothesis-driven cognitive fMRI study, involving individuals with stroke. The ROI evaluated is the pars opercularis of the inferior frontal gyrus. Significant differences were identified in task-related effect size and percent-activated voxels in this ROI between the automated and manual approaches to ROI analysis. Task interactions, however, were consistent across ROI analysis approaches. These findings support the use of automated approaches to ROI analysis in studies of lesioned brains, provided they employ a task interaction design. PMID:26441816
1975-08-01
image analysis and processing tasks such as information extraction, image enhancement and restoration, coding, etc. The ultimate objective of this research is to form a basis for the development of technology relevant to military applications of machine extraction of information from aircraft and satellite imagery of the Earth's surface. This report discusses research activities during the three-month period February 1 - April 30,
NASA Astrophysics Data System (ADS)
Wen, Gezheng; Markey, Mia K.
2015-03-01
It is resource-intensive to conduct human studies for task-based assessment of medical image quality and system optimization. Thus, numerical model observers have been developed as a surrogate for human observers. The Hotelling observer (HO) is the optimal linear observer for signal-detection tasks, but the high dimensionality of imaging data results in a heavy computational burden. Channelization is often used to approximate the HO through a dimensionality reduction step, but how to produce channelized images without losing significant image information remains a key challenge. Kernel local Fisher discriminant analysis (KLFDA) uses kernel techniques to perform supervised dimensionality reduction, which finds an embedding transformation that maximizes between-class separability and preserves within-class local structure in the low-dimensional manifold. It is powerful for classification tasks, especially when the distribution of a class is multimodal. Such multimodality could be observed in many practical clinical tasks. For example, primary and metastatic lesions may both appear in medical imaging studies, but the distributions of their typical characteristics (e.g., size) may be very different. In this study, we propose to use KLFDA as a novel channelization method. The dimension of the embedded manifold (i.e., the result of KLFDA) is a counterpart to the number of channels in state-of-the-art linear channelization. We present a simulation study to demonstrate the potential usefulness of KLFDA for building channelized HOs (CHOs) and generating reliable decision statistics for clinical tasks. We show that the performance of the CHO with KLFDA channels is comparable to that of the benchmark CHOs.
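The channelized Hotelling observer that KLFDA is meant to feed can be written in a few lines once a channel matrix is fixed. The sketch below uses simple Gaussian channels rather than KLFDA embeddings, and a synthetic Gaussian lesion in white noise; all sizes and amplitudes are hypothetical choices, not values from the study:

```python
import numpy as np

def gaussian_channels(size, widths):
    """Radially symmetric Gaussian channel profiles, one column per channel."""
    yy, xx = np.mgrid[:size, :size] - (size - 1) / 2.0
    r2 = xx**2 + yy**2
    U = np.stack([np.exp(-r2 / (2.0 * w**2)).ravel() for w in widths], axis=1)
    return U / np.linalg.norm(U, axis=0)

def cho_dprime(imgs_absent, imgs_present, U):
    """Channelized Hotelling observer detectability from two image classes."""
    vA = imgs_absent.reshape(len(imgs_absent), -1) @ U    # channel outputs
    vP = imgs_present.reshape(len(imgs_present), -1) @ U
    dg = vP.mean(axis=0) - vA.mean(axis=0)
    K = 0.5 * (np.cov(vA, rowvar=False) + np.cov(vP, rowvar=False))
    return float(np.sqrt(dg @ np.linalg.solve(K, dg)))    # Hotelling SNR

rng = np.random.default_rng(0)
size, n = 32, 400
yy, xx = np.mgrid[:size, :size] - (size - 1) / 2.0
signal = 0.4 * np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))    # faint Gaussian lesion

U = gaussian_channels(size, widths=[1, 2, 4, 8])
dprime = cho_dprime(rng.standard_normal((n, size, size)),
                    rng.standard_normal((n, size, size)) + signal, U)
print(dprime)
```

Swapping the channel matrix U for a learned embedding is the slot where a KLFDA-style channelization would plug in.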
NASA Astrophysics Data System (ADS)
Pinti, Paola; Cardone, Daniela; Merla, Arcangelo
2015-12-01
Functional Near Infrared-Spectroscopy (fNIRS) represents a powerful tool to non-invasively study task-evoked brain activity. fNIRS assessment of cortical activity may suffer from contamination by physiological noise of different origins (e.g. heart beat, respiration, blood pressure, skin blood flow), both task-evoked and spontaneous. Spontaneous changes occur at different time scales and, even if they are not directly elicited by tasks, their amplitude may be task-modulated. In this study, concentration changes of hemoglobin were recorded over the prefrontal cortex while simultaneously recording the facial temperature variations of the participants through functional infrared thermal (fIR) imaging. fIR imaging provides touch-less estimation of the thermal expression of peripheral autonomic activity. Wavelet analysis revealed task-modulation of the very low frequency (VLF) components of both fNIRS and fIR signals and strong coherence between them. Our results indicate that cognitive and autonomic activities are intimately linked and that the VLF component of the fNIRS signal is affected by the autonomic activity elicited by the cognitive task. Moreover, we showed that task-modulated changes in vascular tone occur both superficially and at larger depth in the brain. Combined use of fNIRS and fIR imaging can effectively quantify the impact of VLF autonomic activity on the fNIRS signals.
Yagahara, Ayako; Yokooka, Yuki; Jiang, Guoqian; Tsuji, Shintarou; Fukuda, Akihisa; Nishimoto, Naoki; Kurowarabi, Kunio; Ogasawara, Katsuhiko
2018-03-01
Describing complex mammography examination processes is important for improving the quality of mammograms. It is often difficult for experienced radiologic technologists to explain the process because their techniques depend on their experience and intuition. In our previous study, we analyzed the process using a new bottom-up hierarchical task analysis and identified key components of the process. Leveraging the results of the previous study, the purpose of this study was to construct a mammographic examination process ontology to formally describe the relationships between the process and image evaluation criteria to improve the quality of mammograms. First, we identified and created root classes: task, plan, and clinical image evaluation (CIE). Second, we described an "is-a" relation referring to the result of the previous study and the structure of the CIE. Third, the procedural steps in the ontology were described using the new properties: "isPerformedBefore," "isPerformedAfter," and "isPerformedAfterIfNecessary." Finally, the relationships between tasks and CIEs were described using the "isAffectedBy" property to represent the influence of the process on image quality. In total, there were 219 classes in the ontology. By introducing new properties related to the process flow, a sophisticated mammography examination process could be visualized. In relationships between tasks and CIEs, it became clear that the tasks affecting the evaluation criteria related to positioning were greater in number than those for image quality. We developed a mammographic examination process ontology that makes knowledge explicit for a comprehensive mammography process. Our research will support education and help promote knowledge sharing about mammography examination expertise.
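The ontology itself is expressed in OWL-style classes, but its flow properties ("isPerformedBefore", "isAffectedBy") can be rendered as plain data to show how a workflow is linearized and linked to evaluation criteria. The task and criterion names below are invented placeholders, not any of the 219 real classes:

```python
# Invented placeholder tasks; the real ontology has 219 classes.
tasks = {
    "ReceivePatient": {"isPerformedBefore": ["PositionBreast"]},
    "PositionBreast": {"isPerformedBefore": ["SetCompression"]},
    "SetCompression": {"isPerformedBefore": ["ExposeImage"]},
    "ExposeImage":    {"isPerformedBefore": []},
}
# isAffectedBy: which tasks influence each clinical-image-evaluation criterion
cie = {
    "PositioningCriterion":  {"isAffectedBy": ["PositionBreast", "SetCompression"]},
    "ImageQualityCriterion": {"isAffectedBy": ["ExposeImage"]},
}

def ordered(tasks):
    """Linearise the workflow by following isPerformedBefore links (topological walk)."""
    preds = {t: set() for t in tasks}
    for t, props in tasks.items():
        for nxt in props["isPerformedBefore"]:
            preds[nxt].add(t)
    out, ready = [], [t for t in tasks if not preds[t]]
    while ready:
        t = ready.pop()
        out.append(t)
        for nxt in tasks[t]["isPerformedBefore"]:
            preds[nxt].discard(t)
            if not preds[nxt]:
                ready.append(nxt)
    return out

print(ordered(tasks))   # the examination flow, start to finish
```

Queries against the "isAffectedBy" links are what connect a step in the flow back to the image-quality criteria it can degrade.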
Deep learning for tumor classification in imaging mass spectrometry.
Behrmann, Jens; Etmann, Christian; Boskamp, Tobias; Casadonte, Rita; Kriegsmann, Jörg; Maaß, Peter
2018-04-01
Tumor classification using imaging mass spectrometry (IMS) data has a high potential for future applications in pathology. Due to the complexity and size of the data, automated feature extraction and classification steps are required to fully process the data. Since mass spectra exhibit certain structural similarities to image data, deep learning may offer a promising strategy for classification of IMS data as it has been successfully applied to image classification. Methodologically, we propose an adapted architecture based on deep convolutional networks to handle the characteristics of mass spectrometry data, as well as a strategy to interpret the learned model in the spectral domain based on a sensitivity analysis. The proposed methods are evaluated on two algorithmically challenging tumor classification tasks and compared to a baseline approach. Competitiveness of the proposed methods is shown on both tasks by studying the performance via cross-validation. Moreover, the learned models are analyzed by the proposed sensitivity analysis revealing biologically plausible effects as well as confounding factors of the considered tasks. Thus, this study may serve as a starting point for further development of deep learning approaches in IMS classification tasks. https://gitlab.informatik.uni-bremen.de/digipath/Deep_Learning_for_Tumor_Classification_in_IMS. jbehrmann@uni-bremen.de or christianetmann@uni-bremen.de. Supplementary data are available at Bioinformatics online.
High performance computing environment for multidimensional image analysis
Rao, A Ravishankar; Cecchi, Guillermo A; Magnasco, Marcelo
2007-01-01
Background The processing of images acquired through microscopy is a challenging task due to the large size of datasets (several gigabytes) and the fast turnaround time required. If the throughput of the image processing stage is significantly increased, it can have a major impact in microscopy applications. Results We present a high performance computing (HPC) solution to this problem. This involves decomposing the spatial 3D image into segments that are assigned to unique processors, and matched to the 3D torus architecture of the IBM Blue Gene/L machine. Communication between segments is restricted to the nearest neighbors. When running on a 2 GHz Intel CPU, the task of 3D median filtering on a typical 256 megabyte dataset takes two and a half hours, whereas by using 1024 nodes of Blue Gene, this task can be performed in 18.8 seconds, a 478× speedup. Conclusion Our parallel solution dramatically improves the performance of image processing, feature extraction and 3D reconstruction tasks. This increased throughput permits biologists to conduct unprecedented large scale experiments with massive datasets. PMID:17634099
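The decomposition with nearest-neighbor communication can be imitated on a single machine: split the volume along one axis, give each worker its segment plus a one-voxel halo from its neighbors, and filter segments concurrently. A small sketch with a naive 3×3×3 median and threads standing in for Blue Gene nodes (the slab counts and volume size are arbitrary):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def median3(vol):
    """Naive 3x3x3 median filter; boundary voxels are left unfiltered."""
    z, y, x = vol.shape
    shifts = [vol[1 + dz:z - 1 + dz, 1 + dy:y - 1 + dy, 1 + dx:x - 1 + dx]
              for dz in (-1, 0, 1) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    out = vol.copy()
    out[1:-1, 1:-1, 1:-1] = np.median(np.stack(shifts), axis=0)
    return out

def median3_slabs(vol, n_slabs=4, workers=4):
    """Decompose along z into slabs with one-voxel halos; filter slabs concurrently."""
    z = vol.shape[0]
    edges = np.linspace(0, z, n_slabs + 1, dtype=int)
    def work(i):
        lo, hi = edges[i], edges[i + 1]
        a, b = max(lo - 1, 0), min(hi + 1, z)   # halo "exchange" with neighbours
        filt = median3(vol[a:b])
        return filt[lo - a : filt.shape[0] - (b - hi)]
    with ThreadPoolExecutor(workers) as ex:
        return np.concatenate(list(ex.map(work, range(n_slabs))))

rng = np.random.default_rng(0)
vol = rng.standard_normal((20, 12, 12))
print(np.allclose(median3(vol), median3_slabs(vol)))   # True: slabs reproduce the serial result
```

The one-voxel halo is the serial analogue of the nearest-neighbor message passing on the torus: each segment only ever needs its adjacent planes.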
NASA Astrophysics Data System (ADS)
Platisa, Ljiljana; Vansteenkiste, Ewout; Goossens, Bart; Marchessoux, Cédric; Kimpe, Tom; Philips, Wilfried
2009-02-01
Medical-imaging systems are designed to aid medical specialists in a specific task. Therefore, the physical parameters of a system need to optimize the task performance of a human observer. This requires measurements of human performance in a given task during the system optimization. Typically, psychophysical studies are conducted for this purpose. Numerical observer models have been successfully used to predict human performance in several detection tasks. In particular, the task of signal detection using a channelized Hotelling observer (CHO) in simulated images has been widely explored. However, few studies have used clinically acquired images that also contain anatomic noise. In this paper, we investigate the performance of a CHO in the task of detecting lung nodules in real radiographic images of the chest. To evaluate variability introduced by the limited available data, we employ a commonly used multi-reader multi-case (MRMC) study design, which accounts for both case and reader variability. Finally, we use the "one-shot" method to estimate the MRMC variance of the area under the ROC curve (AUC). The obtained AUC compares well with values reported for a human-observer study on a similar data set. Furthermore, the "one-shot" analysis implies a fairly consistent performance of the CHO, with the variance of AUC below 0.002. This indicates promising potential for numerical observers in the optimization of medical imaging displays and encourages further investigation on the subject.
A new multi-spectral feature level image fusion method for human interpretation
NASA Astrophysics Data System (ADS)
Leviner, Marom; Maltz, Masha
2009-03-01
Various methods to perform multi-spectral image fusion have been suggested, mostly at the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature-level processing paradigm. To test our method, we compared human observer performance in a three-task experiment using MSSF against two established methods: averaging and principal component analysis (PCA), and against its two source bands, visible and infrared. The three tasks that we studied were: (1) simple target detection, (2) spatial orientation, and (3) camouflaged target detection. MSSF proved superior to the other fusion methods in all three tests; MSSF also outperformed the source images in the spatial orientation and camouflaged target detection tasks. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general, and specific fusion methods in particular, would be superior to using the original image sources can be further addressed.
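The two pixel-level baselines the authors compare against are straightforward: per-pixel averaging, and a PCA weighting derived from the joint covariance of the two bands. A minimal sketch with random stand-in bands (the feature-level MSSF method itself is not reproduced here):

```python
import numpy as np

def fuse_average(a, b):
    """Pixel-level averaging fusion."""
    return 0.5 * (a + b)

def fuse_pca(a, b):
    """Weight each band by the leading principal component of the joint covariance."""
    cov = np.cov(np.stack([a.ravel(), b.ravel()]))
    _, vecs = np.linalg.eigh(cov)          # eigh returns ascending eigenvalues
    w = np.abs(vecs[:, -1])
    w = w / w.sum()
    return w[0] * a + w[1] * b

rng = np.random.default_rng(0)
visible = rng.random((64, 64))             # stand-in visible band
infrared = 0.2 * rng.random((64, 64))      # stand-in IR band, lower variance
fused = fuse_pca(visible, infrared)
print(fused.shape)
```

PCA fusion tilts the weights toward the higher-variance band, which is why it can behave very differently from plain averaging on bands of unequal contrast.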
NASA Astrophysics Data System (ADS)
Gaonkar, Bilwaj; Hovda, David; Martin, Neil; Macyszyn, Luke
2016-03-01
Deep learning refers to a large set of neural network-based algorithms that have emerged as promising machine-learning tools in the general imaging and computer vision domains. Convolutional neural networks (CNNs), a specific class of deep learning algorithms, have been extremely effective in object recognition and localization in natural images. A characteristic feature of CNNs is the use of a locally connected multi-layer topology that is inspired by the animal visual cortex (the most powerful vision system in existence). While CNNs perform admirably in object identification and localization tasks, they typically require training on extremely large datasets. Unfortunately, in medical image analysis, large datasets are either unavailable or are extremely expensive to obtain. Further, the primary tasks in medical imaging are organ identification and segmentation from 3D scans, which are different from the standard computer vision tasks of object recognition. Thus, in order to translate the advantages of deep learning to medical image analysis, there is a need to develop deep network topologies and training methodologies that are geared towards medical imaging related tasks and can work in a setting where dataset sizes are relatively small. In this paper, we present a technique for stacked supervised training of deep feed-forward neural networks for segmenting organs from medical scans. Each `neural network layer' in the stack is trained to identify a sub-region of the original image that contains the organ of interest. By layering several such stacks together a very deep neural network is constructed. Such a network can be used to identify extremely small regions of interest in extremely large images, in spite of a lack of clear contrast in the signal or easily identifiable shape characteristics. What is even more intriguing is that the network stack achieves accurate segmentation even when it is trained on a single image with manually labelled ground truth.
We validate this approach using a publicly available head and neck CT dataset. We also show that a deep neural network of similar depth, if trained directly using backpropagation, cannot achieve the tasks accomplished using our layer-wise training paradigm.
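The cascade idea in this abstract, where each stage restricts attention to a subregion containing the target, can be illustrated with a deliberately simple, non-neural sketch. The brightest-region heuristic below is a hypothetical stand-in for each trained stage, not the authors' network:

```python
import numpy as np

def narrow(image, box, factor=2):
    """One coarse-to-fine stage: inside `box`, find the brightest
    pixel and return a window `factor`x smaller centred on it."""
    r0, r1, c0, c1 = box
    sub = image[r0:r1, c0:c1]
    r, c = np.unravel_index(np.argmax(sub), sub.shape)
    r, c = r + r0, c + c0
    h, w = (r1 - r0) // factor, (c1 - c0) // factor
    nr0 = min(max(r - h // 2, 0), image.shape[0] - h)
    nc0 = min(max(c - w // 2, 0), image.shape[1] - w)
    return (nr0, nr0 + h, nc0, nc0 + w)

# Toy 64x64 "scan" with a small bright "organ" near (40, 12)
img = np.zeros((64, 64))
img[39:42, 11:14] = 1.0
box = (0, 64, 0, 64)
for _ in range(3):            # three stacked stages
    box = narrow(img, box)
print(box)                    # a small window containing the bright spot
```

A trained stack replaces the argmax heuristic with a learned per-stage classifier, but the control flow is the same: predict a subregion, crop, repeat, so the final stage sees a tiny region of an originally huge image.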
Deep Learning for Classification of Colorectal Polyps on Whole-slide Images
Korbar, Bruno; Olofson, Andrea M.; Miraflor, Allen P.; Nicka, Catherine M.; Suriawinata, Matthew A.; Torresani, Lorenzo; Suriawinata, Arief A.; Hassanpour, Saeed
2017-01-01
Context: Histopathological characterization of colorectal polyps is critical for determining the risk of colorectal cancer and future rates of surveillance for patients. However, this characterization is a challenging task and suffers from significant inter- and intra-observer variability. Aims: We built an automatic image analysis method that can accurately classify different types of colorectal polyps on whole-slide images to help pathologists with this characterization and diagnosis. Setting and Design: Our method is based on deep-learning techniques, which rely on numerous levels of abstraction for data representation and have shown state-of-the-art results for various image analysis tasks. Subjects and Methods: Our method covers five common types of polyps (i.e., hyperplastic, sessile serrated, traditional serrated, tubular, and tubulovillous/villous) that are included in the US Multisociety Task Force guidelines for colorectal cancer risk assessment and surveillance. We developed multiple deep-learning approaches by leveraging a dataset of 2074 crop images, which were annotated by multiple domain expert pathologists as reference standards. Statistical Analysis: We evaluated our method on an independent test set of 239 whole-slide images and measured standard machine-learning evaluation metrics of accuracy, precision, recall, and F1 score and their 95% confidence intervals. Results: Our evaluation shows that our method with residual network architecture achieves the best performance for classification of colorectal polyps on whole-slide images (overall accuracy: 93.0%, 95% confidence interval: 89.0%–95.9%). Conclusions: Our method can reduce the cognitive burden on pathologists and improve their efficacy in histopathological characterization of colorectal polyps and in subsequent risk assessment and follow-up recommendations. PMID:28828201
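The reported confidence interval can be reproduced approximately from the headline numbers. A minimal sketch, assuming 222 of 239 test slides were classified correctly (a hypothetical count chosen to match the reported 93.0% accuracy) and using the Wilson score interval (the paper does not state which interval method was used):

```python
import math

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for a proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

k, n = 222, 239                     # hypothetical correct/total counts
lo, hi = wilson_ci(k, n)
print(f"accuracy {k/n:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

With these assumed counts the interval comes out close to the paper's reported [0.890, 0.959], which suggests the headline figures are internally consistent.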
MIA - A free and open source software for gray scale medical image analysis
2013-01-01
Background Gray scale images make up the bulk of data in bio-medical image analysis, and hence the main focus of many image processing tasks lies in the processing of these monochrome images. With ever-improving acquisition devices, spatial and temporal image resolution increases, and data sets become very large. Various image processing frameworks exist that make the development of new algorithms easy by using high-level programming languages or visual programming. These frameworks are also accessible to researchers who have little or no background in software development, because they take care of otherwise complex tasks. Specifically, the management of working memory is handled automatically, usually at the price of requiring more of it. As a result, processing large data sets with these tools becomes increasingly difficult on workstation-class computers. One alternative to using these high-level processing tools is the development of new algorithms in a language like C++, which gives the developer full control over how memory is handled; however, the resulting workflow for prototyping new algorithms is rather time intensive and not appropriate for a researcher with little or no knowledge of software development. Another alternative is using command line tools that run image processing tasks, use the hard disk to store intermediate results, and provide automation through shell scripts. Although not as convenient as, e.g., visual programming, this approach is still accessible to researchers without a background in computer science. However, only few tools exist that provide this kind of processing interface; they are usually quite task specific, and they do not offer a clear path from a prototype shell script to a new command line tool.
Results The proposed framework, MIA, provides a combination of command line tools, plug-ins, and libraries that make it possible to run image processing tasks interactively in a command shell and to prototype using the corresponding shell scripting language. Since the hard disk serves as temporary storage, memory management is usually a non-issue in the prototyping phase. By using string-based descriptions for filters, optimizers, and the like, the transition from shell scripts to full-fledged programs implemented in C++ is also made easy. In addition, its design based on atomic plug-ins and single-task command line tools makes it easy to extend MIA, usually without the requirement to touch or recompile existing code. Conclusion In this article, we describe the general design of MIA, a general-purpose framework for gray scale image processing. We demonstrate the applicability of the software with example applications from three different research scenarios, namely motion compensation in myocardial perfusion imaging, the processing of high resolution image data that arises in virtual anthropology, and retrospective analysis of treatment outcome in orthognathic surgery. With MIA, prototyping algorithms using shell scripts that combine small, single-task command line tools is a viable alternative to the use of high-level languages, an approach that is especially useful when large data sets need to be processed. PMID:24119305
MIA - A free and open source software for gray scale medical image analysis.
Wollny, Gert; Kellman, Peter; Ledesma-Carbayo, María-Jesus; Skinner, Matthew M; Hublin, Jean-Jaques; Hierl, Thomas
2013-10-11
Effects of task and image properties on visual-attention deployment in image-quality assessment
NASA Astrophysics Data System (ADS)
Alers, Hani; Redi, Judith; Liu, Hantao; Heynderickx, Ingrid
2015-03-01
It is important to understand how humans view images and how their behavior is affected by changes in the properties of the viewed images and the task they are given, particularly the task of scoring image quality (IQ). This is a complex behavior that holds great importance for the field of image-quality research. This work builds upon 4 years of research spanning three databases studying image-viewing behavior. Using eye-tracking equipment, it was possible to collect information on human viewing behavior for different kinds of stimuli and under different experimental settings. This work performs a cross-analysis of the results from all these databases using state-of-the-art similarity measures. The results strongly show that asking viewers to score the IQ significantly changes their viewing behavior. Muting the color saturation also seems to affect the saliency of the images. However, a change in IQ was not consistently found to modify visual attention deployment, neither under free looking nor during scoring. These results are helpful in gaining a better understanding of image-viewing behavior under different conditions. They also have important implications for work that collects subjective image-quality scores from human observers.
Discriminative Nonlinear Analysis Operator Learning: When Cosparse Model Meets Image Classification.
Wen, Zaidao; Hou, Biao; Jiao, Licheng
2017-05-03
The linear-synthesis-model-based dictionary learning framework has achieved remarkable performance in image classification over the last decade. Behaving as a generative feature model, however, it suffers from some intrinsic deficiencies. In this paper, we propose a novel parametric nonlinear analysis cosparse model (NACM) with which a unique feature vector can be extracted much more efficiently. Additionally, we demonstrate that NACM is capable of simultaneously learning the task-adapted feature transformation and a regularization that encodes our preferences, domain prior knowledge, and task-oriented supervised information into the features. The proposed NACM is devoted to the classification task as a discriminative feature model and yields a novel discriminative nonlinear analysis operator learning framework (DNAOL). Theoretical analysis and experimental results clearly demonstrate that DNAOL not only achieves better, or at least competitive, classification accuracies relative to state-of-the-art algorithms, but also dramatically reduces time complexity in both the training and testing phases.
Random forest regression for magnetic resonance image synthesis.
Jog, Amod; Carass, Aaron; Roy, Snehashis; Pham, Dzung L; Prince, Jerry L
2017-01-01
By choosing different pulse sequences and their parameters, magnetic resonance imaging (MRI) can generate a large variety of tissue contrasts. This very flexibility, however, can yield inconsistencies with MRI acquisitions across datasets or scanning sessions that can in turn cause inconsistent automated image analysis. Although image synthesis of MR images has been shown to be helpful in addressing this problem, an inability to synthesize both T2-weighted brain images that include the skull and FLuid Attenuated Inversion Recovery (FLAIR) images has been reported. The method described herein, called REPLICA, addresses these limitations. REPLICA is a supervised random forest image synthesis approach that learns a nonlinear regression to predict intensities of alternate tissue contrasts given specific input tissue contrasts. Experimental results include direct image comparisons between synthetic and real images, results from image analysis tasks on both synthetic and real images, and comparison against other state-of-the-art image synthesis methods. REPLICA is computationally fast, and is shown to be comparable to other methods on tasks they are able to perform. Additionally, REPLICA has the capability to synthesize both T2-weighted images of the full head and FLAIR images, and to perform intensity standardization between different imaging datasets. Copyright © 2016 Elsevier B.V. All rights reserved.
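REPLICA's core idea, learning a nonlinear regression from input-contrast patch features to output-contrast intensities, can be sketched with a toy bagged ensemble. This is a stand-in, not the REPLICA implementation: real random forests grow deep trees over rich multi-scale patch features, whereas here each "tree" is a single-split stump over three synthetic features:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_stump(X, y):
    """Best axis-aligned threshold split minimising squared error."""
    best_err, best = np.inf, None
    for j in range(X.shape[1]):
        for t in np.quantile(X[:, j], [0.25, 0.5, 0.75]):
            left = X[:, j] <= t
            if not left.any() or left.all():
                continue
            lm, rm = y[left].mean(), y[~left].mean()
            err = ((y[left] - lm) ** 2).sum() + ((y[~left] - rm) ** 2).sum()
            if err < best_err:
                best_err, best = err, (j, t, lm, rm)
    return best

def predict_stump(stump, X):
    j, t, lm, rm = stump
    return np.where(X[:, j] <= t, lm, rm)

def fit_forest(X, y, n_trees=20):
    """Bagged stumps: each learner sees a bootstrap resample."""
    return [fit_stump(X[idx], y[idx])
            for idx in (rng.integers(0, len(X), len(X))
                        for _ in range(n_trees))]

# Synthetic "contrast mapping": target intensity is a nonlinear
# (piecewise) function of the first patch feature.
X = rng.uniform(0, 1, (500, 3))          # 3 patch features per voxel
y = np.where(X[:, 0] > 0.5, 1.0, 0.2)    # piecewise "tissue contrast"
forest = fit_forest(X, y)
pred = np.mean([predict_stump(s, X) for s in forest], axis=0)
mae = np.abs(pred - y).mean()
print(mae)                               # small mean absolute error
```

The ensemble average smooths the individual stumps' threshold errors, the same mechanism that lets a full random forest approximate smooth intensity mappings between contrasts.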
Independent component model for cognitive functions of multiple subjects using [15O]H2O PET images.
Park, Hae-Jeong; Kim, Jae-Jin; Youn, Tak; Lee, Dong Soo; Lee, Myung Chul; Kwon, Jun Soo
2003-04-01
An independent component model of multiple subjects' positron emission tomography (PET) images is proposed to explore the overall functional components involved in a task and to explain subject-specific variations of metabolic activities under altered experimental conditions, utilizing the independent component analysis (ICA) concept. As PET images represent time-compressed activities of several cognitive components, we derived a mathematical model to decompose functional components from cross-sectional images based on two fundamental hypotheses: (1) all subjects share basic functional components that are common to subjects and spatially independent of each other in relation to the given experimental task, and (2) all subjects share common functional components throughout tasks which are also spatially independent. The variations of hemodynamic activities according to subjects or tasks can be explained by the variations in the usage weight of the functional components. We investigated the plausibility of the model using serial cognitive experiments of simple object perception, object recognition, two-back working memory, and divided attention of a syntactic process. We found that the independent component model satisfactorily explained the functional components involved in the task and discuss here the application of ICA in multiple subjects' PET images to explore the functional association of brain activations. Copyright 2003 Wiley-Liss, Inc.
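The mixing model this abstract describes, in which each subject's image is a weighted sum of spatially independent component maps, can be written down directly. In this hedged sketch the component maps are known and only the subject-specific usage weights are recovered by least squares; actual ICA estimates the maps themselves blindly:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two spatially independent "functional components" over 1000 voxels:
# sparse, non-overlapping activation maps.
comp = np.zeros((2, 1000))
comp[0, :100] = 1.0       # component A: one set of regions
comp[1, 500:650] = 1.0    # component B: another set

# Each subject's image = subject-specific usage weights x components
weights = np.array([[1.0, 0.3],   # subject 1 relies mostly on A
                    [0.4, 1.2],   # subject 2 relies mostly on B
                    [0.8, 0.8]])  # subject 3 uses both
images = weights @ comp + 0.01 * rng.standard_normal((3, 1000))

# With the (here, known) component maps, usage weights per subject
# follow from least squares: W = argmin ||images - W comp||^2
W_hat, *_ = np.linalg.lstsq(comp.T, images.T, rcond=None)
print(np.round(W_hat.T, 2))   # close to the weights defined above
```

The variations the paper attributes to subjects or tasks correspond here to variations in the rows of the weight matrix, while the component maps stay fixed.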
1980-02-01
AD-A082 342, University of Oklahoma, Norman, College of Education. Task Analysis Schema Based on Cognitive Style and Supplantation. Ausburn, F. B. [Only fragments of this scanned report record survive; the recoverable text concerns tasks requiring kinesthetic or tactile stimuli, a visual/haptic dimension (preference for kinesthetic stimuli; the ability to transform kinesthetic stimuli into visual images and to learn directly from tactile or kinesthetic impressions), and field independence/dependence.]
Flightspeed Integral Image Analysis Toolkit
NASA Technical Reports Server (NTRS)
Thompson, David R.
2009-01-01
The Flightspeed Integral Image Analysis Toolkit (FIIAT) is a C library that provides image analysis functions in a single, portable package. It provides basic low-level filtering, texture analysis, and subwindow descriptors for applications dealing with image interpretation and object recognition. Designed with spaceflight in mind, it addresses ease of integration (minimal external dependencies); fast, real-time operation using integer arithmetic where possible (useful for platforms lacking a dedicated floating-point processor); implementation entirely in C (easily modified); mostly static memory allocation; and 8-bit image data. The basic goal of the FIIAT library is to compute meaningful numerical descriptors for images or rectangular image regions. These n-vectors can then be used directly for novelty detection or pattern recognition, or as a feature space for higher-level pattern recognition tasks. The library provides routines for leveraging training data to derive descriptors that are most useful for a specific data set. Its runtime algorithms exploit a structure known as the "integral image." This is a caching method that permits fast summation of values within rectangular regions of an image, which in turn facilitates a wide range of fast image-processing functions. The toolkit is applicable to a wide range of autonomous image analysis tasks in the space-flight domain, including novelty detection, object and scene classification, target detection for autonomous instrument placement, and science analysis of geomorphology. It makes real-time texture and pattern recognition possible for platforms with severe computational restraints. The software provides an order-of-magnitude speed increase over alternative software libraries currently in use by the research community. Commercially, FIIAT can support intelligent video cameras used in surveillance, and it is also useful for object recognition by robots or other autonomous vehicles.
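The integral-image structure the abstract describes is easy to state concretely. FIIAT itself is written in C with integer arithmetic; the following Python sketch shows only the underlying data structure and its constant-time rectangle sum:

```python
import numpy as np

def integral_image(img):
    """Cumulative sums so any rectangle sum costs four lookups."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) via the integral image."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

img = np.arange(16, dtype=np.int64).reshape(4, 4)
ii = integral_image(img)
print(rect_sum(ii, 1, 1, 3, 3))   # 5 + 6 + 9 + 10 = 30
```

Because each rectangle sum is four lookups regardless of its size, descriptors built from many box sums (the "subwindow descriptors" above) stay cheap even on large images.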
NASA Astrophysics Data System (ADS)
Ikejimba, Lynda; Kiarashi, Nooshin; Lin, Yuan; Chen, Baiyu; Ghate, Sujata V.; Zerhouni, Moustafa; Samei, Ehsan; Lo, Joseph Y.
2012-03-01
Digital breast tomosynthesis (DBT) is a novel x-ray imaging technique that provides 3D structural information of the breast. In contrast to 2D mammography, DBT minimizes tissue overlap, potentially improving cancer detection and reducing the number of unnecessary recalls. The addition of a contrast agent to DBT and mammography for lesion enhancement has the benefit of providing functional information about a lesion, as lesion contrast uptake and washout patterns may help differentiate between benign and malignant tumors. This study used a task-based method to determine the optimal imaging approach by analyzing six imaging paradigms in terms of their ability to resolve iodine at a given dose: contrast-enhanced mammography and tomosynthesis, temporal subtraction mammography and tomosynthesis, and dual energy subtraction mammography and tomosynthesis. Imaging performance was characterized using a detectability index d', derived from the system task transfer function (TTF), an imaging task, iodine contrast, and the noise power spectrum (NPS). The task modeled a 5 mm lesion containing iodine concentrations between 2.1 mg/cc and 8.6 mg/cc. TTF was obtained using an edge phantom, and the NPS was measured over several exposure levels, energies, and target-filter combinations. Using a structured CIRS phantom, d' was generated as a function of dose and iodine concentration. In general, higher dose gave higher d', but for the lowest iodine concentration and lowest dose, dual energy subtraction tomosynthesis and temporal subtraction tomosynthesis demonstrated the highest performance.
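A detectability index combining TTF, task function, and NPS can be sketched in one dimension. This uses one common (prewhitening-observer) form, d'² = ∫ TTF²(f)·|W_task(f)|²/NPS(f) df, with entirely illustrative models for each term; the study's actual TTF and NPS were measured from phantoms, and its exact observer model is not stated here:

```python
import numpy as np

f = np.linspace(0.01, 5.0, 500)          # spatial frequency (cycles/mm)
df = f[1] - f[0]
ttf = np.exp(-f / 1.5)                   # illustrative task transfer function
nps = 1e-6 * (0.5 + np.exp(-f))          # illustrative noise power spectrum
w_task = np.abs(np.sinc(5.0 * f))        # crude 5 mm disc-like task function

ds = []
for contrast in (1.0, 2.0):              # e.g., doubling iodine concentration
    integrand = (ttf * contrast * w_task) ** 2 / nps
    ds.append(np.sqrt(np.sum(integrand) * df))
print(f"d' = {ds[0]:.1f}, doubled contrast: d' = {ds[1]:.1f}")
```

In this linear model d' scales directly with lesion contrast, which is why both higher iodine concentration and higher dose (a lower NPS) raise detectability, matching the trend reported above.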
Study on user interface of pathology picture archiving and communication system.
Kim, Dasueran; Kang, Peter; Yun, Jungmin; Park, Sung-Hye; Seo, Jeong-Wook; Park, Peom
2014-01-01
It is necessary to improve the pathology workflow. A workflow task analysis was performed using a pathology picture archiving and communication system (pathology PACS) in order to propose a user interface for the Pathology PACS considering user experience. An interface analysis of the Pathology PACS in Seoul National University Hospital and a task analysis of the pathology workflow were performed by observing recorded video. Based on obtained results, a user interface for the Pathology PACS was proposed. Hierarchical task analysis of Pathology PACS was classified into 17 tasks including 1) pre-operation, 2) text, 3) images, 4) medical record viewer, 5) screen transition, 6) pathology identification number input, 7) admission date input, 8) diagnosis doctor, 9) diagnosis code, 10) diagnosis, 11) pathology identification number check box, 12) presence or absence of images, 13) search, 14) clear, 15) Excel save, 16) search results, and 17) re-search. And frequently used menu items were identified and schematized. A user interface for the Pathology PACS considering user experience could be proposed as a preliminary step, and this study may contribute to the development of medical information systems based on user experience and usability.
Task-driven imaging in cone-beam computed tomography.
Gang, G J; Stayman, J W; Ouadah, S; Ehtiati, T; Siewerdsen, J H
Conventional workflow in interventional imaging often ignores a wealth of prior information about the patient anatomy and the imaging task. This work introduces a task-driven imaging framework that utilizes such information to prospectively design acquisition and reconstruction techniques for cone-beam CT (CBCT) in a manner that maximizes task-based performance in subsequent imaging procedures. The framework is employed in jointly optimizing tube current modulation, orbital tilt, and reconstruction parameters in filtered backprojection reconstruction for interventional imaging. Theoretical predictors of noise and resolution relate acquisition and reconstruction parameters to task-based detectability. Given a patient-specific prior image and a specification of the imaging task, an optimization algorithm prospectively identifies the combination of imaging parameters that maximizes task-based detectability. Initial investigations were performed for a variety of imaging tasks in an elliptical phantom and an anthropomorphic head phantom. Optimization of tube current modulation and view-dependent reconstruction kernel was shown to have the greatest benefit for a directional task (e.g., identification of device or tissue orientation). The task-driven approach yielded techniques in which the dose and sharp kernels were concentrated in the views contributing the most to the signal power associated with the imaging task. For example, detectability in a line-pair detection task was improved at least threefold compared to conventional approaches. For radially symmetric tasks, the task-driven strategy yielded results similar to a minimum-variance strategy in the absence of kernel modulation. Optimization of the orbital tilt successfully avoided highly attenuating structures that can confound the imaging task by introducing noise correlations masquerading at spatial frequencies of interest.
This work demonstrated the potential of a task-driven imaging framework to improve image quality and reduce dose beyond that achievable with conventional imaging approaches.
Zhimeng, Li; Chuan, He; Dishan, Qiu; Jin, Liu; Manhao, Ma
2013-01-01
Aiming at the imaging task scheduling problem for a high-altitude airship under emergency conditions, programming models are constructed by analyzing the main constraints, taking the maximum task benefit and the minimum energy consumption as the two optimization objectives. First, a hierarchical architecture is adopted to convert this scheduling problem into three subproblems, that is, task ranking, value task detecting, and energy conservation optimization. Then, algorithms are designed for the subproblems, and their results correspond to a feasible solution, an efficient solution, and an optimized solution of the original problem, respectively. This paper gives a detailed introduction to the energy-aware optimization strategy, which can rationally adjust the airship's cruising speed based on the distribution of task deadlines so as to decrease the total energy consumption caused by cruising activities. Finally, the application results and comparison analysis show that the proposed strategy and algorithm are effective and feasible. PMID:23864822
How do we watch images? A case of change detection and quality estimation
NASA Astrophysics Data System (ADS)
Radun, Jenni; Leisti, Tuomas; Virtanen, Toni; Nyman, Göte
2012-01-01
The most common tasks in subjective image estimation are change detection (a detection task) and image quality estimation (a preference task). We examined how the task influences gaze behavior by comparing detection and preference tasks. The eye movements of 16 naïve observers were recorded, with 8 observers in each task. The setting was a flicker paradigm, in which observers see a non-manipulated image, a manipulated version of the image, and again the non-manipulated image, and estimate the difference they perceived between them. The material was photographic, with different image distortions and contents. To examine the spatial distribution of fixations, we defined the regions of interest using a memory task and calculated information entropy to estimate how concentrated the fixations were on the image plane. The quality task was faster and needed fewer fixations, and the first eight fixations were more concentrated on certain image areas than in the change detection task. The bottom-up influences of the image also caused more variation in gaze behavior in the quality estimation task than in the change detection task. The results show that quality estimation is faster and that regions of interest are emphasized more in certain images compared with the change detection task, which is a scan task in which the whole image is always thoroughly examined. In conclusion, in subjective image estimation studies it is important to consider the task.
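Information entropy as a measure of how concentrated fixations are can be sketched directly. The grid binning below is a hypothetical choice for illustration; the study's own regions of interest came from a memory task:

```python
import math
from collections import Counter

def fixation_entropy(fixations, grid=4, size=100):
    """Shannon entropy (bits) of fixation counts binned on a
    grid x grid map over a size x size image plane.
    Low entropy = fixations concentrated on few regions."""
    cells = Counter((int(x * grid / size), int(y * grid / size))
                    for x, y in fixations)
    n = len(fixations)
    return -sum((c / n) * math.log2(c / n) for c in cells.values())

concentrated = [(10, 10)] * 8 + [(12, 11)] * 8     # one region only
spread = [(x, y) for x in (10, 35, 60, 85) for y in (10, 35, 60, 85)]
# concentrated gaze -> low entropy; evenly spread gaze -> high entropy
print(fixation_entropy(concentrated), fixation_entropy(spread))
```

Under this measure, the quality task's early fixations landing on a few salient areas would show lower entropy than the change detection task's full-image scanning.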
Large Margin Multi-Modal Multi-Task Feature Extraction for Image Classification.
Yong Luo; Yonggang Wen; Dacheng Tao; Jie Gui; Chao Xu
2016-01-01
The features used in many image analysis-based applications are frequently of very high dimension. Feature extraction offers several advantages in high-dimensional cases, and many recent studies have used multi-task feature extraction approaches, which often outperform single-task feature extraction approaches. However, most of these methods are limited in that they only consider data represented by a single type of feature, even though features usually represent images from multiple modalities. We, therefore, propose a novel large margin multi-modal multi-task feature extraction (LM3FE) framework for handling multi-modal features for image classification. In particular, LM3FE simultaneously learns the feature extraction matrix for each modality and the modality combination coefficients. In this way, LM3FE not only handles correlated and noisy features, but also utilizes the complementarity of different modalities to further help reduce feature redundancy in each modality. The large margin principle employed also helps to extract strongly predictive features, so that they are more suitable for prediction (e.g., classification). An alternating algorithm is developed for problem optimization, and each subproblem can be efficiently solved. Experiments on two challenging real-world image data sets demonstrate the effectiveness and superiority of the proposed method.
Cell nuclei and cytoplasm joint segmentation using the sliding band filter.
Quelhas, Pedro; Marcuzzo, Monica; Mendonça, Ana Maria; Campilho, Aurélio
2010-08-01
Microscopy cell image analysis is a fundamental tool for biological research. In particular, multivariate fluorescence microscopy is used to observe different aspects of cells in cultures. It is still common practice to perform analysis tasks by visual inspection of individual cells which is time consuming, exhausting and prone to induce subjective bias. This makes automatic cell image analysis essential for large scale, objective studies of cell cultures. Traditionally the task of automatic cell analysis is approached through the use of image segmentation methods for extraction of cells' locations and shapes. Image segmentation, although fundamental, is neither an easy task in computer vision nor is it robust to image quality changes. This makes image segmentation for cell detection semi-automated requiring frequent tuning of parameters. We introduce a new approach for cell detection and shape estimation in multivariate images based on the sliding band filter (SBF). This filter's design makes it adequate to detect overall convex shapes and as such it performs well for cell detection. Furthermore, the parameters involved are intuitive as they are directly related to the expected cell size. Using the SBF filter we detect cells' nucleus and cytoplasm location and shapes. Based on the assumption that each cell has the same approximate shape center in both nuclei and cytoplasm fluorescence channels, we guide cytoplasm shape estimation by the nuclear detections improving performance and reducing errors. Then we validate cell detection by gathering evidence from nuclei and cytoplasm channels. Additionally, we include overlap correction and shape regularization steps which further improve the estimated cell shapes. 
The approach is evaluated using two datasets with different types of data: a 20-image benchmark set of simulated cell culture images containing 1000 simulated cells, and a 16-image Drosophila melanogaster Kc167 dataset containing 1255 cells, stained for DNA and actin. Both image datasets present a difficult problem due to the high variability of cell shapes and frequent cluster overlap between cells. On the Drosophila dataset our approach achieved a precision/recall of 95%/69% for nuclei and 82%/90% for cytoplasm detection, and an overall accuracy of 76%.
Waites, Anthony B; Mannfolk, Peter; Shaw, Marnie E; Olsrud, Johan; Jackson, Graeme D
2007-02-01
Clinical functional magnetic resonance imaging (fMRI) occasionally fails to detect significant activation, often due to variability in task performance. The present study seeks to test whether a more flexible statistical analysis can better detect activation, by accounting for variance associated with variable compliance to the task over time. Experimental results and simulated data both confirm that even at 80% compliance to the task, such a flexible model outperforms standard statistical analysis when assessed using the extent of activation (experimental data), goodness of fit (experimental data), and area under the operator characteristic curve (simulated data). Furthermore, retrospective examination of 14 clinical fMRI examinations reveals that in patients where the standard statistical approach yields activation, there is a measurable gain in model performance in adopting the flexible statistical model, with little or no penalty in lost sensitivity. This indicates that a flexible model should be considered, particularly for clinical patients who may have difficulty complying fully with the study task.
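The gain from a more flexible model can be illustrated generically. This is not the authors' statistical model; it is a hypothetical block design in which per-block regressors absorb compliance-related amplitude changes that a single boxcar regressor cannot:

```python
import numpy as np

rng = np.random.default_rng(2)

# Block design: four 10-scan task blocks in an 80-scan run
n = 80
blocks = [(10, 20), (30, 40), (50, 60), (70, 80)]
boxcar = np.zeros(n)
for a, b in blocks:
    boxcar[a:b] = 1.0

# Variable compliance: response amplitude differs per block
amps = [1.0, 0.2, 0.9, 0.3]      # subject drifts off-task in blocks 2 and 4
signal = np.zeros(n)
for (a, b), amp in zip(blocks, amps):
    signal[a:b] = amp
y = signal + 0.1 * rng.standard_normal(n)

def r_squared(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

t = np.arange(n)
X_std = np.column_stack([np.ones(n), boxcar])                 # one regressor
X_flex = np.column_stack([np.ones(n)] +                       # one per block
                         [((t >= a) & (t < b)).astype(float)
                          for a, b in blocks])
r2_std, r2_flex = r_squared(X_std, y), r_squared(X_flex, y)
print(f"standard R^2 = {r2_std:.2f}, flexible R^2 = {r2_flex:.2f}")
```

The flexible design matrix fits each block's amplitude separately, so scans where the subject was off-task no longer inflate the residual of the on-task blocks, mirroring the improved goodness of fit reported above.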
Jones, A Kyle; Heintz, Philip; Geiser, William; Goldman, Lee; Jerjian, Khachig; Martin, Melissa; Peck, Donald; Pfeiffer, Douglas; Ranger, Nicole; Yorkston, John
2015-11-01
Quality control (QC) in medical imaging is an ongoing process and not just a series of infrequent evaluations of medical imaging equipment. The QC process involves designing and implementing a QC program, collecting and analyzing data, investigating results that are outside the acceptance levels for the QC program, and taking corrective action to bring these results back to an acceptable level. The QC process involves key personnel in the imaging department, including the radiologist, radiologic technologist, and the qualified medical physicist (QMP). The QMP performs detailed equipment evaluations and helps with oversight of the QC program, while the radiologic technologist is responsible for the day-to-day operation of the QC program. The continued need for ongoing QC in digital radiography has been highlighted in the scientific literature. The charge of this task group was to recommend consistency tests designed to be performed by a medical physicist or a radiologic technologist under the direction of a medical physicist to identify problems with an imaging system that need further evaluation by a medical physicist, including a fault tree to define actions that need to be taken when certain fault conditions are identified. The focus of this final report is the ongoing QC process, including rejected image analysis, exposure analysis, and artifact identification. These QC tasks are vital for the optimal operation of a department performing digital radiography.
Wu, Xia; Yu, Xinyu; Yao, Li; Li, Rui
2014-01-01
Functional magnetic resonance imaging (fMRI) studies have converged to reveal the default mode network (DMN), a constellation of regions that display co-activation during resting-state but co-deactivation during attention-demanding tasks in the brain. Here, we employed a Bayesian network (BN) analysis method to construct a directed effective connectivity model of the DMN and compared the organizational architecture and interregional directed connections under both resting-state and task-state. The analysis results indicated that the DMN was consistently organized into two closely interacting subsystems in both resting-state and task-state. The directed connections between DMN regions, however, changed significantly from the resting-state to task-state condition. The results suggest that the DMN intrinsically maintains a relatively stable structure whether at rest or performing tasks but has different information processing mechanisms under varied states. PMID:25309414
The Pan-STARRS PS1 Image Processing Pipeline
NASA Astrophysics Data System (ADS)
Magnier, E.
The Pan-STARRS PS1 Image Processing Pipeline (IPP) performs the image processing and data analysis tasks needed to enable the scientific use of the images obtained by the Pan-STARRS PS1 prototype telescope. The primary goals of the IPP are to process the science images from the Pan-STARRS telescopes and make the results available to other systems within Pan-STARRS. It also is responsible for combining all of the science images in a given filter into a single representation of the non-variable component of the night sky defined as the "Static Sky". To achieve these goals, the IPP also performs other analysis functions to generate the calibrations needed in the science image processing, and to occasionally use the derived data to generate improved astrometric and photometric reference catalogs. It also provides the infrastructure needed to store the incoming data and the resulting data products. The IPP inherits lessons learned, and in some cases code and prototype code, from several other astronomy image analysis systems, including Imcat (Kaiser), the Sloan Digital Sky Survey (REF), the Elixir system (Magnier & Cuillandre), and Vista (Tonry). Imcat and Vista have a large number of robust image processing functions. SDSS has demonstrated a working analysis pipeline and large-scale database system for a dedicated project. The Elixir system has demonstrated an automatic image processing system and an object database system for operational usage. This talk will present an overview of the IPP architecture, functional flow, code development structure, and selected analysis algorithms. Also discussed is the highly parallel hardware (HW) configuration necessary to support PS1 operational requirements. Finally, results are presented of the processing of images collected during PS1 early commissioning tasks utilizing the Pan-STARRS Test Camera #3.
Early differential processing of material images: Evidence from ERP classification.
Wiebel, Christiane B; Valsecchi, Matteo; Gegenfurtner, Karl R
2014-06-24
Investigating the temporal dynamics of natural image processing using event-related potentials (ERPs) has a long tradition in object recognition research. In a classical Go-NoGo task two characteristic effects have been emphasized: an early task independent category effect and a later task-dependent target effect. Here, we set out to use this well-established Go-NoGo paradigm to study the time course of material categorization. Material perception has gained more and more interest over the years as its importance in natural viewing conditions has been ignored for a long time. In addition to analyzing standard ERPs, we conducted a single trial ERP pattern analysis. To validate this procedure, we also measured ERPs in two object categories (people and animals). Our linear classification procedure was able to largely capture the overall pattern of results from the canonical analysis of the ERPs and even extend it. We replicate the known target effect (differential Go-NoGo potential at frontal sites) for the material images. Furthermore, we observe task-independent differential activity between the two material categories as early as 140 ms after stimulus onset. Using our linear classification approach, we show that material categories can be differentiated consistently based on the ERP pattern in single trials around 100 ms after stimulus onset, independent of the target-related status. This strengthens the idea of early differential visual processing of material categories independent of the task, probably due to differences in low-level image properties and suggests pattern classification of ERP topographies as a strong instrument for investigating electrophysiological brain activity. © 2014 ARVO.
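The single-trial linear classification idea can be sketched as follows (a simplified stand-in for the authors' pattern classifier; the topographies, noise levels, and trial counts are all invented): a nearest-centroid linear classifier trained on half the trials separates two simulated ERP categories from single-trial channel patterns.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_channels = 200, 32

# Hypothetical mean scalp topographies for two stimulus categories at a fixed latency.
topo_a = rng.normal(0.0, 1.0, n_channels)
topo_b = topo_a + 0.8                     # category B differs by a broad offset

# Simulated single-trial ERPs: category mean plus trial-to-trial noise.
X = np.vstack([topo_a + rng.normal(0.0, 1.0, (n_trials, n_channels)),
               topo_b + rng.normal(0.0, 1.0, (n_trials, n_channels))])
y = np.repeat([0, 1], n_trials)

# Split trials into train/test halves.
idx = rng.permutation(2 * n_trials)
train, test = idx[:n_trials], idx[n_trials:]

# Nearest-centroid linear classifier: project onto the difference of class means.
mu0 = X[train][y[train] == 0].mean(axis=0)
mu1 = X[train][y[train] == 1].mean(axis=0)
w = mu1 - mu0
threshold = w @ (mu0 + mu1) / 2.0
predictions = (X[test] @ w > threshold).astype(int)
accuracy = np.mean(predictions == y[test])
```

Held-out accuracy above chance is the kind of evidence used to claim that categories are differentiable from single-trial ERP topographies.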
Anatomical background and generalized detectability in tomosynthesis and cone-beam CT.
Gang, G J; Tward, D J; Lee, J; Siewerdsen, J H
2010-05-01
Anatomical background presents a major impediment to detectability in 2D radiography as well as 3D tomosynthesis and cone-beam CT (CBCT). This article incorporates theoretical and experimental analysis of anatomical background "noise" in cascaded systems analysis of 2D and 3D imaging performance to yield "generalized" metrics of noise-equivalent quanta (NEQ) and detectability index as a function of the orbital extent of the (circular arc) source-detector orbit. A physical phantom was designed based on principles of fractal self-similarity to exhibit a power-law spectral density (κ/f^β) comparable to various anatomical sites (e.g., breast and lung). Background power spectra [S_B(f)] were computed as a function of source-detector orbital extent, including tomosynthesis (approximately 10°-180°) and CBCT (180° + fan to 360°) under two acquisition schemes: (1) constant angular separation between projections (variable dose) and (2) constant total number of projections (constant dose). The resulting S_B was incorporated in the generalized NEQ, and detectability index was computed from 3D cascaded systems analysis for a variety of imaging tasks. The phantom yielded power-law spectra within the expected spatial frequency range, quantifying the dependence of clutter magnitude (κ) and correlation (β) with increasing tomosynthesis angle. Incorporation of S_B in the 3D NEQ provided a useful framework for analyzing the tradeoffs among anatomical, quantum, and electronic noise with dose and orbital extent. Distinct implications are posed for breast and chest tomosynthesis imaging system design - applications varying significantly in κ and β and in imaging task, and, therefore, in optimal selection of orbital extent, number of projections, and dose.
For example, low-frequency tasks (e.g., soft-tissue masses or nodules) tend to benefit from larger orbital extent and more fully 3D tomographic imaging, whereas high-frequency tasks (e.g., microcalcifications) require careful, application-specific selection of orbital extent and number of projections to minimize negative effects of quantum and electronic noise. The complex tradeoffs among anatomical background, quantum noise, and electronic noise in projection imaging, tomosynthesis, and CBCT can be described by generalized cascaded systems analysis, providing a useful framework for system design and optimization.
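A power-law background of the kind the fractal phantom was built to exhibit can be synthesized by shaping white noise in the frequency domain. The sketch below is a generic construction, not the phantom design or the paper's analysis code; the κ and β values are arbitrary.

```python
import numpy as np

def power_law_clutter(n=128, kappa=1.0, beta=3.0, seed=0):
    """Synthesize an n x n background whose power spectrum ~ kappa / f**beta."""
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n)
    f = np.hypot(*np.meshgrid(fx, fx, indexing="ij"))
    f[0, 0] = fx[1]                        # avoid division by zero at DC
    amplitude = np.sqrt(kappa / f**beta)   # spectral shaping filter
    phases = np.exp(2j * np.pi * rng.random((n, n)))  # random phase
    img = np.fft.ifft2(amplitude * phases).real
    return img - img.mean()

clutter = power_law_clutter()

# Sanity check: power should fall off steeply toward high spatial frequency.
spectrum = np.abs(np.fft.fft2(clutter)) ** 2
low = spectrum[0, 1:4].mean()       # low-frequency bins
high = spectrum[0, 40:60].mean()    # higher-frequency bins
```

Larger β produces smoother, more correlated clutter; κ scales its overall magnitude, matching the roles the abstract assigns to these parameters.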
2013-05-01
contract or a PhD dissertation typically are a "proof-of-concept" code base that can only read a single set of inputs and are not designed ...AFRL-RX-WP-TR-2013-0210 COLLABORATIVE RESEARCH AND DEVELOPMENT (CR&D) III Task Order 0090: Image Processing Framework: From...public release; distribution unlimited. See additional restrictions described on inside pages. STINFO COPY AIR FORCE RESEARCH LABORATORY
Medical Image Analysis by Cognitive Information Systems - a Review.
Ogiela, Lidia; Takizawa, Makoto
2016-10-01
This publication presents a review of medical image analysis systems. The paradigms of cognitive information systems are presented through examples of medical image analysis systems, and the semantic processes involved are described as they apply to different types of medical images. Cognitive information systems are defined on the basis of methods for the semantic analysis and interpretation of information - here, medical images - applied to the cognitive meaning contained in the analyzed data sets. Semantic analysis is proposed to analyze the meaning of the data; meaning is carried by information such as medical images. Medical image analysis is presented and discussed as applied to various types of medical images showing selected human organs with different pathologies. These images were analyzed using different classes of cognitive information systems. Cognitive information systems dedicated to medical image analysis are also defined for decision-support tasks. This is important, for example, in diagnostic and therapeutic processes and in the selection of semantic aspects/features from the analyzed data sets; these features allow a new way of analysis to be created.
Methods for the analysis of ordinal response data in medical image quality assessment.
Keeble, Claire; Baxter, Paul D; Gislason-Lee, Amber J; Treadgold, Laura A; Davies, Andrew G
2016-07-01
The assessment of image quality in medical imaging often requires observers to rate images for some metric or detectability task. These subjective results are used in optimization, radiation dose reduction or system comparison studies and may be compared to objective measures from a computer vision algorithm performing the same task. One popular scoring approach is to use a Likert scale, then assign consecutive numbers to the categories. The mean of these response values is then taken and used for comparison with the objective or second subjective response. Agreement is often assessed using correlation coefficients. We highlight a number of weaknesses in this common approach, including inappropriate analyses of ordinal data and the inability to properly account for correlations caused by repeated images or observers. We suggest alternative data collection and analysis techniques such as amendments to the scale and multilevel proportional odds models. We detail the suitability of each approach depending upon the data structure and demonstrate each method using a medical imaging example. Whilst others have raised some of these issues, we evaluated the entire study from data collection to analysis, suggested sources for software and further reading, and provided a checklist plus flowchart for use with any ordinal data. We hope that raised awareness of the limitations of the current approaches will encourage greater method consideration and the utilization of a more appropriate analysis. More accurate comparisons between measures in medical imaging will lead to a more robust contribution to the imaging literature and ultimately improved patient care.
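The core objection to averaging Likert codes can be shown in a few lines of plain Python (hypothetical ratings, our own illustration): two observers with identical mean scores can have completely different rating distributions, which cumulative category proportions - the quantity proportional odds models operate on - do distinguish.

```python
# Hypothetical 5-point Likert ratings from two observers of the same image set.
observer_a = [1] * 10 + [5] * 10   # polarized: half "very poor", half "excellent"
observer_b = [3] * 20              # uniformly "acceptable"

mean_a = sum(observer_a) / len(observer_a)
mean_b = sum(observer_b) / len(observer_b)
# mean_a == mean_b, so a mean-based comparison sees no difference at all.

def cumulative_proportions(ratings, categories=(1, 2, 3, 4, 5)):
    """P(rating <= c) for each category c: what ordinal models work with."""
    n = len(ratings)
    return [sum(r <= c for r in ratings) / n for c in categories]

cum_a = cumulative_proportions(observer_a)
cum_b = cumulative_proportions(observer_b)
# cum_a and cum_b differ sharply, exposing the disagreement the means hide.
```

A multilevel proportional odds model, as the authors suggest, models these cumulative probabilities directly while also accounting for repeated images and observers.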
Identification of the Properties of Gum Arabic Used as a Binder in 7.62-mm Ammunition Primers
2010-06-01
Solution - LCC Testing (ATK Task 700) 51 Cartridge - Ballistic Testing (ATK Task 800) 51 ATK Elemental Analysis 52 Moisture Loss and Friability...Hummel sample 7 3 SDT summary for Quadra sample 8 4 Particle size analysis summary for gum arabic samples 9 5 SEM images of Colony gum arabic at 230x...strengths 21 16 Color analysis: Colony after 5.0 hrs 23 17 Color analysis: Hummel after 5.0 hrs 23 18 Color analysis: Brenntag after 5.0 hrs 23 19 Gel
X-Ray Imaging Applied to Problems in Planetary Materials
NASA Technical Reports Server (NTRS)
Jurewicz, A. J. G.; Mih, D. T.; Jones, S. M.; Connolly, H.
2000-01-01
Real-time radiography (X-ray imaging) can be a useful tool for tasks such as (1) the non-destructive, preliminary examination of opaque samples and (2) optimizing how to section opaque samples for more traditional microscopy and chemical analysis.
Simultaneous acquisition of EEG and NIRS during cognitive tasks for an open access dataset.
Shin, Jaeyoung; von Lühmann, Alexander; Kim, Do-Won; Mehnert, Jan; Hwang, Han-Jeong; Müller, Klaus-Robert
2018-02-13
We provide an open access multimodal brain-imaging dataset of simultaneous electroencephalography (EEG) and near-infrared spectroscopy (NIRS) recordings. Twenty-six healthy participants performed three cognitive tasks: 1) n-back (0-, 2- and 3-back), 2) discrimination/selection response task (DSR) and 3) word generation (WG) tasks. The data provided includes: 1) measured data, 2) demographic data, and 3) basic analysis results. For n-back (dataset A) and DSR tasks (dataset B), event-related potential (ERP) analysis was performed, and spatiotemporal characteristics and classification results for 'target' versus 'non-target' (dataset A) and symbol 'O' versus symbol 'X' (dataset B) are provided. Time-frequency analysis was performed to show the EEG spectral power to differentiate the task-relevant activations. Spatiotemporal characteristics of hemodynamic responses are also shown. For the WG task (dataset C), the EEG spectral power and spatiotemporal characteristics of hemodynamic responses are analyzed, and the potential merit of hybrid EEG-NIRS BCIs was validated with respect to classification accuracy. We expect that the dataset provided will facilitate performance evaluation and comparison of many neuroimaging analysis techniques.
NASA Astrophysics Data System (ADS)
Hervey, Nathan; Khan, Bilal; Shagman, Laura; Tian, Fenghua; Delgado, Mauricio R.; Tulchin-Francis, Kirsten; Shierk, Angela; Smith, Linsley; Reid, Dahlia; Clegg, Nancy J.; Liu, Hanli; MacFarlane, Duncan; Alexandrakis, George
2013-03-01
Functional neurological imaging has been shown to be valuable in evaluating brain plasticity in children with cerebral palsy (CP). In recent studies it has been demonstrated that functional near-infrared spectroscopy (fNIRS) is a viable and sensitive method for imaging motor cortex activities in children with CP. However, during unilateral finger tapping tasks children with CP often exhibit mirror motions (unintended motions in the non-tapping hand), and current fNIRS image formation techniques do not account for this. Therefore, the resulting fNIRS images contain activation from intended and unintended motions. In this study, cortical activity was mapped with fNIRS on four children with CP and five controls during a finger tapping task. Finger motion and arm muscle activation were concurrently measured using motion tracking cameras and electromyography (EMG). Subject-specific regressors were created from motion capture and EMG data and used in a general linear model (GLM) analysis in an attempt to create fNIRS images representative of different motions. The analysis provided an fNIRS image representing activation due to motion and muscle activity for each hand. This method could prove to be valuable in monitoring brain plasticity in children with CP by providing more consistent images between measurements. Additionally, muscle effort versus cortical effort was compared between control and CP subjects. More cortical effort was required to produce similar muscle effort in children with CP. It is possible this metric could be a valuable diagnostic tool in determining response to treatment.
Earth resources data analysis program, phase 3
NASA Technical Reports Server (NTRS)
1975-01-01
Tasks were performed in two areas: (1) systems analysis and (2) algorithmic development. The major effort in the systems analysis task was the development of a recommended approach to the monitoring of resource utilization data for the Large Area Crop Inventory Experiment (LACIE). Other efforts included participation in various studies concerning the LACIE Project Plan, the utility of the GE Image 100, and the specifications for a special purpose processor to be used in the LACIE. In the second task, the major effort was the development of improved algorithms for estimating proportions of unclassified remotely sensed data. Also, work was performed on optimal feature extraction, including optimal feature extraction for proportion estimation.
NASA Technical Reports Server (NTRS)
Abbey, Craig K.; Eckstein, Miguel P.
2002-01-01
We consider estimation and statistical hypothesis testing on classification images obtained from the two-alternative forced-choice experimental paradigm. We begin with a probabilistic model of task performance for simple forced-choice detection and discrimination tasks. Particular attention is paid to general linear filter models because these models lead to a direct interpretation of the classification image as an estimate of the filter weights. We then describe an estimation procedure for obtaining classification images from observer data. A number of statistical tests are presented for testing various hypotheses from classification images based on some more compact set of features derived from them. As an example of how the methods we describe can be used, we present a case study investigating detection of a Gaussian bump profile.
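The estimation procedure can be sketched with a simulated linear-template observer (a textbook construction under invented parameters, not the authors' exact estimator): averaging the interval-1-minus-interval-2 external noise, signed by the observer's choice on each 2AFC trial, recovers an estimate resembling the observer's template. The Gaussian bump echoes the abstract's case study.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_pixels = 5000, 64
x = np.arange(n_pixels)
template = np.exp(-((x - 32.0) ** 2) / 50.0)   # Gaussian bump profile (the signal)

# External noise fields for the two intervals; the signal is always in interval 1.
noise1 = rng.normal(0.0, 1.0, (n_trials, n_pixels))
noise2 = rng.normal(0.0, 1.0, (n_trials, n_pixels))

# Simulated linear-template observer with additive internal noise.
resp1 = (0.5 * template + noise1) @ template + rng.normal(0.0, 4.0, n_trials)
resp2 = noise2 @ template + rng.normal(0.0, 4.0, n_trials)
chose1 = resp1 >= resp2

# Classification image: noise difference, signed by the observer's choice.
diff = noise1 - noise2
classification_image = np.where(chose1[:, None], diff, -diff).mean(axis=0)

# The estimate should correlate with the filter weights the observer used.
similarity = np.corrcoef(classification_image, template)[0, 1]
```

Hypothesis tests on the classification image (e.g., whether particular pixels carry weight) would then be built on the trial-to-trial variability of this estimate.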
ROBOSIGHT: Robotic Vision System For Inspection And Manipulation
NASA Astrophysics Data System (ADS)
Trivedi, Mohan M.; Chen, ChuXin; Marapane, Suresh
1989-02-01
Vision is an important sensory modality that can be used for deriving information critical to the proper, efficient, flexible, and safe operation of an intelligent robot. Vision systems are utilized for developing higher level interpretation of the nature of a robotic workspace using images acquired by cameras mounted on a robot. Such information can be useful for tasks such as object recognition, object location, object inspection, obstacle avoidance and navigation. In this paper we describe efforts directed towards developing a vision system useful for performing various robotic inspection and manipulation tasks. The system utilizes gray scale images and can be viewed as a model-based system. It includes general purpose image analysis modules as well as special purpose, task dependent object status recognition modules. Experiments are described to verify the robust performance of the integrated system using a robotic testbed.
Chen, Y C; Huang, F D; Chen, N H; Shou, J Y; Wu, L
1998-04-01
In the last 2-3 decades the role of the premotor cortex (PM) of the monkey in memorized spatial sequential (MSS) movements has been amply investigated. However, it is not yet known whether PM participates in movement sequence behaviour guided by recognition of visual figures (i.e. the figure-recognition sequence, FRS). In the present work three monkeys were trained to perform both FRS and MSS tasks. Postmortem examination showed that 202 cells were in the dorso-lateral premotor cortex. Among 111 cells recorded during the two tasks, more than 50% changed their activity during the cue periods in either task. During the response period, the proportions of cells whose firing rate changed were high and roughly equal in FRS and MSS, while during the image period, the proportion in the FRS (83.7%) was significantly higher than that in the MSS (66.7%). Comparison of neuronal activities during the same motor sequence of the two different tasks showed that during the image periods PM neuronal activities were more closely related to the FRS task, while during the cue periods no difference could be found. Analysis of cell responses showed that neurons with longer latencies were far more numerous in MSS than in FRS in both the cue and image periods. The present results indicate that the premotor cortex participates in the FRS motor sequence as well as in MSS and suggest that the dorso-lateral PM represents another functional subarea shared by both FRS and MSS tasks. However, in view of the differences in PM neuronal responses during the cue and image periods of the FRS and MSS tasks, it seems likely that the neural networks involved in the two tasks are different.
Task-Driven Dictionary Learning Based on Mutual Information for Medical Image Classification.
Diamant, Idit; Klang, Eyal; Amitai, Michal; Konen, Eli; Goldberger, Jacob; Greenspan, Hayit
2017-06-01
We present a novel variant of the bag-of-visual-words (BoVW) method for automated medical image classification. Our approach improves the BoVW model by learning a task-driven dictionary of the most relevant visual words per task using a mutual information-based criterion. Additionally, we generate relevance maps to visualize and localize the decision of the automatic classification algorithm. These maps demonstrate how the algorithm works and show the spatial layout of the most relevant words. We applied our algorithm to three different tasks: chest x-ray pathology identification (of four pathologies: cardiomegaly, enlarged mediastinum, right consolidation, and left consolidation), liver lesion classification into four categories in computed tomography (CT) images and benign/malignant clusters of microcalcifications (MCs) classification in breast mammograms. Validation was conducted on three datasets: 443 chest x-rays, 118 portal phase CT images of liver lesions, and 260 mammography MCs. The proposed method improves the classical BoVW method for all tested applications. For chest x-ray, area under curve of 0.876 was obtained for enlarged mediastinum identification compared to 0.855 using classical BoVW (with p-value 0.01). For MC classification, a significant improvement of 4% was achieved using our new approach (with p-value = 0.03). For liver lesion classification, an improvement of 6% in sensitivity and 2% in specificity were obtained (with p-value 0.001). We demonstrated that classification based on informative selected set of words results in significant improvement. Our new BoVW approach shows promising results in clinically important domains. Additionally, it can discover relevant parts of images for the task at hand without explicit annotations for training data. This can provide computer-aided support for medical experts in challenging image analysis tasks.
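The mutual information-based selection criterion can be sketched as follows (a generic MI ranking of binarized word occurrences, our own illustration rather than the authors' exact implementation): visual words whose presence co-varies with the class label score high and would be retained in the task-driven dictionary, while uninformative words score near zero.

```python
import numpy as np

def mutual_information_bits(presence, labels):
    """MI (in bits) between binary word presence and a binary class label."""
    mi = 0.0
    for w in (0, 1):
        for c in (0, 1):
            p_joint = np.mean((presence == w) & (labels == c))
            if p_joint > 0:
                p_w, p_c = np.mean(presence == w), np.mean(labels == c)
                mi += p_joint * np.log2(p_joint / (p_w * p_c))
    return mi

rng = np.random.default_rng(3)
n_images = 500
labels = rng.integers(0, 2, n_images)   # e.g., benign vs. malignant

# A word whose presence tracks the class label (with 10% flips) vs. a random word.
informative_word = labels ^ (rng.random(n_images) < 0.10).astype(int)
random_word = rng.integers(0, 2, n_images)

scores = {"informative": mutual_information_bits(informative_word, labels),
          "random": mutual_information_bits(random_word, labels)}
# Keeping only the top-scoring words per task yields a task-driven dictionary.
```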
Dzyubachyk, Oleh; Essers, Jeroen; van Cappellen, Wiggert A; Baldeyron, Céline; Inagaki, Akiko; Niessen, Wiro J; Meijering, Erik
2010-10-01
Complete, accurate and reproducible analysis of intracellular foci from fluorescence microscopy image sequences of live cells requires full automation of all processing steps involved: cell segmentation and tracking followed by foci segmentation and pattern analysis. Integrated systems for this purpose are lacking. Extending our previous work in cell segmentation and tracking, we developed a new system for performing fully automated analysis of fluorescent foci in single cells. The system was validated by applying it to two common tasks: intracellular foci counting (in DNA damage repair experiments) and cell-phase identification based on foci pattern analysis (in DNA replication experiments). Experimental results show that the system performs comparably to expert human observers. Thus, it may replace tedious manual analyses for the considered tasks, and enables high-content screening. The described system was implemented in MATLAB (The MathWorks, Inc., USA) and compiled to run within the MATLAB environment. The routines together with four sample datasets are available at http://celmia.bigr.nl/. The software is planned for public release, free of charge for non-commercial use, after publication of this article.
Fine-grained recognition of plants from images.
Šulc, Milan; Matas, Jiří
2017-01-01
Fine-grained recognition of plants from images is a challenging computer vision task, due to the diverse appearance and complex structure of plants, high intra-class variability and small inter-class differences. We review the state-of-the-art and discuss plant recognition tasks, from identification of plants from specific plant organs to general plant recognition "in the wild". We propose texture analysis and deep learning methods for different plant recognition tasks, and evaluate and compare them to the state-of-the-art. Texture analysis is only applied to images with unambiguous segmentation (bark and leaf recognition), whereas CNNs are only applied when sufficiently large datasets are available. The results provide an insight into the complexity of different plant recognition tasks. The proposed methods outperform the state-of-the-art in leaf and bark classification and achieve very competitive results in plant recognition "in the wild". The results suggest that recognition of segmented leaves is practically a solved problem when high volumes of training data are available. The generality and higher capacity of state-of-the-art CNNs make them suitable for plant recognition "in the wild", where the views on plant organs or plants vary significantly and the difficulty is increased by occlusions and background clutter.
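As a concrete example of the texture-analysis side, a local binary pattern (LBP) histogram is a standard descriptor for bark and leaf textures; the sketch below is a generic 8-neighbour LBP, not necessarily the exact variant used in the paper.

```python
import numpy as np

def lbp_histogram(image):
    """Normalized 8-neighbour local binary pattern histogram of a 2D image."""
    center = image[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center, dtype=np.int32)
    for bit, (dy, dx) in enumerate(shifts):
        # Neighbour patch shifted by (dy, dx), same shape as `center`.
        neighbour = image[1 + dy:image.shape[0] - 1 + dy,
                          1 + dx:image.shape[1] - 1 + dx]
        codes |= (neighbour >= center).astype(np.int32) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(4)
patch = rng.random((64, 64))   # stand-in for a segmented bark/leaf patch
hist = lbp_histogram(patch)
```

The 256-bin histogram would then feed a classifier; CNN-based pipelines learn such texture-sensitive features instead of hand-crafting them.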
Mastcam Stereo Analysis and Mosaics (MSAM)
NASA Astrophysics Data System (ADS)
Deen, R. G.; Maki, J. N.; Algermissen, S. S.; Abarca, H. E.; Ruoff, N. A.
2017-06-01
Describes a new PDART task that will generate stereo analysis products (XYZ, slope, etc.), terrain meshes, and mosaics (stereo, ortho, and Mast/Nav combos) for all MSL Mastcam images and deliver the results to PDS.
Proceedings of the NASA Workshop on Registration and Rectification
NASA Technical Reports Server (NTRS)
Bryant, N. A. (Editor)
1982-01-01
Issues associated with the registration and rectification of remotely sensed data are discussed. Near- and long-range applications research tasks and some medium-range technology augmentation research areas are recommended. Image sharpness, feature extraction, inter-image mapping, error analysis, and verification methods are addressed.
ImageJ-MATLAB: a bidirectional framework for scientific image analysis interoperability.
Hiner, Mark C; Rueden, Curtis T; Eliceiri, Kevin W
2017-02-15
ImageJ-MATLAB is a lightweight Java library facilitating bi-directional interoperability between MATLAB and ImageJ. By defining a standard for translation between matrix and image data structures, researchers are empowered to select the best tool for their image-analysis tasks. Freely available extension to ImageJ2 (http://imagej.net/Downloads). Installation and use instructions available at http://imagej.net/MATLAB_Scripting. Tested with ImageJ 2.0.0-rc-54, Java 1.8.0_66 and MATLAB R2015b. eliceiri@wisc.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
NASA Astrophysics Data System (ADS)
Lin, Zi-Jing; Li, Lin; Cazzell, Marry; Liu, Hanli
2013-03-01
Functional near-infrared spectroscopy (fNIRS) is a non-invasive imaging technique which measures the hemodynamic changes that reflect brain activity. Diffuse optical tomography (DOT), a variant of fNIRS with multi-channel NIRS measurements, has demonstrated the capability of three-dimensional (3D) reconstruction of hemodynamic changes due to brain activity. The conventional method of DOT image analysis to define brain activation is based upon the paired t-test between two different states, such as resting-state versus task-state. However, it has limitations because the selection of the activation and post-activation periods is relatively subjective. General linear model (GLM) based analysis can overcome this limitation. In this study, we combine 3D DOT image reconstruction with GLM-based analysis (i.e., voxel-wise GLM analysis) to investigate the brain activity that is associated with the risk-decision-making process. Risk decision-making is an important cognitive process and thus is an essential topic in the field of neuroscience. The balloon analogue risk task (BART) is a valid experimental model and has been commonly used in behavioral measures to assess human risk-taking action and tendency while facing risks. We have utilized the BART paradigm with a blocked design to investigate brain activations in the prefrontal and frontal cortical areas during decision-making. Voxel-wise GLM analysis was performed on 18 human participants (10 males and 8 females). In this work, we wish to demonstrate the feasibility of using voxel-wise GLM analysis to image and study cognitive functions in response to risk decision-making by DOT. Results have shown significant changes in the dorsolateral prefrontal cortex (DLPFC) during the active choice mode and a different hemodynamic pattern between genders, which are in good agreement with the published literature in functional magnetic resonance imaging (fMRI) and fNIRS studies.
New public dataset for spotting patterns in medieval document images
NASA Astrophysics Data System (ADS)
En, Sovann; Nicolas, Stéphane; Petitjean, Caroline; Jurie, Frédéric; Heutte, Laurent
2017-01-01
With advances in technology, a large part of our cultural heritage is becoming digitally available. In particular, in the field of historical document image analysis, there is now a growing need for indexing and data mining tools, thus allowing us to spot and retrieve the occurrences of an object of interest, called a pattern, in a large database of document images. Patterns may present some variability in terms of color, shape, or context, making the spotting of patterns a challenging task. Pattern spotting is a relatively new field of research, still hampered by the lack of available annotated resources. We present a new publicly available dataset named DocExplore dedicated to spotting patterns in historical document images. The dataset contains 1500 images and 1464 queries, and allows the evaluation of two tasks: image retrieval and pattern localization. A standardized benchmark protocol along with ad hoc metrics is provided for a fair comparison of the submitted approaches. We also provide some first results obtained with our baseline system on this new dataset, which show that there is room for improvement and that should encourage researchers of the document image analysis community to design new systems and submit improved results.
Lee, Matthew H; Schemmel, Andrew J; Pooler, B Dustin; Hanley, Taylor; Kennedy, Tabassum A; Field, Aaron S; Wiegmann, Douglas; Yu, John-Paul J
To assess the impact of separate non-image interpretive task and image-interpretive task workflows in an academic neuroradiology practice. A prospective, randomized, observational investigation of a centralized academic neuroradiology reading room was performed. The primary reading room fellow was observed over a one-month period using a time-and-motion methodology, recording frequency and duration of tasks performed. Tasks were categorized into separate image interpretive and non-image interpretive workflows. Post-intervention observation of the primary fellow was repeated following the implementation of a consult assistant responsible for non-image interpretive tasks. Pre- and post-intervention data were compared. Following separation of image-interpretive and non-image interpretive workflows, time spent on image-interpretive tasks by the primary fellow increased from 53.8% to 73.2% while non-image interpretive tasks decreased from 20.4% to 4.4%. Mean time duration of image interpretation nearly doubled, from 05:44 to 11:01 (p = 0.002). Decreases in specific non-image interpretive tasks, including phone calls/paging (2.86/hr versus 0.80/hr), in-room consultations (1.36/hr versus 0.80/hr), and protocoling (0.99/hr versus 0.10/hr), were observed. The consult assistant experienced 29.4 task switching events per hour. Rates of specific non-image interpretive tasks for the CA were 6.41/hr for phone calls/paging, 3.60/hr for in-room consultations, and 3.83/hr for protocoling. Separating responsibilities into NIT and IIT workflows substantially increased image interpretation time and decreased TSEs for the primary fellow. Consolidation of NITs into a separate workflow may allow for more efficient task completion. Copyright © 2017 Elsevier Inc. All rights reserved.
SUPERFUND REMOTE SENSING SUPPORT
This task provides remote sensing technical support to the Superfund program. Support includes the collection, processing, and analysis of remote sensing data to characterize hazardous waste disposal sites and their history. Image analysis reports, aerial photographs, and assoc...
Application development environment for advanced digital workstations
NASA Astrophysics Data System (ADS)
Valentino, Daniel J.; Harreld, Michael R.; Liu, Brent J.; Brown, Matthew S.; Huang, Lu J.
1998-06-01
One remaining barrier to the clinical acceptance of electronic imaging and information systems is the difficulty in providing intuitive access to the information needed for a specific clinical task (such as reaching a diagnosis or tracking clinical progress). The purpose of this research was to create a development environment that enables the design and implementation of advanced digital imaging workstations. We used formal data and process modeling to identify the diagnostic and quantitative data that radiologists use and the tasks that they typically perform to make clinical decisions. We studied a diverse range of radiology applications, including diagnostic neuroradiology in an academic medical center, pediatric radiology in a children's hospital, screening mammography in a breast cancer center, and thoracic radiology consultation for an oncology clinic. We used object-oriented analysis to develop software toolkits that enable a programmer to rapidly implement applications that closely match clinical tasks. The toolkits support browsing patient information, integrating patient images and reports, manipulating images, and making quantitative measurements on images. Collectively, we refer to these toolkits as the UCLA Digital ViewBox toolkit (ViewBox/Tk). We used the ViewBox/Tk to rapidly prototype and develop a number of diverse medical imaging applications. Our task-based toolkit approach enabled rapid and iterative prototyping of workstations that matched clinical tasks. The toolkit functionality and performance provided a 'hands-on' feeling for manipulating images, and for accessing textual information and reports. The toolkits directly support a new concept for protocol-based reading of diagnostic studies. The design supports the implementation of network-based application services (e.g., prefetching, workflow management, and post-processing) that will facilitate the development of future clinical applications.
Three-dimensional rendering of segmented object using matlab - biomed 2010.
Anderson, Jeffrey R; Barrett, Steven F
2010-01-01
The three-dimensional rendering of microscopic objects is a difficult and challenging task that often requires specialized image processing techniques. Previous work has described a semi-automatic segmentation process for fluorescently stained neurons collected as a sequence of slice images with a confocal laser scanning microscope. Once properly segmented, each individual object can be rendered and studied as a three-dimensional virtual object. This paper describes the work associated with the design and development of Matlab files to create three-dimensional images from the segmented object data previously mentioned. Part of the motivation for this work is to integrate both the segmentation and rendering processes into one software application, providing a seamless transition from the segmentation tasks to the rendering and visualization tasks. Previously these tasks were accomplished on two different computer systems, Windows and Linux. This split basically limits the usefulness of the segmentation and rendering applications to those who have both computer systems readily available. The focus of this work is to create custom Matlab image processing algorithms for object rendering and visualization, and merge these capabilities with the Matlab files that were developed especially for the image segmentation task. The completed Matlab application will contain both the segmentation and rendering processes in a single graphical user interface, or GUI. This process for rendering three-dimensional images in Matlab requires that a sequence of two-dimensional binary images, each representing a cross-sectional slice of the object, be reassembled in a 3D space and covered with a surface. Additional segmented objects can be rendered in the same 3D space. The surface properties of each object can be varied by the user to aid in the study and analysis of the objects. This interactive process becomes a powerful visual tool to study and understand microscopic objects.
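The slice-reassembly step described above — stacking 2D binary masks into a 3D volume before surfacing — can be sketched outside Matlab as well. A minimal NumPy illustration (hypothetical function name; unit pixel area and isotropic slices assumed, surface extraction omitted):

```python
import numpy as np

def assemble_volume(slices, z_spacing=1.0):
    """Stack 2-D binary slice masks into a 3-D boolean volume and report
    the object volume, assuming unit pixel area and the given slice spacing."""
    vol = np.stack(slices, axis=0).astype(bool)   # (n_slices, rows, cols)
    voxel_volume = z_spacing                      # * pixel_dx * pixel_dy (both 1 here)
    return vol, vol.sum() * voxel_volume

# Two identical 3x3 slices, each with a 2x2 object, 0.5-unit slice spacing.
s = np.zeros((3, 3), dtype=bool)
s[1:, 1:] = True
vol, volume = assemble_volume([s, s], z_spacing=0.5)
```

A renderer would then extract an isosurface from `vol` (e.g. marching cubes) and shade it per the user-chosen surface properties.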
Earth mapping - aerial or satellite imagery comparative analysis
NASA Astrophysics Data System (ADS)
Fotev, Svetlin; Jordanov, Dimitar; Lukarski, Hristo
Nowadays, revising existing map products and creating new maps requires choosing a source of land-cover imagery. The trade-off between the effectiveness and cost of aerial mapping systems and those of very-high-resolution satellite imagery is topical [1, 2, 3, 4]. The price of any remotely sensed image depends on the product (panchromatic or multispectral), resolution, processing level, scale, urgency of the task, and on whether the needed image is available in the archive or has to be requested. The purpose of the present work is twofold: to make a comparative analysis of the two approaches to mapping the Earth with respect to two parameters, quality and cost, and to suggest an approach for selecting the map information source - airplane-based or spacecraft-based imaging systems with very high spatial resolution. Two cases are considered: an area that equals approximately one satellite scene and an area that equals approximately the territory of Bulgaria.
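The cost side of such a comparison is often framed as a break-even computation: aerial campaigns carry a high fixed mobilization cost but a low per-area rate, while satellite tasking is the reverse. The figures below are hypothetical placeholders, since, as the abstract notes, actual pricing depends on product, resolution, processing level, and urgency:

```python
def total_cost(fixed, per_km2, area_km2):
    """Total acquisition cost: fixed mobilization cost plus a per-area rate."""
    return fixed + per_km2 * area_km2

def break_even_area(fixed_a, rate_a, fixed_b, rate_b):
    """Area (km^2) at which the two imaging sources cost the same.

    Assumes rate_a != rate_b."""
    return (fixed_b - fixed_a) / (rate_a - rate_b)

# Hypothetical figures: aerial (a) vs. satellite (b).
area = break_even_area(fixed_a=20000.0, rate_a=5.0, fixed_b=1000.0, rate_b=25.0)
```

Above the break-even area the high-fixed-cost source wins; below it, the low-fixed-cost source does, which matches the paper's two test cases (one scene vs. a whole country).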
Deep Learning for Classification of Colorectal Polyps on Whole-slide Images.
Korbar, Bruno; Olofson, Andrea M; Miraflor, Allen P; Nicka, Catherine M; Suriawinata, Matthew A; Torresani, Lorenzo; Suriawinata, Arief A; Hassanpour, Saeed
2017-01-01
Histopathological characterization of colorectal polyps is critical for determining the risk of colorectal cancer and future rates of surveillance for patients. However, this characterization is a challenging task and suffers from significant inter- and intra-observer variability. We built an automatic image analysis method that can accurately classify different types of colorectal polyps on whole-slide images to help pathologists with this characterization and diagnosis. Our method is based on deep-learning techniques, which rely on numerous levels of abstraction for data representation and have shown state-of-the-art results for various image analysis tasks. Our method covers five common types of polyps (i.e., hyperplastic, sessile serrated, traditional serrated, tubular, and tubulovillous/villous) that are included in the US Multisociety Task Force guidelines for colorectal cancer risk assessment and surveillance. We developed multiple deep-learning approaches by leveraging a dataset of 2074 crop images, which were annotated by multiple domain expert pathologists as reference standards. We evaluated our method on an independent test set of 239 whole-slide images and measured standard machine-learning evaluation metrics of accuracy, precision, recall, and F1 score and their 95% confidence intervals. Our evaluation shows that our method with residual network architecture achieves the best performance for classification of colorectal polyps on whole-slide images (overall accuracy: 93.0%, 95% confidence interval: 89.0%-95.9%). Our method can reduce the cognitive burden on pathologists and improve their efficacy in histopathological characterization of colorectal polyps and in subsequent risk assessment and follow-up recommendations.
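The reported headline result, accuracy with a 95% confidence interval on 239 test slides, can be illustrated with a normal-approximation binomial interval. The count of 222 correct slides below is an assumed round-off consistent with the reported 93.0%, and the paper may have used a different interval method (e.g. Wilson or bootstrap), so the bounds will not match exactly:

```python
import math

def accuracy_ci(correct, n, z=1.96):
    """Point estimate and normal-approximation 95% CI for accuracy."""
    p = correct / n
    se = math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Roughly the abstract's setting: 239 test slides, ~93% correct.
p, lo, hi = accuracy_ci(correct=222, n=239)
```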
Plaie, Thierry; Thomas, Delphine
2008-06-01
Our study specifies the contributions of the image generation and image maintenance processes occurring at the time of imaginal coding of verbal information in memory during normal aging. The memory capacities of 19 young adults (average age 24 years) and 19 older adults (average age 75 years) were assessed using recall tasks in which the imagery value of the to-be-learned stimuli was varied. Mental visual imagery capacities were assessed using tasks of image generation and temporary storage of mental images. The analysis of variance indicates a greater age-related decrease in the concreteness effect. The major contribution of our study is to show that the age-related decline of dual coding of verbal information in memory results primarily from the decline of image maintenance capacities and from a slowdown in image generation. (PsycINFO Database Record (c) 2008 APA, all rights reserved).
Soto, Fabian A.; Waldschmidt, Jennifer G.; Helie, Sebastien; Ashby, F. Gregory
2013-01-01
Previous evidence suggests that relatively separate neural networks underlie initial learning of rule-based and information-integration categorization tasks. With the development of automaticity, categorization behavior in both tasks becomes increasingly similar and exclusively related to activity in cortical regions. The present study uses multi-voxel pattern analysis to directly compare the development of automaticity in different categorization tasks. Each of three groups of participants received extensive training in a different categorization task: either an information-integration task, or one of two rule-based tasks. Four training sessions were performed inside an MRI scanner. Three different analyses were performed on the imaging data from a number of regions of interest (ROIs). The common patterns analysis had the goal of revealing ROIs with similar patterns of activation across tasks. The unique patterns analysis had the goal of revealing ROIs with dissimilar patterns of activation across tasks. The representational similarity analysis aimed at exploring (1) the similarity of category representations across ROIs and (2) how those patterns of similarities compared across tasks. The results showed that common patterns of activation were present in motor areas and basal ganglia early in training, but only in the former later on. Unique patterns were found in a variety of cortical and subcortical areas early in training, but they were dramatically reduced with training. Finally, patterns of representational similarity between brain regions became increasingly similar across tasks with the development of automaticity. PMID:23333700
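The representational similarity analysis used in this study compares, for each pair of regions or tasks, the pattern of pairwise condition dissimilarities. A minimal sketch with toy "ROI" data (illustrative names and shapes, not the study's pipeline):

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between condition activation patterns (conditions x voxels)."""
    return 1.0 - np.corrcoef(patterns)

def rsa_similarity(rdm_a, rdm_b):
    """Second-order similarity: correlate the upper triangles of two RDMs."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return float(np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1])

# Toy example: two ROIs whose category structure is nearly identical.
rng = np.random.default_rng(0)
roi_a = rng.normal(size=(4, 20))                  # 4 conditions x 20 voxels
roi_b = roi_a + 0.05 * rng.normal(size=(4, 20))   # same structure + noise
sim = rsa_similarity(rdm(roi_a), rdm(roi_b))
```

High second-order similarity between regions across tasks is the kind of signature the study reports emerging with automaticity.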
Wait, Eric; Winter, Mark; Bjornsson, Chris; Kokovay, Erzsebet; Wang, Yue; Goderie, Susan; Temple, Sally; Cohen, Andrew R
2014-10-03
Neural stem cells are motile and proliferative cells that undergo mitosis, dividing to produce daughter cells and ultimately generating differentiated neurons and glia. Understanding the mechanisms controlling neural stem cell proliferation and differentiation will play a key role in the emerging fields of regenerative medicine and cancer therapeutics. Stem cell studies in vitro from 2-D image data are well established. Visualizing and analyzing large three dimensional images of intact tissue is a challenging task. It becomes more difficult as the dimensionality of the image data increases to include time and additional fluorescence channels. There is a pressing need for 5-D image analysis and visualization tools to study cellular dynamics in the intact niche and to quantify the role that environmental factors play in determining cell fate. We present an application that integrates visualization and quantitative analysis of 5-D (x,y,z,t,channel) and large montage confocal fluorescence microscopy images. The image sequences show stem cells together with blood vessels, enabling quantification of the dynamic behaviors of stem cells in relation to their vascular niche, with applications in developmental and cancer biology. Our application automatically segments, tracks, and lineages the image sequence data and then allows the user to view and edit the results of automated algorithms in a stereoscopic 3-D window while simultaneously viewing the stem cell lineage tree in a 2-D window. Using the GPU to store and render the image sequence data enables a hybrid computational approach. An inference-based approach utilizing user-provided edits to automatically correct related mistakes executes interactively on the system CPU while the GPU handles 3-D visualization tasks. By exploiting commodity computer gaming hardware, we have developed an application that can be run in the laboratory to facilitate rapid iteration through biological experiments. 
We combine unsupervised image analysis algorithms with an interactive visualization of the results. Our validation interface allows for each data set to be corrected to 100% accuracy, ensuring that downstream data analysis is accurate and verifiable. Our tool is the first to combine all of these aspects, leveraging the synergies obtained by utilizing validation information from stereo visualization to improve the low-level image processing tasks.
Fedorov, Andriy; Clunie, David; Ulrich, Ethan; Bauer, Christian; Wahle, Andreas; Brown, Bartley; Onken, Michael; Riesmeier, Jörg; Pieper, Steve; Kikinis, Ron; Buatti, John; Beichel, Reinhard R
2016-01-01
Background. Imaging biomarkers hold tremendous promise for precision medicine clinical applications. Development of such biomarkers relies heavily on image post-processing tools for automated image quantitation. Their deployment in the context of clinical research necessitates interoperability with the clinical systems. Comparison with the established outcomes and evaluation tasks motivate integration of the clinical and imaging data, and the use of standardized approaches to support annotation and sharing of the analysis results and semantics. We developed the methodology and tools to support these tasks in Positron Emission Tomography and Computed Tomography (PET/CT) quantitative imaging (QI) biomarker development applied to head and neck cancer (HNC) treatment response assessment, using the Digital Imaging and Communications in Medicine (DICOM(®)) international standard and free open-source software. Methods. Quantitative analysis of PET/CT imaging data collected on patients undergoing treatment for HNC was conducted. Processing steps included Standardized Uptake Value (SUV) normalization of the images, segmentation of the tumor using manual and semi-automatic approaches, automatic segmentation of the reference regions, and extraction of the volumetric segmentation-based measurements. Suitable components of the DICOM standard were identified to model the various types of data produced by the analysis. A developer toolkit of conversion routines and an Application Programming Interface (API) were contributed and applied to create a standards-based representation of the data. Results. DICOM Real World Value Mapping, Segmentation and Structured Reporting objects were utilized for standards-compliant representation of the PET/CT QI analysis results and relevant clinical data. A number of correction proposals to the standard were developed. The open-source DICOM toolkit (DCMTK) was improved to simplify the task of DICOM encoding by introducing new API abstractions. 
Conversion and visualization tools utilizing this toolkit were developed. The encoded objects were validated for consistency and interoperability. The resulting dataset was deposited in the QIN-HEADNECK collection of The Cancer Imaging Archive (TCIA). Supporting tools for data analysis and DICOM conversion were made available as free open-source software. Discussion. We presented a detailed investigation of the development and application of the DICOM model, as well as the supporting open-source tools and toolkits, to accommodate representation of the research data in QI biomarker development. We demonstrated that the DICOM standard can be used to represent the types of data relevant in HNC QI biomarker development, and encode their complex relationships. The resulting annotated objects are amenable to data mining applications, and are interoperable with a variety of systems that support the DICOM standard.
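The SUV normalization step in the processing pipeline above is, in its body-weight form, a one-line computation over each PET voxel. This sketch is a simplification: it treats 1 g of tissue as 1 mL and assumes the injected dose has already been decay-corrected to scan time, details that a DICOM-compliant implementation (e.g. via Real World Value Mapping) must handle explicitly:

```python
def suv_bw(voxel_bq_per_ml, injected_dose_bq, body_weight_kg):
    """Body-weight Standardized Uptake Value.

    SUV = tissue concentration (Bq/mL) / (injected dose (Bq) / body weight (g)),
    treating 1 g of tissue as 1 mL; dose decay correction assumed done."""
    return voxel_bq_per_ml * body_weight_kg * 1000.0 / injected_dose_bq

# Example: 15 kBq/mL voxel, 370 MBq injected dose, 75 kg patient -> SUV ~ 3.
suv = suv_bw(15000.0, 370e6, 75.0)
```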
NASA Astrophysics Data System (ADS)
Bianchetti, Raechel Anne
Remotely sensed images have become a ubiquitous part of our daily lives. From novice users, aiding in search and rescue missions using tools such as TomNod, to trained analysts, synthesizing disparate data to address complex problems like climate change, imagery has become central to geospatial problem solving. Expert image analysts are continually faced with rapidly developing sensor technologies and software systems. In response to these cognitively demanding environments, expert analysts develop specialized knowledge and analytic skills to address increasingly complex problems. This study identifies the knowledge, skills, and analytic goals of expert image analysts tasked with identification of land cover and land use change. Analysts participating in this research are currently working as part of a national level analysis of land use change, and are well versed with the use of TimeSync, forest science, and image analysis. The results of this study benefit current analysts as it improves their awareness of their mental processes used during the image interpretation process. The study also can be generalized to understand the types of knowledge and visual cues that analysts use when reasoning with imagery for purposes beyond land use change studies. Here a Cognitive Task Analysis framework is used to organize evidence from qualitative knowledge elicitation methods for characterizing the cognitive aspects of the TimeSync image analysis process. Using a combination of content analysis, diagramming, semi-structured interviews, and observation, the study highlights the perceptual and cognitive elements of expert remote sensing interpretation. Results show that image analysts perform several standard cognitive processes, but flexibly employ these processes in response to various contextual cues. Expert image analysts' ability to think flexibly during their analysis process was directly related to their amount of image analysis experience. 
Additionally, results show that the basic Image Interpretation Elements continue to be important despite technological augmentation of the interpretation process. These results are used to derive a set of design guidelines for developing geovisual analytic tools and training to support image analysis.
Image analysis of multiple moving wood pieces in real time
NASA Astrophysics Data System (ADS)
Wang, Weixing
2006-02-01
This paper presents algorithms for image processing and image analysis of wood piece materials. The algorithms were designed for automatic detection of wood piece materials on a moving conveyor belt or a truck. When wood objects are moving, the hard task is to trace the contours of the objects in an optimal way. To make the algorithms work efficiently in the plant, a flexible online system was designed and developed, consisting mainly of image acquisition, image processing, object delineation and analysis. A number of newly developed algorithms can delineate wood objects with high accuracy and high speed, and in the wood piece analysis part, each wood piece can be characterized by a number of visual parameters which can also be used for constructing experimental models directly in the system.
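Delineating individual wood pieces in a thresholded frame is, at its core, a connected-component problem. The minimal 4-connected labeling sketch below illustrates that core step only; the paper's algorithms additionally handle touching objects, contour tracing, and real-time constraints:

```python
def label_components(grid):
    """4-connected component labeling of a binary grid (list of lists).

    Returns a label grid (0 = background) and the number of objects found."""
    rows, cols = len(grid), len(grid[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not labels[r][c]:
                current += 1
                stack = [(r, c)]            # iterative flood fill
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and grid[y][x] and not labels[y][x]:
                        labels[y][x] = current
                        stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return labels, current

# Two separate "wood pieces" in a small binary image.
grid = [[1, 1, 0, 0],
        [0, 1, 0, 1],
        [0, 0, 0, 1]]
labels, n = label_components(grid)
```

Per-object visual parameters (area, bounding box, elongation) then follow from each labeled region.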
Attention and emotion: an ERP analysis of facilitated emotional stimulus processing.
Schupp, Harald T; Junghöfer, Markus; Weike, Almut I; Hamm, Alfons O
2003-06-11
Recent event-related potential studies observed an early posterior negativity (EPN) reflecting facilitated processing of emotional images. The present study explored if the facilitated processing of emotional pictures is sustained while subjects perform an explicit non-emotional attention task. EEG was recorded from 129 channels while subjects viewed a rapid continuous stream of images containing emotional pictures as well as task-related checkerboard images. As expected, explicit selective attention to target images elicited large P3 waves. Interestingly, emotional stimuli guided stimulus-driven selective encoding as reflected by augmented EPN amplitudes to emotional stimuli, in particular to stimuli of evolutionary significance (erotic contents, mutilations, and threat). These data demonstrate the selective encoding of emotional stimuli while top-down attentional control was directed towards non-emotional target stimuli.
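Event-related potential components such as the EPN and P3 are obtained by averaging stimulus-locked EEG epochs, so that time-locked activity survives while unrelated activity cancels. A minimal sketch of that averaging step (toy numbers, single channel):

```python
def erp_average(epochs):
    """Average EEG epochs (trials x samples) time point by time point."""
    n = len(epochs)
    return [sum(trial[t] for trial in epochs) / n for t in range(len(epochs[0]))]

# Two toy 3-sample epochs from the same condition.
erp = erp_average([[1.0, 2.0, 3.0], [3.0, 4.0, 5.0]])
```

Condition contrasts (e.g. emotional minus neutral amplitudes in the EPN time window) are then computed on such averaged waveforms.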
Practical quantification of necrosis in histological whole-slide images.
Homeyer, André; Schenk, Andrea; Arlt, Janine; Dahmen, Uta; Dirsch, Olaf; Hahn, Horst K
2013-06-01
Since the histological quantification of necrosis is a common task in medical research and practice, we evaluate different image analysis methods for quantifying necrosis in whole-slide images. In a practical usage scenario, we assess the impact of different classification algorithms and feature sets on both accuracy and computation time. We show how a well-chosen combination of multiresolution features and an efficient postprocessing step enables the accurate quantification of necrosis in gigapixel images in less than a minute. The results are general enough to be applied to other areas of histological image analysis as well. Copyright © 2013 Elsevier Ltd. All rights reserved.
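Once pixels are classified, the necrosis quantification itself reduces to an area fraction, and aggregating per-tile counts is what keeps gigapixel slides tractable in bounded memory. A sketch with hypothetical counts (not the paper's feature or classifier code):

```python
def necrosis_fraction(tile_counts):
    """Aggregate per-tile (necrotic_pixels, tissue_pixels) counts.

    Processing a gigapixel slide tile by tile keeps memory bounded;
    the final fraction comes from the summed counts, not per-tile ratios."""
    necrotic = sum(n for n, _ in tile_counts)
    tissue = sum(t for _, t in tile_counts)
    return necrotic / tissue if tissue else 0.0

# Two hypothetical tiles: 50/200 and 10/300 necrotic/tissue pixels.
frac = necrosis_fraction([(50, 200), (10, 300)])
```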
Skin Lesion Analysis towards Melanoma Detection Using Deep Learning Network.
Li, Yuexiang; Shen, Linlin
2018-02-11
Skin lesion diseases are a severe global health problem. Early detection of melanoma in dermoscopy images significantly increases the survival rate. However, the accurate recognition of melanoma is extremely challenging due to the following reasons: low contrast between lesions and skin, visual similarity between melanoma and non-melanoma lesions, etc. Hence, reliable automatic detection of skin tumors is very useful for increasing the accuracy and efficiency of pathologists. In this paper, we propose two deep learning methods to address three main tasks emerging in the area of skin lesion image processing, i.e., lesion segmentation (task 1), lesion dermoscopic feature extraction (task 2) and lesion classification (task 3). A deep learning framework consisting of two fully convolutional residual networks (FCRN) is proposed to simultaneously produce the segmentation result and the coarse classification result. A lesion index calculation unit (LICU) is developed to refine the coarse classification results by calculating the distance heat-map. A straightforward CNN is proposed for the dermoscopic feature extraction task. The proposed deep learning frameworks were evaluated on the ISIC 2017 dataset. Experimental results show the promising accuracies of our frameworks, i.e., 0.753 for task 1, 0.848 for task 2 and 0.912 for task 3.
Visual Search with Image Modification in Age-Related Macular Degeneration
Wiecek, Emily; Jackson, Mary Lou; Dakin, Steven C.; Bex, Peter
2012-01-01
Purpose. AMD results in loss of central vision and a dependence on low-resolution peripheral vision. While many image enhancement techniques have been proposed, there is a lack of quantitative comparison of the effectiveness of enhancement. We developed a natural visual search task that uses patients' eye movements as a quantitative and functional measure of the efficacy of image modification. Methods. Eye movements of 17 patients (mean age = 77 years) with AMD were recorded while they searched for target objects in natural images. Eight different image modification methods were implemented and included manipulations of local image or edge contrast, color, and crowding. In a subsequent task, patients ranked their preference of the image modifications. Results. Within individual participants, there was no significant difference in search duration or accuracy across eight different image manipulations. When data were collapsed across all image modifications, a multivariate model identified six significant predictors for normalized search duration including scotoma size and acuity, as well as interactions among scotoma size, age, acuity, and contrast (P < 0.05). Additionally, an analysis of image statistics showed no correlation with search performance across all image modifications. Rank ordering of enhancement methods based on participants' preference revealed a trend that participants preferred the least modified images (P < 0.05). Conclusions. There was no quantitative effect of image modification on search performance. A better understanding of low- and high-level components of visual search in natural scenes is necessary to improve future attempts at image enhancement for low vision patients. Different search tasks may require alternative image modifications to improve patient functioning and performance. PMID:22930725
DeepInfer: open-source deep learning deployment toolkit for image-guided therapy
NASA Astrophysics Data System (ADS)
Mehrtash, Alireza; Pesteie, Mehran; Hetherington, Jorden; Behringer, Peter A.; Kapur, Tina; Wells, William M.; Rohling, Robert; Fedorov, Andriy; Abolmaesumi, Purang
2017-03-01
Deep learning models have outperformed some of the previous state-of-the-art approaches in medical image analysis. Instead of using hand-engineered features, deep models attempt to automatically extract hierarchical representations at multiple levels of abstraction from the data. Therefore, deep models are usually considered to be more flexible and robust solutions for image analysis problems compared to conventional computer vision models. They have demonstrated significant improvements in computer-aided diagnosis and automatic medical image analysis applied to such tasks as image segmentation, classification and registration. However, deploying deep learning models often has a steep learning curve and requires detailed knowledge of various software packages. Thus, many deep models have not been integrated into the clinical research workflows causing a gap between the state-of-the-art machine learning in medical applications and evaluation in clinical research procedures. In this paper, we propose "DeepInfer" - an open-source toolkit for developing and deploying deep learning models within the 3D Slicer medical image analysis platform. Utilizing a repository of task-specific models, DeepInfer allows clinical researchers and biomedical engineers to deploy a trained model selected from the public registry, and apply it to new data without the need for software development or configuration. As two practical use cases, we demonstrate the application of DeepInfer in prostate segmentation for targeted MRI-guided biopsy and identification of the target plane in 3D ultrasound for spinal injections.
Investigation of Latent Traces Using Infrared Reflectance Hyperspectral Imaging
NASA Astrophysics Data System (ADS)
Schubert, Till; Wenzel, Susanne; Roscher, Ribana; Stachniss, Cyrill
2016-06-01
The detection of traces is a main task of forensics. Hyperspectral imaging is a promising method from which we expect to capture more fluorescence effects than with common forensic light sources. This paper shows that hyperspectral imaging is well suited to the analysis of latent traces and extends the classical concept to conservation of the crime scene for retrospective laboratory analysis. We examine specimens of blood, semen and saliva traces in several dilution steps, prepared on a cardboard substrate. As our key result, we successfully make latent traces visible up to a dilution factor of 1:8000. We can attribute most of the detectability to interference of electromagnetic light with the water content of the traces in the shortwave infrared region of the spectrum. In a classification task we use several dimensionality reduction methods (PCA and LDA) in combination with a maximum likelihood classifier, assuming normally distributed data. Further, we use Random Forest as a competitive approach. The classifiers retrieve the exact positions of the labelled trace preparations up to the highest dilution and determine posterior probabilities. By modelling the classification task with a Markov Random Field we are able to integrate prior information about the spatial relation of neighbouring pixel labels.
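The classification pipeline described above (dimensionality reduction followed by a maximum likelihood classifier under a normality assumption) can be sketched in Python. The toy spectra, the class names, and the unit-covariance simplification are illustrative assumptions, not the authors' actual pipeline:

```python
# Minimal sketch of maximum likelihood classification of pixel spectra,
# assuming normally distributed classes with identity covariance
# (which reduces to picking the nearest class mean).

def fit_class_means(spectra, labels):
    """Mean spectrum per class: the sufficient statistic for this model."""
    sums, counts = {}, {}
    for x, y in zip(spectra, labels):
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def classify(x, means):
    """Maximum likelihood under unit-variance Gaussians = nearest mean."""
    def neg_log_lik(mu):
        return sum((xi - mi) ** 2 for xi, mi in zip(x, mu))
    return min(means, key=lambda y: neg_log_lik(means[y]))
```

Under unit-variance Gaussians, maximizing the likelihood is equivalent to picking the nearest class mean, which keeps the sketch short; the paper's PCA/LDA step would precede this classifier.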
Task-specific image partitioning.
Kim, Sungwoong; Nowozin, Sebastian; Kohli, Pushmeet; Yoo, Chang D
2013-02-01
Image partitioning is an important preprocessing step for many state-of-the-art algorithms used for high-level computer vision tasks. Typically, partitioning is conducted without regard to the task at hand. We propose a task-specific image partitioning framework to produce a region-based image representation that leads to higher task performance than that achieved using task-oblivious partitioning frameworks or the few existing supervised partitioning frameworks. The proposed method partitions the image by means of correlation clustering, maximizing a linear discriminant function defined over a superpixel graph. The parameters of the discriminant function, which define task-specific similarity/dissimilarity among superpixels, are estimated with a structured support vector machine (S-SVM) using task-specific training data. S-SVM learning leads to better generalization, while the construction of the superpixel graph used to define the discriminant function allows a rich set of features to be incorporated to improve discriminability and robustness. We evaluate the learned task-aware partitioning algorithms on three benchmark datasets. Results show that task-aware partitioning leads to better labeling performance than the partitioning computed by state-of-the-art general-purpose and supervised partitioning algorithms. We believe the task-specific image partitioning paradigm is widely applicable to improving performance in high-level image understanding tasks.
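A minimal sketch of the clustering side of this idea, assuming superpixel similarity scores (the linear discriminant values) are already computed: a greedy union-find merge of positively scored edges, which approximates rather than exactly solves correlation clustering. The function names and toy edge weights are illustrative:

```python
def partition(n_superpixels, edges):
    """Greedy approximation to correlation clustering: merge the endpoints
    of every edge with a positive learned similarity score, processing
    edges from most to least similar. edges = [(u, v, score), ...]."""
    parent = list(range(n_superpixels))

    def find(i):
        # Union-find root lookup with path halving.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for u, v, score in sorted(edges, key=lambda e: -e[2]):
        if score > 0:
            parent[find(u)] = find(v)

    labels = [find(i) for i in range(n_superpixels)]
    remap = {}  # renumber regions 0..k-1
    return [remap.setdefault(l, len(remap)) for l in labels]
```

In the paper the scores come from S-SVM-trained features and the maximization is done globally; this greedy pass only illustrates how positive/negative pairwise scores induce a partition.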
Reinforcement learning in computer vision
NASA Astrophysics Data System (ADS)
Bernstein, A. V.; Burnaev, E. V.
2018-04-01
Nowadays, machine learning has become one of the basic technologies used to solve various computer vision tasks such as feature detection, image segmentation, object recognition and tracking. In many applications, complex systems such as robots are equipped with visual sensors from which they learn the state of the surrounding environment by solving the corresponding computer vision tasks. The solutions of these tasks are used to make decisions about possible future actions. It is therefore not surprising that, when solving computer vision tasks, we should take into account special aspects of their subsequent application in model-based predictive control. Reinforcement learning is a modern machine learning technology in which learning is carried out through interaction with the environment. In recent years, reinforcement learning has been used both for applied tasks such as the processing and analysis of visual information, and for specific computer vision problems such as filtering, extracting image features, localizing objects in scenes, and many others. The paper briefly describes reinforcement learning technology and its use for solving computer vision problems.
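As a toy illustration of the learning-through-interaction loop this abstract refers to, here is tabular Q-learning on a one-dimensional corridor; the environment is an invented stand-in for a vision-driven control task, not an example from the paper:

```python
import random

def train_q(n_states=5, episodes=200, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning: the agent must reach the rightmost state.
    Actions: 0 = left, 1 = right; reward 1 on reaching the goal."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy action selection.
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda act: q[s][act])
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Temporal-difference update toward the bootstrapped target.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q
```

After training, the greedy policy moves right from every state, which is the optimal behavior for this corridor.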
Automatic Identification and Quantification of Extra-Well Fluorescence in Microarray Images.
Rivera, Robert; Wang, Jie; Yu, Xiaobo; Demirkan, Gokhan; Hopper, Marika; Bian, Xiaofang; Tahsin, Tasnia; Magee, D Mitchell; Qiu, Ji; LaBaer, Joshua; Wallstrom, Garrick
2017-11-03
In recent studies involving NAPPA microarrays, extra-well fluorescence is used as a key measure for identifying disease biomarkers because there is evidence that it correlates better with strong antibody responses than statistical analysis of intra-spot intensity. Because this feature is not well quantified by traditional image analysis software, identification and quantification of extra-well fluorescence is performed manually, which is both time-consuming and highly susceptible to variation between raters. A system that could automate this task efficiently and effectively would greatly improve the process of data acquisition in microarray studies, thereby accelerating the discovery of disease biomarkers. In this study, we experimented with different machine learning methods, as well as novel heuristics, for identifying spots exhibiting extra-well fluorescence (rings) in microarray images and assigning each ring a grade of 1-5 based on its intensity and morphology. The sensitivity of our final system for identifying rings was 72% at 99% specificity and 98% at 92% specificity. Our system performs this task significantly faster than a human while maintaining high performance, and therefore represents a valuable tool for microarray image analysis.
XML-based scripting of multimodality image presentations in multidisciplinary clinical conferences
NASA Astrophysics Data System (ADS)
Ratib, Osman M.; Allada, Vivekanand; Dahlbom, Magdalena; Marcus, Phillip; Fine, Ian; Lapstra, Lorelle
2002-05-01
We developed multi-modality image presentation software for the display and analysis of images and related data from different imaging modalities. The software is part of a cardiac image review and presentation platform that supports integration of digital images and data from digital and analog media such as videotapes, analog x-ray films and 35 mm cine films. The software supports standard DICOM image files as well as AVI and PDF data formats. The system is integrated in a digital conferencing room that includes projection of digital and analog sources, remote videoconferencing capabilities, and an electronic whiteboard. The goals of this pilot project are to: 1) develop a new paradigm for image and data management for presentation in a clinically meaningful sequence adapted to case-specific scenarios; 2) design and implement a multi-modality review and conferencing workstation using component technology and a customizable 'plug-in' architecture to support complex review and diagnostic tasks applicable to all cardiac imaging modalities; and 3) develop an XML-based scripting model of image and data presentation for clinical review and decision making during routine clinical tasks and multidisciplinary clinical conferences.
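An XML presentation script of the kind described might be consumed as sketched below. The element and attribute names (`presentation`, `step`, `order`, `modality`, `file`) are invented for illustration and are not the authors' actual schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical presentation script: ordered steps mixing modalities.
SCRIPT = """<presentation case="cardiac-01">
  <step order="1" modality="echo" file="echo_clip.avi"/>
  <step order="2" modality="angio" file="angio_series.dcm"/>
  <step order="3" modality="report" file="summary.pdf"/>
</presentation>"""

def load_steps(xml_text):
    """Parse a presentation script into an ordered list of (modality, file)."""
    root = ET.fromstring(xml_text)
    steps = sorted(root.findall("step"), key=lambda s: int(s.get("order")))
    return [(s.get("modality"), s.get("file")) for s in steps]
```

Driving the review workstation from such a declarative script is what lets the same player present a case-specific sequence across DICOM, AVI and PDF sources.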
The National Shipbuilding Research Program Executive Summary Robotics in Shipbuilding Workshop
1981-01-01
based on technoeconomic analysis and consideration of the working environment. (3) The conceptual designs were based on application of commercial...results of our study. We identified shipbuilding tasks that should be performed by industrial robots based on technoeconomic and working-life incentives...is the TV image of the illuminated workplaces. The image is analyzed by the computer. The analysis includes noise rejection and fitting of straight
Resting-State Functional Magnetic Resonance Imaging for Language Preoperative Planning
Branco, Paulo; Seixas, Daniela; Deprez, Sabine; Kovacs, Silvia; Peeters, Ronald; Castro, São L.; Sunaert, Stefan
2016-01-01
Functional magnetic resonance imaging (fMRI) is a well-known non-invasive technique for the study of brain function. One of its most common clinical applications is preoperative language mapping, essential for the preservation of function in neurosurgical patients. Typically, fMRI is used to track task-related activity, but poor task performance and movement artifacts can be critical limitations in clinical settings. Recent advances in resting-state protocols open new possibilities for pre-surgical mapping of language, potentially overcoming these limitations. To test the feasibility of using resting-state fMRI instead of conventional active task-based protocols, we compared results from fifteen patients with brain lesions while performing a verb-to-noun generation task and while at rest. Task activity was measured using a general linear model analysis and independent component analysis (ICA). Resting-state networks were extracted using ICA and further classified in two ways: manually by an expert and by an automated template matching procedure. The results revealed that the automated classification procedure correctly identified language networks as compared to the expert manual classification. We found good overlap between task-related activity and resting-state language maps, particularly within the language regions of interest. Furthermore, resting-state language maps were as sensitive as task-related maps, and had higher specificity. Our findings suggest that resting-state protocols may be suitable to map language networks in a quick and clinically efficient way. PMID:26869899
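Overlap between task-based and resting-state maps, as reported above, is often quantified with a measure such as the Dice coefficient; the metric choice is an assumption here, since the abstract does not name one:

```python
def dice(map_a, map_b):
    """Dice coefficient between two binary maps (lists of 0/1 rows):
    2 * |A intersect B| / (|A| + |B|)."""
    inter = size_a = size_b = 0
    for row_a, row_b in zip(map_a, map_b):
        for a, b in zip(row_a, row_b):
            inter += a and b   # 1 only where both maps are active
            size_a += a
            size_b += b
    return 2 * inter / (size_a + size_b)
```

A value of 1 means the thresholded task and resting-state maps coincide exactly; 0 means they are disjoint.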
Performing particle image velocimetry using artificial neural networks: a proof-of-concept
NASA Astrophysics Data System (ADS)
Rabault, Jean; Kolaas, Jostein; Jensen, Atle
2017-12-01
Traditional programs based on feature engineering are underperforming on a steadily increasing number of tasks compared with artificial neural networks (ANNs), in particular for image analysis. Image analysis is widely used in fluid mechanics when performing particle image velocimetry (PIV) and particle tracking velocimetry (PTV), and therefore it is natural to test the ability of ANNs to perform such tasks. We report for the first time the use of convolutional neural networks (CNNs) and fully connected neural networks (FCNNs) for performing end-to-end PIV. Realistic synthetic images are used for training the networks, and several synthetic test cases are used to assess the quality of each network’s predictions and compare them with state-of-the-art PIV software. In addition, we present tests on real-world data that prove ANNs can be used not only with synthetic images but also with more noisy, imperfect images obtained in a real experimental setup. While the ANNs we present have slightly higher root mean square error than state-of-the-art cross-correlation methods, they perform better near edges and allow for higher spatial resolution than such methods. In addition, it is likely that one could, with further work, develop ANNs that perform better than the proof-of-concept we offer.
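The cross-correlation baseline that the ANNs are compared against can be sketched as an exhaustive integer-shift search over two interrogation windows; real PIV codes add sub-pixel peak fitting and windowing, which this toy version (with invented frames) omits:

```python
def best_shift(frame_a, frame_b, max_shift):
    """Return the integer (dy, dx) displacement of frame_b relative to
    frame_a that maximizes the sum of products over overlapping pixels."""
    h, w = len(frame_a), len(frame_a[0])
    best, best_score = (0, 0), float("-inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = 0.0
            for y in range(h):
                for x in range(w):
                    y2, x2 = y + dy, x + dx
                    if 0 <= y2 < h and 0 <= x2 < w:
                        score += frame_a[y][x] * frame_b[y2][x2]
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best
```

An end-to-end ANN instead regresses the displacement field directly from the image pair, which is what allows denser estimates near edges.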
Artificial intelligence in radiology.
Hosny, Ahmed; Parmar, Chintan; Quackenbush, John; Schwartz, Lawrence H; Aerts, Hugo J W L
2018-05-17
Artificial intelligence (AI) algorithms, particularly deep learning, have demonstrated remarkable progress in image-recognition tasks. Methods ranging from convolutional neural networks to variational autoencoders have found myriad applications in the medical image analysis field, propelling it forward at a rapid pace. Historically, in radiology practice, trained physicians visually assessed medical images for the detection, characterization and monitoring of diseases. AI methods excel at automatically recognizing complex patterns in imaging data and providing quantitative, rather than qualitative, assessments of radiographic characteristics. In this Opinion article, we establish a general understanding of AI methods, particularly those pertaining to image-based tasks. We explore how these methods could impact multiple facets of radiology, with a general focus on applications in oncology, and demonstrate ways in which these methods are advancing the field. Finally, we discuss the challenges facing clinical implementation and provide our perspective on how the domain could be advanced.
NASA Astrophysics Data System (ADS)
Tokareva, Victoria
2018-04-01
New-generation medicine demands better quality of analysis, increasing the amount of data collected during checkups while simultaneously decreasing the invasiveness of procedures. It thus becomes urgent not only to develop advanced modern hardware, but also to implement the special software infrastructure for using it in everyday clinical practice: so-called Picture Archiving and Communication Systems (PACS). Developing distributed PACS is a challenging task in present-day medical informatics. The paper discusses the architecture of a distributed PACS server for processing large high-quality medical images, with respect to the technical specifications of modern medical imaging hardware as well as international standards in medical imaging software. The MapReduce paradigm is proposed for image reconstruction by the server, and the details of utilizing the Hadoop framework for this task are discussed in order to make the design of the distributed PACS as ergonomic and adapted to the needs of end users as possible.
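The MapReduce pattern proposed for server-side image processing can be illustrated with a map step over horizontal image bands and a reduce step merging per-band results. The histogram task here is an invented example, not the paper's reconstruction workload:

```python
from functools import reduce

def map_tiles(image, tile_h):
    """Map step: split the image (a list of rows) into horizontal bands
    and compute a per-band partial result, here a pixel-value histogram."""
    bands = [image[i:i + tile_h] for i in range(0, len(image), tile_h)]
    partials = []
    for band in bands:
        hist = {}
        for row in band:
            for v in row:
                hist[v] = hist.get(v, 0) + 1
        partials.append(hist)
    return partials

def reduce_hists(partials):
    """Reduce step: merge the per-band histograms into one."""
    def merge(h1, h2):
        out = dict(h1)
        for k, v in h2.items():
            out[k] = out.get(k, 0) + v
        return out
    return reduce(merge, partials, {})
```

In Hadoop the map calls would run on separate nodes over separate image bands; the merge is associative, which is what makes the reduce parallelizable.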
Voyvodic, James T.; Glover, Gary H.; Greve, Douglas; Gadde, Syam
2011-01-01
Functional magnetic resonance imaging (fMRI) is based on correlating blood oxygen-level dependent (BOLD) signal fluctuations in the brain with other time-varying signals. Although the most common reference for correlation is the timing of a behavioral task performed during the scan, many other behavioral and physiological variables can also influence fMRI signals. Variations in cardiac and respiratory functions in particular are known to contribute significant BOLD signal fluctuations. Variables such as skin conduction, eye movements, and other measures that may be relevant to task performance can also be correlated with BOLD signals and can therefore be used in image analysis to differentiate multiple components in complex brain activity signals. Combining real-time recording and data management of multiple behavioral and physiological signals in a way that can be routinely used with any task stimulus paradigm is a non-trivial software design problem. Here we discuss software methods that allow users control of paradigm-specific audio–visual or other task stimuli combined with automated simultaneous recording of multi-channel behavioral and physiological response variables, all synchronized with sub-millisecond temporal accuracy. We also discuss the implementation and importance of real-time display feedback to ensure data quality of all recorded variables. Finally, we discuss standards and formats for storage of temporal covariate data and its integration into fMRI image analysis. These neuroinformatics methods have been adopted for behavioral task control at all sites in the Functional Biomedical Informatics Research Network (FBIRN) multi-center fMRI study. PMID:22232596
NASA Astrophysics Data System (ADS)
Pelikan, Erich; Vogelsang, Frank; Tolxdorff, Thomas
1996-04-01
The texture-based segmentation of x-ray images of focal bone lesions using topological maps is introduced. Texture characteristics are described by image-point correlation of feature images to feature vectors. For the segmentation, the topological map is labeled using an improved labeling strategy. Results of the technique are demonstrated on original and synthetic x-ray images and quantified with the aid of quality measures. In addition, a classifier-specific contribution analysis is applied for assessing the feature space.
Planning applications in image analysis
NASA Technical Reports Server (NTRS)
Boddy, Mark; White, Jim; Goldman, Robert; Short, Nick, Jr.
1994-01-01
We describe two interim results from an ongoing effort to automate the acquisition, analysis, archiving, and distribution of satellite earth science data. Both results are applications of Artificial Intelligence planning research to the automatic generation of processing steps for image analysis tasks. First, we have constructed a linear conditional planner (CPed), used to generate conditional processing plans. Second, we have extended an existing hierarchical planning system to make use of durations, resources, and deadlines, thus supporting the automatic generation of processing steps in time and resource-constrained environments.
Demehri, S; Muhit, A; Zbijewski, W; Stayman, J W; Yorkston, J; Packard, N; Senn, R; Yang, D; Foos, D; Thawait, G K; Fayad, L M; Chhabra, A; Carrino, J A; Siewerdsen, J H
2015-06-01
To assess visualization tasks using cone-beam CT (CBCT) compared to multi-detector CT (MDCT) for musculoskeletal extremity imaging. Ten cadaveric hands and ten knees were examined using a dedicated CBCT prototype and a clinical multi-detector CT using nominal protocols (80 kVp, 108 mAs for CBCT; 120 kVp, 300 mAs for MDCT). Soft tissue and bone visualization tasks were assessed by four radiologists using five-point satisfaction (for CBCT and MDCT individually) and five-point preference (side-by-side CBCT versus MDCT image quality comparison) rating tests. Ratings were analyzed using Kruskal-Wallis and Wilcoxon signed-rank tests, and observer agreement was assessed using the kappa statistic. Knee CBCT images were rated "excellent" or "good" (median scores 5 and 4) for "bone" and "soft tissue" visualization tasks. Hand CBCT images were rated "excellent" or "adequate" (median scores 5 and 3) for "bone" and "soft tissue" visualization tasks. Preference tests rated CBCT equivalent or superior to MDCT for bone visualization and favoured the MDCT for soft tissue visualization tasks. Intraobserver agreement for CBCT satisfaction tests was fair to almost perfect (κ ~ 0.26-0.92), and interobserver agreement was fair to moderate (κ ~ 0.27-0.54). CBCT provided excellent image quality for bone visualization and adequate image quality for soft tissue visualization tasks. • CBCT provided adequate image quality for diagnostic tasks in extremity imaging. • CBCT images were "excellent" for "bone" and "good/adequate" for "soft tissue" visualization tasks. • CBCT image quality was equivalent/superior to MDCT for bone visualization tasks.
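Observer agreement in rating studies like this one is commonly summarized with Cohen's kappa, which corrects raw percent agreement for chance. A minimal sketch of its computation (the toy ratings are invented):

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters over the same items:
    (p_observed - p_expected) / (1 - p_expected)."""
    n = len(r1)
    cats = set(r1) | set(r2)
    po = sum(a == b for a, b in zip(r1, r2)) / n
    # Chance agreement from each rater's marginal category frequencies.
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)
    return (po - pe) / (1 - pe)
```

Values near 0 indicate chance-level agreement; the study's interobserver range (κ ~ 0.27-0.54) corresponds to fair-to-moderate agreement on conventional scales.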
Gloss discrimination and eye movements
NASA Astrophysics Data System (ADS)
Phillips, Jonathan B.; Ferwerda, James A.; Nunziata, Ann
2010-02-01
Human observers are able to make fine discriminations of surface gloss. What cues are they using to perform this task? In previous studies, we identified two reflection-related cues: the contrast of the reflected image (c, contrast gloss) and the sharpness of the reflected image (d, distinctness-of-image gloss). However, these were measured for objects rendered in standard dynamic range (SDR) images with compressed highlights. In ongoing work, we are studying the effects of image dynamic range on perceived gloss, comparing high dynamic range (HDR) images with accurate reflections against SDR images with compressed reflections. In this paper, we first present the basic findings of this gloss discrimination study and then present an analysis of eye movement recordings showing where observers were looking during the gloss discrimination task. The results indicate that: 1) image dynamic range has a significant influence on perceived gloss, with surfaces presented in HDR images being seen as glossier and more discriminable than their SDR counterparts; 2) observers look at both light source highlights and environmental interreflections when judging gloss; and 3) both of these results are modulated by surface geometry and scene illumination.
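One simple proxy for the contrast-of-reflected-image cue (c) is Michelson contrast; this is an illustrative formula choice, not necessarily the measure used in the study:

```python
def michelson_contrast(luminances):
    """Michelson contrast of a set of reflected-image luminance samples:
    (Lmax - Lmin) / (Lmax + Lmin), ranging from 0 (flat) to 1."""
    lmax, lmin = max(luminances), min(luminances)
    return (lmax - lmin) / (lmax + lmin)
```

Highlight compression in SDR rendering lowers Lmax, which directly reduces this quantity; that is one way to see why dynamic range can alter perceived gloss.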
American and Greek Children's Visual Images of Scientists
NASA Astrophysics Data System (ADS)
Christidou, Vasilia; Bonoti, Fotini; Kontopoulou, Argiro
2016-08-01
This study explores American and Greek primary pupils' visual images of scientists by means of two nonverbal data collection tasks to identify possible convergences and divergences. Specifically, it aims to investigate whether their images of scientists vary according to the data collection instrument used and to gender. To this end, 91 third-grade American (N = 46) and Greek (N = 45) pupils were examined. Data collection was conducted through a drawing task based on Chambers (
3D Texture Features Mining for MRI Brain Tumor Identification
NASA Astrophysics Data System (ADS)
Rahim, Mohd Shafry Mohd; Saba, Tanzila; Nayer, Fatima; Syed, Afraz Zahra
2014-03-01
Medical image segmentation is a process to extract regions of interest and to divide an image into individually meaningful, homogeneous components. These components have a strong relationship with the objects of interest in an image. For computer-aided diagnosis and therapy, medical image segmentation is a mandatory initial step. It is a sophisticated and challenging task because of the complex nature of medical images. Indeed, successful medical image analysis is heavily dependent on segmentation accuracy. Texture is one of the major features used to identify regions of interest in an image or to classify an object. 2D texture features yield poor classification results. Hence, this paper presents 3D feature extraction using texture analysis, with SVM as the segmentation technique in the testing methodology.
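The texture features the paper builds on are typically grey-level co-occurrence (GLCM) statistics; a minimal 2D sketch (toy images, single offset, quantized grey levels) illustrates the idea, which the paper extends to 3D neighborhoods:

```python
def glcm(image, dx=1, dy=0, levels=4):
    """Grey-level co-occurrence counts for one pixel offset (dy, dx).
    image is a list of rows of integer grey levels in [0, levels)."""
    h, w = len(image), len(image[0])
    counts = [[0] * levels for _ in range(levels)]
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                counts[image[y][x]][image[y2][x2]] += 1
    return counts

def contrast(counts):
    """GLCM contrast: sum over (i, j) of (i - j)^2 * p(i, j)."""
    total = sum(sum(row) for row in counts)
    return sum((i - j) ** 2 * c / total
               for i, row in enumerate(counts) for j, c in enumerate(row))
```

Feature vectors built from such statistics (contrast, energy, homogeneity, etc.) are then fed to the classifier; the 3D variant simply accumulates co-occurrences over volumetric offsets.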
Histology image analysis for carcinoma detection and grading
He, Lei; Long, L. Rodney; Antani, Sameer; Thoma, George R.
2012-01-01
This paper presents an overview of the image analysis techniques in the domain of histopathology, specifically for the objective of automated carcinoma detection and classification. As in other biomedical imaging areas such as radiology, many computer assisted diagnosis (CAD) systems have been implemented to aid histopathologists and clinicians in cancer diagnosis and research, which attempt to significantly reduce the labor and subjectivity of traditional manual intervention with histology images. The task of automated histology image analysis is usually not simple due to the unique characteristics of histology imaging, including the variability in image preparation techniques, clinical interpretation protocols, and the complex structures and very large size of the images themselves. In this paper we discuss those characteristics, provide relevant background information about slide preparation and interpretation, and review the application of digital image processing techniques to the field of histology image analysis. In particular, emphasis is given to state-of-the-art image segmentation methods for feature extraction and disease classification. Four major carcinomas of cervix, prostate, breast, and lung are selected to illustrate the functions and capabilities of existing CAD systems. PMID:22436890
SPARX, a new environment for Cryo-EM image processing.
Hohn, Michael; Tang, Grant; Goodyear, Grant; Baldwin, P R; Huang, Zhong; Penczek, Pawel A; Yang, Chao; Glaeser, Robert M; Adams, Paul D; Ludtke, Steven J
2007-01-01
SPARX (single particle analysis for resolution extension) is a new image processing environment with a particular emphasis on transmission electron microscopy (TEM) structure determination. It includes a graphical user interface that provides a complete graphical programming environment with a novel data/process-flow infrastructure, an extensive library of Python scripts that perform specific TEM-related computational tasks, and a core library of fundamental C++ image processing functions. In addition, SPARX relies on the EMAN2 library and cctbx, the open-source computational crystallography library from PHENIX. The design of the system is such that future inclusion of other image processing libraries is a straightforward task. The SPARX infrastructure intelligently handles retention of intermediate values, even those inside programming structures such as loops and function calls. SPARX and all dependencies are free for academic use and available with complete source.
Jacob, Mithun George; Wachs, Juan Pablo; Packer, Rebecca A
2013-01-01
This paper presents a method to improve the navigation and manipulation of radiological images through a sterile hand gesture recognition interface based on attentional contextual cues. Computer vision algorithms were developed to extract intention and attention cues from the surgeon's behavior and combine them with sensory data from a commodity depth camera. The developed interface was tested in a usability experiment to assess the effectiveness of the new interface. An image navigation and manipulation task was performed, and the gesture recognition accuracy, false positives and task completion times were computed to evaluate system performance. Experimental results show that gesture interaction and surgeon behavior analysis can be used to accurately navigate, manipulate and access MRI images, and therefore this modality could replace the use of keyboard and mice-based interfaces. PMID:23250787
Image based performance analysis of thermal imagers
NASA Astrophysics Data System (ADS)
Wegner, D.; Repasi, E.
2016-05-01
Due to advances in technology, modern thermal imagers resemble sophisticated image processing systems in functionality. Advanced signal and image processing tools enclosed in the camera body extend the basic image capturing capability of thermal cameras, in order to enhance the display presentation of the captured scene or of specific scene details. Usually, the implemented methods are proprietary company expertise, distributed without extensive documentation. This makes the comparison of thermal imagers, especially those from different companies, a difficult task (or at least a very time-consuming and expensive one, e.g. requiring the execution of a field trial and/or an observer trial). For example, a thermal camera equipped with turbulence mitigation capability stands for such a closed system. The Fraunhofer IOSB has started to build up a system for testing thermal imagers by image-based methods in the lab environment. This will extend our capability of measuring the classical IR-system parameters (e.g. MTF, MTDP, etc.) in the lab. The system is set up around an IR scene projector, which is necessary for the thermal display (projection) of an image sequence for the IR camera under test. The same set of thermal test sequences can be presented to every unit under test; for turbulence mitigation tests, this could be the same turbulence sequence. During system tests, gradual variation of input parameters (e.g. thermal contrast) can be applied. First ideas on test scene selection, and on how to assemble an imaging suite (a set of image sequences) for the analysis of thermal imaging systems containing such black boxes in the image forming path, are discussed.
Kalpathy-Cramer, Jayashree; de Herrera, Alba García Seco; Demner-Fushman, Dina; Antani, Sameer; Bedrick, Steven; Müller, Henning
2014-01-01
Medical image retrieval and classification have been extremely active research topics over the past 15 years. With the ImageCLEF benchmark in medical image retrieval and classification a standard test bed was created that allows researchers to compare their approaches and ideas on increasingly large and varied data sets including generated ground truth. This article describes the lessons learned in ten evaluations campaigns. A detailed analysis of the data also highlights the value of the resources created. PMID:24746250
ERIC Educational Resources Information Center
Sommer, Iris E. C.; Aleman, Andre; Bouma, Anke; Kahn, Rene S.
2004-01-01
Sex differences in cognition are consistently reported, men excelling in most visuospatial tasks and women in certain verbal tasks. It has been hypothesized that these sex differences in cognition result from a more bilateral pattern of language representation in women than in men. This bilateral pattern of language representation in women is…
Gao, Xiao; Deng, Xiao; Wen, Xin; She, Ying; Vinke, Petra Corianne; Chen, Hong
2016-01-01
Body image distress or body dissatisfaction is one of the most common consequences of obesity and overweight. We investigated the neural bases of body image processing in overweight and average weight young women to understand whether brain regions that were previously found to be involved in processing self-reflective, perspective and affective components of body image would show different activation between two groups. Thirteen overweight (O-W group, age = 20.31±1.70 years) and thirteen average weight (A-W group, age = 20.15±1.62 years) young women underwent functional magnetic resonance imaging while performing a body image self-reflection task. Among both groups, whole-brain analysis revealed activations of a brain network related to perceptive and affective components of body image processing. ROI analysis showed a main effect of group in ACC as well as a group by condition interaction within bilateral EBA, bilateral FBA, right IPL, bilateral DLPFC, left amygdala and left MPFC. For the A-W group, simple effect analysis revealed stronger activations in Thin-Control compared to Fat-Control condition within regions related to perceptive (including bilateral EBA, bilateral FBA, right IPL) and affective components of body image processing (including bilateral DLPFC, left amygdala), as well as self-reference (left MPFC). The O-W group only showed stronger activations in Fat-Control than in Thin-Control condition within regions related to the perceptive component of body image processing (including left EBA and left FBA). Path analysis showed that in the Fat-Thin contrast, body dissatisfaction completely mediated the group difference in brain response in left amygdala across the whole sample. Our data are the first to demonstrate differences in brain response to body pictures between average weight and overweight young females involved in a body image self-reflection task. These results provide insights for understanding the vulnerability to body image distress among overweight or obese young females. PMID:27764116
Roland, Jarod L; Griffin, Natalie; Hacker, Carl D; Vellimana, Ananth K; Akbari, S Hassan; Shimony, Joshua S; Smyth, Matthew D; Leuthardt, Eric C; Limbrick, David D
2017-12-01
OBJECTIVE Cerebral mapping for surgical planning and operative guidance is a challenging task in neurosurgery. Pediatric patients are often poor candidates for many modern mapping techniques because of inability to cooperate due to their immature age, cognitive deficits, or other factors. Resting-state functional MRI (rs-fMRI) is uniquely suited to benefit pediatric patients because it is inherently noninvasive and does not require task performance or significant cooperation. Recent advances in the field have made mapping cerebral networks possible on an individual basis for use in clinical decision making. The authors present their initial experience translating rs-fMRI into clinical practice for surgical planning in pediatric patients. METHODS The authors retrospectively reviewed cases in which the rs-fMRI analysis technique was used prior to craniotomy in pediatric patients undergoing surgery in their institution. Resting-state analysis was performed using a previously trained machine-learning algorithm for identification of resting-state networks on an individual basis. Network maps were uploaded to the clinical imaging and surgical navigation systems. Patient demographic and clinical characteristics, including need for sedation during imaging and use of task-based fMRI, were also recorded. RESULTS Twenty patients underwent rs-fMRI prior to craniotomy between December 2013 and June 2016. Their ages ranged from 1.9 to 18.4 years, and 12 were male. Five of the 20 patients also underwent task-based fMRI and one underwent awake craniotomy. Six patients required sedation to tolerate MRI acquisition, including resting-state sequences. Exemplar cases are presented including anatomical and resting-state functional imaging. CONCLUSIONS Resting-state fMRI is a rapidly advancing field of study allowing for whole brain analysis by a noninvasive modality. It is applicable to a wide range of patients and effective even under general anesthesia. 
The nature of resting-state analysis precludes any need for task cooperation. These features make rs-fMRI an ideal technology for cerebral mapping in pediatric neurosurgical patients. This review of the use of rs-fMRI mapping in an initial pediatric case series demonstrates the feasibility of utilizing this technique in pediatric neurosurgical patients. The preliminary experience presented here is a first step in translating this technique to a broader clinical practice.
NASA Astrophysics Data System (ADS)
Law, Yuen C.; Tenbrinck, Daniel; Jiang, Xiaoyi; Kuhlen, Torsten
2014-03-01
Computer-assisted processing and interpretation of medical ultrasound images is one of the most challenging tasks within image analysis. Physical phenomena in ultrasonographic images, e.g., the characteristic speckle noise and shadowing effects, make the majority of standard methods from image analysis non-optimal. Furthermore, validation of adapted computer vision methods proves to be difficult due to missing ground truth information. There is no widely accepted software phantom in the community and existing software phantoms are not flexible enough to support the use of specific speckle models for different tissue types, e.g., muscle and fat tissue. In this work we propose an anatomical software phantom with a realistic speckle pattern simulation to fill this gap and provide a flexible tool for validation purposes in medical ultrasound image analysis. We discuss the generation of speckle patterns and perform statistical analysis of the simulated textures to obtain quantitative measures of the realism and accuracy regarding the resulting textures.
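A minimal sketch of the kind of speckle texture such a phantom must reproduce, assuming the common model of fully developed speckle as a Rayleigh-distributed amplitude (the authors' simulation is more elaborate and tissue-specific):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_speckle(shape, scale=1.0, rng=rng):
    """Fully developed speckle: the envelope of complex Gaussian scattering
    is Rayleigh distributed."""
    re = rng.normal(0.0, scale, shape)
    im = rng.normal(0.0, scale, shape)
    return np.hypot(re, im)  # amplitude image

patch = simulate_speckle((256, 256))
snr = patch.mean() / patch.std()
# A statistical check of the texture, as the paper advocates: Rayleigh
# speckle has a characteristic SNR of sqrt(pi/(4-pi)) ~ 1.91.
print(abs(snr - (np.pi / (4 - np.pi))**0.5) < 0.05)
```

Quantitative checks like this SNR statistic are one way to measure the "realism" of a simulated texture against the known speckle statistics.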
Clunie, David; Ulrich, Ethan; Bauer, Christian; Wahle, Andreas; Brown, Bartley; Onken, Michael; Riesmeier, Jörg; Pieper, Steve; Kikinis, Ron; Buatti, John; Beichel, Reinhard R.
2016-01-01
Background. Imaging biomarkers hold tremendous promise for precision medicine clinical applications. Development of such biomarkers relies heavily on image post-processing tools for automated image quantitation. Their deployment in the context of clinical research necessitates interoperability with the clinical systems. Comparison with the established outcomes and evaluation tasks motivate integration of the clinical and imaging data, and the use of standardized approaches to support annotation and sharing of the analysis results and semantics. We developed the methodology and tools to support these tasks in Positron Emission Tomography and Computed Tomography (PET/CT) quantitative imaging (QI) biomarker development applied to head and neck cancer (HNC) treatment response assessment, using the Digital Imaging and Communications in Medicine (DICOM®) international standard and free open-source software. Methods. Quantitative analysis of PET/CT imaging data collected on patients undergoing treatment for HNC was conducted. Processing steps included Standardized Uptake Value (SUV) normalization of the images, segmentation of the tumor using manual and semi-automatic approaches, automatic segmentation of the reference regions, and extraction of the volumetric segmentation-based measurements. Suitable components of the DICOM standard were identified to model the various types of data produced by the analysis. A developer toolkit of conversion routines and an Application Programming Interface (API) were contributed and applied to create a standards-based representation of the data. Results. DICOM Real World Value Mapping, Segmentation and Structured Reporting objects were utilized for standards-compliant representation of the PET/CT QI analysis results and relevant clinical data. A number of correction proposals to the standard were developed. The open-source DICOM toolkit (DCMTK) was improved to simplify the task of DICOM encoding by introducing new API abstractions. 
Conversion and visualization tools utilizing this toolkit were developed. The encoded objects were validated for consistency and interoperability. The resulting dataset was deposited in the QIN-HEADNECK collection of The Cancer Imaging Archive (TCIA). Supporting tools for data analysis and DICOM conversion were made available as free open-source software. Discussion. We presented a detailed investigation of the development and application of the DICOM model, as well as the supporting open-source tools and toolkits, to accommodate representation of the research data in QI biomarker development. We demonstrated that the DICOM standard can be used to represent the types of data relevant in HNC QI biomarker development, and encode their complex relationships. The resulting annotated objects are amenable to data mining applications, and are interoperable with a variety of systems that support the DICOM standard. PMID:27257542
Platiša, Ljiljana; Brantegem, Leen Van; Kumcu, Asli; Ducatelle, Richard; Philips, Wilfried
2017-01-01
Despite the current rapid advance in technologies for whole slide imaging, there is still no scientific consensus on the recommended methodology for image quality assessment of digital pathology slides. For medical images in general, it has been recommended to assess image quality in terms of doctors’ success rates in performing a specific clinical task while using the images (clinical image quality, cIQ). However, digital pathology is a new modality, and already identifying the appropriate task is difficult. In an alternative common approach, humans are asked to do a simpler task such as rating overall image quality (perceived image quality, pIQ), but that involves the risk of nonclinically relevant findings due to an unknown relationship between the pIQ and cIQ. In this study, we explored three different experimental protocols: (1) conducting a clinical task (detecting inclusion bodies), (2) rating image similarity and preference, and (3) rating the overall image quality. Additionally, within protocol 1, overall quality ratings were also collected (task-aware pIQ). The experiments were done by diagnostic veterinary pathologists in the context of evaluating the quality of hematoxylin and eosin-stained digital pathology slides of animal tissue samples under several common image alterations: additive noise, blurring, change in gamma, change in color saturation, and JPG compression. While the size of our experiments was small and prevents drawing strong conclusions, the results suggest the need to define a clinical task. Importantly, the pIQ data collected under protocols 2 and 3 did not always rank the image alterations the same as their cIQ from protocol 1, warning against using conventional pIQ to predict cIQ. 
At the same time, there was a correlation between the cIQ and task-aware pIQ ratings from protocol 1, suggesting that the clinical experiment context (set by specifying the clinical task) may affect human visual attention and bring focus to their criteria of image quality. Further research is needed to assess whether and for which purposes (e.g., preclinical testing) task-aware pIQ ratings could substitute cIQ for a given clinical task. PMID:28653011
A top-down manner-based DCNN architecture for semantic image segmentation.
Qiao, Kai; Chen, Jian; Wang, Linyuan; Zeng, Lei; Yan, Bin
2017-01-01
Given their powerful feature representation for recognition, deep convolutional neural networks (DCNNs) have been driving rapid advances in high-level computer vision tasks. However, their performance in semantic image segmentation is still not satisfactory. Based on an analysis of visual mechanisms, we conclude that purely bottom-up DCNNs are not enough, because the semantic image segmentation task requires not only recognition but also visual attention capability. In this study, superpixels containing visual attention information are introduced in a top-down manner, and an extensible architecture is proposed to improve the segmentation results of current DCNN-based methods. We employ the current state-of-the-art fully convolutional network (FCN) and FCN with conditional random field (DeepLab-CRF) as baselines to validate our architecture. Experimental results on the PASCAL VOC segmentation task qualitatively show that coarse edges and erroneous segmentation results are well improved. We also quantitatively obtain about 2%-3% intersection over union (IOU) accuracy improvement on the PASCAL VOC 2011 and 2012 test sets.
Steganalysis Techniques for Documents and Images
2005-05-01
steganography. We then illustrated the efficacy of our model using variations of LSB steganography. For binary images, we have made significant progress in...efforts have focused on two areas. The first area is LSB steganalysis for grayscale images. Here, as we had proposed (as a challenging task), we have...generalized our previous steganalysis technique of sample pair analysis to a theoretical framework for the detection of LSB steganography. The new
SIMA: Python software for analysis of dynamic fluorescence imaging data.
Kaifosh, Patrick; Zaremba, Jeffrey D; Danielson, Nathan B; Losonczy, Attila
2014-01-01
Fluorescence imaging is a powerful method for monitoring dynamic signals in the nervous system. However, analysis of dynamic fluorescence imaging data remains burdensome, in part due to the shortage of available software tools. To address this need, we have developed SIMA, an open source Python package that facilitates common analysis tasks related to fluorescence imaging. Functionality of this package includes correction of motion artifacts occurring during in vivo imaging with laser-scanning microscopy, segmentation of imaged fields into regions of interest (ROIs), and extraction of signals from the segmented ROIs. We have also developed a graphical user interface (GUI) for manual editing of the automatically segmented ROIs and automated registration of ROIs across multiple imaging datasets. This software has been designed with flexibility in mind to allow for future extension with different analysis methods and potential integration with other packages. Software, documentation, and source code for the SIMA package and ROI Buddy GUI are freely available at http://www.losonczylab.org/sima/.
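SIMA's own API is not reproduced here; the following hedged numpy sketch only illustrates the final analysis step the package automates, extracting a fluorescence time series from each segmented ROI by averaging the pixels inside its mask on every frame:

```python
import numpy as np

def extract_signals(frames, roi_masks):
    """frames: (T, H, W) movie; roi_masks: list of boolean (H, W) arrays.
    Returns an (n_rois, T) array of mean fluorescence per ROI per frame."""
    return np.array([[frame[mask].mean() for mask in roi_masks]
                     for frame in frames]).T

# Toy movie: one bright pixel whose intensity ramps up over three frames.
frames = np.zeros((3, 4, 4))
frames[:, 1, 1] = [1.0, 2.0, 3.0]
mask = np.zeros((4, 4), bool)
mask[1, 1] = True

signals = extract_signals(frames, [mask])
print(signals.tolist())  # -> [[1.0, 2.0, 3.0]]
```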
Skin Lesion Analysis towards Melanoma Detection Using Deep Learning Network
2018-01-01
Skin lesions are a severe global health problem. Early detection of melanoma in dermoscopy images significantly increases the survival rate. However, accurate recognition of melanoma is extremely challenging due to the following reasons: low contrast between lesions and skin, visual similarity between melanoma and non-melanoma lesions, etc. Hence, reliable automatic detection of skin tumors is very useful for increasing the accuracy and efficiency of pathologists. In this paper, we propose two deep learning methods to address three main tasks emerging in the area of skin lesion image processing, i.e., lesion segmentation (task 1), lesion dermoscopic feature extraction (task 2) and lesion classification (task 3). A deep learning framework consisting of two fully convolutional residual networks (FCRN) is proposed to simultaneously produce the segmentation result and the coarse classification result. A lesion index calculation unit (LICU) is developed to refine the coarse classification results by calculating a distance heat-map. A straightforward CNN is proposed for the dermoscopic feature extraction task. The proposed deep learning frameworks were evaluated on the ISIC 2017 dataset. Experimental results show the promising accuracy of our frameworks: 0.753 for task 1, 0.848 for task 2, and 0.912 for task 3. PMID:29439500
A method for the automated processing and analysis of images of ULVWF-platelet strings.
Reeve, Scott R; Abbitt, Katherine B; Cruise, Thomas D; Hose, D Rodney; Lawford, Patricia V
2013-01-01
We present a method for identifying and analysing unusually large von Willebrand factor (ULVWF)-platelet strings in noisy low-quality images. The method requires relatively inexpensive, non-specialist equipment and allows multiple users to be employed in the capture of images. Images are subsequently enhanced and analysed, using custom-written software to perform the processing tasks. The formation and properties of ULVWF-platelet strings released in in vitro flow-based assays have recently become a popular research area. Endothelial cells are incorporated into a flow chamber, chemically stimulated to induce ULVWF release and perfused with isolated platelets which are able to bind to the ULVWF to form strings. The numbers and lengths of the strings released are related to characteristics of the flow. ULVWF-platelet strings are routinely identified by eye from video recordings captured during experiments and analysed manually using basic NIH image software to determine the number of strings and their lengths. This is a laborious, time-consuming task and a single experiment, often consisting of data from four to six dishes of endothelial cells, can take 2 or more days to analyse. The method described here allows analysis of the strings to provide data such as the number and length of strings, number of platelets per string and the distance between each platelet to be found. The software reduces analysis time, and more importantly removes user subjectivity, producing highly reproducible results with an error of less than 2% when compared with detailed manual analysis.
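A hedged sketch of the core measurement, assuming string lengths can be read off a thresholded intensity profile by run-length encoding (the pixel-to-micron scale is invented, and the authors' software works on full 2D video frames):

```python
import numpy as np

def string_lengths(profile, threshold, microns_per_pixel=0.5):
    """Lengths of bright runs (candidate ULVWF-platelet strings) along a
    1D intensity profile, in microns. Scale factor is a made-up example."""
    mask = np.asarray(profile) > threshold
    # Pad with zeros so every run has a well-defined start and end edge.
    edges = np.diff(np.concatenate(([0], mask.astype(int), [0])))
    starts = np.flatnonzero(edges == 1)
    ends = np.flatnonzero(edges == -1)
    return (ends - starts) * microns_per_pixel

profile = [0, 5, 9, 8, 7, 0, 0, 6, 6, 0]   # two bright runs
print(string_lengths(profile, threshold=4))  # -> [2. 1.]
```

Automating exactly this kind of counting and measuring is what removes the user subjectivity the authors describe.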
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gang, G; Siewerdsen, J; Stayman, J
Purpose: There has been increasing interest in integrating fluence field modulation (FFM) devices with diagnostic CT scanners for dose reduction purposes. Conventional FFM strategies, however, are often either based on heuristics or the analysis of filtered-backprojection (FBP) performance. This work investigates a prospective task-driven optimization of FFM for model-based iterative reconstruction (MBIR) in order to improve imaging performance at the same total dose as conventional strategies. Methods: The task-driven optimization framework utilizes an ultra-low dose 3D scout as a patient-specific anatomical model and a mathematical formulation of the imaging task. The MBIR method investigated is quadratically penalized-likelihood reconstruction. The FFM objective function uses detectability index, d’, computed as a function of the predicted spatial resolution and noise in the image. To optimize performance throughout the object, a maxi-min objective was adopted where the minimum d’ over multiple locations is maximized. To reduce the dimensionality of the problem, FFM is parameterized as a linear combination of 2D Gaussian basis functions over horizontal detector pixels and projection angles. The coefficients of these bases are found using the covariance matrix adaptation evolution strategy (CMA-ES) algorithm. The task-driven design was compared with three other strategies proposed for FBP reconstruction for a calcification cluster discrimination task in an abdomen phantom. Results: The task-driven optimization yielded FFM that was significantly different from those designed for FBP. Comparing all four strategies, the task-based design achieved the highest minimum d’ with an 8–48% improvement, consistent with the maxi-min objective. In addition, d’ was improved to a greater extent over a larger area within the entire phantom. Conclusion: Results from this investigation suggest the need to re-evaluate conventional FFM strategies for MBIR. 
The task-based optimization framework provides a promising approach that maximizes imaging performance under the same total dose constraint.
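The fluence parameterization described above can be sketched as follows; the grid sizes, basis widths, and coefficients are invented for illustration (in the paper the coefficients would be found by CMA-ES against the maxi-min detectability objective):

```python
import numpy as np

def gaussian_basis(u, theta, cu, ct, su, st):
    """A 2D Gaussian basis function over detector position u and
    projection angle theta, centered at (cu, ct)."""
    return np.exp(-((u - cu)**2 / (2 * su**2) + (theta - ct)**2 / (2 * st**2)))

# Illustrative grid: 64 detector positions x 90 projection angles.
u = np.linspace(0, 1, 64)
theta = np.linspace(0, np.pi, 90)
U, T = np.meshgrid(u, theta, indexing="ij")

# Two basis centers and nonnegative coefficients (arbitrary here).
centers = [(0.25, np.pi / 4), (0.75, 3 * np.pi / 4)]
coeffs = np.array([1.0, 0.5])
fluence = sum(c * gaussian_basis(U, T, cu, ct, 0.15, 0.6)
              for c, (cu, ct) in zip(coeffs, centers))
print(fluence.shape)  # -> (64, 90)
```

Reducing the FFM search to a handful of basis coefficients is what makes a derivative-free optimizer like CMA-ES tractable here.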
NASA Astrophysics Data System (ADS)
Labate, Demetrio; Negi, Pooran; Ozcan, Burcin; Papadakis, Manos
2015-09-01
As advances in imaging technologies make more and more data available for biomedical applications, there is an increasing need to develop efficient quantitative algorithms for the analysis and processing of imaging data. In this paper, we introduce an innovative multiscale approach called the Directional Ratio, which is especially effective at distinguishing isotropic from anisotropic structures. This task is particularly useful in the analysis of images of neurons, the main units of the nervous system, which consist of a main cell body called the soma and many elongated processes called neurites. We analyze the theoretical properties of our method on idealized models of neurons and develop a numerical implementation of this approach for the analysis of fluorescent images of cultured neurons. We show that this algorithm is very effective for the detection of somas and the extraction of neurites in images of small circuits of neurons.
NASA Astrophysics Data System (ADS)
Florindo, João Batista
2018-04-01
This work proposes the use of Singular Spectrum Analysis (SSA) for the classification of texture images, more specifically, to enhance the performance of the Bouligand-Minkowski fractal descriptors in this task. Fractal descriptors are known to be a powerful approach to model and, in particular, identify complex patterns in natural images. Nevertheless, the multiscale analysis involved in those descriptors makes them highly correlated. Although other attempts to address this point were proposed in the literature, none of them investigated the relation between the fractal correlation and the well-established analyses employed for time series, of which SSA is one of the most powerful techniques. The proposed method was employed for the classification of benchmark texture images and the results were compared with other state-of-the-art classifiers, confirming the potential of this analysis in image classification.
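A minimal SSA sketch under the standard formulation (Hankel embedding, SVD, diagonal averaging); the window length is arbitrary and the fractal-descriptor input is replaced by a synthetic series:

```python
import numpy as np

def ssa_components(x, L):
    """Decompose series x into SSA components with window length L."""
    x = np.asarray(x, float)
    N = len(x)
    K = N - L + 1
    # L x K Hankel trajectory matrix: X[i, j] = x[i + j].
    X = np.column_stack([x[i:i + L] for i in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for k in range(len(s)):
        Xk = s[k] * np.outer(U[:, k], Vt[k])
        # Diagonal averaging (Hankelization) maps Xk back to a series:
        # average all entries with i + j = t.
        comp = np.array([np.mean(Xk[::-1].diagonal(t - (L - 1)))
                         for t in range(N)])
        comps.append(comp)
    return np.array(comps)

x = np.sin(np.linspace(0, 8 * np.pi, 100)) + 0.1 * np.arange(100)
comps = ssa_components(x, L=20)
# All components sum back to the original series exactly.
print(np.allclose(comps.sum(axis=0), x))  # -> True
```

Keeping only the leading components is what decorrelates and denoises the descriptor sequence before classification.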
NASA Astrophysics Data System (ADS)
Huber, Samuel; Dunau, Patrick; Wellig, Peter; Stein, Karin
2017-10-01
Background: In target detection, success rates depend strongly on human observer performance. Two prior studies tested the contributions of target detection algorithms and prior training sessions. The aim of this Swiss-German cooperation study was to evaluate the dependency of human observer performance on the quality of supporting image analysis algorithms. Methods: The participants were presented with 15 different video sequences. Their task was to detect all targets in the shortest possible time. Each video sequence showed a heavily cluttered simulated public area from a different viewing angle. In each video sequence, the number of avatars in the area was altered to 100, 150 and 200 subjects. The proportion of targets was kept at 10%. The number of marked targets varied from 0, 5, 10, 20 up to 40 marked subjects while keeping the positive predictive value of the detection algorithm at 20%. During the task, workload level was assessed by applying an acoustic secondary task. Detection rates and detection times for the targets were analyzed using inferential statistics. Results: The study found Target Detection Time to increase and Target Detection Rates to decrease with increasing numbers of avatars. The same is true for the Secondary Task Reaction Time (RTST), while there was no effect on Secondary Task Hit Rate. Furthermore, we found a trend toward a U-shaped correlation between the number of markings and RTST, indicating increased workload. Conclusion: The trial results may indicate useful criteria for the design of training and support of observers in observational tasks.
Learning semantic histopathological representation for basal cell carcinoma classification
NASA Astrophysics Data System (ADS)
Gutiérrez, Ricardo; Rueda, Andrea; Romero, Eduardo
2013-03-01
Diagnosis of a histopathology glass slide is a complex process that involves accurate recognition of several structures, their function in the tissue, and their relation with other structures. The way in which the pathologist represents the image content and the relations between those objects yields better and more accurate diagnoses. Therefore, an appropriate semantic representation of the image content will be useful in several analysis tasks such as cancer classification, tissue retrieval and histopathological image analysis, among others. Nevertheless, automatically recognizing those structures and extracting their inner semantic meaning are still very challenging tasks. In this paper we introduce a new semantic representation that allows us to describe histopathological concepts suitable for classification. The approach herein identifies local concepts using a dictionary learning approach, i.e., the algorithm learns the most representative atoms from a set of randomly sampled patches, and then models the spatial relations among them by counting the co-occurrence between atoms, while penalizing the spatial distance. The proposed approach was compared with a bag-of-features representation in a tissue classification task. For this purpose, 240 histological microscopical fields of view, 24 per tissue class, were collected. Those images fed a Support Vector Machine classifier per class, using 120 images as the training set and the remaining ones for testing, maintaining the same proportion of each concept in the training and test sets. The classification results, averaged over 100 random partitions of training and test sets, show that our approach is on average more sensitive than the bag-of-features representation by almost 6%.
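The co-occurrence step described above can be sketched as follows, assuming patch atom labels and patch centers are already available (dictionary learning itself is omitted, and the Gaussian form and width of the distance penalty are invented):

```python
import numpy as np

def cooccurrence(labels, centers, n_atoms, sigma=50.0):
    """Count atom co-occurrences between all patch pairs, weighted by a
    decaying function of the spatial distance between patch centers."""
    C = np.zeros((n_atoms, n_atoms))
    centers = np.asarray(centers, float)
    for i in range(len(labels)):
        for j in range(i + 1, len(labels)):
            d = np.linalg.norm(centers[i] - centers[j])
            w = np.exp(-d**2 / (2 * sigma**2))  # spatial distance penalty
            C[labels[i], labels[j]] += w
            C[labels[j], labels[i]] += w
    return C

labels = [0, 1, 0]
centers = [(0, 0), (10, 0), (300, 0)]  # two nearby patches, one far away
C = cooccurrence(labels, centers, n_atoms=2)
# Nearby atoms 0 and 1 co-occur strongly; the distant 0-0 pair barely counts.
print(C[0, 1] > C[0, 0])  # -> True
```

The flattened matrix C (one per image) is the kind of relation-aware descriptor that a per-class SVM could then consume.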
HabEx Optical Telescope Concepts: Design and Performance Analysis
NASA Astrophysics Data System (ADS)
Stahl, H. Philip; NASA MSFC HabEx Telescope Design Team
2018-01-01
The Habitable-Exoplanet Imaging Mission (HabEx) engineering study team has been tasked by NASA with developing a compelling and feasible exoplanet direct imaging concept as part of the 2020 Decadal Survey. This paper summarizes design concepts for two off-axis unobscured telescope concepts: a 4-meter monolithic aperture and a 6-meter segmented aperture. HabEx telescopes are designed for launch vehicle accommodation. Analysis includes prediction of on-orbit dynamic structural and thermal optical performance.
Influence of Averaging Preprocessing on Image Analysis with a Markov Random Field Model
NASA Astrophysics Data System (ADS)
Sakamoto, Hirotaka; Nakanishi-Ohno, Yoshinori; Okada, Masato
2018-02-01
This paper describes our investigations into the influence of averaging preprocessing on the performance of image analysis. Averaging preprocessing involves a trade-off: image averaging is often undertaken to reduce noise while the number of image data available for image analysis is decreased. We formulated a process of generating image data by using a Markov random field (MRF) model to achieve image analysis tasks such as image restoration and hyper-parameter estimation by a Bayesian approach. According to the notions of Bayesian inference, posterior distributions were analyzed to evaluate the influence of averaging. There are three main results. First, we found that the performance of image restoration with a predetermined value for hyper-parameters is invariant regardless of whether averaging is conducted. We then found that the performance of hyper-parameter estimation deteriorates due to averaging. Our analysis of the negative logarithm of the posterior probability, which is called the free energy based on an analogy with statistical mechanics, indicated that the confidence of hyper-parameter estimation remains higher without averaging. Finally, we found that when the hyper-parameters are estimated from the data, the performance of image restoration worsens as averaging is undertaken. We conclude that averaging adversely influences the performance of image analysis through hyper-parameter estimation.
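A hedged one-dimensional Gaussian-MRF sketch of the restoration task studied above; the chain prior and the hyper-parameter value are illustrative choices, not the paper's model. With observation y = x + noise and a smoothness prior coupling neighbors, the posterior mean solves (I + lam·Lap) x = y, where lam plays the role of the hyper-parameter whose estimation the paper analyzes:

```python
import numpy as np

def restore(y, lam):
    """Posterior-mean restoration under a Gaussian chain MRF prior."""
    n = len(y)
    # Graph Laplacian of a 1D chain with free boundaries.
    Lap = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    Lap[0, 0] = Lap[-1, -1] = 1
    return np.linalg.solve(np.eye(n) + lam * Lap, np.asarray(y, float))

rng = np.random.default_rng(1)
clean = np.sin(np.linspace(0, 2 * np.pi, 50))
noisy = clean + rng.normal(0, 0.3, 50)
restored = restore(noisy, lam=2.0)

err_noisy = np.mean((noisy - clean)**2)
err_rest = np.mean((restored - clean)**2)
print(err_rest < err_noisy)  # smoothing reduces the error on this example
```

Averaging several noisy realizations before restoring is the preprocessing trade-off the paper studies: it lowers the noise level but leaves fewer samples from which to estimate lam.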
Meyer, Georg F; Spray, Amy; Fairlie, Jo E; Uomini, Natalie T
2014-01-01
Current neuroimaging techniques with high spatial resolution constrain participant motion so that many natural tasks cannot be carried out. The aim of this paper is to show how a time-locked correlation-analysis of cerebral blood flow velocity (CBFV) lateralization data, obtained with functional TransCranial Doppler (fTCD) ultrasound, can be used to infer cerebral activation patterns across tasks. In a first experiment we demonstrate that the proposed analysis method results in data that are comparable with the standard Lateralization Index (LI) for within-task comparisons of CBFV patterns, recorded during cued word generation (CWG) at two difficulty levels. In the main experiment we demonstrate that the proposed analysis method shows correlated blood-flow patterns for two different cognitive tasks that are known to draw on common brain areas, CWG, and Music Synthesis. We show that CBFV patterns for Music and CWG are correlated only for participants with prior musical training. CBFV patterns for tasks that draw on distinct brain areas, the Tower of London and CWG, are not correlated. The proposed methodology extends conventional fTCD analysis by including temporal information in the analysis of cerebral blood-flow patterns to provide a robust, non-invasive method to infer whether common brain areas are used in different cognitive tasks. It complements conventional high resolution imaging techniques.
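The time-locked correlation idea can be sketched with synthetic signals; the epoch structure, noise levels, and shared activation template are invented for illustration:

```python
import numpy as np

def task_profile(epochs):
    """Average the event-locked CBFV lateralization over epochs."""
    return np.mean(epochs, axis=0)

def profile_correlation(epochs_a, epochs_b):
    """Correlate the averaged time-locked profiles of two tasks."""
    a, b = task_profile(epochs_a), task_profile(epochs_b)
    return np.corrcoef(a, b)[0, 1]

rng = np.random.default_rng(3)
t = np.linspace(0, 10, 200)
template = np.exp(-(t - 4)**2)  # shared activation time course

# Two tasks drawing on common areas share the template; a third does not.
task1 = template + rng.normal(0, 0.2, (30, 200))
task2 = 0.8 * template + rng.normal(0, 0.2, (30, 200))
unrelated = rng.normal(0, 0.2, (30, 200))

print(profile_correlation(task1, task2) >
      profile_correlation(task1, unrelated))  # -> True
```

Correlated blood-flow profiles across tasks, as here, are the signature the method uses to infer shared cerebral activation.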
Analysis of 3-D Tongue Motion from Tagged and Cine Magnetic Resonance Images
ERIC Educational Resources Information Center
Xing, Fangxu; Woo, Jonghye; Lee, Junghoon; Murano, Emi Z.; Stone, Maureen; Prince, Jerry L.
2016-01-01
Purpose: Measuring tongue deformation and internal muscle motion during speech has been a challenging task because the tongue deforms in 3 dimensions, contains interdigitated muscles, and is largely hidden within the vocal tract. In this article, a new method is proposed to analyze tagged and cine magnetic resonance images of the tongue during…
Application of LANDSAT TM images to assess circulation and dispersion in coastal lagoons
NASA Technical Reports Server (NTRS)
Kjerfve, B.; Jensen, J. R.; Magill, K. E.
1986-01-01
The main objectives are formulated around a four-pronged work approach, consisting of tasks related to: image processing and analysis of LANDSAT Thematic Mapper imagery; numerical modeling of circulation and dispersion; hydrographic and spectral radiation field sampling/ground truth data collection; and special efforts to focus the investigation on turbid coastal/estuarine fronts.
The Images and Emotions of Bilingual Chinese Readers: A Dual Coding Analysis.
ERIC Educational Resources Information Center
Steffensen, Margaret S.; Goetz, Ernest T.; Cheng, Xiaoguang
1999-01-01
Investigates the nonverbal aspects of bilingual reading with 24 Chinese students who rated text segments for strength of imagery and emotional response. Provides insights into how the bilingual mind accomplishes the task of transforming images on a page into a message that allows the reader to enter and live in a created world. (NH)
Ketcha, M D; de Silva, T; Han, R; Uneri, A; Goerres, J; Jacobson, M; Vogt, S; Kleinszig, G; Siewerdsen, J H
2017-02-11
In image-guided procedures, image acquisition is often performed primarily for the task of geometrically registering information from another image dataset, rather than detection/visualization of a particular feature. While the ability to detect a particular feature in an image has been studied extensively with respect to image quality characteristics (noise, resolution) and is an ongoing, active area of research, comparatively little has been accomplished to relate such image quality characteristics to registration performance. To establish such a framework, we derived Cramer-Rao lower bounds (CRLB) for registration accuracy, revealing the underlying dependencies on image variance and gradient strength. The CRLB was analyzed as a function of image quality factors (in particular, dose) for various similarity metrics and compared to registration accuracy using CT images of an anthropomorphic head phantom at various simulated dose levels. Performance was evaluated in terms of root mean square error (RMSE) of the registration parameters. Analysis of the CRLB shows two primary dependencies: 1) noise variance (related to dose); and 2) sum of squared image gradients (related to spatial resolution and image content). Comparison of the measured RMSE to the CRLB showed that, for the best registration method, the RMSE achieved the CRLB to within an efficiency factor of 0.21, and optimal estimators followed the predicted inverse proportionality between registration performance and radiation dose. Analysis of the CRLB for image registration is an important step toward understanding and evaluating an intraoperative imaging system with respect to a registration task. While the CRLB is optimistic in absolute performance, it reveals a basis for relating the performance of registration estimators as a function of noise content and may be used to guide acquisition parameter selection (e.g., dose) for purposes of intraoperative registration.
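The dependencies the bound identifies can be illustrated with the textbook single-parameter case, estimating a 1D translation of a signal in i.i.d. Gaussian noise, where the CRLB on the shift is sigma^2 / sum(gradient^2) (a simplification, not the paper's full multi-parameter bound):

```python
import numpy as np

def crlb_shift(signal, sigma):
    """CRLB on a 1D translation estimate: noise variance over the sum of
    squared sample-to-sample gradients of the noise-free signal."""
    grad = np.gradient(np.asarray(signal, float))
    return sigma**2 / np.sum(grad**2)

x = np.linspace(-3, 3, 200)
edge = np.tanh(3 * x)   # high-gradient feature: registers precisely
flat = 0.1 * x          # weak gradients: registers poorly
print(crlb_shift(edge, 0.1) < crlb_shift(flat, 0.1))  # -> True
```

The same expression shows the dose dependence: doubling the noise standard deviation (roughly, quartering the dose) quadruples the bound on registration variance.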
NASA Technical Reports Server (NTRS)
Cecil, R. W.; White, R. A.; Szczur, M. R.
1972-01-01
The IDAMS Processor is a package of task routines and support software that performs convolution filtering, image expansion, fast Fourier transformation, and other operations on a digital image tape. A unique task control card for that program, together with any necessary parameter cards, selects each processing technique to be applied to the input image. A variable number of tasks can be selected for execution by including the proper task and parameter cards in the input deck. An executive maintains control of the run; it initiates execution of each task in turn and handles any necessary error processing.
Vessel Segmentation in Retinal Images Using Multi-scale Line Operator and K-Means Clustering.
Saffarzadeh, Vahid Mohammadi; Osareh, Alireza; Shadgar, Bita
2014-04-01
Detecting blood vessels is a vital task in retinal image analysis. The task is more challenging in the presence of bright and dark lesions in retinal images. Here, a method is proposed to detect vessels in both normal and abnormal retinal fundus images based on their linear features. First, the negative impact of bright lesions is reduced by using K-means segmentation in a perceptive space. Then, a multi-scale line operator is utilized to detect vessels while ignoring some of the dark lesions, which have intensity structures different from the line-shaped vessels in the retina. The proposed algorithm is tested on the two publicly available STARE and DRIVE databases. The performance of the method is measured by calculating the area under the receiver operating characteristic curve and the segmentation accuracy. The proposed method achieves localization accuracies of 0.9483 and 0.9387 on STARE and DRIVE, respectively.
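The line-operator idea can be sketched minimally; this single-scale, four-orientation version (function `line_operator`) is our simplification of the paper's multi-scale operator, not its implementation:

```python
import numpy as np

# Sketch of a basic line operator: the response at a pixel is the mean
# intensity along the strongest oriented line through it minus the mean of
# its square neighborhood, so line-shaped (vessel-like) structures score
# high while blob-shaped lesions score low.
def line_operator(img, length=5):
    h, w = img.shape
    half = length // 2
    out = np.zeros_like(img, dtype=float)
    for y in range(half, h - half):
        for x in range(half, w - half):
            window = img[y - half:y + half + 1, x - half:x + half + 1]
            # candidate lines: horizontal, vertical, and the two diagonals
            lines = [window[half, :], window[:, half],
                     np.diag(window), np.diag(np.fliplr(window))]
            out[y, x] = max(l.mean() for l in lines) - window.mean()
    return out

# A dark image with one bright horizontal "vessel": the operator responds
# strongly on the line and not at all on the flat background.
img = np.zeros((11, 11)); img[5, :] = 1.0
resp = line_operator(img)
```

The multi-scale version would repeat this at several line lengths and combine the responses.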
Visual affective classification by combining visual and text features.
Liu, Ningning; Wang, Kai; Jin, Xin; Gao, Boyang; Dellandréa, Emmanuel; Chen, Liming
2017-01-01
Affective analysis of images in social networks has drawn much attention, and the texts surrounding images have proven to provide valuable semantic meaning about image content, which can hardly be represented by low-level visual features. In this paper, we propose a novel approach for the visual affective classification (VAC) task. This approach combines visual representations with novel text features through a fusion scheme based on Dempster-Shafer (D-S) Evidence Theory. Specifically, we not only investigate different types of visual features and fusion methods for VAC, but also propose textual features to effectively capture emotional semantics from the short text associated with images based on word similarity. Experiments are conducted on three publicly available databases: the International Affective Picture System (IAPS), the Artistic Photos and the MirFlickr Affect set. The results demonstrate that the proposed approach combining visual and textual features provides promising results for the VAC task.
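Dempster's rule of combination, the core of the D-S fusion scheme mentioned above, can be sketched on a two-class frame; the mass values and labels below are illustrative, not taken from the paper:

```python
# Minimal sketch of Dempster's rule of combination on a two-class frame of
# discernment: each source assigns mass to "pos", "neg", and the full frame
# "pos|neg" (ignorance); combination multiplies masses, discards conflicting
# pairs, and renormalizes by (1 - conflict).
def dempster_combine(m1, m2):
    keys = ["pos", "neg", "pos|neg"]
    def meet(a, b):
        if a == "pos|neg": return b
        if b == "pos|neg": return a
        return a if a == b else None  # None = empty intersection (conflict)
    combined = {k: 0.0 for k in keys}
    conflict = 0.0
    for a in keys:
        for b in keys:
            inter = meet(a, b)
            if inter is None:
                conflict += m1[a] * m2[b]
            else:
                combined[inter] += m1[a] * m2[b]
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

visual = {"pos": 0.6, "neg": 0.1, "pos|neg": 0.3}   # visual-feature classifier
textual = {"pos": 0.5, "neg": 0.2, "pos|neg": 0.3}  # text-feature classifier
fused = dempster_combine(visual, textual)
# Agreement of both sources on "pos" strengthens it beyond either alone.
```

This is why D-S fusion suits the visual+text setting: each modality can express ignorance explicitly instead of being forced to a hard probability.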
Bodzon-Kulakowska, Anna; Marszalek-Grabska, Marta; Antolak, Anna; Drabik, Anna; Kotlinska, Jolanta H; Suder, Piotr
Data analysis from mass spectrometry imaging (MSI) experiments is a very complex task. Most of the software packages devoted to this purpose are designed by the mass spectrometer manufacturers and, thus, are not freely available. Laboratories developing their own MS-imaging sources usually do not have access to the commercial software, and they must rely on freely available programs. The most recognized ones are BioMap, developed by Novartis under Interactive Data Language (IDL), and Datacube, developed by the Dutch Foundation for Fundamental Research on Matter (FOM-Amolf). These two systems were used here for the analysis of images obtained from rat brain tissues subjected to morphine influence, and their capabilities were compared in terms of ease of use and the quality of the obtained results.
NASA Astrophysics Data System (ADS)
Augustine, Kurt E.; Holmes, David R., III; Hanson, Dennis P.; Robb, Richard A.
2006-03-01
One of the greatest challenges for a software engineer is to create a complex application that is comprehensive enough to be useful to a diverse set of users, yet focused enough for individual tasks to be carried out efficiently with minimal training. This "powerful yet simple" paradox is particularly prevalent in advanced medical imaging applications. Recent research in the Biomedical Imaging Resource (BIR) at Mayo Clinic has been directed toward development of an imaging application framework that provides powerful image visualization/analysis tools in an intuitive, easy-to-use interface. It is based on two concepts very familiar to physicians - Cases and Workflows. Each case is associated with a unique patient and a specific set of routine clinical tasks, or a workflow. Each workflow is comprised of an ordered set of general-purpose modules which can be re-used for each unique workflow. Clinicians help describe and design the workflows, and then are provided with an intuitive interface to both patient data and analysis tools. Since most of the individual steps are common to many different workflows, the use of general-purpose modules reduces development time and results in applications that are consistent, stable, and robust. While the development of individual modules may reflect years of research by imaging scientists, new customized workflows based on the new modules can be developed extremely fast. If a powerful, comprehensive application is difficult to learn and complicated to use, it will be unacceptable to most clinicians. Clinical image analysis tools must be intuitive and effective or they simply will not be used.
Arctic sea-ice variations from time-lapse passive microwave imagery
Campbell, W.J.; Ramseier, R.O.; Zwally, H.J.; Gloersen, P.
1980-01-01
This paper presents: (1) a short historical review of the passive microwave research on sea ice which established the observational and theoretical base permitting the interpretation of the first passive microwave images of Earth obtained by the Nimbus-5 ESMR; (2) the construction of a time-lapse motion picture film of a 16-month set of serial ESMR images to aid in the formidable data analysis task; and (3) a few of the most significant findings resulting from an early analysis of these data, using selected ESMR images to illustrate these findings.
Information based universal feature extraction
NASA Astrophysics Data System (ADS)
Amiri, Mohammad; Brause, Rüdiger
2015-02-01
In many real-world image-based pattern recognition tasks, the extraction and use of task-relevant features are the most crucial part of the diagnosis. In the standard approach, features mostly remain task-specific, although humans who perform such a task always use the same image features, trained in early childhood. It seems that universal feature sets exist, but they have not yet been systematically found. In our contribution, we tried to find those universal image feature sets that are valuable for most image-related tasks. In our approach, we trained a neural network on natural and non-natural images of objects and background, using a Shannon information-based algorithm and learning constraints. The goal was to extract those features that give the most valuable information for the classification of visual objects and hand-written digits. This gives a good start and a performance increase for other image learning tasks, implementing a transfer learning approach. As a result, we found that we could indeed extract features that are valid in all three kinds of tasks.
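The Shannon information criterion behind the approach can be illustrated with a toy mutual-information ranking of discrete features; this sketch is our own and does not reproduce the paper's training algorithm:

```python
import numpy as np

# Toy sketch of information-based feature selection: rank candidate features
# by their Shannon mutual information with the class label and keep the most
# informative ones. I(X;Y) = sum_xy p(x,y) log2( p(x,y) / (p(x) p(y)) ).
def mutual_information(x, y):
    # x, y: discrete label arrays of equal length
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            px, py = np.mean(x == xv), np.mean(y == yv)
            if pxy > 0:
                mi += pxy * np.log2(pxy / (px * py))
    return mi

labels = np.array([0, 0, 1, 1])
informative = np.array([0, 0, 1, 1])    # feature identical to the label: 1 bit
uninformative = np.array([0, 1, 0, 1])  # feature independent of the label: 0 bits
```

A feature carrying maximal information about the class would score 1 bit here; an independent one scores 0, so ranking by this quantity prefers class-relevant features.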
Takamura, T; Hanakawa, T
2017-07-01
Although functional magnetic resonance imaging (fMRI) has long been used to assess task-related brain activity in neuropsychiatric disorders, it has not yet become a widely available clinical tool. Resting-state fMRI (rs-fMRI) has been the subject of recent attention in the fields of basic and clinical neuroimaging research. This method enables investigation of the functional organization of the brain and alterations of resting-state networks (RSNs) in patients with neuropsychiatric disorders. Rs-fMRI does not require participants to perform a demanding task, in contrast to task fMRI, which often requires participants to follow complex instructions. Rs-fMRI therefore has a number of advantages over task fMRI for application with neuropsychiatric patients: while task fMRI is straightforward to apply to healthy participants, it is difficult to apply to patients with psychiatric and neurological disorders, because they may have difficulty performing demanding cognitive tasks. Here, we review the basic methodology and analysis techniques relevant to clinical studies, and the clinical applications of the technique for examining neuropsychiatric disorders, focusing on mood disorders (major depressive disorder and bipolar disorder) and dementia (Alzheimer's disease and mild cognitive impairment).
Metric Learning for Hyperspectral Image Segmentation
NASA Technical Reports Server (NTRS)
Bue, Brian D.; Thompson, David R.; Gilmore, Martha S.; Castano, Rebecca
2011-01-01
We present a metric learning approach to improve the performance of unsupervised hyperspectral image segmentation. Unsupervised spatial segmentation can assist both user visualization and automatic recognition of surface features. Analysts can use spatially-continuous segments to decrease noise levels and/or localize feature boundaries. However, existing segmentation methods use task-agnostic measures of similarity. Here we learn task-specific similarity measures from training data, improving segment fidelity to classes of interest. Multiclass Linear Discriminant Analysis produces a linear transform that optimally separates a labeled set of training classes. This defines a distance metric that generalizes to new scenes, enabling graph-based segmentation that emphasizes key spectral features. We describe tests based on data from the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) in which learned metrics improve segment homogeneity with respect to mineralogical classes.
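The core idea, learning a linear transform from labeled data and measuring distance in the transformed space, can be sketched with a two-class Fisher discriminant; the toy spectra and the `fisher_direction` helper are ours, not the paper's multiclass formulation:

```python
import numpy as np

# Sketch: learn a discriminant direction from labeled training spectra, then
# use distance along that direction as the task-specific similarity. Fisher's
# direction is w = Sw^{-1} (m1 - m0), with Sw the within-class scatter.
def fisher_direction(X0, X1):
    m0, m1 = X0.mean(0), X1.mean(0)
    Sw = np.cov(X0.T) + np.cov(X1.T)   # within-class scatter
    w = np.linalg.solve(Sw, m1 - m0)   # optimal separating direction
    return w / np.linalg.norm(w)

rng = np.random.default_rng(0)
# Two toy "mineral classes" separated along the first spectral band only;
# the second band is pure high-variance noise.
X0 = rng.normal([0.0, 0.0], [0.1, 1.0], size=(200, 2))
X1 = rng.normal([1.0, 0.0], [0.1, 1.0], size=(200, 2))
w = fisher_direction(X0, X1)

def learned_distance(a, b):
    # metric that emphasizes the class-separating band and suppresses noise
    return abs(np.dot(w, a) - np.dot(w, b))
```

A graph-based segmenter would then use `learned_distance` in place of a task-agnostic Euclidean distance between neighboring pixels.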
Gennari, Silvia P; Millman, Rebecca E; Hymers, Mark; Mattys, Sven L
2018-06-12
Perceiving speech while performing another task is a common challenge in everyday life. How the brain controls resource allocation during speech perception remains poorly understood. Using functional magnetic resonance imaging (fMRI), we investigated the effect of cognitive load on speech perception by examining brain responses of participants performing a phoneme discrimination task and a visual working memory task simultaneously. The visual task involved holding either a single meaningless image in working memory (low cognitive load) or four different images (high cognitive load). Performing the speech task under high load, compared to low load, resulted in decreased activity in pSTG/pMTG and increased activity in visual occipital cortex and two regions known to contribute to visual attention regulation: the superior parietal lobule (SPL) and the paracingulate and anterior cingulate gyrus (PaCG, ACG). Critically, activity in PaCG/ACG was correlated with performance in the visual task and with activity in pSTG/pMTG: Increased activity in PaCG/ACG was observed for individuals with poorer visual performance and with decreased activity in pSTG/pMTG. Moreover, activity in a pSTG/pMTG seed region showed psychophysiological interactions with areas of the PaCG/ACG, with stronger interaction in the high-load than the low-load condition. These findings show that the acoustic analysis of speech is affected by the demands of a concurrent visual task and that the PaCG/ACG plays a role in allocating cognitive resources to concurrent auditory and visual information.
National Defense Center of Excellence for Industrial Metrology and 3D Imaging
2012-10-18
validation rather than mundane data-reduction/analysis tasks. Indeed, the new financial and technical resources being brought to bear by integrating CT...of extremely fast axial scanners. By replacing the single-spot detector by a detector array, a three-dimensional image is acquired by one depth scan...the number of acquired voxels per complete two-dimensional or three-dimensional image, the axial and lateral resolution, the depth range, the
Display management subsystem, version 1: A user's eye view
NASA Technical Reports Server (NTRS)
Parker, Dolores
1986-01-01
The structure and application functions of the Display Management Subsystem (DMS) are described. The DMS, a subsystem of the Transportable Applications Executive (TAE), was designed to provide a device-independent interface for an image processing and display environment. The system is callable by C and FORTRAN applications, portable to accommodate different image analysis terminals, and easily expandable to meet local needs. Generic applications are also available for performing many image processing tasks.
Wójcicki, Tomasz; Nowicki, Michał
2016-01-01
The article presents a selected area of research and development concerning the methods of material analysis based on the automatic image recognition of the investigated metallographic sections. The objectives of the analyses of the materials for gas nitriding technology are described. The methods of the preparation of nitrided layers, the steps of the process and the construction and operation of devices for gas nitriding are given. We discuss the possibility of using the methods of digital images processing in the analysis of the materials, as well as their essential task groups: improving the quality of the images, segmentation, morphological transformations and image recognition. The developed analysis model of the nitrided layers formation, covering image processing and analysis techniques, as well as selected methods of artificial intelligence are presented. The model is divided into stages, which are formalized in order to better reproduce their actions. The validation of the presented method is performed. The advantages and limitations of the developed solution, as well as the possibilities of its practical use, are listed.
Image quality enhancement for skin cancer optical diagnostics
NASA Astrophysics Data System (ADS)
Bliznuks, Dmitrijs; Kuzmina, Ilona; Bolocko, Katrina; Lihachev, Alexey
2017-12-01
This research presents image quality analysis and enhancement proposals in the biophotonics area. The sources of image problems are reviewed and analyzed. The problems with the most impact in biophotonics are analyzed in terms of a specific biophotonic task: skin cancer diagnostics. The results point out that the main problem for skin cancer analysis is uneven skin illumination. Since it is often not possible to prevent illumination problems, the paper proposes an image post-processing algorithm based on low-frequency filtering. Practical results show an improvement in diagnostic results after using the proposed filter. Moreover, the filter does not reduce the quality of diagnostic results for images without illumination defects. The current filtering algorithm requires empirical tuning of the filter parameters. Further work is needed to test the algorithm in other biophotonic applications and to propose automatic filter parameter selection.
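A minimal form of such low-frequency filtering is to estimate the slowly varying illumination with a heavy blur and divide it out; this homomorphic-style sketch (functions `box_blur`, `correct_illumination`) is our assumption about the general approach, not the paper's tuned algorithm:

```python
import numpy as np

# Sketch of illumination correction by low-frequency filtering: estimate the
# slowly varying illumination field with a separable box blur, then divide
# it out, leaving the high-frequency skin texture roughly intact.
def box_blur(img, k):
    # separable box blur via 1-D convolutions, edge-padded, 'same' size
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    kernel = np.ones(k) / k
    p = np.apply_along_axis(lambda r: np.convolve(r, kernel, "valid"), 1, p)
    p = np.apply_along_axis(lambda c: np.convolve(c, kernel, "valid"), 0, p)
    return p

def correct_illumination(img, k=15):
    illum = box_blur(img, k)
    return img / (illum + 1e-6)  # epsilon avoids division by zero

# A pure illumination ramp should flatten to roughly constant intensity.
ramp = np.outer(np.linspace(1.0, 2.0, 64), np.ones(64))
flat = correct_illumination(ramp)
```

Division (rather than subtraction) models illumination as multiplicative, which is the usual assumption for reflectance imaging.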
Analysis on the application of background parameters on remote sensing classification
NASA Astrophysics Data System (ADS)
Qiao, Y.
Mapping crop cultivation acreage accurately, monitoring crop growth dynamically, and forecasting yield are important applications of remote sensing to agriculture. During the 8th 5-Year Plan period, the task of yield estimation using remote sensing technology for the main crops in major production regions in China was a subtopic of the national research task titled "Study on Application of Remote Sensing Technology". In the 21st century, in a movement launched by the Chinese Ministry of Agriculture to bring high technology to farming production, remote sensing has been fully applied to crop growth monitoring and yield forecasting. Later, in 2001, the Chinese Ministry of Agriculture entrusted the Northern China Center of Agricultural Remote Sensing with forecasting the yield of main crops such as wheat, maize and rice on short notice, to supply information to government decision makers. The present paper is a report on this task. It describes the application of background parameters in image recognition, classification and mapping, with a focus on geoscience theory, ecological features and cartographic objects and scale; the study of phenology to select the optimal image time for classification of ground objects; the analysis of optimal waveband composition; and the application of a background database to spatial information recognition. Research based on knowledge of background parameters is indispensable for improving the accuracy of image classification and the quality of mapping, and this work won a science and technology achievement award from the Chinese Ministry of Agriculture. Keywords: spatial image; classification; background parameter
van Ruitenbeek, Peter; Serbruyns, Leen; Solesio-Jofre, Elena; Meesen, Raf; Cuypers, Koen; Swinnen, Stephan P
2017-01-01
Declines in both cortical grey matter and bimanual coordination performance are evident in healthy ageing. However, the relationship between ageing, bimanual performance, and grey matter loss remains unclear, particularly across the whole adult lifespan. Therefore, participants (N = 93, range 20-80 years) performed a complex Bimanual Tracking Task, and structural brain images were obtained using magnetic resonance imaging. Analyses revealed that age correlated negatively with task performance. Voxel-based morphometry analysis revealed that age was associated with grey matter declines in task-relevant cortical areas and that grey matter in these areas was negatively associated with task performance. However, no evidence for a mediating effect of grey matter in age-related bimanual performance decline was observed. We propose a new hypothesis that functional compensation may account for the observed absence of mediation, which is in line with the observed pattern of increased inter-individual variance in performance with age.
fMRI activation patterns in an analytic reasoning task: consistency with EEG source localization
NASA Astrophysics Data System (ADS)
Li, Bian; Vasanta, Kalyana C.; O'Boyle, Michael; Baker, Mary C.; Nutter, Brian; Mitra, Sunanda
2010-03-01
Functional magnetic resonance imaging (fMRI) is used to model brain activation patterns associated with various perceptual and cognitive processes as reflected by the hemodynamic (BOLD) response. While many sensory and motor tasks are associated with relatively simple activation patterns in localized regions, higher-order cognitive tasks may produce activity in many different brain areas involving complex neural circuitry. We applied a recently proposed probabilistic independent component analysis technique (PICA) to determine the true dimensionality of the fMRI data and used EEG localization to identify the common activated patterns (mapped as Brodmann areas) associated with a complex cognitive task like analytic reasoning. Our preliminary study suggests that a hybrid GLM/PICA analysis may reveal additional regions of activation (beyond simple GLM) that are consistent with electroencephalography (EEG) source localization patterns.
Ideal AFROC and FROC observers.
Khurd, Parmeshwar; Liu, Bin; Gindi, Gene
2010-02-01
Detection of multiple lesions in images is a medically important task, and free-response receiver operating characteristic (FROC) analysis and its variants, such as alternative FROC (AFROC) analysis, are commonly used to quantify performance in such tasks. However, ideal observers that optimize FROC or AFROC performance metrics have not yet been formulated in the general case. If available, such ideal observers may prove valuable for imaging system optimization and for the design of computer-aided diagnosis techniques for lesion detection in medical images. In this paper, we derive ideal AFROC and FROC observers. They are ideal in that they maximize, among all decision strategies, the area, or any partial area, under the associated AFROC or FROC curve. Calculation of observer performance for these ideal observers is computationally quite complex. We can reduce this complexity by considering forms of these observers that use false positive reports derived from signal-absent images only. We also consider a Bayes risk analysis for the multiple-signal detection task with an appropriate definition of costs. A general decision strategy that minimizes Bayes risk is derived. With particular cost constraints, this general decision strategy reduces to the decision strategy associated with the ideal AFROC or FROC observer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smitherman, C; Chen, B; Samei, E
2014-06-15
Purpose: This work involved a comprehensive modeling of task-based performance of CT across a wide range of protocols. The approach was used for optimization and consistency of dose and image quality within a large multi-vendor clinical facility. Methods: 150 adult protocols from the Duke University Medical Center were grouped into sub-protocols with similar acquisition characteristics. A size based image quality phantom (Duke Mercury Phantom) was imaged using these sub-protocols for a range of clinically relevant doses on two CT manufacturer platforms (Siemens, GE). The images were analyzed to extract task-based image quality metrics such as the Task Transfer Function (TTF), Noise Power Spectrum, and Az based on designer nodule task functions. The data were analyzed in terms of the detectability of a lesion size/contrast as a function of dose, patient size, and protocol. A graphical user interface (GUI) was developed to predict image quality and dose to achieve a minimum level of detectability. Results: Image quality trends with variations in dose, patient size, and lesion contrast/size were evaluated and calculated data behaved as predicted. The GUI proved effective to predict the Az values representing radiologist confidence for a targeted lesion, patient size, and dose. As an example, an abdomen pelvis exam for the GE scanner, with a task size/contrast of 5-mm/50-HU, and an Az of 0.9 requires a dose of 4.0, 8.9, and 16.9 mGy for patient diameters of 25, 30, and 35 cm, respectively. For a constant patient diameter of 30 cm, the minimum detected lesion size at those dose levels would be 8.4, 5, and 3.9 mm, respectively. Conclusion: The designed CT protocol optimization platform can be used to evaluate minimum detectability across dose levels and patient diameters. The method can be used to improve individual protocols as well as to improve protocol consistency across CT scanners.
A survey on deep learning in medical image analysis.
Litjens, Geert; Kooi, Thijs; Bejnordi, Babak Ehteshami; Setio, Arnaud Arindra Adiyoso; Ciompi, Francesco; Ghafoorian, Mohsen; van der Laak, Jeroen A W M; van Ginneken, Bram; Sánchez, Clara I
2017-12-01
Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks. Concise overviews are provided of studies per application area: neuro, retinal, pulmonary, digital pathology, breast, cardiac, abdominal, musculoskeletal. We end with a summary of the current state-of-the-art, a critical discussion of open challenges and directions for future research.
Multimodal Randomized Functional MR Imaging of the Effects of Methylene Blue in the Human Brain.
Rodriguez, Pavel; Zhou, Wei; Barrett, Douglas W; Altmeyer, Wilson; Gutierrez, Juan E; Li, Jinqi; Lancaster, Jack L; Gonzalez-Lima, Francisco; Duong, Timothy Q
2016-11-01
Purpose To investigate the sustained-attention and memory-enhancing neural correlates of the oral administration of methylene blue in the healthy human brain. Materials and Methods The institutional review board approved this prospective, HIPAA-compliant, randomized, double-blinded, placebo-controlled clinical trial, and all patients provided informed consent. Twenty-six subjects (age range, 22-62 years) were enrolled. Functional magnetic resonance (MR) imaging was performed with a psychomotor vigilance task (sustained attention) and delayed match-to-sample tasks (short-term memory) before and 1 hour after administration of low-dose methylene blue or a placebo. Cerebrovascular reactivity effects were also measured with the carbon dioxide challenge, in which a 2 × 2 repeated-measures analysis of variance was performed with a drug (methylene blue vs placebo) and time (before vs after administration of the drug) as factors to assess drug × time between group interactions. Multiple comparison correction was applied, with cluster-corrected P < .05 indicating a significant difference. Results Administration of methylene blue increased response in the bilateral insular cortex during a psychomotor vigilance task (Z = 2.9-3.4, P = .01-.008) and functional MR imaging response during a short-term memory task involving the prefrontal, parietal, and occipital cortex (Z = 2.9-4.2, P = .03-.0003). Methylene blue was also associated with a 7% increase in correct responses during memory retrieval (P = .01). Conclusion Low-dose methylene blue can increase functional MR imaging activity during sustained attention and short-term memory tasks and enhance memory retrieval. Online supplemental material is available for this article.
Lack of sex effect on brain activity during a visuomotor response task: functional MR imaging study.
Mikhelashvili-Browner, Nina; Yousem, David M; Wu, Colin; Kraut, Michael A; Vaughan, Christina L; Oguz, Kader Karli; Calhoun, Vince D
2003-03-01
As more individuals are enrolled in clinical functional MR imaging (fMRI) studies, an understanding of how sex may influence fMRI-measured brain activation is critical. We used fixed- and random-effects models to study the influence of sex on fMRI patterns of brain activation during a simple visuomotor reaction time task in a group of 26 age-matched men and women. We evaluated the right visual, left visual, left primary motor, left supplementary motor, and left anterior cingulate areas. Volumes of activation did not significantly differ between the groups in any defined regions. Analysis of variance failed to show any significant correlations between sex and volumes of brain activation in any location studied. Mean percentage signal-intensity changes for all locations were similar between men and women. A two-way t test of brain activation in men and women, performed as a part of random-effects modeling, showed no significant difference at any site. Our results suggest that sex has little influence on fMRI brain activation during performance of this simple reaction-time task. The need to control for sex effects is not critical in the analysis of this task with fMRI.
ERIC Educational Resources Information Center
El-Gazzar, Abdel-Latif I.
The relative effectiveness of digital versus photographic images was examined with 96 college students as subjects. A 2x2 balanced factorial design was employed to test eight hypotheses. The four groups were (1) digitized black and white; (2) digitized pseudocolor; (3) photographic black and white; and (4) photographic realistic color. Findings…
Fast interactive exploration of 4D MRI flow data
NASA Astrophysics Data System (ADS)
Hennemuth, A.; Friman, O.; Schumann, C.; Bock, J.; Drexl, J.; Huellebrand, M.; Markl, M.; Peitgen, H.-O.
2011-03-01
1- or 2-directional MRI blood flow mapping sequences are an integral part of standard MR protocols for diagnosis and therapy control in heart diseases. Recent progress in rapid MRI has made it possible to acquire volumetric, 3-directional cine images in reasonable scan time. In addition to flow and velocity measurements relative to arbitrarily oriented image planes, the analysis of 3-dimensional trajectories enables the visualization of flow patterns, local features of flow trajectories or possible paths into specific regions. The anatomical and functional information allows for advanced hemodynamic analysis in different application areas like stroke risk assessment, congenital and acquired heart disease, aneurysms or abdominal collaterals and cranial blood flow. The complexity of the 4D MRI flow datasets and the flow related image analysis tasks makes the development of fast comprehensive data exploration software for advanced flow analysis a challenging task. Most existing tools address only individual aspects of the analysis pipeline such as pre-processing, quantification or visualization, or are difficult to use for clinicians. The goal of the presented work is to provide a software solution that supports the whole image analysis pipeline and enables data exploration with fast intuitive interaction and visualization methods. The implemented methods facilitate the segmentation and inspection of different vascular systems. Arbitrary 2- or 3-dimensional regions for quantitative analysis and particle tracing can be defined interactively. Synchronized views of animated 3D path lines, 2D velocity or flow overlays and flow curves offer a detailed insight into local hemodynamics. The application of the analysis pipeline is shown for 6 cases from clinical practice, illustrating its usefulness for different clinical questions. Initial user tests show that the software is intuitive to learn and even inexperienced users achieve good results within reasonable processing times.
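Particle tracing of the kind used for path-line visualization can be sketched with explicit Euler integration through a velocity field; this 2-D steady-flow toy (function `trace`) is our illustration, not the software's implementation:

```python
import numpy as np

# Sketch of particle tracing: integrate a seed point through a velocity
# field with explicit Euler steps; the resulting polyline is the path line
# that flow-visualization tools render in 3D.
def trace(seed, velocity, steps=100, dt=0.1):
    path = [np.asarray(seed, float)]
    for _ in range(steps):
        path.append(path[-1] + dt * velocity(path[-1]))
    return np.array(path)

# Solid-body rotation: traced particles should orbit at (nearly) constant
# radius; the small Euler drift shrinks with the step size dt.
omega = 1.0
rotation = lambda p: omega * np.array([-p[1], p[0]])
path = trace([1.0, 0.0], rotation, steps=50, dt=0.01)
```

Production tools typically use higher-order integrators (e.g. Runge-Kutta) and trilinear interpolation of the measured velocity volume, but the control flow is the same.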
NASA Astrophysics Data System (ADS)
Irshad, Mehreen; Muhammad, Nazeer; Sharif, Muhammad; Yasmeen, Mussarat
2018-04-01
Conventionally, cardiac MR image analysis is performed manually. Automatic analysis can replace the monotonous task of examining massive amounts of data to assess the global and regional function of the cardiac left ventricle (LV). This task is performed using MR images to calculate analytic cardiac parameters such as end-systolic volume, end-diastolic volume, ejection fraction, and myocardial mass. These analytic parameters depend upon genuine delineation of the epicardial, endocardial, papillary muscle, and trabeculation contours. In this paper, we propose an automatic segmentation method using the sum of absolute differences technique to localize the left ventricle. Blind morphological operations are proposed to segment and detect the LV contours of the epicardium and endocardium automatically. We evaluate the proposed work on the benchmark Sunnybrook dataset. Contours of the epicardium and endocardium are compared quantitatively to determine contour accuracy, and high matching values are observed. The similarity (overlap) between the automatic segmentation and the expert's ground truth analysis is high, with an index value of 91.30%. The proposed method for automatic segmentation gives better performance relative to existing techniques in terms of accuracy.
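The sum-of-absolute-differences localization step can be sketched as exhaustive template matching; the synthetic image and the `sad_locate` helper are illustrative only, not the paper's pipeline:

```python
import numpy as np

# Sketch of localization by the sum of absolute differences (SAD): slide a
# template over the image and keep the position with the smallest SAD, the
# matching principle used to localize the left ventricle.
def sad_locate(image, template):
    th, tw = template.shape
    h, w = image.shape
    best, best_pos = np.inf, (0, 0)
    for y in range(h - th + 1):
        for x in range(w - tw + 1):
            sad = np.abs(image[y:y + th, x:x + tw] - template).sum()
            if sad < best:
                best, best_pos = sad, (y, x)
    return best_pos

img = np.zeros((20, 20)); img[8:12, 5:9] = 1.0   # synthetic bright "ventricle" blob
template = np.ones((4, 4))                        # bright-region template
```

In practice the template would come from an atlas or a previous frame, and the search would be restricted to a region of interest for speed.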
NASA Technical Reports Server (NTRS)
Phatak, A. V.; Karmali, M. S.
1983-01-01
This study was devoted to an investigation of the feasibility of applying advanced image processing techniques to enhance radar image characteristics that are pertinent to the pilot's navigation and guidance task. Millimeter (95 GHz) wave radar images for the overwater (i.e., offshore oil rigs) and overland (Heliport) scenario were used as a data base. The purpose of the study was to determine the applicability of image enhancement and scene analysis algorithms to detect and improve target characteristics (i.e., manmade objects such as buildings, parking lots, cars, roads, helicopters, towers, landing pads, etc.) that would be helpful to the pilot in determining his own position/orientation with respect to the outside world and assist him in the navigation task. Results of this study show that significant improvements in the raw radar image may be obtained using two dimensional image processing algorithms. In the overwater case, it is possible to remove the ocean clutter by thresholding the image data, and furthermore to extract the target boundary as well as the tower and catwalk locations using noise cleaning (e.g., median filter) and edge detection (e.g., Sobel operator) algorithms.
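The clutter-thresholding, median-filtering, and Sobel edge-detection steps mentioned above can be sketched as follows; the small numpy implementations and the toy "radar" array are illustrative stand-ins, not the study's actual 95 GHz data or pipeline.

```python
import numpy as np

def median3(img):
    """3x3 median filter (edge-padded), e.g. to clean residual speckle."""
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    stack = [p[i:i+h, j:j+w] for i in range(3) for j in range(3)]
    return np.median(np.stack(stack), axis=0)

def sobel_mag(img):
    """Gradient magnitude using 3x3 Sobel kernels (cross-correlation)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    p = np.pad(img, 1, mode='edge')
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(3):
        for j in range(3):
            win = p[i:i+h, j:j+w]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.hypot(gx, gy)

# Threshold away low-amplitude "ocean clutter", then extract the target boundary.
rng = np.random.default_rng(0)
radar = rng.uniform(0.0, 1.0, (10, 10))   # clutter
radar[4:7, 4:7] += 10.0                   # bright "oil rig" target
clean = np.where(radar > 5.0, radar, 0.0)
edges = sobel_mag(median3(clean)) > 1.0
print(edges.any(), bool(edges[0, 0]))     # True False
```

Edges appear only around the bright target; the thresholded clutter region produces no edge response.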
Learning deep similarity in fundus photography
NASA Astrophysics Data System (ADS)
Chudzik, Piotr; Al-Diri, Bashir; Caliva, Francesco; Ometto, Giovanni; Hunter, Andrew
2017-02-01
Similarity learning is one of the most fundamental tasks in image analysis. The ability to extract similar images in the medical domain as part of content-based image retrieval (CBIR) systems has been researched for many years. The vast majority of methods used in CBIR systems are based on hand-crafted feature descriptors. The approximation of a similarity mapping for medical images is difficult due to the wide variety of pixel-level structures of interest. In fundus photography (FP) analysis, a subtle difference in, e.g., lesion and vessel shape and size can result in a different diagnosis. In this work, we demonstrated how to learn a similarity function for image patches derived directly from FP image data without the need for manually designed feature descriptors. We used a convolutional neural network (CNN) with a novel architecture adapted for similarity learning to accomplish this task. Furthermore, we explored and studied multiple CNN architectures. We show that our method can approximate the similarity between FP patches more efficiently and accurately than state-of-the-art feature descriptors, including SIFT and SURF, using a publicly available dataset. Finally, we observe that our approach, which is purely data-driven, learns that features such as vessel calibre and orientation are important discriminative factors, which resembles the way humans reason about similarity. To the best of the authors' knowledge, this is the first attempt to approximate a visual similarity mapping in FP.
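One common way to train such a patch-similarity network is with a contrastive pairwise loss; the minimal numpy sketch below shows only the loss computation on made-up embedding vectors (the paper's actual CNN architecture and training procedure are not reproduced here).

```python
import numpy as np

def contrastive_loss(e1, e2, same, margin=1.0):
    """Pairwise loss for similarity embeddings: pull matching patch
    embeddings together, push non-matching ones at least `margin` apart."""
    d = float(np.linalg.norm(e1 - e2))
    return d ** 2 if same else max(0.0, margin - d) ** 2

# Hypothetical 2-D "embeddings" of three patches: a and b depict the
# same structure, c a different one.
a, b = np.array([0.10, 0.20]), np.array([0.12, 0.19])
c = np.array([0.50, 0.50])
print(contrastive_loss(a, b, True) < contrastive_loss(a, c, False))  # True
```

A matching pair that is already close incurs almost no loss, while a non-matching pair inside the margin is penalized, which is what drives the embedding to separate dissimilar patches.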
Kiyuna, Asanori; Kise, Norimoto; Hiratsuka, Munehisa; Kondo, Shunsuke; Uehara, Takayuki; Maeda, Hiroyuki; Ganaha, Akira; Suzuki, Mikio
2017-05-01
Spasmodic dysphonia (SD) is considered a focal dystonia. However, the detailed pathophysiology of SD remains unclear, despite the detection of abnormal activity in several brain regions. The aim of this study was to clarify the pathophysiological background of SD. This is a case-control study. Both task-related brain activity measured by functional magnetic resonance imaging by reading the five-digit numbers and resting-state functional connectivity (FC) measured by 150 T2-weighted echo planar images acquired without any task were investigated in 12 patients with adductor SD and in 16 healthy controls. The patients with SD showed significantly higher task-related brain activation in the left middle temporal gyrus, left thalamus, bilateral primary motor area, bilateral premotor area, bilateral cerebellum, bilateral somatosensory area, right insula, and right putamen compared with the controls. Region of interest voxel FC analysis revealed many FC changes within the cerebellum-basal ganglia-thalamus-cortex loop in the patients with SD. Of the significant connectivity changes between the patients with SD and the controls, the FC between the left thalamus and the left caudate nucleus was significantly correlated with clinical parameters in SD. The higher task-related brain activity in the insula and cerebellum was consistent with previous neuroimaging studies, suggesting that these areas are one of the unique characteristics of phonation-induced brain activity in SD. Based on FC analysis and their significant correlations with clinical parameters, the basal ganglia network plays an important role in the pathogenesis of SD. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
Kozora, E; Uluğ, A M; Erkan, D; Vo, A; Filley, C M; Ramon, G; Burleson, A; Zimmerman, R; Lockshin, M D
2016-11-01
Standardized cognitive tests and functional magnetic resonance imaging (fMRI) studies of systemic lupus erythematosus (SLE) patients demonstrate deficits in working memory and executive function. These neurobehavioral abnormalities are not well studied in antiphospholipid syndrome, which may occur independently of or together with SLE. This study compares an fMRI paradigm involving motor skills, working memory, and executive function in SLE patients without antiphospholipid antibody (aPL) (the SLE group), aPL-positive non-SLE patients (the aPL-positive group), and controls. Brain MRI, fMRI, and standardized cognitive assessment results were obtained from 20 SLE, 20 aPL-positive, and 10 healthy female subjects with no history of neuropsychiatric abnormality. Analysis of fMRI data showed no differences in performance across groups on bilateral motor tasks. When analysis of variance was used, significant group differences were found in 2 executive function tasks (word generation and word rhyming) and in a working memory task (N-Back). Patients positive for aPL demonstrated higher activation in bilateral frontal, temporal, and parietal cortices compared to controls during working memory and executive function tasks. SLE patients also demonstrated bilateral frontal and temporal activation during working memory and executive function tasks. Compared to controls, both aPL-positive and SLE patients had elevated cortical activation, primarily in the frontal lobes, during tasks involving working memory and executive function. These findings are consistent with cortical overactivation as a compensatory mechanism for early white matter neuropathology in these disorders. © 2016, American College of Rheumatology.
Task-based measures of image quality and their relation to radiation dose and patient risk
Barrett, Harrison H.; Myers, Kyle J.; Hoeschen, Christoph; Kupinski, Matthew A.; Little, Mark P.
2015-01-01
The theory of task-based assessment of image quality is reviewed in the context of imaging with ionizing radiation, and objective figures of merit (FOMs) for image quality are summarized. The variation of the FOMs with the task, the observer and especially with the mean number of photons recorded in the image is discussed. Then various standard methods for specifying radiation dose are reviewed and related to the mean number of photons in the image and hence to image quality. Current knowledge of the relation between local radiation dose and the risk of various adverse effects is summarized, and some graphical depictions of the tradeoffs between image quality and risk are introduced. Then various dose-reduction strategies are discussed in terms of their effect on task-based measures of image quality. PMID:25564960
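A representative task-based figure of merit from this literature is the Hotelling-observer detectability for a known signal, which in one standard form (notation assumed, not taken from this abstract) reads:

```latex
{d'}^2 = \Delta\bar{\mathbf{g}}^{\mathsf{T}} \, \mathbf{K}_{\mathbf{g}}^{-1} \, \Delta\bar{\mathbf{g}}
```

where $\Delta\bar{\mathbf{g}}$ is the difference of the class mean images and $\mathbf{K}_{\mathbf{g}}$ is the image covariance matrix; the dependence on the mean number of recorded photons enters through both terms, which is what links image quality to dose.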
Measuring saliency in images: which experimental parameters for the assessment of image quality?
NASA Astrophysics Data System (ADS)
Fredembach, Clement; Woolfe, Geoff; Wang, Jue
2012-01-01
Predicting which areas of an image are perceptually salient or attended to has become an essential pre-requisite of many computer vision applications. Because observers are notoriously unreliable in remembering where they look a posteriori, and because asking where they look while observing the image necessarily influences the results, ground truth about saliency and visual attention has to be obtained by gaze tracking methods. From the early work of Buswell and Yarbus to the most recent forays in computer vision there has been, perhaps unfortunately, little agreement on standardisation of eye tracking protocols for measuring visual attention. As the number of parameters involved in experimental methodology can be large, their individual influence on the final results is not well understood. Consequently, the performance of saliency algorithms, when assessed by correlation techniques, varies greatly across the literature. In this paper, we concern ourselves with the problem of image quality. Specifically: where people look when judging images. We show that in this case, the performance gap between existing saliency prediction algorithms and experimental results is significantly larger than otherwise reported. To understand this discrepancy, we first devise an experimental protocol that is adapted to the task of measuring image quality. In a second step, we compare our experimental parameters with those of existing methods and show that a lot of the variability can directly be ascribed to these differences in experimental methodology and choice of variables. In particular, the choice of a task, e.g., judging image quality vs. free viewing, has a great impact on measured saliency maps, suggesting that even for a mildly cognitive task, ground truth obtained by free viewing does not adapt well. Careful analysis of the prior art also reveals that systematic bias can occur depending on instrumental calibration and the choice of test images.
We conclude this work by proposing a set of parameters, tasks and images that can be used to compare the various saliency prediction methods in a manner that is meaningful for image quality assessment.
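The correlation techniques mentioned above typically score a predicted saliency map against a fixation-derived density map; a minimal sketch of one common score, the Pearson correlation coefficient, is shown below (the map sizes and random data are illustrative only).

```python
import numpy as np

def saliency_cc(pred, fix):
    """Pearson correlation between a predicted saliency map and a
    (smoothed) fixation density map -- one common benchmark score."""
    p = (pred - pred.mean()) / pred.std()
    f = (fix - fix.mean()) / fix.std()
    return (p * f).mean()

rng = np.random.default_rng(1)
fix = rng.random((16, 16))
print(round(saliency_cc(fix, fix), 3))   # 1.0 (a perfect prediction scores 1)
```

Because the score depends on how the fixation map was collected (task, calibration, image set), the same algorithm can receive very different correlation values across protocols, which is the variability the paper analyzes.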
The effect of multispectral image fusion enhancement on human efficiency.
Bittner, Jennifer L; Schill, M Trent; Mohd-Zaid, Fairul; Blaha, Leslie M
2017-01-01
The visual system can be highly influenced by changes to visual presentation. Thus, numerous techniques have been developed to augment imagery in an attempt to improve human perception. The current paper examines the potential impact of one such enhancement, multispectral image fusion, where imagery captured in varying spectral bands (e.g., visible, thermal, night vision) is algorithmically combined to produce an output to strengthen visual perception. We employ ideal observer analysis over a series of experimental conditions to (1) establish a framework for testing the impact of image fusion over the varying aspects surrounding its implementation (e.g., stimulus content, task) and (2) examine the effectiveness of fusion on human information processing efficiency in a basic application. We used a set of rotated Landolt C images captured with a number of individual sensor cameras and combined across seven traditional fusion algorithms (e.g., Laplacian pyramid, principal component analysis, averaging) in a 1-of-8 orientation task. We found that, contrary to the idea of fused imagery always producing a greater impact on perception, single-band imagery can be just as influential. Additionally, efficiency data were shown to fluctuate based on sensor combination instead of fusion algorithm, suggesting the need for examining multiple factors to determine the success of image fusion. Our use of ideal observer analysis, a popular technique from the vision sciences, provides not only a standard for testing fusion in direct relation to the visual system but also allows for comparable examination of fusion across its associated problem space of application.
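The efficiency measured by ideal observer analysis is conventionally the squared ratio of human to ideal detectability (a standard definition from the vision-science literature, not a formula quoted from this abstract):

```latex
\eta = \left( \frac{d'_{\text{human}}}{d'_{\text{ideal}}} \right)^{2}
```

An enhancement such as image fusion helps only if it raises $d'_{\text{human}}$ relative to the information-limited $d'_{\text{ideal}}$ for the same stimulus, which is why efficiency, rather than raw accuracy, is the appropriate comparison across sensor combinations.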
Mathematics education graduate students' understanding of trigonometric ratios
NASA Astrophysics Data System (ADS)
Yiǧit Koyunkaya, Melike
2016-10-01
This study describes mathematics education graduate students' understanding of relationships between sine and cosine of two base angles in a right triangle. To explore students' understanding of these relationships, an elaboration of Skemp's views of instrumental and relational understanding using Tall and Vinner's concept image and concept definition was developed. Nine students volunteered to complete three paper and pencil tasks designed to elicit evidence of understanding and three students among these nine students volunteered for semi-structured interviews. As a result of fine-grained analysis of the students' responses to the tasks, the evidence of concept image and concept definition as well as instrumental and relational understanding of trigonometric ratios was found. The unit circle and a right triangle were identified as students' concept images, and the mnemonic was determined as their concept definition for trigonometry, specifically for trigonometric ratios. It is also suggested that students had instrumental understanding of trigonometric ratios while they were less flexible to act on trigonometric ratio tasks and had limited relational understanding. Additionally, the results indicate that graduate students' understanding of the concept of angle mediated their understanding of trigonometry, specifically trigonometric ratios.
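The relationship between the sine and cosine of the two base angles targeted by the tasks follows from those angles being complementary in a right triangle:

```latex
\alpha + \beta = 90^{\circ} \quad\Rightarrow\quad \sin\alpha = \cos\beta, \qquad \cos\alpha = \sin\beta
```

A relational understanding connects this identity to the triangle itself (the side opposite one base angle is adjacent to the other), whereas an instrumental understanding stops at the memorized rule.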
Human visual system-based smoking event detection
NASA Astrophysics Data System (ADS)
Odetallah, Amjad D.; Agaian, Sos S.
2012-06-01
Human action analysis (e.g., smoking, eating, phoning) is an important task in various application domains such as video surveillance, video retrieval, and human-computer interaction systems. Smoke detection is a crucial task in many video surveillance applications and could greatly raise the level of safety of urban areas, public parks, airplanes, hospitals, and schools. The detection task is challenging since there is no prior knowledge about the object's shape, texture, and color. In addition, its visual features change under different lighting and weather conditions. This paper presents a new scheme for a system that detects human smoking events, or small smoke, in a sequence of images. In the developed system, motion detection and background subtraction are combined with motion-region saving, skin-based image segmentation, and smoke-based image segmentation to capture potential smoke regions, which are further analyzed to decide on the occurrence of smoking events. Experimental results show the effectiveness of the proposed approach. Moreover, the developed method is capable of detecting small smoking events involving uncertain actions with various cigarette sizes, colors, and shapes.
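The background-subtraction step of such a pipeline can be sketched in a few lines; the threshold value and the tiny synthetic frames below are illustrative, not the system's actual parameters.

```python
import numpy as np

def motion_mask(frame, background, thresh=25):
    """Foreground mask by background subtraction: flag pixels whose
    absolute difference from the background model exceeds a threshold."""
    return np.abs(frame.astype(int) - background.astype(int)) > thresh

bg = np.full((6, 6), 100, dtype=np.uint8)
fr = bg.copy()
fr[2:4, 2:4] = 180                 # a small moving region, e.g. rising smoke
print(motion_mask(fr, bg).sum())   # 4
```

The resulting candidate regions would then be filtered by the skin-based and smoke-based segmentation stages the abstract describes.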
Gang, Grace J; Siewerdsen, Jeffrey H; Stayman, J Webster
2017-12-01
This paper presents a joint optimization of dynamic fluence field modulation (FFM) and regularization in quadratic penalized-likelihood reconstruction that maximizes a task-based imaging performance metric. We adopted a task-driven imaging framework for prospective design of the imaging parameters. A maxi-min objective function was adopted to maximize the minimum detectability index throughout the image. The optimization algorithm alternates between FFM (represented by low-dimensional basis functions) and local regularization (including the regularization strength and directional penalty weights). The task-driven approach was compared with three FFM strategies commonly proposed for FBP reconstruction (as well as a task-driven TCM strategy) for a discrimination task in an abdomen phantom. The task-driven FFM assigned more fluence to less attenuating anteroposterior views and yielded approximately constant fluence behind the object. The optimal regularization was almost uniform throughout the image. Furthermore, the task-driven FFM strategy redistributed fluence across detector elements in order to prescribe more fluence to the more attenuating central region of the phantom. Compared with all strategies, the task-driven FFM strategy not only improved the minimum detectability index by at least 17.8%, but also yielded higher detectability over a large area inside the object. The optimal FFM was highly dependent on the amount of regularization, indicating the importance of a joint optimization. Sample reconstructions of simulated data generally support the performance estimates based on the computed detectability index. The improvements in detectability show the potential of the task-driven imaging framework to improve imaging performance at a fixed dose, or, equivalently, to provide a similar level of performance at reduced dose.
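The maxi-min objective described above can be written schematically as follows (notation assumed, not taken from the paper: $\omega$ the low-dimensional FFM basis coefficients, $\beta$ the local regularization parameters, $j$ indexing locations in the image):

```latex
\{\hat{\omega}, \hat{\beta}\} \;=\; \operatorname*{arg\,max}_{\omega,\,\beta} \;\; \min_{j} \; d'_{j}(\omega, \beta)
```

Maximizing the worst-case detectability, rather than an average, is what drives the optimizer to prescribe extra fluence behind the most attenuating parts of the object.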
System design and implementation of digital-image processing using computational grids
NASA Astrophysics Data System (ADS)
Shen, Zhanfeng; Luo, Jiancheng; Zhou, Chenghu; Huang, Guangyu; Ma, Weifeng; Ming, Dongping
2005-06-01
As a special type of digital image, remotely sensed images play increasingly important roles in our daily lives. Because of the enormous amounts of data involved, and the difficulties of data processing and transfer, an important issue for current computer and geo-science experts is developing internet technology to implement rapid remotely sensed image processing. Computational grids are able to solve this problem effectively. These networks of computer workstations enable the sharing of data and resources, and are used by computer experts to resolve imbalances of network resources and lopsided usage. In China, computational grids combined with spatial-information-processing technology have formed a new technology: spatial-information grids. In the field of remotely sensed images, spatial-information grids work more effectively for network computing, data processing, resource sharing, task cooperation, and so on. This paper focuses mainly on the application of computational grids to digital-image processing. Firstly, we describe the architecture of digital-image processing on the basis of computational grids; its implementation is then discussed in detail with respect to middleware technology. The whole network-based intelligent image-processing system is evaluated on the basis of experimental analysis of remotely sensed image-processing tasks; the results confirm the feasibility of applying computational grids to digital-image processing.
Localized Energy-Based Normalization of Medical Images: Application to Chest Radiography.
Philipsen, R H H M; Maduskar, P; Hogeweg, L; Melendez, J; Sánchez, C I; van Ginneken, B
2015-09-01
Automated quantitative analysis systems for medical images often lack the capability to successfully process images from multiple sources. Normalization of such images prior to further analysis is a possible solution to this limitation. This work presents a general method to normalize medical images and thoroughly investigates its effectiveness for chest radiography (CXR). The method starts with an energy decomposition of the image in different bands. Next, each band's localized energy is scaled to a reference value and the image is reconstructed. We investigate iterative and local application of this technique. The normalization is applied iteratively to the lung fields on six datasets from different sources, each comprising 50 normal CXRs and 50 abnormal CXRs. The method is evaluated in three supervised computer-aided detection tasks related to CXR analysis and compared to two reference normalization methods. In the first task, automatic lung segmentation, the average Jaccard overlap significantly increased from 0.72±0.30 and 0.87±0.11 for both reference methods to with normalization. The second experiment was aimed at segmentation of the clavicles. The reference methods had an average Jaccard index of 0.57±0.26 and 0.53±0.26; with normalization this significantly increased to . The third experiment was detection of tuberculosis-related abnormalities in the lung fields. The average area under the receiver operating characteristic (ROC) curve increased significantly from 0.72±0.14 and 0.79±0.06 using the reference methods to with normalization. We conclude that the normalization can be successfully applied in chest radiography and makes supervised systems more generally applicable to data from different sources.
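The band-wise energy scaling described above can be sketched with a simple two-band decomposition; the box-blur split and the reference energy values below are illustrative assumptions, not the paper's actual filter bank or reference values.

```python
import numpy as np

def normalize_bands(img, ref_low=100.0, ref_high=10.0):
    """Two-band sketch of energy-based normalization: split the image into
    a smooth band and a detail band, scale each band's RMS energy to a
    reference value, and reconstruct."""
    h, w = img.shape
    p = np.pad(img.astype(float), 1, mode='edge')
    low = sum(p[i:i+h, j:j+w] for i in range(3) for j in range(3)) / 9.0  # 3x3 box blur
    high = img - low

    def scale(band, ref):
        rms = np.sqrt((band ** 2).mean())
        return band * (ref / rms) if rms > 0 else band

    return scale(low, ref_low) + scale(high, ref_high)

# Two "acquisitions" of the same scene differing only in exposure/gain
# map to the same normalized image.
b = np.arange(36.0).reshape(6, 6)
a = b * 3.0
print(np.allclose(normalize_bands(a), normalize_bands(b)))  # True
```

Because each band is rescaled to a fixed reference energy, a global gain difference between sources cancels out, which is the property that makes downstream supervised systems transferable across datasets.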
Model-based segmentation of hand radiographs
NASA Astrophysics Data System (ADS)
Weiler, Frank; Vogelsang, Frank
1998-06-01
An important procedure in pediatrics is to determine the skeletal maturity of a patient from radiographs of the hand. There is great interest in the automation of this tedious and time-consuming task. We present a new method for the segmentation of the bones of the hand, which allows the assessment of skeletal maturity with an appropriate database of reference bones, similar to atlas-based methods. The proposed algorithm uses an extended active contour model for the segmentation of the hand bones, which incorporates a priori knowledge of the shape and topology of the bones in an additional energy term. This 'scene knowledge' is integrated into a complex hierarchical image model that is used for the image analysis task.
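An extended snake energy of the kind described can be sketched as follows (a generic form; the exact shape/topology term used by the authors is not given in the abstract):

```latex
E(\mathbf{v}) = \int_{0}^{1} \Big[ \tfrac{1}{2}\big(\alpha\,|\mathbf{v}'(s)|^{2} + \beta\,|\mathbf{v}''(s)|^{2}\big) + E_{\text{img}}\big(\mathbf{v}(s)\big) + E_{\text{shape}}\big(\mathbf{v}(s)\big) \Big]\, ds
```

Here $\mathbf{v}(s)$ is the contour, the first two terms are the standard internal smoothness energies, $E_{\text{img}}$ attracts the contour to image edges, and $E_{\text{shape}}$ stands for the additional term encoding a priori bone shape and topology.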
NASA Astrophysics Data System (ADS)
Beiden, Sergey V.; Wagner, Robert F.; Campbell, Gregory; Metz, Charles E.; Chan, Heang-Ping; Nishikawa, Robert M.; Schnall, Mitchell D.; Jiang, Yulei
2001-06-01
In recent years, the multiple-reader, multiple-case (MRMC) study paradigm has become widespread for receiver operating characteristic (ROC) assessment of systems for diagnostic imaging and computer-aided diagnosis. We review how MRMC data can be analyzed in terms of the multiple components of the variance (case, reader, interactions) observed in those studies. Such information is useful for the design of pivotal studies from results of a pilot study and also for studying the effects of reader training. Recently, several of the present authors have demonstrated methods to generalize the analysis of multiple variance components to the case where unaided readers of diagnostic images are compared with readers who receive the benefit of a computer assist (CAD). For this case it is necessary to model the possibility that several of the components of variance might be reduced when readers incorporate the computer assist, compared to the unaided reading condition. We review results of this kind of analysis on three previously published MRMC studies, two of which were applications of CAD to diagnostic mammography and one was an application of CAD to screening mammography. The results for the three cases are seen to differ, depending on the reader population sampled and the task of interest. Thus, it is not possible to generalize a particular analysis of variance components beyond the tasks and populations actually investigated.
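The variance-component decomposition discussed above can be sketched in the following simplified form (a schematic in the spirit of the Roe-Metz style models, not the authors' exact parameterization):

```latex
\operatorname{Var}\big(\widehat{\mathrm{AUC}}\big) \;\approx\; \sigma^{2}_{\text{case}} + \sigma^{2}_{\text{reader}} + \sigma^{2}_{\text{reader}\times\text{case}} + \sigma^{2}_{\varepsilon}
```

Comparing aided (CAD) and unaided reading then amounts to asking which of these components shrink in the aided condition, which is why the conclusions can differ across reader populations and tasks.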
Iterative Stable Alignment and Clustering of 2D Transmission Electron Microscope Images
Yang, Zhengfan; Fang, Jia; Chittuluru, Johnathan; Asturias, Francisco J.; Penczek, Pawel A.
2012-01-01
Identification of homogeneous subsets of images in a macromolecular electron microscopy (EM) image data set is a critical step in single-particle analysis. The task is handled by iterative algorithms, whose performance is compromised by the compounded limitations of image alignment and K-means clustering. Here we describe an approach, iterative stable alignment and clustering (ISAC) that, relying on a new clustering method and on the concepts of stability and reproducibility, can extract validated, homogeneous subsets of images. ISAC requires only a small number of simple parameters and, with minimal human intervention, can eliminate bias from two-dimensional image clustering and maximize the quality of group averages that can be used for ab initio three-dimensional structural determination and analysis of macromolecular conformational variability. Repeated testing of the stability and reproducibility of a solution within ISAC eliminates heterogeneous or incorrect classes and introduces critical validation to the process of EM image clustering. PMID:22325773
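The stability idea above (keep only groupings that reproduce across runs) can be sketched with a tiny k-means on synthetic 2-D "image features"; this is a conceptual illustration under assumed toy data, not ISAC's actual alignment-plus-clustering algorithm.

```python
import numpy as np

def kmeans(X, init_idx, iters=20):
    """Tiny k-means with fixed initial centers (chosen by data index)."""
    centers = X[list(init_idx)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0)
                            for j in range(len(init_idx))])
    return labels

# Stability check in the spirit of ISAC: cluster twice from different
# starting points and keep only groupings that reproduce (up to label swap).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)),   # one homogeneous subset
               rng.normal(5.0, 0.1, (20, 2))])  # another homogeneous subset
l1 = kmeans(X, (0, 39))
l2 = kmeans(X, (5, 20))
stable = bool((l1 == l2).all() or (l1 == 1 - l2).all())
print(stable)  # True
```

Classes that fail to reproduce across repeated runs would be treated as heterogeneous and discarded, which is the validation step the abstract emphasizes.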
Intrasubject multimodal groupwise registration with the conditional template entropy.
Polfliet, Mathias; Klein, Stefan; Huizinga, Wyke; Paulides, Margarethus M; Niessen, Wiro J; Vandemeulebroucke, Jef
2018-05-01
Image registration is an important task in medical image analysis. Whereas most methods are designed for the registration of two images (pairwise registration), there is an increasing interest in simultaneously aligning more than two images using groupwise registration. Multimodal registration in a groupwise setting remains difficult, due to the lack of generally applicable similarity metrics. In this work, a novel similarity metric for such groupwise registration problems is proposed. The metric calculates the sum of the conditional entropy between each image in the group and a representative template image constructed iteratively using principal component analysis. The proposed metric is validated in extensive experiments on synthetic and intrasubject clinical image data. These experiments showed equivalent or improved registration accuracy compared to other state-of-the-art (dis)similarity metrics and improved transformation consistency compared to pairwise mutual information. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
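The per-image quantity summed by the proposed metric is a conditional entropy against the template; a minimal histogram-based estimate is sketched below (bin count and toy data are illustrative; the template construction via PCA is not reproduced here).

```python
import numpy as np

def conditional_entropy(x, t, bins=8):
    """H(X|T) estimated from a joint histogram -- the quantity summed
    over images in a conditional-template-entropy style metric."""
    joint, _, _ = np.histogram2d(x.ravel(), t.ravel(), bins=bins)
    p = joint / joint.sum()
    pt = p.sum(axis=0)                        # marginal of the template
    nz = p > 0
    h_joint = -(p[nz] * np.log2(p[nz])).sum()
    h_t = -(pt[pt > 0] * np.log2(pt[pt > 0])).sum()
    return h_joint - h_t                      # H(X,T) - H(T) = H(X|T)

rng = np.random.default_rng(0)
t = rng.random((32, 32))
print(round(conditional_entropy(t, t), 6))    # 0.0: perfectly aligned with itself
```

Good groupwise alignment makes each image more predictable from the template, driving the summed conditional entropy down; this works across modalities because only the statistical dependence, not the intensity values, matters.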
Assessment of CT image quality using a Bayesian approach
NASA Astrophysics Data System (ADS)
Reginatto, M.; Anton, M.; Elster, C.
2017-08-01
One of the most promising approaches for evaluating CT image quality is task-specific quality assessment. This involves a simplified version of a clinical task, e.g. deciding whether an image belongs to the class of images that contain the signature of a lesion or not. Task-specific quality assessment can be done by model observers, which are mathematical procedures that carry out the classification task. The most widely used figure of merit for CT image quality is the area under the ROC curve (AUC), a quantity which characterizes the performance of a given model observer. In order to estimate AUC from a finite sample of images, different approaches from classical statistics have been suggested. The goal of this paper is to introduce task-specific quality assessment of CT images to metrology and to propose a novel Bayesian estimation of AUC for the channelized Hotelling observer (CHO) applied to the task of detecting a lesion at a known image location. It is assumed that signal-present and signal-absent images follow multivariate normal distributions with the same covariance matrix. The Bayesian approach results in a posterior distribution for the AUC of the CHO which provides in addition a complete characterization of the uncertainty of this figure of merit. The approach is illustrated by its application to both simulated and experimental data.
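For context, the AUC of a model observer such as the CHO is classically estimated nonparametrically from its decision-variable scores (the Wilcoxon/Mann-Whitney form shown below); the paper's contribution, a Bayesian posterior over AUC, goes beyond this point-estimate sketch.

```python
import numpy as np

def auc(scores_absent, scores_present):
    """Nonparametric AUC: probability that a signal-present score exceeds
    a signal-absent one, with ties counted half."""
    a = np.asarray(scores_absent, float)[:, None]
    p = np.asarray(scores_present, float)[None, :]
    return (p > a).mean() + 0.5 * (p == a).mean()

print(auc([0, 1, 2], [3, 4, 5]))   # 1.0 (perfect separation)
print(auc([0, 1], [0, 1]))         # 0.5 (no separation)
```

A Bayesian treatment replaces this single number with a posterior distribution, giving a full uncertainty characterization for the figure of merit from a finite image sample.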
NASA Technical Reports Server (NTRS)
Natesh, R.
1978-01-01
The various steps involved in obtaining quantitative information of structural defects in crystalline silicon samples are described. Procedures discussed include: (1) chemical polishing; (2) chemical etching; and (3) automated image analysis of samples on the QTM 720 System.
Fast and objective detection and analysis of structures in downhole images
NASA Astrophysics Data System (ADS)
Wedge, Daniel; Holden, Eun-Jung; Dentith, Mike; Spadaccini, Nick
2017-09-01
Downhole acoustic and optical televiewer images and formation microimager (FMI) logs are important datasets for structural and geotechnical analyses in the mineral and petroleum industries. Within these data, dipping planar structures appear as sinusoids, often in incomplete form and in abundance. Their detection is a labour-intensive and hence expensive task, and as such is a significant bottleneck in data processing, as companies may have hundreds of kilometres of logs to process each year. We present an image analysis system that harnesses the power of automated image analysis and provides an interactive user interface to support the analysis of televiewer images by users with different objectives. Our algorithm rapidly produces repeatable, objective results. We have embedded it in an interactive workflow to complement geologists' intuition and experience in interpreting data, improving efficiency and assisting, rather than replacing, the geologist. The main contributions include a new image quality assessment technique for highlighting the image areas best suited to automated structure detection and for detecting the boundaries of geological zones, and a novel sinusoid detection algorithm for detecting and selecting sinusoids with given confidence levels. Further tools are provided for rapid analysis and further detection of structures, e.g., limited to specific orientations.
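A dipping plane traces a sinusoid in an unrolled borehole image, so a candidate structure can be recovered by linear least squares on a sine/cosine basis; the sketch below uses synthetic picks and is not the paper's detection algorithm, which also handles incomplete sinusoids and confidence selection.

```python
import numpy as np

def fit_sinusoid(azimuth_deg, depth):
    """Fit depth = a*sin(az) + b*cos(az) + c by linear least squares --
    the trace a dipping plane leaves in an unrolled televiewer log."""
    az = np.radians(azimuth_deg)
    A = np.column_stack([np.sin(az), np.cos(az), np.ones_like(az)])
    coef, *_ = np.linalg.lstsq(A, depth, rcond=None)
    amp = float(np.hypot(coef[0], coef[1]))   # amplitude relates to apparent dip
    return amp, float(coef[2])                # (amplitude, mean depth)

# Synthetic picks from a plane with 2 m amplitude centred at 50 m depth.
az = np.arange(0.0, 360.0, 10.0)
depth = 2.0 * np.sin(np.radians(az + 30.0)) + 50.0
amp, mean_depth = fit_sinusoid(az, depth)
print(round(amp, 6), round(mean_depth, 6))    # 2.0 50.0
```

Because the model is linear in its coefficients, the fit is fast and repeatable, which matches the system's goal of objective, high-throughput structure picking.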
Chuan, He; Dishan, Qiu; Jin, Liu
2012-01-01
The cooperative scheduling problem on high-altitude airships for imaging observation tasks is discussed. A constraint programming model is established by analyzing the main constraints, taking the maximum task benefit and the minimum cruising distance as two optimization objectives. The cooperative scheduling problem of high-altitude airships is converted into a main problem and a subproblem by adopting a hierarchical architecture. The solution to the main problem constructs a preliminary matching between tasks and observation resources in order to reduce the search space of the original problem. The solution to the subproblem then detects the key nodes that each airship needs to fly through in sequence, so as to obtain the cruising path. Firstly, the task set is divided using the k-core neighborhood growth cluster algorithm (K-NGCA). Then, a novel swarm intelligence algorithm named the propagation algorithm (PA) is combined with the key node search algorithm (KNSA) to optimize the cruising path of each airship and determine the execution time interval of each task. This paper also describes how the above algorithms are realized, with a detailed introduction to the encoding rules, search models, and propagation mechanism of the PA. Finally, the application results and comparative analysis show that the proposed models and algorithms are effective and feasible. PMID:23365522
Visual cue-specific craving is diminished in stressed smokers.
Cochran, Justinn R; Consedine, Nathan S; Lee, John M J; Pandit, Chinmay; Sollers, John J; Kydd, Robert R
2017-09-01
Craving among smokers is increased by stress and exposure to smoking-related visual cues. However, few experimental studies have tested both elicitors concurrently and considered how exposures may interact to influence craving. The current study examined craving in response to stress and visual cue exposure, separately and in succession, in order to better understand the relationship between craving elicitation and the elicitor. Thirty-nine smokers (21 males) who forwent smoking for 30 minutes were randomized to complete a stress task and a visual cue task in counterbalanced orders (creating the experimental groups); for the cue task, counterbalanced blocks of neutral, motivational control, and smoking images were presented. Self-reported craving was assessed after each block of visual stimuli and the stress task, and after a recovery period following each task. As expected, the stress and smoking images generated greater craving than neutral or motivational control images (p < .001). Interactions indicated that craving in those who completed the stress task first differed from that in those who completed the visual cues task first (p < .05), such that stress task craving was greater than all image type craving (all p's < .05) only if the visual cue task was completed first. Conversely, craving was stable across image types when the stress task was completed first. Findings indicate that when smokers are stressed, visual cues have little additive effect on craving, and different types of visual cues elicit comparable craving. These findings may imply that once stressed, smokers will crave cigarettes comparably regardless of whether they are exposed to smoking image cues.
2001-10-25
In a CT image, each voxel contains an integer number which is the CT value, in Hounsfield units (HU), of the voxel. Therefore, the standard method of...Department of Electrical and Computer Engineering, University of...", Journal of Pediatric Surgery, vol. 24(7), pp. 708-711, 1989. [4] I. N. Bankman, editor, Handbook of Medical Image Analysis, Academic Press, London, UK
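The Hounsfield rescaling referred to in this fragment follows the standard definition: water maps to 0 HU and air to approximately -1000 HU. A minimal sketch (the function name is illustrative):

```python
def to_hounsfield(mu, mu_water):
    """Standard Hounsfield-unit rescaling of a linear attenuation
    coefficient mu: HU = 1000 * (mu - mu_water) / mu_water, so water
    maps to 0 HU and air (mu ~ 0) to about -1000 HU."""
    return 1000.0 * (mu - mu_water) / mu_water
```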
Multi-Atlas Segmentation using Partially Annotated Data: Methods and Annotation Strategies.
Koch, Lisa M; Rajchl, Martin; Bai, Wenjia; Baumgartner, Christian F; Tong, Tong; Passerat-Palmbach, Jonathan; Aljabar, Paul; Rueckert, Daniel
2017-08-22
Multi-atlas segmentation is a widely used tool in medical image analysis, providing robust and accurate results by learning from annotated atlas datasets. However, the availability of fully annotated atlas images for training is limited due to the time required for the labelling task. Segmentation methods requiring only a proportion of each atlas image to be labelled could therefore reduce the workload on expert raters tasked with annotating atlas images. To address this issue, we first re-examine the labelling problem common to many existing approaches and formulate its solution in terms of a Markov Random Field energy minimisation problem on a graph connecting the atlases and the target image. This provides a unifying framework for multi-atlas segmentation. We then show how modifications to the graph configuration of the proposed framework enable the use of partially annotated atlas images, and we investigate different partial annotation strategies. The proposed method was evaluated on two Magnetic Resonance Imaging (MRI) datasets for hippocampal and cardiac segmentation. Experiments were performed that aimed to (1) recreate existing segmentation techniques within the proposed framework and (2) demonstrate the potential of employing sparsely annotated atlas data for multi-atlas segmentation.
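The simplest label-fusion baseline in the fully annotated special case is per-voxel majority voting over the propagated atlas labels; the paper's MRF formulation generalises this to partially annotated atlases. A hedged baseline sketch with hypothetical names:

```python
from collections import Counter

def majority_vote_fusion(atlas_labels):
    """Minimal multi-atlas label fusion: for each voxel, take the most
    frequent label across the registered atlases. atlas_labels is a list
    of equal-length label sequences, one per atlas. A baseline only; it
    ignores the graph/MRF structure and partial annotations the paper uses."""
    fused = []
    for votes in zip(*atlas_labels):   # one tuple of candidate labels per voxel
        fused.append(Counter(votes).most_common(1)[0][0])
    return fused
```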
PROS: An IRAF-based system for analysis of x-ray data
NASA Technical Reports Server (NTRS)
Conroy, M. A.; Deponte, J.; Moran, J. F.; Orszak, J. S.; Roberts, W. P.; Schmidt, D.
1992-01-01
PROS is an IRAF-based software package for the reduction and analysis of x-ray data. The use of a standard, portable, integrated environment provides for both multi-frequency and multi-mission analysis. The analysis of x-ray data differs from optical analysis due to the nature of the x-ray data and its acquisition under constantly varying conditions. The scarcity of data, the low signal-to-noise ratio and the large gaps in exposure time make data screening and masking an important part of the analysis. PROS was developed to support the analysis of data from the ROSAT and Einstein missions, but many of the tasks have been used on data from other missions. IRAF/PROS provides a complete end-to-end system for x-ray data analysis: (1) a set of tools for importing and exporting data via FITS format -- in particular, IRAF provides a specialized event-list format, QPOE, that is compatible with its IMAGE (2-D array) format; (2) a powerful set of IRAF system capabilities for both temporal and spatial event filtering; (3) a full set of imaging and graphics tasks; (4) specialized packages for scientific analysis such as spatial, spectral and timing analysis -- these consist of both general and mission-specific tasks; and (5) complete system support including ftp and magnetic tape releases, electronic and conventional mail hotline support, and electronic mail distribution of solutions to frequently asked questions and currently known bugs. We will discuss the philosophy, architecture and development environment used by PROS to generate a portable, multimission software environment. PROS is available on all platforms that support IRAF, including Sun/Unix, VAX/VMS, HP, and DECstations. It is available on request at no charge.
Giger, Maryellen L.; Chan, Heang-Ping; Boone, John
2008-01-01
The roles of physicists in medical imaging have expanded over the years, from the study of imaging systems (sources and detectors) and dose to the assessment of image quality and perception, the development of image processing techniques, and the development of image analysis methods to assist in detection and diagnosis. The latter is a natural extension of medical physicists’ goals in developing imaging techniques to help physicians acquire diagnostic information and improve clinical decisions. Studies indicate that radiologists do not detect all abnormalities on images that are visible on retrospective review, and they do not always correctly characterize abnormalities that are found. Since the 1950s, the potential use of computers had been considered for analysis of radiographic abnormalities. In the mid-1980s, however, medical physicists and radiologists began major research efforts for computer-aided detection or computer-aided diagnosis (CAD), that is, using the computer output as an aid to radiologists—as opposed to a completely automatic computer interpretation—focusing initially on methods for the detection of lesions on chest radiographs and mammograms. Since then, extensive investigations of computerized image analysis for detection or diagnosis of abnormalities in a variety of 2D and 3D medical images have been conducted. The growth of CAD over the past 20 years has been tremendous—from the early days of time-consuming film digitization and CPU-intensive computations on a limited number of cases to its current status in which developed CAD approaches are evaluated rigorously on large clinically relevant databases. 
CAD research by medical physicists includes many aspects—collecting relevant normal and pathological cases; developing computer algorithms appropriate for the medical interpretation task including those for segmentation, feature extraction, and classifier design; developing methodology for assessing CAD performance; validating the algorithms using appropriate cases to measure performance and robustness; conducting observer studies with which to evaluate radiologists in the diagnostic task without and with the use of the computer aid; and ultimately assessing performance with a clinical trial. Medical physicists also have an important role in quantitative imaging, by validating the quantitative integrity of scanners and developing imaging techniques, and image analysis tools that extract quantitative data in a more accurate and automated fashion. As imaging systems become more complex and the need for better quantitative information from images grows, the future includes the combined research efforts from physicists working in CAD with those working on quantitative imaging systems to readily yield information on morphology, function, molecular structure, and more—from animal imaging research to clinical patient care. A historical review of CAD and a discussion of challenges for the future are presented here, along with the extension to quantitative image analysis. PMID:19175137
Vision sensing techniques in aeronautics and astronautics
NASA Technical Reports Server (NTRS)
Hall, E. L.
1988-01-01
The close relationship between sensing and other tasks in orbital space, and the integral role of vision sensing in practical aerospace applications, are illustrated. Typical space mission-vision tasks encompass the docking of space vehicles, the detection of unexpected objects, the diagnosis of spacecraft damage, and the inspection of critical spacecraft components. Attention is presently given to image functions, the 'windowing' of a view, the number of cameras required for inspection tasks, the choice of incoherent or coherent (laser) illumination, three-dimensional-to-two-dimensional model-matching, edge- and region-segmentation techniques, and motion analysis for tracking.
Monkeys rely on recency of stimulus repetition when solving short-term memory tasks.
Wittig, John H; Richmond, Barry J
2014-05-16
Seven monkeys performed variants of two short-term memory tasks that others have used to differentiate between selective and nonselective memory mechanisms. The first task was to view a list of sequentially presented images and identify whether a test matched any image from the list, but not a distractor from a preceding list. Performance was best when the test matched the most recently presented image. Response rates depended linearly on recency of repetition whether the test matched a sample from the current list or a distractor from a preceding list, suggesting nonselective memorization of all images viewed instead of just the sample images. The second task was to remember just the first image in a list selectively and ignore subsequent distractors. False alarms occurred frequently when the test matched a distractor presented near the beginning of the sequence. In a pilot experiment, response rates depended linearly on recency of repetition irrespective of whether the test matched the first image or a distractor, again suggesting nonselective memorization of all images instead of just the first image. Modification of the second task improved recognition of the first image, but did not abolish use of recency. Monkeys appear to perform nonspatial visual short-term memory tasks often (or exclusively) using a single, nonselective, memory mechanism that conveys the recency of stimulus repetition. Published by Cold Spring Harbor Laboratory Press.
The effect of fMRI task combinations on determining the hemispheric dominance of language functions.
Niskanen, Eini; Könönen, Mervi; Villberg, Ville; Nissi, Mikko; Ranta-Aho, Perttu; Säisänen, Laura; Karjalainen, Pasi; Aikiä, Marja; Kälviäinen, Reetta; Mervaala, Esa; Vanninen, Ritva
2012-04-01
The purpose of this study is to establish the most suitable combination of functional magnetic resonance imaging (fMRI) language tasks for clinical use in determining language dominance and to define the variability in laterality index (LI) and activation power between different combinations of language tasks. Activation patterns of different fMRI analyses of five language tasks (word generation, responsive naming, letter task, sentence comprehension, and word pair) were defined for 20 healthy volunteers (16 right-handed). LIs and sums of T values were calculated for each task separately and for four combinations of tasks in predefined regions of interest. Variability in terms of activation power and lateralization was defined in each analysis. In addition, the visual assessment of lateralization of language functions based on the individual fMRI activation maps was conducted by an experienced neuroradiologist. A combination analysis of word generation, responsive naming, and sentence comprehension was the most suitable in terms of activation power, robustness to detect essential language areas, and scanning time. In general, combination analyses of the tasks provided higher overall activation levels than single tasks and reduced the number of outlier voxels disturbing the calculation of LI. A combination of auditory and visually presented tasks that activate different aspects of language functions with sufficient activation power may be a useful task battery for determining language dominance in patients.
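The laterality index referred to above is conventionally computed as (L - R) / (L + R) over homologous left/right regions of interest, ranging from +1 (fully left-lateralised) to -1 (fully right-lateralised). A minimal sketch with illustrative names:

```python
def laterality_index(left_activation, right_activation):
    """Conventional fMRI laterality index: +1 means fully left-lateralised,
    -1 fully right-lateralised. 'Activation' is typically the number of
    supra-threshold voxels, or a sum of T values, within homologous ROIs."""
    return (left_activation - right_activation) / (left_activation + right_activation)
```

In practice the index is sensitive to the activation threshold and to outlier voxels, which is why the study combines tasks to stabilise the LI calculation.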
Decomposition-based transfer distance metric learning for image classification.
Luo, Yong; Liu, Tongliang; Tao, Dacheng; Xu, Chao
2014-09-01
Distance metric learning (DML) is a critical factor for image analysis and pattern recognition. To learn a robust distance metric for a target task, we need abundant side information (i.e., the similarity/dissimilarity pairwise constraints over the labeled data), which is usually unavailable in practice due to the high labeling cost. This paper considers the transfer learning setting by exploiting the large quantity of side information from certain related, but different source tasks to help with target metric learning (with only a little side information). The state-of-the-art metric learning algorithms usually fail in this setting because the data distributions of the source task and target task are often quite different. We address this problem by assuming that the target distance metric lies in the space spanned by the eigenvectors of the source metrics (or other randomly generated bases). The target metric is represented as a combination of the base metrics, which are computed using the decomposed components of the source metrics (or simply a set of random bases); we call the proposed method decomposition-based transfer DML (DTDML). In particular, DTDML learns a sparse combination of the base metrics to construct the target metric by forcing the target metric to be close to an integration of the source metrics. The main advantage of the proposed method compared with existing transfer metric learning approaches is that we directly learn the base metric coefficients instead of the target metric. To this end, far fewer variables need to be learned. We therefore obtain more reliable solutions given the limited side information and the optimization tends to be faster. Experiments on the popular handwritten image (digit, letter) classification and challenging natural image annotation tasks demonstrate the effectiveness of the proposed method.
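The core representation in DTDML, a target metric built as a weighted combination of base metrics, can be sketched as below. This is a simplified illustration under assumed names; the actual method additionally enforces sparsity on the coefficients and closeness to an integration of the source metrics.

```python
def combine_metrics(bases, coeffs):
    """Build a target metric M = sum_i coeffs[i] * bases[i], where each
    base is a small symmetric matrix (e.g. an outer product of a
    source-metric eigenvector). Matrices are plain nested lists."""
    n = len(bases[0])
    M = [[0.0] * n for _ in range(n)]
    for c, B in zip(coeffs, bases):
        for i in range(n):
            for j in range(n):
                M[i][j] += c * B[i][j]
    return M

def metric_distance(M, x, y):
    """Squared Mahalanobis-style distance (x - y)^T M (x - y) under metric M."""
    d = [a - b for a, b in zip(x, y)]
    return sum(d[i] * M[i][j] * d[j] for i in range(len(d)) for j in range(len(d)))
```

Learning only the coefficient vector, rather than the full matrix M, is what reduces the number of free variables in the transfer setting.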
Multimodal Randomized Functional MR Imaging of the Effects of Methylene Blue in the Human Brain
Rodriguez, Pavel; Zhou, Wei; Barrett, Douglas W.; Altmeyer, Wilson; Gutierrez, Juan E.; Li, Jinqi; Lancaster, Jack L.; Gonzalez-Lima, Francisco
2016-01-01
Purpose To investigate the sustained-attention and memory-enhancing neural correlates of the oral administration of methylene blue in the healthy human brain. Materials and Methods The institutional review board approved this prospective, HIPAA-compliant, randomized, double-blinded, placebo-controlled clinical trial, and all patients provided informed consent. Twenty-six subjects (age range, 22–62 years) were enrolled. Functional magnetic resonance (MR) imaging was performed with a psychomotor vigilance task (sustained attention) and delayed match-to-sample tasks (short-term memory) before and 1 hour after administration of low-dose methylene blue or a placebo. Cerebrovascular reactivity effects were also measured with the carbon dioxide challenge. A 2 × 2 repeated-measures analysis of variance was performed with drug (methylene blue vs placebo) and time (before vs after drug administration) as factors to assess drug × time between-group interactions. Multiple comparison correction was applied, with cluster-corrected P < .05 indicating a significant difference. Results Administration of methylene blue increased response in the bilateral insular cortex during a psychomotor vigilance task (Z = 2.9–3.4, P = .01–.008) and functional MR imaging response during a short-term memory task involving the prefrontal, parietal, and occipital cortex (Z = 2.9–4.2, P = .03–.0003). Methylene blue was also associated with a 7% increase in correct responses during memory retrieval (P = .01). Conclusion Low-dose methylene blue can increase functional MR imaging activity during sustained attention and short-term memory tasks and enhance memory retrieval. © RSNA, 2016 Online supplemental material is available for this article. PMID:27351678
Deep learning methods to guide CT image reconstruction and reduce metal artifacts
NASA Astrophysics Data System (ADS)
Gjesteby, Lars; Yang, Qingsong; Xi, Yan; Zhou, Ye; Zhang, Junping; Wang, Ge
2017-03-01
The rapidly-rising field of machine learning, including deep learning, has inspired applications across many disciplines. In medical imaging, deep learning has been primarily used for image processing and analysis. In this paper, we integrate a convolutional neural network (CNN) into the computed tomography (CT) image reconstruction process. Our first task is to monitor the quality of CT images during iterative reconstruction and decide when to stop the process according to an intelligent numerical observer instead of using a traditional stopping rule, such as a fixed error threshold or a maximum number of iterations. After training on ground truth images, the CNN was successful in guiding an iterative reconstruction process to yield high-quality images. Our second task is to improve a sinogram to correct for artifacts caused by metal objects. A large number of interpolation and normalization-based schemes were introduced for metal artifact reduction (MAR) over the past four decades. The NMAR algorithm is considered a state-of-the-art method, although residual errors often remain in the reconstructed images, especially in cases of multiple metal objects. Here we merge NMAR with deep learning in the projection domain to achieve additional correction in critical image regions. Our results indicate that deep learning can be a viable tool to address CT reconstruction challenges.
Dynamic whole body PET parametric imaging: II. Task-oriented statistical estimation
Karakatsanis, Nicolas A.; Lodge, Martin A.; Zhou, Y.; Wahl, Richard L.; Rahmim, Arman
2013-01-01
In the context of oncology, dynamic PET imaging coupled with standard graphical linear analysis has been previously employed to enable quantitative estimation of tracer kinetic parameters of physiological interest at the voxel level, thus, enabling quantitative PET parametric imaging. However, dynamic PET acquisition protocols have been confined to the limited axial field-of-view (~15–20 cm) of a single bed position and have not been translated to the whole-body clinical imaging domain. On the contrary, standardized uptake value (SUV) PET imaging, considered as the routine approach in clinical oncology, commonly involves multi-bed acquisitions, but is performed statically, thus not allowing for dynamic tracking of the tracer distribution. Here, we pursue a transition to dynamic whole body PET parametric imaging, by presenting, within a unified framework, clinically feasible multi-bed dynamic PET acquisition protocols and parametric imaging methods. In a companion study, we presented a novel clinically feasible dynamic (4D) multi-bed PET acquisition protocol as well as the concept of whole body PET parametric imaging employing Patlak ordinary least squares (OLS) regression to estimate the quantitative parameters of tracer uptake rate Ki and total blood distribution volume V. In the present study, we propose an advanced hybrid linear regression framework, driven by Patlak kinetic voxel correlations, to achieve superior trade-off between contrast-to-noise ratio (CNR) and mean squared error (MSE) than provided by OLS for the final Ki parametric images, enabling task-based performance optimization. Overall, whether the observer's task is to detect a tumor or quantitatively assess treatment response, the proposed statistical estimation framework can be adapted to satisfy the specific task performance criteria, by adjusting the Patlak correlation-coefficient (WR) reference value. 
The multi-bed dynamic acquisition protocol, as optimized in the preceding companion study, was employed along with extensive Monte Carlo simulations and an initial clinical FDG patient dataset to validate and demonstrate the potential of the proposed statistical estimation methods. Both simulated and clinical results suggest that hybrid regression in the context of whole-body Patlak Ki imaging considerably reduces MSE without compromising high CNR. Alternatively, for a given CNR, hybrid regression enables larger reductions than OLS in the number of dynamic frames per bed, allowing for even shorter acquisitions of ~30 min, thus further contributing to the clinical adoption of the proposed framework. Compared to the SUV approach, whole body parametric imaging can provide better tumor quantification, and can act as a complement to SUV, for the task of tumor detection. PMID:24080994
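The per-voxel Patlak OLS step that the hybrid estimator builds on is a straight-line fit of the Patlak plot y = Ki*x + V, where y(t) is tissue activity over plasma activity and x(t) is the time-integral of plasma activity over plasma activity. A minimal sketch on synthetic, noiseless data (names illustrative):

```python
def patlak_ols(x, y):
    """Closed-form ordinary least squares fit of the Patlak plot
    y = Ki * x + V for one voxel. Returns (Ki, V). The paper's hybrid
    framework replaces or regularises this OLS fit where the Patlak
    correlation coefficient falls below a reference value."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    ki = sxy / sxx
    return ki, my - ki * mx
```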
2012-01-01
Background Alcohol-related neurodevelopmental disorder (ARND) falls under the umbrella of fetal alcohol spectrum disorder (FASD), but individuals do not demonstrate the facial characteristics associated with fetal alcohol syndrome (FAS), making diagnosis difficult. While attentional problems in ARND are similar to those found in attention-deficit/hyperactivity disorder (ADHD), the underlying impairment in attention pathways may be different. Methods Functional magnetic resonance imaging (fMRI) of a working memory (1-back) task of 63 children, 10 to 14 years old, diagnosed with ARND and ADHD, as well as typically developing (TD) controls, was conducted at 3 T. Diffusion tensor imaging (DTI) data were also acquired. Results Activations were observed in posterior parietal and occipital regions in the TD group and in dorsolateral prefrontal and posterior parietal regions in the ARND group, whereas the ADHD group activated only dorsolateral prefrontal regions, during the working memory component of the task (1-back minus 0-back contrast). The increases in frontal and parietal activity were significantly greater in the ARND group compared to the other groups. This increased activity was associated with reduced accuracy and increased response time variability, suggesting that ARND subjects exert greater effort to manage short-term memory load. Significantly greater intra-subject variability, demonstrated by fMRI region-of-interest analysis, in the ADHD and ARND groups compared to the TD group suggests that moment-to-moment lapses in attention contributed to their poorer task performance. Differences in functional activity in ARND subjects with and without a diagnosis of ADHD resulted primarily from reduced activation by the ARND/ADHD+ group during the 0-back task. In contrast, children with ADHD alone clearly showed reduced activations during the 1-back task. 
DTI analysis revealed that the TD group had significantly higher total tract volume and number of fibers than the ARND group. These measures were negatively correlated with errors on the 1-back task, suggesting a link between white matter integrity and task performance. Conclusions fMRI activations suggest that the similar behavior of children with ARND and ADHD on a spatial working memory task is the result of different cognitive events. The nature of ADHD in children with ARND appears to differ from that of children with ADHD alone. PMID:22958510
Tensor-product kernel-based representation encoding joint MRI view similarity.
Alvarez-Meza, A; Cardenas-Pena, D; Castro-Ospina, A E; Alvarez, M; Castellanos-Dominguez, G
2014-01-01
To support 3D magnetic resonance image (MRI) analysis, a marginal image similarity (MIS) matrix holding MR inter-slice relationships along every axis view (Axial, Coronal, and Sagittal) can be estimated. However, mutual inference from MIS view information poses a difficult task since the relationships between axes are nonlinear. To overcome this issue, we introduce a Tensor-Product Kernel-based Representation (TKR) that allows encoding of brain structure patterns due to patient differences, gathering all MIS matrices into a single joint image similarity framework. The TKR training strategy is carried out in a low-dimensional projected space to reduce the influence of voxel-derived noise. Results for classifying the considered patient categories (gender and age) on a real MRI database show that the proposed TKR training approach outperforms the conventional voxel-wise sum of squared differences. The proposed approach may be useful to support MRI clustering and similarity inference tasks, which are required in template-based image segmentation and atlas construction.
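A tensor-product kernel over the three views can be sketched as the product of one kernel per axis, which is the elementwise form of a Kronecker product of per-view Gram matrices. This is a simplified illustration of the idea, not the authors' code; the per-view features and the Gaussian kernel choice are assumptions.

```python
import math

def rbf(u, v, gamma=1.0):
    """Gaussian (RBF) kernel on scalar per-view features."""
    return math.exp(-gamma * (u - v) ** 2)

def tensor_product_kernel(x, y, gamma=1.0):
    """Joint similarity of two subjects as the product of one kernel per
    MRI view (Axial, Coronal, Sagittal). x and y each hold one feature
    per view; multiplying per-view kernels yields a valid joint kernel."""
    k = 1.0
    for xi, yi in zip(x, y):
        k *= rbf(xi, yi, gamma)
    return k
```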
Does a 3D Image Improve Laparoscopic Motor Skills?
Folaranmi, Semiu Eniola; Partridge, Roland W; Brennan, Paul M; Hennessey, Iain A M
2016-08-01
To quantitatively determine whether a three-dimensional (3D) image improves laparoscopic performance compared with a two-dimensional (2D) image. This is a prospective study with two groups of participants: novices (5) and experts (5). Individuals within each group undertook a validated laparoscopic task on a box simulator, alternating between a 2D and a 3D laparoscopic image until they had repeated the task five times with each imaging modality. A dedicated motion capture camera was used to determine the time taken to complete the task (seconds) and the instrument distance traveled (meters). Among the experts, the mean time taken to perform the task with the 3D image was significantly quicker than with the 2D image, 40.2 seconds versus 51.2 seconds, P < .0001. Among the novices, the mean task time was again significantly quicker with the 3D image, 56.4 seconds versus 82.7 seconds, P < .0001. There was no significant difference in the mean time it took a novice to perform the task using a 3D camera compared with an expert using a 2D camera, 56.4 seconds versus 51.3 seconds, P = .3341. The use of a 3D image confers a significant performance advantage over a 2D image in quantitatively measured laparoscopic skills for both experts and novices. The use of a 3D image appears to improve a novice's performance to the extent that it is not statistically different from an expert using a 2D image.
Marasescu, R; Cerezo Garcia, M; Aladro Benito, Y
2016-04-01
About 20% to 26% of patients with multiple sclerosis (MS) show alterations in visuospatial/visuoconstructive (VS-VC) skills even though temporo-parieto-occipital impairment is a frequent finding in magnetic resonance imaging. No studies have specifically analysed the relationship between these functions and lesion volume (LV) in these specific brain areas. To evaluate the relationship between VS-VC impairment and magnetic resonance imaging temporo-parieto-occipital LV with subcortical atrophy in patients with MS. Of 100 MS patients undergoing a routine neuropsychological evaluation, 21 were selected because they displayed VS-VC impairments in the following tests: Incomplete picture, Block design (WAIS-III), and Rey-Osterrieth complex figure test. We also selected 13 MS patients without cognitive impairment (control group). Regional LV was measured in FLAIR and T1-weighted images using a semiautomated method; subcortical atrophy was measured by bicaudate ratio and third ventricle width. Partial correlations (controlling for age and years of schooling) and linear regression analysis were employed to analyse correlations between magnetic resonance imaging parameters and cognitive performance. All measures of LV and brain atrophy were significantly higher in patients with cognitive impairment. Regional LV, bicaudate ratio, and third ventricle width were significantly and inversely correlated with cognitive performance; the strongest correlation was between third ventricle width and VC performance (Block design: P=.001; Rey-Osterrieth complex figure: P<.000). In the multivariate analysis, only third ventricle width had a significant effect on performance of VC tasks (Block design: P=.000; Rey-Osterrieth complex figure: P=.000), and regional FLAIR LV was linked to the VS task (Incomplete picture; P=.002). Measures of subcortical atrophy explain the variations in performance on visuoconstructive tasks, and regional FLAIR LV measures are linked to VS tasks. 
Copyright © 2015 Sociedad Española de Neurología. Published by Elsevier España, S.L.U. All rights reserved.
Brain functional BOLD perturbation modelling for forward fMRI and inverse mapping
Robinson, Jennifer; Calhoun, Vince
2018-01-01
Purpose To computationally separate dynamic brain functional BOLD responses from the static background of brain functional activity, for forward fMRI signal analysis and inverse mapping. Methods Brain functional activity is represented in terms of magnetic source by a perturbation model: χ = χ0 + δχ, with δχ for BOLD magnetic perturbations and χ0 for the background. A brain fMRI experiment produces a timeseries of complex-valued images (T2* images), from which we extract the BOLD phase signals (denoted by δP) by a complex division. By solving an inverse problem, we reconstruct the BOLD δχ dataset from the δP dataset, and the brain χ distribution from an (unwrapped) T2* phase image. Given a 4D dataset of task BOLD fMRI, we implement brain functional mapping by temporal correlation analysis. Results Through a high-field (7T), high-resolution (0.5 mm in plane) task fMRI experiment, we demonstrated in detail the BOLD perturbation model for fMRI phase signal separation (P + δP) and for reconstructing the intrinsic brain magnetic source (χ and δχ). We also applied the method to a low-field (3T), low-resolution (2 mm) task fMRI experiment in support of single-subject fMRI studies. Our experiments show that the δχ-depicted functional map reveals bidirectional BOLD χ perturbations during task performance. Conclusions The BOLD perturbation model allows us to separate the fMRI phase signal (by complex division) and to perform inverse mapping for pure BOLD δχ reconstruction for intrinsic functional χ mapping. The full brain χ reconstruction (from the unwrapped fMRI phase) provides a new brain tissue image that allows scrutiny of brain tissue idiosyncrasy for the pure BOLD δχ response through automatic function/structure co-localization. PMID:29351339
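The complex-division step described in this abstract has a simple numerical core: the phase difference δP between a task image and a baseline image is the angle of their complex ratio, which cancels the shared background phase. The following is a minimal NumPy sketch of that idea on synthetic data, not the authors' reconstruction pipeline; all array names and sizes are illustrative.

```python
# Hedged sketch of BOLD phase extraction by complex division:
# given complex-valued baseline (s0) and task (s1) images sharing a
# static background phase, angle(s1 / s0) recovers the small task-
# related phase perturbation deltaP. Synthetic data, illustrative only.
import numpy as np

rng = np.random.default_rng(1)
background_phase = rng.uniform(-np.pi, np.pi, (4, 4))  # static phase
delta_p_true = 0.1 * rng.standard_normal((4, 4))       # small BOLD shift

s0 = np.exp(1j * background_phase)                   # baseline image
s1 = np.exp(1j * (background_phase + delta_p_true))  # task image

# Complex division (here via multiplication by the conjugate) cancels
# the background phase, leaving only the perturbation.
delta_p = np.angle(s1 * np.conj(s0))
assert np.allclose(delta_p, delta_p_true)
```

Multiplying by the conjugate rather than dividing avoids numerical issues where the baseline magnitude is near zero; for unit-magnitude data the two are equivalent.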
Development of Land Analysis System display modules
NASA Technical Reports Server (NTRS)
Gordon, Douglas; Hollaren, Douglas; Huewe, Laurie
1986-01-01
The Land Analysis System (LAS) display modules were developed to allow a user to interactively display, manipulate, and store image and image related data. To help accomplish this task, these modules utilize the Transportable Applications Executive and the Display Management System software to interact with the user and the display device. The basic characteristics of a display are outlined and some of the major modifications and additions made to the display management software are discussed. Finally, all available LAS display modules are listed along with a short description of each.
Detection of Focal Cortical Dysplasia Lesions in MRI Using Textural Features
NASA Astrophysics Data System (ADS)
Loyek, Christian; Woermann, Friedrich G.; Nattkemper, Tim W.
Focal cortical dysplasia (FCD) is a frequent cause of medically refractory partial epilepsy. The visual identification of FCD lesions on magnetic resonance images (MRI) is a challenging task in standard radiological analysis. Quantitative image analysis which tries to assist in the diagnosis of FCD lesions is an active field of research. In this work we investigate the potential of different texture features, in order to explore to what extent they are suitable for detecting lesional tissue. As a result we can show first promising results based on segmentation and texture classification.
An Ibm PC/AT-Based Image Acquisition And Processing System For Quantitative Image Analysis
NASA Astrophysics Data System (ADS)
Kim, Yongmin; Alexander, Thomas
1986-06-01
In recent years, a large number of applications have been developed for image processing systems in the area of biological imaging. We have already finished the development of a dedicated microcomputer-based image processing and analysis system for quantitative microscopy. The system's primary function has been to facilitate and ultimately automate quantitative image analysis tasks such as the measurement of cellular DNA contents. We have recognized from this development experience, and interaction with system users, biologists and technicians, that the increasingly widespread use of image processing systems, and the development and application of new techniques for utilizing the capabilities of such systems, would generate a need for some kind of inexpensive general purpose image acquisition and processing system specially tailored for the needs of the medical community. We are currently engaged in the development and testing of hardware and software for a fairly high-performance image processing computer system based on a popular personal computer. In this paper, we describe the design and development of this system. Biological image processing computer systems have now reached a level of hardware and software refinement where they could become convenient image analysis tools for biologists. The development of a general purpose image processing system for quantitative image analysis that is inexpensive, flexible, and easy-to-use represents a significant step towards making the microscopic digital image processing techniques more widely applicable not only in a research environment as a biologist's workstation, but also in clinical environments as a diagnostic tool.
Ebina, Teppei; Masamizu, Yoshito; Tanaka, Yasuhiro R; Watakabe, Akiya; Hirakawa, Reiko; Hirayama, Yuka; Hira, Riichiro; Terada, Shin-Ichiro; Koketsu, Daisuke; Hikosaka, Kazuo; Mizukami, Hiroaki; Nambu, Atsushi; Sasaki, Erika; Yamamori, Tetsuo; Matsuzaki, Masanori
2018-05-14
Two-photon imaging in behaving animals has revealed neuronal activities related to behavioral and cognitive function at single-cell resolution. However, marmosets have posed a challenge due to limited success in training on motor tasks. Here we report the development of protocols to train head-fixed common marmosets to perform upper-limb movement tasks and simultaneously perform two-photon imaging. After 2-5 months of training sessions, head-fixed marmosets can control a manipulandum to move a cursor to a target on a screen. We conduct two-photon calcium imaging of layer 2/3 neurons in the motor cortex during this motor task performance, and detect task-relevant activity from multiple neurons at cellular and subcellular resolutions. In a two-target reaching task, some neurons show direction-selective activity over the training days. In a short-term force-field adaptation task, some neurons change their activity when the force field is on. Two-photon calcium imaging in behaving marmosets may become a fundamental technique for determining the spatial organization of the cortical dynamics underlying action and cognition.
Functional mapping of language networks in the normal brain using a word-association task.
Ghosh, Shantanu; Basu, Amrita; Kumaran, Senthil S; Khushu, Subash
2010-08-01
Language functions are known to be affected in diverse neurological conditions, including ischemic stroke, traumatic brain injury, and brain tumors. Because language networks are extensive, interpretation of functional data depends on the task completed during evaluation. The aim was to map the hemodynamic consequences of word association using functional magnetic resonance imaging (fMRI) in normal human subjects. Ten healthy subjects underwent fMRI scanning with a postlexical-access semantic association task versus a lexical processing task. The fMRI protocol involved a T2*-weighted gradient-echo echo-planar imaging (GE-EPI) sequence (TR 4523 ms, TE 64 ms, flip angle 90°) with alternate baseline and activation blocks. A total of 78 scans were taken (interscan interval = 3 s) with a total imaging time of 587 s. Functional data were processed in Statistical Parametric Mapping software (SPM2) with an 8-mm Gaussian kernel by convolving the blood oxygenation level-dependent (BOLD) signal with a hemodynamic response function, estimated using the general linear model, to generate SPM{t} and SPM{F} maps. Single-subject analysis of the functional data (FWE-corrected, P≤0.001) revealed extensive activation in the frontal lobes, with overlaps among the middle frontal gyrus (MFG) and the superior and inferior frontal gyri. BOLD activity was also found in the medial frontal gyrus, middle occipital gyrus (MOG), anterior fusiform gyrus, superior and inferior parietal lobules, and to a smaller extent, the thalamus and right anterior cerebellum. Group analysis (FWE-corrected, P≤0.001) revealed neural recruitment of bilateral lingual gyri, left MFG, bilateral MOG, left superior occipital gyrus, left fusiform gyrus, bilateral thalami, and right cerebellar areas. 
Group data analysis revealed a cerebellar-occipital-fusiform-thalamic network centered around bilateral lingual gyri for word association, thereby indicating how these areas facilitate language comprehension by activating a semantic association network of words processed postlexical access. This finding is important when assessing the extent of cognitive damage and/or recovery and can be used for presurgical planning after optimization.
Bould, Helen; Carnegie, Rebecca; Allward, Heather; Bacon, Emily; Lambe, Emily; Sapseid, Megan; Button, Katherine S; Lewis, Glyn; Skinner, Andy; Broome, Matthew R; Park, Rebecca; Harmer, Catherine J; Penton-Voak, Ian S; Munafò, Marcus R
2018-05-01
Body dissatisfaction is prevalent among women and associated with subsequent obesity and eating disorders. Exposure to images of bodies of different sizes has been suggested to change the perception of 'normal' body size in others. We tested whether exposure to different-sized (otherwise identical) bodies changes perception of own and others' body size, satisfaction with body size and amount of chocolate consumed. In Study 1, 90 18-25-year-old women with normal BMI were randomized into one of three groups to complete a 15 min two-back task using photographs of women either of 'normal weight' (Body Mass Index (BMI) 22-23 kg m -2 ), or altered to appear either under- or over-weight. Study 2 was identical except the 96 participants had high baseline body dissatisfaction and were followed up after 24 h. We also conducted a mega-analysis combining both studies. Participants rated size of others' bodies, own size, and satisfaction with size pre- and post-task. Post-task ratings were compared between groups, adjusting for pre-task ratings. Participants exposed to over- or normal-weight images subsequently perceived others' bodies as smaller, in comparison to those shown underweight bodies ( p < 0.001). They also perceived their own bodies as smaller (Study 1, p = 0.073; Study 2, p = 0.018; mega-analysis, p = 0.001), and felt more satisfied with their size (Study 1, p = 0.046; Study 2, p = 0.004; mega-analysis, p = 0.006). There were no differences in chocolate consumption. This study suggests that a move towards using images of women with a BMI in the healthy range in the media may help to reduce body dissatisfaction, and the associated risk of eating disorders.
Kurosaki, Mitsuhaya; Shirao, Naoko; Yamashita, Hidehisa; Okamoto, Yasumasa; Yamawaki, Shigeto
2006-02-15
Our aim was to study the gender differences in brain activation upon viewing visual stimuli of distorted images of one's own body. We performed functional magnetic resonance imaging on 11 healthy young men and 11 healthy young women using the "body image tasks" which consisted of fat, real, and thin shapes of the subject's own body. Comparison of the brain activation upon performing the fat-image task versus real-image task showed significant activation of the bilateral prefrontal cortex and left parahippocampal area including the amygdala in the women, and significant activation of the right occipital lobe including the primary and secondary visual cortices in the men. Comparison of brain activation upon performing the thin-image task versus real-image task showed significant activation of the left prefrontal cortex, left limbic area including the cingulate gyrus and paralimbic area including the insula in women, and significant activation of the occipital lobe including the left primary and secondary visual cortices in men. These results suggest that women tend to perceive distorted images of their own bodies by complex cognitive processing of emotion, whereas men tend to perceive distorted images of their own bodies by object visual processing and spatial visual processing.
Selective Convolutional Descriptor Aggregation for Fine-Grained Image Retrieval.
Wei, Xiu-Shen; Luo, Jian-Hao; Wu, Jianxin; Zhou, Zhi-Hua
2017-06-01
Deep convolutional neural network models pre-trained for the ImageNet classification task have been successfully adopted to tasks in other domains, such as texture description and object proposal generation, but these tasks require annotations for images in the new domain. In this paper, we focus on a novel and challenging task in the pure unsupervised setting: fine-grained image retrieval. Even with image labels, fine-grained images are difficult to classify, let alone the unsupervised retrieval task. We propose the selective convolutional descriptor aggregation (SCDA) method. The SCDA first localizes the main object in fine-grained images, a step that discards the noisy background and keeps useful deep descriptors. The selected descriptors are then aggregated and the dimensionality is reduced to a short feature vector using the best practices we found. The SCDA is unsupervised, using no image label or bounding box annotation. Experiments on six fine-grained data sets confirm the effectiveness of the SCDA for fine-grained image retrieval. Besides, visualization of the SCDA features shows that they correspond to visual attributes (even subtle ones), which might explain SCDA's high mean average precision in fine-grained retrieval. Moreover, on general image retrieval data sets, the SCDA achieves comparable retrieval results with the state-of-the-art general image retrieval approaches.
Low-level processing for real-time image analysis
NASA Technical Reports Server (NTRS)
Eskenazi, R.; Wilf, J. M.
1979-01-01
A system that detects object outlines in television images in real time is described. A high-speed pipeline processor transforms the raw image into an edge map and a microprocessor, which is integrated into the system, clusters the edges, and represents them as chain codes. Image statistics, useful for higher level tasks such as pattern recognition, are computed by the microprocessor. Peak intensity and peak gradient values are extracted within a programmable window and are used for iris and focus control. The algorithms implemented in hardware and the pipeline processor architecture are described. The strategy for partitioning functions in the pipeline was chosen to make the implementation modular. The microprocessor interface allows flexible and adaptive control of the feature extraction process. The software algorithms for clustering edge segments, creating chain codes, and computing image statistics are also discussed. A strategy for real time image analysis that uses this system is given.
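The chain-code representation this abstract mentions can be illustrated compactly. The sketch below shows an 8-directional Freeman chain code, a standard compact encoding of a connected edge path; it is an illustration of the general technique, not a reconstruction of the paper's microprocessor software.

```python
# Illustrative sketch of a Freeman 8-directional chain code, the kind
# of compact edge representation the abstract describes. Each step
# between adjacent pixels is encoded as a direction digit:
# 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE.
DIRECTIONS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
              (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(points):
    """Encode a connected pixel path [(x, y), ...] as Freeman codes."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        codes.append(DIRECTIONS[(x1 - x0, y1 - y0)])
    return codes

# A small L-shaped edge segment: two steps east, then one step north.
print(chain_code([(0, 0), (1, 0), (2, 0), (2, 1)]))  # → [0, 0, 2]
```

Because each step needs only 3 bits, chain codes compress long edge contours dramatically compared with storing raw pixel coordinates, which is why they suited the memory-constrained microprocessor described above.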
Towards a computer-aided diagnosis system for vocal cord diseases.
Verikas, A; Gelzinis, A; Bacauskiene, M; Uloza, V
2006-01-01
The objective of this work is to investigate a possibility of creating a computer-aided decision support system for an automated analysis of vocal cord images aiming to categorize diseases of vocal cords. The problem is treated as a pattern recognition task. To obtain a concise and informative representation of a vocal cord image, colour, texture, and geometrical features are used. The representation is further analyzed by a pattern classifier categorizing the image into healthy, diffuse, and nodular classes. The approach developed was tested on 785 vocal cord images collected at the Department of Otolaryngology, Kaunas University of Medicine, Lithuania. A correct classification rate of over 87% was obtained when categorizing a set of unseen images into the aforementioned three classes. Bearing in mind the high similarity of the decision classes, the results obtained are rather encouraging and the developed tools could be very helpful for assuring objective analysis of the images of laryngeal diseases.
Wang, L; Wu, L; Lin, X; Zhang, Y; Zhou, H; Du, X; Dong, G
2016-04-01
The present study identified the neural mechanism of risky decision-making in Internet gaming disorder (IGD) under a probability discounting task. Independent component analysis was used on the functional magnetic resonance imaging data from 19 IGD subjects (22.2 ± 3.08 years) and 21 healthy controls (HC, 22.8 ± 3.5 years). For the behavioral results, IGD subjects preferred the risky options to the fixed ones and showed shorter reaction times compared with HC. For the imaging results, the IGD subjects showed higher task-related activity in the default mode network (DMN) and less engagement of the executive control network (ECN) than HC when making risky decisions. Also, we found that DMN activity correlated negatively with reaction time and ECN activity correlated positively with the probability discounting rates. The results suggest that people with IGD show altered modulation of the DMN and a deficit in executive control function, which may explain why IGD subjects continue to play online games despite the potential negative consequences. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
Visual scan-path analysis with feature space transient fixation moments
NASA Astrophysics Data System (ADS)
Dempere-Marco, Laura; Hu, Xiao-Peng; Yang, Guang-Zhong
2003-05-01
The study of eye movements provides useful insight into the cognitive processes underlying visual search tasks. The analysis of the dynamics of eye movements has often been approached from a purely spatial perspective. In many cases, however, it may not be possible to define meaningful or consistent dynamics without considering the features underlying the scan paths. In this paper, the definition of the feature space has been attempted through the concept of visual similarity and non-linear low dimensional embedding, which defines a mapping from the image space into a low dimensional feature manifold that preserves the intrinsic similarity of image patterns. This has enabled the definition of perceptually meaningful features without the use of domain specific knowledge. Based on this, this paper introduces a new concept called Feature Space Transient Fixation Moments (TFM). The approach presented tackles the problem of feature space representation of visual search through the use of TFM. We demonstrate the practical values of this concept for characterizing the dynamics of eye movements in goal directed visual search tasks. We also illustrate how this model can be used to elucidate the fundamental steps involved in skilled search tasks through the evolution of transient fixation moments.
Templeton, Justin P.; Struebing, Felix L.; Lemmon, Andrew; Geisert, Eldon E.
2014-01-01
The present article introduces a new and easy-to-use counting application for the Apple iPad. The application “ImagePAD” takes advantage of the advanced user interface features offered by the Apple iOS® platform, simplifying the rather tedious task of quantifying features in anatomical studies. For example, the image under analysis can be easily panned and zoomed using iOS-supported multi-touch gestures without losing the spatial context of the counting task, which is extremely important for ensuring count accuracy. This application allows one to quantify up to five different types of objects in a single field and output the data in a tab-delimited format for subsequent analysis. We describe two examples of the use of the application: quantifying axons in the optic nerve of the C57BL/6J mouse and determining the percentage of cells labeled with NeuN or ChAT in the retinal ganglion cell layer. For the optic nerve, contiguous images at 60× magnification were taken and transferred onto an Apple iPad®. Axons were counted by tapping on the touch-sensitive screen using ImagePAD. Nine optic nerves were sampled and the number of axons in the nerves ranged from 38872 to 50196, with an average of 44846 axons per nerve (SD = 3980 axons). PMID:25281829
Analyzing microtomography data with Python and the scikit-image library.
Gouillart, Emmanuelle; Nunez-Iglesias, Juan; van der Walt, Stéfan
2017-01-01
The exploration and processing of images is a vital aspect of the scientific workflows of many X-ray imaging modalities. Users require tools that combine interactivity, versatility, and performance. scikit-image is an open-source image processing toolkit for the Python language that supports a large variety of file formats and is compatible with 2D and 3D images. The toolkit exposes a simple programming interface, with thematic modules grouping functions according to their purpose, such as image restoration, segmentation, and measurements. scikit-image users benefit from a rich scientific Python ecosystem that contains many powerful libraries for tasks such as visualization or machine learning. scikit-image combines a gentle learning curve, versatile image processing capabilities, and the scalable performance required for the high-throughput analysis of X-ray imaging data.
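The restoration/segmentation/measurement workflow that this abstract attributes to scikit-image's thematic modules can be sketched in a few lines. The example below is a generic illustration on a synthetic 2D "slice" (not data or code from the paper): denoise with `skimage.filters.gaussian`, threshold with Otsu's method, then label and measure connected components.

```python
# Minimal sketch of a scikit-image analysis workflow on a synthetic
# image: restoration (denoising), segmentation (thresholding +
# labelling), and measurement, mirroring the thematic modules the
# abstract describes. Synthetic data; illustrative only.
import numpy as np
from skimage import filters, measure

# Synthetic slice: two bright square "particles" on a dark background.
image = np.zeros((64, 64))
image[10:20, 10:20] = 1.0
image[40:52, 35:47] = 1.0
image += np.random.default_rng(0).normal(0, 0.05, image.shape)

smoothed = filters.gaussian(image, sigma=1)       # denoise
threshold = filters.threshold_otsu(smoothed)      # global threshold
labels = measure.label(smoothed > threshold)      # connected components
regions = measure.regionprops(labels)             # per-object measures

for r in regions:                                 # label, area, centroid
    print(r.label, r.area, tuple(round(c) for c in r.centroid))
```

The same calls work unchanged on 3D volumes (e.g. microtomography stacks), which is the compatibility with 2D and 3D images the abstract highlights.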
Rotation covariant image processing for biomedical applications.
Skibbe, Henrik; Reisert, Marco
2013-01-01
With the advent of novel biomedical 3D image acquisition techniques, the efficient and reliable analysis of volumetric images has become more and more important. The amount of data is enormous and demands automated processing. The applications are manifold, ranging from image enhancement, image reconstruction, and image description to object/feature detection and high-level contextual feature extraction. In most scenarios, it is expected that geometric transformations alter the output in a mathematically well-defined manner. In this paper we focus on 3D translations and rotations. Many algorithms rely on intensity or low-order tensorial-like descriptions to fulfill this demand. This paper proposes a general mathematical framework based on concepts and theories transferred from mathematical physics and harmonic analysis into the domain of image analysis and pattern recognition. Based on two basic operations, spherical tensor differentiation and spherical tensor multiplication, we show how to design a variety of 3D image processing methods in an efficient way. The framework has already been applied to several biomedical applications ranging from feature and object detection tasks to image enhancement and image restoration techniques. In this paper, the proposed methods are applied on a variety of different 3D data modalities stemming from medical and biological sciences.
Wigton, Rebekah; Radua, Jocham; Allen, Paul; Averbeck, Bruno; Meyer-Lindenberg, Andreas; McGuire, Philip; Shergill, Sukhi S.; Fusar-Poli, Paolo
2015-01-01
Background Oxytocin (OXT) plays a prominent role in social cognition and may have clinical applications for disorders such as autism, schizophrenia and social anxiety. The neural basis of its mechanism of action remains unclear. Methods We conducted a systematic literature review of placebo-controlled imaging studies using OXT as a pharmacological manipulator of brain activity. Results We identified a total of 21 studies for inclusion in our review, and after applying additional selection criteria, 11 of them were included in our fMRI voxel-based meta-analysis. The results demonstrate consistent alterations in activation of brain regions, including the temporal lobes and insula, during the processing of social stimuli, with some variation dependent on sex and task. The meta-analysis revealed significant left insular hyperactivation after OXT administration, suggesting a potential modulation of neural circuits underlying emotional processing. Limitations This quantitative review included only a limited number of studies, thus the conclusions of our analysis should be interpreted cautiously. This limited sample size precluded a more detailed exploration of potential confounding factors, such as sex or other demographic factors, that may have affected our meta-analysis. Conclusion Oxytocin has a wide range of effects over neural activity in response to social and emotional processing, which is further modulated by sex and task specificity. The magnitude of this neural activation is largest in the temporal lobes, and a meta-analysis across all tasks and both sexes showed that the left insula demonstrated the most robust activation to OXT administration. PMID:25520163
Abe, Kazuhiro; Takahashi, Toshimitsu; Takikawa, Yoriko; Arai, Hajime; Kitazawa, Shigeru
2011-10-01
Independent component analysis (ICA) can be usefully applied to functional imaging studies to evaluate the spatial extent and temporal profile of task-related brain activity. It requires no a priori assumptions about the anatomical areas that are activated or the temporal profile of the activity. We applied spatial ICA to detect a voluntary but hidden response of silent speech. To validate the method against a standard model-based approach, we used the silent speech of a tongue twister as a 'Yes' response to single questions that were delivered at given times. In the first task, we attempted to estimate one number that was chosen by a participant from 10 possibilities. In the second task, we increased the possibilities to 1000. In both tasks, spatial ICA was as effective as the model-based method for determining the number in the subject's mind (80-90% correct per digit), but spatial ICA outperformed the model-based method in terms of time, especially in the 1000-possibility task. In the model-based method, calculation time increased by 30-fold, to 15 h, because of the necessity of testing 1000 possibilities. In contrast, the calculation time for spatial ICA remained as short as 30 min. In addition, spatial ICA detected an unexpected response that occurred by mistake. This advantage was validated in a third task, with 13 500 possibilities, in which participants had the freedom to choose when to make one of four responses. We conclude that spatial ICA is effective for detecting the onset of silent speech, especially when it occurs unexpectedly. © 2011 The Authors. European Journal of Neuroscience © 2011 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
Loos, G; Moreau, J; Miroir, J; Benhaïm, C; Biau, J; Caillé, C; Bellière, A; Lapeyre, M
2013-10-01
The various image-guided radiotherapy (IGRT) techniques raise the question of how to control patient positioning before each irradiation session and how to share tasks between radiation oncologists and radiotherapy technicians. We have put in place procedures and operating methods to partially delegate tasks to radiotherapy technicians and to secure the process in three situations: control by orthogonal kV imaging (kV-kV) of bony landmarks, control by kV-kV imaging of intraprostatic fiducial gold markers, and control by cone beam CT (CBCT) imaging for prostate cancer. These three IGRT techniques require a significant additional medical workload. Because of their competence in imaging, these daily controls can be delegated to radiotherapy technicians. However, to secure the process, initial training and regular evaluation are essential. Analysis of the comparison of kV-kV use on bone structures allowed us to achieve a partial delegation of control to radiotherapy technicians. Controlling the positioning of the prostate through the use and automatic registration of fiducial gold markers allows better tracking of the prostate and can easily be delegated to radiotherapy technicians. The analysis of the use of daily cone beam CT for patients treated with intensity-modulated irradiation is underway, and a comparison of practices between radiotherapy technicians and radiation oncologists is ongoing to determine whether a partial delegation of this control is possible. Copyright © 2013. Published by Elsevier SAS.
Spectral imaging perspective on cytomics.
Levenson, Richard M
2006-07-01
Cytomics involves the analysis of cellular morphology and molecular phenotypes, with reference to tissue architecture and to additional metadata. To this end, a variety of imaging and nonimaging technologies need to be integrated. Spectral imaging is proposed as a tool that can simplify and enrich the extraction of morphological and molecular information. Simple-to-use instrumentation is available that mounts on standard microscopes and can generate spectral image datasets with excellent spatial and spectral resolution; these can be exploited by sophisticated analysis tools. This report focuses on brightfield microscopy-based approaches. Cytological and histological samples were stained using nonspecific standard stains (Giemsa; hematoxylin and eosin (H&E)) or immunohistochemical (IHC) techniques employing three chromogens plus a hematoxylin counterstain. The samples were imaged using the Nuance system, a commercially available, liquid-crystal tunable-filter-based multispectral imaging platform. The resulting data sets were analyzed using spectral unmixing algorithms and/or learn-by-example classification tools. Spectral unmixing of Giemsa-stained guinea-pig blood films readily classified the major blood elements. Machine-learning classifiers were also successful at the same task, as well as in distinguishing normal from malignant regions in a colon-cancer example, and in delineating regions of inflammation in an H&E-stained kidney sample. In an example of a multiplexed IHC sample, brown, red, and blue chromogens were isolated into separate images without crosstalk or interference from the (also blue) hematoxylin counterstain. Cytomics requires both accurate architectural segmentation as well as multiplexed molecular imaging to associate molecular phenotypes with relevant cellular and tissue compartments. Multispectral imaging can assist in both these tasks, and conveys new utility to brightfield-based microscopy approaches. 
Copyright 2006 International Society for Analytical Cytology.
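The linear spectral unmixing step described in the abstract above can be sketched as a least-squares fit of per-pixel stain abundances; this is a minimal illustration, not the Nuance system's actual algorithm, and the endmember spectra `E` are invented for the example.

```python
import numpy as np

def unmix_pixel(spectrum, endmembers):
    """Least-squares estimate of abundances a with endmembers @ a ~ spectrum.

    endmembers: (n_bands, n_stains) matrix of reference stain spectra
    spectrum:   (n_bands,) measured pixel spectrum
    """
    a, *_ = np.linalg.lstsq(endmembers, spectrum, rcond=None)
    return np.clip(a, 0.0, None)  # clip tiny negatives so abundances stay physical

# Hypothetical 3-band, two-stain example (hematoxylin-like vs. chromogen-like)
E = np.array([[0.9, 0.1],
              [0.5, 0.5],
              [0.1, 0.9]])
pixel = E @ np.array([0.7, 0.3])     # mixed pixel with known abundances
abundances = unmix_pixel(pixel, E)   # recovers ~[0.7, 0.3]
```

Applied pixel-by-pixel, this separates each chromogen into its own abundance image, which is the basis of the crosstalk-free channel separation the abstract reports.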
Computer vision applications for coronagraphic optical alignment and image processing.
Savransky, Dmitry; Thomas, Sandrine J; Poyneer, Lisa A; Macintosh, Bruce A
2013-05-10
Modern coronagraphic systems require very precise alignment between optical components and can benefit greatly from automated image processing. We discuss three techniques commonly employed in the fields of computer vision and image analysis as applied to the Gemini Planet Imager, a new facility instrument for the Gemini South Observatory. We describe how feature extraction and clustering methods can be used to aid in automated system alignment tasks, and also present a search algorithm for finding regular features in science images used for calibration and data processing. Along with discussions of each technique, we present our specific implementation and show results of each one in operation.
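The feature-extraction-and-clustering idea used above for automated alignment can be illustrated with a toy routine that groups detected spot centroids into fiducial clusters; the greedy single-linkage grouping and the distance threshold are assumptions for illustration, not the Gemini Planet Imager implementation.

```python
import math

def cluster_points(points, max_dist=5.0):
    """Greedy single-linkage grouping of 2D feature points (e.g. spot centroids).

    Returns the centroid of each cluster; points within max_dist of any
    member of a cluster are absorbed into it.
    """
    clusters = []
    for p in points:
        placed = False
        for c in clusters:
            if any(math.dist(p, q) <= max_dist for q in c):
                c.append(p)
                placed = True
                break
        if not placed:
            clusters.append([p])
    return [tuple(sum(v) / len(c) for v in zip(*c)) for c in clusters]

# Four detected spots forming two fiducial groups
spots = [(10.0, 10.0), (11.0, 10.5), (50.0, 52.0), (51.0, 51.0)]
centers = cluster_points(spots)  # two cluster centroids
```

The cluster centroids can then be compared against their expected positions to drive an alignment correction.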
Hervey, Nathan; Khan, Bilal; Shagman, Laura; Tian, Fenghua; Delgado, Mauricio R; Tulchin-Francis, Kirsten; Shierk, Angela; Roberts, Heather; Smith, Linsley; Reid, Dahlia; Clegg, Nancy J; Liu, Hanli; MacFarlane, Duncan; Alexandrakis, George
2014-10-01
Recent studies have demonstrated functional near-infrared spectroscopy (fNIRS) to be a viable and sensitive method for imaging sensorimotor cortex activity in children with cerebral palsy (CP). However, during unilateral finger tapping, children with CP often exhibit unintended motions in the nontapping hand, known as mirror motions, which confuse the interpretation of resulting fNIRS images. This work presents a method for separating some of the mirror motion contributions to fNIRS images and demonstrates its application to fNIRS data from four children with CP performing a finger-tapping task with mirror motions. Finger motion and arm muscle activity were measured simultaneously with fNIRS signals using motion tracking and electromyography (EMG), respectively. Subsequently, subject-specific regressors were created from the motion capture or EMG data and independent component analysis was combined with a general linear model to create an fNIRS image representing activation due to the tapping hand and one image representing activation due to the mirror hand. The proposed method can provide information on how mirror motions contribute to fNIRS images, and in some cases, it helps remove mirror motion contamination from the tapping hand activation images.
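The regression step described above (a general linear model with a tapping-hand regressor and a subject-specific mirror-motion regressor) can be sketched on simulated data; the regressors, coefficients, and noise level below are invented, not taken from the paper's fNIRS recordings.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
tap = (np.sin(np.linspace(0, 8 * np.pi, n)) > 0).astype(float)  # tapping-hand block regressor
mirror = np.roll(tap, 7) * 0.5   # hypothetical mirror-motion regressor (from EMG/motion capture)
X = np.column_stack([tap, mirror, np.ones(n)])                  # GLM design matrix with intercept
y = 2.0 * tap + 1.0 * mirror + 0.3 + 0.05 * rng.standard_normal(n)  # simulated fNIRS channel

beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # GLM fit: beta ~ [2.0, 1.0, 0.3]
tapping_only = beta[0] * tap                  # activation attributed to the tapping hand
```

Separating the fitted contributions in this way yields one image per regressor, analogous to the tapping-hand and mirror-hand activation images in the study.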
DOT National Transportation Integrated Search
2012-02-01
One of the most important tasks in maintaining transportation facilities such as highways and streets is the evaluation of the existing condition. Visual evaluation by human inspectors is subjective in nature, therefore has issues of consistency ...
Lender, Anja; Meule, Adrian; Rinck, Mike; Brockmeyer, Timo; Blechert, Jens
2018-06-01
Strong implicit responses to food have evolved to avoid energy depletion but contribute to overeating in today's affluent environments. The Approach-Avoidance Task (AAT) supposedly assesses implicit biases in response to food stimuli: Participants push pictures on a monitor "away" or pull them "near" with a joystick that controls a corresponding image zoom. One version of the task couples movement direction with image content-independent features, for example, pulling blue-framed images and pushing green-framed images regardless of content ('irrelevant feature version'). However, participants might selectively attend to this feature and ignore image content and, thus, such a task setup might underestimate existing biases. The present study tested this attention account by comparing two irrelevant feature versions of the task with either a more peripheral (image frame color: green vs. blue) or central (small circle vs. cross overlaid over the image content) image feature as response instruction to a 'relevant feature version', in which participants responded to the image content, thus making it impossible to ignore that content. Images of chocolate-containing foods and of objects were used, and several trait and state measures were acquired to validate the obtained biases. Results revealed a robust approach bias towards food only in the relevant feature condition. Interestingly, a positive correlation with state chocolate craving during the task was found when all three conditions were combined, indicative of criterion validity of all three versions. However, no correlations were found with trait chocolate craving. Results provide a strong case for the relevant feature version of the AAT for bias measurement. They also point to several methodological avenues for future research around selective attention in the irrelevant versions and task validity regarding trait vs. state variables. Copyright © 2018 Elsevier Ltd. All rights reserved.
An fMRI study of sex differences in regional activation to a verbal and a spatial task.
Gur, R C; Alsop, D; Glahn, D; Petty, R; Swanson, C L; Maldjian, J A; Turetsky, B I; Detre, J A; Gee, J; Gur, R E
2000-09-01
Sex differences in cognitive performance have been documented, women performing better on some phonological tasks and men on spatial tasks. An earlier fMRI study suggested sex differences in distributed brain activation during phonological processing, with bilateral activation seen in women while men showed primarily left-lateralized activation. This blood oxygen level-dependent fMRI study examined sex differences (14 men, 13 women) in activation for a spatial task (judgment of line orientation) compared to a verbal-reasoning task (analogies) that does not typically show sex differences. Task difficulty was manipulated. Hypothesized ROI-based analysis documented the expected left-lateralized changes for the verbal task in the inferior parietal and planum temporal regions in both men and women, but only men showed right-lateralized increase for the spatial task in these regions. Image-based analysis revealed a distributed network of cortical regions activated by the tasks, which consisted of the lateral frontal, medial frontal, mid-temporal, occipitoparietal, and occipital regions. The activation was more left lateralized for the verbal and more right for the spatial tasks, but men also showed some left activation for the spatial task, which was not seen in women. Increased task difficulty produced more distributed activation for the verbal and more circumscribed activation for the spatial task. The results suggest that failure to activate the appropriate hemisphere in regions directly involved in task performance may explain certain sex differences in performance. They also extend, for a spatial task, the principle that bilateral activation in a distributed cognitive system underlies sex differences in performance. Copyright 2000 Academic Press.
Digital processing of Mariner 9 television data.
NASA Technical Reports Server (NTRS)
Green, W. B.; Seidman, J. B.
1973-01-01
The digital image processing performed by the Image Processing Laboratory (IPL) at JPL in support of the Mariner 9 mission is summarized. The support is divided into the general categories of image decalibration (the removal of photometric and geometric distortions from returned imagery), computer cartographic projections in support of mapping activities, and adaptive experimenter support (flexible support to provide qualitative digital enhancements and quantitative data reduction of returned imagery). Among the tasks performed were the production of maximum discriminability versions of several hundred frames to support generation of a geodetic control net for Mars, and special enhancements supporting analysis of Phobos and Deimos images.
Documentation of operational protocol for the use of MAMA software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwartz, Daniel S.
2016-01-21
Image analysis of Scanning Electron Microscope (SEM) micrographs is a complex process that can vary significantly between analysts. The factors causing the variation are numerous, and the purpose of Task 2b is to develop and test a set of protocols designed to minimize variation in image analysis between different analysts and laboratories, specifically using the MAMA software package, Version 2.1. The protocols were designed to be “minimally invasive”, so that expert SEM operators will not be overly constrained in the way they analyze particle samples. The protocols will be tested using a round-robin approach where results from expert SEM users at Los Alamos National Laboratory, Lawrence Livermore National Laboratory, Pacific Northwest National Laboratory, Savannah River National Laboratory, and the National Institute of Standards and Technology will be compared. The variation of the results will be used to quantify uncertainty in the particle image analysis process. The round-robin exercise will proceed with 3 levels of rigor, each with their own set of protocols, as described below in Tasks 2b.1, 2b.2, and 2b.3. The uncertainty will be developed using NIST standard reference material SRM 1984 “Thermal Spray Powder – Particle Size Distribution, Tungsten Carbide/Cobalt (Acicular)” [Reference 1]. Full details are available in the Certificate of Analysis, posted on the NIST website (http://www.nist.gov/srm/).
NASA Astrophysics Data System (ADS)
Ghaly, Michael; Du, Yong; Links, Jonathan M.; Frey, Eric C.
2016-03-01
In SPECT imaging, collimators are a major factor limiting image quality and largely determine the noise and resolution of SPECT images. In this paper, we seek the collimator with the optimal tradeoff between image noise and resolution with respect to performance on two tasks related to myocardial perfusion SPECT: perfusion defect detection and joint detection and localization. We used the Ideal Observer (IO) operating on realistic background-known-statistically (BKS) and signal-known-exactly (SKE) data. The areas under the receiver operating characteristic (ROC) and localization ROC (LROC) curves (AUCd, AUCd+l), respectively, were used as the figures of merit for both tasks. We used a previously developed population of 54 phantoms based on the eXtended Cardiac Torso Phantom (XCAT) that included variations in gender, body size, heart size and subcutaneous adipose tissue level. For each phantom, organ uptakes were varied randomly based on distributions observed in patient data. We simulated perfusion defects at six different locations with extents and severities of 10% and 25%, respectively, which represented challenging but clinically relevant defects. The extent and severity are, respectively, the perfusion defect’s fraction of the myocardial volume and reduction of uptake relative to the normal myocardium. Projection data were generated using an analytical projector that modeled attenuation, scatter, and collimator-detector response effects, a 9% energy resolution at 140 keV, and a 4 mm full-width at half maximum (FWHM) intrinsic spatial resolution. We investigated a family of eight parallel-hole collimators that spanned a large range of sensitivity-resolution tradeoffs. For each collimator and defect location, the IO test statistics were computed using a Markov Chain Monte Carlo (MCMC) method for an ensemble of 540 pairs of defect-present and -absent images that included the aforementioned anatomical and uptake variability. 
Sets of test statistics were computed for both tasks and analyzed using ROC and LROC analysis methodologies. The results of this study suggest that collimators with somewhat poorer resolution and higher sensitivity than those of a typical low-energy high-resolution (LEHR) collimator were optimal for both defect detection and joint detection and localization tasks in myocardial perfusion SPECT for the range of defect sizes investigated. This study also indicates that optimizing instrumentation for a detection task may provide near-optimal performance on the more challenging detection-localization task.
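The area-under-the-ROC-curve figure of merit used above can be computed nonparametrically from the Ideal Observer's test statistics via the Wilcoxon-Mann-Whitney statistic; a minimal sketch with invented scores:

```python
def auc_from_scores(present, absent):
    """Empirical AUC: the probability that a defect-present test statistic
    exceeds a defect-absent one, with ties counted as 1/2 (the
    Wilcoxon-Mann-Whitney estimator)."""
    wins = 0.0
    for p in present:
        for a in absent:
            if p > a:
                wins += 1.0
            elif p == a:
                wins += 0.5
    return wins / (len(present) * len(absent))

# Invented test statistics for defect-present and defect-absent images
auc = auc_from_scores([2.0, 3.0, 3.0], [1.0, 3.0])  # 4/6 ~ 0.667
```

Repeating this for each collimator gives the AUCd values whose maximum identifies the optimal sensitivity-resolution tradeoff.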
NASA Astrophysics Data System (ADS)
Borodinov, A. A.; Myasnikov, V. V.
2018-04-01
The present work compares the accuracy of known classification algorithms in the task of recognizing local objects in radar images under various image preprocessing methods. Preprocessing involves speckle noise filtering and normalization of the object orientation in the image, both by the method of image moments and by a method based on the Hough transform. The following classification algorithms are compared: decision tree, support vector machine, AdaBoost, and random forest. Principal component analysis is used to reduce the dimension. The research is carried out on objects from the MSTAR radar image database. The paper presents the results of the conducted studies.
Recent developments in imaging system assessment methodology, FROC analysis and the search model.
Chakraborty, Dev P
2011-08-21
A frequent problem in imaging is assessing whether a new imaging system is an improvement over an existing standard. Observer performance methods, in particular the receiver operating characteristic (ROC) paradigm, are widely used in this context. In ROC analysis lesion location information is not used and consequently scoring ambiguities can arise in tasks, such as nodule detection, involving finding localized lesions. This paper reviews progress in the free-response ROC (FROC) paradigm in which the observer marks and rates suspicious regions and the location information is used to determine whether lesions were correctly localized. Reviewed are FROC data analysis, a search-model for simulating FROC data, predictions of the model and a method for estimating the parameters. The search model parameters are physically meaningful quantities that can guide system optimization.
Classification images for localization performance in ramp-spectrum noise.
Abbey, Craig K; Samuelson, Frank W; Zeng, Rongping; Boone, John M; Eckstein, Miguel P; Myers, Kyle
2018-05-01
This study investigates forced localization of targets in simulated images with statistical properties similar to trans-axial sections of x-ray computed tomography (CT) volumes. A total of 24 imaging conditions are considered, comprising two target sizes, three levels of background variability, and four levels of frequency apodization. The goal of the study is to better understand how human observers perform forced-localization tasks in images with CT-like statistical properties. The transfer properties of CT systems are modeled by a shift-invariant transfer function in addition to apodization filters that modulate high spatial frequencies. The images contain noise that is the combination of a ramp-spectrum component, simulating the effect of acquisition noise in CT, and a power-law component, simulating the effect of normal anatomy in the background, which are modulated by the apodization filter as well. Observer performance is characterized using two psychophysical techniques: efficiency analysis and classification image analysis. Observer efficiency quantifies how much diagnostic information is being used by observers to perform a task, and classification images show how that information is being accessed in the form of a perceptual filter. Psychophysical studies from five subjects form the basis of the results. Observer efficiency ranges from 29% to 77% across the different conditions. The lowest efficiency is observed in conditions with uniform backgrounds, where significant effects of apodization are found. The classification images, estimated using smoothing windows, suggest that human observers use center-surround filters to perform the task, and these are subjected to a number of subsequent analyses. When implemented as a scanning linear filter, the classification images appear to capture most of the observer variability in efficiency (r² = 0.86).
The frequency spectra of the classification images show that frequency weights generally appear bandpass in nature, with peak frequency and bandwidth that vary with statistical properties of the images. In these experiments, the classification images appear to capture important features of human-observer performance. Frequency apodization only appears to have a significant effect on performance in the absence of anatomical variability, where the observers appear to underweight low spatial frequencies that have relatively little noise. Frequency weights derived from the classification images generally have a bandpass structure, with adaptation to different conditions seen in the peak frequency and bandwidth. The classification image spectra show relatively modest changes in response to different levels of apodization, with some evidence that observers are attempting to rebalance the apodized spectrum presented to them. © 2018 American Association of Physicists in Medicine.
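In its simplest form, the classification-image estimate discussed above reduces to differencing the average noise fields grouped by the observer's response; a simulated one-dimensional sketch (the observer template and trial count are invented, and no smoothing window is applied):

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_pix = 5000, 16

# Hypothetical observer template: a small "perceptual filter" over pixels 6..9
template = np.zeros(n_pix)
template[6:10] = 1.0

noise = rng.standard_normal((n_trials, n_pix))   # noise-only stimuli
responses = noise @ template > 0                 # simulated observer's yes/no answers

# Classification image: mean noise on "yes" trials minus mean noise on "no" trials
ci = noise[responses].mean(axis=0) - noise[~responses].mean(axis=0)
# ci recovers the template up to scale: large at pixels 6..9, near zero elsewhere
```

In the study the same logic is applied to two-dimensional stimuli at the chosen location in a forced-localization trial, which is what reveals the center-surround structure of the human perceptual filter.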
Intensity-Based Registration for Lung Motion Estimation
NASA Astrophysics Data System (ADS)
Cao, Kunlin; Ding, Kai; Amelon, Ryan E.; Du, Kaifang; Reinhardt, Joseph M.; Raghavan, Madhavan L.; Christensen, Gary E.
Image registration plays an important role within pulmonary image analysis. The task of registration is to find the spatial mapping that brings two images into alignment. Registration algorithms designed for matching 4D lung scans or two 3D scans acquired at different inflation levels can catch the temporal changes in position and shape of the region of interest. Accurate registration is critical to post-analysis of lung mechanics and motion estimation. In this chapter, we discuss lung-specific adaptations of intensity-based registration methods for 3D/4D lung images and review approaches for assessing registration accuracy. Then we introduce methods for estimating tissue motion and studying lung mechanics. Finally, we discuss methods for assessing and quantifying specific volume change, specific ventilation, strain/stretch information and lobar sliding.
Task performance in astronomical adaptive optics
NASA Astrophysics Data System (ADS)
Barrett, Harrison H.; Myers, Kyle J.; Devaney, Nicholas; Dainty, J. C.; Caucci, Luca
2006-06-01
In objective or task-based assessment of image quality, figures of merit are defined by the performance of some specific observer on some task of scientific interest. This methodology is well established in medical imaging but is just beginning to be applied in astronomy. In this paper we survey the theory needed to understand the performance of ideal or ideal-linear (Hotelling) observers on detection tasks with adaptive-optical data. The theory is illustrated by discussing its application to detection of exoplanets from a sequence of short-exposure images.
NASA Astrophysics Data System (ADS)
Al-Durgham, K.; Lichti, D. D.; Detchev, I.; Kuntze, G.; Ronsky, J. L.
2018-05-01
A fundamental task in photogrammetry is the temporal stability analysis of a camera/imaging-system's calibration parameters. This is essential to validate the repeatability of the parameters' estimation, to detect any behavioural changes in the camera/imaging system and to ensure precise photogrammetric products. Many stability analysis methods exist in the photogrammetric literature; each one has different methodological bases, and advantages and disadvantages. This paper presents a simple and rigorous stability analysis method that can be straightforwardly implemented for a single camera or an imaging system with multiple cameras. The basic collinearity model is used to capture differences between two calibration datasets, and to establish the stability analysis methodology. Geometric simulation is used as a tool to derive image and object space scenarios. Experiments were performed on real calibration datasets from a dual fluoroscopy (DF; X-ray-based) imaging system. The calibration data consisted of hundreds of images and thousands of image observations from six temporal points over a two-day period for a precise evaluation of the DF system stability. The stability of the DF system - for a single camera analysis - was found to be within a range of 0.01 to 0.66 mm in terms of 3D coordinates root-mean-square-error (RMSE), and 0.07 to 0.19 mm for dual cameras analysis. To the authors' best knowledge, this work is the first to address the topic of DF stability analysis.
Frontal–Occipital Connectivity During Visual Search
Pantazatos, Spiro P.; Yanagihara, Ted K.; Zhang, Xian; Meitzler, Thomas
2012-01-01
Abstract Although expectation- and attention-related interactions between ventral and medial prefrontal cortex and stimulus category-selective visual regions have been identified during visual detection and discrimination, it is not known if similar neural mechanisms apply to other tasks such as visual search. The current work tested the hypothesis that high-level frontal regions, previously implicated in expectation and visual imagery of object categories, interact with visual regions associated with object recognition during visual search. Using functional magnetic resonance imaging, subjects searched for a specific object that varied in size and location within a complex natural scene. A model-free, spatial-independent component analysis isolated multiple task-related components, one of which included visual cortex, as well as a cluster within ventromedial prefrontal cortex (vmPFC), consistent with the engagement of both top-down and bottom-up processes. Analyses of psychophysiological interactions showed increased functional connectivity between vmPFC and object-sensitive lateral occipital cortex (LOC), and results from dynamic causal modeling and Bayesian Model Selection suggested bidirectional connections between vmPFC and LOC that were positively modulated by the task. Using image-guided diffusion-tensor imaging, functionally seeded, probabilistic white-matter tracts between vmPFC and LOC, which presumably underlie this effective interconnectivity, were also observed. These connectivity findings extend previous models of visual search processes to include specific frontal–occipital neuronal interactions during a natural and complex search task. PMID:22708993
Parietal and frontal object areas underlie perception of object orientation in depth.
Niimi, Ryosuke; Saneyoshi, Ayako; Abe, Reiko; Kaminaga, Tatsuro; Yokosawa, Kazuhiko
2011-05-27
Recent studies have shown that the human parietal and frontal cortices are involved in object image perception. We hypothesized that the parietal/frontal object areas play a role in differentiating the orientations (i.e., views) of an object. By using functional magnetic resonance imaging, we compared brain activations while human observers differentiated between two object images in depth-orientation (orientation task) and activations while they differentiated the images in object identity (identity task). The left intraparietal area, right angular gyrus, and right inferior frontal areas were activated more for the orientation task than for the identity task. The occipitotemporal object areas, however, were activated equally for the two tasks. No region showed greater activation for the identity task. These results suggested that the parietal/frontal object areas encode view-dependent visual features and underlie object orientation perception. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Multi-object segmentation framework using deformable models for medical imaging analysis.
Namías, Rafael; D'Amato, Juan Pablo; Del Fresno, Mariana; Vénere, Marcelo; Pirró, Nicola; Bellemare, Marc-Emmanuel
2016-08-01
Segmenting structures of interest in medical images is an important step in different tasks such as visualization, quantitative analysis, simulation, and image-guided surgery, among several other clinical applications. Numerous segmentation methods have been developed in the past three decades for extraction of anatomical or functional structures on medical imaging. Deformable models, which include the active contour models or snakes, are among the most popular methods for image segmentation combining several desirable features such as inherent connectivity and smoothness. Even though different approaches have been proposed and significant work has been dedicated to the improvement of such algorithms, there are still challenging research directions such as the simultaneous extraction of multiple objects and the integration of individual techniques. This paper presents a novel open-source framework called deformable model array (DMA) for the segmentation of multiple and complex structures of interest in different imaging modalities. While most active contour algorithms can extract one region at a time, DMA allows integrating several deformable models to deal with multiple segmentation scenarios. Moreover, it is possible to consider any existing explicit deformable model formulation and even to incorporate new active contour methods, allowing a suitable combination to be selected under different conditions. The framework also introduces a control module that coordinates the cooperative evolution of the snakes and is able to solve interaction issues toward the segmentation goal. Thus, DMA can implement complex object and multi-object segmentations in both 2D and 3D using the contextual information derived from the model interaction. These are important features for several medical image analysis tasks in which different but related objects need to be simultaneously extracted.
Experimental results on both computed tomography and magnetic resonance imaging show that the proposed framework has a wide range of applications especially in the presence of adjacent structures of interest or under intra-structure inhomogeneities giving excellent quantitative results.
Ganalyzer: A tool for automatic galaxy image analysis
NASA Astrophysics Data System (ADS)
Shamir, Lior
2011-05-01
Ganalyzer is a model-based tool that automatically analyzes and classifies galaxy images. Ganalyzer works by separating the galaxy pixels from the background pixels, finding the center and radius of the galaxy, generating the radial intensity plot, and then computing the slopes of the peaks detected in the radial intensity plot to measure the spirality of the galaxy and determine its morphological class. Unlike algorithms that are based on machine learning, Ganalyzer is based on measuring the spirality of the galaxy, a task that is difficult to perform manually, and in many cases can provide a more accurate analysis compared to manual observation. Ganalyzer is simple to use, and can be easily embedded into other image analysis applications. Another advantage is its speed, which allows it to analyze ~10,000,000 galaxy images in five days using a standard modern desktop computer. These capabilities can make Ganalyzer a useful tool in analyzing large datasets of galaxy images collected by autonomous sky surveys such as SDSS, LSST or DES.
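The radial intensity plot at the core of the Ganalyzer pipeline described above can be sketched by sampling image intensity on circles around the galaxy center; a toy version on a synthetic ring image (the sampling scheme is an assumption for illustration, not Ganalyzer's exact code):

```python
import numpy as np

def radial_intensity(img, cx, cy, radius, n_angles=360):
    """Sample intensity on a circle of given radius around (cx, cy);
    stacking such rows over radii gives a radial intensity plot."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    xs = np.clip(np.round(cx + radius * np.cos(theta)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.round(cy + radius * np.sin(theta)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs]

# Toy image: a bright ring of radius 8 pixels around the center of a 64x64 frame
yy, xx = np.mgrid[0:64, 0:64]
r = np.hypot(xx - 32, yy - 32)
img = (np.abs(r - 8) < 1.2).astype(float)

on_ring = radial_intensity(img, 32, 32, 8)    # ~1 everywhere on the ring
off_ring = radial_intensity(img, 32, 32, 16)  # ~0 away from the ring
```

In Ganalyzer the peaks detected in such rows trace the spiral arms, and the slopes of those peaks across radii measure the galaxy's spirality.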
Task relevance modulates the cortical representation of feature conjunctions in the target template.
Reeder, Reshanne R; Hanke, Michael; Pollmann, Stefan
2017-07-03
Little is known about the cortical regions involved in representing task-related content in preparation for visual task performance. Here we used representational similarity analysis (RSA) to investigate the BOLD response pattern similarity between task relevant and task irrelevant feature dimensions during conjunction viewing and target template maintenance prior to visual search. Subjects were cued to search for a spatial frequency (SF) or orientation of a Gabor grating and we measured BOLD signal during cue and delay periods before the onset of a search display. RSA of delay period activity revealed that widespread regions in frontal, posterior parietal, and occipitotemporal cortices showed general representational differences between task relevant and task irrelevant dimensions (e.g., orientation vs. SF). In contrast, RSA of cue period activity revealed sensory-related representational differences between cue images (regardless of task) at the occipital pole and additionally in the frontal pole. Our data show that task and sensory information are represented differently during viewing and during target template maintenance, and that task relevance modulates the representation of visual information across the cortex.
Large-area settlement pattern recognition from Landsat-8 data
NASA Astrophysics Data System (ADS)
Wieland, Marc; Pittore, Massimiliano
2016-09-01
The study presents an image processing and analysis pipeline that combines object-based image analysis with a Support Vector Machine to derive a multi-layered settlement product from Landsat-8 data over large areas. 43 image scenes are processed over large parts of Central Asia (Southern Kazakhstan, Kyrgyzstan, Tajikistan and Eastern Uzbekistan). The main tasks tackled by this work include built-up area identification, settlement type classification and urban structure types pattern recognition. Besides commonly used accuracy assessments of the resulting map products, thorough performance evaluations are carried out under varying conditions to tune algorithm parameters and assess their applicability for the given tasks. As part of this, several research questions are being addressed. In particular the influence of the improved spatial and spectral resolution of Landsat-8 on the SVM performance to identify built-up areas and urban structure types are evaluated. Also the influence of an extended feature space including digital elevation model features is tested for mountainous regions. Moreover, the spatial distribution of classification uncertainties is analyzed and compared to the heterogeneity of the building stock within the computational unit of the segments. The study concludes that the information content of Landsat-8 images is sufficient for the tested classification tasks and even detailed urban structures could be extracted with satisfying accuracy. Freely available ancillary settlement point location data could further improve the built-up area classification. Digital elevation features and pan-sharpening could, however, not significantly improve the classification results. The study highlights the importance of dynamically tuned classifier parameters, and underlines the use of Shannon entropy computed from the soft answers of the SVM as a valid measure of the spatial distribution of classification uncertainties.
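The Shannon entropy of the SVM's soft answers, used above as a measure of classification uncertainty, is straightforward to compute per segment; a minimal sketch with invented class-probability vectors:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy (in bits) of a classifier's soft class-membership
    probabilities; higher entropy flags segments with ambiguous classification."""
    return -sum(p * math.log2(p) for p in probs if p > 0.0)

# Invented soft answers for one confident and one ambiguous segment (4 classes)
confident = shannon_entropy([0.97, 0.01, 0.01, 0.01])  # low entropy
uncertain = shannon_entropy([0.25, 0.25, 0.25, 0.25])  # maximal entropy: 2 bits
```

Mapping this value per segment produces the spatial distribution of classification uncertainty analyzed in the study.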
NASA Astrophysics Data System (ADS)
Perner, Petra
2017-03-01
Molecular image-based techniques are widely used in medicine to detect specific diseases. Diagnosis by visual appearance ("look diagnosis") is an important issue, and analysis of the eye also plays an important role in detecting specific diseases. These are important topics in medicine, and their standardization through an automatic system can be a new and challenging field for machine vision. Compared to iris recognition, iris diagnosis places much higher demands on image acquisition and on the interpretation of the iris. Iris diagnosis (iridology) is the investigation and analysis of the colored part of the eye, the iris, to discover factors that play an important role in the prevention and treatment of illnesses, as well as in the preservation of optimum health. An automatic system would pave the way for a much wider use of iris diagnosis, both for diagnosing illnesses and for individual health protection. With this paper, we describe our work towards an automatic iris diagnosis system. We describe the image acquisition and the problems associated with it, and explain different approaches to image acquisition and image preprocessing. We describe the image analysis method for detecting the iris and give the meta-model for image interpretation. Based on this model we show the many tasks for image analysis, ranging from image-object feature analysis and spatial image analysis to color image analysis. Our first results for the recognition of the iris are given. We describe how the pupil and unwanted lamp spots are detected, and explain how orange-blue spots in the iris are recognized and matched against the topological map of the iris. Finally, we give an outlook on further work.
The neural correlates of learned motor acuity
Yang, Juemin; Caffo, Brian; Mazzoni, Pietro; Krakauer, John W.
2014-01-01
We recently defined a component of motor skill learning as “motor acuity,” quantified as a shift in the speed-accuracy trade-off function for a task. These shifts are primarily driven by reductions in movement variability. To determine the neural correlates of improvement in motor acuity, we devised a motor task compatible with magnetic resonance brain imaging that required subjects to make finely controlled wrist movements under visual guidance. Subjects were imaged on day 1 and day 5 while they performed this task and were trained outside the scanner on intervening days 2, 3, and 4. The potential confound of performance changes between days 1 and 5 was avoided by constraining movement time to a fixed duration. After training, subjects showed a marked increase in success rate and a reduction in trial-by-trial variability for the trained task but not for an untrained control task, without changes in mean trajectory. The decrease in variability for the trained task was associated with increased activation in contralateral primary motor and premotor cortical areas and in ipsilateral cerebellum. A global nonlocalizing multivariate analysis confirmed that learning was associated with increased overall brain activation. We suggest that motor acuity is acquired through increases in the number of neurons recruited in contralateral motor cortical areas and in ipsilateral cerebellum, which could reflect increased signal-to-noise ratio in motor output and improved state estimation for feedback corrections, respectively. PMID:24848466
Ferreira Junior, José Raniery; Oliveira, Marcelo Costa; de Azevedo-Marques, Paulo Mazzoncini
2016-12-01
Lung cancer is the leading cause of cancer-related deaths in the world, and its main manifestation is pulmonary nodules. Detection and classification of pulmonary nodules are challenging tasks that must be done by qualified specialists, but image interpretation errors make those tasks difficult. In order to aid radiologists in those hard tasks, it is important to integrate computer-based tools with the lesion detection, pathology diagnosis, and image interpretation processes. However, computer-aided diagnosis research faces the problem of not having enough shared medical reference data for the development, testing, and evaluation of computational methods for diagnosis. In order to minimize this problem, this paper presents a public nonrelational document-oriented cloud-based database of pulmonary nodules characterized by 3D texture attributes, identified by experienced radiologists and classified by the same specialists along nine different subjective characteristics. Our goal with the development of this database is to improve computer-aided lung cancer diagnosis and pulmonary nodule detection and classification research through the deployment of this database in a cloud Database-as-a-Service framework. Pulmonary nodule data were provided by the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI), image descriptors were acquired by volumetric texture analysis, and the database schema was developed using a document-oriented Not only Structured Query Language (NoSQL) approach. The proposed database currently holds 379 exams, 838 nodules, and 8,237 images, of which 4,029 are CT scans and 4,208 are manually segmented nodules, and it is hosted in a MongoDB instance on a cloud infrastructure.
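A document-oriented record for one nodule might look like the following sketch (all field names and values are hypothetical illustrations of a NoSQL schema, not the actual database layout):

```python
import json

# Hypothetical document for one pulmonary nodule (field names are
# illustrative, not the paper's actual schema).
nodule_doc = {
    "exam_id": "LIDC-IDRI-0001",
    "nodule_id": 1,
    "subjective_characteristics": {       # rated by radiologists
        "subtlety": 4, "spiculation": 2, "malignancy": 3,
    },
    "texture_attributes_3d": {            # volumetric texture descriptors
        "energy": 0.12, "entropy": 5.8, "contrast": 41.7,
    },
    "images": ["ct_0001_slice_087.dcm", "ct_0001_slice_088.dcm"],
}

# Document databases store such records as JSON-like documents; a round
# trip through JSON shows the record is self-describing and schema-free.
serialized = json.dumps(nodule_doc)
restored = json.loads(serialized)
```

In MongoDB such a dictionary could be inserted as-is into a collection, which is what makes the document-oriented approach convenient for heterogeneous nodule annotations.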
Task-Driven Orbit Design and Implementation on a Robotic C-Arm System for Cone-Beam CT.
Ouadah, S; Jacobson, M; Stayman, J W; Ehtiati, T; Weiss, C; Siewerdsen, J H
2017-03-01
This work applies task-driven optimization to the design of non-circular orbits that maximize imaging performance for a particular imaging task. First implementation of task-driven imaging on a clinical robotic C-arm system is demonstrated, and a framework for orbit calculation is described and evaluated. We implemented a task-driven imaging framework to optimize orbit parameters that maximize detectability index d'. This framework utilizes a specified Fourier domain task function and an analytical model for system spatial resolution and noise. Two experiments were conducted to test the framework. First, a simple task was considered consisting of frequencies lying entirely on the fz-axis (e.g., discrimination of structures oriented parallel to the central axial plane), and a "circle + arc" orbit was incorporated into the framework as a means to improve sampling of these frequencies, and thereby increase task-based detectability. The orbit was implemented on a robotic C-arm (Artis Zeego, Siemens Healthcare). A second task considered visualization of a cochlear implant simulated within a head phantom, with spatial frequency response emphasizing high-frequency content in the (fy, fz) plane of the cochlea. An optimal orbit was computed using the task-driven framework, and the resulting image was compared to that for a circular orbit. For the fz-axis task, the circle + arc orbit was shown to increase d' by a factor of 1.20, with an improvement of 0.71 mm in a 3D edge-spread measurement for edges located far from the central plane and a decrease in streak artifacts compared to a circular orbit. For the cochlear implant task, the resulting orbit favored complementary views of high tilt angles in a 360° orbit, and d' was increased by a factor of 1.83. This work shows that a prospective definition of imaging task can be used to optimize source-detector orbit and improve imaging performance.
The method was implemented for execution of non-circular, task-driven orbits on a clinical robotic C-arm system. The framework is sufficiently general to include both acquisition parameters (e.g., orbit, kV, and mA selection) and reconstruction parameters (e.g., a spatially varying regularizer).
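The Fourier-domain detectability index driving the optimization can be sketched in a simplified 1D form (the MTF, NPS, and task function below are assumed toy models, not the paper's system models):

```python
import numpy as np

def detectability_index(mtf, task_w, nps, df):
    """Prewhitening detectability index from Fourier-domain models:
    d'^2 = integral of MTF^2 * |W_task|^2 / NPS over frequency.
    (A simplified 1D sketch of this class of figure of merit.)"""
    return float(np.sqrt(np.sum(mtf**2 * task_w**2 / nps) * df))

f = np.linspace(0.01, 2.0, 200)            # spatial frequency (cycles/mm)
df = f[1] - f[0]
mtf = np.exp(-f / 1.0)                     # assumed system MTF
nps = 1.0 / (1.0 + f)                      # assumed noise-power spectrum
task = np.exp(-((f - 0.5) ** 2) / 0.05)    # mid-frequency task function

d_circular = detectability_index(mtf, task, nps, df)
# A hypothetical orbit that improves sampling of the task frequencies
# raises the effective MTF there, and d' rises proportionally:
d_improved = detectability_index(1.2 * mtf, task, nps, df)
```

An orbit optimizer of the kind described would search over orbit parameters for the acquisition that maximizes this d' for the specified task function.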
High-Speed Real-Time Resting-State fMRI Using Multi-Slab Echo-Volumar Imaging
Posse, Stefan; Ackley, Elena; Mutihac, Radu; Zhang, Tongsheng; Hummatov, Ruslan; Akhtari, Massoud; Chohan, Muhammad; Fisch, Bruce; Yonas, Howard
2013-01-01
We recently demonstrated that ultra-high-speed real-time fMRI using multi-slab echo-volumar imaging (MEVI) significantly increases sensitivity for mapping task-related activation and resting-state networks (RSNs) compared to echo-planar imaging (Posse et al., 2012). In the present study we characterize the sensitivity of MEVI for mapping RSN connectivity dynamics, comparing independent component analysis (ICA) and a novel seed-based connectivity analysis (SBCA) that combines sliding-window correlation analysis with meta-statistics. This SBCA approach is shown to minimize the effects of confounds, such as movement and CSF and white matter signal changes, and enables real-time monitoring of RSN dynamics at time scales of tens of seconds. We demonstrate highly sensitive mapping of eloquent cortex in the vicinity of brain tumors and arterio-venous malformations, and detection of abnormal resting-state connectivity in epilepsy. In patients with motor impairment, resting-state fMRI provided focal localization of sensorimotor cortex compared with more diffuse activation in task-based fMRI. The fast acquisition speed of MEVI enabled segregation of cardiac-related signal pulsation using ICA, which revealed distinct regional differences in pulsation amplitude and waveform, elevated signal pulsation in patients with arterio-venous malformations, and a trend toward reduced pulsatility in gray matter of patients compared with healthy controls. Mapping cardiac pulsation in cortical gray matter may carry important functional information that distinguishes healthy from diseased tissue vasculature. This novel fMRI methodology is particularly promising for mapping eloquent cortex in patients with neurological disease, who have variable degrees of cooperation in task-based fMRI. In conclusion, ultra-high-speed real-time fMRI enhances the sensitivity of mapping the dynamics of resting-state connectivity and cerebro-vascular pulsatility for clinical and neuroscience research applications.
PMID:23986677
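The sliding-window correlation component of such a seed-based connectivity analysis can be sketched as follows (synthetic ROI time series; the meta-statistics and confound regression of the actual SBCA are omitted):

```python
import numpy as np

def sliding_window_corr(x, y, win):
    """Pearson correlation between two ROI time series in sliding windows."""
    return np.array([np.corrcoef(x[i:i + win], y[i:i + win])[0, 1]
                     for i in range(len(x) - win + 1)])

rng = np.random.default_rng(0)
t = np.arange(300) * 0.1                 # fast sampling, as with MEVI
shared = np.sin(0.5 * t)                 # common network fluctuation
x = shared + 0.3 * rng.standard_normal(t.size)   # seed ROI signal
y = shared + 0.3 * rng.standard_normal(t.size)   # target ROI signal
corr = sliding_window_corr(x, y, win=50)
```

High sampling rates shorten the usable window, which is what allows connectivity dynamics to be tracked at time scales of tens of seconds.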
Iterative filtering decomposition based on local spectral evolution kernel
Wang, Yang; Wei, Guo-Wei; Yang, Siyang
2011-01-01
Synthesizing information, achieving understanding, and deriving insight from increasingly massive, time-varying, noisy, and possibly conflicting data sets are some of the most challenging tasks in the present information age. Traditional technologies, such as the Fourier transform and wavelet multi-resolution analysis, are inadequate to handle all of the above-mentioned tasks. Empirical mode decomposition (EMD) has emerged as a powerful new tool for resolving many challenging problems in data processing and analysis. Recently, an iterative filtering decomposition (IFD) has been introduced to address the stability and efficiency problems of the EMD. Another data analysis technique is the local spectral evolution kernel (LSEK), which provides a near-perfect low-pass filter with desirable time-frequency localizations. The present work utilizes the LSEK to further stabilize the IFD, and offers an efficient, flexible and robust scheme for information extraction, complexity reduction, and signal and image understanding. The performance of the present LSEK-based IFD is intensively validated over a wide range of data processing tasks, including mode decomposition, analysis of time-varying data, information extraction from nonlinear dynamic systems, etc. The utility, robustness and usefulness of the proposed LSEK-based IFD are demonstrated via a large number of applications, such as the analysis of stock market data, the decomposition of ocean wave magnitudes, the understanding of physiologic signals and information recovery from noisy images. The performance of the proposed method is compared with that of existing methods in the literature. Our results indicate that the LSEK-based IFD improves both the efficiency and the stability of conventional EMD algorithms. PMID:22350559
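The core iterative filtering idea, repeatedly subtracting a local mean so that only the fastest oscillation remains, can be sketched with a simple moving-average low-pass filter (a toy stand-in for the LSEK filter, on a synthetic two-tone signal):

```python
import numpy as np

def moving_average(x, k):
    """Uniform low-pass filter of width 2k+1 with periodic (wrapped) edges."""
    mask = np.ones(2 * k + 1) / (2 * k + 1)
    return np.convolve(np.pad(x, k, mode="wrap"), mask, mode="valid")

def first_imf(x, k=6, n_iter=10):
    """Iterative filtering sketch: repeatedly remove the local mean
    (a double moving average, whose frequency response is nonnegative)
    so that the fast, IMF-like component remains."""
    imf = x.copy()
    for _ in range(n_iter):
        local_mean = moving_average(moving_average(imf, k), k)
        imf = imf - local_mean
    return imf

t = np.linspace(0, 1, 512, endpoint=False)
fast = np.sin(2 * np.pi * 40 * t)        # high-frequency component
slow = np.sin(2 * np.pi * 2 * t)         # low-frequency trend
x = fast + slow
imf1 = first_imf(x)                      # recovers the fast component
trend = x - imf1                         # remainder approximates the trend
```

The stability issue the paper addresses is visible here in miniature: the iteration only converges when the low-pass filter's response stays in [0, 1], which is why the filter choice (LSEK in the paper) matters.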
New approach for cognitive analysis and understanding of medical patterns and visualizations
NASA Astrophysics Data System (ADS)
Ogiela, Marek R.; Tadeusiewicz, Ryszard
2003-11-01
This paper presents new opportunities for applying linguistic description of picture merit content and AI methods to the task of automatic understanding of image semantics in intelligent medical information systems. Successfully obtaining the crucial semantic content of a medical image may contribute considerably to the creation of new intelligent multimedia cognitive medical systems. Thanks to the new idea of cognitive resonance between the stream of data extracted from the image using linguistic methods and expectations taken from the representation of medical knowledge, it is possible to understand the merit content of the image even if the form of the image is very different from any known pattern. This article demonstrates that structural techniques of artificial intelligence may be applied to tasks of automatic classification and machine perception based on semantic pattern content in order to determine the semantic meaning of the patterns. The paper describes some examples of applying such techniques in the creation of cognitive vision systems for selected classes of medical images. On the basis of the scientific research described in the paper, we are trying to build new systems for collecting, storing, retrieving, and intelligently interpreting selected medical images, especially those obtained in radiological and MRI examinations.
Object recognition based on Google's reverse image search and image similarity
NASA Astrophysics Data System (ADS)
Horváth, András.
2015-12-01
Image classification is one of the most challenging tasks in computer vision, and a general multiclass classifier could solve many different tasks in image processing. Classification is usually done by shallow learning for predefined objects, which is a difficult task and very different from human vision; human vision is based on continuous learning of object classes, and humans require years to learn a large taxonomy of objects that are neither disjoint nor independent. In this paper I present a system based on Google's image similarity algorithm and the Google image database, which can classify a large set of different objects in a human-like manner, identifying related classes and taxonomies.
Rover imaging system for the Mars rover/sample return mission
NASA Technical Reports Server (NTRS)
1993-01-01
In the past year, the conceptual design of a panoramic imager for the Mars Environmental Survey (MESUR) Pathfinder was finished. A prototype camera was built and its performance in the laboratory was tested. The performance of this camera was excellent. Based on this work, we have recently proposed a small, lightweight, rugged, and highly capable Mars Surface Imager (MSI) instrument for the MESUR Pathfinder mission. A key aspect of our approach to optimization of the MSI design is that we treat image gathering, coding, and restoration as a whole, rather than as separate and independent tasks. Our approach leads to higher image quality, especially in the representation of fine detail with good contrast and clarity, without increasing either the complexity of the camera or the amount of data transmission. We have made significant progress over the past year in both the overall MSI system design and in the detailed design of the MSI optics. We have taken a simple panoramic camera and have upgraded it substantially to become a prototype of the MSI flight instrument. The most recent version of the camera utilizes miniature wide-angle optics that image directly onto a 3-color, 2096-element CCD line array. There are several data-taking modes, providing resolution as high as 0.3 mrad/pixel. Analysis tasks that were performed or that are underway with the test data from the prototype camera include the following: construction of 3-D models of imaged scenes from stereo data, first for controlled scenes and later for field scenes; and checks on geometric fidelity, including alignment errors, mast vibration, and oscillation in the drive system. We have outlined a number of tasks planned for Fiscal Year '93 in order to prepare us for submission of a flight instrument proposal for MESUR Pathfinder.
A Computational Observer For Performing Contrast-Detail Analysis Of Ultrasound Images
NASA Astrophysics Data System (ADS)
Lopez, H.; Loew, M. H.
1988-06-01
Contrast-Detail (C/D) analysis allows the quantitative determination of an imaging system's ability to display a range of varying-size targets as a function of contrast. Using this technique, a contrast-detail plot is obtained which can, in theory, be used to compare image quality from one imaging system to another. The C/D plot, however, is usually obtained by using data from human observer readings. We have shown earlier(7) that the performance of human observers in the task of threshold detection of simulated lesions embedded in random ultrasound noise is highly inaccurate and non-reproducible for untrained observers. We present an objective, computational method for the determination of the C/D curve for ultrasound images. This method utilizes digital images of the C/D phantom developed at CDRH, and lesion-detection algorithms that simulate the Bayesian approach using the likelihood function for an ideal observer. We present the results of this method, and discuss the relationship to the human observer and to the comparability of image quality between systems.
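For an ideal observer viewing a known signal in white Gaussian noise, the likelihood-ratio test reduces to a matched filter, which the following sketch checks by Monte Carlo (a generic illustration with invented lesion parameters, not the CDRH phantom data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Signal-known-exactly task: a disc "lesion" in white Gaussian noise.
size, radius, amp, sigma = 32, 5, 0.6, 1.0
yy, xx = np.mgrid[:size, :size]
signal = amp * (((yy - 16) ** 2 + (xx - 16) ** 2) <= radius ** 2)

def ideal_observer_score(img):
    """For white Gaussian noise the likelihood-ratio test reduces to a
    matched filter: correlate the image with the known signal template."""
    return float(np.sum(img * signal))

# Theoretical detectability for this observer: d' = ||s|| / sigma.
d_prime = np.sqrt(np.sum(signal ** 2)) / sigma

# Monte Carlo check: score distributions with and without the lesion.
n = 2000
absent = [ideal_observer_score(sigma * rng.standard_normal((size, size)))
          for _ in range(n)]
present = [ideal_observer_score(signal + sigma * rng.standard_normal((size, size)))
           for _ in range(n)]
empirical_d = (np.mean(present) - np.mean(absent)) / np.std(absent)
```

Sweeping lesion size and contrast with such an observer is one way a computational C/D curve can be traced without human readings.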
Language Mapping Using fMRI and Direct Cortical Stimulation for Brain Tumor Surgery
Brennan, Nicole Petrovich; Peck, Kyung K.; Holodny, Andrei
2016-01-01
Language functional magnetic resonance imaging for neurosurgical planning is a useful but nuanced technique. Consideration of primary and secondary language anatomy, task selection, and data analysis choices all impact interpretation. In the following chapter, we discuss practical considerations and nuances alike for language functional magnetic resonance imaging in support of, and comparison with, the neurosurgical gold standard, direct cortical stimulation. Pitfalls and limitations are discussed. PMID:26848555
Analysis of 3-D Tongue Motion From Tagged and Cine Magnetic Resonance Images
Woo, Jonghye; Lee, Junghoon; Murano, Emi Z.; Stone, Maureen; Prince, Jerry L.
2016-01-01
Purpose Measuring tongue deformation and internal muscle motion during speech has been a challenging task because the tongue deforms in 3 dimensions, contains interdigitated muscles, and is largely hidden within the vocal tract. In this article, a new method is proposed to analyze tagged and cine magnetic resonance images of the tongue during speech in order to estimate 3-dimensional tissue displacement and deformation over time. Method The method involves computing 2-dimensional motion components using a standard tag-processing method called harmonic phase, constructing superresolution tongue volumes using cine magnetic resonance images, segmenting the tongue region using a random-walker algorithm, and estimating 3-dimensional tongue motion using an incompressible deformation estimation algorithm. Results Evaluation of the method is presented with a control group and a group of people who had received a glossectomy carrying out a speech task. A 2-step principal-components analysis is then used to reveal the unique motion patterns of the subjects. Azimuth motion angles and motion on the mirrored hemi-tongues are analyzed. Conclusion Tests of the method with a varied collection of subjects show its capability of capturing patient motion patterns and indicate its potential value in future speech studies. PMID:27295428
NASA Technical Reports Server (NTRS)
Natesh, R.; Smith, J. M.; Qidwai, H. A.
1979-01-01
The various steps involved in the chemical polishing and etching of silicon samples are described. Data on twins, dislocation pits, and grain boundaries from thirty-one (31) silicon sample are also discussed. A brief review of the changes made to upgrade the image analysis system is included.
Guided SAR image despeckling with probabilistic non local weights
NASA Astrophysics Data System (ADS)
Gokul, Jithin; Nair, Madhu S.; Rajan, Jeny
2017-12-01
SAR images are generally corrupted by granular disturbances called speckle, which make visual analysis and detail extraction a difficult task. Non-local despeckling techniques with probabilistic similarity have been a recent trend in SAR despeckling. To achieve effective speckle suppression without compromising detail preservation, we propose an improvement to the existing Generalized Guided Filter with Bayesian Non-Local Means (GGF-BNLM) method. The proposed method (Guided SAR Image Despeckling with Probabilistic Non Local Weights) replaces parametric constants based on heuristics in the GGF-BNLM method with dynamically derived values based on image statistics for weight computation. The proposed changes make the GGF-BNLM method adaptive and, as a result, achieve significant improvement in performance. Experimental analysis on SAR images shows excellent speckle reduction without compromising feature preservation when compared to the GGF-BNLM method. Results are also compared with other state-of-the-art and classic SAR despeckling techniques to demonstrate the effectiveness of the proposed method.
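A generic non-local-means weighting of the kind such methods build on can be sketched as follows (this is plain NLM for one pixel of a synthetic speckled patch, not the GGF-BNLM weighting itself; the filter parameter h is an invented constant of exactly the kind the paper derives adaptively):

```python
import numpy as np

def nlm_pixel(img, y, x, patch=1, search=5, h=0.5):
    """Non-local-means estimate of one pixel: weight every pixel in a
    search window by the similarity of its surrounding patch to the
    patch around (y, x), then take the weighted average."""
    ref = img[y - patch:y + patch + 1, x - patch:x + patch + 1]
    num = den = 0.0
    for j in range(y - search, y + search + 1):
        for i in range(x - search, x + search + 1):
            cand = img[j - patch:j + patch + 1, i - patch:i + patch + 1]
            d2 = np.mean((ref - cand) ** 2)      # patch dissimilarity
            w = np.exp(-d2 / h ** 2)             # similarity weight
            num += w * img[j, i]
            den += w
    return num / den

rng = np.random.default_rng(2)
clean = np.ones((32, 32))
# Multiplicative speckle (gamma-distributed, mean 1), as in SAR imagery.
speckled = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)
denoised = nlm_pixel(speckled, 16, 16)
```

Probabilistic variants replace the Euclidean patch distance with a similarity measure matched to the speckle statistics, which is the direction the cited methods take.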
Data analysis in emission tomography using emission-count posteriors
NASA Astrophysics Data System (ADS)
Sitek, Arkadiusz
2012-11-01
A novel approach to the analysis of emission tomography data using the posterior probability of the number of emissions per voxel (emission count) conditioned on acquired tomographic data is explored. The posterior is derived from the prior and the Poisson likelihood of the emission-count data by marginalizing voxel activities. Based on emission-count posteriors, examples of Bayesian analysis including estimation and classification tasks in emission tomography are provided. The application of the method to computer simulations of 2D tomography is demonstrated. In particular, the minimum-mean-square-error point estimator of the emission count is demonstrated. The process of finding this estimator can be considered as a tomographic image reconstruction technique since the estimates of the number of emissions per voxel divided by voxel sensitivities and acquisition time are the estimates of the voxel activities. As an example of a classification task, a hypothesis stating that some region of interest (ROI) emitted at least or at most r-times the number of events in some other ROI is tested. The ROIs are specified by the user. The analysis described in this work provides new quantitative statistical measures that can be used in decision making in diagnostic imaging using emission tomography.
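A one-voxel toy version of the emission-count idea can be sketched with Poisson thinning (this ignores the tomographic system model and the priors of the actual method; the rate and sensitivity values are invented):

```python
import numpy as np

rng = np.random.default_rng(3)

# One-voxel illustration: the voxel emits N ~ Poisson(lam) events,
# each detected independently with sensitivity s.
lam, s = 50.0, 0.3

# By Poisson thinning, detected counts k ~ Poisson(lam*s) and the
# undetected remainder is an independent Poisson(lam*(1-s)), so the
# minimum-mean-square-error estimate of the emission count is
#   E[N | k] = k + lam*(1 - s).
def mmse_emission_count(k, lam, s):
    return k + lam * (1.0 - s)

# Monte Carlo check that the estimator is conditionally unbiased on average.
n_trials = 20000
N = rng.poisson(lam, n_trials)         # true emission counts
k = rng.binomial(N, s)                 # detected counts
est = mmse_emission_count(k, lam, s)
bias = float(np.mean(est - N))
```

Dividing such emission-count estimates by voxel sensitivity and acquisition time recovers activity estimates, which is the sense in which the paper's estimator doubles as a reconstruction technique.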
Ethnicity identification from face images
NASA Astrophysics Data System (ADS)
Lu, Xiaoguang; Jain, Anil K.
2004-08-01
Human facial images provide demographic information, such as ethnicity and gender. Conversely, ethnicity and gender also play an important role in face-related applications. The image-based ethnicity identification problem is addressed in a machine learning framework. A Linear Discriminant Analysis (LDA) based scheme is presented for the two-class (Asian vs. non-Asian) ethnicity classification task. Multiscale analysis is applied to the input facial images. An ensemble framework, which integrates the LDA analysis for the input face images at different scales, is proposed to further improve the classification performance. The product rule is used as the combination strategy in the ensemble. Experimental results based on a face database containing 263 subjects (2,630 face images, with equal balance between the two classes) are promising, indicating that LDA and the proposed ensemble framework have sufficient discriminative power for the ethnicity classification problem. The normalized ethnicity classification scores can be helpful in facial identity recognition. Useful as a "soft" biometric, face matching scores can be updated based on the output of the ethnicity classification module. In other words, the ethnicity classifier does not have to be perfect to be useful in practice.
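A two-class Fisher LDA with product-rule fusion across two feature "scales" can be sketched on synthetic data (the logistic mapping from projection to probability and the synthetic feature generator are assumptions for illustration, not the paper's pipeline):

```python
import numpy as np

rng = np.random.default_rng(4)

def lda_fit(X, y):
    """Two-class Fisher LDA: w = Sw^-1 (m1 - m0), with the decision
    threshold at the projected midpoint of the class means."""
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    Sw = np.cov(X[y == 0].T) + np.cov(X[y == 1].T)
    w = np.linalg.solve(Sw, m1 - m0)
    b = -0.5 * (m0 + m1) @ w
    return w, b

def lda_prob(X, w, b):
    """Posterior probability of class 1 via a logistic of the projection."""
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

def make_scale(n, shift, noise):
    """Synthetic features for one analysis scale (hypothetical data)."""
    y = np.repeat([0, 1], n)
    X = rng.standard_normal((2 * n, 3)) * noise
    X[y == 1] += shift
    return X, y

# Train one LDA per "scale", then fuse the two posteriors by product rule.
X1, y = make_scale(200, shift=1.5, noise=1.0)
X2, _ = make_scale(200, shift=1.0, noise=1.0)
w1, b1 = lda_fit(X1, y)
w2, b2 = lda_fit(X2, y)
p1, p2 = lda_prob(X1, w1, b1), lda_prob(X2, w2, b2)
p_ensemble = (p1 * p2) / (p1 * p2 + (1 - p1) * (1 - p2))   # product rule
pred = (p_ensemble > 0.5).astype(int)
accuracy = float(np.mean(pred == y))
```

The product rule rewards classifiers that agree confidently, which is why combining scales can outperform the best single scale when their errors are partly independent.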
Applications of active microwave imagery
NASA Technical Reports Server (NTRS)
Weber, F. P.; Childs, L. F.; Gilbert, R.; Harlan, J. C.; Hoffer, R. M.; Miller, J. M.; Parsons, J.; Polcyn, F.; Schardt, B. B.; Smith, J. L.
1978-01-01
The following topics were discussed in reference to active microwave applications: (1) Use of imaging radar to improve the data collection/analysis process; (2) Data collection tasks for radar that other systems will not perform; (3) Data reduction concepts; and (4) System and vehicle parameters: aircraft and spacecraft.
Automated Tracking of Cell Migration with Rapid Data Analysis.
DuChez, Brian J
2017-09-01
Cell migration is essential for many biological processes including development, wound healing, and metastasis. However, studying cell migration often requires the time-consuming and labor-intensive task of manually tracking cells. To accelerate the task of obtaining coordinate positions of migrating cells, we have developed a graphical user interface (GUI) capable of automating the tracking of fluorescently labeled nuclei. This GUI provides an intuitive user interface that makes automated tracking accessible to researchers with no image-processing experience or familiarity with particle-tracking approaches. Using this GUI, users can interactively determine a minimum of four parameters to identify fluorescently labeled cells and automate acquisition of cell trajectories. Additional features allow for batch processing of numerous time-lapse images, curation of unwanted tracks, and subsequent statistical analysis of tracked cells. Statistical outputs allow users to evaluate migratory phenotypes, including cell speed, distance, displacement, and persistence, as well as measures of directional movement, such as forward migration index (FMI) and angular displacement. © 2017 by John Wiley & Sons, Inc.
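The trajectory statistics listed above can be computed from tracked coordinates as in this sketch (the formulas are the standard definitions; the GUI's actual outputs may differ in detail):

```python
import numpy as np

def migration_stats(track, dt=1.0):
    """Basic migration measures from an (n, 2) array of x, y positions:
    mean speed, net displacement, persistence (displacement over path
    length), and forward migration index along +x."""
    steps = np.diff(track, axis=0)
    path = float(np.sum(np.linalg.norm(steps, axis=1)))   # total distance
    disp = float(np.linalg.norm(track[-1] - track[0]))    # net displacement
    return {
        "speed": path / (dt * steps.shape[0]),
        "displacement": disp,
        "persistence": disp / path,
        "fmi_x": (track[-1, 0] - track[0, 0]) / path,
    }

# A perfectly straight track along +x: persistence and FMI are both 1.
straight = np.column_stack([np.arange(5.0), np.zeros(5)])
stats = migration_stats(straight)
```

A meandering track would keep a similar path length but a smaller displacement, driving persistence and FMI toward 0.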
Liu, Yijin; Meirer, Florian; Williams, Phillip A.; Wang, Junyue; Andrews, Joy C.; Pianetta, Piero
2012-01-01
Transmission X-ray microscopy (TXM) has been well recognized as a powerful tool for non-destructive investigation of the three-dimensional inner structure of a sample with spatial resolution down to a few tens of nanometers, especially when combined with synchrotron radiation sources. Recent developments of this technique have presented a need for new tools for both system control and data analysis. Here a software package developed in MATLAB for script command generation and analysis of TXM data is presented. The first toolkit, the script generator, allows automating complex experimental tasks which involve up to several thousand motor movements. The second package was designed to accomplish computationally intense tasks such as data processing of mosaic and mosaic tomography datasets; dual-energy contrast imaging, where data are recorded above and below a specific X-ray absorption edge; and TXM X-ray absorption near-edge structure imaging datasets. Furthermore, analytical and iterative tomography reconstruction algorithms were implemented. The compiled software package is freely available. PMID:22338691
Analysis and use of VAS satellite data
NASA Technical Reports Server (NTRS)
Fuelberg, Henry E.; Andrews, Mark J.; Beven, John L., II; Moore, Steven R.; Muller, Bradley M.
1989-01-01
Four interrelated investigations have examined the analysis and use of VAS satellite data. A case study of VAS-derived mesoscale stability parameters suggested that they would have been a useful supplement to conventional data in the forecasting of thunderstorms on the day of interest. A second investigation examined the roles of first guess and VAS radiometric data in producing sounding retrievals. Broad-scale patterns of the first guess, radiances, and retrievals frequently were similar, whereas small-scale retrieval features, especially in the dew points, were often of uncertain origin. Two research tasks considered 6.7 micron middle tropospheric water vapor imagery. The first utilized radiosonde data to examine causes for two areas of warm brightness temperature. Subsidence associated with a translating jet streak was important. The second task involving water vapor imagery investigated simulated imagery created from LAMPS output and a radiative transfer algorithm. Simulated image patterns were found to compare favorably with those actually observed by VAS. Furthermore, the mass/momentum fields from LAMPS were powerful tools for understanding causes for the image configurations.
De Guibert, Clément; Maumet, Camille; Jannin, Pierre; Ferré, Jean-Christophe; Tréguier, Catherine; Barillot, Christian; Le Rumeur, Elisabeth; Allaire, Catherine; Biraben, Arnaud
2011-01-01
Atypical functional lateralization and specialization for language have been proposed to account for developmental language disorders, yet results from functional neuroimaging studies are sparse and inconsistent. This functional magnetic resonance imaging study compared children with a specific subtype of specific language impairment affecting structural language (n=21), to a matched group of typically-developing children using a panel of four language tasks neither requiring reading nor metalinguistic skills, including two auditory lexico-semantic tasks (category fluency and responsive naming) and two visual phonological tasks based on picture naming. Data processing involved normalizing the data with respect to a matched pairs pediatric template, groups and between-groups analysis, and laterality indexes assessment within regions of interest using single and combined task analysis. Children with specific language impairment exhibited a significant lack of left lateralization in all core language regions (inferior frontal gyrus-opercularis, inferior frontal gyrus-triangularis, supramarginal gyrus, superior temporal gyrus), across single or combined task analysis, but no difference of lateralization for the rest of the brain. Between-group comparisons revealed a left hypoactivation of Wernicke’s area at the posterior superior temporal/supramarginal junction during the responsive naming task, and a right hyperactivation encompassing the anterior insula with adjacent inferior frontal gyrus and the head of the caudate nucleus during the first phonological task. This study thus provides evidence that this specific subtype of specific language impairment is associated with atypical lateralization and functioning of core language areas. PMID:21719430
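Laterality indexes of the kind assessed within the regions of interest are conventionally computed as LI = (L - R)/(L + R); a minimal sketch (the voxel counts below are invented, not the study's data):

```python
def laterality_index(left, right):
    """Standard laterality index from suprathreshold voxel counts (or
    activation magnitudes) in homologous left/right regions of interest:
    LI = (L - R) / (L + R), in [-1, 1]; positive means left-lateralized."""
    return (left - right) / (left + right)

# Hypothetical counts: strong left dominance in a language region...
li_typical = laterality_index(120, 40)
# ...versus the reduced left lateralization reported for the SLI group.
li_reduced = laterality_index(85, 75)
```

Thresholds such as |LI| > 0.2 are often used to call a region lateralized rather than bilateral.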
Planetary image conversion task
NASA Technical Reports Server (NTRS)
Martin, M. D.; Stanley, C. L.; Laughlin, G.
1985-01-01
The Planetary Image Conversion Task group processed 12,500 magnetic tapes containing raw imaging data from JPL planetary missions and produced an image database in consistent format on 1200 fully packed 6250-bpi tapes. The output tapes will remain at JPL. A copy of the entire tape set was delivered to the US Geological Survey, Flagstaff, Ariz. A secondary task converted computer datalogs, which had been stored in project-specific MARK IV File Management System data types and structures, to flat-file, text format that is processable on any modern computer system. The conversion processing took place at JPL's Image Processing Laboratory on an IBM 370-158 with existing software modified slightly to meet the needs of the conversion task. More than 99% of the original digital image data was successfully recovered by the conversion task. However, processing data tapes recorded before 1975 was destructive. This discovery is of critical importance to facilities responsible for maintaining digital archives, since normal periodic random sampling techniques would be unlikely to detect this phenomenon, and entire data sets could be wiped out in the act of generating seemingly positive sampling results. Recommended follow-on activities are also included.
Evolutionary image simplification for lung nodule classification with convolutional neural networks.
Lückehe, Daniel; von Voigt, Gabriele
2018-05-29
Understanding decisions of deep learning techniques is important. Especially in the medical field, the reasons for a decision in a classification task are as crucial as the pure classification results. In this article, we propose a new approach to compute relevant parts of a medical image. Knowing the relevant parts makes it easier to understand decisions. In our approach, a convolutional neural network is employed to learn structures of images of lung nodules. Then, an evolutionary algorithm is applied to compute a simplified version of an unknown image based on the learned structures by the convolutional neural network. In the simplified version, irrelevant parts are removed from the original image. In the results, we show simplified images which allow the observer to focus on the relevant parts. In these images, more than 50% of the pixels are simplified. The simplified pixels do not change the meaning of the images based on the learned structures by the convolutional neural network. An experimental analysis shows the potential of the approach. Besides the examples of simplified images, we analyze the run time development. Simplified images make it easier to focus on relevant parts and to find reasons for a decision. The combination of an evolutionary algorithm employing a learned convolutional neural network is well suited for the simplification task. From a research perspective, it is interesting which areas of the images are simplified and which parts are taken as relevant.
Loomis, E N; Grim, G P; Wilde, C; Wilson, D C; Morgan, G; Wilke, M; Tregillis, I; Merrill, F; Clark, D; Finch, J; Fittinghoff, D; Bower, D
2010-10-01
Development of analysis techniques for neutron imaging at the National Ignition Facility is an important and difficult task for the detailed understanding of high-neutron-yield inertial confinement fusion implosions. Once developed, these methods must provide accurate images of the hot and cold fuels so that information about the implosion, such as symmetry and areal density, can be extracted. One method under development involves the numerical inversion of the pinhole image using knowledge of neutron transport through the pinhole aperture from Monte Carlo simulations. In this article we present results of source reconstructions based on simulated images that test the method's effectiveness with regard to pinhole misalignment.
IFDOTMETER: A New Software Application for Automated Immunofluorescence Analysis.
Rodríguez-Arribas, Mario; Pizarro-Estrella, Elisa; Gómez-Sánchez, Rubén; Yakhine-Diop, S M S; Gragera-Hidalgo, Antonio; Cristo, Alejandro; Bravo-San Pedro, Jose M; González-Polo, Rosa A; Fuentes, José M
2016-04-01
Most laboratories interested in autophagy use different imaging software for managing and analyzing heterogeneous parameters in immunofluorescence experiments (e.g., LC3-puncta quantification and determination of the number and size of lysosomes). One solution would be software that works on a user's laptop or workstation that can access all image settings and provide quick and easy-to-use analysis of data. Thus, we have designed and implemented an application called IFDOTMETER, which can run on all major operating systems because it has been programmed using JAVA (Sun Microsystems). Briefly, IFDOTMETER software has been created to quantify a variety of biological hallmarks, including mitochondrial morphology and nuclear condensation. The program interface is intuitive and user-friendly, making it useful for users not familiar with computer handling. By setting previously defined parameters, the software can automatically analyze a large number of images without the supervision of the researcher. Once analysis is complete, the results are stored in a spreadsheet. Using software for high-throughput cell image analysis offers researchers the possibility of performing comprehensive and precise analysis of a high number of images in an automated manner, making this routine task easier. © 2015 Society for Laboratory Automation and Screening.
Templeton, Justin P; Struebing, Felix L; Lemmon, Andrew; Geisert, Eldon E
2014-11-01
The present article introduces a new and easy to use counting application for the Apple iPad. The application "ImagePAD" takes advantage of the advanced user interface features offered by the Apple iOS platform, simplifying the rather tedious task of quantifying features in anatomical studies. For example, the image under analysis can be easily panned and zoomed using iOS-supported multi-touch gestures without losing the spatial context of the counting task, which is extremely important for ensuring count accuracy. This application allows one to quantify up to 5 different types of objects in a single field and output the data in a tab-delimited format for subsequent analysis. We describe two examples of the use of the application: quantifying axons in the optic nerve of the C57BL/6J mouse and determining the percentage of cells labeled with NeuN or ChAT in the retinal ganglion cell layer. For the optic nerve, contiguous images at 60× magnification were taken and transferred onto an Apple iPad. Axons were counted by tapping on the touch-sensitive screen using ImagePAD. Nine optic nerves were sampled and the number of axons in the nerves ranged from 38,872 axons to 50,196 axons with an average of 44,846 axons per nerve (SD = 3980 axons). Copyright © 2014 Elsevier Ltd. All rights reserved.
Rotation Covariant Image Processing for Biomedical Applications
Reisert, Marco
2013-01-01
With the advent of novel biomedical 3D image acquisition techniques, the efficient and reliable analysis of volumetric images has become more and more important. The amount of data is enormous and demands automated processing. The applications are manifold, ranging from image enhancement, image reconstruction, and image description to object/feature detection and high-level contextual feature extraction. In most scenarios, it is expected that geometric transformations alter the output in a mathematically well-defined manner. In this paper we focus on 3D translations and rotations. Many algorithms rely on intensity or low-order tensorial-like descriptions to fulfill this demand. This paper proposes a general mathematical framework based on concepts and theories transferred from mathematical physics and harmonic analysis into the domain of image analysis and pattern recognition. Based on two basic operations, spherical tensor differentiation and spherical tensor multiplication, we show how to design a variety of 3D image processing methods in an efficient way. The framework has already been applied to several biomedical applications ranging from feature and object detection tasks to image enhancement and image restoration techniques. In this paper, the proposed methods are applied on a variety of different 3D data modalities stemming from medical and biological sciences. PMID:23710255
NASA Astrophysics Data System (ADS)
Ushenko, Yu. O.; Dubolazov, O. V.; Ushenko, V. O.; Zhytaryuk, V. G.; Prydiy, O. G.; Pavlyukovich, N.; Pavlyukovich, O.
2018-01-01
In this paper, we present the results of a statistical analysis of polarization-interference images of optically thin histological sections of biological tissues and polycrystalline films of biological fluids of human organs. A new analytical parameter is introduced: the local contrast of the interference pattern in the plane of a polarization-inhomogeneous microscopic image of a biological preparation. The coordinate distributions of the given parameter and the sets of statistical moments of the first to fourth order that characterize these distributions are determined. On this basis, the differentiation of degenerative-dystrophic changes in the myocardium and of the polycrystalline structure of the synovial fluid of the human knee with different pathologies is realized.
Development of a 32-bit UNIX-based ELAS workstation
NASA Technical Reports Server (NTRS)
Spiering, Bruce A.; Pearson, Ronnie W.; Cheng, Thomas D.
1987-01-01
A mini/microcomputer UNIX-based image analysis workstation has been designed and is being implemented to use the Earth Resources Laboratory Applications Software (ELAS). The hardware system includes a MASSCOMP 5600 computer, which is a 32-bit UNIX-based system (compatible with the AT&T System V and Berkeley 4.2 BSD operating systems), a floating point accelerator, a 474-megabyte fixed disk, a tri-density magnetic tape drive, and an 1152 by 910 by 12-plane color graphics/image interface. The software conversion includes reconfiguring the ELAS driver Master Task, then recompiling and testing the converted application modules. This hardware and software configuration is a self-sufficient image analysis workstation which can be used as a stand-alone system, or networked with other compatible workstations.
Batch settling curve registration via image data modeling.
Derlon, Nicolas; Thürlimann, Christian; Dürrenmatt, David; Villez, Kris
2017-05-01
To this day, obtaining reliable characterization of sludge settling properties remains a challenging and time-consuming task. Without such assessments however, optimal design and operation of secondary settling tanks is challenging and conservative approaches will remain necessary. With this study, we show that automated sludge blanket height registration and zone settling velocity estimation is possible thanks to analysis of images taken during batch settling experiments. The experimental setup is particularly interesting for practical applications as it consists of off-the-shelf components only, no moving parts are required, and the software is released publicly. Furthermore, the proposed multivariate shape constrained spline model for image analysis appears to be a promising method for reliable sludge blanket height profile registration. Copyright © 2017 Elsevier Ltd. All rights reserved.
Nonlinear Deep Kernel Learning for Image Annotation.
Jiu, Mingyuan; Sahbi, Hichem
2017-02-08
Multiple kernel learning (MKL) is a widely used technique for kernel design. Its principle consists in learning, for a given support vector classifier, the most suitable convex (or sparse) linear combination of standard elementary kernels. However, these combinations are shallow and often powerless to capture the actual similarity between highly semantic data, especially for challenging classification tasks such as image annotation. In this paper, we redefine multiple kernels using deep multi-layer networks. In this new contribution, a deep multiple kernel is recursively defined as a multi-layered combination of nonlinear activation functions, each of which involves a combination of several elementary or intermediate kernels and results in a positive semi-definite deep kernel. We propose four different frameworks in order to learn the weights of these networks: supervised, unsupervised, kernel-based semi-supervised, and Laplacian-based semi-supervised. When plugged into support vector machines (SVMs), the resulting deep kernel networks show clear gains, compared to several shallow kernels, for the task of image annotation. Extensive experiments and analysis on the challenging ImageCLEF photo annotation benchmark, the COREL5k database and the Banana dataset validate the effectiveness of the proposed method.
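The recursive construction described in this abstract can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the function names, the one-unit-per-layer simplification, and the choice of an entrywise exponential as the activation (chosen because its nonnegative Taylor coefficients preserve positive semi-definiteness via the Schur product theorem) are mine, not the paper's actual formulation, which learns the weights through SVM-based objectives.

```python
import numpy as np

def kernel_layer(kernels, weights):
    # One "deep kernel" layer: a nonnegative weighted sum of Gram matrices
    # passed through an entrywise exponential. Because exp has nonnegative
    # Taylor coefficients, the Schur product theorem keeps the output PSD.
    combined = sum(w * K for w, K in zip(weights, kernels))
    return np.exp(combined)

def deep_kernel(base_kernels, weights_per_layer):
    # Stack layers recursively; each layer here produces a single kernel
    # that feeds the next layer (a one-unit-per-layer simplification of
    # the multi-unit networks the abstract describes).
    kernels = list(base_kernels)
    for weights in weights_per_layer:
        kernels = [kernel_layer(kernels, weights)]
    return kernels[0]
```

The resulting matrix can be handed directly to any kernel SVM; the key property to verify is that it stays symmetric and positive semi-definite at every layer.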
Image Science and Analysis Group Spacecraft Damage Detection/Characterization
NASA Technical Reports Server (NTRS)
Wheaton, Ira M., Jr.
2010-01-01
This project consisted of several tasks that could be served by an intern to assist the ISAG in detecting damage to spacecraft during missions. First, this project focused on supporting the Micrometeoroid Orbital Debris (MMOD) damage detection and assessment for the Hubble Space Telescope (HST) using imagery from the last two HST Shuttle servicing missions. In this project, we used the coordinates of two windows on the Shuttle aft flight deck from where images were taken and the coordinates of three ID points in order to calculate the distance from each window to the three points. Then, using the specifications of the camera used, we calculated the image scale in pixels per inch for planes parallel to the image plane and planes along the z-direction (shown in Table 1). This will help in the future for calculating measurements of objects in the images. Next, tabulation and statistical analysis were conducted for screening results (shown in Table 2) of imagery with Orion Thermal Protection System (TPS) damage. Using the Microsoft Excel CRITBINOM function and Goal Seek, the probabilities of detection of damage to different shuttle tiles were calculated as shown in Table 3. Using developed measuring tools, volume and area measurements will be created from 3D models of Orion TPS damage. Last, mathematical expertise was provided to the Photogrammetry Team. These mathematical tasks consisted of developing elegant image space error equations for observations along 3D lines, circles, planes, etc. and checking proofs for minimal sets of sufficient multi-linear constraints. Some of the processes and resulting equations are displayed in Figure 1.
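The Excel workflow mentioned here (CRITBINOM to find a critical number of detections, Goal Seek to invert for a per-tile detection probability) can be approximated in plain Python. This is a hedged sketch: the function names are mine, and the numbers in the test are illustrative, not the study's shuttle-tile values.

```python
from math import comb

def binom_cdf(k, n, p):
    # P(X <= k) for X ~ Binomial(n, p)
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def critbinom(n, p, alpha):
    # Mirrors Excel's CRITBINOM: smallest k whose cumulative binomial
    # probability meets or exceeds the criterion alpha.
    k = 0
    while binom_cdf(k, n, p) < alpha:
        k += 1
    return k

def solve_p(n, k_target, alpha, tol=1e-6):
    # Bisection stand-in for Excel's Goal Seek: find the per-trial
    # detection probability p at which CRITBINOM first reaches k_target.
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if critbinom(n, mid, alpha) >= k_target:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2
```

For example, with 10 independent screenings at p = 0.5 and a 50% criterion, the critical count is 5; `solve_p` then recovers the probability at which that critical count is first attained.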
Hybrid Feature Extraction-based Approach for Facial Parts Representation and Recognition
NASA Astrophysics Data System (ADS)
Rouabhia, C.; Tebbikh, H.
2008-06-01
Face recognition is a specialized image-processing problem which has attracted considerable attention in computer vision. In this article, we develop a new facial recognition system from video sequence images, dedicated to identifying persons whose faces are partly occluded. This system is based on a hybrid image feature extraction technique called ACPDL2D (Rouabhia et al. 2007), which combines two-dimensional principal component analysis and two-dimensional linear discriminant analysis with a neural network. We performed the feature extraction task on the eye and nose images separately, then a Multi-Layer Perceptron classifier is used. Compared to the whole face, the simulation results are in favor of the facial parts in terms of memory capacity and recognition rate (99.41% for the eyes part, 98.16% for the nose part, and 97.25% for the whole face).
Reliability of functional MR imaging with word-generation tasks for mapping Broca's area.
Brannen, J H; Badie, B; Moritz, C H; Quigley, M; Meyerand, M E; Haughton, V M
2001-10-01
Functional MR (fMR) imaging of word generation has been used to map Broca's area in some patients selected for craniotomy. The purpose of this study was to measure the reliability, precision, and accuracy of word-generation tasks to identify Broca's area. The Brodmann areas activated during performance of word-generation tasks were tabulated in 34 consecutive patients referred for fMR imaging mapping of language areas. In patients performing two iterations of the letter word-generation tasks, test-retest reliability was quantified by using the concurrence ratio (CR), or the number of voxels activated by each iteration in proportion to the average number of voxels activated from both iterations of the task. Among patients who also underwent category or antonym word generation or both, the similarity of the activation from each task was assessed with the CR. In patients who underwent electrocortical stimulation (ECS) mapping of speech function during craniotomy while awake, the sites with speech function were compared with the locations of activation found during fMR imaging of word generation. In 31 of 34 patients, activation was identified in the inferior frontal gyri or middle frontal gyri or both in Brodmann areas 9, 44, 45, or 46, unilaterally or bilaterally, with one or more of the tasks. Activation was noted in the same gyri when the patient performed a second iteration of the letter word-generation task or second task. The CR for pixel precision in a single section averaged 49%. In patients who underwent craniotomy while awake, speech areas located with ECS coincided with areas of the brain activated during a word-generation task. fMR imaging with word-generation tasks produces technically satisfactory maps of Broca's area, which localize the area accurately and reliably.
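The concurrence ratio (CR) defined verbally above, the number of voxels activated by both iterations relative to the average number activated per iteration, can be written as a small function. This encodes one plausible reading of the abstract's definition; the function name and the set-based representation of activated voxels are my assumptions.

```python
def concurrence_ratio(voxels_a, voxels_b):
    # CR: voxels active in both test-retest runs, divided by the
    # average number of active voxels across the two runs.
    a, b = set(voxels_a), set(voxels_b)
    mean_count = (len(a) + len(b)) / 2
    if mean_count == 0:
        return 0.0
    return len(a & b) / mean_count
```

Two runs that each activate four voxels but overlap in only two would give CR = 0.5, close to the 49% average reported in the study.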
Xiao, Fu-Long; Gao, Pei-Yi; Qian, Tian-Yi; Sui, Bin-Bin; Xue, Jing; Zhou, Jian; Lin, Yan
2017-05-01
Functional magnetic resonance imaging (fMRI) mapping can present the activated cortical area during movement, while little is known about the precise location in facial and tongue movements. This study investigated the representation of facial and tongue movements with task fMRI. Twenty right-handed healthy subjects underwent block-design task fMRI examination. Task movements included lip pursing, cheek bulging, grinning, and vertical tongue excursion. Statistical parametric mapping (SPM8) was applied to analyze the data. A one-sample t-test was used to calculate the common activation area between facial and tongue movements. A paired t-test was also used to test for areas of over- or underactivation in tongue movement compared with each group of facial movements. The common areas across facial and tongue movements suggested similar motor circuits of activation in both. Preferential activation for tongue movement was situated laterally and inferiorly in the sensorimotor area relative to facial movements. Preferential activation for tongue movement relative to lip pursing was found in the left superior parietal lobe. Preferential activation in the bilateral cuneus was also detected for grinning compared with tongue movement. © 2015 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.
Effects of spatial frequency bands on perceptual decision: it is not the stimuli but the comparison.
Rotshtein, Pia; Schofield, Andrew; Funes, María J; Humphreys, Glyn W
2010-08-24
Observers performed three between- and two within-category perceptual decisions with hybrid stimuli comprising low and high spatial frequency (SF) images. We manipulated (a) attention to, and (b) congruency of information in the two SF bands. Processing difficulty of the different SF bands varied across different categorization tasks: house-flower, face-house, and valence decisions were easier when based on high SF bands, while flower-face and gender categorizations were easier when based on low SF bands. Larger interference also arose from response relevant distracters that were presented in the "preferred" SF range of the task. Low SF effects were facilitated by short exposure durations. The results demonstrate that decisions are affected by an interaction of task and SF range and that the information from the non-attended SF range interfered at the decision level. A further analysis revealed that overall differences in the statistics of image features, in particular differences of orientation information between two categories, were associated with decision difficulty. We concluded that the advantage of using information from one SF range over another depends on the specific task requirements that built on the differences of the statistical properties between the compared categories.
Image Processing and Computer Aided Diagnosis in Computed Tomography of the Breast
2007-03-01
Keywords: breast imaging, breast CT, scatter compensation, denoising, CAD, cone-beam CT. ...clinical projection images. The CAD tool based on the signal-known-exactly (SKE) scenario is under development. Task 6: Test and compare the performance of the CAD developed in Task 5, applied to processed projection data from Task 1, with the CAD performance on the projection data without Bayesian
PACS 2000: quality control using the task allocation chart
NASA Astrophysics Data System (ADS)
Norton, Gary S.; Romlein, John R.; Lyche, David K.; Richardson, Ronald R., Jr.
2000-05-01
Medical imaging's technological evolution in the next century will continue to include Picture Archive and Communication Systems (PACS) and teleradiology. It is difficult to predict radiology's future in the new millennium, with both computed radiography and direct digital capture competing as the primary image acquisition methods for routine radiography. Changes in Computed Axial Tomography (CT) and Magnetic Resonance Imaging (MRI) continue to amaze the healthcare community. No matter how the acquisition, display, and archive functions change, Quality Control (QC) of the radiographic imaging chain will remain an important step in the imaging process. The Task Allocation Chart (TAC) is a tool that can be used in a medical facility's QC process to indicate the testing responsibilities of the image stakeholders and the medical informatics department. The TAC shows a grid of equipment to be serviced, tasks to be performed, and the organization assigned to perform each task. Additionally, skills, tasks, time, and references for each task can be provided. QC of the PACS must be stressed as a primary element of a PACS implementation. The TAC can be used to clarify responsibilities during warranty and paid maintenance periods. Establishing a TAC as part of a PACS implementation has a positive effect on patient care and clinical acceptance.
NDSI products system based on Hadoop platform
NASA Astrophysics Data System (ADS)
Zhou, Yan; Jiang, He; Yang, Xiaoxia; Geng, Erhui
2015-12-01
Snow is the solid state of water resources on Earth and plays an important role in human life. Satellite remote sensing is significant for snow extraction, with the advantages of periodicity, broad coverage, comprehensiveness, objectivity, and timeliness. With the continuous development of remote sensing technology, remote sensing data are trending toward multiple platforms, multiple sensors, and multiple perspectives. At the same time, the demand for compute-intensive remote sensing applications is gradually increasing. However, current production systems for remote sensing products run in a serial mode, are mostly intended for professional remote sensing researchers, and relatively few achieve automatic or semi-automatic production. Facing massive remote sensing data, the traditional serial production system, with its low efficiency, has difficulty processing mass data timely and efficiently. In order to effectively improve the production efficiency of NDSI products and meet the demand for timely, efficient processing of large-scale remote sensing data, this paper builds an NDSI product production system based on the Hadoop platform; the system mainly includes a remote sensing image management module, an NDSI production module, and a system service module. The main research contents and results include: (1) The remote sensing image management module comprises two parts: image import and image metadata management. It imports mass base IRS images and NDSI product images (the output of the system's production tasks) into the HDFS file system; at the same time, it reads the corresponding orbit row/column number, maximum/minimum longitude and latitude, product date, HDFS storage path, Hadoop task ID (for NDSI products), and other metadata, creates thumbnails, assigns a unique ID to each record, and imports the records into the base/product image metadata database.
(2) The NDSI production module comprises two parts: index calculation, and production task submission and monitoring. It reads the HDF images related to a production task as a byte stream and uses the Beam library to parse the image byte stream into Product form; it uses the MapReduce distributed framework to perform production tasks while monitoring task status; when a production task completes, it calls the remote sensing image management module to store the NDSI products. (3) The system service module includes both image search and NDSI product download. Given image metadata attributes described in JSON format, it returns the sequence IDs of the matching images in the HDFS file system; for a given MapReduce task ID, it packages the task's output NDSI products into a ZIP file and returns a download link. (4) System evaluation: massive remote sensing data were downloaded and processed with the system to produce NDSI products as a performance test; the results show that the system has high extensibility, strong fault tolerance, and fast production speed, and that the image processing results have high accuracy.
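The abstract does not state the index formula itself, so as a hedged sketch, the per-pixel computation inside the index-calculation step presumably follows the standard NDSI definition from the snow-mapping literature, (green − SWIR) / (green + SWIR), with the 0.4 snow threshold being a common convention rather than a detail from this paper; all names here are mine.

```python
def ndsi(green, swir):
    # Normalized Difference Snow Index for one pixel; snow reflects
    # strongly in green and weakly in SWIR, so values approach +1.
    if green + swir == 0:
        return 0.0
    return (green - swir) / (green + swir)

def snow_mask(green_band, swir_band, threshold=0.4):
    # Per-pixel snow flag over two co-registered bands given as 2D lists.
    # In the described system this map step would run inside MapReduce,
    # one image tile per mapper.
    return [[ndsi(g, s) > threshold for g, s in zip(g_row, s_row)]
            for g_row, s_row in zip(green_band, swir_band)]
```

A bright-green, dark-SWIR pixel (0.7, 0.1) yields NDSI = 0.75 and is flagged as snow; a spectrally flat pixel yields 0 and is not.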
Simulation of bright-field microscopy images depicting pap-smear specimen
Malm, Patrik; Brun, Anders; Bengtsson, Ewert
2015-01-01
As digital imaging is becoming a fundamental part of medical and biomedical research, the demand for computer-based evaluation using advanced image analysis is becoming an integral part of many research projects. A common problem when developing new image analysis algorithms is the need of large datasets with ground truth on which the algorithms can be tested and optimized. Generating such datasets is often tedious and introduces subjectivity and interindividual and intraindividual variations. An alternative to manually created ground-truth data is to generate synthetic images where the ground truth is known. The challenge then is to make the images sufficiently similar to the real ones to be useful in algorithm development. One of the first and most widely studied medical image analysis tasks is to automate screening for cervical cancer through Pap-smear analysis. As part of an effort to develop a new generation cervical cancer screening system, we have developed a framework for the creation of realistic synthetic bright-field microscopy images that can be used for algorithm development and benchmarking. The resulting framework has been assessed through a visual evaluation by experts with extensive experience of Pap-smear images. The results show that images produced using our described methods are realistic enough to be mistaken for real microscopy images. The developed simulation framework is very flexible and can be modified to mimic many other types of bright-field microscopy images. © 2015 The Authors. Published by Wiley Periodicals, Inc. on behalf of ISAC PMID:25573002
Neale, Chris; Johnston, Patrick; Hughes, Matthew; Scholey, Andrew
2015-01-01
The Rapid Visual Information Processing (RVIP) task, a serial discrimination task in which performance is believed to reflect sustained attention capabilities, is widely used in behavioural research and increasingly in neuroimaging studies. To date, functional neuroimaging research into the RVIP has been undertaken using block analyses, reflecting the sustained processing involved in the task, but not necessarily the transient processes associated with individual trial performance. Furthermore, this research has been limited to young cohorts. This study assessed the behavioural and functional magnetic resonance imaging (fMRI) outcomes of the RVIP task using both block and event-related analyses in a healthy middle-aged cohort (mean age = 53.56 years, n = 16). The results show that the version of the RVIP used here is sensitive to changes in attentional demand, with participants achieving a 43% accuracy hit rate in the experimental task compared with 96% accuracy in the control task. As shown by previous research, the block analysis revealed an increase in activation in a network of frontal, parietal, occipital and cerebellar regions. The event-related analysis showed a similar network of activation, seemingly omitting regions involved in the sustained processing of the task (as shown in the block analysis), such as occipital areas and the thalamus, providing an indication of a network of regions involved in correct trial performance. Frontal (superior and inferior frontal gyri), parietal (precuneus, inferior parietal lobe) and cerebellar regions were shown to be active in both the block and event-related analyses, suggesting their importance in sustained attention/vigilance. These networks and the differences between them are discussed in detail, as well as implications for future research in middle-aged cohorts.
A Novel Device for Grasping Assessment during Functional Tasks: Preliminary Results
Rocha, Ana Carolinne Portela; Tudella, Eloisa; Pedro, Leonardo M.; Appel, Viviane Cristina Roma; da Silva, Louise Gracelli Pereira; Caurin, Glauco Augusto de Paula
2016-01-01
This paper presents a methodology and first results obtained in a study with a novel device that allows the analysis of grasping quality. The device is able to acquire motion information from the upper limbs, also allowing analysis of manipulation kinetics. A pilot experiment was carried out with six groups of typically developing children aged between 5 and 10 years, with seven to eight children in each. The device, designed to emulate a glass, has an optical system composed of one digital camera and a special convex mirror that together allow image acquisition of the grasping hand posture while the device is grasped and manipulated. It also carries an Inertial Measurement Unit that captures motion data such as acceleration, orientation, and angular velocities. The novel instrumented object is used in our approach to evaluate functional task performance in quantitative terms. During tests, each child was invited to grasp the cylindrical part of the device, which was placed on top of a table, simulating the task of drinking a glass of water. In the sequence, the child was instructed to transport the device back to the starting position and release it. The task was repeated three times for each child. A grasping hand posture evaluation is presented as an example of assessing grasping quality. Additionally, motion patterns obtained in the trials performed with the different groups are presented and discussed. The device is attractive due to its portability, small size, and ability to evaluate grasp form. The results may also be useful for analyzing the evolution of the rehabilitation process through reach-to-grasp movement and analysis of the grasping images. PMID:26942178
McDonald, Amalia R; Muraskin, Jordan; Dam, Nicholas T Van; Froehlich, Caroline; Puccio, Benjamin; Pellman, John; Bauer, Clemens C C; Akeyson, Alexis; Breland, Melissa M; Calhoun, Vince D; Carter, Steven; Chang, Tiffany P; Gessner, Chelsea; Gianonne, Alyssa; Giavasis, Steven; Glass, Jamie; Homann, Steven; King, Margaret; Kramer, Melissa; Landis, Drew; Lieval, Alexis; Lisinski, Jonathan; Mackay-Brandt, Anna; Miller, Brittny; Panek, Laura; Reed, Hayley; Santiago, Christine; Schoell, Eszter; Sinnig, Richard; Sital, Melissa; Taverna, Elise; Tobe, Russell; Trautman, Kristin; Varghese, Betty; Walden, Lauren; Wang, Runtang; Waters, Abigail B; Wood, Dylan C; Castellanos, F Xavier; Leventhal, Bennett; Colcombe, Stanley J; LaConte, Stephen; Milham, Michael P; Craddock, R Cameron
2017-02-01
This data descriptor describes a repository of openly shared data from an experiment to assess inter-individual differences in default mode network (DMN) activity. This repository includes cross-sectional functional magnetic resonance imaging (fMRI) data from the Multi Source Interference Task, to assess DMN deactivation, the Moral Dilemma Task, to assess DMN activation, a resting state fMRI scan, and a DMN neurofeedback paradigm, to assess DMN modulation, along with accompanying behavioral and cognitive measures. We report technical validation from n=125 participants of the final targeted sample of 180 participants. Each session includes acquisition of one whole-brain anatomical scan and whole-brain echo-planar imaging (EPI) scans, acquired during the aforementioned tasks and resting state. The data includes several self-report measures related to perseverative thinking, emotion regulation, and imaginative processes, along with a behavioral measure of rapid visual information processing. Technical validation of the data confirms that the tasks deactivate and activate the DMN as expected. Group level analysis of the neurofeedback data indicates that the participants are able to modulate their DMN with considerable inter-subject variability. Preliminary analysis of behavioral responses and specifically self-reported sleep indicate that as many as 73 participants may need to be excluded from an analysis depending on the hypothesis being tested. The present data are linked to the enhanced Nathan Kline Institute, Rockland Sample and builds on the comprehensive neuroimaging and deep phenotyping available therein. As limited information is presently available about individual differences in the capacity to directly modulate the default mode network, these data provide a unique opportunity to examine DMN modulation ability in relation to numerous phenotypic characteristics. Copyright © 2016 Elsevier Inc. All rights reserved.
Computerized Analysis of MR and Ultrasound Images of Breast Lesions
2001-07-01
Although general rules for the differentiation between benign and malignant mammographically identified breast lesions exist, considerable...round-robin runs yielded Az values of 0.94 and 0.87 in the task of distinguishing between benign and malignant lesions in the entire database
Computerized Analysis of MR and Ultrasound Images of Breast Lesions
2000-07-01
Although general rules for the differentiation between benign and malignant mammographically identified breast lesions exist, considerable...round-robin runs yielded Az values of 0.94 and 0.87 in the task of distinguishing between benign and malignant lesions in the entire database and the
Sun, Li; Liang, Peipeng; Jia, Xiuqin; Qi, Zhigang; Li, Kuncheng
2014-01-01
Recent neuroimaging studies have shown that elderly adults exhibit increased and decreased activation on various cognitive tasks, yet little is known about age-related changes in inductive reasoning. To investigate the neural basis for the aging effect on inductive reasoning, 15 young and 15 elderly subjects performed numerical inductive reasoning while in a magnetic resonance (MR) scanner. Functional magnetic resonance imaging (fMRI) analysis revealed that numerical inductive reasoning, relative to rest, yielded multiple frontal, temporal, parietal, and some subcortical area activations for both age groups. In addition, the younger participants showed significant regions of task-induced deactivation, while no deactivation occurred in the elderly adults. Direct group comparisons showed that elderly adults exhibited greater activity in regions of task-related activation and areas showing task-induced deactivation (TID) in the younger group. Our findings suggest an age-related deficiency in neural function and resource allocation during inductive reasoning.
Leavesley, Silas J; Sweat, Brenner; Abbott, Caitlyn; Favreau, Peter; Rich, Thomas C
2018-01-01
Spectral imaging technologies have been used for many years by the remote sensing community. More recently, these approaches have been applied to biomedical problems, where they have shown great promise. However, biomedical spectral imaging has been complicated by the high variance of biological data and the reduced ability to construct test scenarios with fixed ground truths. Hence, it has been difficult to objectively assess and compare biomedical spectral imaging assays and technologies. Here, we present a standardized methodology that allows assessment of the performance of biomedical spectral imaging equipment, assays, and analysis algorithms. This methodology incorporates real experimental data and a theoretical sensitivity analysis, preserving the variability present in biomedical image data. We demonstrate that this approach can be applied in several ways: to compare the effectiveness of spectral analysis algorithms, to compare the response of different imaging platforms, and to assess the level of target signature required to achieve a desired performance. Results indicate that it is possible to compare even very different hardware platforms using this methodology. Future applications could include a range of optimization tasks, such as maximizing detection sensitivity or acquisition speed, providing high utility for investigators ranging from design engineers to biomedical scientists. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Eye movements reduce vividness and emotionality of "flashforwards".
Engelhard, Iris M; van den Hout, Marcel A; Janssen, Wilco C; van der Beek, Jorinde
2010-05-01
Earlier studies have shown that eye movements during retrieval of disturbing images about past events reduce their vividness and emotionality, which may be due to both tasks competing for working memory resources. This study examined whether eye movements reduce vividness and emotionality of visual distressing images about feared future events: "flashforwards". A non-clinical sample was asked to select two images of feared future events, which were self-rated for vividness and emotionality. These images were retrieved while making eye movements or without a concurrent secondary task, and then vividness and emotionality were rated again. Relative to the no-dual task condition, eye movements while thinking of future-oriented images resulted in decreased ratings of image vividness and emotional intensity. Apparently, eye movements reduce vividness and emotionality of visual images about past and future feared events. This is in line with a working memory account of the beneficial effects of eye movements, which predicts that any task that taxes working memory during retrieval of disturbing mental images will be beneficial. Copyright 2010 Elsevier Ltd. All rights reserved.
A generic FPGA-based detector readout and real-time image processing board
NASA Astrophysics Data System (ADS)
Sarpotdar, Mayuresh; Mathew, Joice; Safonova, Margarita; Murthy, Jayant
2016-07-01
For space-based astronomical observations, it is important to have a mechanism to capture the digital output from the standard detector for further on-board analysis and storage. We have developed a generic (application-wise) field-programmable gate array (FPGA) board to interface with an image sensor, a method to generate the clocks required to read the image data from the sensor, and a real-time image processor system (on-chip) which can be used for various image processing tasks. The FPGA board is applied as the image processor board in the Lunar Ultraviolet Cosmic Imager (LUCI) and a star sensor (StarSense), instruments developed by our group. In this paper, we discuss the various design considerations for this board and its applications in future balloon and possible space flights.
A unified classifier for robust face recognition based on combining multiple subspace algorithms
NASA Astrophysics Data System (ADS)
Ijaz Bajwa, Usama; Ahmad Taj, Imtiaz; Waqas Anwar, Muhammad
2012-10-01
Face recognition, the fastest growing biometric technology, has expanded manifold in the last few years. Various new algorithms and commercial systems have been proposed and developed. However, none of the proposed or developed algorithms is a complete solution: an algorithm may work very well on one set of images, with, say, illumination changes, but may not work properly on another set of variations, such as expression changes. This study is motivated by the fact that no single classifier can claim generally better performance against all facial image variations. To overcome this shortcoming and achieve generality, combining several classifiers using various strategies has been studied extensively, including the question of each classifier's suitability for the task. The study is based on the outcome of a comprehensive comparative analysis conducted on a combination of six subspace extraction algorithms and four distance metrics on three facial databases. The analysis leads to the selection of the most suitable classifiers, each of which performs better on one task or another. These classifiers are then combined into an ensemble classifier by two different strategies: weighted sum and re-ranking. The results of the ensemble classifier show that these strategies can be effectively used to construct a single classifier that can successfully handle varying facial image conditions of illumination, aging, and facial expressions.
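The weighted-sum fusion strategy mentioned in the abstract can be sketched as follows; the two classifiers, their distance values, and the weights below are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

def fuse_scores(score_lists, weights):
    """Weighted-sum fusion of per-classifier distance scores.

    score_lists: list of 1-D arrays, one per classifier, each holding the
    distance of a probe image to every gallery identity. Scores are min-max
    normalized per classifier first, so classifiers with different distance
    ranges contribute comparably.
    """
    fused = np.zeros_like(np.asarray(score_lists[0], dtype=float))
    for scores, w in zip(score_lists, weights):
        s = np.asarray(scores, dtype=float)
        rng = s.max() - s.min()
        norm = (s - s.min()) / rng if rng > 0 else np.zeros_like(s)
        fused += w * norm
    return fused

# Two hypothetical classifiers scoring a probe against 4 gallery identities
pca_dist = np.array([0.9, 0.2, 0.7, 0.5])    # e.g. PCA subspace + Euclidean
lda_dist = np.array([12.0, 8.0, 3.0, 10.0])  # e.g. LDA subspace + another metric
fused = fuse_scores([pca_dist, lda_dist], weights=[0.6, 0.4])
best_match = int(np.argmin(fused))  # identity with the smallest fused distance
```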
MRI Segmentation of the Human Brain: Challenges, Methods, and Applications
Despotović, Ivana
2015-01-01
Image segmentation is one of the most important tasks in medical image analysis and is often the first and the most critical step in many clinical applications. In brain MRI analysis, image segmentation is commonly used for measuring and visualizing the brain's anatomical structures, for analyzing brain changes, for delineating pathological regions, and for surgical planning and image-guided interventions. In the last few decades, various segmentation techniques of different accuracy and degree of complexity have been developed and reported in the literature. In this paper we review the most popular methods commonly used for brain MRI segmentation. We highlight differences between them and discuss their capabilities, advantages, and limitations. To address the complexity and challenges of the brain MRI segmentation problem, we first introduce the basic concepts of image segmentation. Then, we explain different MRI preprocessing steps including image registration, bias field correction, and removal of nonbrain tissue. Finally, after reviewing different brain MRI segmentation methods, we discuss the validation problem in brain MRI segmentation. PMID:25945121
Deep learning of symmetrical discrepancies for computer-aided detection of mammographic masses
NASA Astrophysics Data System (ADS)
Kooi, Thijs; Karssemeijer, Nico
2017-03-01
When humans identify objects in images, context is an important cue; a cheetah is more likely to be a domestic cat when a television set is recognised in the background. Similar principles apply to the analysis of medical images. The detection of diseases that manifest unilaterally in symmetrical organs or organ pairs can in part be facilitated by a search for symmetrical discrepancies in or between the organs in question. During a mammographic exam, images are recorded of each breast and absence of a certain structure around the same location in the contralateral image will render the area under scrutiny more suspicious and conversely, the presence of similar tissue less so. In this paper, we present a fusion scheme for a deep Convolutional Neural Network (CNN) architecture with the goal to optimally capture such asymmetries. The method is applied to the domain of mammography CAD, but can be relevant to other medical image analysis tasks where symmetry is important such as lung, prostate or brain images.
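The asymmetry cue described above can be illustrated with a toy contralateral-difference computation (a hand-crafted stand-in for what the paper's CNN learns to exploit; the images and the "mass" below are synthetic):

```python
import numpy as np

def symmetry_feature(left, right):
    """Contralateral difference feature for symmetric organ pairs.

    Mirrors the right-side image and subtracts it from the left, so a
    structure present in only one side shows up as a strong residual.
    """
    mirrored = right[:, ::-1]  # flip horizontally to align with the left side
    return np.abs(left - mirrored)

left = np.zeros((8, 8))
left[3, 2] = 1.0               # synthetic 'mass' on the left side only
right = np.zeros((8, 8))       # contralateral side: no corresponding tissue
asym = symmetry_feature(left, right)  # residual highlights the asymmetry
```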
Innovations in the Analysis of Chandra-ACIS Observations
NASA Astrophysics Data System (ADS)
Broos, Patrick S.; Townsley, Leisa K.; Feigelson, Eric D.; Getman, Konstantin V.; Bauer, Franz E.; Garmire, Gordon P.
2010-05-01
As members of the instrument team for the Advanced CCD Imaging Spectrometer (ACIS) on NASA's Chandra X-ray Observatory and as Chandra General Observers, we have developed a wide variety of data analysis methods that we believe are useful to the Chandra community, and have constructed a significant body of publicly available software (the ACIS Extract package) addressing important ACIS data and science analysis tasks. This paper seeks to describe these data analysis methods for two purposes: to document the data analysis work performed in our own science projects and to help other ACIS observers judge whether these methods may be useful in their own projects (regardless of what tools and procedures they choose to implement those methods). The ACIS data analysis recommendations we offer here address much of the workflow in a typical ACIS project, including data preparation, point source detection via both wavelet decomposition and image reconstruction, masking point sources, identification of diffuse structures, event extraction for both point and diffuse sources, merging extractions from multiple observations, nonparametric broadband photometry, analysis of low-count spectra, and automation of these tasks. Many of the innovations presented here arise from several, often interwoven, complications that are found in many Chandra projects: large numbers of point sources (hundreds to several thousand), faint point sources, misaligned multiple observations of an astronomical field, point source crowding, and scientifically relevant diffuse emission.
How acute total sleep loss affects the attending brain: a meta-analysis of neuroimaging studies.
Ma, Ning; Dinges, David F; Basner, Mathias; Rao, Hengyi
2015-02-01
Attention is a cognitive domain that can be severely affected by sleep deprivation. Previous neuroimaging studies have used different attention paradigms and reported both increased and reduced brain activation after sleep deprivation. However, due to large variability in sleep deprivation protocols, task paradigms, experimental designs, characteristics of subject populations, and imaging techniques, there is no consensus regarding the effects of sleep loss on the attending brain. The aim of this meta-analysis was to identify brain activations that are commonly altered by acute total sleep deprivation across different attention tasks. Coordinate-based meta-analysis of neuroimaging studies of performance on attention tasks during experimental sleep deprivation. The current version of the activation likelihood estimation (ALE) approach was used for meta-analysis. The authors searched published articles and identified 11 sleep deprivation neuroimaging studies using different attention tasks with a total of 185 participants, equaling 81 foci for ALE analysis. The meta-analysis revealed significantly reduced brain activation in multiple regions following sleep deprivation compared to rested wakefulness, including bilateral intraparietal sulcus, bilateral insula, right prefrontal cortex, medial frontal cortex, and right parahippocampal gyrus. Increased activation was found only in bilateral thalamus after sleep deprivation compared to rested wakefulness. Acute total sleep deprivation decreases brain activation in the fronto-parietal attention network (prefrontal cortex and intraparietal sulcus) and in the salience network (insula and medial frontal cortex). Increased thalamic activation after sleep deprivation may reflect a complex interaction between the de-arousing effects of sleep loss and the arousing effects of task performance on thalamic activity. © 2015 Associated Professional Sleep Societies, LLC.
Canopy, Erin; Evans, Matt; Boehler, Margaret; Roberts, Nicole; Sanfey, Hilary; Mellinger, John
2015-10-01
Endoscopic retrograde cholangiopancreatography is a challenging procedure performed by surgeons and gastroenterologists. We employed cognitive task analysis to identify steps and decision points for this procedure. Standardized interviews were conducted with expert gastroenterologists (7) and surgeons (4) from 4 institutions. A procedural step and cognitive decision point protocol was created from audio-taped transcriptions and was refined by 5 additional surgeons. Conceptual elements, sequential actions, and decision points were iterated for 5 tasks: patient preparation, duodenal intubation, selective cannulation, imaging interpretation with related therapeutic intervention, and complication management. A total of 180 steps were identified. Gastroenterologists identified 34 steps not identified by surgeons, and surgeons identified 20 steps not identified by gastroenterologists. The findings suggest that for complex procedures performed by diverse practitioners, more experts may help delineate distinctive emphases differentiated by training background and type of practice. Copyright © 2015 Elsevier Inc. All rights reserved.
Schmithorst, Vincent J; Brown, Rhonda Douglas
2004-07-01
The suitability of a previously hypothesized triple-code model of numerical processing, involving analog magnitude, auditory verbal, and visual Arabic codes of representation, was investigated for the complex mathematical task of the mental addition and subtraction of fractions. Functional magnetic resonance imaging (fMRI) data from 15 normal adult subjects were processed using exploratory group Independent Component Analysis (ICA). Separate task-related components were found with activation in bilateral inferior parietal, left perisylvian, and ventral occipitotemporal areas. These results support the hypothesized triple-code model corresponding to the activated regions found in the individual components and indicate that the triple-code model may be a suitable framework for analyzing the neuropsychological bases of the performance of complex mathematical tasks. Copyright 2004 Elsevier Inc.
The Edge Detectors Suitable for Retinal OCT Image Segmentation
Yang, Jing; Gao, Qian; Zhou, Sheng
2017-01-01
Retinal layer thickness measurement offers important information for reliable diagnosis of retinal diseases and for the evaluation of disease development and medical treatment responses. This task critically depends on accurate edge detection of the retinal layers in OCT images. Here, we intended to search for the most suitable edge detectors for the retinal OCT image segmentation task. The three most promising edge detection algorithms were identified in the related literature: the Canny edge detector, the two-pass method, and the EdgeFlow technique. The quantitative evaluation results show that the two-pass method consistently outperforms the Canny detector and the EdgeFlow technique in delineating the retinal layer boundaries in the OCT images. In addition, the mean localization deviation metrics show that the two-pass method caused the smallest edge-shifting problem. These findings suggest that the two-pass method is the best among the three algorithms for detecting retinal layer boundaries. The overall better performance of the Canny and two-pass methods over the EdgeFlow technique implies that the OCT images contain more intensity gradient information than texture changes along the retinal layer boundaries. The results will guide our future efforts in the quantitative analysis of retinal OCT images for the effective use of OCT technologies in the field of ophthalmology. PMID:29065594
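As a point of reference for the intensity-gradient information the abstract mentions, a minimal gradient-magnitude edge detector (Sobel kernels, the principle underlying Canny's first stage) might look like this; it is a generic sketch, not the paper's two-pass method:

```python
import numpy as np

def sobel_edges(img, thresh=0.5):
    """Basic gradient-magnitude edge detection with Sobel kernels.

    Returns a boolean map of pixels whose gradient magnitude exceeds
    thresh * (maximum magnitude). Border pixels are left unmarked.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    H, W = img.shape
    gx = np.zeros((H, W))
    gy = np.zeros((H, W))
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    mag = np.hypot(gx, gy)
    return mag > thresh * mag.max() if mag.max() > 0 else mag.astype(bool)

# A synthetic horizontal 'layer boundary': dark region above, bright below
img = np.zeros((8, 8))
img[4:, :] = 1.0
edges = sobel_edges(img)  # marks the rows adjacent to the boundary
```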
NASA Astrophysics Data System (ADS)
Radun, Jenni; Leisti, Tuomas; Virtanen, Toni; Nyman, Göte; Häkkinen, Jukka
2014-11-01
To understand the viewing strategies employed in a quality estimation task, we compared two visual tasks: quality estimation and difference estimation. The estimation was done for pairs of natural images having small global changes in quality. Two groups of observers estimated the same set of images, but with different instructions: one group estimated the difference in quality and the other the difference between the image pairs. The results demonstrated the use of different visual strategies in the tasks. The quality estimation was found to include more visual planning during the first fixation than the difference estimation, but afterward needed only a few long fixations on the semantically important areas of the image. The difference estimation used many short fixations. Salient image areas were mainly attended to when these areas were also semantically important. The results support the hypothesis that these tasks' general characteristics (evaluation time, number of fixations, area fixated on) show differences in processing, but also suggest that examining only single fixations when comparing tasks is too narrow a view. When planning a subjective experiment, one must remember that a small change in the instructions might lead to a noticeable change in viewing strategy.
Rough-Fuzzy Clustering and Unsupervised Feature Selection for Wavelet Based MR Image Segmentation
Maji, Pradipta; Roy, Shaswati
2015-01-01
Image segmentation is an indispensable process in the visualization of human tissues, particularly during clinical analysis of brain magnetic resonance (MR) images. For many human experts, manual segmentation is a difficult and time-consuming task, which makes an automated brain MR image segmentation method desirable. In this regard, this paper presents a new segmentation method for brain MR images, integrating judiciously the merits of rough-fuzzy computing and multiresolution image analysis technique. The proposed method assumes that the major brain tissues, namely, gray matter, white matter, and cerebrospinal fluid from the MR images are considered to have different textural properties. The dyadic wavelet analysis is used to extract the scale-space feature vector for each pixel, while the rough-fuzzy clustering is used to address the uncertainty problem of brain MR image segmentation. An unsupervised feature selection method is introduced, based on maximum relevance-maximum significance criterion, to select relevant and significant textural features for segmentation problem, while the mathematical morphology based skull stripping preprocessing step is proposed to remove the non-cerebral tissues such as the skull. The performance of the proposed method, along with a comparison with related approaches, is demonstrated on a set of synthetic and real brain MR images using standard validity indices. PMID:25848961
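The fuzzy-membership idea that rough-fuzzy clustering builds on can be illustrated with plain fuzzy c-means on 1-D intensities; the paper's rough lower/upper approximations, wavelet features, and feature selection are omitted in this sketch:

```python
import numpy as np

def fuzzy_cmeans(x, c=3, m=2.0, iters=50):
    """Standard fuzzy c-means on 1-D feature values.

    Each point gets a membership degree in every cluster (rows of u sum
    to 1), and centers are membership-weighted means. Deterministic
    linspace initialization keeps the sketch reproducible.
    """
    x = np.asarray(x, dtype=float)
    centers = np.linspace(x.min(), x.max(), c)
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12  # avoid /0
        # membership u_ik proportional to d_ik^(-2/(m-1))
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)
        centers = ((u ** m).T @ x) / (u ** m).sum(axis=0)
    return u, np.sort(centers)

# Synthetic intensities from three well-separated 'tissue' classes
x = np.concatenate([np.full(20, 0.1), np.full(20, 0.5), np.full(20, 0.9)])
u, centers = fuzzy_cmeans(x, c=3)
```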
[Functional magnetic resonance imaging of brain of college students with internet addiction].
DU, Wanping; Liu, Jun; Gao, Xunping; Li, Lingjiang; Li, Weihui; Li, Xin; Zhang, Yan; Zhou, Shunke
2011-08-01
To explore the functional locations of brain regions related to internet addiction (IA) with task-based functional magnetic resonance imaging (fMRI), nineteen college students with internet game addiction and 19 controls were presented with video stimuli via computer. A 3.0 Tesla MRI scanner was used to record the echo-planar imaging results, using a block design. Intragroup and intergroup analysis results for the 2 groups were obtained, and the differences between the 2 groups were analyzed. The internet game videos markedly activated brain regions in the college students both with and without internet game addiction. Compared with the control group, the IA group showed increased activation in the right superior parietal lobule, right insular lobe, right precuneus, right cingulate gyrus, and right superior temporal gyrus. Internet game tasks can activate the vision, space, attention, and execution centers, which comprise the temporal-occipital gyrus and frontal-parietal gyrus. Abnormal brain function and lateralized activation of the right brain may exist in IA.
Drawing from Memory: Hand-Eye Coordination at Multiple Scales
Spivey, Michael J.
2013-01-01
Eyes move to gather visual information for the purpose of guiding behavior. This guidance takes the form of perceptual-motor interactions on short timescales for behaviors like locomotion and hand-eye coordination. More complex behaviors require perceptual-motor interactions on longer timescales mediated by memory, such as navigation, or designing and building artifacts. In the present study, the task of sketching images of natural scenes from memory was used to examine and compare perceptual-motor interactions on shorter and longer timescales. Eye and pen trajectories were found to be coordinated in time on shorter timescales during drawing, and also on longer timescales spanning study and drawing periods. The latter type of coordination was found by developing a purely spatial analysis that yielded measures of similarity between images, eye trajectories, and pen trajectories. These results challenge the notion that coordination only unfolds on short timescales. Rather, the task of drawing from memory evokes perceptual-motor encodings of visual images that preserve coarse-grained spatial information over relatively long timescales as well. PMID:23554894
Methods for comparing 3D surface attributes
NASA Astrophysics Data System (ADS)
Pang, Alex; Freeman, Adam
1996-03-01
A common task in data analysis is to compare two or more sets of data, statistics, presentations, etc. A predominant method in use is side-by-side visual comparison of images. While straightforward, it burdens the user with the task of discerning the differences between the two images. The user is further taxed when the images are of 3D scenes. This paper presents several methods for analyzing the extent, magnitude, and manner in which surfaces in 3D differ in their attributes. The surface geometries are assumed to be identical and only the surface attributes (color, texture, etc.) are variable. As a case in point, we examine the differences obtained when a 3D scene is rendered progressively using radiosity with different form-factor calculation methods. The comparison methods include extensions of simple methods such as mapping difference information to color or transparency, and more recent methods including the use of surface texture, perturbation, and adaptive placement of error glyphs.
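The simplest of the comparison methods, mapping per-pixel attribute differences to color, might be sketched as follows; the diverging white-to-red/blue ramp is an illustrative choice, not the paper's actual mapping:

```python
import numpy as np

def difference_to_color(attr_a, attr_b):
    """Map per-pixel attribute differences to a diverging color ramp.

    Blue where the second image is darker, red where it is brighter,
    white where the attributes agree. Returns an (..., 3) RGB array.
    """
    diff = np.asarray(attr_b, dtype=float) - np.asarray(attr_a, dtype=float)
    scale = np.max(np.abs(diff))
    t = diff / scale if scale > 0 else diff      # signed, in [-1, 1]
    rgb = np.ones(diff.shape + (3,))             # start from white
    rgb[..., 0] -= np.clip(-t, 0, 1)             # negative diff pulls toward blue
    rgb[..., 1] -= np.abs(t)                     # magnitude drains green
    rgb[..., 2] -= np.clip(t, 0, 1)              # positive diff pulls toward red
    return rgb

# One pixel brighter in the second rendering, the rest unchanged
attr_a = np.zeros((2, 2))
attr_b = np.zeros((2, 2))
attr_b[0, 0] = 1.0
rgb = difference_to_color(attr_a, attr_b)  # [0,0] is red, the rest white
```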
NASA Astrophysics Data System (ADS)
Postadjian, T.; Le Bris, A.; Sahbi, H.; Mallet, C.
2017-05-01
Semantic classification is a core remote sensing task as it provides the fundamental input for land-cover map generation. The very recent literature has shown the superior performance of deep convolutional neural networks (DCNN) for many classification tasks including the automatic analysis of Very High Spatial Resolution (VHR) geospatial images. Most of the recent initiatives have focused on very high discrimination capacity combined with accurate object boundary retrieval. Therefore, current architectures are well tailored to urban areas over restricted extents, but are not designed for large-scale purposes. This paper presents an end-to-end automatic processing chain, based on DCNNs, that aims at performing large-scale classification of VHR satellite images (here SPOT 6/7). Since this work assesses, through various experiments, the potential of DCNNs for country-scale VHR land-cover map generation, a simple yet effective architecture is proposed, efficiently discriminating the main classes of interest (namely buildings, roads, water, crops, vegetated areas) by exploiting existing VHR land-cover maps for training.
LittleQuickWarp: an ultrafast image warping tool.
Qu, Lei; Peng, Hanchuan
2015-02-01
Warping images into a standard coordinate space is critical for many image computing related tasks. However, for multi-dimensional and high-resolution images, an accurate warping operation itself is often very expensive in terms of computer memory and computational time. For high-throughput image analysis studies such as brain mapping projects, it is desirable to have high performance image warping tools that are compatible with common image analysis pipelines. In this article, we present LittleQuickWarp, a swift and memory efficient tool that boosts 3D image warping performance dramatically and at the same time has high warping quality similar to the widely used thin plate spline (TPS) warping. Compared to the TPS, LittleQuickWarp can improve the warping speed 2-5 times and reduce the memory consumption 6-20 times. We have implemented LittleQuickWarp as an Open Source plug-in program on top of the Vaa3D system (http://vaa3d.org). The source code and a brief tutorial can be found in the Vaa3D plugin source code repository. Copyright © 2014 Elsevier Inc. All rights reserved.
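The core resampling step that any such warping tool must perform, bilinear interpolation at fractional source coordinates, can be sketched as below; this is a generic illustration, not LittleQuickWarp's optimized algorithm:

```python
import numpy as np

def warp_bilinear(img, map_y, map_x):
    """Resample img at fractional coordinates via bilinear interpolation.

    map_y, map_x give, for every output pixel, the (possibly fractional)
    source coordinates; the four nearest source pixels are blended.
    Coordinates are clamped so sampling stays inside the image.
    """
    H, W = img.shape
    y0 = np.clip(np.floor(map_y).astype(int), 0, H - 2)
    x0 = np.clip(np.floor(map_x).astype(int), 0, W - 2)
    fy = np.clip(map_y - y0, 0, 1)
    fx = np.clip(map_x - x0, 0, 1)
    tl = img[y0, x0]
    tr = img[y0, x0 + 1]
    bl = img[y0 + 1, x0]
    br = img[y0 + 1, x0 + 1]
    top = tl * (1 - fx) + tr * fx
    bot = bl * (1 - fx) + br * fx
    return top * (1 - fy) + bot * fy

# Shift a 4x4 horizontal ramp half a pixel: output samples land between columns
img = np.tile(np.arange(4.0), (4, 1))
yy, xx = np.meshgrid(np.arange(4.0), np.arange(4.0), indexing="ij")
out = warp_bilinear(img, yy, xx + 0.5)
```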
Systematic Parameterization, Storage, and Representation of Volumetric DICOM Data.
Fischer, Felix; Selver, M Alper; Gezer, Sinem; Dicle, Oğuz; Hillen, Walter
Tomographic medical imaging systems produce hundreds to thousands of slices, enabling three-dimensional (3D) analysis. Radiologists process these images through various tools and techniques in order to generate 3D renderings for various applications, such as surgical planning, medical education, and volumetric measurements. To save and store these visualizations, current systems use snapshots or video exporting, which prevents further optimizations and requires the storage of significant additional data. The Grayscale Softcopy Presentation State extension of the Digital Imaging and Communications in Medicine (DICOM) standard resolves this issue for two-dimensional (2D) data by introducing an extensive set of parameters, namely 2D Presentation States (2DPR), that describe how an image should be displayed. 2DPR allows storing these parameters instead of storing parameter-applied images, which causes unnecessary duplication of the image data. Since there is currently no corresponding extension for 3D data, in this study, a DICOM-compliant object called 3D presentation states (3DPR) is proposed for the parameterization and storage of 3D medical volumes. To accomplish this, the 3D medical visualization process is divided into four tasks, namely pre-processing, segmentation, post-processing, and rendering. The important parameters of each task are determined. Special focus is given to the compression of segmented data, parameterization of the rendering process, and DICOM-compliant implementation of the 3DPR object. The use of 3DPR was tested in a radiology department on three clinical cases, which require multiple segmentations and visualizations during the workflow of radiologists. The results show that 3DPR can effectively simplify the workload of physicians by directly regenerating 3D renderings without repeating intermediate tasks, increase efficiency by preserving all user interactions, and provide efficient storage as well as transfer of visualized data.
Hyperspectral image analysis using artificial color
NASA Astrophysics Data System (ADS)
Fu, Jian; Caulfield, H. John; Wu, Dongsheng; Tadesse, Wubishet
2010-03-01
By definition, HSC (HyperSpectral Camera) images are much richer in spectral data than, say, a COTS (Commercial-Off-The-Shelf) color camera. But data are not information. If we do the task right, useful information can be derived from the data in HSC images. Nature faced essentially the identical problem. The incident light is so complex spectrally that measuring it with high resolution would provide far more data than animals can handle in real time. Nature's solution was to do irreversible POCS (Projections Onto Convex Sets) to achieve huge reductions in data with minimal reduction in information. Thus we can arrange for our manmade systems to do what nature did: project the HSC image onto two or more broad, overlapping curves. The task we have undertaken in the last few years is to develop this idea, which we call Artificial Color. What we report here is the use of the measured HSC image data projected onto two or three convex, overlapping, broad curves in analogy with the sensitivity curves of human cone cells. Testing two quite different HSC images in that manner produced the desired result: good discrimination or segmentation that can be done very simply and hence is likely to be doable in real time with specialized computers. Using POCS on the HSC data to reduce the processing complexity produced excellent discrimination in those two cases. For technical reasons discussed here, the figures of merit for the kind of pattern recognition we use are incommensurate with the figures of merit of conventional pattern recognition. We used some force fitting to make a comparison nevertheless, because it shows what is also obvious qualitatively. In our tasks our method works better.
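The projection idea is essentially a per-pixel dot product of the spectrum with each broad curve; a sketch with assumed Gaussian "cone-like" curves (illustrative, not the paper's measured curves):

```python
import numpy as np

def artificial_color(cube, curves):
    """Project each pixel's spectrum onto broad sensitivity curves.

    cube: (H, W, B) hyperspectral data; curves: (C, B) response curves.
    Returns an (H, W, C) array of 'artificial color' channels: the dot
    product of each spectrum with each curve, in analogy with cone responses.
    """
    return np.tensordot(cube, curves, axes=([2], [1]))

# Two broad, overlapping Gaussian curves over 16 spectral bands (assumed)
bands = np.linspace(0, 1, 16)
curves = np.stack([np.exp(-((bands - 0.3) / 0.25) ** 2),
                   np.exp(-((bands - 0.7) / 0.25) ** 2)])

# Two synthetic pixels: one 'short-wavelength' spectrum, one 'long'
cube = np.zeros((1, 2, 16))
cube[0, 0] = np.exp(-((bands - 0.3) / 0.1) ** 2)
cube[0, 1] = np.exp(-((bands - 0.7) / 0.1) ** 2)
ac = artificial_color(cube, curves)  # each pixel reduced from 16 bands to 2
```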
Teaching Advanced Data Analysis Tools to High School Astronomy Students
NASA Astrophysics Data System (ADS)
Black, David V.; Herring, Julie; Hintz, Eric G.
2015-01-01
A major barrier to becoming an astronomer is learning how to analyze astronomical data, such as using photometry to compare the brightness of stars. Most fledgling astronomers learn observation, data reduction, and analysis skills through an upper division college class. If the same skills could be taught in an introductory high school astronomy class, then more students would have an opportunity to do authentic science earlier, with implications for how many choose to become astronomers. Several software tools have been developed that can analyze astronomical data ranging from fairly straightforward (AstroImageJ and DS9) to very complex (IRAF and DAOphot). During the summer of 2014, a study was undertaken at Brigham Young University through a Research Experience for Teachers (RET) program to evaluate the effectiveness and ease-of-use of these four software packages. Standard tasks tested included creating a false-color IR image using WISE data in DS9, Adobe Photoshop, and The Gimp; multi-aperture analyses of variable stars over time using AstroImageJ; creating Spectral Energy Distributions (SEDs) of stars using photometry at multiple wavelengths in AstroImageJ and DS9; and color-magnitude and hydrogen alpha index diagrams for open star clusters using IRAF and DAOphot. Tutorials were then written and combined with screen captures to teach high school astronomy students at Walden School of Liberal Arts in Provo, UT how to perform these same tasks. They analyzed image data using the four software packages, imported it into Microsoft Excel, and created charts using images from BYU's 36-inch telescope at their West Mountain Observatory. The students' attempts to complete these tasks were observed, mentoring was provided, and the students then reported on their experience through a self-reflection essay and concept test.
Results indicate that high school astronomy students can successfully complete professional-level astronomy data analyses when given detailed instruction tailored to their experience level along with proper support and mentoring. This project was funded by a grant from the National Science Foundation, Grant # PHY1157078.
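The core measurement the students practiced, multi-aperture photometry, can be sketched in a few lines: sum the counts inside a circular aperture around the star, subtract the sky level estimated from a surrounding annulus, and convert the net flux to an instrumental magnitude. This is an illustrative sketch, not AstroImageJ's actual implementation; the synthetic frame, aperture radii, and star parameters are all made up.

```python
import numpy as np

def aperture_photometry(image, cx, cy, r_ap=3.0, r_in=5.0, r_out=8.0):
    """Sum counts in a circular aperture and subtract the median sky
    estimated in a surrounding annulus (a simplified multi-aperture
    measurement in the style of AstroImageJ)."""
    yy, xx = np.indices(image.shape)
    d = np.hypot(xx - cx, yy - cy)
    ap = d <= r_ap
    sky = (d >= r_in) & (d <= r_out)
    sky_level = np.median(image[sky])
    return image[ap].sum() - sky_level * ap.sum()

# Synthetic frame: flat sky of 100 counts plus a Gaussian "star".
img = np.full((32, 32), 100.0)
yy, xx = np.indices(img.shape)
img += 5000.0 * np.exp(-((xx - 16) ** 2 + (yy - 16) ** 2) / (2 * 1.5 ** 2))

flux = aperture_photometry(img, 16, 16)
mag = -2.5 * np.log10(flux)      # instrumental magnitude
print(round(flux), round(mag, 2))
```

With an aperture radius of two Gaussian sigmas, the aperture captures roughly 86% of the star's total flux; real pipelines apply an aperture correction for the rest.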
A New Feedback-Based Method for Parameter Adaptation in Image Processing Routines.
Khan, Arif Ul Maula; Mikut, Ralf; Reischl, Markus
2016-01-01
The parametrization of automatic image processing routines is time-consuming when many image processing parameters are involved. An expert can tune parameters sequentially to get the desired results, but this may be unproductive for applications with difficult image analysis tasks, e.g. when high noise and shading levels are present in an image or when images vary in their characteristics due to different acquisition conditions; the parameters then need to be tuned simultaneously. We propose a framework that improves standard image segmentation methods by using feedback-based automatic parameter adaptation. Moreover, we compare algorithms by implementing them in a feedforward fashion and then adapting their parameters. This comparison is evaluated on a benchmark data set containing challenging image distortions of increasing severity, which enables us to compare different standard image segmentation algorithms in feedback vs. feedforward implementations by evaluating their segmentation quality and robustness. We also propose an efficient way of performing automatic image analysis when only abstract ground truth is present, in which the framework evaluates the robustness of different image processing pipelines using a graded data set. This is useful for both end-users and experts.
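The feedback idea can be illustrated with a minimal sketch: instead of hand-tuning a segmentation threshold in a feedforward fashion, a control loop adjusts it until the measured output matches an abstract ground truth, here taken to be the expected foreground coverage. This is a toy single-parameter version of the paper's framework; the gain, image, and target values are hypothetical.

```python
import numpy as np

def adapt_threshold(image, target_coverage, steps=50, gain=0.5):
    """Feedback loop: tune one segmentation parameter (the threshold)
    so the measured foreground fraction matches an abstract ground
    truth (expected coverage), rather than tuning it by hand."""
    t = image.mean()                       # feedforward initial guess
    span = np.ptp(image)
    for _ in range(steps):
        coverage = (image > t).mean()      # measure the current output
        t += gain * (coverage - target_coverage) * span  # feed back error
    return t

rng = np.random.default_rng(1)
img = rng.normal(0.3, 0.05, (64, 64))      # noisy background
img[20:40, 20:40] += 0.4                   # bright object, ~9.8% of pixels
t = adapt_threshold(img, target_coverage=400 / 4096)
seg = img > t
print(round(float(seg.mean()), 3))
```

Because coverage decreases monotonically as the threshold rises, the loop is a stable negative-feedback controller; multi-parameter versions replace this update with a general optimizer driven by the same quality measure.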
Quantitative subsurface analysis using frequency modulated thermal wave imaging
NASA Astrophysics Data System (ADS)
Subhani, S. K.; Suresh, B.; Ghali, V. S.
2018-01-01
Quantitative depth analysis of subsurface anomalies with enhanced depth resolution is a challenging task in thermographic depth estimation. Frequency modulated thermal wave imaging, introduced earlier, provides a complete depth scan of the object by stimulating it with a suitable band of frequencies and then analyzing the subsequent thermal response with a suitable post-processing approach to resolve subsurface details. Conventional Fourier-transform-based post-processing methods, however, unscramble the frequencies with limited frequency resolution and therefore yield only finite depth resolution. The spectral zooming provided by the chirp z-transform offers enhanced frequency resolution, which further improves the depth resolution so that the finest subsurface features can be explored axially. Quantitative depth analysis with this augmented depth resolution is proposed to provide a close estimate of the actual depth of a subsurface anomaly. This manuscript experimentally validates the enhanced depth resolution using non-stationary thermal wave imaging and offers a first, unique solution for quantitative depth estimation in frequency modulated thermal wave imaging.
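The spectral zooming the abstract attributes to the chirp z-transform amounts to evaluating the spectrum on a fine grid over a narrow band only, instead of the fixed fs/N grid of the FFT. The sketch below computes those zoomed samples by direct DTFT evaluation for clarity; the CZT reaches the same samples in O(N log N) via Bluestein's algorithm. Signal and band choices are illustrative.

```python
import numpy as np

def zoom_spectrum(x, fs, f1, f2, m):
    """Evaluate the spectrum of x on m points over [f1, f2] only --
    the spectral zooming that the chirp z-transform provides
    (computed here as a direct zoomed DTFT for clarity)."""
    n = np.arange(len(x))
    freqs = f1 + (f2 - f1) * np.arange(m) / m
    # One DTFT sample per requested frequency bin.
    spectrum = np.exp(-2j * np.pi * np.outer(freqs, n) / fs) @ x
    return freqs, np.abs(spectrum)

fs, N = 100.0, 256
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 10.3 * t)       # tone sitting between FFT bins

# FFT resolution is fs/N ~ 0.39 Hz; zooming 9-12 Hz gives 0.01 Hz bins.
freqs, mag = zoom_spectrum(x, fs, 9.0, 12.0, 300)
print(round(float(freqs[np.argmax(mag)]), 2))
```

The same refinement of frequency bins is what translates into finer depth discrimination in the thermal-wave setting, since each stimulation frequency probes a different diffusion depth.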
Recent Advances of Malaria Parasites Detection Systems Based on Mathematical Morphology
Di Ruberto, Cecilia; Kocher, Michel
2018-01-01
Malaria is an epidemic health disease and a rapid, accurate diagnosis is necessary for proper intervention. Generally, pathologists visually examine blood stained slides for malaria diagnosis. Nevertheless, this kind of visual inspection is subjective, error-prone and time-consuming. In order to overcome the issues, numerous methods of automatic malaria diagnosis have been proposed so far. In particular, many researchers have used mathematical morphology as a powerful tool for computer aided malaria detection and classification. Mathematical morphology is not only a theory for the analysis of spatial structures, but also a very powerful technique widely used for image processing purposes and employed successfully in biomedical image analysis, especially in preprocessing and segmentation tasks. Microscopic image analysis and particularly malaria detection and classification can greatly benefit from the use of morphological operators. The aim of this paper is to present a review of recent mathematical morphology based methods for malaria parasite detection and identification in stained blood smears images. PMID:29419781
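The preprocessing role of morphology mentioned above can be shown with the most basic operator pair: an opening (erosion followed by dilation) removes objects smaller than the structuring element while preserving larger blobs, the kind of step used to suppress staining debris before parasite detection. This is a from-scratch illustrative sketch with a 3x3 structuring element and a synthetic mask, not any published pipeline.

```python
import numpy as np

def dilate(mask):
    """Binary dilation with a 3x3 square structuring element."""
    out = mask.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= np.roll(np.roll(mask, dy, 0), dx, 1)
    return out

def erode(mask):
    """Binary erosion: complement of the dilation of the complement."""
    return ~dilate(~mask)

def opening(mask):
    """Erosion then dilation: removes specks smaller than the
    structuring element while restoring the shape of larger objects."""
    return dilate(erode(mask))

mask = np.zeros((16, 16), dtype=bool)
mask[4:10, 4:10] = True        # a "cell" blob
mask[13, 13] = True            # single-pixel noise speck
cleaned = opening(mask)
print(int(cleaned[13, 13]), int(cleaned[6, 6]))  # → 0 1
```

Production code would use `scipy.ndimage` or a morphology library, which also provide the grayscale variants (top-hat, gradient) common in blood-smear analysis.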
Server-based Approach to Web Visualization of Integrated Three-dimensional Brain Imaging Data
Poliakov, Andrew V.; Albright, Evan; Hinshaw, Kevin P.; Corina, David P.; Ojemann, George; Martin, Richard F.; Brinkley, James F.
2005-01-01
The authors describe a client-server approach to three-dimensional (3-D) visualization of neuroimaging data, which enables researchers to visualize, manipulate, and analyze large brain imaging datasets over the Internet. All computationally intensive tasks are done by a graphics server that loads and processes image volumes and 3-D models, renders 3-D scenes, and sends the renderings back to the client. The authors discuss the system architecture and implementation and give several examples of client applications that allow visualization and analysis of integrated language map data from single and multiple patients. PMID:15561787
Mueller matrix imaging and analysis of cancerous cells
NASA Astrophysics Data System (ADS)
Fernández, A.; Fernández-Luna, J. L.; Moreno, F.; Saiz, J. M.
2017-08-01
Imaging polarimetry is a focus of increasing interest in diagnostic medicine because of its non-invasive nature and its potential for recognizing abnormal tissues. However, handling polarimetric images is not an easy task, and different intermediate steps have been proposed to introduce physical parameters that may be helpful to interpret results. In this work, transmission Mueller matrices (MM) corresponding to cancer cell samples have been experimentally obtained, and three different transformations have been applied: MM-Polar Decomposition, MM-Transformation and MM-Differential Decomposition. Special attention has been paid to diattenuation as a sensitive parameter to identify apoptosis processes induced by cisplatin and etoposide.
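The diattenuation parameter the authors single out can be read directly off a Mueller matrix; it is the first quantity extracted in the Lu-Chipman polar decomposition, D = |(m01, m02, m03)| / m00. A minimal sketch with textbook matrices (an ideal polarizer and free space) illustrates the two extremes; per-pixel maps of experimental matrices work the same way.

```python
import numpy as np

def diattenuation(M):
    """Diattenuation of a Mueller matrix (first step of the Lu-Chipman
    polar decomposition): D = |(m01, m02, m03)| / m00."""
    return np.linalg.norm(M[0, 1:]) / M[0, 0]

# Ideal horizontal linear polarizer: fully diattenuating.
polarizer = 0.5 * np.array([[1, 1, 0, 0],
                            [1, 1, 0, 0],
                            [0, 0, 0, 0],
                            [0, 0, 0, 0.]])
print(diattenuation(polarizer))        # → 1.0
print(diattenuation(np.eye(4)))        # free space → 0.0
```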
Ang, Dan B; Angelopoulos, Christos; Katz, Jerald O
2006-11-01
The goals of this in vitro study were to determine the effect of signal fading of DenOptix photo-stimulable storage phosphor imaging plates scanned with a delay and to determine the effect on the diagnostic quality of the image. In addition, we sought to correlate signal fading with image spatial resolution and average pixel intensity values. Forty-eight images were obtained of a test specimen apparatus and scanned at 6 delayed time intervals: immediately scanned, 1 hour, 8 hours, 24 hours, 72 hours, and 168 hours. Six general dentists using Vixwin2000 software performed a measuring task to determine the location of an endodontic file tip and root apex. One-way ANOVA with repeated measures was used to determine the effect of signal fading (delayed scan time) on diagnostic image quality and average pixel intensity value. There was no statistically significant difference in diagnostic image quality resulting from signal fading. No difference was observed in spatial resolution of the images. There was a statistically significant difference in the pixel intensity analysis of an 8-step aluminum wedge between immediate scanning and 24-hour delayed scan time. There was an effect of delayed scanning on the average pixel intensity value. However, there was no effect on image quality and raters' ability to perform a clinical identification task. Proprietary software of the DenOptix digital imaging system demonstrates an excellent ability to process a delayed scan time signal and create an image of diagnostic quality.
Challenging the Image of the American Principalship.
ERIC Educational Resources Information Center
Langer, Sondra; Boris-Schacter, Sheryl
2003-01-01
Reports results of a three-year study of more than 200 principals concerning the tensions between role expectations and reality. Analysis of surveys and interviews finds most principals having to deal with three pairs of tensions: between instructional leadership and management tasks, between personal and professional demands, and between the principal's…
Fuzzy Matching Based on Gray-scale Difference for Quantum Images
NASA Astrophysics Data System (ADS)
Luo, GaoFeng; Zhou, Ri-Gui; Liu, XingAo; Hu, WenWen; Luo, Jia
2018-05-01
Quantum image processing has recently emerged as an essential problem in practical tasks, e.g. real-time image matching. Previous studies have shown that quantum superposition and entanglement can greatly improve the efficiency of complex image processing. In this paper, a fuzzy quantum image matching scheme based on gray-scale difference is proposed to find the target region in a reference image which is very similar to the template image. Firstly, we employ the novel enhanced quantum representation (NEQR) to store digital images. Then certain quantum operations are used to evaluate the gray-scale difference between two quantum images by thresholding. If all of the obtained gray-scale differences are no greater than the threshold value, it indicates a successful fuzzy matching of quantum images. Theoretical analysis and experiments show that the proposed scheme performs fuzzy matching at a low cost and also enables exponentially significant speedup via quantum parallel computation.
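The matching criterion itself is easy to state classically: slide the template over the reference and accept a position when every per-pixel gray-scale difference is within the threshold. The sketch below is that classical analogue on synthetic data (the quantum scheme evaluates all shifts in superposition, which is where the claimed speedup comes from); the image sizes and threshold are made up.

```python
import numpy as np

def fuzzy_match(reference, template, threshold):
    """Report all positions where every per-pixel gray-scale
    difference between the template and the reference window is at
    most the threshold (classical analogue of the quantum scheme)."""
    th, tw = template.shape
    hits = []
    for y in range(reference.shape[0] - th + 1):
        for x in range(reference.shape[1] - tw + 1):
            window = reference[y:y + th, x:x + tw]
            if np.all(np.abs(window - template) <= threshold):
                hits.append((y, x))
    return hits

rng = np.random.default_rng(2)
ref = rng.integers(0, 256, (20, 20))
tmpl = ref[5:9, 7:11] + rng.integers(-2, 3, (4, 4))  # noisy copy
print(fuzzy_match(ref, tmpl, threshold=2))           # → [(5, 7)]
```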
Theoretical and experimental studies relevant to interpretation of auroral emissions
NASA Technical Reports Server (NTRS)
Keffer, Charles E.
1991-01-01
The accomplishments achieved over the past year are detailed with emphasis on the interpretation of auroral emissions and studies of potential spacecraft-induced contamination effects. Accordingly, the research was divided into two tasks. The first task is designed to add to the understanding of space vehicle induced external contamination. An experimental facility for simulation of the external environment for a spacecraft in low earth orbit was developed. The facility was used to make laboratory measurements of important phenomena required for improving the understanding of the space vehicle induced external environment and its effect on measurement of auroral emissions from space-based platforms. A workshop was sponsored to provide a forum for presentation of the latest research by nationally recognized experts on space vehicle contamination and to discuss the impact of this research on future missions involving space-based platforms. The second task is to add an ab initio auroral calculation to the extant ionospheric/thermospheric global modeling capabilities. Once the addition of the code was complete, the combined model was to be used to compare the relative intensities and behavior of various emission sources (dayglow, aurora, etc.). Such studies are essential to an understanding of the types of vacuum ultraviolet (VUV) auroral images which are expected to be available within two years with the successful deployment of the Ultraviolet Imager (UVI) on the ISTP POLAR spacecraft. In anticipation of this, the second task includes support for meetings of the science working group for the UVI to discuss operational and data analysis needs. Taken together, the proposed tasks outline a course of study designed to make significant contributions to the field of space-based auroral imaging.
Berns, G S; Song, A W; Mao, H
1999-07-15
Linear experimental designs have dominated the field of functional neuroimaging, but although successful at mapping regions of relative brain activation, the technique assumes that both cognition and brain activation are linear processes. To test these assumptions, we performed a continuous functional magnetic resonance imaging (fMRI) experiment of finger opposition. Subjects performed a visually paced bimanual finger-tapping task. The frequency of finger tapping was continuously varied between 1 and 5 Hz, without any rest blocks. After continuous acquisition of fMRI images, the task-related brain regions were identified with independent component analysis (ICA). When the time courses of the task-related components were plotted against tapping frequency, nonlinear "dose-response" curves were obtained for most subjects. Nonlinearities appeared in both the static and dynamic sense, with hysteresis being prominent in several subjects. The ICA decomposition also demonstrated the spatial dynamics, with different components active at different times. These results suggest that the brain response to tapping frequency does not scale linearly, and that it is history-dependent even after accounting for the hemodynamic response function. This implies that finger tapping, as measured with fMRI, is a nonstationary process. When analyzed with a conventional general linear model, a strong correlation to tapping frequency was identified, but the spatiotemporal dynamics were not apparent.
2014-01-01
Background The processing of verbal fluency tasks relies on the coordinated activity of a number of brain areas, particularly in the frontal and temporal lobes of the left hemisphere. Recent studies using functional magnetic resonance imaging (fMRI) to study the neural networks subserving verbal fluency functions have yielded divergent results especially with respect to a parcellation of the inferior frontal gyrus for phonemic and semantic verbal fluency. We conducted a coordinate-based activation likelihood estimation (ALE) meta-analysis on brain activation during the processing of phonemic and semantic verbal fluency tasks involving 28 individual studies with 490 healthy volunteers. Results For phonemic as well as for semantic verbal fluency, the most prominent clusters of brain activation were found in the left inferior/middle frontal gyrus (LIFG/MIFG) and the anterior cingulate gyrus. BA 44 was only involved in the processing of phonemic verbal fluency tasks, BA 45 and 47 in the processing of phonemic and semantic fluency tasks. Conclusions Our comparison of brain activation during the execution of either phonemic or semantic verbal fluency tasks revealed evidence for spatially different activation in BA 44, but not other regions of the LIFG/MIFG (BA 9, 45, 47), during phonemic and semantic verbal fluency processing. PMID:24456150
Zhang, Linjun; Yue, Qiuhai; Zhang, Yang; Shu, Hua; Li, Ping
2015-01-01
Numerous studies have revealed the essential role of the left lateral temporal cortex in auditory sentence comprehension along with evidence of the functional specialization of the anterior and posterior temporal sub-areas. However, it is unclear whether task demands (e.g., active vs. passive listening) modulate the functional specificity of these sub-areas. In the present functional magnetic resonance imaging (fMRI) study, we addressed this issue by applying both independent component analysis (ICA) and general linear model (GLM) methods. Consistent with previous studies, intelligible sentences elicited greater activity in the left lateral temporal cortex relative to unintelligible sentences. Moreover, responses to intelligibility in the sub-regions were differentially modulated by task demands. While the overall activation patterns of the anterior and posterior superior temporal sulcus and middle temporal gyrus (STS/MTG) were equivalent during both passive and active tasks, a middle portion of the STS/MTG was found to be selectively activated only during the active task under a refined analysis of sub-regional contributions. Our results not only confirm the critical role of the left lateral temporal cortex in auditory sentence comprehension but further demonstrate that task demands modulate functional specialization of the anterior-middle-posterior temporal sub-areas. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Hemispheric dominance during the mental rotation task in patients with schizophrenia.
Chen, Jiu; Yang, Laiqi; Zhao, Jin; Li, Lanlan; Liu, Guangxiong; Ma, Wentao; Zhang, Yan; Wu, Xingqu; Deng, Zihe; Tuo, Ran
2012-04-01
Mental rotation is a spatial representation conversion capability that uses an imagined object and either object- or self-rotation; this capability is impaired in schizophrenia. This study aimed to provide a more detailed assessment of impaired cognitive functioning in schizophrenia by comparing the electrophysiological profiles of patients with schizophrenia and controls while they completed a mental rotation task using both normally-oriented images and mirror images. This electroencephalographic study compared error rates, reaction times and the topographic map of event-related potentials in 32 participants with schizophrenia and 29 healthy controls during mental rotation tasks involving both normal images and mirror images. Among controls the mean error rate and the mean reaction time for normal images and mirror images were not significantly different, but in the patient group the mean (sd) error rate was higher for mirror images than for normal images (42% [6%] vs. 32% [9%], t=2.64, p=0.031) and the mean reaction time was longer for mirror images than for normal images (587 [11] ms vs. 571 [18] ms, t=2.83, p=0.028). The amplitude of the P500 component at Pz (parietal area), Cz (central area), P3 (left parietal area) and P4 (right parietal area) was significantly lower in the patient group than in the control group for both normal images and mirror images. In both groups the P500 for both the normal and mirror images was significantly higher in the right parietal area (P4) compared with the left parietal area (P3). The mental rotation abilities of patients with schizophrenia for both normally-oriented images and mirror images are impaired. Patients with schizophrenia show a diminished left cerebral contribution to the mental rotation task, a more rapid response time, and a differential response to normal images versus mirror images not seen in healthy controls. Specific topographic characteristics of the EEG during mental rotation tasks are potential biomarkers for schizophrenia.
Christakou, Anastasia; Halari, Rozmin; Smith, Anna B; Ifkovits, Eve; Brammer, Mick; Rubia, Katya
2009-10-15
Developmental functional imaging studies of cognitive control show progressive age-related increase in task-relevant fronto-striatal activation in male development from childhood to adulthood. Little is known, however, about how gender affects this functional development. In this study, we used event related functional magnetic resonance imaging to examine effects of sex, age, and their interaction on brain activation during attentional switching and interference inhibition, in 63 male and female adolescents and adults, aged 13 to 38. Linear age correlations were observed across all subjects in task-specific frontal, striatal and temporo-parietal activation. Gender analysis revealed increased activation in females relative to males in fronto-striatal areas during the Switch task, and laterality effects in the Simon task, with females showing increased left inferior prefrontal and temporal activation, and males showing increased right inferior prefrontal and parietal activation. Increased prefrontal activation clusters in females and increased parietal activation clusters in males furthermore overlapped with clusters that were age-correlated across the whole group, potentially reflecting more mature prefrontal brain activation patterns for females, and more mature parietal activation patterns for males. Gender by age interactions further supported this dissociation, revealing exclusive female-specific age correlations in inferior and medial prefrontal brain regions during both tasks, and exclusive male-specific age correlations in superior parietal (Switch task) and temporal regions (Simon task). These findings show increased recruitment of age-correlated prefrontal activation in females, and of age-correlated parietal activation in males, during tasks of cognitive control. Gender differences in frontal and parietal recruitment may thus be related to gender differences in the neurofunctional maturation of these brain regions.
Left inferior-parietal lobe activity in perspective tasks: identity statements
Arora, Aditi; Weiss, Benjamin; Schurz, Matthias; Aichhorn, Markus; Wieshofer, Rebecca C.; Perner, Josef
2015-01-01
We investigate the theory that the left inferior parietal lobe (IPL) is closely associated with tracking potential differences of perspective. Developmental studies find that perspective tasks are mastered at around 4 years of age. Our first study, a meta-analysis of brain imaging studies, shows that perspective tasks specifically activate a region in the left IPL and precuneus. These tasks include processing of false belief, visual perspective, and episodic memory. We test the location specificity theory in our second study with an unusual and novel kind of perspective task: identity statements. According to Frege's classical logical analysis, identity statements require appreciation of modes of presentation (perspectives). We show that identity statements, e.g., “the tour guide is also the driver,” activate the left IPL in contrast to control statements such as “the tour guide has an apprentice.” This activation overlaps with the activations found in the meta-analysis. This finding is confirmed in a third study with different types of statements and different comparisons. All studies support the theory that the left IPL has as one of its overarching functions the tracking of perspective differences. We discuss how this function relates to the bottom-up attention function proposed for the bilateral IPL. PMID:26175677
A method for multitask fMRI data fusion applied to schizophrenia.
Calhoun, Vince D; Adali, Tulay; Kiehl, Kent A; Astur, Robert; Pekar, James J; Pearlson, Godfrey D
2006-07-01
It is becoming common to collect data from multiple functional magnetic resonance imaging (fMRI) paradigms on a single individual. The data from these experiments are typically analyzed separately and sometimes directly subtracted from one another on a voxel-by-voxel basis. These comparative approaches, although useful, do not directly attempt to examine potential commonalities between tasks and between voxels. To remedy this we propose a method to extract maximally spatially independent maps for each task that are "coupled" together by a shared loading parameter. We first compute an activation map for each task and each individual as "features," which are then used to perform joint independent component analysis (jICA) on the group data. We demonstrate our approach on a data set derived from healthy controls and schizophrenia patients, each of which carried out an auditory oddball task and a Sternberg working memory task. Our analysis approach revealed two interesting findings in the data that were missed with traditional analyses. First, consistent with our hypotheses, schizophrenia patients demonstrate "decreased" connectivity in a joint network including portions of regions implicated in two prevalent models of schizophrenia. A second finding is that for the voxels identified by the jICA analysis, the correlation between the two tasks was significantly higher in patients than in controls. This finding suggests that schizophrenia patients activate "more similarly" for both tasks than do controls. A possible synthesis of both findings is that patients are activating less, but also activating with a less-unique set of regions for these very different tasks. Both of the findings described support the claim that examination of joint activation across multiple tasks can enable new questions to be posed about fMRI data. Our approach can also be applied to data using more than two tasks. 
It thus provides a way to integrate and probe brain networks using a variety of tasks and may increase our understanding of coordinated brain networks and the impact of pathology upon them. 2005 Wiley-Liss, Inc.
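The mechanics of joint ICA are simple to sketch: feature maps from the two tasks are concatenated along the voxel dimension so both tasks share one subject-loading matrix, and spatially independent joint maps are then estimated by ICA. The sketch below uses a minimal FastICA (tanh nonlinearity, symmetric decorrelation) on simulated data; all dimensions and the toy data are invented, and real analyses use validated ICA toolboxes.

```python
import numpy as np

def joint_ica(task1, task2, n_comp=2, iters=200, seed=0):
    """Joint ICA sketch: concatenate per-task feature maps along the
    voxel axis (subjects x [voxels1 + voxels2]), whiten, and run a
    minimal symmetric FastICA to get joint spatial maps S and a single
    shared subject-loading matrix A (X ~ A @ S)."""
    X = np.hstack([task1, task2])
    X = X - X.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    K = (U[:, :n_comp] / s[:n_comp]).T * np.sqrt(X.shape[1])
    Z = K @ X                              # whitened: Z @ Z.T / T ~ I
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n_comp, n_comp))
    for _ in range(iters):
        G = np.tanh(W @ Z)
        W_new = G @ Z.T / Z.shape[1] - np.diag((1 - G**2).mean(axis=1)) @ W
        Uw, _, Vtw = np.linalg.svd(W_new)  # symmetric decorrelation
        W = Uw @ Vtw
    S = W @ Z                              # joint independent maps
    A = np.linalg.pinv(W @ K)              # shared subject loadings
    return S, A

rng = np.random.default_rng(1)
S_true = rng.laplace(size=(2, 100))        # two joint spatial maps
A_true = rng.normal(size=(30, 2))          # shared subject loadings
X = A_true @ S_true + 0.05 * rng.normal(size=(30, 100))
S_est, A_est = joint_ica(X[:, :50], X[:, 50:])
```

Because the loading matrix is shared, a group difference in a loading column implicates the corresponding joint map in both tasks at once, which is what enables the cross-task "decreased connectivity" comparison described above.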
How Acute Total Sleep Loss Affects the Attending Brain: A Meta-Analysis of Neuroimaging Studies
Ma, Ning; Dinges, David F.; Basner, Mathias; Rao, Hengyi
2015-01-01
Study Objectives: Attention is a cognitive domain that can be severely affected by sleep deprivation. Previous neuroimaging studies have used different attention paradigms and reported both increased and reduced brain activation after sleep deprivation. However, due to large variability in sleep deprivation protocols, task paradigms, experimental designs, characteristics of subject populations, and imaging techniques, there is no consensus regarding the effects of sleep loss on the attending brain. The aim of this meta-analysis was to identify brain activations that are commonly altered by acute total sleep deprivation across different attention tasks. Design: Coordinate-based meta-analysis of neuroimaging studies of performance on attention tasks during experimental sleep deprivation. Methods: The current version of the activation likelihood estimation (ALE) approach was used for meta-analysis. The authors searched published articles and identified 11 sleep deprivation neuroimaging studies using different attention tasks with a total of 185 participants, equaling 81 foci for ALE analysis. Results: The meta-analysis revealed significantly reduced brain activation in multiple regions following sleep deprivation compared to rested wakefulness, including bilateral intraparietal sulcus, bilateral insula, right prefrontal cortex, medial frontal cortex, and right parahippocampal gyrus. Increased activation was found only in bilateral thalamus after sleep deprivation compared to rested wakefulness. Conclusion: Acute total sleep deprivation decreases brain activation in the fronto-parietal attention network (prefrontal cortex and intraparietal sulcus) and in the salience network (insula and medial frontal cortex). Increased thalamic activation after sleep deprivation may reflect a complex interaction between the de-arousing effects of sleep loss and the arousing effects of task performance on thalamic activity. Citation: Ma N, Dinges DF, Basner M, Rao H. 
How acute total sleep loss affects the attending brain: a meta-analysis of neuroimaging studies. SLEEP 2015;38(2):233–240. PMID:25409102
Modeling and design of a cone-beam CT head scanner using task-based imaging performance optimization
NASA Astrophysics Data System (ADS)
Xu, J.; Sisniega, A.; Zbijewski, W.; Dang, H.; Stayman, J. W.; Wang, X.; Foos, D. H.; Aygun, N.; Koliatsos, V. E.; Siewerdsen, J. H.
2016-04-01
Detection of acute intracranial hemorrhage (ICH) is important for diagnosis and treatment of traumatic brain injury, stroke, postoperative bleeding, and other head and neck injuries. This paper details the design and development of a cone-beam CT (CBCT) system developed specifically for the detection of low-contrast ICH in a form suitable for application at the point of care. Recognizing such a low-contrast imaging task to be a major challenge in CBCT, the system design began with a rigorous analysis of task-based detectability including critical aspects of system geometry, hardware configuration, and artifact correction. The imaging performance model described the three-dimensional (3D) noise-equivalent quanta using a cascaded systems model that included the effects of scatter, scatter correction, hardware considerations of complementary metal-oxide semiconductor (CMOS) and flat-panel detectors (FPDs), and digitization bit depth. The performance was analyzed with respect to a low-contrast (40-80 HU), medium-frequency task representing acute ICH detection. The task-based detectability index was computed using a non-prewhitening observer model. The optimization was performed with respect to four major design considerations: (1) system geometry (including source-to-detector distance (SDD) and source-to-axis distance (SAD)); (2) factors related to the x-ray source (including focal spot size, kVp, dose, and tube power); (3) scatter correction and selection of an antiscatter grid; and (4) x-ray detector configuration (including pixel size, additive electronics noise, field of view (FOV), and frame rate, including both CMOS and a-Si:H FPDs). Optimal design choices were also considered with respect to practical constraints and available hardware components. The model was verified in comparison to measurements on a CBCT imaging bench as a function of the numerous design parameters mentioned above. 
An extended geometry (SAD = 750 mm, SDD = 1100 mm) was found to be advantageous in terms of patient dose (20 mGy) and scatter reduction, while a more isocentric configuration (SAD = 550 mm, SDD = 1000 mm) was found to give a more compact and mechanically favorable configuration with minor tradeoff in detectability. An x-ray source with a 0.6 mm focal spot size provided the best compromise between spatial resolution requirements and x-ray tube power. Use of a modest anti-scatter grid (8:1 GR) at a 20 mGy dose provided slight improvement (~5-10%) in the detectability index, but the benefit was lost at reduced dose. The potential advantages of CMOS detectors over FPDs were quantified, showing that both detectors provided sufficient spatial resolution for ICH detection, while the former provided a potentially superior low-dose performance, and the latter provided the requisite FOV for volumetric imaging in a centered-detector geometry. Task-based imaging performance modeling provides an important starting point for CBCT system design, especially for the challenging task of ICH detection, which is somewhat beyond the capabilities of existing CBCT platforms. The model identifies important tradeoffs in system geometry and hardware configuration, and it supports the development of a dedicated CBCT system for point-of-care application. A prototype suitable for clinical studies is in development based on this analysis.
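The objective being optimized above, the non-prewhitening (NPW) detectability index, has a closed form in the frequency domain: d'² = [∫(MTF·W)² df]² / ∫ NPS·(MTF·W)² df, where W is the task function. The sketch below evaluates it on a 1D radial-frequency grid with toy MTF, NPS, and task curves (the paper's cascaded-systems model supplies measured 3D versions); it reproduces the familiar d' ∝ √dose scaling for uncorrelated noise.

```python
import numpy as np

def d_prime_npw(mtf, nps, w_task, df):
    """Non-prewhitening observer detectability index:
    d'^2 = [sum((MTF*W)^2) df]^2 / sum(NPS * (MTF*W)^2) df."""
    num = (np.sum((mtf * w_task) ** 2) * df) ** 2
    den = np.sum(nps * (mtf * w_task) ** 2) * df
    return np.sqrt(num / den)

f = np.linspace(0.01, 2.0, 200)            # spatial frequency, cycles/mm
df = f[1] - f[0]
mtf = np.sinc(f / 2.0)                      # toy detector MTF
w_task = np.exp(-(f / 0.5) ** 2)            # low/mid-frequency (ICH-like) task
nps_full = np.full_like(f, 1e-3)            # toy white NPS at full dose
nps_quarter = np.full_like(f, 4e-3)         # quarter dose: 4x noise power

d_full = d_prime_npw(mtf, nps_full, w_task, df)
d_quarter = d_prime_npw(mtf, nps_quarter, w_task, df)
print(round(d_full / d_quarter, 2))         # → 2.0 (d' scales as sqrt(dose))
```

Design choices such as grid selection or detector type enter through their effect on the MTF and NPS curves, which is how the comparisons in the abstract (e.g. the ~5-10% grid benefit) are quantified.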
NASA Astrophysics Data System (ADS)
Gang, Grace J.; Siewerdsen, Jeffrey H.; Webster Stayman, J.
2017-06-01
Tube current modulation (TCM) is routinely adopted on diagnostic CT scanners for dose reduction. Conventional TCM strategies are generally designed for filtered-backprojection (FBP) reconstruction to satisfy simple image quality requirements based on noise. This work investigates TCM designs for model-based iterative reconstruction (MBIR) to achieve optimal imaging performance as determined by a task-based image quality metric. Additionally, regularization is an important aspect of MBIR that is jointly optimized with TCM; it includes both the regularization strength, which controls overall smoothness, and directional weights, which permit control of the isotropy/anisotropy of the local noise and resolution properties. Initial investigations focus on a known imaging task at a single location in the image volume. The framework adopts Fourier and analytical approximations for fast estimation of the local noise power spectrum (NPS) and modulation transfer function (MTF), each carrying dependencies on TCM and regularization. For the single location optimization, the local detectability index (d') of the specific task was directly adopted as the objective function. A covariance matrix adaptation evolution strategy (CMA-ES) algorithm was employed to identify the optimal combination of imaging parameters. Evaluations of both conventional and task-driven approaches were performed in an abdomen phantom for a mid-frequency discrimination task in the kidney. Among the conventional strategies, the TCM pattern optimal for FBP using a minimum variance criterion yielded a worse task-based performance compared to an unmodulated strategy when applied to MBIR. Moreover, task-driven TCM designs for MBIR were found to have the opposite behavior from conventional designs for FBP, with greater fluence assigned to the less attenuating views of the abdomen and less fluence to the more attenuating lateral views.
Such TCM patterns exaggerate the intrinsic anisotropy of the MTF and NPS as a result of the data weighting in MBIR. Directional penalty design was found to reinforce the same trend. The task-driven approaches outperform conventional approaches, with the maximum improvement in d‧ of 13% given by the joint optimization of TCM and regularization. This work demonstrates that the TCM optimal for MBIR is distinct from conventional strategies proposed for FBP reconstruction and strategies optimal for FBP are suboptimal and may even reduce performance when applied to MBIR. The task-driven imaging framework offers a promising approach for optimizing acquisition and reconstruction for MBIR that can improve imaging performance and/or dose utilization beyond conventional imaging strategies.
Dynamic changes in functional cerebral connectivity of spatial cognition during the menstrual cycle.
Weis, Susanne; Hausmann, Markus; Stoffers, Barbara; Sturm, Walter
2011-10-01
Functional cerebral asymmetries (FCAs) in women have been shown to vary with changing levels of sex hormones during the menstrual cycle. Previous studies have suggested that interhemispheric interaction forms a key component in generating FCAs, and it has been shown behaviorally and by functional imaging that interhemispheric interaction changes during the menstrual cycle, at least for a left hemisphere dominant task. We used functional MRI and an analysis of functional connectivity to examine whether changes in the right hemisphere advantage for a figure comparison task, as found in behavioral studies, are based on mechanisms comparable to those identified for the verbal task. Women were examined three times during the menstrual cycle, during the menstrual, follicular and luteal phases. The behavioral data confirmed the right hemisphere advantage for the figure comparison task as well as changes of the right hemisphere advantage during the menstrual cycle. Imaging data showed cycle phase-related changes in lateralized brain activation within the task-dominant hemisphere and changes in connectivity between nonhomotopic areas of both hemispheres, suggesting that changes in functional brain organization in women during the menstrual cycle are not restricted to hormone-related changes of interhemispheric inhibition between homotopic areas, as has been proposed earlier, but might additionally apply to changes of neuronal processes within the hemispheres which seem to be modulated by heterotopic functional connectivity between hemispheres. Copyright © 2010 Wiley-Liss, Inc.
Tracking prominent points in image sequences
NASA Astrophysics Data System (ADS)
Hahn, Michael
1994-03-01
Measuring image motion and inferring scene geometry and camera motion are main aspects of image sequence analysis. The determination of image motion and the structure-from-motion problem are tasks that can be addressed independently or in cooperative processes. In this paper we focus on tracking prominent points. High stability, reliability, and accuracy are criteria for the extraction of prominent points. This implies that tracking should work quite well with those features; unfortunately, the reality looks quite different. In the experimental investigations we processed a long sequence of 128 images. This mono sequence is taken in an outdoor environment at the experimental field of Mercedes Benz in Rastatt. Different tracking schemes are explored and the results with respect to stability and quality are reported.
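Prominent points of the kind described are usually extracted with a corner-type interest operator. The paper does not specify its operator, so the following numpy Harris-style detector is an assumed stand-in, shown on a synthetic square whose corners are the prominent points.

```python
import numpy as np

def harris_response(img, k=0.05):
    # Image gradients via central differences.
    iy, ix = np.gradient(img.astype(float))

    def box(a, r=2):
        # Crude box smoothing by shifting (with wraparound) and averaging;
        # adequate here because features sit away from the image border.
        out = np.zeros_like(a)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out += np.roll(np.roll(a, dy, 0), dx, 1)
        return out / (2 * r + 1) ** 2

    # Smoothed structure-tensor entries.
    ixx, iyy, ixy = box(ix * ix), box(iy * iy), box(ix * iy)
    det = ixx * iyy - ixy * ixy
    tr = ixx + iyy
    return det - k * tr ** 2   # large positive values at corner-like points

# Synthetic image: bright square on dark background -> 4 prominent corners.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
r = harris_response(img)
y, x = np.unravel_index(np.argmax(r), r.shape)
print(y, x)
```

Tracking would then match such points from frame to frame, e.g. by correlating local patches, which is where the stability and reliability criteria mentioned above become critical.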
Brain activations during bimodal dual tasks depend on the nature and combination of component tasks
Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo
2015-01-01
We used functional magnetic resonance imaging to investigate brain activations during nine different dual tasks in which the participants were required to simultaneously attend to concurrent streams of spoken syllables and written letters. They performed a phonological, spatial or “simple” (speaker-gender or font-shade) discrimination task within each modality. We expected to find activations associated specifically with dual tasking especially in the frontal and parietal cortices. However, no brain areas showed systematic dual task enhancements common for all dual tasks. Further analysis revealed that dual tasks including component tasks that were according to Baddeley's model “modality atypical,” that is, the auditory spatial task or the visual phonological task, were not associated with enhanced frontal activity. In contrast, for other dual tasks, activity specifically associated with dual tasking was found in the left or bilateral frontal cortices. Enhanced activation in parietal areas, however, appeared not to be specifically associated with dual tasking per se, but rather with intermodal attention switching. We also expected effects of dual tasking in left frontal supramodal phonological processing areas when both component tasks required phonological processing and in right parietal supramodal spatial processing areas when both tasks required spatial processing. However, no such effects were found during these dual tasks compared with their component tasks performed separately. Taken together, the current results indicate that activations during dual tasks depend in a complex manner on specific demands of component tasks. PMID:25767443
Two-dimensional systolic-array architecture for pixel-level vision tasks
NASA Astrophysics Data System (ADS)
Vijverberg, Julien A.; de With, Peter H. N.
2010-05-01
This paper presents ongoing work on the design of a two-dimensional (2D) systolic array for image processing. This component is designed to operate on a multi-processor system-on-chip. In contrast with other 2D systolic-array architectures and many other hardware accelerators, we investigate the applicability of executing multiple tasks in a time-interleaved fashion on the Systolic Array (SA). This leads to a lower external memory bandwidth and better load balancing of the tasks on the different processing tiles. To enable the interleaving of tasks, we add a shadow-state register for fast task switching. To reduce the number of accesses to the external memory, we propose to share the communication assist between consecutive tasks. A preliminary, non-functional version of the SA has been synthesized for an XV4S25 FPGA device and yields a maximum clock frequency of 150 MHz requiring 1,447 slices and 5 memory blocks. Mapping tasks from video content-analysis applications from literature on the SA yields reductions in the execution time of 1-2 orders of magnitude compared to the software implementation. We conclude that the choice for an SA architecture is useful, but a scaled version of the SA featuring less logic with fewer processing and pipeline stages yielding a lower clock frequency, would be sufficient for a video analysis system-on-chip.
Extracting intrinsic functional networks with feature-based group independent component analysis.
Calhoun, Vince D; Allen, Elena
2013-04-01
There is increasing use of functional imaging data to understand the macro-connectome of the human brain. Of particular interest is the structure and function of intrinsic networks (regions exhibiting temporally coherent activity both at rest and while a task is being performed), which account for a significant portion of the variance in functional MRI data. While networks are typically estimated based on the temporal similarity between regions (based on temporal correlation, clustering methods, or independent component analysis [ICA]), some recent work has suggested that these intrinsic networks can be extracted from the inter-subject covariation among highly distilled features, such as amplitude maps reflecting regions modulated by a task or even coordinates extracted from large meta-analytic studies. In this paper, our goal was to explicitly compare the networks obtained from a first-level ICA (ICA on the spatio-temporal functional magnetic resonance imaging (fMRI) data) to those from a second-level ICA (i.e., ICA on computed features rather than on the first-level fMRI data). Convergent results from simulations, task-fMRI data, and rest-fMRI data show that the second-level analysis is slightly noisier than the first-level analysis but yields strikingly similar patterns of intrinsic networks (spatial correlations as high as 0.85 for task data and 0.65 for rest data, well above the empirical null) and also preserves the relationship of these networks with other variables such as age (for example, default mode network regions tended to show decreased low frequency power for first-level analyses and decreased loading parameters for second-level analyses). In addition, the best-estimated second-level results are those which are the most strongly reflected in the input feature. In summary, the use of feature-based ICA appears to be a valid tool for extracting intrinsic networks.
We believe it will become a useful and important approach in the study of the macro-connectome, particularly in the context of data fusion.
Ibinson, James W; Vogt, Keith M; Taylor, Kevin B; Dua, Shiv B; Becker, Christopher J; Loggia, Marco; Wasan, Ajay D
2015-12-01
The insula is uniquely located between the temporal and parietal cortices, making it anatomically well-positioned to act as an integrating center between the sensory and affective domains for the processing of painful stimulation. This can be studied through resting-state functional connectivity (fcMRI) imaging; however, the lack of a clear methodology for the analysis of fcMRI complicates the interpretation of these data during acute pain. Detected connectivity changes may reflect actual alterations in low-frequency synchronous neuronal activity related to pain, may be due to changes in global cerebral blood flow or the superimposed task-induced neuronal activity. The primary goal of this study was to investigate the effects of global signal regression (GSR) and task paradigm regression (TPR) on the changes in functional connectivity of the left (contralateral) insula in healthy subjects at rest and during acute painful electric nerve stimulation of the right hand. The use of GSR reduced the size and statistical significance of connectivity clusters and created negative correlation coefficients for some connectivity clusters. TPR with cyclic stimulation gave task versus rest connectivity differences similar to those with a constant task, suggesting that analysis which includes TPR is more accurately reflective of low-frequency neuronal activity. Both GSR and TPR have been inconsistently applied to fcMRI analysis. Based on these results, investigators need to consider the impact GSR and TPR have on connectivity during task performance when attempting to synthesize the literature.
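Global signal regression as described amounts to fitting the mean time course to each voxel's series and keeping the residuals before computing correlations. The toy sketch below (not the authors' pipeline; data and dimensions are invented) also reproduces the reported effect that GSR can push connectivity values negative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 5 "voxel" time series over 200 time points sharing a global signal.
t = np.arange(200)
global_sig = 2.0 * np.sin(t / 10.0)
voxels = rng.normal(size=(5, 200)) + global_sig   # noise + shared component

def regress_out_global(x):
    g = x.mean(axis=0)                 # global mean time course
    g = g - g.mean()                   # demean the regressor
    beta = x @ g / (g @ g)             # per-voxel least-squares coefficient
    return x - np.outer(beta, g)       # residuals after GSR

clean = regress_out_global(voxels)

# Functional connectivity between two voxels, before and after GSR.
r_before = np.corrcoef(voxels[0], voxels[1])[0, 1]
r_after = np.corrcoef(clean[0], clean[1])[0, 1]
print(r_before, r_after)
```

Because the regressor itself contains each voxel's noise, the residuals acquire a small negative coupling, which is one reason GSR is applied inconsistently across the fcMRI literature.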
Jamadar, Sharna D; Egan, Gary F; Calhoun, Vince D; Johnson, Beth; Fielding, Joanne
2016-07-01
Intrinsic brain activity provides the functional framework for the brain's full repertoire of behavioral responses; that is, a common mechanism underlies intrinsic and extrinsic neural activity, with extrinsic activity building upon the underlying baseline intrinsic activity. The generation of a motor movement in response to sensory stimulation is one of the most fundamental functions of the central nervous system. Since saccadic eye movements are among our most stereotyped motor responses, we hypothesized that individual variability in the ability to inhibit a prepotent saccade and make a voluntary antisaccade would be related to individual variability in intrinsic connectivity. Twenty-three individuals completed the antisaccade task and resting-state functional magnetic resonance imaging (fMRI). A multivariate analysis of covariance identified relationships between fMRI oscillations (0.01-0.2 Hz) of resting-state networks determined using high-dimensional independent component analysis and antisaccade performance (latency, error rate). Significant multivariate relationships between antisaccade latency and directional error rate were obtained in independent components across the entire brain. Some of the relationships were obtained in components that overlapped substantially with the task; however, many were obtained in components that showed little overlap with the task. The current results demonstrate that even in the absence of a task, spectral power in regions showing little overlap with task activity predicts an individual's performance on a saccade task.
Abnormal Image Detection in Endoscopy Videos Using a Filter Bank and Local Binary Patterns
Nawarathna, Ruwan; Oh, JungHwan; Muthukudage, Jayantha; Tavanapong, Wallapak; Wong, Johnny; de Groen, Piet C.; Tang, Shou Jiang
2014-01-01
Finding mucosal abnormalities (e.g., erythema, blood, ulcer, erosion, and polyp) is one of the most essential tasks during endoscopy video review. Since these abnormalities typically appear in a small number of frames (around 5% of the total frame number), automated detection of frames with an abnormality can save physicians' time significantly. In this paper, we propose a new multi-texture analysis method that effectively discerns images showing mucosal abnormalities from those without any abnormality, exploiting the fact that most abnormalities in endoscopy images have textures that are clearly distinguishable from normal textures. The method uses a “texton histogram” of an image block as features. The histogram captures the distribution of different “textons” representing various textures in an endoscopy image. The textons are representative response vectors obtained by applying a combination of the Leung and Malik (LM) filter bank (i.e., a set of image filters) and a set of Local Binary Patterns to the image. Our experimental results indicate that the proposed method achieves 92% recall and 91.8% specificity on wireless capsule endoscopy (WCE) images and 91% recall and 90.8% specificity on colonoscopy images. PMID:25132723
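The texton-histogram idea above can be sketched compactly: every pixel gets a vector of filter responses, each vector is assigned to its nearest "texton" (a cluster center learned offline), and the block is described by the normalized histogram of texton labels. The three-filter bank and hand-picked textons below are illustrative stand-ins for the 48-filter LM bank, LBP codes, and k-means centers used in the paper.

```python
import numpy as np

def filter_responses(img):
    # Tiny illustrative "filter bank": intensity plus horizontal and vertical
    # first differences (the paper uses the LM filter bank plus LBP).
    dy = np.zeros_like(img); dy[1:, :] = img[1:, :] - img[:-1, :]
    dx = np.zeros_like(img); dx[:, 1:] = img[:, 1:] - img[:, :-1]
    return np.stack([img, dx, dy], axis=-1)      # H x W x 3 response vectors

def texton_histogram(img, textons):
    resp = filter_responses(img).reshape(-1, 3)  # one response vector per pixel
    # Assign each pixel to its nearest texton (Euclidean distance).
    d = ((resp[:, None, :] - textons[None, :, :]) ** 2).sum(-1)
    labels = d.argmin(axis=1)
    hist = np.bincount(labels, minlength=len(textons)).astype(float)
    return hist / hist.sum()                     # normalized texton histogram

# Illustrative textons; in practice these are k-means centers over training
# responses, and the histogram feeds a classifier (normal vs abnormal).
textons = np.array([[0.0, 0.0, 0.0],   # flat dark
                    [1.0, 0.0, 0.0],   # flat bright
                    [0.5, 1.0, 0.0],   # vertical-edge response
                    [0.5, 0.0, 1.0]])  # horizontal-edge response

img = np.zeros((16, 16)); img[:, 8:] = 1.0       # image with a vertical edge
h = texton_histogram(img, textons)
print(h)
```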
Stable image acquisition for mobile image processing applications
NASA Astrophysics Data System (ADS)
Henning, Kai-Fabian; Fritze, Alexander; Gillich, Eugen; Mönks, Uwe; Lohweg, Volker
2015-02-01
Today, mobile devices (smartphones, tablets, etc.) are widespread and of high importance for their users. Their performance as well as versatility increases over time. This leads to the opportunity to use such devices for more specific tasks like image processing in an industrial context. For the analysis of images, requirements like image quality (blur, illumination, etc.) as well as a defined relative position of the object to be inspected are crucial. Since mobile devices are handheld and used in constantly changing environments, the challenge is to fulfill these requirements. We present an approach to overcome the obstacles and stabilize the image capturing process such that image analysis becomes significantly improved on mobile devices. Therefore, image processing methods are combined with sensor fusion concepts. The approach consists of three main parts. First, pose estimation methods are used to guide a user moving the device to a defined position. Second, the sensors data and the pose information are combined for relative motion estimation. Finally, the image capturing process is automated. It is triggered depending on the alignment of the device and the object as well as the image quality that can be achieved under consideration of motion and environmental effects.
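The automated trigger in the final step can be sketched as a simple gate: capture only when estimated motion is low and an image-quality score is high. The variance-of-Laplacian sharpness measure and both thresholds below are assumptions for illustration; the paper's actual quality measures and sensor-fusion estimates are not specified here.

```python
import numpy as np

def sharpness(img):
    # Variance of a discrete Laplacian: a common focus/blur metric
    # (illustrative choice, not necessarily the paper's measure).
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap.var()

def should_capture(img, motion_mag, sharp_thresh=0.01, motion_thresh=0.5):
    # Trigger only when the device is steady AND the image is in focus;
    # motion_mag would come from fused IMU / pose-tracking estimates.
    return motion_mag < motion_thresh and sharpness(img) > sharp_thresh

rng = np.random.default_rng(1)
sharp_img = rng.random((64, 64))        # high-frequency content: "sharp"
blurry_img = np.full((64, 64), 0.5)     # flat image: no detail

print(should_capture(sharp_img, motion_mag=0.1))   # steady and sharp
print(should_capture(blurry_img, motion_mag=0.1))  # steady but blurry
print(should_capture(sharp_img, motion_mag=2.0))   # sharp but moving
```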
The Accuracy and Reliability of Crowdsource Annotations of Digital Retinal Images
Mitry, Danny; Zutis, Kris; Dhillon, Baljean; Peto, Tunde; Hayat, Shabina; Khaw, Kay-Tee; Morgan, James E.; Moncur, Wendy; Trucco, Emanuele; Foster, Paul J.
2016-01-01
Purpose Crowdsourcing is based on outsourcing computationally intensive tasks to numerous individuals in the online community who have no formal training. Our aim was to develop a novel online tool designed to facilitate large-scale annotation of digital retinal images, and to assess the accuracy of crowdsource grading using this tool, comparing it to expert classification. Methods We used 100 retinal fundus photograph images with predetermined disease criteria selected by two experts from a large cohort study. The Amazon Mechanical Turk Web platform was used to drive traffic to our site so anonymous workers could perform a classification and annotation task of the fundus photographs in our dataset after a short training exercise. Three groups were assessed: masters only, nonmasters only, and nonmasters with compulsory training. We calculated the sensitivity, specificity, and area under the curve (AUC) of receiver operating characteristic (ROC) plots for all classifications compared to expert grading, and used the Dice coefficient and consensus threshold to assess annotation accuracy. Results In total, we received 5389 annotations for 84 images (excluding 16 training images) in 2 weeks. A specificity and sensitivity of 71% (95% confidence interval [CI], 69%–74%) and 87% (95% CI, 86%–88%) were achieved for all classifications. The AUC in this study for all classifications combined was 0.93 (95% CI, 0.91–0.96). For image annotation, a maximal Dice coefficient (∼0.6) was achieved with a consensus threshold of 0.25. Conclusions This study supports the hypothesis that annotation of abnormalities in retinal images by ophthalmologically naive individuals is comparable to expert annotation. The highest AUC and agreement with expert annotation was achieved in the nonmasters with compulsory training group.
Translational Relevance The use of crowdsourcing as a technique for retinal image analysis may be comparable to expert graders and has the potential to deliver timely, accurate, and cost-effective image analysis. PMID:27668130
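The Dice coefficient and consensus threshold used above are easy to make concrete: a pixel enters the crowd's consensus mask if at least a given fraction of workers marked it, and the Dice coefficient measures overlap between that mask and the expert's. The toy masks below are invented for illustration.

```python
import numpy as np

def dice(a, b):
    # Dice coefficient between binary masks: 2|A ∩ B| / (|A| + |B|).
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def consensus_mask(worker_masks, threshold):
    # A pixel is kept if at least `threshold` fraction of workers marked it.
    votes = np.mean(worker_masks, axis=0)
    return votes >= threshold

# Toy example: expert marks a 4x4 lesion; 4 workers agree imperfectly
# (two exact, one shifted left, one shifted right).
expert = np.zeros((10, 10), bool); expert[3:7, 3:7] = True
workers = np.stack([np.roll(expert, s, axis=1) for s in (-1, 0, 0, 1)])

d1 = dice(consensus_mask(workers, 0.25), expert)   # permissive consensus
d2 = dice(consensus_mask(workers, 0.75), expert)   # strict consensus
print(d1, d2)
```

Sweeping the threshold and plotting Dice against it is exactly how the study arrives at its reported optimum near 0.25.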
Multiscale Morphological Filtering for Analysis of Noisy and Complex Images
NASA Technical Reports Server (NTRS)
Kher, A.; Mitra, S.
1993-01-01
Images acquired with passive sensing techniques suffer from illumination variations and poor local contrasts that create major difficulties in interpretation and identification tasks. On the other hand, images acquired with active sensing techniques based on monochromatic illumination are degraded with speckle noise. Mathematical morphology offers elegant techniques to handle a wide range of image degradation problems. Unlike linear filters, morphological filters do not blur the edges and hence maintain higher image resolution. Their rich mathematical framework facilitates the design and analysis of these filters as well as their hardware implementation. Morphological filters are easier to implement and are more cost effective and efficient than several conventional linear filters. Morphological filters to remove speckle noise while maintaining high resolution and preserving thin image regions that are particularly vulnerable to speckle noise were developed and applied to SAR imagery. These filters used a combination of linear (one-dimensional) structuring elements in different (typically four) orientations. Although this approach preserves more details than the simple morphological filters using two-dimensional structuring elements, the limited orientations of one-dimensional elements only approximate the fine details of the region boundaries. A more robust filter designed recently overcomes the limitation of the fixed orientations. This filter uses a combination of concave and convex structuring elements. Morphological operators are also useful in extracting features from visible and infrared imagery. A multiresolution image pyramid obtained with successive filtering and a subsampling process aids in the removal of the illumination variations and enhances local contrasts. A morphology-based interpolation scheme was also introduced to reduce intensity discontinuities created in any morphological filtering task.
The generality of morphological filtering techniques in extracting information from a wide variety of images obtained with active and passive sensing techniques is discussed. Such techniques are particularly useful in obtaining more information from fusion of complex images by different sensors such as SAR, visible, and infrared.
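The directional-opening construction described above (1D structuring elements in several orientations, combined so that thin structures aligned with any orientation survive) can be sketched in numpy. Taking the pixelwise maximum over the per-orientation openings is an assumed combination rule; the element length and the four orientations follow the text.

```python
import numpy as np

def _line_filter(img, r, dy, dx, reduce_fn):
    # Grayscale erosion/dilation by a 1D line of length 2r+1 along (dy, dx),
    # implemented with edge padding and shifted copies.
    pad = r * max(abs(dy), abs(dx), 1)
    p = np.pad(img, pad, mode="edge")
    stack = [np.roll(np.roll(p, k * dy, 0), k * dx, 1) for k in range(-r, r + 1)]
    return reduce_fn(stack, axis=0)[pad:-pad, pad:-pad]

def directional_opening(img, r=2):
    # Opening (erosion then dilation) with lines in 4 orientations:
    # horizontal, vertical, and the two diagonals. The pixelwise max keeps
    # thin bright structures aligned with any orientation while removing
    # isolated speckle spikes.
    dirs = [(0, 1), (1, 0), (1, 1), (1, -1)]
    opens = [_line_filter(_line_filter(img, r, dy, dx, np.min),
                          r, dy, dx, np.max)
             for dy, dx in dirs]
    return np.max(opens, axis=0)

# Speckle (isolated bright pixel) is removed; a thin bright line survives.
img = np.zeros((15, 15))
img[7, :] = 1.0       # thin horizontal line (a "thin image region")
img[2, 2] = 1.0       # isolated speckle spike
out = directional_opening(img)
print(out[7, 7], out[2, 2])
```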
Physics-based deformable organisms for medical image analysis
NASA Astrophysics Data System (ADS)
Hamarneh, Ghassan; McIntosh, Chris
2005-04-01
Previously, "Deformable organisms" were introduced as a novel paradigm for medical image analysis that uses artificial life modelling concepts. Deformable organisms were designed to complement the classical bottom-up deformable models methodologies (geometrical and physical layers), with top-down intelligent deformation control mechanisms (behavioral and cognitive layers). However, a true physical layer was absent and in order to complete medical image segmentation tasks, deformable organisms relied on pure geometry-based shape deformations guided by sensory data, prior structural knowledge, and expert-generated schedules of behaviors. In this paper we introduce the use of physics-based shape deformations within the deformable organisms framework yielding additional robustness by allowing intuitive real-time user guidance and interaction when necessary. We present the results of applying our physics-based deformable organisms, with an underlying dynamic spring-mass mesh model, to segmenting and labelling the corpus callosum in 2D midsagittal magnetic resonance images.
Automated metastatic brain lesion detection: a computer aided diagnostic and clinical research tool
NASA Astrophysics Data System (ADS)
Devine, Jeremy; Sahgal, Arjun; Karam, Irene; Martel, Anne L.
2016-03-01
The accurate localization of brain metastases in magnetic resonance (MR) images is crucial for patients undergoing stereotactic radiosurgery (SRS) to ensure that all neoplastic foci are targeted. Computer automated tumor localization and analysis can improve both of these tasks by eliminating inter- and intra-observer variations during the MR image reading process. Lesion localization is accomplished using adaptive thresholding to extract enhancing objects. Each enhancing object is represented as a vector of features which includes information on object size, symmetry, position, shape, and context. These vectors are then used to train a random forest classifier. We trained and tested the image analysis pipeline on 3D axial contrast-enhanced MR images with the intention of localizing the brain metastases. In our cross validation study and at the most effective algorithm operating point, we were able to identify 90% of the lesions at a precision rate of 60%.
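The adaptive-thresholding step that extracts candidate enhancing objects can be sketched with a local-mean comparison computed via an integral image. The specific rule (exceed the local mean by a fraction of the global standard deviation) and the window radius are assumptions; the paper does not state its exact scheme.

```python
import numpy as np

def local_mean(img, r):
    # Box mean over a (2r+1)x(2r+1) window using an integral image
    # (summed-area table), with edge replication at the borders.
    pad = np.pad(img, ((r + 1, r), (r + 1, r)), mode="edge")
    ii = pad.cumsum(0).cumsum(1)
    n = 2 * r + 1
    s = (ii[n:, n:] - ii[:-n, n:] - ii[n:, :-n] + ii[:-n, :-n])
    return s / n ** 2

def enhancing_mask(img, r=3, k=0.5):
    # A pixel is "enhancing" if it exceeds its local mean by k * global std
    # (assumed rule for illustration). Connected components of this mask
    # would then be described by size/shape/context features and classified.
    return img > local_mean(img, r) + k * img.std()

img = np.zeros((20, 20))
img[5:8, 5:8] = 1.0          # small bright "lesion" on dark background
mask = enhancing_mask(img)
print(mask[6, 6], mask[0, 0])
```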
Analysis of Interactive Graphics Display Equipment for an Automated Photo Interpretation System.
1982-06-01
System provides the hardware and software for a range of graphics processor tasks. The IMAGE System employs the RSX-11M real-time operating system in...One hard copy unit serves up to four work stations. The executive program of the IMAGE system is the DEC RSX-11M real-time operating system. In...picture controller. The PDP 11/34 executes programs concurrently under the RSX-11M real-time operating system. Each graphics program consists of a
Analysis of Brown camera distortion model
NASA Astrophysics Data System (ADS)
Nowakowski, Artur; Skarbek, Władysław
2013-10-01
Contemporary image acquisition devices introduce optical distortion into the image. This results in pixel displacement and therefore needs to be compensated for in many computer vision applications. The distortion is usually modeled by the Brown distortion model, whose parameters can be included in the camera calibration task. In this paper we describe the original model and its dependencies, and analyze orthogonality with regard to radius for its decentering distortion component. We also report experiments with the camera calibration algorithm included in the OpenCV library; in particular, the stability of distortion parameter estimation is evaluated.
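For reference, the Brown (Brown-Conrady) model applied to normalized image coordinates combines radial terms (k1, k2, k3) with decentering/tangential terms (p1, p2), in the same convention OpenCV's calibration uses. The sketch below evaluates the forward model; the sample points and coefficient values are arbitrary.

```python
import numpy as np

def brown_distort(xy, k1=0.0, k2=0.0, k3=0.0, p1=0.0, p2=0.0):
    # Brown-Conrady distortion on normalized coordinates (..., 2):
    #   radial:      (1 + k1 r^2 + k2 r^4 + k3 r^6) * (x, y)
    #   decentering: x += 2 p1 x y + p2 (r^2 + 2 x^2)
    #                y += p1 (r^2 + 2 y^2) + 2 p2 x y
    x, y = xy[..., 0], xy[..., 1]
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return np.stack([xd, yd], axis=-1)

pts = np.array([[0.0, 0.0], [0.1, 0.2], [-0.3, 0.4]])
ident = brown_distort(pts)              # all coefficients zero: identity
barrel = brown_distort(pts, k1=-0.2)    # k1 < 0: barrel distortion
print(ident)
print(barrel)
```

With k1 < 0, points are pulled toward the principal point (barrel distortion); with k1 > 0 they are pushed outward (pincushion).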
Pigeons (Columba livia) as Trainable Observers of Pathology and Radiology Breast Cancer Images.
Levenson, Richard M; Krupinski, Elizabeth A; Navarro, Victor M; Wasserman, Edward A
2015-01-01
Pathologists and radiologists spend years acquiring and refining their medically essential visual skills, so it is of considerable interest to understand how this process actually unfolds and what image features and properties are critical for accurate diagnostic performance. Key insights into human behavioral tasks can often be obtained by using appropriate animal models. We report here that pigeons (Columba livia), which share many visual system properties with humans, can serve as promising surrogate observers of medical images, a capability not previously documented. The birds proved to have a remarkable ability to distinguish benign from malignant human breast histopathology after training with differential food reinforcement; even more importantly, the pigeons were able to generalize what they had learned when confronted with novel image sets. The birds' histological accuracy, like that of humans, was modestly affected by the presence or absence of color as well as by degrees of image compression, but these impacts could be ameliorated with further training. Turning to radiology, the birds proved to be similarly capable of detecting cancer-relevant microcalcifications on mammogram images. However, when given a different (and for humans quite difficult) task, namely classification of suspicious mammographic densities (masses), the pigeons proved to be capable only of image memorization and were unable to successfully generalize when shown novel examples. The birds' successes and difficulties suggest that pigeons are well-suited to help us better understand human medical image perception, and may also prove useful in performance assessment and development of medical imaging hardware, image processing, and image analysis tools.
Yang, Jie; Andric, Michael; Mathew, Mili M
2015-10-01
Gestures play an important role in face-to-face communication and have been increasingly studied via functional magnetic resonance imaging. Although a large amount of data has been provided to describe the neural substrates of gesture comprehension, these findings have never been quantitatively summarized and the conclusion is still unclear. This activation likelihood estimation meta-analysis investigated the brain networks underpinning gesture comprehension while considering the impact of gesture type (co-speech gestures vs. speech-independent gestures) and task demand (implicit vs. explicit) on the brain activation of gesture comprehension. The meta-analysis of 31 papers showed that as hand actions, gestures involve a perceptual-motor network important for action recognition. As meaningful symbols, gestures involve a semantic network for conceptual processing. Finally, during face-to-face interactions, gestures involve a network for social emotive processes. Our finding also indicated that gesture type and task demand influence the involvement of the brain networks during gesture comprehension. The results highlight the complexity of gesture comprehension, and suggest that future research is necessary to clarify the dynamic interactions among these networks. Copyright © 2015 Elsevier Ltd. All rights reserved.
Inter-Association Task Force Report on Image.
ERIC Educational Resources Information Center
Special Libraries Association, Washington, DC.
In 1988, the Board of Directors of the Special Libraries Association provided funding to a task force to gather data which would determine how certain segments of society perceive librarians, how librarians view themselves and their colleagues, and to provide recommendations for addressing the issue of image. The task force project consisted of…
Road marking features extraction using the VIAPIX® system
NASA Astrophysics Data System (ADS)
Kaddah, W.; Ouerhani, Y.; Alfalou, A.; Desthieux, M.; Brosseau, C.; Gutierrez, C.
2016-07-01
Precise extraction of road marking features is a critical task for autonomous urban driving, augmented driver assistance, and robotics technologies. In this study, we consider an autonomous system for lane detection on marked urban roads and analysis of lane features. The task is to georeference road markings from images obtained using the VIAPIX® system. Based on inverse perspective mapping and color segmentation to detect all white objects on the road, the present algorithm examines these images automatically and rapidly, extracting information on road marks, their surface condition, and their georeferencing. The algorithm detects all road markings and identifies some of them by making use of a phase-only correlation filter (POF). We illustrate the algorithm and its robustness by applying it to a variety of relevant scenarios.
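Phase-only correlation, as used above for marking identification, is a standard FFT-domain technique: the cross-power spectrum of two images is normalized to unit magnitude so that only phase information contributes, and the correlation peak localizes the match. A minimal sketch (our own, not the VIAPIX implementation):

```python
import numpy as np

def phase_only_correlation(f, g):
    """Phase-only correlation surface between two same-size images.

    The cross-power spectrum is normalized to unit magnitude so only
    phase contributes; the location of the peak gives the translation
    of g relative to f.
    """
    F = np.fft.fft2(f)
    G = np.fft.fft2(g)
    cross = F * np.conj(G)
    cross /= np.abs(cross) + 1e-12   # keep phase only
    return np.real(np.fft.ifft2(cross))

# Example: recover a known circular shift of a small template
f = np.zeros((32, 32))
f[8:12, 8:12] = 1.0
g = np.roll(f, (3, 5), axis=(0, 1))          # shift template by (3, 5)
poc = phase_only_correlation(g, f)
peak = np.unravel_index(np.argmax(poc), poc.shape)
```

The sharpness of the phase-only peak is what makes POF filters attractive for matching templates of known marking shapes against candidate regions.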
Krans, Julie; Langner, Oliver; Reinecke, Andrea; Pearson, David G
2013-12-01
The present study addressed the role of context information and dual-task interference during the encoding of negative pictures on intrusion development and voluntary recall. Healthy participants were shown negative pictures with or without context information. Pictures were either viewed alone or concurrently with a visuospatial or verbal task. Participants reported their intrusive images of the pictures in a diary. At follow-up, perceptual and contextual memory was tested. Participants in the context group reported more intrusive images and perceptual voluntary memory than participants in the no context group. No effects of the concurrent tasks were found on intrusive image frequency, but perceptual and contextual memory was affected according to the cognitive load of the task. The analogue method cannot be generalized to real-life trauma and the secondary tasks may differ in cognitive load. The findings challenge a dual memory model of PTSD but support an account in which retrieval strategy, rather than encoding processes, accounts for the experience of involuntary versus voluntary recall. Copyright © 2013 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seibert, J; Imbergamo, P
The expansion and integration of diagnostic imaging technologies such as On Board Imaging (OBI) and Cone Beam Computed Tomography (CBCT) into radiation oncology has required radiation oncology physicists to be responsible for and become familiar with assessing image quality. Unfortunately many radiation oncology physicists have had little or no training or experience in measuring and assessing image quality. Many physicists have turned to automated QA analysis software without having a fundamental understanding of image quality measures. This session will review the basic image quality measures of imaging technologies used in the radiation oncology clinic, such as low contrast resolution, high contrast resolution, uniformity, noise, and contrast scale, and how to measure and assess them in a meaningful way. Additionally a discussion of the implementation of an image quality assurance program in compliance with Task Group recommendations will be presented along with the advantages and disadvantages of automated analysis methods. Learning Objectives: Review and understanding of the fundamentals of image quality. Review and understanding of the basic image quality measures of imaging modalities used in the radiation oncology clinic. Understand how to implement an image quality assurance program and to assess basic image quality measures in a meaningful way.
Huang, Yulin; Zha, Yuebo; Wang, Yue; Yang, Jianyu
2015-06-18
The forward-looking radar imaging task is a practical and challenging problem for aircraft landing in adverse weather. Deconvolution methods can realize forward-looking imaging, but they often amplify noise in the radar image. In this paper, a forward-looking radar imaging method based on deconvolution is presented for aircraft landing in adverse weather. We first present the theoretical background of the forward-looking radar imaging task and its application to aircraft landing. Then, we convert the forward-looking radar imaging task into a corresponding deconvolution problem, which is solved in the framework of algebraic theory using a truncated singular value decomposition method. The key issue of selecting the truncation parameter is addressed using a generalized cross-validation approach. Simulation and experimental results demonstrate that the proposed method is effective in enhancing angular resolution while suppressing noise amplification in forward-looking radar imaging.
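Truncated SVD regularization of the kind described can be sketched as follows. The blur matrix and truncation level here are illustrative only; in the paper the truncation parameter would be chosen by generalized cross-validation rather than fixed by hand:

```python
import numpy as np

def tsvd_deconvolve(A, b, k):
    """Regularized solution of A x = b: invert only the k largest
    singular values, zeroing the small ones that amplify noise."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]
    return Vt.T @ (s_inv * (U.T @ b))

# Illustrative 1-D blur: a tridiagonal smoothing matrix
n = 16
A = 0.5 * np.eye(n) + 0.25 * (np.eye(n, k=1) + np.eye(n, k=-1))
x_true = np.zeros(n)
x_true[4] = 1.0
x_true[11] = 0.5
b = A @ x_true                       # noise-free "measured" pattern
x_full = tsvd_deconvolve(A, b, n)    # full inversion recovers x_true
x_reg = tsvd_deconvolve(A, b, 10)    # truncated, noise-robust estimate
```

With noisy measurements the full inversion divides by tiny singular values and blows up, while the truncated solution trades a small bias for stability; GCV automates the choice of k.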
Generalized whole-body Patlak parametric imaging for enhanced quantification in clinical PET.
Karakatsanis, Nicolas A; Zhou, Yun; Lodge, Martin A; Casey, Michael E; Wahl, Richard L; Zaidi, Habib; Rahmim, Arman
2015-11-21
We recently developed a dynamic multi-bed PET data acquisition framework to translate the quantitative benefits of Patlak voxel-wise analysis to the domain of routine clinical whole-body (WB) imaging. The standard Patlak (sPatlak) linear graphical analysis assumes irreversible PET tracer uptake, ignoring the effect of FDG dephosphorylation, which has been suggested by a number of PET studies. In this work: (i) a non-linear generalized Patlak (gPatlak) model is utilized, including a net efflux rate constant kloss, and (ii) a hybrid (s/g)Patlak (hPatlak) imaging technique is introduced to enhance contrast to noise ratios (CNRs) of uptake rate Ki images. A representative set of kinetic parameter values and the XCAT phantom were employed to generate realistic 4D simulated PET data, and the proposed methods were additionally evaluated on 11 WB dynamic PET patient studies. Quantitative analysis on the simulated Ki images over 2 groups of regions-of-interest (ROIs), with low (ROI A) or high (ROI B) true kloss relative to Ki, suggested superior accuracy for gPatlak. Bias of sPatlak was found to be 16-18% and 20-40% poorer than gPatlak for ROIs A and B, respectively. By contrast, gPatlak exhibited, on average, 10% higher noise than sPatlak. Meanwhile, the bias and noise levels for hPatlak always ranged between those of the other two methods. In general, hPatlak outperformed both methods in terms of target-to-background ratio (TBR) and CNR for all ROIs. Validation on patient datasets demonstrated clinical feasibility for all Patlak methods, while TBR and CNR evaluations confirmed our simulation findings and suggested the presence of non-negligible kloss reversibility in clinical data. As such, we recommend gPatlak for highly quantitative imaging tasks, while, for tasks emphasizing lesion detectability (e.g. TBR, CNR) over quantification, or for high levels of noise, hPatlak is preferred instead. Finally, gPatlak and hPatlak CNRs were systematically higher than those of routine SUV images.
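The sPatlak analysis above reduces, per voxel, to an ordinary linear regression: plotting tissue activity over plasma activity against the normalized integral of the plasma input yields a line whose slope is the net uptake rate Ki. A minimal sketch under the irreversible-uptake assumption (variable names are ours):

```python
import numpy as np

def patlak_ki(t, cp, ct, t_star=0.0):
    """Standard Patlak (sPatlak) graphical analysis.

    x = (integral of Cp from 0 to t) / Cp(t), y = C_tissue(t) / Cp(t);
    for t >= t_star (the linear regime) y ~ Ki * x + V, so an
    ordinary least-squares line gives the uptake rate Ki as its slope.
    """
    cum = np.concatenate(([0.0], np.cumsum(0.5 * np.diff(t) * (cp[1:] + cp[:-1]))))
    sel = t >= t_star
    x = cum[sel] / cp[sel]
    y = ct[sel] / cp[sel]
    ki, v = np.polyfit(x, y, 1)
    return ki, v

# Synthetic check: build tissue data with known Ki and V
t = np.linspace(0.0, 60.0, 121)
cp = np.exp(-t / 20.0) + 0.1                                   # plasma input
cum = np.concatenate(([0.0], np.cumsum(0.5 * np.diff(t) * (cp[1:] + cp[:-1]))))
ct = 0.05 * cum + 0.3 * cp                                     # Ki = 0.05, V = 0.3
ki, v = patlak_ki(t, cp, ct, t_star=10.0)
```

The gPatlak model adds a non-linear efflux term governed by kloss, so it requires an iterative fit rather than this closed-form regression.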
Analysis of straw row in the image to control the trajectory of the agricultural combine harvester
NASA Astrophysics Data System (ADS)
Shkanaev, Aleksandr Yurievich; Polevoy, Dmitry Valerevich; Panchenko, Aleksei Vladimirovich; Krokhina, Darya Alekseevna; Nailevish, Sadekov Rinat
2018-04-01
The paper proposes a solution for automatic operation of a combine harvester along straw rows using images from a camera installed in the harvester cab. A U-Net is used to recognize straw rows in the image. The edges of a row are approximated in the segmented image by curved lines and then converted into the harvester coordinate system for the automatic operating system. The new network architecture and row-approximation approach improved the recognition quality and the frame processing rate to 96% and 7.5 fps, respectively. Keywords: Grain harvester,
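The edge approximation step above, fitting curved lines to the boundary of the segmented row mask, can be sketched with a simple polynomial fit; this is an illustrative simplification, not the authors' exact procedure:

```python
import numpy as np

def fit_row_edge(mask, degree=2):
    """Fit a polynomial x = p(y) to the left edge of a binary
    row mask (image rows = y, columns = x)."""
    ys, xs = [], []
    for y in range(mask.shape[0]):
        cols = np.flatnonzero(mask[y])
        if cols.size:
            ys.append(y)
            xs.append(cols[0])     # leftmost row pixel on this line
    return np.polyfit(ys, xs, degree)

# Synthetic mask whose left edge follows x = 10 + 0.5 * y
mask = np.zeros((20, 40), dtype=bool)
for y in range(20):
    mask[y, int(10 + 0.5 * y):] = True
coeffs = fit_row_edge(mask, degree=1)   # ~ [0.5, 10]
```

The fitted curve can then be projected through the camera calibration into the harvester coordinate system to produce a steering reference.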
Quality Control by Artificial Vision
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lam, Edmond Y.; Gleason, Shaun Scott; Niel, Kurt S.
2010-01-01
Computational technology has fundamentally changed many aspects of our lives. One clear example is the development of artificial-vision systems, which have effectively automated many manual tasks ranging from quality inspection to quantitative assessment. In many cases, these machine-vision systems are even preferred over manual ones due to their repeatability and high precision. Such advantages come from significant research efforts in advancing sensor technology, illumination, computational hardware, and image-processing algorithms. Similar to the Special Section on Quality Control by Artificial Vision published two years ago in Volume 17, Issue 3 of the Journal of Electronic Imaging, the present one invited papers relevant to fundamental technology improvements to foster quality control by artificial vision, and fine-tuned the technology for specific applications. We aim to balance both theoretical and applied work pertinent to this special section theme. Consequently, we have seven high-quality papers resulting from the stringent peer-reviewing process in place at the Journal of Electronic Imaging. Some of the papers contain extended treatment of the authors' work presented at the SPIE Image Processing: Machine Vision Applications conference and the International Conference on Quality Control by Artificial Vision. On the broad application side, Liu et al. propose an unsupervised texture image segmentation scheme. Using a multilayer data condensation spectral clustering algorithm together with wavelet transform, they demonstrate the effectiveness of their approach on both texture and synthetic aperture radar images. A problem related to image segmentation is image extraction. For this, O'Leary et al. investigate the theory of polynomial moments and show how these moments can be compared to classical filters.
They also show how to use the discrete polynomial-basis functions for the extraction of 3-D embossed digits, demonstrating superiority over Fourier-basis functions for this task. Image registration is another important task for machine vision. Bingham and Arrowood investigate the implementation and results of applying Fourier phase matching for projection registration, with a particular focus on nondestructive testing using computed tomography. Readers interested in enriching their arsenal of image-processing algorithms for machine-vision tasks should find these papers rewarding. Meanwhile, we have four papers dealing with more specific machine-vision tasks. The first, by Yahiaoui et al., is quantitative in nature, using machine vision for real-time passenger counting. Occlusion is a common problem in counting objects and people, and they circumvent this issue with a dense stereovision system, achieving 97 to 99% accuracy in their tests. On the other hand, the second paper, by Oswald-Tranta et al., focuses on thermographic crack detection. An infrared camera is used to detect inhomogeneities, which may indicate surface cracks. They describe the various steps in developing fully automated testing equipment aimed at a high throughput. Another paper describing an inspection system is by Molleda et al., which handles flatness inspection of rolled products. They employ optical-laser triangulation and 3-D surface reconstruction for this task, showing how these can be achieved in real time. Last but not least, Presles et al. propose a way to monitor the particle-size distribution of batch crystallization processes. This is achieved through a new in situ imaging probe and image-analysis methods. While it is unlikely any reader is working on these four specific problems at the same time, we are confident that readers will find these papers inspiring and potentially helpful to their own machine-vision system developments.
Cloud-based processing of multi-spectral imaging data
NASA Astrophysics Data System (ADS)
Bernat, Amir S.; Bolton, Frank J.; Weiser, Reuven; Levitz, David
2017-03-01
Multispectral imaging holds great promise as a non-contact tool for the assessment of tissue composition. Performing multispectral imaging on a handheld mobile device would bring this technology, and with it expert knowledge, to low-resource settings, providing state-of-the-art classification of tissue health. This modality, however, produces considerably larger data sets than white-light imaging and requires preliminary image analysis before use. The data must then be analyzed and logged without excessive demands on system resources, computation time, or battery life at the end-point device. Cloud environments were designed to address these problems by allowing end-point devices (smartphones) to offload computationally hard tasks. To this end, we present a method in which a handheld device built around a smartphone captures a multispectral dataset in a movie file format (mp4), which we compare to other image formats in size, noise, and correctness. We present the cloud configuration used to segment the movie into frames, which can later be used for further analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samei, Ehsan, E-mail: samei@duke.edu; Richard, Samuel
2015-01-15
Purpose: Different computed tomography (CT) reconstruction techniques offer different image quality attributes of resolution and noise, challenging the ability to compare their dose reduction potential against each other. The purpose of this study was to evaluate and compare the task-based imaging performance of CT systems to enable the assessment of the dose performance of a model-based iterative reconstruction (MBIR) against that of an adaptive statistical iterative reconstruction (ASIR) and a filtered back projection (FBP) technique. Methods: The ACR CT phantom (model 464) was imaged across a wide range of mA settings on a 64-slice CT scanner (GE Discovery CT750 HD, Waukesha, WI). Based on previous work, the resolution was evaluated in terms of a task-based modulation transfer function (MTF) using a circular-edge technique and images from the contrast inserts located in the ACR phantom. Noise performance was assessed in terms of the noise-power spectrum (NPS) measured from the uniform section of the phantom. The task-based MTF and NPS were combined with a task function to yield a task-based estimate of imaging performance, the detectability index (d′). The detectability index was computed as a function of dose for two imaging tasks corresponding to the detection of a relatively small and a relatively large feature (1.5 and 25 mm, respectively). The performance of MBIR in terms of the d′ was compared with that of ASIR and FBP to assess its dose reduction potential. Results: Results indicated that MBIR exhibits variable spatial resolution with respect to object contrast and noise while significantly reducing image noise. The NPS measurements for MBIR indicated a noise texture with a low-pass quality compared to the typical midpass noise found in FBP-based CT images. At comparable dose, the d′ for MBIR was higher than those of FBP and ASIR by at least 61% and 19% for the small feature and the large feature tasks, respectively.
Compared to FBP and ASIR, MBIR indicated a 46%–84% dose reduction potential, depending on task, without compromising the modeled detection performance. Conclusions: The presented methodology based on ACR phantom measurements extends current possibilities for the assessment of CT image quality under the complex resolution and noise characteristics exhibited by statistical and iterative reconstruction algorithms. The findings further suggest that MBIR can potentially make better use of the projection data to reduce CT dose by approximately a factor of 2. Alternatively, if the dose is held unchanged, it can improve image quality to different degrees for different tasks.
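The detectability index combines the task function W(f), the MTF, and the NPS into a single figure of merit. For a non-prewhitening model observer in one dimension it takes the form d′ = ∫(W·MTF)²df / sqrt(∫(W·MTF)²·NPS df); a simplified 1-D sketch (the study's actual computation is 2-D and more involved):

```python
import numpy as np

def _trapezoid(y, x):
    # trapezoidal integration (avoids NumPy-version-specific names)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def npw_detectability(freqs, task_w, mtf, nps):
    """Non-prewhitening detectability index d' from a 1-D task
    function W(f), system MTF(f), and noise-power spectrum NPS(f)."""
    signal = _trapezoid((task_w * mtf) ** 2, freqs)
    noise = _trapezoid((task_w * mtf) ** 2 * nps, freqs)
    return signal / np.sqrt(noise)

# Flat toy spectra: halving the NPS should raise d' by sqrt(2)
f = np.linspace(0.0, 1.0, 101)
w = np.ones_like(f)
mtf = np.ones_like(f)
d1 = npw_detectability(f, w, mtf, 0.25 * np.ones_like(f))
d2 = npw_detectability(f, w, mtf, 0.125 * np.ones_like(f))
```

Because dose enters through the NPS, sweeping the NPS measured at different mA settings traces out the d′-versus-dose curves the study compares across reconstruction algorithms.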
van Bömmel, Alena; Song, Song; Majer, Piotr; Mohr, Peter N C; Heekeren, Hauke R; Härdle, Wolfgang K
2014-07-01
Decision making usually involves uncertainty and risk. Understanding which parts of the human brain are activated during decisions under risk and which neural processes underlie (risky) investment decisions are important goals in neuroeconomics. Here, we analyze functional magnetic resonance imaging (fMRI) data on 17 subjects who were exposed to an investment decision task from Mohr, Biele, Krugel, Li, and Heekeren (in NeuroImage 49, 2556-2563, 2010b). We obtain a time series of three-dimensional images of the blood-oxygen-level dependent (BOLD) fMRI signals. We apply a panel version of the dynamic semiparametric factor model (DSFM) presented in Park, Mammen, Härdle, and Borak (in Journal of the American Statistical Association 104(485), 284-298, 2009) and identify task-related activations in space and dynamics in time. With the panel DSFM (PDSFM) we can capture the dynamic behavior of the specific brain regions common to all subjects and represent the high-dimensional time-series data in easily interpretable low-dimensional dynamic factors without a large loss of variability. Further, we classify the risk attitudes of all subjects based on the estimated low-dimensional time series. Our classification analysis successfully confirms the estimated risk attitudes derived directly from subjects' decision behavior.
Ganalyzer: A Tool for Automatic Galaxy Image Analysis
NASA Astrophysics Data System (ADS)
Shamir, Lior
2011-08-01
We describe Ganalyzer, a model-based tool that can automatically analyze and classify galaxy images. Ganalyzer works by separating the galaxy pixels from the background pixels, finding the center and radius of the galaxy, generating the radial intensity plot, and then computing the slopes of the peaks detected in the radial intensity plot to measure the spirality of the galaxy and determine its morphological class. Unlike algorithms that are based on machine learning, Ganalyzer is based on measuring the spirality of the galaxy, a task that is difficult to perform manually, and in many cases can provide a more accurate analysis compared to manual observation. Ganalyzer is simple to use, and can be easily embedded into other image analysis applications. Another advantage is its speed, which allows it to analyze ~10,000,000 galaxy images in five days using a standard modern desktop computer. These capabilities can make Ganalyzer a useful tool in analyzing large data sets of galaxy images collected by autonomous sky surveys such as SDSS, LSST, or DES. The software is available for free download at http://vfacstaff.ltu.edu/lshamir/downloads/ganalyzer, and the data used in the experiment are available at http://vfacstaff.ltu.edu/lshamir/downloads/ganalyzer/GalaxyImages.zip.
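The radial intensity plot at the core of this approach is the galaxy image sampled on circles around its center: peaks in intensity versus angle mark the arms, and the slope of peak position across radii measures spirality. A minimal sampling sketch (nearest-pixel, our own simplification of the Ganalyzer procedure):

```python
import numpy as np

def radial_intensity(img, cx, cy, r, n_angles=360):
    """Intensity sampled on a circle of radius r around (cx, cy),
    as a function of angle (nearest-pixel sampling)."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    xs = np.clip(np.round(cx + r * np.cos(theta)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.round(cy + r * np.sin(theta)).astype(int), 0, img.shape[0] - 1)
    return theta, img[ys, xs]

# Toy galaxy: a single bright "arm" pixel at angle 0, radius 10
img = np.zeros((64, 64))
img[32, 42] = 1.0
theta, prof = radial_intensity(img, cx=32, cy=32, r=10)
```

Repeating the sampling over a range of radii and tracking each peak's angular drift yields the slope that Ganalyzer uses to separate spiral from elliptical morphologies.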
Anwar, A R; Muthalib, M; Perrey, S; Galka, A; Granert, O; Wolff, S; Heute, U; Deuschl, G; Raethjen, J; Muthuraman, Muthuraman
2016-09-01
Recently, interest has been growing to understand the underlying dynamic directional relationship between simultaneously activated regions of the brain during motor task performance. Such directionality analysis (or effective connectivity analysis), based on non-invasive electrophysiological (electroencephalography-EEG) and hemodynamic (functional near infrared spectroscopy-fNIRS; and functional magnetic resonance imaging-fMRI) neuroimaging modalities can provide an estimate of the motor task-related information flow from one brain region to another. Since EEG, fNIRS and fMRI modalities achieve different spatial and temporal resolutions of motor-task related activation in the brain, the aim of this study was to determine the effective connectivity of cortico-cortical sensorimotor networks during finger movement tasks measured by each neuroimaging modality. Nine healthy subjects performed right hand finger movement tasks of different complexity (simple finger tapping-FT, simple finger sequence-SFS, and complex finger sequence-CFS). We focused our observations on three cortical regions of interest (ROIs), namely the contralateral sensorimotor cortex (SMC), the contralateral premotor cortex (PMC) and the contralateral dorsolateral prefrontal cortex (DLPFC). We estimated the effective connectivity between these ROIs using conditional Granger causality (GC) analysis determined from the time series signals measured by fMRI (blood oxygenation level-dependent-BOLD), fNIRS (oxygenated-O2Hb and deoxygenated-HHb hemoglobin), and EEG (scalp and source level analysis) neuroimaging modalities. The effective connectivity analysis showed significant bi-directional information flow between the SMC, PMC, and DLPFC as determined by the EEG (scalp and source), fMRI (BOLD) and fNIRS (O2Hb and HHb) modalities for all three motor tasks. However the source level EEG GC values were significantly greater than the other modalities. 
In addition, only the source level EEG showed a significantly greater forward than backward information flow between the ROIs. This simultaneous fMRI, fNIRS and EEG study has shown through independent GC analysis of the respective time series that a bi-directional effective connectivity occurs within a cortico-cortical sensorimotor network (SMC, PMC and DLPFC) during finger movement tasks.
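The conditional Granger causality used in the study generalizes the bivariate case to multiple ROIs; the bivariate version, restricted versus full autoregressive model and a log residual-variance ratio, conveys the idea in a few lines. A minimal sketch with our own variable names, far simpler than the study's conditional, multi-ROI analysis:

```python
import numpy as np

def _resid_var(X, y):
    """Mean squared residual of an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.mean((y - X @ beta) ** 2))

def granger_causality(x, y, lags=1):
    """Bivariate Granger causality x -> y: log ratio of the residual
    variance of y's own-history model to the x-augmented model."""
    n = len(y)
    target = y[lags:]
    own = np.column_stack([y[lags - k:n - k] for k in range(1, lags + 1)])
    full = np.column_stack(
        [own] + [x[lags - k:n - k] for k in range(1, lags + 1)])
    return np.log(_resid_var(own, target) / _resid_var(full, target))

# x drives y with one sample of delay, not the other way round
rng = np.random.default_rng(0)
x = rng.standard_normal(3000)
y = np.zeros(3000)
y[1:] = 0.8 * x[:-1] + 0.1 * rng.standard_normal(2999)
gc_xy = granger_causality(x, y)   # large: x's history predicts y
gc_yx = granger_causality(y, x)   # near zero: y adds nothing for x
```

Because the full model nests the restricted one, the statistic is non-negative; significance in practice is assessed against a null distribution, as in the study's comparisons across EEG, fNIRS, and fMRI time series.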
Image processing and recognition for biological images
Uchida, Seiichi
2013-01-01
This paper reviews image processing and pattern recognition techniques that are useful for analyzing bioimages. Although the paper does not provide full technical details, it should allow the reader to grasp the main tasks and the typical tools used to handle them. Image processing is a large research area concerned with improving the visibility of an input image and extracting valuable information from it. As its main tasks, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow, and image registration. Image pattern recognition is the technique of classifying an input image into one of a set of predefined classes, and is also a large research area. This paper overviews its two main modules: the feature extraction module and the classification module. Throughout the paper, it is emphasized that bioimages are a very difficult target even for state-of-the-art image processing and pattern recognition techniques, due to noise, deformations, etc. This paper is intended as a tutorial guide bridging biology and image processing researchers for further collaboration on such a difficult target. PMID:23560739
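Binarization, one of the tasks listed above, is commonly done with Otsu's method, which picks the gray-level threshold maximizing the between-class variance of the histogram. A compact sketch (foreground taken as pixels above the returned threshold):

```python
def otsu_threshold(pixels):
    """Otsu's method on 8-bit gray levels: return the threshold t
    that maximizes between-class variance of the split at t."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_b = sum_b = 0                      # background weight and sum
    for t in range(256):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b                # foreground weight
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b                # background mean
        m_f = (total_sum - sum_b) / w_f  # foreground mean
        var = w_b * w_f * (m_b - m_f) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Bimodal toy histogram: two clusters at gray levels 10 and 200
th = otsu_threshold([10] * 50 + [200] * 50)
```

On bioimages with uneven illumination, a single global threshold like this often fails, which is one reason the paper stresses that bioimages are hard even for standard techniques.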
Interaction techniques for radiology workstations: impact on users' productivity
NASA Astrophysics Data System (ADS)
Moise, Adrian; Atkins, M. Stella
2004-04-01
As radiologists progress from reading images presented on film to modern computer systems with images presented on high-resolution displays, many new problems arise. Although the digital medium has many advantages, the radiologist's job becomes cluttered with many new tasks related to image manipulation. This paper presents our solution for supporting radiologists' interpretation of digital images by automating image presentation during sequential interpretation steps. Our method supports scenario-based interpretation, which groups data temporally according to the mental paradigm of the physician. We extended current hanging protocols with support for "stages". A stage reflects the presentation of digital information required to complete a single step within a complex task. We demonstrated the benefits of staging in a user study with 20 lay subjects involved in a visual conjunctive search for targets, similar to the radiology task of identifying anatomical abnormalities. We designed a task and a set of stimuli which allowed us to simulate the interpretation workflow of a typical radiology scenario - reading a chest computed radiography exam when a prior study is also available. The simulation was made possible by abstracting the radiologist's task and the basic workstation navigation functionality. We introduced "Stages," an interaction technique attuned to the radiologist's interpretation task. Compared to the traditional user interface, Stages generated a 14% reduction in the average interpretation time.
Supervised detection of exoplanets in high-contrast imaging sequences
NASA Astrophysics Data System (ADS)
Gomez Gonzalez, C. A.; Absil, O.; Van Droogenbroeck, M.
2018-06-01
Context. Post-processing algorithms play a key role in pushing the detection limits of high-contrast imaging (HCI) instruments. State-of-the-art image processing approaches for HCI enable the production of science-ready images relying on unsupervised learning techniques, such as low-rank approximations, for generating a model point spread function (PSF) and subtracting the residual starlight and speckle noise. Aims: In order to maximize the detection rate of HCI instruments and survey campaigns, advanced algorithms with higher sensitivities to faint companions are needed, especially for the speckle-dominated innermost region of the images. Methods: We propose a reformulation of the exoplanet detection task (for ADI sequences) that builds on well-established machine learning techniques to take HCI post-processing from an unsupervised to a supervised learning context. In this new framework, we present algorithmic solutions using two different discriminative models: SODIRF (random forests) and SODINN (neural networks). We test these algorithms on real ADI datasets from VLT/NACO and VLT/SPHERE HCI instruments. We then assess their performances by injecting fake companions and using receiver operating characteristic analysis. This is done in comparison with state-of-the-art ADI algorithms, such as ADI principal component analysis (ADI-PCA). Results: This study shows the improved sensitivity versus specificity trade-off of the proposed supervised detection approach. At the diffraction limit, SODINN improves the true positive rate by a factor ranging from 2 to 10 (depending on the dataset and angular separation) with respect to ADI-PCA when working at the same false-positive level. Conclusions: The proposed supervised detection framework outperforms state-of-the-art techniques in the task of discriminating planet signal from speckles. 
In addition, it offers the possibility of re-processing existing HCI databases to maximize their scientific return and potentially improve the demographics of directly imaged exoplanets.
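The ROC comparison described above, the true-positive rate of SODINN versus ADI-PCA "at the same false-positive level", amounts to sweeping the detection threshold and reading off the best TPR whose FPR stays under the target. A small self-contained sketch with toy scores (not the authors' evaluation code):

```python
def tpr_at_fpr(scores_pos, scores_neg, max_fpr=0.0):
    """Best true-positive rate whose false-positive rate does not
    exceed max_fpr, over all detection thresholds."""
    best = 0.0
    for th in sorted(set(scores_pos) | set(scores_neg), reverse=True):
        fpr = sum(s >= th for s in scores_neg) / len(scores_neg)
        if fpr <= max_fpr:
            tpr = sum(s >= th for s in scores_pos) / len(scores_pos)
            best = max(best, tpr)
    return best

# Injected-companion scores vs. residual-speckle scores (toy values)
tpr = tpr_at_fpr([0.9, 0.8, 0.4], [0.5, 0.3, 0.2, 0.1], max_fpr=0.0)
```

In the HCI setting the positive scores come from injected fake companions and the negative scores from companion-free residual maps, so the same routine quantifies the sensitivity gain of one post-processing algorithm over another at a fixed false-alarm budget.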
Neural Substrates for Processing Task-Irrelevant Sad Images in Adolescents
ERIC Educational Resources Information Center
Wang, Lihong; Huettel, Scott; De Bellis, Michael D.
2008-01-01
Neural systems related to cognitive and emotional processing were examined in adolescents using event-related functional magnetic resonance imaging (fMRI). Ten healthy adolescents performed an emotional oddball task. Subjects detected infrequent circles (targets) within a continual stream of phase-scrambled images (standards). Sad and neutral…
The observation and coverage analysis of the moon-based ultraviolet telescope on CE-3 lander
NASA Astrophysics Data System (ADS)
wang, f.; wen, w.-b.; liu, d.-w.; geng, l.; zhang, x.-x.; zhao, s.
2017-09-01
Analysis of all observed MUVT images shows that, in the celestial coordinate system, the survey images are concentrated in a ring 15 degrees wide centered at latitude 65 degrees and longitude -90 degrees. The observation data analysis shows that the coverage of the northern area reaches 2263.8 square degrees, about 5.487% of the total area, so the observation target of the task has been met. For the first time, the MUVT has carried out long-duration astronomical observations, accumulating abundant observational data for basic research on stellar evolution, compact stars, high-energy astrophysics, and related topics.
NASA Technical Reports Server (NTRS)
Natesh, R.; Smith, J. M.; Bruce, T.; Oidwai, H. A.
1980-01-01
One hundred seventy-four silicon sheet samples were analyzed for twin boundary density, dislocation pit density, and grain boundary length. Procedures were developed for the quantitative analysis of the twin boundary and dislocation pit densities using a QTM-720 Quantitative Image Analyzing system. The QTM-720 system was upgraded with the addition of a PDP 11/03 minicomputer with a dual floppy disc drive, a Digital Equipment DECwriter high-speed printer, and a field-image feature interface module. Three versions of a computer program that controls data acquisition and analysis on the QTM-720 were written. Procedures for chemical polishing and etching were also developed.
The power of Kawaii: viewing cute images promotes a careful behavior and narrows attentional focus.
Nittono, Hiroshi; Fukushima, Michiko; Yano, Akihiro; Moriya, Hiroki
2012-01-01
Kawaii (a Japanese word meaning "cute") things are popular because they produce positive feelings. However, their effect on behavior remains unclear. In this study, three experiments were conducted to examine the effects of viewing cute images on subsequent task performance. In the first experiment, university students performed a fine motor dexterity task before and after viewing images of baby or adult animals. Performance indexed by the number of successful trials increased after viewing cute images (puppies and kittens; M ± SE=43.9 ± 10.3% improvement) more than after viewing images that were less cute (dogs and cats; 11.9 ± 5.5% improvement). In the second experiment, this finding was replicated by using a non-motor visual search task. Performance improved more after viewing cute images (15.7 ± 2.2% improvement) than after viewing less cute images (1.4 ± 2.1% improvement). Viewing images of pleasant foods was ineffective in improving performance (1.2 ± 2.1%). In the third experiment, participants performed a global-local letter task after viewing images of baby animals, adult animals, and neutral objects. In general, global features were processed faster than local features. However, this global precedence effect was reduced after viewing cute images. Results show that participants performed tasks requiring focused attention more carefully after viewing cute images. This is interpreted as the result of a narrowed attentional focus induced by the cuteness-triggered positive emotion that is associated with approach motivation and the tendency toward systematic processing. For future applications, cute objects may be used as an emotion elicitor to induce careful behavioral tendencies in specific situations, such as driving and office work.
Temporally flexible feedback signal to foveal cortex for peripheral object recognition
Fan, Xiaoxu; Wang, Lan; Shao, Hanyu; Kersten, Daniel; He, Sheng
2016-01-01
Recent studies have shown that information from peripherally presented images is present in the human foveal retinotopic cortex, presumably because of feedback signals. We investigated this potential feedback signal by presenting noise in fovea at different object–noise stimulus onset asynchronies (SOAs), whereas subjects performed a discrimination task on peripheral objects. Results revealed a selective impairment of performance when foveal noise was presented at 250-ms SOA, but only for tasks that required comparing objects’ spatial details, suggesting a task- and stimulus-dependent foveal processing mechanism. Critically, the temporal window of foveal processing was shifted when mental rotation was required for the peripheral objects, indicating that the foveal retinotopic processing is not automatically engaged at a fixed time following peripheral stimulation; rather, it occurs at a stage when detailed information is required. Moreover, fMRI measurements using multivoxel pattern analysis showed that both image and object category-relevant information of peripheral objects was represented in the foveal cortex. Taken together, our results support the hypothesis of a temporally flexible feedback signal to the foveal retinotopic cortex when discriminating objects in the visual periphery. PMID:27671651
Volumetric image interpretation in radiology: scroll behavior and cognitive processes.
den Boer, Larissa; van der Schaaf, Marieke F; Vincken, Koen L; Mol, Chris P; Stuijfzand, Bobby G; van der Gijp, Anouk
2018-05-16
The interpretation of medical images is a primary task for radiologists. Besides two-dimensional (2D) images, current imaging technologies allow for volumetric display of medical images. Whereas current radiology practice increasingly uses volumetric images, the majority of studies on medical image interpretation are conducted on 2D images. The current study aimed to gain deeper insight into the volumetric image interpretation process by examining this process in twenty radiology trainees who all completed four volumetric image cases. Two types of data were obtained: scroll behavior and think-aloud data. Types of scroll behavior comprised oscillations, half runs, full runs, image manipulations, and interruptions. Think-aloud data were coded using a framework of knowledge and skills in radiology that includes three cognitive processes: perception, analysis, and synthesis. Relating scroll behavior to cognitive processes showed that oscillations and half runs coincided more often with analysis and synthesis than full runs did, whereas full runs coincided more often with perception than oscillations and half runs did. Interruptions were characterized by synthesis, and image manipulations by perception. In addition, we investigated relations between cognitive processes and found an overall bottom-up way of reasoning with dynamic interactions between cognitive processes, especially between perception and analysis. In sum, our results highlight the dynamic interactions between these processes and the grounding of cognitive processes in scroll behavior. This suggests that the types of scroll behavior are relevant for describing how radiologists interact with and manipulate volumetric images.
Shapelet analysis of pupil dilation for modeling visuo-cognitive behavior in screening mammography
NASA Astrophysics Data System (ADS)
Alamudun, Folami; Yoon, Hong-Jun; Hammond, Tracy; Hudson, Kathy; Morin-Ducote, Garnetta; Tourassi, Georgia
2016-03-01
Our objective is to improve understanding of visuo-cognitive behavior in screening mammography under clinically equivalent experimental conditions. To this end, we examined pupillometric data, acquired using a head-mounted eye-tracking device, from 10 image readers (three breast-imaging radiologists and seven radiology residents) and their corresponding diagnostic decisions for 100 screening mammograms. The corpus of mammograms comprised cases of varied pathology and breast parenchymal density. We investigated the relationships between the pupillometric fluctuations experienced by an image reader during mammographic screening, indicative of changes in mental workload, the pathological characteristics of a mammographic case, and the image reader's diagnostic decision and overall task performance. To answer these questions, we extracted features from the pupillometric data and additionally applied time-series shapelet analysis to extract discriminative patterns in changes in pupil dilation. Our results show that pupillometric measures are adequate predictors of mammographic case pathology and of image readers' diagnostic decisions and performance, with an average accuracy of 80%.
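The shapelet idea can be illustrated with a minimal sketch: a shapelet's distance to a time series is the minimum Euclidean distance over all sliding windows, and a discriminative shapelet lies closer to one class of series than to the other. The pupil traces below are synthetic stand-ins, not the study's data.

```python
import numpy as np

def shapelet_distance(series, shapelet):
    """Distance from a candidate shapelet to a time series:
    minimum Euclidean distance over all sliding windows."""
    m = len(shapelet)
    windows = np.lib.stride_tricks.sliding_window_view(series, m)
    return float(np.min(np.linalg.norm(windows - shapelet, axis=1)))

# Toy pupil traces: the 'dilation' class contains a transient bump,
# the 'baseline' class does not (all values hypothetical).
t = np.linspace(0, 1, 50)
bump = np.exp(-((t - 0.5) ** 2) / 0.005)
dilating = 0.1 * np.random.default_rng(1).normal(size=50) + bump
baseline = 0.1 * np.random.default_rng(2).normal(size=50)

shapelet = bump[20:30]          # candidate pattern: the bump itself
d_pos = shapelet_distance(dilating, shapelet)
d_neg = shapelet_distance(baseline, shapelet)
```

A shapelet is discriminative precisely when a distance threshold separates the two classes, as `d_pos` and `d_neg` do here.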
Kosaka, H; Omori, M; Murata, T; Iidaka, T; Yamada, H; Okada, T; Takahashi, T; Sadato, N; Itoh, H; Yonekura, Y; Wada, Y
2002-09-01
Human lesion and neuroimaging studies suggest that the amygdala is involved in facial emotion recognition. Although impairments in recognition of facial and/or emotional expression have been reported in schizophrenia, there are few neuroimaging studies that have examined differential brain activation during facial recognition between patients with schizophrenia and normal controls. To investigate amygdala responses during facial recognition in schizophrenia, we conducted a functional magnetic resonance imaging (fMRI) study with 12 right-handed medicated patients with schizophrenia and 12 age- and sex-matched healthy controls. The experimental task was an emotional intensity judgment task. During the task period, subjects were asked to view happy (or angry/disgusting/sad) and neutral faces simultaneously presented every 3 s and to judge which face was more emotional (positive or negative face discrimination). Imaging data were investigated on a voxel-by-voxel basis for single-group analysis and for between-group analysis according to the random-effects model using Statistical Parametric Mapping (SPM). No significant difference in task accuracy was found between the schizophrenic and control groups. Positive face discrimination activated the bilateral amygdalae of both controls and schizophrenics, with more prominent activation of the right amygdala shown in the schizophrenic group. Negative face discrimination activated the bilateral amygdalae in the schizophrenic group but only the right amygdala in the control group, although no significant group difference was found. The exaggerated amygdala activation during emotional intensity judgment found in the schizophrenic patients may reflect impaired gating of sensory input containing emotion. Copyright 2002 Elsevier Science B.V.
Touroutoglou, Alexandra; Bickart, Kevin C; Barrett, Lisa Feldman; Dickerson, Bradford C
2014-10-01
Individual differences in the intensity of feelings of arousal while viewing emotional pictures have been associated with the magnitude of task-evoked blood-oxygen dependent (BOLD) response in the amygdala. Recently, we reported that individual differences in feelings of arousal are associated with task-free (resting state) connectivity within the salience network. There has not yet been an investigation of whether these two types of functional magnetic resonance imaging (MRI) measures are redundant or independent in their relationships to behavior. Here we tested the hypothesis that a combination of task-evoked amygdala activation and task-free amygdala connectivity within the salience network relate to individual differences in feelings of arousal while viewing of negatively potent images. In 25 young adults, results revealed that greater task-evoked amygdala activation and stronger task-free amygdala connectivity within the salience network each contributed independently to feelings of arousal, predicting a total of 45% of its variance. Individuals who had both increased task-evoked amygdala activation and stronger task-free amygdala connectivity within the salience network had the most heightened levels of arousal. Task-evoked amygdala activation and task-free amygdala connectivity within the salience network were not related to each other, suggesting that resting-state and task-evoked dynamic brain imaging measures may provide independent and complementary information about affective experience, and likely other kinds of behaviors as well. Copyright © 2014 Wiley Periodicals, Inc.
Multilevel image recognition using discriminative patches and kernel covariance descriptor
NASA Astrophysics Data System (ADS)
Lu, Le; Yao, Jianhua; Turkbey, Evrim; Summers, Ronald M.
2014-03-01
Computer-aided diagnosis of medical images has emerged as an important tool to objectively improve the performance, accuracy, and consistency of clinical workflows. Computerizing medical image diagnostic recognition involves three fundamental problems: where to look (i.e., where is the region of interest within the whole image/volume), image feature description/encoding, and similarity metrics for classification or matching. In this paper, we present the motivation, implementation, and performance evaluation of task-driven iterative, discriminative image patch mining; a covariance matrix based descriptor via intensity, gradient, and spatial layout; and a log-Euclidean distance kernel for support vector machines, to address these three aspects respectively. To cope with often visually ambiguous image patterns for the region of interest in medical diagnosis, discovery of multilabel selective discriminative patches is desired. The covariance of several image statistics summarizes their second-order interactions within an image patch and has proven to be an effective image descriptor, with low dimensionality compared with joint statistics and fast computation regardless of the patch size. We extensively evaluate two extended Gaussian kernels using the affine-invariant Riemannian metric or the log-Euclidean metric with support vector machines (SVM), on two medical image classification problems: degenerative disc disease (DDD) detection on cortical shell unwrapped CT maps and colitis detection on CT key images. The proposed approach is validated with promising quantitative results on these challenging tasks. Our experimental findings and discussion also unveil some interesting insights on the covariance feature composition with or without spatial layout for classification and retrieval, and on different kernel constructions for SVM. This will also shed some light on future work using covariance features and kernel classification for medical image analysis.
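A minimal sketch of the region covariance descriptor and the log-Euclidean distance discussed above (the per-pixel feature set, the ridge constant, and the patch data are illustrative choices, not the paper's exact configuration):

```python
import numpy as np
from scipy.linalg import logm

def covariance_descriptor(patch):
    """Covariance of per-pixel features (x, y, intensity, |Ix|, |Iy|)
    over a patch: a fixed-size descriptor regardless of patch size."""
    h, w = patch.shape
    y, x = np.mgrid[:h, :w]
    gy, gx = np.gradient(patch.astype(float))
    feats = np.stack([x.ravel(), y.ravel(), patch.ravel(),
                      np.abs(gx).ravel(), np.abs(gy).ravel()], axis=1)
    c = np.cov(feats, rowvar=False)
    return c + 1e-6 * np.eye(c.shape[0])   # small ridge keeps it SPD

def log_euclidean_dist(c1, c2):
    """Log-Euclidean distance ||logm(C1) - logm(C2)||_F; plugging
    exp(-gamma * d**2) into an RBF form gives a valid SVM kernel."""
    return float(np.linalg.norm(logm(c1) - logm(c2), ord="fro"))

rng = np.random.default_rng(0)
a, b = rng.random((16, 16)), rng.random((16, 16))
d_ab = log_euclidean_dist(covariance_descriptor(a), covariance_descriptor(b))
d_aa = log_euclidean_dist(covariance_descriptor(a), covariance_descriptor(a))
```

Because covariance matrices live on the SPD manifold, the matrix logarithm (rather than plain Euclidean distance) is what makes the kernel geometry-aware.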
Milchenko, Mikhail; Snyder, Abraham Z; LaMontagne, Pamela; Shimony, Joshua S; Benzinger, Tammie L; Fouke, Sarah Jost; Marcus, Daniel S
2016-07-01
Neuroimaging research often relies on clinically acquired magnetic resonance imaging (MRI) datasets that can originate from multiple institutions. Such datasets are characterized by high heterogeneity of modalities and variability of sequence parameters. This heterogeneity complicates the automation of image processing tasks such as spatial co-registration and physiological or functional image analysis. Given this heterogeneity, conventional processing workflows developed for research purposes are not optimal for clinical data. In this work, we describe an approach called Heterogeneous Optimization Framework (HOF) for developing image analysis pipelines that can handle the high degree of clinical data non-uniformity. HOF provides a set of guidelines for configuration, algorithm development, deployment, interpretation of results and quality control for such pipelines. At each step, we illustrate the HOF approach using the implementation of an automated pipeline for Multimodal Glioma Analysis (MGA) as an example. The MGA pipeline computes tissue diffusion characteristics of diffusion tensor imaging (DTI) acquisitions, hemodynamic characteristics using a perfusion model of susceptibility contrast (DSC) MRI, and spatial cross-modal co-registration of available anatomical, physiological and derived patient images. Developing MGA within HOF enabled the processing of neuro-oncology MR imaging studies to be fully automated. MGA has been successfully used to analyze over 160 clinical tumor studies to date within several research projects. Introduction of the MGA pipeline improved image processing throughput and, most importantly, effectively produced co-registered datasets that were suitable for advanced analysis despite high heterogeneity in acquisition protocols.
Gang, G J; Siewerdsen, J H; Stayman, J W
2016-02-01
This work applies task-driven optimization to design CT tube current modulation and directional regularization in penalized-likelihood (PL) reconstruction. The relative performance of modulation schemes commonly adopted for filtered-backprojection (FBP) reconstruction were also evaluated for PL in comparison. We adopt a task-driven imaging framework that utilizes a patient-specific anatomical model and information of the imaging task to optimize imaging performance in terms of detectability index ( d' ). This framework leverages a theoretical model based on implicit function theorem and Fourier approximations to predict local spatial resolution and noise characteristics of PL reconstruction as a function of the imaging parameters to be optimized. Tube current modulation was parameterized as a linear combination of Gaussian basis functions, and regularization was based on the design of (directional) pairwise penalty weights for the 8 in-plane neighboring voxels. Detectability was optimized using a covariance matrix adaptation evolutionary strategy algorithm. Task-driven designs were compared to conventional tube current modulation strategies for a Gaussian detection task in an abdomen phantom. The task-driven design yielded the best performance, improving d' by ~20% over an unmodulated acquisition. Contrary to FBP, PL reconstruction using automatic exposure control and modulation based on minimum variance (in FBP) performed worse than the unmodulated case, decreasing d' by 16% and 9%, respectively. This work shows that conventional tube current modulation schemes suitable for FBP can be suboptimal for PL reconstruction. Thus, the proposed task-driven optimization provides additional opportunities for improved imaging performance and dose reduction beyond that achievable with conventional acquisition and reconstruction.
NASA Technical Reports Server (NTRS)
Knasel, T. Michael
1996-01-01
The primary goal of the Adaptive Vision Laboratory Research project was to develop advanced computer vision systems for automatic target recognition. The approach used in this effort combined several machine learning paradigms including evolutionary learning algorithms, neural networks, and adaptive clustering techniques to develop the E-MORPH system. This system is capable of generating pattern recognition systems to solve a wide variety of complex recognition tasks. A series of simulation experiments were conducted using E-MORPH to solve problems in OCR, military target recognition, industrial inspection, and medical image analysis. The bulk of the funds provided through this grant were used to purchase computer hardware and software to support these computationally intensive simulations. The payoff from this effort is the reduced need for human involvement in the design and implementation of recognition systems. We have shown that the techniques used in E-MORPH are generic and readily transition to other problem domains. Specifically, E-MORPH is a multi-phase evolutionary learning system that evolves cooperative sets of feature detectors and combines their responses using an adaptive classifier to form a complete pattern recognition system. The system can operate on binary or grayscale images. In our most recent experiments, we used multi-resolution images that are formed by applying a Gabor wavelet transform to a set of grayscale input images. To begin the learning process, candidate chips are extracted from the multi-resolution images to form a training set and a test set. A population of detector sets is randomly initialized to start the evolutionary process. Using a combination of evolutionary programming and genetic algorithms, the feature detectors are enhanced to solve a recognition problem. The design of E-MORPH and recognition results for a complex problem in medical image analysis are described at the end of this report.
The specific task involves the identification of vertebrae in x-ray images of human spinal columns. This problem is extremely challenging because individual vertebrae exhibit variation in shape, scale, orientation, and contrast. E-MORPH generated several accurate recognition systems to solve this task. The dual use of this ATR technology clearly demonstrates the flexibility and power of our approach.
Grace, Sally A; Rossell, Susan L; Heinrichs, Markus; Kordsachia, Catarina; Labuschagne, Izelle
2018-05-24
Oxytocin (OXT) is a neuropeptide which has a critical role in human social behaviour and cognition. Research investigating the role of OXT on functional brain changes in humans has often used task paradigms that probe socioemotional processes. Preliminary evidence suggests a central role of the amygdala in the social cognitive effects of intranasal OXT (IN-OXT), however, inconsistencies in task-design and analysis methods have led to inconclusive findings regarding a cohesive model of the neural mechanisms underlying OXT's actions. The aim of this meta-analysis was to systematically investigate these findings. A systematic search of PubMed, PsycINFO, and Scopus databases was conducted for fMRI studies which compared IN-OXT to placebo in humans. First, we systematically reviewed functional magnetic resonance imaging (fMRI) studies of IN-OXT, including studies of healthy humans, those with clinical disorders, and studies examining resting-state fMRI (rsfMRI). Second, we employed a coordinate-based meta-analysis for task-based neuroimaging literature using activation likelihood estimation (ALE), whereby, coordinates were extracted from clusters with significant differences in IN-OXT versus placebo in healthy adults. Data were included for 39 fMRI studies that reported a total of 374 distinct foci. The meta-analysis identified task-related IN-OXT increases in activity within a cluster of the left superior temporal gyrus during tasks of emotion processing. These findings are important as they implicate regions beyond the amygdala in the neural effects of IN-OXT. The outcomes from this meta-analysis can guide a priori predictions for future OXT research, and provide an avenue for targeted treatment interventions. Copyright © 2018 Elsevier Ltd. All rights reserved.
WND-CHARM: Multi-purpose image classification using compound image transforms
Orlov, Nikita; Shamir, Lior; Macura, Tomasz; Johnston, Josiah; Eckley, D. Mark; Goldberg, Ilya G.
2008-01-01
We describe a multi-purpose image classifier that can be applied to a wide variety of image classification tasks without modifications or fine-tuning, and yet provide classification accuracy comparable to state-of-the-art task-specific image classifiers. The proposed image classifier first extracts a large set of 1025 image features including polynomial decompositions, high contrast features, pixel statistics, and textures. These features are computed on the raw image, transforms of the image, and transforms of transforms of the image. The feature values are then used to classify test images into a set of pre-defined image classes. This classifier was tested on several different problems including biological image classification and face recognition. Although we cannot make a claim of universality, our experimental results show that this classifier performs as well or better than classifiers developed specifically for these image classification tasks. Our classifier’s high performance on a variety of classification problems is attributed to (i) a large set of features extracted from images; and (ii) an effective feature selection and weighting algorithm sensitive to specific image classification problems. The algorithms are available for free download from openmicroscopy.org. PMID:18958301
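The feature selection and weighting idea behind such classifiers can be sketched as follows: Fisher-style per-feature discriminability scores weight a neighbor-distance classification rule. The exponent, toy data, and two-feature setup are assumptions for illustration, not WND-CHARM's exact algorithm.

```python
import numpy as np

def fisher_weights(X, y):
    """Per-feature Fisher scores: variance of class means divided by the
    mean within-class variance, weighting features by discriminability."""
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    within = np.array([X[y == c].var(axis=0) for c in classes]).mean(axis=0)
    return means.var(axis=0) / (within + 1e-12)

def classify(x, X, y, w, p=-5.0):
    """Weighted neighbor distance: each class scores the sum of d**p over
    its training samples, so the smallest distances dominate."""
    d = np.sqrt((((X - x) ** 2) * w).sum(axis=1)) + 1e-12
    classes = np.unique(y)
    scores = [np.sum(d[y == c] ** p) for c in classes]
    return int(classes[int(np.argmax(scores))])

rng = np.random.default_rng(0)
# Two classes separated only in feature 0; feature 1 is pure noise.
X0 = rng.normal(0, 1, (40, 2)); X0[:, 0] += 4
X1 = rng.normal(0, 1, (40, 2))
X = np.vstack([X0, X1]); y = np.array([0] * 40 + [1] * 40)
w = fisher_weights(X, y)
pred = classify(np.array([4.0, 0.0]), X, y, w)
```

With a large feature bank (1025 features in the paper), this kind of data-driven weighting is what lets one classifier adapt to very different imaging problems.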
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolly, S; Mutic, S; Anastasio, M
Purpose: Traditionally, image quality in radiation therapy is assessed subjectively or by utilizing physically-based metrics. Some model observers exist for task-based medical image quality assessment, but almost exclusively for diagnostic imaging tasks. As opposed to disease diagnosis, the task for image observers in radiation therapy is to utilize the available images to design and deliver a radiation dose which maximizes patient disease control while minimizing normal tissue damage. The purpose of this study was to design and implement a new computer simulation model observer to enable task-based image quality assessment in radiation therapy. Methods: A modular computer simulation framework was developed to resemble the radiotherapy observer by simulating an end-to-end radiation therapy treatment. Given images and the ground-truth organ boundaries from a numerical phantom as inputs, the framework simulates an external beam radiation therapy treatment and quantifies patient treatment outcomes using the previously defined therapeutic operating characteristic (TOC) curve. As a preliminary demonstration, TOC curves were calculated for various CT acquisition and reconstruction parameters, with the goal of assessing and optimizing simulation CT image quality for radiation therapy. Sources of randomness and bias within the system were analyzed. Results: The relationship between CT imaging dose and patient treatment outcome was objectively quantified in terms of a single scalar value, the area under the TOC (AUTOC) curve. The AUTOC decreases more rapidly for low-dose imaging protocols. AUTOC variation introduced by the dose optimization algorithm was approximately 0.02%, at the 95% confidence interval. Conclusion: A model observer has been developed and implemented to assess image quality based on radiation therapy treatment efficacy. It enables objective determination of appropriate imaging parameter values (e.g. imaging dose).
Framework flexibility allows for incorporation of additional modules to include any aspect of the treatment process, and therefore has great potential for both assessment and optimization within radiation therapy.
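The TOC/AUTOC construction can be sketched with a toy model: sweep the prescription dose, trace tumour control probability (TCP) against normal-tissue complication probability (NTCP), and integrate the resulting curve, exactly as ROC AUC summarizes a sensitivity/specificity trade-off. The sigmoidal dose-response parameters below are invented for illustration.

```python
import numpy as np

def dose_response(d, d50, gamma):
    """Sigmoidal dose-response curve (illustrative parameterization)."""
    return 1.0 / (1.0 + np.exp(-4.0 * gamma * (d - d50) / d50))

dose = np.linspace(0.0, 120.0, 500)              # prescription-dose sweep (Gy)
tcp = dose_response(dose, d50=60.0, gamma=2.0)   # tumour control probability
ntcp = dose_response(dose, d50=80.0, gamma=3.0)  # normal-tissue complication prob.

# TOC curve: TCP against NTCP as dose scales; AUTOC is the area under it
# (trapezoidal rule). Higher AUTOC = better achievable treatment trade-off.
autoc = float(np.sum(0.5 * (tcp[1:] + tcp[:-1]) * np.diff(ntcp)))
```

Better imaging (e.g. more accurate organ boundaries) shifts the TOC curve upward, which is how the framework maps image quality onto treatment efficacy.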
N-back Working Memory Task: Meta-analysis of Normative fMRI Studies With Children.
Yaple, Zachary; Arsalidou, Marie
2018-05-07
The n-back task is likely the most popular measure of working memory for functional magnetic resonance imaging (fMRI) studies. Despite accumulating neuroimaging studies with the n-back task and children, its neural representation is still unclear. fMRI studies that used the n-back were compiled, and data from children up to 15 years (n = 260) were analyzed using activation likelihood estimation. Results show concordance in frontoparietal regions recognized for their role in working memory as well as regions not typically highlighted as part of the working memory network, such as the insula. Findings are discussed in terms of developmental methodology and potential contribution to developmental theories of cognition. © 2018 Society for Research in Child Development.
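The activation likelihood estimation (ALE) computation referenced above can be sketched as follows: each reported activation focus contributes a Gaussian modeled-activation (MA) map, and the ALE value at each voxel is the probabilistic union of those maps. The grid size, kernel width, and peak probability here are illustrative, not the study's settings.

```python
import numpy as np

def ale_map(foci, shape=(20, 20, 20), sigma=2.0, peak=0.5):
    """ALE sketch: voxelwise union 1 - prod(1 - MA_i) of Gaussian
    modeled-activation maps, one per reported focus."""
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in shape],
                                indexing="ij"), axis=-1).astype(float)
    keep = np.ones(shape)
    for f in foci:
        d2 = ((grid - np.asarray(f, dtype=float)) ** 2).sum(axis=-1)
        keep *= 1.0 - peak * np.exp(-d2 / (2.0 * sigma ** 2))
    return 1.0 - keep

# Two nearby (concordant) foci and one isolated focus, in voxel coordinates.
ale = ale_map([(10, 10, 10), (10, 11, 10), (3, 3, 3)])
```

Voxels where foci from many studies cluster get high ALE values; significance is then assessed against a null distribution of randomly placed foci.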
Task-based optimization of image reconstruction in breast CT
NASA Astrophysics Data System (ADS)
Sanchez, Adrian A.; Sidky, Emil Y.; Pan, Xiaochuan
2014-03-01
We demonstrate a task-based assessment of image quality in dedicated breast CT in order to optimize the number of projection views acquired. The methodology we employ is based on the Hotelling Observer (HO) and its associated metrics. We consider two tasks: the Rayleigh task of discerning between two resolvable objects and a single larger object, and the signal detection task of classifying an image as belonging to either a signal-present or signal-absent hypothesis. HO SNR values are computed for 50, 100, 200, 500, and 1000 projection view images, with the total imaging radiation dose held constant. We use the conventional fan-beam FBP algorithm and investigate the effect of varying the width of a Hanning window used in the reconstruction, since this affects both the noise properties of the image and the under-sampling artifacts which can arise in the case of sparse-view acquisitions. Our results demonstrate that fewer projection views should be used in order to increase HO performance, which in this case constitutes an upper-bound on human observer performance. However, the impact on HO SNR of using fewer projection views, each with a higher dose, is not as significant as the impact of employing regularization in the FBP reconstruction through a Hanning filter.
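The Hotelling Observer underlying these metrics has a closed form: the optimal linear template is w = K⁻¹Δs and the detectability SNR satisfies d'² = Δsᵀ K⁻¹ Δs, where Δs is the mean signal difference between hypotheses and K is the image noise covariance. A toy one-dimensional sketch, with the signal and covariance invented for illustration:

```python
import numpy as np

n = 64
x = np.arange(n)
delta_s = np.exp(-((x - n / 2) ** 2) / 20.0)    # mean signal difference (toy bump)
K = 0.5 ** np.abs(x[:, None] - x[None, :])      # AR(1)-style noise covariance

# Hotelling template w = K^-1 delta_s; SNR d'^2 = delta_s^T K^-1 delta_s.
w = np.linalg.solve(K, delta_s)
d_prime = float(np.sqrt(delta_s @ w))
d_prime_white = float(np.linalg.norm(delta_s))  # same signal if noise were white (K = I)
```

Reconstruction choices (view count, Hanning window width) enter through K and Δs, which is how they change HO SNR in the study above.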
Lee, Matthew H; Schemmel, Andrew J; Pooler, B Dustin; Hanley, Taylor; Kennedy, Tabassum; Field, Aaron; Wiegmann, Douglas; Yu, John-Paul J
2017-04-01
The study aimed to assess perceptions of reading room workflow and the impact separating image-interpretive and nonimage-interpretive task workflows can have on radiologist perceptions of workplace disruptions, workload, and overall satisfaction. A 14-question survey instrument was developed to measure radiologist perceptions of workplace interruptions, satisfaction, and workload prior to and following implementation of separate image-interpretive and nonimage-interpretive reading room workflows. The results were collected over 2 weeks preceding the intervention and 2 weeks following the end of the intervention. The results were anonymized and analyzed using univariate analysis. A total of 18 people responded to the preintervention survey: 6 neuroradiology fellows and 12 attending neuroradiologists. Fifteen people who were then present for the 1-month intervention period responded to the postintervention survey. Perceptions of workplace disruptions, image interpretation, quality of trainee education, ability to perform nonimage-interpretive tasks, and quality of consultations (P < 0.0001) all improved following the intervention. Mental effort and workload also improved across all assessment domains, as did satisfaction with quality of image interpretation and consultative work. Implementation of parallel dedicated image-interpretive and nonimage-interpretive workflows may improve markers of radiologist perceptions of workplace satisfaction. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
Automatic MRI 2D brain segmentation using graph searching technique.
Pedoia, Valentina; Binaghi, Elisabetta
2013-09-01
Accurate and efficient segmentation of the whole brain in magnetic resonance (MR) images is a key task in many neuroscience and medical studies, either because the whole brain is the final anatomical structure of interest or because the automatic extraction facilitates further analysis. The problem of segmenting brain MRI images has been extensively addressed by many researchers. Despite the relevant achievements obtained, automated segmentation of brain MRI imagery is still a challenging problem whose solution has to cope with critical aspects such as anatomical variability and pathological deformation. In the present paper, we describe and experimentally evaluate a method for segmenting the brain from MRI images based on two-dimensional graph searching principles for border detection. The segmentation of the whole brain over the entire volume is accomplished slice by slice, automatically detecting frames including eyes. The method is fully automatic and easily reproducible by computing the internal main parameters directly from the image data. The segmentation procedure is conceived as a tool of general applicability, although design requirements are especially commensurate with the accuracy required in clinical tasks such as surgical planning and post-surgical assessment. Several experiments were performed to assess the performance of the algorithm on a varied set of MRI images, obtaining good results in terms of accuracy and stability. Copyright © 2012 John Wiley & Sons, Ltd.
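Two-dimensional graph searching for border detection can be sketched as a dynamic program: accumulate edge costs column by column over connected moves, then backtrack the cheapest left-to-right path. The cost image below is synthetic; a real implementation would derive costs from image gradients.

```python
import numpy as np

def min_cost_border(cost):
    """Graph-searching border detection: minimum-cost connected path
    through a cost image, one row index per column (dynamic programming)."""
    h, w = cost.shape
    acc = cost.astype(float).copy()
    back = np.zeros((h, w), dtype=int)
    for j in range(1, w):
        for i in range(h):
            lo, hi = max(0, i - 1), min(h, i + 2)   # connected predecessors
            k = lo + int(np.argmin(acc[lo:hi, j - 1]))
            acc[i, j] += acc[k, j - 1]
            back[i, j] = k
    path = [int(np.argmin(acc[:, -1]))]
    for j in range(w - 1, 0, -1):                   # backtrack to column 0
        path.append(int(back[path[-1], j]))
    return path[::-1]

# Synthetic edge-cost image: row 5 is cheap, everything else expensive.
cost = np.ones((10, 12))
cost[5, :] = 0.01
border = min_cost_border(cost)
```

The connectivity constraint is what guarantees a smooth, gap-free border, which plain per-column thresholding cannot.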
Comparison of parameter-adapted segmentation methods for fluorescence micrographs.
Held, Christian; Palmisano, Ralf; Häberle, Lothar; Hensel, Michael; Wittenberg, Thomas
2011-11-01
Interpreting images from fluorescence microscopy is often a time-consuming task with poor reproducibility. Various image processing routines that can help investigators evaluate the images are therefore useful. The critical aspect of a reliable automatic image analysis system is a robust segmentation algorithm that can perform accurate segmentation for different cell types. In this study, several image segmentation methods were therefore compared and evaluated in order to identify the most appropriate segmentation schemes, ones that can be applied with little new parameterization and that work robustly with different types of fluorescence-stained cells for various biological and biomedical tasks. The study investigated, compared, and enhanced four different methods for segmentation of cultured epithelial cells. The maximum-intensity linking (MIL) method, an improved MIL, a watershed method, and an improved watershed method based on morphological reconstruction were used. Three manually annotated datasets consisting of 261, 817, and 1,333 HeLa or L929 cells were used to compare the different algorithms. The comparisons and evaluations showed that the segmentation performance of methods based on the watershed transform was significantly superior to the performance of the MIL method. The results also indicate that using morphological opening by reconstruction can improve the segmentation of cells stained with a marker that produces a dotted pattern on the cell surface. Copyright © 2011 International Society for Advancement of Cytometry.
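Morphological opening by reconstruction, the operation credited above with improving segmentation of dotted staining, can be sketched in one dimension (an illustrative toy, not the paper's code): erode the signal, then iteratively dilate the result under the original signal as a mask. Narrow spikes are removed while wider structures are restored exactly.

```python
def erode(sig, size=3):
    """Grayscale erosion of a 1D signal with a flat window."""
    r = size // 2
    return [min(sig[max(0, i - r):i + r + 1]) for i in range(len(sig))]

def dilate(sig, size=3):
    """Grayscale dilation of a 1D signal with a flat window."""
    r = size // 2
    return [max(sig[max(0, i - r):i + r + 1]) for i in range(len(sig))]

def reconstruct_by_dilation(seed, mask, size=3):
    """Iteratively dilate seed, clipping under mask, until stable."""
    cur = [min(s, m) for s, m in zip(seed, mask)]
    while True:
        nxt = [min(d, m) for d, m in zip(dilate(cur, size), mask)]
        if nxt == cur:
            return cur
        cur = nxt

def opening_by_reconstruction(sig, size=3):
    """Remove structures narrower than the window; restore the rest exactly."""
    return reconstruct_by_dilation(erode(sig, size), sig, size)
```

In the test below, the isolated spike of height 5 is removed while the wide plateau of height 3 is fully recovered.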
Moutsatsos, Ioannis K; Hossain, Imtiaz; Agarinis, Claudia; Harbinski, Fred; Abraham, Yann; Dobler, Luc; Zhang, Xian; Wilson, Christopher J; Jenkins, Jeremy L; Holway, Nicholas; Tallarico, John; Parker, Christian N
2017-03-01
High-throughput screening generates large volumes of heterogeneous data that require a diverse set of computational tools for management, processing, and analysis. Building integrated, scalable, and robust computational workflows for such applications is challenging but highly valuable. Scientific data integration and pipelining facilitate standardized data processing, collaboration, and reuse of best practices. We describe how Jenkins-CI, an "off-the-shelf," open-source, continuous integration system, is used to build pipelines for processing images and associated data from high-content screening (HCS). Jenkins-CI provides numerous plugins for standard compute tasks, and its design allows the quick integration of external scientific applications. Using Jenkins-CI, we integrated CellProfiler, an open-source image-processing platform, with various HCS utilities and a high-performance Linux cluster. The platform is web-accessible, facilitates access and sharing of high-performance compute resources, and automates previously cumbersome data and image-processing tasks. Imaging pipelines developed using the desktop CellProfiler client can be managed and shared through a centralized Jenkins-CI repository. Pipelines and managed data are annotated to facilitate collaboration and reuse. Limitations with Jenkins-CI (primarily around the user interface) were addressed through the selection of helper plugins from the Jenkins-CI community.
Visual Attention for Solving Multiple-Choice Science Problem: An Eye-Tracking Analysis
ERIC Educational Resources Information Center
Tsai, Meng-Jung; Hou, Huei-Tse; Lai, Meng-Lung; Liu, Wan-Yi; Yang, Fang-Ying
2012-01-01
This study employed an eye-tracking technique to examine students' visual attention when solving a multiple-choice science problem. Six university students participated in a problem-solving task to predict occurrences of landslide hazards from four images representing four combinations of four factors. Participants' responses and visual attention…
Effects of Cognitive Styles on 2D Drafting and Design Performance in Digital Media
ERIC Educational Resources Information Center
Pektas, Sule Tasli
2010-01-01
This paper investigates the interactions between design students' cognitive styles, as measured by Riding's Cognitive Styles Analysis, and performance in 2D drafting and design tasks in digital media. An empirical research revealed that Imager students outperformed Verbalisers in both drafting and creativity scores. Wholist-Analytic cognitive…
Culture in English as a Foreign Language (EFL) Textbooks: A Semiotic Approach
ERIC Educational Resources Information Center
Weninger, Csilla; Kiss, Tamas
2013-01-01
This article problematizes current, quantitative approaches to the analysis of culture in foreign language textbooks as objectifying culture, and offers an alternative, semiotic framework that examines texts, images, and tasks as merely engendering particular meanings in the act of semiosis. The authors take as a point of departure developments…
Functional Evaluation of Hidden Figures Object Analysis in Children with Autistic Disorder
ERIC Educational Resources Information Center
Malisza, Krisztina L.; Clancy, Christine; Shiloff, Deborah; Foreman, Derek; Holden, Jeanette; Jones, Cheryl; Paulson, K.; Summers, Randy; Yu, C. T.; Chudley, Albert E.
2011-01-01
Functional magnetic resonance imaging (fMRI) during performance of a hidden figures task (HFT) was used to compare differences in brain function in children diagnosed with autism disorder (AD) compared to children with attention-deficit/hyperactivity disorder (ADHD) and typical controls (TC). Overall greater functional MRI activity was observed in…
NASA Technical Reports Server (NTRS)
Imhoff, M. L.; Vermillion, C. H.; Khan, F. A.
1984-01-01
An investigation to examine the utility of spaceborne radar image data to malaria vector control programs is described. Specific tasks involve an analysis of radar illumination geometry vs information content, the synergy of radar and multispectral data mergers, and automated information extraction techniques.
de Andrade, Anarella Penha Meirelles; Amaro, Edson; Farhat, Sylvia Costa Lima; Schvartsman, Claudio
2016-06-01
Burnout syndrome is common in healthcare workers. We evaluated its prevalence in paediatric residents and investigated its influence on cerebral function correlations, using functional magnetic resonance imaging (MRI), when they carried out an attentional paradigm. This cross-sectional descriptive study involved 28 residents from the Department of Paediatrics at the University of São Paulo. The functional MRI was carried out while the residents completed the Stroop colour word task paradigm to investigate their attentional task performance. The Maslach Burnout Inventory (MBI) was applied, and stress was assessed using the Lipp Inventory of Stress Symptoms for Adults and by a visual analogue mood scale. The MBI subscales of depersonalisation and emotional exhaustion indicated that 53.1% of the residents had moderate or high burnout syndrome. The whole-brain multivariate analysis showed positive correlations between the blood oxygenation level dependent effect and the MBI depersonalisation and emotional exhaustion indices in the dorsolateral prefrontal cortex, which controls for anxiety. Increased brain activation during an attention task, measured using functional MRI, was associated with higher burnout scores in paediatric residents. This study provides a biological basis for the implementation of measures to reduce burnout syndrome at the start of residency training programmes. ©2016 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.
Hobeika, Lucie; Diard-Detoeuf, Capucine; Garcin, Béatrice; Levy, Richard; Volle, Emmanuelle
2016-05-01
Reasoning by analogy allows us to link distinct domains of knowledge and to transfer solutions from one domain to another. Analogical reasoning has been studied using various tasks that have generally required the consideration of the relationships between objects and their integration to infer an analogy schema. However, these tasks varied in terms of the level and the nature of the relationships to consider (e.g., semantic, visuospatial). The aim of this study was to identify the cerebral network involved in analogical reasoning and its specialization based on the domains of information and task specificity. We conducted a coordinate-based meta-analysis of 27 experiments that used analogical reasoning tasks. The left rostrolateral prefrontal cortex was one of the regions most consistently activated across the studies. A comparison between semantic and visuospatial analogy tasks showed both domain-oriented regions in the inferior and middle frontal gyri and a domain-general region, the left rostrolateral prefrontal cortex, which was specialized for analogy tasks. A comparison of visuospatial analogy to matrix problem tasks revealed that these two relational reasoning tasks engage, at least in part, distinct right and left cerebral networks, particularly separate areas within the left rostrolateral prefrontal cortex. These findings highlight several cognitive and cerebral differences between relational reasoning tasks that can allow us to make predictions about the respective roles of distinct brain regions or networks. These results also provide new, testable anatomical hypotheses about reasoning disorders that are induced by brain damage. Hum Brain Mapp 37:1953-1969, 2016. © 2016 Wiley Periodicals, Inc.
Multi-layer imager design for mega-voltage spectral imaging
NASA Astrophysics Data System (ADS)
Myronakis, Marios; Hu, Yue-Houng; Fueglistaller, Rony; Wang, Adam; Baturin, Paul; Huber, Pascal; Morf, Daniel; Star-Lack, Josh; Berbeco, Ross
2018-05-01
The architecture of multi-layer imagers (MLIs) can be exploited to provide megavoltage spectral imaging (MVSPI) for specific imaging tasks. In the current work, we investigated bone suppression and gold fiducial contrast enhancement as two clinical tasks which could be improved with spectral imaging. A method based on analytical calculations that enables rapid investigation of MLI component materials and thicknesses was developed and validated against Monte Carlo computations. The figure of merit for task-specific imaging performance was the contrast-to-noise ratio (CNR) of the gold fiducial when the CNR of bone was equal to zero after a weighted subtraction of the signals obtained from each MLI layer. Results demonstrated a sharp increase in the CNR of gold when the build-up component or scintillation materials and thicknesses were modified. The potential for low-cost, prompt implementation of specific modifications (e.g. composition of the build-up component) could accelerate clinical translation of MVSPI.
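The weighted-subtraction figure of merit described above can be sketched numerically (all signal and noise values below are hypothetical, not from the paper): choose the weight w that nulls bone contrast between two layers, then evaluate the gold CNR in the subtracted image, assuming independent noise in the two layers.

```python
import math

def weight_to_null(contrast_bone_layer1, contrast_bone_layer2):
    """Weight w such that layer1 - w * layer2 cancels bone contrast."""
    return contrast_bone_layer1 / contrast_bone_layer2

def cnr_after_subtraction(c1, c2, n1, n2, w):
    """CNR of a target (e.g. a gold fiducial) in the weighted-subtraction
    image, assuming independent noise n1, n2 in the two layers."""
    return (c1 - w * c2) / math.sqrt(n1 ** 2 + (w * n2) ** 2)
```

For example, with hypothetical bone contrasts 0.2 and 0.1 in the two layers, w = 2 nulls bone, and the residual gold CNR follows from the gold contrasts and layer noises.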
Kim, Dongkue; Park, Sangsoo; Jeong, Myung Ho; Ryu, Jeha
2018-02-01
In percutaneous coronary intervention (PCI), cardiologists must study two different X-ray image sources: a fluoroscopic image and an angiogram. Manipulating a guidewire while alternately monitoring the two separate images on separate screens requires a deep understanding of the anatomy of coronary vessels and substantial training. We propose 2D/2D spatiotemporal image registration of the two images in a single image in order to provide cardiologists with enhanced visual guidance in PCI. The proposed 2D/2D spatiotemporal registration method uses a cross-correlation of the two ECG series in each image to temporally synchronize the two separate images and register an angiographic image onto the fluoroscopic image. A guidewire centerline is then extracted from the fluoroscopic image in real time, and the alignment of the centerline with vessel outlines of the chosen angiographic image is optimized using the iterative closest point algorithm for spatial registration. A proof-of-concept evaluation, in which engineering students manipulated a guidewire in a phantom coronary vessel model, showed a greater than 74% reduction in wrong insertions into nontarget branches compared with the non-registration method, and a more than 47% reduction in task completion time for very difficult tasks. Evaluation with a small number of experienced doctors shows a potentially significant reduction in both task completion time and error rate for difficult tasks. The total registration time with real procedural X-ray (angiographic and fluoroscopic) images is approximately 60 ms, which is within the fluoroscopic image acquisition interval at 15 Hz. By providing cardiologists with better visual guidance in PCI, the proposed spatiotemporal image registration method is shown to be useful in advancing the guidewire into the coronary vessel branches, especially those that are difficult to enter.
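The ECG-based temporal synchronization rests on finding the lag that maximizes the cross-correlation of the two ECG series. A minimal pure-Python sketch (not the authors' implementation) scores each candidate lag by the overlap dot product and returns the best one:

```python
def best_lag(a, b, max_lag):
    """Lag of series b relative to series a that maximizes their
    cross-correlation, searched over [-max_lag, max_lag]."""
    def score(lag):
        s = 0.0
        for i, x in enumerate(a):
            j = i - lag
            if 0 <= j < len(b):
                s += x * b[j]
        return s
    return max(range(-max_lag, max_lag + 1), key=score)
```

A spike at index 5 in one series and at index 2 in the other is aligned by a lag of 3 samples.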
Do you see what I see? Mobile eye-tracker contextual analysis and inter-rater reliability.
Stuart, S; Hunt, D; Nell, J; Godfrey, A; Hausdorff, J M; Rochester, L; Alcock, L
2018-02-01
Mobile eye-trackers are currently used during real-world tasks (e.g. gait) to monitor visual and cognitive processes, particularly in ageing and Parkinson's disease (PD). However, contextual analysis involving fixation locations during such tasks is rarely performed due to its complexity. This study adapted a validated algorithm and developed a classification method to semi-automate contextual analysis of mobile eye-tracking data. We further assessed inter-rater reliability of the proposed classification method. A mobile eye-tracker recorded eye-movements during walking in five healthy older adult controls (HC) and five people with PD. Fixations were identified using a previously validated algorithm, which was adapted to provide still images of fixation locations (n = 116). The fixation location was manually identified by two raters (DH, JN), who classified the locations. Cohen's kappa correlation coefficients determined the inter-rater reliability. The algorithm successfully provided still images for each fixation, allowing manual contextual analysis to be performed. The inter-rater reliability for classifying the fixation location was high for both PD (kappa = 0.80, 95% agreement) and HC groups (kappa = 0.80, 91% agreement), which indicated a reliable classification method. This study developed a reliable semi-automated contextual analysis method for gait studies in HC and PD. Future studies could adapt this methodology for various gait-related eye-tracking studies.
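The inter-rater agreement statistic reported above, Cohen's kappa, compares observed agreement with the agreement expected by chance from each rater's label frequencies. A minimal sketch:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters' categorical labels of the same items."""
    n = len(rater1)
    p_obs = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    # Chance agreement from the marginal label frequencies of each rater.
    p_exp = sum(c1[k] * c2[k] for k in c1.keys() | c2.keys()) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)
```

With four items labeled 'aabb' and 'aaba', observed agreement is 0.75, chance agreement is 0.5, and kappa is 0.5.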
NASA Astrophysics Data System (ADS)
El Bekri, Nadia; Angele, Susanne; Ruckhäberle, Martin; Peinsipp-Byma, Elisabeth; Haelke, Bruno
2015-10-01
This paper introduces an interactive recognition assistance system for imaging reconnaissance. This system supports aerial image analysts on missions during two main tasks: object recognition and infrastructure analysis. Object recognition concentrates on the classification of one single object. Infrastructure analysis deals with the description of the components of an infrastructure and the recognition of the infrastructure type (e.g. military airfield). Based on satellite or aerial images, aerial image analysts are able to extract single object features and thereby recognize different object types. It is one of the most challenging tasks in imaging reconnaissance. Currently, no high-potential ATR (automatic target recognition) applications are available; as a consequence, the human observer cannot be replaced entirely. State-of-the-art ATR applications cannot match human perception and interpretation. Why is this still such a critical issue? First, cluttered and noisy images make it difficult to automatically extract, classify and identify object types. Second, due to the changing nature of warfare and the rise of asymmetric threats, it is nearly impossible to create an underlying data set containing all features, objects or infrastructure types. Many other factors, such as environmental parameters or aspect angles, further complicate the application of ATR. Due to the lack of suitable ATR procedures, the human factor is still important and so far irreplaceable. In order to use the potential benefits of human perception and computational methods in a synergistic way, both are unified in an interactive assistance system. RecceMan® (Reconnaissance Manual) offers two different modes for aerial image analysts on missions: the object recognition mode and the infrastructure analysis mode. The aim of the object recognition mode is to recognize a certain object type based on the object features that originated from the image signatures.
The infrastructure analysis mode aims to analyze the function of the infrastructure. The image analyst visually extracts certain target object signatures, assigns them to corresponding object features, and is finally able to recognize the object type. The system offers the possibility to assign the image signatures to features given by sample images. The underlying data set contains a wide range of object features and object types for different domains, such as ships or land vehicles. Each domain has its own feature tree developed by expert aerial image analysts. By selecting the corresponding features, the possible solution set of objects is automatically reduced to only those objects that contain the selected features. Moreover, we give an outlook on current research in the field of ground target analysis, in which we deal with partly automated methods to extract image signatures and assign them to the corresponding features. This research includes methods for automatically determining the orientation of an object and geometric features such as the width and length of the object. This step makes it possible to automatically narrow the set of object types offered to the image analyst by the interactive recognition assistance system.
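The feature-tree filtering described above amounts to keeping only the object types whose feature sets contain every selected feature. A schematic sketch with invented catalog entries and feature names (purely illustrative):

```python
def filter_candidates(catalog, selected_features):
    """catalog maps object type -> set of features; return the types whose
    feature sets contain every selected feature."""
    return {t for t, feats in catalog.items() if selected_features <= feats}
```

Selecting more features monotonically shrinks the candidate set, which is exactly the behavior the assistance system exploits.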
GPFA-AB_Phase1RiskAnalysisTask5DataUpload
Teresa E. Jordan
2015-09-30
This submission contains information used to compute the risk factors for the GPFA-AB project (DE-EE0006726). The risk factors are natural reservoir quality, thermal resource quality, potential for induced seismicity, and utilization. The methods used to combine the risk factors included taking the product, sum, and minimum of the four risk factors. The files are divided into images, rasters, shapefiles, and supporting information. The image files show what the raster and shapefiles should look like. The raster files contain the input risk factors, calculation of the scaled risk factors, and calculation of the combined risk factors. The shapefiles include definition of the fairways, definition of the US Census Places, the center of the raster cells, and locations of industries. Supporting information contains details of the calculations or processing used in generating the files. An image of a raster has the same name as the raster file, but with *.png instead of *.tif as the file ending. Images with “fairways” or “industries” added to the name are composed of a raster with the relevant shapefile added. The file About_GPFA-AB_Phase1RiskAnalysisTask5DataUpload.pdf contains information on the citation, special-use considerations, authorship, etc. More details on each file are given in the spreadsheet “list_of_contents.csv” in the folder “SupportingInfo”. Code used to calculate values is available at https://github.com/calvinwhealton/geothermal_pfa under the folder “combining_metrics”.
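The three combination rules named above (product, sum, and minimum of the four scaled risk factors) can be sketched as follows; the factor names and values are hypothetical placeholders, not data from the submission:

```python
def combine_risk_factors(factors):
    """factors: mapping of the four scaled risk factors to their values.
    Returns the three combined metrics: product, sum, and minimum."""
    vals = list(factors.values())
    product = 1.0
    for v in vals:
        product *= v
    return {"product": product, "sum": sum(vals), "minimum": min(vals)}
```

The product penalizes any single poor factor heavily, the sum averages strengths and weaknesses, and the minimum reflects the single worst factor.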
Automated identification of the lung contours in positron emission tomography
NASA Astrophysics Data System (ADS)
Nery, F.; Silvestre Silva, J.; Ferreira, N. C.; Caramelo, F. J.; Faustino, R.
2013-03-01
Positron Emission Tomography (PET) is a nuclear medicine imaging technique that permits three-dimensional analysis of physiological processes in vivo. One of the areas where PET has demonstrated its advantages is in the staging of lung cancer, where it offers better sensitivity and specificity than other techniques such as CT. On the other hand, accurate segmentation, an important procedure for Computer Aided Diagnostics (CAD) and automated image analysis, is a challenging task given the low spatial resolution and the high noise that are intrinsic characteristics of PET images. This work presents an algorithm for the segmentation of lungs in PET images, to be used in CAD and group analysis in a large patient database. The lung boundaries are automatically extracted from a PET volume through the application of a marker-driven watershed segmentation procedure which is robust to the noise. In order to test the effectiveness of the proposed method, we compared the segmentation results in several slices using our approach with the results obtained from manual delineation. The manual delineation was performed by nuclear medicine physicians using a software routine that we developed specifically for this task. To quantify the similarity between the contours obtained from the two methods, we used figures of merit based on region and also on contour definitions. Results show that the performance of the algorithm was similar to the performance of human physicians. Additionally, we found that the algorithm-physician agreement is statistically comparable to the inter-physician agreement.
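A common region-based figure of merit for comparing an automatic segmentation against manual delineation is the Dice coefficient; the abstract does not name its exact metrics, so this is an illustrative example of a region-overlap measure:

```python
def dice_coefficient(mask_a, mask_b):
    """Dice overlap between two binary masks given as sets of pixel
    coordinates: 2*|A∩B| / (|A| + |B|), ranging from 0 to 1."""
    if not mask_a and not mask_b:
        return 1.0  # two empty masks agree perfectly by convention
    return 2 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))
```

Two three-pixel masks sharing two pixels score 2/3.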
Image sequence analysis workstation for multipoint motion analysis
NASA Astrophysics Data System (ADS)
Mostafavi, Hassan
1990-08-01
This paper describes an application-specific engineering workstation designed and developed to analyze motion of objects from video sequences. The system combines the software and hardware environment of a modern graphics-oriented workstation with digital image acquisition, processing and display techniques. In addition to automation and increased throughput of data reduction tasks, the objective of the system is to provide less invasive methods of measurement by offering the ability to track objects that are more complex than reflective markers. Grey-level image processing and spatial/temporal adaptation of the processing parameters are used for location and tracking of more complex features of objects under uncontrolled lighting and background conditions. The applications of such an automated and noninvasive measurement tool include analysis of the trajectory and attitude of rigid bodies such as human limbs, robots, aircraft in flight, etc. The system's key features are: 1) acquisition and storage of image sequences by digitizing and storing real-time video; 2) computer-controlled movie-loop playback, freeze-frame display, and digital image enhancement; 3) multiple leading-edge tracking in addition to object centroids at up to 60 fields per second from either live input video or a stored image sequence; 4) model-based estimation and tracking of the six degrees of freedom of a rigid body; 5) field-of-view and spatial calibration; 6) image sequence and measurement database management; and 7) offline analysis software for trajectory plotting and statistical analysis.
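The object-centroid tracking mentioned above reduces, per frame, to an intensity-weighted centroid over a region of interest. A minimal sketch (illustrative only, not the workstation's code):

```python
def grey_centroid(img):
    """Intensity-weighted centroid (x, y) of a 2D grey-level image
    given as a list of rows."""
    total = sx = sy = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            total += v
            sx += v * x
            sy += v * y
    return sx / total, sy / total
```

Tracking then amounts to recomputing this centroid in each field of the sequence and linking positions over time.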
Li, Hui-Jie; Hou, Xiao-Hui; Liu, Han-Hui; Yue, Chun-Lin; He, Yong; Zuo, Xi-Nian
2015-03-01
Most of the previous task functional magnetic resonance imaging (fMRI) studies found abnormalities in distributed brain regions in mild cognitive impairment (MCI) and Alzheimer's disease (AD), and few studies investigated the brain network dysfunction at the system level. In this meta-analysis, we aimed to examine brain network dysfunction in MCI and AD. We systematically searched task-based fMRI studies in MCI and AD published between January 1990 and January 2014. Activation likelihood estimation meta-analyses were conducted to compare the significant group differences in brain activation; the significant voxels were overlaid onto seven referenced neuronal cortical networks derived from the resting-state fMRI data of 1,000 healthy participants. Thirty-nine task-based fMRI studies (697 MCI patients and 628 healthy controls) were included in the MCI-related meta-analysis, while 36 task-based fMRI studies (421 AD patients and 512 healthy controls) were included in the AD-related meta-analysis. The meta-analytic results revealed that MCI and AD showed abnormal regional brain activation as well as abnormal large-scale brain networks. MCI patients showed hypoactivation in default, frontoparietal, and visual networks relative to healthy controls, whereas AD-related hypoactivation was mainly located in visual, default, and ventral attention networks relative to healthy controls. Both MCI-related and AD-related hyperactivation fell within frontoparietal, ventral attention, default, and somatomotor networks relative to healthy controls. MCI and AD presented different pathological networks while sharing similar compensatory large-scale networks in fulfilling the cognitive tasks. These system-level findings are helpful to link the fundamental declines of cognitive tasks to brain networks in MCI and AD. © 2014 Wiley Periodicals, Inc.
Network analysis of exploratory behaviors of mice in a spatial learning and memory task
Suzuki, Yusuke
2017-01-01
The Barnes maze is one of the main behavioral tasks used to study spatial learning and memory. The Barnes maze is a task conducted on “dry land” in which animals try to escape from a brightly lit exposed circular open arena to a small dark escape box located under one of several holes at the periphery of the arena. In comparison with another classical spatial learning and memory task, the Morris water maze, the negative reinforcements that motivate animals in the Barnes maze are less severe and less stressful. Furthermore, the Barnes maze is more compatible with recently developed cutting-edge techniques in neural circuit research, such as the miniature brain endoscope or optogenetics. For this study, we developed a lift-type task start system and equipped the Barnes maze with it. The subject mouse is raised up by the lift and released into the maze automatically so that it can start navigating the maze smoothly from exactly the same start position across repeated trials. We believe that a Barnes maze test with a lift-type task start system may be useful for behavioral experiments when combined with head-mounted or wire-connected devices for online imaging and intervention in neural circuits. Furthermore, we introduced a network analysis method for the analysis of the Barnes maze data. Each animal’s exploratory behavior in the maze was visualized as a network of nodes and their links, and spatial learning in the maze is described by systematic changes in network structures of search behavior. Network analysis was capable of visualizing and quantitatively analyzing subtle but significant differences in an animal’s exploratory behavior in the maze. PMID:28700627
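The network representation described above (nodes are maze holes; weighted, directed links are transitions between successively visited holes) can be sketched directly from a visit sequence:

```python
from collections import Counter

def transition_network(visits):
    """Build the node list and weighted directed edge counts from a
    sequence of visited hole indices."""
    edges = Counter(zip(visits, visits[1:]))  # consecutive visit pairs
    return sorted(set(visits)), dict(edges)
```

As learning progresses, the edge-count distribution is expected to concentrate on the path toward the escape hole; comparing edge weights across trials quantifies that change.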
Dynamic Imaging of the Eye, Optic Nerve, and Extraocular Muscles With Golden Angle Radial MRI
Smith, David S.; Smith, Alex K.; Welch, E. Brian; Smith, Seth A.
2017-01-01
Purpose The eye and its accessory structures, the optic nerve and the extraocular muscles, form a complex dynamic system. In vivo magnetic resonance imaging (MRI) of this system in motion can have substantial benefits in understanding oculomotor functioning in health and disease, but has been restricted to date to imaging of static gazes only. The purpose of this work was to develop a technique to image the eye and its accessory visual structures in motion. Methods Dynamic imaging of the eye was developed on a 3-Tesla MRI scanner, based on a golden angle radial sequence that allows freely selectable frame-rate and temporal-span image reconstructions from the same acquired data set. Retrospective image reconstructions at a chosen frame rate of 57 ms per image yielded high-quality in vivo movies of various eye motion tasks performed in the scanner. Motion analysis was performed for a left–right version task where motion paths, lengths, and strains/globe angle of the medial and lateral extraocular muscles and the optic nerves were estimated. Results Offline image reconstructions resulted in dynamic images of bilateral visual structures of healthy adults in only ∼15-s imaging time. Qualitative and quantitative analyses of the motion enabled estimation of trajectories, lengths, and strains on the optic nerves and extraocular muscles at very high frame rates of ∼18 frames/s. Conclusions This work presents an MRI technique that enables high-frame-rate dynamic imaging of the eyes and orbital structures. The presented sequence has the potential to be used in furthering the understanding of oculomotor mechanics in vivo, both in health and disease. PMID:28813574
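Golden-angle radial acquisition increments the spoke angle by approximately 111.246° per repetition, so any contiguous run of spokes covers k-space roughly uniformly; frames of a freely chosen duration are then formed by retrospective binning, as in the 57 ms reconstructions above. A sketch (the repetition time used below is a hypothetical value, not from the paper):

```python
GOLDEN_ANGLE_DEG = 111.246  # approximate golden-angle increment in degrees

def spoke_angles(n_spokes):
    """Acquisition angle of each radial spoke, in degrees."""
    return [(i * GOLDEN_ANGLE_DEG) % 360.0 for i in range(n_spokes)]

def bin_spokes(n_spokes, tr_ms, frame_ms):
    """Retrospectively assign spoke indices to frames of frame_ms duration,
    given a repetition time of tr_ms per spoke."""
    frames = {}
    for i in range(n_spokes):
        frames.setdefault(int(i * tr_ms // frame_ms), []).append(i)
    return frames
```

Because the binning is retrospective, the same acquired data can be regrouped into frames of any duration after the scan, which is the property the sequence exploits.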
Nakashima, Ryoichi; Komori, Yuya; Maeda, Eriko; Yoshikawa, Takeharu; Yokosawa, Kazuhiko
2016-01-01
Although viewing multiple stacks of medical images presented on a display is a relatively new but useful medical task, little is known about this task. In particular, it is unclear how radiologists search for lesions in this type of image reading. When viewing cluttered and dynamic displays, continuous motion itself does not capture attention. Thus, target detection is aided when observers' attention is captured by the onset signal of a suddenly appearing target among continuously moving distractors (i.e., a passive viewing strategy). This can be applied to stack viewing tasks, because lesions often show up as transient signals in medical images that are presented sequentially, simulating a dynamic and smoothly transforming progression of organ images. However, it is unclear whether observers can detect a target when the target appears at the beginning of a sequential presentation, where the global apparent motion onset signal (i.e., the signal of the initiation of the apparent motion by sequential presentation) occurs. We investigated the ability of radiologists to detect lesions during such tasks by comparing the performances of radiologists and novices. Results show that the overall performance of radiologists is better than that of novices. Furthermore, the temporal locations of lesions in CT image sequences, i.e., when a lesion appears in an image sequence, do not affect the performance of radiologists, whereas they do affect the performance of novices. Results indicate that novices have greater difficulty detecting a lesion appearing early rather than late in the image sequence. We suggest that radiologists, unlike novices, have mechanisms for detecting lesions in medical images with little attention. This ability is critically important when viewing rapid sequential presentations of multiple CT images, such as stack viewing tasks.
Nakashima, Ryoichi; Komori, Yuya; Maeda, Eriko; Yoshikawa, Takeharu; Yokosawa, Kazuhiko
2016-01-01
Although viewing multiple stacks of medical images presented on a display is a relatively new but useful medical task, little is known about this task. Particularly, it is unclear how radiologists search for lesions in this type of image reading. When viewing cluttered and dynamic displays, continuous motion itself does not capture attention. Thus, it is effective for the target detection that observers' attention is captured by the onset signal of a suddenly appearing target among the continuously moving distractors (i.e., a passive viewing strategy). This can be applied to stack viewing tasks, because lesions often show up as transient signals in medical images which are sequentially presented simulating a dynamic and smoothly transforming image progression of organs. However, it is unclear whether observers can detect a target when the target appears at the beginning of a sequential presentation where the global apparent motion onset signal (i.e., signal of the initiation of the apparent motion by sequential presentation) occurs. We investigated the ability of radiologists to detect lesions during such tasks by comparing the performances of radiologists and novices. Results show that overall performance of radiologists is better than novices. Furthermore, the temporal locations of lesions in CT image sequences, i.e., when a lesion appears in an image sequence, does not affect the performance of radiologists, whereas it does affect the performance of novices. Results indicate that novices have greater difficulty in detecting a lesion appearing early than late in the image sequence. We suggest that radiologists have other mechanisms to detect lesions in medical images with little attention which novices do not have. This ability is critically important when viewing rapid sequential presentations of multiple CT images, such as stack viewing tasks. PMID:27774080
Xu, Junhai; Yin, Xuntao; Ge, Haitao; Han, Yan; Pang, Zengchang; Tang, Yuchun; Liu, Baolin; Liu, Shuwei
2015-01-01
Attention is a crucial brain function for human beings. Using neuropsychological paradigms and task-based functional brain imaging, previous studies have indicated that widely distributed brain regions are engaged in three distinct attention subsystems: alerting, orienting and executive control (EC). Here, we explored the potential contribution of spontaneous brain activity to attention by examining whether resting-state activity could account for individual differences in attentional performance in normal individuals. Resting-state functional images and behavioral data from the attention network test (ANT) were collected in 59 healthy subjects. Graph analysis was conducted to obtain the characteristics of functional brain networks, and linear regression analyses were used to explore their relationships with behavioral performance on the three attentional components. We found no significant relationship between attentional performance and the global measures, whereas attentional performance was associated with the efficiency of specific local regions. The regions related to the scores for alerting, orienting and EC largely overlapped with the regions activated in previous task-related functional imaging studies and were consistent with the intrinsic dorsal and ventral attention networks (DAN/VAN). In addition, the strong associations between attentional performance and specific regional efficiency suggested a possible relationship between the DAN/VAN and task performance in the ANT. We conclude that the intrinsic activity of the human brain can reflect the processing efficiency of the attention system. Our findings provide robust evidence for the functional significance of an efficiently organized intrinsic brain network for effective cognition and for the hypothesized role of the DAN/VAN at rest.
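The global and regional (nodal) efficiency measures used in such graph analyses have a compact definition: the efficiency between two nodes is the inverse of their shortest path length, regional efficiency averages this over all partners of a node, and global efficiency averages over all nodes. A minimal sketch on a binarized connectivity matrix (the 4-node toy graph is invented for illustration; the study's networks and thresholds are not given here):

```python
import numpy as np

def efficiencies(adj):
    """Global and per-node (regional) efficiency of a binary undirected graph."""
    n = adj.shape[0]
    d = np.where(adj > 0, 1.0, np.inf)        # direct-link distances
    np.fill_diagonal(d, 0.0)
    for k in range(n):                        # Floyd-Warshall shortest paths
        d = np.minimum(d, d[:, [k]] + d[[k], :])
    with np.errstate(divide="ignore"):        # 1/0 on the diagonal is discarded below
        inv = 1.0 / d
    np.fill_diagonal(inv, 0.0)
    regional = inv.sum(axis=1) / (n - 1)      # nodal (regional) efficiency
    return regional.mean(), regional          # global efficiency, per-node values

# toy network: a 4-node path graph 0-1-2-3
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
global_eff, regional_eff = efficiencies(adj)
```

On the path graph, the two middle nodes reach their neighbors more cheaply than the endpoints do, so they get the higher regional efficiency, which is the kind of node-level contrast the regression analyses above exploit.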
Ricciardi, Emiliano; Handjaras, Giacomo; Bernardi, Giulio; Pietrini, Pietro; Furey, Maura L.
2012-01-01
Enhancing cholinergic function improves performance on various cognitive tasks and alters neural responses in task specific brain regions. Previous findings by our group strongly suggested that the changes in neural activity observed during increased cholinergic function may reflect an increase in neural efficiency that leads to improved task performance. The current study was designed to assess the effects of cholinergic enhancement on regional brain connectivity and BOLD signal variability. Nine subjects participated in a double-blind, placebo-controlled crossover functional magnetic resonance imaging (fMRI) study. Following an infusion of physostigmine (1mg/hr) or placebo, echo-planar imaging (EPI) was conducted as participants performed a selective attention task. During the task, two images comprised of superimposed pictures of faces and houses were presented. Subjects were instructed periodically to shift their attention from one stimulus component to the other and to perform a matching task using hand held response buttons. A control condition included phase-scrambled images of superimposed faces and houses that were presented in the same temporal and spatial manner as the attention task; participants were instructed to perform a matching task. Cholinergic enhancement improved performance during the selective attention task, with no change during the control task. Functional connectivity analyses showed that the strength of connectivity between ventral visual processing areas and task-related occipital, parietal and prefrontal regions was reduced significantly during cholinergic enhancement, exclusively during the selective attention task. Cholinergic enhancement also reduced BOLD signal temporal variability relative to placebo throughout temporal and occipital visual processing areas, again during the selective attention task only. 
Together with the observed behavioral improvement, the decreases in connectivity strength throughout task-relevant regions and BOLD variability within stimulus processing regions provide further support to the hypothesis that cholinergic augmentation results in enhanced neural efficiency. PMID:22906685
NASA Astrophysics Data System (ADS)
Akil, Mohamed
2017-05-01
Real-time processing is becoming increasingly important in many image processing applications. Image segmentation is one of the most fundamental tasks in image analysis, and many different approaches to it have been proposed. The watershed transform is a well-known image segmentation tool, but it is a very data-intensive task. To accelerate watershed algorithms toward real-time processing, parallel architectures and programming models for multicore computing have been developed. This paper surveys approaches for the parallel implementation of sequential watershed algorithms on multicore general-purpose CPUs: homogeneous multicore processors with shared memory. To achieve an efficient parallel implementation, it is necessary to explore different strategies (parallelization/distribution/distributed scheduling) combined with different acceleration and optimization techniques to enhance parallelism. In this paper, we compare various parallelizations of sequential watershed algorithms on shared-memory multicore architectures. We analyze the performance measurements of each parallel implementation and the impact of the different sources of overhead on its performance. In this comparison study, we also discuss the advantages and disadvantages of the parallel programming models, comparing OpenMP (an application programming interface for multiprocessing) with Pthreads (POSIX Threads) to illustrate the impact of each parallel programming model on the performance of the parallel implementations.
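The survey's parallelization strategies target C implementations with OpenMP or Pthreads, but the underlying domain-decomposition idea can be sketched in Python with a thread pool: split the image into horizontal strips, process each strip with a one-row halo so strip borders agree with the sequential result, and stitch the strips back together. The per-strip work below is a simple gradient magnitude, standing in for the far more involved flooding step of an actual watershed; strip count is an arbitrary choice:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def strip_gradient(img, row0, row1):
    """Gradient magnitude for rows [row0, row1), reading a one-row halo."""
    lo, hi = max(row0 - 1, 0), min(row1 + 1, img.shape[0])
    gy, gx = np.gradient(img[lo:hi].astype(float))
    return np.hypot(gy, gx)[row0 - lo : row0 - lo + (row1 - row0)]

def parallel_gradient(img, n_workers=4):
    """Split the image into horizontal strips and process them concurrently."""
    bounds = np.linspace(0, img.shape[0], n_workers + 1).astype(int)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        parts = pool.map(lambda b: strip_gradient(img, b[0], b[1]),
                         zip(bounds[:-1], bounds[1:]))
        return np.vstack(list(parts))

img = np.random.default_rng(0).random((64, 64))
```

Because each strip reads one halo row, the stitched result is identical to the sequential computation, which is the correctness requirement any strip-parallel watershed must also satisfy at its borders (where, in the real algorithm, border reconciliation is the hard part).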
NASA Technical Reports Server (NTRS)
Pavel, M.
1993-01-01
This presentation outlines in viewgraph format a general approach to the evaluation of display system quality for aviation applications. This approach is based on the assumption that it is possible to develop a model of the display which captures most of the significant properties of the display. The display characteristics should include spatial and temporal resolution, intensity quantizing effects, spatial sampling, delays, etc. The model must be sufficiently well specified to permit generation of stimuli that simulate the output of the display system. The first step in the evaluation of display quality is an analysis of the tasks to be performed using the display. Thus, for example, if a display is used by a pilot during a final approach, the aesthetic aspects of the display may be less relevant than its dynamic characteristics. The opposite task requirements may apply to imaging systems used for displaying navigation charts. Thus, display quality is defined with regard to one or more tasks. Given a set of relevant tasks, there are many ways to approach display evaluation. The range of evaluation approaches includes visual inspection, rapid evaluation, part-task simulation, and full mission simulation. The work described is focused on two complementary approaches to rapid evaluation. The first approach is based on a model of the human visual system. A model of the human visual system is used to predict the performance of the selected tasks. The model-based evaluation approach permits very rapid and inexpensive evaluation of various design decisions. The second rapid evaluation approach employs specifically designed critical tests that embody many important characteristics of actual tasks. These are used in situations where a validated model is not available. These rapid evaluation tests are being implemented in a workstation environment.
GPFA-AB_Phase1UtilizationTask4DataUpload
Teresa E. Jordan
2015-09-30
This submission of Utilization Analysis data to the Geothermal Data Repository (GDR) node of the National Geothermal Data System (NGDS) is in support of Phase 1 Low Temperature Geothermal Play Fairway Analysis for the Appalachian Basin (project DE-EE0006726). The submission includes data pertinent to the methods and results of an analysis of the Surface Levelized Cost of Heat (SLCOH) for US Census Bureau ‘Places’ within the study area. This was calculated using a modification of a program called GEOPHIRES, available at http://koenraadbeckers.net/geophires/index.php. The MATLAB modules used in conjunction with GEOPHIRES, the MATLAB data input file, the GEOPHIRES output data file, and an explanation of the software components have been provided. Results of the SLCOH analysis appear on 4 .png image files as mapped ‘risk’ of heat utilization. For each of the 4 image (.png) files, there is an accompanying georeferenced TIF (.tif) file by the same name. In addition to calculating SLCOH, this Task 4 also identified many sites that may be prospects for use of a geothermal district heating system, based on their size and industry, rather than on the SLCOH. An industry sorted listing of the sites (.xlsx) and a map of these sites plotted as a layer onto different iterations of maps combining the three geological risk factors (Thermal Quality, Natural Reservoir Quality, and Risk of Seismicity) has been provided. In addition to the 6 image (.png) files of the maps in this series, a shape (.shp) file and 7 associated files are included as well. Finally, supporting files (.pdf) describing the utilization analysis methodology and summarizing the anticipated permitting for a deep district heating system are supplied.
Toward an implicit measure of emotions: ratings of abstract images reveal distinct emotional states.
Bartoszek, Gregory; Cervone, Daniel
2017-11-01
Although implicit tests of positive and negative affect exist, implicit measures of distinct emotional states are scarce. Three experiments examined whether a novel implicit emotion-assessment task, the rating of emotion expressed in abstract images, would reveal distinct emotional states. In Experiment 1, participants exposed to a sadness-inducing story inferred more sadness, and less happiness, in abstract images. In Experiment 2, an anger-provoking interaction increased anger ratings. In Experiment 3, compared to neutral images, spider images increased fear ratings in spider-fearful participants but not in controls. In each experiment, the implicit task indicated elevated levels of the target emotion and did not indicate elevated levels of non-target negative emotions; the task thus differentiated among emotional states of the same valence. Correlations also supported the convergent and discriminant validity of the implicit task. Supporting the possibility that heuristic processes underlie the ratings, group differences were stronger among those who responded relatively quickly.
Bien, Nina; Sack, Alexander T
2014-07-01
In the current study we aimed to empirically test previously proposed accounts of a division of labour between the left and right posterior parietal cortices during visuospatial mental imagery. The representation of mental images in the brain has been a topic of debate for several decades. Although the posterior parietal cortex is involved bilaterally, previous studies have postulated that hemispheric specialisation might result in a division of labour between the left and right parietal cortices. In the current fMRI study, we used an elaborated version of a behaviourally-controlled spatial imagery paradigm, the mental clock task, which involves mental image generation and a subsequent spatial comparison between two angles. By systematically varying the difference between the two angles that are mentally compared, we induced a symbolic distance effect: smaller differences between the two angles result in higher task difficulty. We employed parametrically weighted brain imaging to reveal brain areas showing a graded activation pattern in accordance with the induced distance effect. The parametric difficulty manipulation influenced behavioural data and brain activation patterns in a similar manner. Moreover, since this difficulty manipulation only starts to play a role from the angle comparison phase onwards, it allows for a top-down dissociation between the initial mental image formation and the subsequent angle comparison phase of the spatial imagery task. Employing parametrically weighted fMRI analysis enabled us to disentangle, in a top-down fashion, brain activation related to mental image formation from activation reflecting spatial angle comparison. The results provide the first empirical evidence for the repeatedly proposed division of labour between the left and right posterior parietal cortices during spatial imagery. Copyright © 2014 Elsevier Inc. All rights reserved.
Fellah, Slim; Cheung, Yin T; Scoggins, Matthew A; Zou, Ping; Sabin, Noah D; Pui, Ching-Hon; Robison, Leslie L; Hudson, Melissa M; Ogg, Robert J; Krull, Kevin R
2018-05-21
The impact of contemporary chemotherapy treatment for childhood acute lymphoblastic leukemia on central nervous system activity is not fully appreciated. Neurocognitive testing and functional magnetic resonance imaging (fMRI) were obtained in 165 survivors five or more years postdiagnosis (average age = 14.4 years, 7.7 years from diagnosis, 51.5% males). Chemotherapy exposure was measured as serum concentration of methotrexate following high-dose intravenous injection. Neurocognitive testing included measures of attention and executive function. fMRI was obtained during completion of two tasks, the continuous performance task (CPT) and the attention network task (ANT). Image analysis was performed using Statistical Parametric Mapping software, with contrasts targeting sustained attention, alerting, orienting, and conflict. All statistical tests were two-sided. Compared with population norms, survivors demonstrated impairment on number-letter switching (P < .001, a measure of cognitive flexibility), which was associated with treatment intensity (P = .048). Task performance during fMRI was associated with neurocognitive dysfunction across multiple tasks. Regional brain activation was lower in survivors diagnosed at younger ages for the CPT (bilateral parietal and temporal lobes) and the ANT (left parietal and right hippocampus). With higher serum methotrexate exposure, CPT activation decreased in the right temporal and bilateral frontal and parietal lobes, but ANT alerting activation increased in the ventral frontal, insula, caudate, and anterior cingulate. Brain activation during attention and executive function tasks was associated with serum methotrexate exposure and age at diagnosis. These findings provide evidence for compromised and compensatory changes in regional brain function that may help clarify the neural substrates of cognitive deficits in acute lymphoblastic leukemia survivors.
Zanto, Theodore P; Pa, Judy; Gazzaley, Adam
2014-01-01
As the aging population grows, it has become increasingly important to carefully characterize amnestic mild cognitive impairment (aMCI), a preclinical stage of Alzheimer's disease (AD). Functional magnetic resonance imaging (fMRI) is a valuable tool for monitoring disease progression in selectively vulnerable brain regions associated with AD neuropathology. However, the reliability of fMRI data in longitudinal studies of older adults with aMCI is largely unexplored. To address this, aMCI participants completed two visual working memory tasks, a Delayed-Recognition task and a One-Back task, in three separate scanning sessions over a three-month period. Test-retest reliability of the fMRI blood oxygen level dependent (BOLD) activity was assessed using an intraclass correlation (ICC) analysis approach. Results indicated that brain regions engaged during the tasks displayed greater reliability across sessions than regions not utilized by the tasks. During task engagement, differential reliability scores were observed across the brain such that the frontal lobe, medial temporal lobe, and subcortical structures exhibited fair to moderate reliability (ICC = 0.3-0.6), while temporal, parietal, and occipital regions exhibited moderate to good reliability (ICC = 0.4-0.7). Additionally, reliability across brain regions was more stable when three fMRI sessions were used in the ICC calculation rather than two. In conclusion, the fMRI BOLD signal is reliable across scanning sessions in this population and is thus a useful tool for tracking longitudinal change in observational and interventional studies of aMCI. © 2013.
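For test-retest designs like this one, a commonly used ICC variant is ICC(3,1) (two-way mixed model, consistency), computed from the subject and residual mean squares of an (n subjects × k sessions) matrix; the abstract does not specify which variant was used, so the choice below is an assumption, and the data are invented:

```python
import numpy as np

def icc_3_1(y):
    """ICC(3,1): two-way mixed model, consistency, for an (n subjects, k sessions) array."""
    n, k = y.shape
    grand = y.mean()
    ms_subj = k * ((y.mean(axis=1) - grand) ** 2).sum() / (n - 1)        # between-subjects
    resid = y - y.mean(axis=1, keepdims=True) - y.mean(axis=0) + grand   # interaction/error
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_subj - ms_err) / (ms_subj + (k - 1) * ms_err)

# toy data: a stable per-subject trait plus session noise (population ICC = 0.8)
rng = np.random.default_rng(1)
trait = rng.normal(0.0, 1.0, size=(20, 1))
scores = trait + rng.normal(0.0, 0.5, size=(20, 3))
```

With trait variance 1.0 and session-noise variance 0.25, the population ICC is 1.0 / 1.25 = 0.8; the sample estimate fluctuates around that value, which is in the "good reliability" range the abstract describes.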
A framework for optimizing micro-CT in dual-modality micro-CT/XFCT small-animal imaging system
NASA Astrophysics Data System (ADS)
Vedantham, Srinivasan; Shrestha, Suman; Karellas, Andrew; Cho, Sang Hyun
2017-09-01
Dual-modality Computed Tomography (CT)/X-ray Fluorescence Computed Tomography (XFCT) can be a valuable tool for imaging and quantifying the organ and tissue distribution of small concentrations of high-atomic-number materials in small-animal systems. In this work, the framework for optimizing the micro-CT imaging component of the dual-modality system is described for two cases: when the micro-CT images are acquired concurrently with XFCT, under the x-ray spectral conditions used for XFCT, and when they are acquired sequentially and independently of XFCT. The framework utilizes cascaded systems analysis for task-specific determination of the detectability index using numerical observer models at a given radiation dose, where the radiation dose is determined using Monte Carlo simulations.
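The task-specific detectability index produced by such a cascaded-systems analysis takes, for a prewhitening ideal observer, the form d′² = ∫ |W_task(f)|² MTF(f)² / NPS(f) df, with W_task the task function (e.g., the spectrum of the object to be detected). A numerical sketch with generic model curves (the Gaussian MTF, NPS and task shapes are illustrative assumptions, not the system under study, and other numerical observers weight the integrand differently):

```python
import numpy as np

# frequency axis and generic model curves (illustrative only)
f = np.linspace(0.0, 5.0, 500)            # spatial frequency, cycles/mm
df = f[1] - f[0]
mtf = np.exp(-(f / 2.0) ** 2)             # toy system MTF
nps = 1e-4 * (0.2 + mtf ** 2)             # toy noise power spectrum
w_task = np.exp(-f ** 2)                  # toy task function (small Gaussian object)

# prewhitening ideal-observer detectability: d'^2 = sum |W|^2 MTF^2 / NPS df
d_prime = np.sqrt(np.sum(w_task ** 2 * mtf ** 2 / nps) * df)
```

Because dose enters through the NPS (more photons, lower noise power), recomputing d′ across dose levels is how a framework like this trades image quality against radiation dose.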
Light Field Imaging Based Accurate Image Specular Highlight Removal
Wang, Haoqian; Xu, Chenxue; Wang, Xingzheng; Zhang, Yongbing; Peng, Bo
2016-01-01
Specular reflection removal is indispensable to many computer vision tasks. However, most existing methods fail or degrade in complex real scenarios because of their individual drawbacks. Benefiting from light field imaging technology, this paper proposes a novel and accurate approach to remove specularity and improve image quality. We first capture images with specularity by the light field camera (Lytro ILLUM). After accurately estimating the image depth, a simple and concise threshold strategy is adopted to cluster the specular pixels into “unsaturated” and “saturated” categories. Finally, a color variance analysis of multiple views and a local color refinement are individually conducted on the two categories to recover diffuse color information. Experimental evaluation by comparison with existing methods on our light field dataset together with the Stanford light field archive verifies the effectiveness of our proposed algorithm. PMID:27253083
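A hypothetical rendering of the threshold step described above (the abstract does not give the paper's actual features or threshold values, so everything below is an illustrative assumption): under the dichromatic reflection model a specular highlight adds a roughly white offset, so pixels whose minimum RGB channel is high are specular candidates, and candidates at the sensor ceiling are split off as "saturated":

```python
import numpy as np

def classify_specular(rgb, spec_thresh=0.6, sat_thresh=0.98):
    """Masks (unsaturated specular, saturated specular) for an (H, W, 3) image in [0, 1]."""
    min_chan = rgb.min(axis=2)              # highlights lift every channel
    max_chan = rgb.max(axis=2)
    specular = min_chan > spec_thresh       # candidate specular pixels
    saturated = specular & (max_chan >= sat_thresh)
    return specular & ~saturated, saturated

# toy 1x3 image: diffuse red, an unsaturated highlight, a clipped highlight
img = np.array([[[0.8, 0.1, 0.1],
                 [0.8, 0.8, 0.8],
                 [1.0, 1.0, 1.0]]])
unsat, sat = classify_specular(img)
```

The split matters because the two classes need different recovery strategies, as in the paper: unsaturated pixels retain usable color variation across views, while clipped pixels must be filled from their local neighborhood.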
Richard, Christian M; Wright, Richard D; Ee, Cheryl; Prime, Steven L; Shimizu, Yujiro; Vavrik, John
2002-01-01
The effect of a concurrent auditory task on visual search was investigated using an image-flicker technique. Participants were undergraduate university students with normal or corrected-to-normal vision who searched for changes in images of driving scenes that involved either driving-related (e.g., traffic light) or driving-unrelated (e.g., mailbox) scene elements. The results indicated that response times were significantly slower if the search was accompanied by a concurrent auditory task. In addition, slower overall responses to scenes involving driving-unrelated changes suggest that the underlying process affected by the concurrent auditory task is strategic in nature. These results were interpreted in terms of their implications for using a cellular telephone while driving. Actual or potential applications of this research include the development of safer in-vehicle communication devices.
Yang, Li-Zhuang; Shi, Bin; Li, Hai; Zhang, Wei; Liu, Ying; Wang, Hongzhi; Zhou, Yanfei; Wang, Ying; Lv, Wanwan; Ji, Xuebing; Hudak, Justin; Zhou, Yifeng; Fallgatter, Andreas J; Zhang, Xiaochu
2017-08-01
Applying electrical stimulation over the prefrontal cortex can help nicotine dependents reduce cigarette craving. However, the underlying mechanism remains ambiguous. This study investigates this issue with functional magnetic resonance imaging. Thirty-two male chronic smokers received real and sham stimulation over dorsal lateral prefrontal cortex (DLPFC) separated by 1 week. The neuroimaging data of the resting state, the smoking cue-reactivity task and the emotion task after stimulation were collected. The craving across the cue-reactivity task was diminished during real stimulation as compared with sham stimulation. The whole-brain analysis on the cue-reactivity task revealed a significant interaction between the stimulation condition (real vs sham) and the cue type (smoking vs neutral) in the left superior frontal gyrus and the left middle frontal gyrus. The functional connectivity between the left DLPFC and the right parahippocampal gyrus, as revealed by both psychophysical interaction analysis and the resting state functional connectivity, is altered by electrical stimulation. Moreover, the craving change across the real and sham condition is predicted by alteration of functional connectivity revealed by psychophysical interaction analysis. The local and long-distance coupling, altered by the electrical stimulation, might be the underlying neural mechanism of craving regulation. © The Author (2017). Published by Oxford University Press.
NASA Astrophysics Data System (ADS)
Zhang, Lei; Sun, Jinyan; Sun, Bailei; Luo, Qingming; Gong, Hui
2014-05-01
Near-infrared spectroscopy (NIRS) is a developing and promising functional brain imaging technology. Developing data analysis methods to effectively extract meaningful information from collected data is the major bottleneck in popularizing this technology. In this study, we measured hemodynamic activity of the prefrontal cortex (PFC) during a color-word matching Stroop task using NIRS. Hemispheric lateralization was examined by employing traditional activation and novel NIRS-based connectivity analyses simultaneously. Wavelet transform coherence was used to assess intrahemispheric functional connectivity. Spearman correlation analysis was used to examine the relationship between behavioral performance and activation/functional connectivity, respectively. In agreement with activation analysis, functional connectivity analysis revealed leftward lateralization for the Stroop effect and correlation with behavioral performance. However, functional connectivity was more sensitive than activation for identifying hemispheric lateralization. Granger causality was used to evaluate the effective connectivity between hemispheres. The results showed increased information flow from the left to the right hemispheres for the incongruent versus the neutral task, indicating a leading role of the left PFC. This study demonstrates that the NIRS-based connectivity can reveal the functional architecture of the brain more comprehensively than traditional activation, helping to better utilize the advantages of NIRS.
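The Granger-causality idea behind the effective-connectivity analysis above reduces to comparing two autoregressions: x "Granger-causes" y if adding past values of x lowers the residual variance of a model of y from its own past. A minimal bivariate sketch with simulated signals (model order, coupling strength, and the simulated data are illustrative assumptions; the study applies this to left- and right-PFC NIRS time series, and wavelet coherence is a separate computation not shown here):

```python
import numpy as np

def ar_resid_var(target, predictors, order):
    """Residual variance of a linear model of `target` on lags of `predictors`."""
    rows = []
    for t in range(order, len(target)):
        feats = [1.0]
        for p in predictors:
            feats.extend(p[t - order:t])
        rows.append(feats)
    X, y = np.asarray(rows), np.asarray(target[order:])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.var(y - X @ beta)

def granger(x, y, order=2):
    """Log variance ratio: > 0 means past x improves prediction of y."""
    v_restricted = ar_resid_var(y, [y], order)
    v_full = ar_resid_var(y, [y, x], order)
    return float(np.log(v_restricted / v_full))

# simulate x driving y, then test both directions
rng = np.random.default_rng(7)
n = 500
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.5 * rng.normal()

gc_xy, gc_yx = granger(x, y), granger(y, x)
```

The asymmetry `gc_xy > gc_yx` is what "increased information flow from the left to the right hemisphere" expresses in the study: one direction's past helps prediction, the other's does not.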
Sun, Li; Liang, Peipeng; Jia, Xiuqin; Qi, Zhigang; Li, Kuncheng
2014-01-01
Objective: Recent neuroimaging studies have shown that elderly adults exhibit increased and decreased activation on various cognitive tasks, yet little is known about age-related changes in inductive reasoning. Methods: To investigate the neural basis for the aging effect on inductive reasoning, 15 young and 15 elderly subjects performed numerical inductive reasoning while in a magnetic resonance (MR) scanner. Results: Functional magnetic resonance imaging (fMRI) analysis revealed that numerical inductive reasoning, relative to rest, yielded multiple frontal, temporal, parietal, and some subcortical area activations for both age groups. In addition, the younger participants showed significant regions of task-induced deactivation, while no deactivation occurred in the elderly adults. Direct group comparisons showed that elderly adults exhibited greater activity in regions of task-related activation and areas showing task-induced deactivation (TID) in the younger group. Conclusions: Our findings suggest an age-related deficiency in neural function and resource allocation during inductive reasoning. PMID:25337240
Ingham, Roger J.; Grafton, Scott T.; Bothe, Anne K.; Ingham, Janis C.
2012-01-01
Many differences in brain activity have been reported between persons who stutter (PWS) and typically fluent controls during oral reading tasks. An earlier meta-analysis of imaging studies identified stutter-related regions, but recent studies report less agreement with those regions. A PET study on adult dextral PWS (n = 18) and matched fluent controls (CONT, n = 12) is reported that used both oral reading and monologue tasks. After correcting for speech rate differences between the groups the task-activation differences were surprisingly small. For both analyses only some regions previously considered stutter-related were more activated in the PWS group than in the CONT group, and these were also activated during eyes-closed rest (ECR). In the PWS group, stuttering frequency was correlated with cortico-striatal-thalamic circuit activity in both speaking tasks. The neuroimaging findings for the PWS group, relative to the CONT group, appear consistent with neuroanatomic abnormalities being increasingly reported among PWS. PMID:22564749
Monkeys Rely on Recency of Stimulus Repetition When Solving Short-Term Memory Tasks
ERIC Educational Resources Information Center
Wittig, John H., Jr.; Richmond, Barry J.
2014-01-01
Seven monkeys performed variants of two short-term memory tasks that others have used to differentiate between selective and nonselective memory mechanisms. The first task was to view a list of sequentially presented images and identify whether a test matched any image from the list, but not a distractor from a preceding list. Performance was best…
Multimodal Task-Driven Dictionary Learning for Image Classification
2015-12-18
Bahrampour, Soheil; Nasrabadi, Nasser M.; Ray, Asok; Jenkins, W. Kenneth
Dictionary learning algorithms have been successfully used for both reconstructive and discriminative tasks, where an input signal is represented with a sparse linear combination of dictionary atoms. While these methods are
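The sparse representation this abstract refers to, a signal approximated as a sparse linear combination of dictionary atoms, is commonly computed with orthogonal matching pursuit. A minimal sketch (this shows only generic sparse coding over a fixed random dictionary; the paper's task-driven training of the dictionary itself is not shown):

```python
import numpy as np

def omp(D, s, n_nonzero):
    """Greedy sparse code of signal s over dictionary D (unit-norm atoms in columns)."""
    resid = s.astype(float).copy()
    support, coef = [], np.zeros(0)
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(D.T @ resid))))  # best-matching atom
        atoms = D[:, support]
        coef, *_ = np.linalg.lstsq(atoms, s, rcond=None)     # refit on the support
        resid = s - atoms @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
D = rng.normal(size=(20, 50))
D /= np.linalg.norm(D, axis=0)                               # unit-norm atoms
s = rng.normal(size=20)
code = omp(D, s, 5)
```

Because the supports are nested across iterations and each refit is a least-squares solve over a growing atom set, the approximation residual is non-increasing in the sparsity budget.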
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larsson, Jakob C., E-mail: jakob.larsson@biox.kth.se; Lundström, Ulf; Hertz, Hans M.
2016-06-15
Purpose: High-spatial-resolution x-ray imaging in the few-ten-keV range is becoming increasingly important in several applications, such as small-animal imaging and phase-contrast imaging. The detector properties critically influence the quality of such imaging. Here the authors present a quantitative comparison of scintillator-based detectors for this energy range and at high spatial frequencies. Methods: The authors determine the modulation transfer function, noise power spectrum (NPS), and detective quantum efficiency for Gadox, needle CsI, and structured CsI scintillators of different thicknesses and at different photon energies. An extended analysis of the NPS allows for direct measurements of the scintillator effective absorption efficiency and effective light yield as well as providing an alternative method to assess the underlying factors behind the detector properties. Results: There is a substantial difference in performance between the scintillators depending on the imaging task but in general, the CsI based scintillators perform better than the Gadox scintillators. At low energies (16 keV), a thin needle CsI scintillator has the best performance at all frequencies. At higher energies (28–38 keV), the thicker needle CsI scintillators and the structured CsI scintillator all have very good performance. The needle CsI scintillators have higher absorption efficiencies but the structured CsI scintillator has higher resolution. Conclusions: The choice of scintillator is greatly dependent on the imaging task. The presented comparison and methodology will assist the imaging scientist in optimizing their high-resolution few-ten-keV imaging system for best performance.
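The three measured quantities above combine into the detective quantum efficiency as DQE(f) = MTF(f)² / (q · NNPS(f)), where NNPS is the noise power spectrum normalized by the squared mean signal and q is the incident photon fluence. A toy numerical sketch (all curves and the fluence below are invented values, not the authors' measurements):

```python
import numpy as np

q = 2.5e4                                  # incident photon fluence, photons/mm^2 (toy)
f = np.linspace(0.05, 5.0, 100)            # spatial frequency, cycles/mm
mtf = np.exp(-(f / 2.5) ** 2)              # toy MTF
nnps = mtf ** 2 / (0.6 * q) + 1e-7         # toy NNPS: quantum term + additive noise floor

dqe = mtf ** 2 / (q * nnps)                # DQE(f) = MTF^2 / (q * NNPS)
```

In this toy model the quantum term caps DQE at 0.6 (the assumed absorption-limited ceiling), and the flat additive noise floor drags DQE down at high frequencies where the MTF has rolled off, mirroring the thickness-versus-resolution trade-off the abstract describes.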
Zago, Laure; Petit, Laurent; Jobard, Gael; Hay, Julien; Mazoyer, Bernard; Tzourio-Mazoyer, Nathalie; Karnath, Hans-Otto; Mellet, Emmanuel
2017-01-08
The objective of this study was to validate a line bisection judgement (LBJ) task for use in investigating the lateralized cerebral bases of spatial attention in a sample of 51 right-handed healthy participants. Using functional magnetic resonance imaging (fMRI), the participants performed a LBJ task that was compared to a visuomotor control task during which the participants made similar saccadic and motoric responses. Cerebral lateralization was determined using a voxel-based functional asymmetry analysis and a hemispheric functional lateralization index (HFLI) computed from fMRI contrast images. Behavioural attentional deviation biases were assessed during the LBJ task and a "paper and pencil" symbol cancellation task (SCT). Individual visuospatial skills were also evaluated. The results showed that both the LBJ and SCT tasks elicited leftward spatial biases in healthy subjects, although the biases were not correlated, which indicated their independence. Neuroimaging results showed that the LBJ task elicited a right hemispheric lateralization, with rightward asymmetries found in a large posterior occipito-parietal area, the posterior calcarine sulcus (V1p) and the temporo-occipital junction (TOJ) and in the inferior frontal gyrus, the anterior insula and the superior medial frontal gyrus. The comparison of the LBJ asymmetry map to the lesion map of neglect patients who suffer line bisection deviation demonstrated maximum overlap in a network that included the middle occipital gyrus (MOG), the TOJ, the anterior insula and the inferior frontal region, likely subtending spatial LBJ bias. Finally, the LBJ task-related cerebral lateralization was specifically correlated with the LBJ spatial bias but not with the SCT bias or with the visuospatial skills of the participants. 
Taken together, these results demonstrated that the LBJ task is adequate for investigating spatial lateralization in healthy subjects and is suitable for determining the factors underlying the variability of spatial cerebral lateralization. Copyright © 2017 Elsevier Ltd. All rights reserved.
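The hemispheric functional lateralization index (HFLI) used in this study is, in common practice, a normalized left-right contrast of activation measures; a minimal sketch of that conventional formula (the paper's exact fMRI contrast computation is not reproduced here):

```python
# Hedged sketch: an HFLI is commonly computed as (L - R) / (L + R) from
# left- and right-hemisphere activation measures. Positive values indicate
# leftward lateralization, negative values rightward. The inputs below are
# illustrative numbers, not data from the study.

def hfli(left_activation, right_activation):
    total = left_activation + right_activation
    if total == 0:
        return 0.0
    return (left_activation - right_activation) / total

# An LBJ-like task with stronger right-hemisphere activation -> negative HFLI
value = hfli(12.0, 30.0)
```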
Lessons Learned from Crowdsourcing Complex Engineering Tasks.
Staffelbach, Matthew; Sempolinski, Peter; Kijewski-Correa, Tracy; Thain, Douglas; Wei, Daniel; Kareem, Ahsan; Madey, Gregory
2015-01-01
Crowdsourcing is the practice of obtaining needed ideas, services, or content by requesting contributions from a large group of people. Amazon Mechanical Turk is a web marketplace for crowdsourcing microtasks, such as answering surveys and image tagging. We explored the limits of crowdsourcing by using Mechanical Turk for a more complicated task: analysis and creation of wind simulations. Our investigation examined the feasibility of using crowdsourcing for complex, highly technical tasks. This was done to determine if the benefits of crowdsourcing could be harnessed to accurately and effectively contribute to solving complex real world engineering problems. Of course, untrained crowds cannot be used as a mere substitute for trained expertise. Rather, we sought to understand how crowd workers can be used as a large pool of labor for a preliminary analysis of complex data. We compared the skill of the anonymous crowd workers from Amazon Mechanical Turk with that of civil engineering graduate students, making a first pass at analyzing wind simulation data. For the first phase, we posted analysis questions to Amazon crowd workers and to two groups of civil engineering graduate students. A second phase of our experiment instructed crowd workers and students to create simulations on our Virtual Wind Tunnel website to solve a more complex task. With a sufficiently comprehensive tutorial and compensation similar to typical crowdsourcing wages, we were able to enlist crowd workers to effectively complete longer, more complex tasks with competence comparable to that of graduate students with more comprehensive, expert-level knowledge. Furthermore, more complex tasks require increased communication with the workers. As tasks become more complex, the employment relationship begins to become more akin to outsourcing than crowdsourcing. Through this investigation, we were able to stretch and explore the limits of crowdsourcing as a tool for solving complex problems.
NASA Astrophysics Data System (ADS)
Bykovskii, Yurii A.; Markilov, A. A.; Rodin, V. G.; Starikov, S. N.
1995-10-01
A description is given of systems with spatially incoherent illumination, intended for spectral and correlation analysis, and for the recording of Fourier holograms. These systems make use of transformation of the degree of the spatial coherence of light. The results are given of the processing of images and signals, including those transmitted by a bundle of fibre-optic waveguides both as monochromatic light and as quasimonochromatic radiation from a cathode-ray tube. The feasibility of spatial frequency filtering and of correlation analysis of images with a bipolar impulse response is considered for systems with spatially incoherent illumination where these tasks are performed by double transformation of the spatial coherence of light. A description is given of experimental systems and the results of image processing are reported.
Topview stereo: combining vehicle-mounted wide-angle cameras to a distance sensor array
NASA Astrophysics Data System (ADS)
Houben, Sebastian
2015-03-01
The growing variety of vehicle-mounted sensors required to fulfill driver assistance tasks has become a substantial factor in automobile manufacturing cost. We present a stereo distance method exploiting the overlapping fields of view of a multi-camera fisheye surround-view system, such as those used for near-range vehicle surveillance tasks, e.g. in parking maneuvers. Hence, we aim at creating a new input signal from sensors that are already installed. Particular properties of wide-angle cameras (e.g. spatially varying resolution) demand an adaptation of the image processing pipeline to several problems that do not arise in classical stereo vision performed with cameras carefully designed for this purpose. We introduce the algorithms for rectification, correspondence analysis, and regularization of the disparity image, discuss reasons for and avoidance of the shown caveats, and present first results on a prototype topview setup.
Processing Distracting Non-face Emotional Images: No Evidence of an Age-Related Positivity Effect
Madill, Mark; Murray, Janice E.
2017-01-01
Cognitive aging may be accompanied by increased prioritization of social and emotional goals that enhance positive experiences and emotional states. The socioemotional selectivity theory suggests this may be achieved by giving preference to positive information and avoiding or suppressing negative information. Although there is some evidence of a positivity bias in controlled attention tasks, it remains unclear whether a positivity bias extends to the processing of affective stimuli presented outside focused attention. In two experiments, we investigated age-related differences in the effects of to-be-ignored non-face affective images on target processing. In Experiment 1, 27 older (64–90 years) and 25 young adults (19–29 years) made speeded valence judgments about centrally presented positive or negative target images taken from the International Affective Picture System. To-be-ignored distractor images were presented above and below the target image and were either positive, negative, or neutral in valence. The distractors were considered task relevant because they shared emotional characteristics with the target stimuli. Both older and young adults responded slower to targets when distractor valence was incongruent with target valence relative to when distractors were neutral. Older adults responded faster to positive than to negative targets but did not show increased interference effects from positive distractors. In Experiment 2, affective distractors were task irrelevant as the target was a three-digit array and did not share emotional characteristics with the distractors. Twenty-six older (63–84 years) and 30 young adults (18–30 years) gave speeded responses on a digit disparity task while ignoring the affective distractors positioned in the periphery. Task performance in either age group was not influenced by the task-irrelevant affective images. 
In keeping with the socioemotional selectivity theory, these findings suggest that older adults preferentially process task-relevant positive non-face images but only when presented within the main focus of attention. PMID:28450848
Image processing and recognition for biological images.
Uchida, Seiichi
2013-05-01
This paper reviews image processing and pattern recognition techniques, which will be useful to analyze bioimages. Although this paper does not provide their technical details, it will be possible to grasp their main tasks and typical tools to handle the tasks. Image processing is a large research area aimed at improving the visibility of an input image and acquiring valuable information from it. As the main tasks of image processing, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow and image registration. Image pattern recognition is the technique to classify an input image into one of the predefined classes and is also a large research area. This paper overviews its two main modules, that is, the feature extraction module and the classification module. Throughout the paper, it will be emphasized that bioimages are very difficult targets for even state-of-the-art image processing and pattern recognition techniques due to noise, deformation, etc. This paper is expected to serve as a tutorial guide bridging biology and image processing researchers for further collaboration to tackle such difficult targets. © 2013 The Author Development, Growth & Differentiation © 2013 Japanese Society of Developmental Biologists.
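One of the tasks the review lists, binarization, has a classic closed-form solution in Otsu's method, which picks the threshold maximizing between-class variance. A minimal pure-Python sketch (the toy pixel values are illustrative, not from the paper):

```python
# Hedged sketch of Otsu's threshold on 8-bit grayscale values.

def otsu_threshold(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0 = sum0 = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w0 += hist[t]
        if w0 == 0:
            continue                       # no pixels at or below t yet
        w1 = total - w0
        if w1 == 0:
            break                          # all pixels consumed
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy image: dark background around 20, bright cells around 200
pixels = [20, 22, 19, 21, 20, 200, 198, 202, 199, 201]
t = otsu_threshold(pixels)
binary = [1 if p > t else 0 for p in pixels]
```

The review's point about bioimages being hard targets applies directly: Otsu assumes a bimodal histogram, which noisy or unevenly illuminated micrographs often violate.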
Multispectral image fusion for target detection
NASA Astrophysics Data System (ADS)
Leviner, Marom; Maltz, Masha
2009-09-01
Various methods to perform multi-spectral image fusion have been suggested, mostly on the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature level processing paradigm. To test our method, we compared human observer performance in an experiment using MSSF against two established methods, averaging and Principal Component Analysis (PCA), and against its two source bands, visible and infrared. The task studied was target detection in a cluttered environment. MSSF proved superior to the other fusion methods. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general and specific fusion methods in particular would be superior to using the original image sources can be further addressed.
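The two pixel-level baselines the study compares against can be sketched compactly: averaging simply means each fused pixel, and PCA fusion weights the two bands by the principal eigenvector of their 2x2 covariance matrix. A hedged toy version (MSSF itself, a feature-level method, is not reproduced; band values are illustrative):

```python
# Hedged sketch of pixel-level averaging and PCA-weighted fusion on
# flattened grayscale bands.

def fuse_average(a, b):
    return [(x + y) / 2 for x, y in zip(a, b)]

def fuse_pca(a, b):
    """Weight each band by the principal eigenvector of their covariance."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    caa = sum((x - ma) ** 2 for x in a) / n
    cbb = sum((y - mb) ** 2 for y in b) / n
    cab = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    # principal eigenvalue of [[caa, cab], [cab, cbb]] (closed form)
    lam = 0.5 * (caa + cbb + ((caa - cbb) ** 2 + 4 * cab ** 2) ** 0.5)
    v1, v2 = cab, lam - caa                # unnormalized eigenvector
    if v1 == 0 and v2 == 0:
        v1 = v2 = 1.0                      # uncorrelated bands: equal weight
    w1, w2 = v1 / (v1 + v2), v2 / (v1 + v2)
    return [w1 * x + w2 * y for x, y in zip(a, b)]

visible = [10.0, 50.0, 90.0, 130.0]
infrared = [12.0, 48.0, 95.0, 125.0]
avg = fuse_average(visible, infrared)
pca = fuse_pca(visible, infrared)
```

With positively correlated bands both weights are positive and sum to one, so PCA fusion is a convex combination favoring the higher-variance band.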
Baracat, Patrícia Junqueira Ferraz; de Sá Ferreira, Arthur
2013-12-01
The present study investigated the association between postural tasks and center of pressure spatial patterns of three-dimensional statokinesigrams. Young (n=35; 27.0±7.7years) and elderly (n=38; 67.3±8.7years) healthy volunteers maintained an undisturbed standing position during postural tasks characterized by combined sensory (vision/no vision) and biomechanical challenges (feet apart/together). A method for the analysis of three-dimensional statokinesigrams based on nonparametric statistics and image-processing analysis was employed. Four patterns of spatial distribution were derived from ankle and hip strategies according to the quantity (single; double; multi) and location (anteroposterior; mediolateral) of high-density regions on three-dimensional statokinesigrams. Significant associations between postural task and spatial pattern were observed (young: gamma=0.548, p<.001; elderly: gamma=0.582, p<.001). Robustness analysis revealed small changes related to parameter choices for histogram processing. MANOVA revealed multivariate main effects for postural task [Wilks' Lambda=0.245, p<.001] and age [Wilks' Lambda=0.308, p<.001], with interaction [Wilks' Lambda=0.732, p<.001]. The quantity of high-density regions was positively correlated to stabilogram and statokinesigram variables (p<.05 or lower). In conclusion, postural tasks are associated with center of pressure spatial patterns and are similar in young and elderly healthy volunteers. Single-centered patterns reflected more stable postural conditions and were more frequent with complete visual input and a wide base of support. Copyright © 2013 Elsevier B.V. All rights reserved.
Abbey, Craig K.; Zemp, Roger J.; Liu, Jie; Lindfors, Karen K.; Insana, Michael F.
2009-01-01
We investigate and extend the ideal observer methodology developed by Smith and Wagner to detection and discrimination tasks related to breast sonography. We provide a numerical approach for evaluating the ideal observer acting on radio-frequency (RF) frame data, which involves inversion of large nonstationary covariance matrices, and we describe a power-series approach to computing this inverse. Considering a truncated power series suggests that the RF data be Wiener-filtered before forming the final envelope image. We have compared human performance for Wiener-filtered and conventional B-mode envelope images using psychophysical studies for five tasks related to breast cancer classification. We find significant improvements in visual detection and discrimination efficiency in four of these five tasks. We also use the Smith-Wagner approach to distinguish between human and processing inefficiencies, and find that generally the principal limitation comes from the information lost in computing the final envelope image. PMID:16468454
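The Wiener-filtering step the abstract motivates reduces, in its simplest stationary form, to a per-frequency gain H = S/(S+N) applied to the RF spectrum, where S and N are signal and noise power. A minimal sketch of that simplified form (the paper's full nonstationary covariance treatment is not reproduced; spectra are illustrative):

```python
# Hedged, simplified sketch: stationary Wiener gain per frequency bin,
# standing in for the paper's nonstationary covariance inversion.

def wiener_gain(signal_power, noise_power):
    return [s / (s + n) for s, n in zip(signal_power, noise_power)]

def apply_gain(spectrum, gain):
    return [x * g for x, g in zip(spectrum, gain)]

S = [9.0, 4.0, 1.0, 0.25]      # assumed signal power per frequency bin
N = [1.0, 1.0, 1.0, 1.0]       # assumed white noise power
H = wiener_gain(S, N)
filtered = apply_gain([10.0, 10.0, 10.0, 10.0], H)
```

The gain approaches 1 where the signal dominates and 0 where noise dominates, which is why filtering before envelope detection preserves task-relevant information that the envelope step would otherwise scramble with noise.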
Coactivation of cognitive control networks during task switching.
Yin, Shouhang; Deák, Gedeon; Chen, Antao
2018-01-01
The ability to flexibly switch between tasks is considered an important component of cognitive control that involves frontal and parietal cortical areas. The present study was designed to characterize network dynamics across multiple brain regions during task switching. Functional magnetic resonance images (fMRI) were captured during a standard rule-switching task to identify switching-related brain regions. Multiregional psychophysiological interaction (PPI) analysis was used to examine effective connectivity between these regions. During switching trials, behavioral performance declined and activation of a generic cognitive control network increased. Concurrently, task-related connectivity increased within and between cingulo-opercular and fronto-parietal cognitive control networks. Notably, the left inferior frontal junction (IFJ) was most consistently coactivated with the 2 cognitive control networks. Furthermore, switching-dependent effective connectivity was negatively correlated with behavioral switch costs. The strength of effective connectivity between left IFJ and other regions in the networks predicted individual differences in switch costs. Task switching was supported by coactivated connections within cognitive control networks, with left IFJ potentially acting as a key hub between the fronto-parietal and cingulo-opercular networks. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Partially converted stereoscopic images and the effects on visual attention and memory
NASA Astrophysics Data System (ADS)
Kim, Sanghyun; Morikawa, Hiroyuki; Mitsuya, Reiko; Kawai, Takashi; Watanabe, Katsumi
2015-03-01
This study comprised two experimental examinations of cognitive activities such as visual attention and memory in viewing stereoscopic (3D) images. For this study, partially converted 3D images were used with binocular parallax added to a specific region of the image. In Experiment 1, change blindness was used as a presented stimulus. The visual attention and impact on memory were investigated by measuring the response time to accomplish the given task. In the change blindness task, an 80 ms blank was intersected between the original and altered images, and the two images were presented alternatingly for 240 ms each. Subjects were asked to temporarily memorize the two switching images and to compare them, visually recognizing the difference between the two. The stimuli for four conditions (2D, 3D, partially converted 3D, distracted partially converted 3D) were randomly displayed for 20 subjects. The results of Experiment 1 showed that partially converted 3D images tend to attract visual attention and are prone to remain in viewers' memory in the area where moderate negative parallax has been added. In order to examine the impact of a dynamic binocular disparity on partially converted 3D images, an evaluation experiment was conducted that applied learning, distraction, and recognition tasks with 33 subjects. The learning task involved memorizing the location of cells in a 5 × 5 matrix pattern using two different colors. Two cells were positioned with alternating colors, and one of the gray cells was moved up, down, left, or right by one cell width. Experimental conditions were set as a partially converted 3D condition in which a gray cell moved diagonally for a certain period of time with a dynamic binocular disparity added, a 3D condition in which binocular disparity was added to all gray cells, and a 2D condition. The correct response rates for recognition of each task after the distraction task were compared. 
The results of Experiment 2 showed that the correct response rate in the partial 3D condition was significantly higher for the recognition task than in the other conditions. These results showed that partially converted 3D images tended to attract visual attention and to affect viewers' memory.
Calhoun, Vince D; Kiehl, Kent A; Pearlson, Godfrey D
2008-07-01
Brain regions which exhibit temporally coherent fluctuations, have been increasingly studied using functional magnetic resonance imaging (fMRI). Such networks are often identified in the context of an fMRI scan collected during rest (and thus are called "resting state networks"); however, they are also present during (and modulated by) the performance of a cognitive task. In this article, we will refer to such networks as temporally coherent networks (TCNs). Although there is still some debate over the physiological source of these fluctuations, TCNs are being studied in a variety of ways. Recent studies have examined ways TCNs can be used to identify patterns associated with various brain disorders (e.g. schizophrenia, autism or Alzheimer's disease). Independent component analysis (ICA) is one method being used to identify TCNs. ICA is a data driven approach which is especially useful for decomposing activation during complex cognitive tasks where multiple operations occur simultaneously. In this article we review recent TCN studies with emphasis on those that use ICA. We also present new results showing that TCNs are robust, and can be consistently identified at rest and during performance of a cognitive task in healthy individuals and in patients with schizophrenia. In addition, multiple TCNs show temporal and spatial modulation during the cognitive task versus rest. In summary, TCNs show considerable promise as potential imaging biological markers of brain diseases, though each network needs to be studied in more detail. (c) 2008 Wiley-Liss, Inc.
MultiDrizzle: An Integrated Pyraf Script for Registering, Cleaning and Combining Images
NASA Astrophysics Data System (ADS)
Koekemoer, A. M.; Fruchter, A. S.; Hook, R. N.; Hack, W.
We present the new PyRAF-based `MultiDrizzle' script, which is aimed at providing a one-step approach to combining dithered HST images. The purpose of this script is to allow easy interaction with the complex suite of tasks in the IRAF/STSDAS `dither' package, as well as the new `PyDrizzle' task, while at the same time retaining the flexibility of these tasks through a number of parameters. These parameters control the various individual steps, such as sky subtraction, image registration, `drizzling' onto separate output images, creation of a clean median image, transformation of the median with `blot' and creation of cosmic ray masks, as well as the final image combination step using `drizzle'. The default parameters of all the steps are set so that the task will work automatically for a wide variety of different types of images, while at the same time allowing adjustment of individual parameters for special cases. The script currently works for both ACS and WFPC2 data, and is now being tested on STIS and NICMOS images. We describe the operation of the script and the effect of various parameters, particularly in the context of combining images from dithered observations using ACS and WFPC2. Additional information is also available at the `MultiDrizzle' home page: http://www.stsci.edu/~koekemoe/multidrizzle/
Ghafoorian, Mohsen; Karssemeijer, Nico; Heskes, Tom; van Uden, Inge W M; Sanchez, Clara I; Litjens, Geert; de Leeuw, Frank-Erik; van Ginneken, Bram; Marchiori, Elena; Platel, Bram
2017-07-11
The anatomical location of imaging features is of crucial importance for accurate diagnosis in many medical tasks. Convolutional neural networks (CNN) have had huge successes in computer vision, but they lack the natural ability to incorporate the anatomical location in their decision making process, hindering success in some medical image analysis tasks. In this paper, to integrate the anatomical location information into the network, we propose several deep CNN architectures that consider multi-scale patches or take explicit location features while training. We apply and compare the proposed architectures for segmentation of white matter hyperintensities in brain MR images on a large dataset. As a result, we observe that the CNNs that incorporate location information substantially outperform a conventional segmentation method with handcrafted features as well as CNNs that do not integrate location information. On a test set of 50 scans, the best configuration of our networks obtained a Dice score of 0.792, compared to 0.805 for an independent human observer. Performance levels of the machine and the independent human observer were not statistically significantly different (p-value = 0.06).
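The multi-scale patch idea from this paper can be illustrated in miniature: for one voxel, crop a small full-resolution patch for local texture and a larger, downsampled patch for anatomical context, and feed both to the network. A hedged pure-Python sketch of just the patch extraction (the CNN itself and the paper's exact patch sizes are not reproduced):

```python
# Hedged sketch: extract a fine 3x3 patch and a downsampled 4x4 context
# patch around one pixel of a toy 2D "image". Sizes are illustrative.

def crop(img, cx, cy, half):
    return [row[cx - half:cx + half + 1]
            for row in img[cy - half:cy + half + 1]]

def downsample2(patch):
    """Naive 2x downsampling by taking every other pixel."""
    return [row[::2] for row in patch[::2]]

def multi_scale_patches(img, cx, cy):
    fine = crop(img, cx, cy, 1)                   # 3x3 at full resolution
    context = downsample2(crop(img, cx, cy, 3))   # 7x7 crop -> 4x4 context
    return fine, context

# Toy image where pixel value encodes position: img[y][x] = x + 10*y
img = [[x + 10 * y for x in range(9)] for y in range(9)]
fine, context = multi_scale_patches(img, 4, 4)
```

The context patch covers more anatomy at the same tensor cost, which is one of the two routes (alongside explicit location features) the paper evaluates for injecting location information.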
A method to classify schizophrenia using inter-task spatial correlations of functional brain images.
Michael, Andrew M; Calhoun, Vince D; Andreasen, Nancy C; Baum, Stefi A
2008-01-01
The clinical heterogeneity of schizophrenia (scz) and the overlap of self reported and observed symptoms with other mental disorders makes its diagnosis a difficult task. At present no laboratory-based or image-based diagnostic tool for scz exists and such tools are desired to support existing methods for more precise diagnosis. Functional magnetic resonance imaging (fMRI) is currently employed to identify and correlate cognitive processes related to scz and its symptoms. Fusion of multiple fMRI tasks that probe different cognitive processes may help to better understand hidden networks of this complex disorder. In this paper we utilize three different fMRI tasks and introduce an approach to classify subjects based on inter-task spatial correlations of brain activation. The technique was applied to groups of patients and controls and its validity was checked with the leave-one-out method. We show that the classification rate increases when information from multiple tasks are combined.
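The feature construction this abstract describes, inter-task spatial correlations, amounts to computing Pearson correlations between a subject's flattened activation maps from different fMRI tasks and using those values as classifier inputs. A minimal hedged sketch (toy maps, not the paper's data; the downstream classifier and leave-one-out validation are omitted):

```python
# Hedged sketch: pairwise Pearson correlations between flattened activation
# maps, collected into a per-subject feature vector.

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

def inter_task_features(maps):
    """All pairwise correlations between flattened activation maps."""
    feats = []
    for i in range(len(maps)):
        for j in range(i + 1, len(maps)):
            feats.append(pearson(maps[i], maps[j]))
    return feats

# Three toy activation maps (flattened) for one subject
task_a = [0.1, 0.9, 0.4, 0.7]
task_b = [0.2, 0.8, 0.5, 0.6]   # similar to task_a
task_c = [0.9, 0.1, 0.6, 0.3]   # anti-correlated with task_a
feats = inter_task_features([task_a, task_b, task_c])
```

With three tasks each subject yields three correlation features; groups are then separated by how their task networks co-activate rather than by any single task's map.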
Campagnola, Luke; Kratz, Megan B; Manis, Paul B
2014-01-01
The complexity of modern neurophysiology experiments requires specialized software to coordinate multiple acquisition devices and analyze the collected data. We have developed ACQ4, an open-source software platform for performing data acquisition and analysis in experimental neurophysiology. This software integrates the tasks of acquiring, managing, and analyzing experimental data. ACQ4 has been used primarily for standard patch-clamp electrophysiology, laser scanning photostimulation, multiphoton microscopy, intrinsic imaging, and calcium imaging. The system is highly modular, which facilitates the addition of new devices and functionality. The modules included with ACQ4 provide for rapid construction of acquisition protocols, live video display, and customizable analysis tools. Position-aware data collection allows automated construction of image mosaics and registration of images with 3-dimensional anatomical atlases. ACQ4 uses free and open-source tools including Python, NumPy/SciPy for numerical computation, PyQt for the user interface, and PyQtGraph for scientific graphics. Supported hardware includes cameras, patch clamp amplifiers, scanning mirrors, lasers, shutters, Pockels cells, motorized stages, and more. ACQ4 is available for download at http://www.acq4.org.
Fritscher, Karl; Grunerbl, Agnes; Hanni, Markus; Suhm, Norbert; Hengg, Clemens; Schubert, Rainer
2009-10-01
Currently, conventional X-ray and CT images as well as invasive methods performed during the surgical intervention are used to judge the local quality of a fractured proximal femur. However, these approaches are either dependent on the surgeon's experience or cannot assist diagnostic and planning tasks preoperatively. Therefore, in this work a method for the individual analysis of local bone quality in the proximal femur based on model-based analysis of CT and X-ray images of femur specimens is proposed. A combined representation of shape and spatial intensity distribution of an object and different statistical approaches for dimensionality reduction are used to create a statistical appearance model in order to assess the local bone quality in CT and X-ray images. The developed algorithms are tested and evaluated on 28 femur specimens. It will be shown that the tools and algorithms presented herein can automatically and objectively predict bone mineral density values as well as a biomechanical parameter of the bone that can be measured intraoperatively.
Mirion--a software package for automatic processing of mass spectrometric images.
Paschke, C; Leisner, A; Hester, A; Maass, K; Guenther, S; Bouschen, W; Spengler, B
2013-08-01
Mass spectrometric imaging (MSI) techniques are of growing interest for the Life Sciences. In recent years, the development of new instruments employing ion sources that are tailored for spatial scanning allowed the acquisition of large data sets. A subsequent data processing, however, is still a bottleneck in the analytical process, as a manual data interpretation is impossible within a reasonable time frame. The transformation of mass spectrometric data into spatial distribution images of detected compounds turned out to be the most appropriate method to visualize the results of such scans, as humans are able to interpret images faster and easier than plain numbers. Image generation, thus, is a time-consuming and complex yet very efficient task. The free software package "Mirion," presented in this paper, allows the handling and analysis of data sets acquired by mass spectrometry imaging. Mirion can be used for image processing of MSI data obtained from many different sources, as it uses the HUPO-PSI-based standard data format imzML, which is implemented in the proprietary software of most of the mass spectrometer companies. Different graphical representations of the recorded data are available. Furthermore, automatic calculation and overlay of mass spectrometric images promotes direct comparison of different analytes for data evaluation. The program also includes tools for image processing and image analysis.
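The core transformation Mirion performs, turning per-pixel mass spectra into a spatial intensity image for one m/z of interest, can be sketched in a few lines. A hedged toy version (imzML parsing and Mirion's actual API are not reproduced; the peak lists are illustrative):

```python
# Hedged sketch: build an ion image by summing, at each pixel, the peak
# intensities falling within a tolerance window around a target m/z.

def ion_image(spectra, width, height, target_mz, tol=0.1):
    """spectra: {(x, y): [(mz, intensity), ...]} -> 2D intensity grid."""
    img = [[0.0] * width for _ in range(height)]
    for (x, y), peaks in spectra.items():
        img[y][x] = sum(i for mz, i in peaks if abs(mz - target_mz) <= tol)
    return img

# Toy 2x2 scan; (782.6, ...) stands in for a lipid-like peak of interest
spectra = {
    (0, 0): [(500.0, 3.0), (782.6, 10.0)],
    (1, 0): [(782.6, 25.0)],
    (0, 1): [(499.9, 7.0)],
    (1, 1): [],
}
img = ion_image(spectra, 2, 2, target_mz=782.6)
```

Automating exactly this per-analyte extraction, plus overlaying several such images, is what removes the manual-interpretation bottleneck the abstract describes.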
NASA Astrophysics Data System (ADS)
Irshad, Humayun; Oh, Eun-Yeong; Schmolze, Daniel; Quintana, Liza M.; Collins, Laura; Tamimi, Rulla M.; Beck, Andrew H.
2017-02-01
The assessment of protein expression in immunohistochemistry (IHC) images provides important diagnostic, prognostic and predictive information for guiding cancer diagnosis and therapy. Manual scoring of IHC images represents a logistical challenge, as the process is labor intensive and time consuming. Over the last decade, computational methods have been developed to enable quantitative analysis and interpretation of protein expression in IHC images. These methods have not yet replaced manual scoring for the assessment of IHC in the majority of diagnostic laboratories and in many large-scale research studies. An alternative approach is to crowdsource the quantification of IHC images to an undefined crowd. The aim of this study is to quantify IHC images for labeling of ER status with two different crowdsourcing approaches, image-labeling and nuclei-labeling, and to compare their performance with automated methods. On 5,338 TMA images from 1,853 breast cancer patients, crowdsourcing-derived scores achieved greater concordance with pathologist interpretations for both the image-labeling and nuclei-labeling tasks (83% and 87%, respectively) than the automated method (81%). This analysis shows that crowdsourcing the scoring of protein expression in IHC images is a promising new approach for large-scale cancer molecular pathology studies.
Cognitive approaches for patterns analysis and security applications
NASA Astrophysics Data System (ADS)
Ogiela, Marek R.; Ogiela, Lidia
2017-08-01
This paper presents new opportunities for developing innovative solutions for semantic pattern classification and visual cryptography based on cognitive and bio-inspired approaches. Such techniques can be used to evaluate the meaning of analyzed patterns or encrypted information, and allow that meaning to be incorporated into the classification task or the encryption process. They also make it possible to use crypto-biometric solutions to extend personalized cryptography methodologies based on visual pattern analysis. In particular, the application of cognitive information systems to the semantic analysis of different patterns is presented, and a novel application of such systems to visual secret sharing is described. Visual shares of the divided information can be created by a threshold procedure, which may depend on a person's ability to recognize image details visible in the divided images.
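The threshold-based visual secret sharing mentioned above can be illustrated with a minimal (2, 2) scheme in the spirit of Naor–Shamir (subpixel expansion omitted for brevity). This is a generic sketch of the idea, not the authors' scheme:

```python
import numpy as np

rng = np.random.default_rng(2)
secret = rng.random((8, 8)) < 0.5            # binary secret image (True = black)

# Share 1 is pure noise; share 2 equals share 1 where the secret is white
# and its complement where the secret is black.
share1 = rng.random(secret.shape) < 0.5
share2 = np.where(secret, ~share1, share1)

# Stacking the transparencies acts as a pixelwise OR: black secret pixels
# become fully black, white ones remain random (half-toned), so the secret
# is recoverable by eye but neither share alone reveals anything.
overlay = share1 | share2
```

A (k, n) threshold variant distributes n such shares so that any k of them reconstruct the secret, which is where recognition ability of the viewer, as discussed in the abstract, comes into play.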
High-throughput Analysis of Large Microscopy Image Datasets on CPU-GPU Cluster Platforms
Teodoro, George; Pan, Tony; Kurc, Tahsin M.; Kong, Jun; Cooper, Lee A. D.; Podhorszki, Norbert; Klasky, Scott; Saltz, Joel H.
2014-01-01
Analysis of large pathology image datasets offers significant opportunities for the investigation of disease morphology, but the resource requirements of analysis pipelines limit the scale of such studies. Motivated by a brain cancer study, we propose and evaluate a parallel image analysis application pipeline for high throughput computation of large datasets of high resolution pathology tissue images on distributed CPU-GPU platforms. To achieve efficient execution on these hybrid systems, we have built runtime support that allows us to express the cancer image analysis application as a hierarchical data processing pipeline. The application is implemented as a coarse-grain pipeline of stages, where each stage may be further partitioned into another pipeline of fine-grain operations. The fine-grain operations are efficiently managed and scheduled for computation on CPUs and GPUs using performance aware scheduling techniques along with several optimizations, including architecture aware process placement, data locality conscious task assignment, data prefetching, and asynchronous data copy. These optimizations are employed to maximize the utilization of the aggregate computing power of CPUs and GPUs and minimize data copy overheads. Our experimental evaluation shows that the cooperative use of CPUs and GPUs achieves significant improvements on top of GPU-only versions (up to 1.6×) and that the execution of the application as a set of fine-grain operations provides more opportunities for runtime optimizations and attains better performance than coarser-grain, monolithic implementations used in other works. An implementation of the cancer image analysis pipeline using the runtime support was able to process an image dataset consisting of 36,848 4Kx4K-pixel image tiles (about 1.8TB uncompressed) in less than 4 minutes (150 tiles/second) on 100 nodes of a state-of-the-art hybrid cluster system. PMID:25419546
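The hierarchical pipeline structure described above — coarse-grain stages per tile, each decomposed into fine-grain operations that a runtime could schedule on CPUs or GPUs — can be sketched in miniature. All function names and data here are illustrative stand-ins, not the paper's runtime:

```python
from concurrent.futures import ThreadPoolExecutor

def normalize(tile):                  # fine-grain operation 1
    lo = min(tile)
    return [v - lo for v in tile]

def segment(tile):                    # fine-grain operation 2 (stand-in
    return [1 if v > 2 else 0 for v in tile]  # for real segmentation)

def process_tile(tile):               # coarse-grain stage = chain of fine ops
    return sum(segment(normalize(tile)))

# Each tile is an independent work item, so the coarse stage parallelizes
# trivially across a worker pool; a CPU-GPU runtime would additionally
# schedule the fine-grain ops onto the best-suited device.
tiles = [[3, 7, 2, 9], [5, 5, 5, 5], [0, 1, 8, 8]]
with ThreadPoolExecutor(max_workers=4) as pool:
    counts = list(pool.map(process_tile, tiles))
```

The performance-aware scheduling, data prefetching, and asynchronous copies from the paper all live below this level of abstraction, in the runtime that decides where each fine-grain operation executes.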
NASA Technical Reports Server (NTRS)
Shapiro, Linda G.; Tanimoto, Steven L.; Ahrens, James P.
1996-01-01
The goal of this task was to create a design and prototype implementation of a database environment particularly suited to handling the image, vision and scientific data associated with NASA's EOC Amazon project. The focus was on a data model and query facilities designed to execute efficiently on parallel computers. A key feature of the environment is an interface that allows a scientist to specify high-level directives about how query execution should occur.
Time-lapse microscopy using smartphone with augmented reality markers.
Baek, Dongyoub; Cho, Sungmin; Yun, Kyungwon; Youn, Keehong; Bang, Hyunwoo
2014-04-01
A prototype system that replaces conventional time-lapse imaging in microscopic inspection with a smartphone-based approach is presented. Existing time-lapse imaging requires a video data feed between a microscope and a computer that varies depending on the type of image grabber. Even with a proper hardware setup, a series of tedious and repetitive tasks is still required to relocate to the region-of-interest (ROI) of the specimens. In order to simplify the system and improve the efficiency of time-lapse imaging tasks, a smartphone-based platform utilizing microscopic augmented reality (μ-AR) markers is proposed. To evaluate the feasibility and efficiency of the proposed system, a user test was designed and performed, measuring the elapsed time for a trial of the task from the launch of the application software to the completion of restoring and imaging an ROI saved in advance. The results of the user test showed that the average elapsed time was 65.3 ± 15.2 s, with a position error of 6.86 ± 3.61 μm and an angle error of 0.08 ± 0.40 degrees. This indicates that the time-lapse imaging task was accomplished rapidly and with a high level of accuracy. Thus, simplification of both the system and the task was achieved via our proposed system. Copyright © 2014 Wiley Periodicals, Inc.
Prisman, Eitan; Daly, Michael J; Chan, Harley; Siewerdsen, Jeffrey H; Vescan, Allan; Irish, Jonathan C
2011-01-01
Custom software was developed to integrate intraoperative cone-beam computed tomography (CBCT) images with endoscopic video for surgical navigation and guidance. A cadaveric head was used to assess the accuracy and potential clinical utility of the following functionality: (1) real-time tracking of the endoscope in intraoperative 3-dimensional (3D) CBCT; (2) projecting an orthogonal reconstructed CBCT image, at or beyond the endoscope tip, parallel to the endoscope and corresponding to the surgical plane; (3) virtual reality fusion of endoscopic video and 3D CBCT surface rendering; and (4) overlay of preoperatively defined contours of anatomical structures of interest. Anatomical landmarks were contoured in CBCT of a cadaveric head. An experienced endoscopic surgeon was oriented to the software and asked to rate the utility of the navigation software in carrying out predefined surgical tasks. Utility was evaluated using a rating scale for: (1) safely completing the task; and (2) potential for surgical training. Surgical tasks included: (1) uncinectomy; (2) ethmoidectomy; (3) sphenoidectomy/pituitary resection; and (4) clival resection. CBCT images were updated following each ablative task. As a teaching tool, the software was evaluated as "very useful" for all surgical tasks. Regarding safety and task completion, the software was evaluated as "no advantage" for task (1), "minimal" for task (2), and "very useful" for tasks (3) and (4). Landmark identification for structures behind bone was "very useful" for both categories. The software increased surgical confidence in safely completing challenging ablative tasks by presenting real-time image guidance for highly complex ablative procedures. In addition, such technology offers a valuable teaching aid to surgeons in training. Copyright © 2011 American Rhinologic Society-American Academy of Otolaryngic Allergy, LLC.
ERIC Educational Resources Information Center
Dahl, Bettina
2017-01-01
Six US first-year university students in humanities or social science degree programmes were interviewed while solving 4 tasks on continuity and asymptotes in a required mathematics course. The focus was on how the students referred to the definitions or to the concept images when solving the tasks and if partial understandings appeared. Partial…
Engineering the Ideal Gigapixel Image Viewer
NASA Astrophysics Data System (ADS)
Perpeet, D.; Wassenberg, J.
2011-09-01
Despite improvements in automatic processing, analysts are still faced with the task of evaluating gigapixel-scale mosaics or images acquired by telescopes such as Pan-STARRS. Displaying such images in ‘ideal’ form is a major challenge even today, and the amount of data will only increase as sensor resolutions improve. In our opinion, the ideal viewer has several key characteristics. Lossless display - down to individual pixels - ensures all information can be extracted from the image. Support for all relevant pixel formats (integer or floating point) allows displaying data from different sensors. Smooth zooming and panning in the high-resolution data enables rapid screening and navigation in the image. High responsiveness to input commands avoids frustrating delays. Instantaneous image enhancement, e.g. contrast adjustment and image channel selection, helps with analysis tasks. Modest system requirements allow viewing on regular workstation computers or even laptops. To the best of our knowledge, no such software product is currently available. Meeting these goals requires addressing certain realities of current computer architectures. GPU hardware accelerates rendering and allows smooth zooming without high CPU load. Programmable GPU shaders enable instant channel selection and contrast adjustment without any perceptible slowdown or changes to the input data. Relatively low disk transfer speeds suggest the use of compression to decrease the amount of data to transfer. Asynchronous I/O allows decompressing while waiting for previous I/O operations to complete. The slow seek times of magnetic disks motivate optimizing the order of the data on disk. Vectorization and parallelization allow significant increases in computational capacity. Limited memory requires streaming and caching of image regions. We develop a viewer that takes the above issues into account. 
Its awareness of the computer architecture enables previously unattainable features such as smooth zooming and image enhancement within high-resolution data. We describe our implementation, disclosing its novel file format and lossless image codec whose decompression is faster than copying the raw data in memory. Both provide crucial performance boosts compared to conventional approaches. Usability tests demonstrate the suitability of our viewer for rapid analysis of large SAR datasets, multispectral satellite imagery and mosaics.
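The "limited memory requires streaming and caching of image regions" point above is typically realized with an LRU cache of decoded tiles keyed by zoom level and position. The following is a generic sketch of that idea, not the authors' implementation; the `TileCache` class and its keys are illustrative:

```python
from collections import OrderedDict

class TileCache:
    """Small LRU cache of decoded tiles keyed by (zoom level, x, y)."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self._tiles = OrderedDict()

    def get(self, key, load):
        if key in self._tiles:
            self._tiles.move_to_end(key)      # mark as most recently used
            return self._tiles[key]
        tile = load(key)                      # stream/decode from disk
        self._tiles[key] = tile
        if len(self._tiles) > self.capacity:
            self._tiles.popitem(last=False)   # evict least recently used
        return tile

loads = []
def fake_load(key):                           # stand-in for disk I/O + decode
    loads.append(key)
    return "tile%s" % (key,)

cache = TileCache(capacity=2)
cache.get((0, 0, 0), fake_load)
cache.get((0, 0, 0), fake_load)   # cache hit: no second load
cache.get((0, 1, 0), fake_load)
cache.get((0, 2, 0), fake_load)   # exceeds capacity: evicts (0, 0, 0)
```

In a real viewer the `load` path would be the asynchronous, decompression-backed I/O the abstract describes, and the cache capacity would be set from available memory.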
Extracting Intrinsic Functional Networks with Feature-Based Group Independent Component Analysis
ERIC Educational Resources Information Center
Calhoun, Vince D.; Allen, Elena
2013-01-01
There is increasing use of functional imaging data to understand the macro-connectome of the human brain. Of particular interest is the structure and function of intrinsic networks (regions exhibiting temporally coherent activity both at rest and while a task is being performed), which account for a significant portion of the variance in…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, E.L.
A novel method for performing real-time acquisition and processing Landsat/EROS data covers all aspects including radiometric and geometric corrections of multispectral scanner or return-beam vidicon inputs, image enhancement, statistical analysis, feature extraction, and classification. Radiometric transformations include bias/gain adjustment, noise suppression, calibration, scan angle compensation, and illumination compensation, including topography and atmospheric effects. Correction or compensation for geometric distortion includes sensor-related distortions, such as centering, skew, size, scan nonlinearity, radial symmetry, and tangential symmetry. Also included are object image-related distortions such as aspect angle (altitude), scale distortion (altitude), terrain relief, and earth curvature. Ephemeral corrections are also applied to compensate for satellite forward movement, earth rotation, altitude variations, satellite vibration, and mirror scan velocity. Image enhancement includes high-pass, low-pass, and Laplacian mask filtering and data restoration for intermittent losses. Resource classification is provided by statistical analysis including histograms, correlational analysis, matrix manipulations, and determination of spectral responses. Feature extraction includes spatial frequency analysis, which is used in parallel discriminant functions in each array processor for rapid determination. The technique uses integrated parallel array processors that decimate the tasks concurrently under supervision of a control processor. The operator-machine interface is optimized for programming ease and graphics image windowing.
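The bias/gain radiometric adjustment named first in the list above is, at its core, a per-band linear rescaling of raw digital numbers to at-sensor radiance. A minimal sketch follows; the gain and bias coefficients are hypothetical, not actual Landsat calibration values:

```python
import numpy as np

# Raw digital numbers (DN) for one band of a tiny 2x2 scene.
dn = np.array([[10, 40], [80, 255]], dtype=float)

# Hypothetical per-band calibration coefficients: radiance = gain*DN + bias.
gain, bias = 0.76, -1.5
radiance = gain * dn + bias
radiance = np.clip(radiance, 0.0, None)   # physical radiance is non-negative
```

Scan-angle and illumination compensation would then apply further multiplicative factors per pixel on top of this band-level correction.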
Zhao, Qing; Li, Zhi; Huang, Jia; Yan, Chao; Dazzan, Paola; Pantelis, Christos; Cheung, Eric F C; Lui, Simon S Y; Chan, Raymond C K
2014-05-01
Neurological soft signs (NSS) are associated with schizophrenia and related psychotic disorders. NSS have been conventionally considered as clinical neurological signs without localized brain regions. However, recent brain imaging studies suggest that NSS are partly localizable and may be associated with deficits in specific brain areas. We conducted an activation likelihood estimation meta-analysis to quantitatively review structural and functional imaging studies that evaluated the brain correlates of NSS in patients with schizophrenia and other psychotic disorders. Six structural magnetic resonance imaging (sMRI) and 15 functional magnetic resonance imaging (fMRI) studies were included. The results from meta-analysis of the sMRI studies indicated that NSS were associated with atrophy of the precentral gyrus, the cerebellum, the inferior frontal gyrus, and the thalamus. The results from meta-analysis of the fMRI studies demonstrated that the NSS-related task was significantly associated with altered brain activation in the inferior frontal gyrus, bilateral putamen, the cerebellum, and the superior temporal gyrus. Our findings from both sMRI and fMRI meta-analyses further support the conceptualization of NSS as a manifestation of the "cerebello-thalamo-prefrontal" brain network model of schizophrenia and related psychotic disorders.
NASA Astrophysics Data System (ADS)
Brun, F.; Intranuovo, F.; Mohammadi, S.; Domingos, M.; Favia, P.; Tromba, G.
2013-07-01
The technique used to produce a 3D tissue engineering (TE) scaffold is of fundamental importance in order to guarantee its proper morphological characteristics. An accurate assessment of the resulting structural properties is therefore crucial in order to evaluate the effectiveness of the produced scaffold. Synchrotron radiation (SR) computed microtomography (μ-CT) combined with further image analysis seems to be one of the most effective techniques to this aim. However, quantitative assessment of morphological parameters directly from the reconstructed images is a nontrivial task. This study considers two different poly(ε-caprolactone) (PCL) scaffolds, fabricated with a conventional technique (Solvent Casting Particulate Leaching, SCPL) and an additive manufacturing (AM) technique (BioCell Printing), respectively. The first technique produces scaffolds with random, non-regular, rounded pore geometry. The AM technique instead produces scaffolds with square-shaped interconnected pores of regular dimension. The final morphology of the AM scaffolds can therefore be predicted, and the resulting model can be used to validate the applied imaging and image analysis protocols. An SR μ-CT image analysis approach is reported here that effectively and accurately reveals the differences in the pore- and throat-size distributions as well as the connectivity of both AM and SCPL scaffolds.
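One common way to extract a pore-size measure of the kind discussed above from a binarized μ-CT image is the Euclidean distance transform: for each pore voxel it gives the radius of the largest inscribed sphere, and labeling separates individual pores. This is a generic 2D sketch on synthetic data, not the paper's specific protocol:

```python
import numpy as np
from scipy import ndimage

# Synthetic binarized slice: True marks the pore phase. The square pore
# mimics the regular AM geometry described in the abstract.
pores = np.zeros((20, 20), dtype=bool)
pores[2:8, 2:8] = True       # 6x6 square pore
pores[12:17, 12:17] = True   # 5x5 square pore

# Distance to the nearest solid voxel = local inscribed-sphere radius.
dist = ndimage.distance_transform_edt(pores)

# Connected-component labeling separates individual pores, enabling a
# per-pore maximum inscribed radius (a simple pore-size statistic).
labels, n_pores = ndimage.label(pores)
max_radius = ndimage.maximum(dist, labels, index=np.arange(1, n_pores + 1))
```

Throat sizes and connectivity require more machinery (e.g. watershed partitioning of the distance map into pores and the constrictions between them), but the distance transform above is the usual starting point.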