Sample records for visual image interpretation

  1. How Chinese Semantics Capability Improves Interpretation in Visual Communication

    ERIC Educational Resources Information Center

    Cheng, Chu-Yu; Ou, Yang-Kun; Kin, Ching-Lung

    2017-01-01

    A visual representation involves delivering messages through visually communicated images. The study assumed that semantic recognition can affect visual interpretation ability, and the results showed that students graduating from a general high school achieved more satisfactory results in semantic recognition and image interpretation tasks than students…

  2. An infrared/video fusion system for military robotics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, A.W.; Roberts, R.S.

    1997-08-05

    Sensory information is critical to the telerobotic operation of mobile robots. In particular, visual sensors are a key component of the sensor package on a robot engaged in urban military operations. Visual sensors provide the robot operator with a wealth of information, including robot navigation and threat assessment. However, simple countermeasures such as darkness, smoke, or blinding by a laser can easily neutralize visual sensors. In order to provide a robust visual sensing system, an infrared sensor is required to augment the primary visual sensor. An infrared sensor can acquire useful imagery in conditions that incapacitate a visual sensor. A simple approach to incorporating an infrared sensor into the visual sensing system is to display two images to the operator: side-by-side visual and infrared images. However, dual images might overwhelm the operator with information and result in degraded robot performance. A better solution is to combine the visual and infrared images into a single image that maximizes scene information. Fusing visual and infrared images into a single image demands balancing the mixture of visual and infrared information. Humans are accustomed to viewing and interpreting visual images. They are not accustomed to viewing or interpreting infrared images. Hence, the infrared image must be used to enhance the visual image, not obfuscate it.

  3. Limited diagnostic value of Dual-Time-Point (18)F-FDG PET/CT imaging for classifying solitary pulmonary nodules in granuloma-endemic regions both at visual and quantitative analyses.

    PubMed

    Chen, Song; Li, Xuena; Chen, Meijie; Yin, Yafu; Li, Na; Li, Yaming

    2016-10-01

    This study aimed to compare the diagnostic power of quantitative analysis and visual analysis with single time point imaging (STPI) PET/CT and dual time point imaging (DTPI) PET/CT for the classification of solitary pulmonary nodule (SPN) lesions in granuloma-endemic regions. SPN patients who received early and delayed (18)F-FDG PET/CT at 60 min and 180 min post-injection were retrospectively reviewed. Diagnoses were confirmed by pathological results or follow-up. Three quantitative metrics, early SUVmax, delayed SUVmax and retention index (RI, the percentage change between the early and delayed SUVmax), were measured for each lesion. Three 5-point scale scores were given by blinded physician interpretations based on STPI PET/CT images, DTPI PET/CT images and CT images, respectively. ROC analysis was performed on the three quantitative metrics and the three visual interpretation scores. One hundred forty-nine patients were included. The areas under the ROC curves (AUC) of early SUVmax, delayed SUVmax, RI, STPI PET/CT score, DTPI PET/CT score and CT score were 0.73, 0.74, 0.61, 0.77, 0.75 and 0.76, respectively. There were no significant differences between the AUCs of visual interpretation of STPI and DTPI PET/CT images, nor between early and delayed SUVmax. The differences in sensitivity, specificity and accuracy between STPI and DTPI PET/CT were not significant in either quantitative analysis or visual interpretation. In granuloma-endemic regions, DTPI PET/CT did not offer significant improvement over STPI PET/CT in differentiating malignant SPNs in either quantitative analysis or visual interpretation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
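    The retention index used in this record is just the percentage change between the two SUVmax measurements. A minimal sketch (the function name and values are illustrative, not from the study):

```python
def retention_index(early_suvmax, delayed_suvmax):
    """Percentage change between early and delayed SUVmax for a lesion."""
    return 100.0 * (delayed_suvmax - early_suvmax) / early_suvmax

# Example: a lesion whose uptake rises from 3.0 to 3.6 between 60 and 180 min.
ri = retention_index(3.0, 3.6)
print(round(ri, 1))  # 20.0
```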

  4. Assessing change in large-scale forest area by visually interpreting Landsat images

    Treesearch

    Jerry D. Greer; Frederick P. Weber; Raymond L. Czaplewski

    2000-01-01

    As part of the Forest Resources Assessment 1990, the Food and Agriculture Organization of the United Nations visually interpreted a stratified random sample of 117 Landsat scenes to estimate global status and change in tropical forest area. Images from 1980 and 1990 were interpreted by a group of widely experienced technical people in many different tropical countries...

  5. Analysis of Visual Interpretation of Satellite Data

    NASA Astrophysics Data System (ADS)

    Svatonova, H.

    2016-06-01

    Millions of people of all ages and expertise are using satellite and aerial data as an important input for their work in many different fields. Satellite data are also gradually finding a new place in education, especially in geography and environmental studies. The article presents the results of extensive research in the area of visual interpretation of image data carried out in the years 2013-2015 in the Czech Republic. The research was aimed at comparing the success rate of the interpretation of satellite data in relation to a) the substrates (the selected colourfulness, the type of depicted landscape, or special elements in the landscape) and b) selected characteristics of users (expertise, gender, age). The results showed that (1) false colour images have a slightly higher percentage of successful interpretation than natural colour images, (2) colourfulness of an element expected or rehearsed by the user (regardless of the real natural colour) increases the success rate of identifying the element, (3) experts are faster in interpreting visual data than non-experts, with the same degree of accuracy in solving the task, and (4) men and women are equally successful in the interpretation of visual image data.

  6. Early Detection of Clinically Significant Prostate Cancer Using Ultrasonic Acoustic Radiation Force Impulse (ARFI) Imaging

    DTIC Science & Technology

    2017-10-01

    …Image-Guided Surgery Toolkit (IGSTK) to enable rapid 3D visualization and image volume interpretation, followed by automated transducer positioning in a user-selected image plane…

  7. Quantifying and visualizing variations in sets of images using continuous linear optimal transport

    NASA Astrophysics Data System (ADS)

    Kolouri, Soheil; Rohde, Gustavo K.

    2014-03-01

    Modern advancements in imaging devices have enabled us to explore the subcellular structure of living organisms and extract vast amounts of information. However, interpreting the biological information mined from the captured images is not a trivial task. Utilizing predetermined numerical features is usually the only hope for quantifying this information. Nonetheless, direct visual or biological interpretation of results obtained from these selected features is non-intuitive and difficult. In this paper, we describe an automatic method for modeling visual variations in a set of images, which allows for direct visual interpretation of the most significant differences, without the need for predefined features. The method is based on a linearized version of the continuous optimal transport (OT) metric, which provides a natural linear embedding for the image data set, in which a linear combination of images leads to a visually meaningful image. This enables us to apply linear geometric data analysis techniques such as principal component analysis and linear discriminant analysis in the linearly embedded space and visualize the most prominent modes, as well as the most discriminant modes, of variation in the dataset. Using the continuous OT framework, we are able to analyze variations in shape and texture in a set of images using each image at full resolution, which cannot be done with existing methods. The proposed method is applied to a set of nuclei images segmented from Feulgen-stained liver tissues in order to investigate the major visual differences in chromatin distribution of fetal-type hepatoblastoma (FHB) cells compared to normal cells.

  8. Interobserver variability in the radiological assessment of magnetic resonance imaging (MRI) including perfusion MRI in glioblastoma multiforme.

    PubMed

    Kerkhof, M; Hagenbeek, R E; van der Kallen, B F W; Lycklama À Nijeholt, G J; Dirven, L; Taphoorn, M J B; Vos, M J

    2016-10-01

    Conventional magnetic resonance imaging (MRI) has limited value for differentiation of true tumor progression and pseudoprogression in treated glioblastoma multiforme (GBM). Perfusion weighted imaging (PWI) may be helpful in the differentiation of these two phenomena. Here interobserver variability in routine radiological evaluation of GBM patients is assessed using MRI, including PWI. Three experienced neuroradiologists evaluated MR scans of 28 GBM patients during temozolomide chemoradiotherapy at three time points: preoperative (MR1) and postoperative (MR2) MR scan and the follow-up MR scan after three cycles of adjuvant temozolomide (MR3). Tumor size was measured both on T1 post-contrast and T2 weighted images according to the Response Assessment in Neuro-Oncology criteria. PW images of MR3 were evaluated by visual inspection of relative cerebral blood volume (rCBV) color maps and by quantitative rCBV measurements of enhancing areas with highest rCBV. Image interpretability of PW images was also scored. Finally, the neuroradiologists gave a conclusion on tumor status, based on the interpretation of both T1 and T2 weighted images (MR1, MR2 and MR3) in combination with PWI (MR3). Interobserver agreement on visual interpretation of rCBV maps was good (κ = 0.63) but poor on quantitative rCBV measurements and on interpretability of perfusion images (intraclass correlation coefficient 0.37 and κ = 0.23, respectively). Interobserver agreement on the overall conclusion of tumor status was moderate (κ = 0.48). Interobserver agreement on the visual interpretation of PWI color maps was good. However, overall interpretation of MR scans (using both conventional and PW images) showed considerable interobserver variability. Therefore, caution should be applied when interpreting MRI results during chemoradiation therapy. © 2016 EAN.
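    Interobserver agreement figures such as κ = 0.63 in this record come from Cohen's kappa, which corrects raw agreement for chance agreement. A minimal sketch with made-up ratings (the data below are illustrative, not the study's):

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical scores on the same cases."""
    assert len(r1) == len(r2)
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)  # chance agreement
    return (po - pe) / (1 - pe)

# Two raters scoring 10 hypothetical rCBV colour maps as "high"/"low"
a = ["high", "high", "low", "low", "high", "low", "high", "high", "low", "low"]
b = ["high", "high", "low", "low", "high", "low", "high", "low", "low", "low"]
print(round(cohens_kappa(a, b), 2))  # 0.8
```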

  9. SEEING IS BELIEVING, AND BELIEVING IS SEEING

    NASA Astrophysics Data System (ADS)

    Dutrow, B. L.

    2009-12-01

    Geoscience disciplines are filled with visual displays of data. From the first cave drawings to remote imaging of our planet, visual displays of information have been used to understand and interpret our discipline. As practitioners of the art, we build visuals into the core around which we write scholarly articles, teach our students, and make everyday decisions. The effectiveness of visual communication, however, varies greatly. For many visual displays, a significant amount of prior knowledge is needed to understand and interpret the various representations. If this is missing, key components of communication fail. One common example is the use of animations to explain high-density and typically complex data. Do animations effectively convey information, simply "wow" an audience, or do they confuse the subject by using unfamiliar forms and representations? Prior knowledge shapes the information derived from visuals, and when communicating with non-experts this factor is exacerbated. For example, in an advanced geology course, fractures in a rock are viewed by petroleum engineers as conduits for fluid migration, while geoscience students 'see' the minerals lining the fracture. In contrast, a lay audience might view these images as abstract art. Without specific and direct accompanying verbal or written communication, such an image is viewed radically differently by disparate audiences. Experts and non-experts do not 'see' equivalent images. Each visual must be carefully constructed with its communication task in mind. Enhancing learning and communication at all levels through visual displays of data requires that we teach visual literacy as part of our curricula. As we move from one form of visual representation to another, our mental images are expanded, as is our ability to see and interpret new visual forms, thus promoting life-long learning. Visual literacy is key to communication in our visually rich discipline. What do you see?

  10. Method of interpretation of remotely sensed data and applications to land use

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Dossantos, A. P.; Foresti, C.; Demoraesnovo, E. M. L.; Niero, M.; Lombardo, M. A.

    1981-01-01

    Instructional material describing a methodology of remote sensing data interpretation and examples of applications to land use surveys is presented. The image interpretation elements are discussed for different types of sensor systems: aerial photographs, radar, and MSS/LANDSAT. Visual and automatic LANDSAT image interpretation is emphasized.

  11. Use of Visual Cues by Adults With Traumatic Brain Injuries to Interpret Explicit and Inferential Information.

    PubMed

    Brown, Jessica A; Hux, Karen; Knollman-Porter, Kelly; Wallace, Sarah E

    2016-01-01

    Concomitant visual and cognitive impairments following traumatic brain injuries (TBIs) may be problematic when the visual modality serves as a primary source for receiving information. Further difficulties comprehending visual information may occur when interpretation requires processing inferential rather than explicit content. The purpose of this study was to compare the accuracy with which people with and without severe TBI interpreted information in contextually rich drawings. Fifteen adults with and 15 adults without severe TBI. Repeated-measures between-groups design. Participants were asked to match images to sentences that either conveyed explicit (ie, main action or background) or inferential (ie, physical or mental inference) information. The researchers compared accuracy between participant groups and among stimulus conditions. Participants with TBI demonstrated significantly poorer accuracy than participants without TBI extracting information from images. In addition, participants with TBI demonstrated significantly higher response accuracy when interpreting explicit rather than inferential information; however, no significant difference emerged between sentences referencing main action versus background information or sentences providing physical versus mental inference information for this participant group. Difficulties gaining information from visual environmental cues may arise for people with TBI given their difficulties interpreting inferential content presented through the visual modality.

  12. Visual Pattern Analysis in Histopathology Images Using Bag of Features

    NASA Astrophysics Data System (ADS)

    Cruz-Roa, Angel; Caicedo, Juan C.; González, Fabio A.

    This paper presents a framework to analyse visual patterns in a collection of medical images using a two-stage procedure. First, a set of representative visual patterns from the image collection is obtained by constructing a visual-word dictionary under a bag-of-features approach. Second, an analysis of the relationships between visual patterns and semantic concepts in the image collection is performed. The most important visual patterns for each semantic concept are identified using correlation analysis. A matrix visualization of the structure and organization of the image collection is generated using cluster analysis. The experimental evaluation was conducted on a histopathology image collection, and the results showed clear relationships between visual patterns and semantic concepts that, in addition, are easy to interpret and understand.
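    The bag-of-features signature described in the first stage reduces each image to a histogram over a visual-word dictionary. A minimal sketch, assuming the dictionary (e.g. from k-means over local descriptors) is already built; all arrays below are toy data, not the paper's:

```python
import numpy as np

def bag_of_features(descriptors, dictionary):
    """Assign each local descriptor to its nearest visual word and return
    a normalized histogram of word counts (the image's bag-of-features)."""
    # Pairwise distances between descriptors and visual words: (n_desc, n_words)
    d = np.linalg.norm(descriptors[:, None, :] - dictionary[None, :, :], axis=2)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(dictionary)).astype(float)
    return hist / hist.sum()

# Toy example: four 2-D descriptors quantized against two visual words
dictionary = np.array([[0.0, 0.0], [1.0, 1.0]])
desc = np.array([[0.1, 0.0], [0.0, 0.2], [0.9, 1.1], [0.2, 0.1]])
print(bag_of_features(desc, dictionary))  # [0.75 0.25]
```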

  13. Digital to analog conversion and visual evaluation of Thematic Mapper data

    USGS Publications Warehouse

    McCord, James R.; Binnie, Douglas R.; Seevers, Paul M.

    1985-01-01

    As a part of the National Aeronautics and Space Administration Landsat D Image Data Quality Analysis Program, the Earth Resources Observation Systems Data Center (EDC) developed procedures to optimize the visual information content of Thematic Mapper data and evaluate the resulting photographic products by visual interpretation. A digital-to-analog transfer function was developed to place the digital values on the most usable portion of a film response curve. Individual black-and-white transparencies generated using the resulting look-up tables were used to produce color-composite images with varying band combinations. Four experienced photointerpreters ranked 2-cm-diameter (0.75 inch) chips of selected image features of each band combination for ease of interpretability. A nonparametric rank-order test determined the significance of interpreter preference for the band combinations.

  15. Augmenting Amyloid PET Interpretations With Quantitative Information Improves Consistency of Early Amyloid Detection.

    PubMed

    Harn, Nicholas R; Hunt, Suzanne L; Hill, Jacqueline; Vidoni, Eric; Perry, Mark; Burns, Jeffrey M

    2017-08-01

    Establishing reliable methods for interpreting elevated cerebral amyloid-β plaque on PET scans is increasingly important for radiologists as availability of PET imaging in clinical practice increases. We examined a 3-step method to detect plaque in cognitively normal older adults, focusing on the additive value of quantitative information during the PET scan interpretation process. Fifty-five (18)F-florbetapir PET scans were evaluated by 3 experienced raters. Scans were first visually interpreted as having "elevated" or "nonelevated" plaque burden ("Visual Read"). Images were then processed using standardized quantitative analysis software (MIMneuro) to generate whole-brain and region-of-interest SUV ratios. This "Quantitative Read" was considered elevated if at least 2 of 6 regions of interest had an SUV ratio of more than 1.1. The final interpretation combined the visual and quantitative data ("VisQ Read"). Cohen's kappa values were assessed as a measure of interpretation agreement. Plaque was elevated in 25.5% to 29.1% of the 165 total Visual Reads. Interrater agreement was strong (kappa = 0.73-0.82) and consistent with reported values. Quantitative Reads were elevated in 45.5% of participants. Final VisQ Reads changed from initial Visual Reads in 16 interpretations (9.7%), with most changing from "nonelevated" Visual Reads to "elevated." These changed interpretations demonstrated lower plaque quantification than those initially read as "elevated" that remained unchanged. Interrater variability improved for VisQ Reads with the addition of quantitative information (kappa = 0.88-0.96). Inclusion of quantitative information increases the consistency of PET scan interpretations for early detection of cerebral amyloid-β plaque accumulation.
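    The "Quantitative Read" rule in this record is a simple threshold count over regions of interest. A minimal sketch (the SUV ratios shown are illustrative, not from the study):

```python
def quantitative_read(roi_suvr, threshold=1.1, min_regions=2):
    """Label a scan 'elevated' if at least min_regions of the ROI
    SUV ratios exceed the threshold, else 'nonelevated'."""
    n_above = sum(v > threshold for v in roi_suvr)
    return "elevated" if n_above >= min_regions else "nonelevated"

# Two hypothetical scans, each with 6 region-of-interest SUV ratios
print(quantitative_read([1.05, 1.21, 1.14, 0.98, 1.02, 1.08]))  # elevated
print(quantitative_read([1.05, 1.15, 0.98, 1.02, 1.00, 1.04]))  # nonelevated
```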

  16. What Geoscience Experts and Novices Look At, and What They See, When Viewing Data Visualizations

    ERIC Educational Resources Information Center

    Kastens, Kim A.; Shipley, Thomas F.; Boone, Alexander P.; Straccia, Frances

    2016-01-01

    This study examines how geoscience experts and novices make meaning from an iconic type of data visualization: shaded relief images of bathymetry and topography. Participants examined, described, and interpreted a global image, two high-resolution seafloor images, and two high-resolution continental images, while having their gaze direction…

  17. Visualizing time: how linguistic metaphors are incorporated into displaying instruments in the process of interpreting time-varying signals

    NASA Astrophysics Data System (ADS)

    Garcia-Belmonte, Germà

    2017-06-01

    Spatial visualization is a well-established topic of education research that has helped improve science and engineering students' skills in spatial relations. Connections have been established between visualization as a comprehension tool and instruction in several scientific fields. Learning about dynamic processes mainly relies upon static spatial representations or images. Visualization of time is inherently problematic because time can be conceptualized in terms of two opposite conceptual metaphors based on spatial relations, as inferred from conventional linguistic patterns. The situation is particularly demanding when time-varying signals are recorded using displaying electronic instruments and the image must be properly interpreted. This work deals with the interplay between linguistic metaphors, visual thinking, and scientific instrument mediation in the process of interpreting time-varying signals displayed by electronic instruments. The analysis draws on a simplified version of a communication system as an example of practical signal recording and image visualization in a physics and engineering laboratory experience. Instrumentation delivers meaningful signal representations because it is designed to incorporate a specific and culturally favored view of time. It is suggested that difficulties in interpreting time-varying signals are linked with the existing dual perception of conflicting time metaphors. The activation of a specific space-time conceptual mapping might allow for a proper signal interpretation. Instruments then play a central role as visualization mediators by yielding an image that matches specific perception abilities and practical purposes. Here I identify two ways of understanding time, as encountered along the different trajectories through which students pass. Interestingly, specific displaying instruments belonging to different cultural traditions incorporate contrasting views of time. One sees time in terms of a dynamic metaphor, with a static observer looking at passing events. This is a general and widespread practice in contemporary mass culture, which lies behind the process of making sense of moving images, usually visualized by means of movie shots. In contrast, scientific culture favored another way of conceptualizing time (the static time metaphor) that historically fostered the construction of graphs and the incorporation of time-dependent functions, represented on the Cartesian plane, into displaying instruments. Both types of culture, scientific and mass, are highly technological in the sense that complex instruments, apparatus, or machines participate in their visual practices.

  18. Cognitive issues in searching images with visual queries

    NASA Astrophysics Data System (ADS)

    Yu, ByungGu; Evens, Martha W.

    1999-01-01

    In this paper, we propose an image indexing technique and a visual query processing technique. Our mental images differ from actual retinal images: many things, such as personal interests, personal experiences, perceptual context, and the characteristics of spatial objects, affect our spatial perception. These individual differences propagate into our mental images, so our visual queries differ from the real images we want to find. This is a hard problem, and few people have tried to work on it. In this paper, we survey the human mental imagery system and human spatial perception, and discuss several kinds of visual queries. We also propose our own approach to visual query interpretation and processing.

  19. Vertical or horizontal orientation of foot radiographs does not affect image interpretation

    PubMed Central

    Ferran, Nicholas Antonio; Ball, Luke; Maffulli, Nicola

    2012-01-01

    This study determined whether the orientation of dorsoplantar and oblique foot radiographs has an effect on radiograph interpretation. A test set of 50 consecutive foot radiographs was selected (25 with fractures and 25 normal) and duplicated in the horizontal orientation. The images were randomly arranged, numbered 1 through 100, and analysed by six image interpreters. Vertical and horizontal area under the ROC curve, accuracy, sensitivity, and specificity were calculated for each image interpreter. There was no significant difference in the area under the ROC curve, accuracy, sensitivity, or specificity of image interpretation between images viewed in the vertical or horizontal orientation. While conventions for the display of radiographs may help trainees develop an efficient visual search strategy, and allow for standardisation of published radiographic images, variation from the convention in clinical practice does not appear to affect the sensitivity or specificity of image interpretation. PMID:23738310
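    The per-interpreter accuracy, sensitivity, and specificity reported in this record follow from the standard confusion-matrix definitions. A minimal sketch using the study's 25/25 fracture/normal split with made-up reads:

```python
def reader_metrics(truth, reads):
    """Accuracy, sensitivity, specificity for one image interpreter.
    truth/reads are equal-length sequences of booleans (True = fracture)."""
    tp = sum(t and r for t, r in zip(truth, reads))
    tn = sum(not t and not r for t, r in zip(truth, reads))
    fp = sum(not t and r for t, r in zip(truth, reads))
    fn = sum(t and not r for t, r in zip(truth, reads))
    return {
        "accuracy": (tp + tn) / len(truth),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# 25 fracture cases and 25 normals; this reader misses 3 fractures
# and calls 1 false positive (hypothetical numbers).
truth = [True] * 25 + [False] * 25
reads = [True] * 22 + [False] * 3 + [False] * 24 + [True]
m = reader_metrics(truth, reads)
print(round(m["accuracy"], 2), round(m["sensitivity"], 2), round(m["specificity"], 2))
```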

  20. Cultural Interpretations of the Visual Meaning of Icons and Images Used in North American Web Design

    ERIC Educational Resources Information Center

    Knight, Eliot; Gunawardena, Charlotte N.; Aydin, Cengiz Hakan

    2009-01-01

    This study examines cross-cultural interpretations of icons and images drawn from US academic websites. Participants from Morocco, Sri Lanka, Turkey, and the USA responded to an online questionnaire containing 18 icons and images representing online functions and information types common on US academic websites. Participants supplied meanings for…

  1. Automated virtual colonoscopy

    NASA Astrophysics Data System (ADS)

    Hunt, Gordon W.; Hemler, Paul F.; Vining, David J.

    1997-05-01

    Virtual colonoscopy (VC) is a minimally invasive alternative to conventional fiberoptic endoscopy for colorectal cancer screening. The VC technique involves bowel cleansing, gas distension of the colon, spiral computed tomography (CT) scanning of a patient's abdomen and pelvis, and visual analysis of multiplanar 2D and 3D images created from the spiral CT data. Despite the ability of interactive computer graphics to assist a physician in visualizing 3D models of the colon, a correct diagnosis hinges upon a physician's ability to properly identify small and sometimes subtle polyps or masses within hundreds of multiplanar and 3D images. Human visual analysis is time-consuming, tedious, and often prone to errors of interpretation. We have addressed the problem of visual analysis by creating a software system that automatically highlights potential lesions in the 2D and 3D images in order to expedite a physician's interpretation of the colon data.

  2. The Visual Journal as an Image Sphere: Interpreting Artworks with an Anamorphic Perspective

    ERIC Educational Resources Information Center

    Sinner, Anita

    2011-01-01

    During a 1-year study, the visual journal of a preservice teacher was explored as an image sphere, or "bildraum", in relation to teacher culture. Artworks created in the visual journal offered an anamorphic perspective on the materiality of teacher culture, tracing the lived experiences of a student of art in the process of becoming an art teacher…

  3. A comparative interregional analysis of selected data from LANDSAT-1 and EREP for the inventory and monitoring of natural ecosystems

    NASA Technical Reports Server (NTRS)

    Poulton, C. E.

    1975-01-01

    Comparative statistics are presented on the capability of LANDSAT-1 and three of the Skylab remote sensing systems (S-190A, S-190B, S-192) for the recognition and inventory of analogous natural vegetation and landscape features important in resource allocation and management. Two analogous regions presenting vegetational zonation from salt desert to alpine conditions above the timberline were observed, with emphasis on the visual interpretation mode. A hierarchical legend system was used as the basic classification of all land surface features. Comparative tests of image identifiability were run with the different sensor systems, and mapping and interpretation tests were made in both monocular and stereo interpretation with all systems except the S-192. A significant advantage was found in the use of stereo imagery from space when image analysis is performed by visual or machine-aided interactive systems. Some cost factors in mapping from space are identified. The various image types are compared and an operational system is postulated.

  4. Analysis of urban area land cover using SEASAT Synthetic Aperture Radar data

    NASA Technical Reports Server (NTRS)

    Henderson, F. M. (Principal Investigator)

    1980-01-01

    Digitally processed SEASAT synthetic aperture radar (SAR) imagery of the Denver, Colorado urban area was examined to explore the potential of SAR data for mapping urban land cover and the compatibility of SAR-derived land cover classes with the United States Geological Survey classification system. The imagery is examined at three different scales to determine the effect of image enlargement on accuracy and the level of detail extractable. At each scale, the value of employing a simple preprocessing smoothing algorithm to improve image interpretation is addressed. A visual interpretation approach and an automated machine/visual approach are employed to evaluate the feasibility of producing a semiautomated land cover classification from SAR data. Confusion matrices of omission and commission errors are employed to define classification accuracies for each interpretation approach and image scale.

  5. Enhancing the Teaching and Learning of Mathematical Visual Images

    ERIC Educational Resources Information Center

    Quinnell, Lorna

    2014-01-01

    The importance of mathematical visual images is indicated by the introductory paragraph in the Statistics and Probability content strand of the Australian Curriculum, which draws attention to the importance of learners developing skills to analyse and draw inferences from data and "represent, summarise and interpret data and undertake…

  6. Seeing meaning in action: a bidirectional link between visual perspective and action identification level.

    PubMed

    Libby, Lisa K; Shaeffer, Eric M; Eibach, Richard P

    2009-11-01

    Actions do not have inherent meaning but rather can be interpreted in many ways. The interpretation a person adopts has important effects on a range of higher order cognitive processes. One dimension on which interpretations can vary is the extent to which actions are identified abstractly--in relation to broader goals, personal characteristics, or consequences--versus concretely, in terms of component processes. The present research investigated how visual perspective (own 1st-person vs. observer's 3rd-person) in action imagery is related to action identification level. A series of experiments measured and manipulated visual perspective in mental and photographic images to test the connection with action identification level. Results revealed a bidirectional causal relationship linking 3rd-person images and abstract action identifications. These findings highlight the functional role of visual imagery and have implications for understanding how perspective is involved in action perception at the social, cognitive, and neural levels. Copyright 2009 APA

  7. Some distinguishing characteristics of contour and texture phenomena in images

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J.

    1992-01-01

    The development of generalized contour/texture discrimination techniques is a central element necessary for machine vision recognition and interpretation of arbitrary images. Here, the visual perception of texture, selected studies of texture analysis in machine vision, and diverse small samples of contour and texture are all used to provide insights into the fundamental characteristics of contour and texture. From these, an experimental discrimination scheme is developed and tested on a battery of natural images. The visual perception of texture defined fine texture as a subclass which is interpreted as shading and is distinct from coarse figural similarity textures. Also, perception defined the smallest scale for contour/texture discrimination as eight to nine visual acuity units. Three contour/texture discrimination parameters were found to be moderately successful for this scale discrimination: (1) lightness change in a blurred version of the image, (2) change in lightness change in the original image, and (3) percent change in edge counts relative to local maximum.
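    The first two discrimination parameters named above (lightness change in a blurred version of the image, and change in lightness change in the original) can be sketched with simple stand-in operators; the box blur and finite-difference gradients below are assumptions, not the paper's exact filters:

```python
import numpy as np

def box_blur(img, r=1):
    """(2r+1) x (2r+1) mean filter with edge padding."""
    p = np.pad(img, r, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    h, w = img.shape
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += p[r + dy:r + dy + h, r + dx:r + dx + w]
    return out / (2 * r + 1) ** 2

def gradient_magnitude(img):
    """Finite-difference lightness change at each pixel."""
    gy, gx = np.gradient(np.asarray(img, dtype=float))
    return np.hypot(gx, gy)

img = np.zeros((8, 8))
img[:, 4:] = 1.0                                   # a step edge (a contour)
p1 = gradient_magnitude(box_blur(img))             # (1) lightness change in blurred image
p2 = gradient_magnitude(gradient_magnitude(img))   # (2) change in lightness change
```

    A contour such as the step edge produces a single concentrated ridge in p1, whereas texture would scatter responses across the field.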

  8. Measuring the performance of visual to auditory information conversion.

    PubMed

    Tan, Shern Shiou; Maul, Tomás Henrique Bode; Mennie, Neil Russell

    2013-01-01

    Visual to auditory conversion systems have been in existence for several decades. Besides being among the front-runners in providing visual capabilities to blind users, the auditory cues generated by image sonification systems are still easier to learn and adapt to than those of other similar techniques. Other advantages include low cost, easy customizability, and universality. However, every system developed so far has its own set of strengths and weaknesses. In order to improve these systems further, we propose an automated and quantitative method to measure their performance. With these quantitative measurements, it is possible to gauge the relative strengths and weaknesses of different systems and rank them accordingly. Performance is measured by both the interpretability and the information preservation of visual-to-auditory conversions. Interpretability is measured by computing the correlation of inter-image distance (IID) and inter-sound distance (ISD), whereas information preservation is computed by applying information theory to measure the entropy of both the visual and the corresponding auditory signals. These measurements provide a basis and some insights on how the systems work. With an automated interpretability measure as a standard, more image sonification systems can be developed, compared, and then improved. Even though the measure does not test systems as thoroughly as carefully designed psychological experiments, a quantitative measurement like the one proposed here can compare systems to a certain degree without incurring much cost. Underlying this research is the hope that a major breakthrough in image sonification systems will allow blind users to cost-effectively regain enough visual function to lead secure and productive lives.
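    The interpretability measure, a correlation between inter-image and inter-sound distances, can be sketched as follows. Euclidean distance and Pearson correlation are assumptions here, and the toy "conversion" simply scales each image, so the correlation is near-perfect by construction:

```python
import numpy as np

def pairwise_dists(items):
    """Euclidean distances between all unordered pairs of flattened signals."""
    v = [np.asarray(x, dtype=float).ravel() for x in items]
    n = len(v)
    return np.array([np.linalg.norm(v[i] - v[j])
                     for i in range(n) for j in range(i + 1, n)])

def interpretability(images, sounds):
    """Pearson correlation between inter-image (IID) and inter-sound (ISD) distances."""
    return np.corrcoef(pairwise_dists(images), pairwise_dists(sounds))[0, 1]

rng = np.random.default_rng(0)
images = [rng.random((4, 4)) for _ in range(4)]
sounds = [2.0 * im for im in images]        # a toy, perfectly faithful "conversion"
score = interpretability(images, sounds)    # near-perfect correlation
```

    A real conversion system would substitute actual sonified waveforms for `sounds`; a low score would indicate that perceptually distinct images collapse into similar audio.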

  9. Interpretation of medical imaging data with a mobile application: a mobile digital imaging processing environment.

    PubMed

    Lin, Meng Kuan; Nicolini, Oliver; Waxenegger, Harald; Galloway, Graham J; Ullmann, Jeremy F P; Janke, Andrew L

    2013-01-01

    Digital Imaging Processing (DIP) requires data extraction and output from a visualization tool to be consistent. Data handling and transmission between the server and a user is a systematic process in service interpretation. The use of integrated medical services for management and viewing of imaging data, in combination with a mobile visualization tool, can be greatly facilitated by data analysis and interpretation. This paper presents an integrated mobile application and DIP service, called M-DIP. The objectives of the system are to (1) automate the direct data tiling, conversion, and pre-tiling of brain images from Medical Imaging NetCDF (MINC) and Neuroimaging Informatics Technology Initiative (NIFTI) to RAW formats; (2) speed up querying of imaging measurements; and (3) display images in three dimensions in real-world coordinates at a high level of detail. In addition, M-DIP works on a mobile or tablet device without any software installation, using web-based protocols. M-DIP implements a three-level architecture with a relational middle-layer database, a stand-alone DIP server, and a mobile application logic middle level realizing user interpretation for direct querying and communication. This imaging software can display biological imaging data at multiple zoom levels and increase its quality to meet users' expectations. Interpretation of bioimaging data is facilitated by an interface analogous to online mapping services using real-world coordinate browsing. This allows mobile devices to display multiple datasets simultaneously from a remote site. M-DIP can be used as a measurement repository that can be accessed from any network environment, such as a portable mobile or tablet device. In addition, this system, in combination with mobile applications, establishes a visualization tool in the neuroinformatics field to speed interpretation services.
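    The pre-tiling idea behind such map-style viewers can be sketched as a pyramid that halves resolution per level until one tile remains. The 256-pixel tile size and halving scheme are illustrative assumptions, not M-DIP's actual implementation:

```python
import math

def tile_levels(width, height, tile=256):
    """Zoom levels needed to tile an image down to a single tile, halving
    resolution per level (a web-map-style pyramid; tile size is illustrative)."""
    levels = 1
    while width > tile or height > tile:
        width, height = math.ceil(width / 2), math.ceil(height / 2)
        levels += 1
    return levels

levels = tile_levels(4096, 4096)   # a 4096x4096 section needs 5 zoom levels
```

    Serving only the tiles visible at the current zoom level is what lets a mobile client browse large volumes without downloading them whole.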

  10. Interpretation of Medical Imaging Data with a Mobile Application: A Mobile Digital Imaging Processing Environment

    PubMed Central

    Lin, Meng Kuan; Nicolini, Oliver; Waxenegger, Harald; Galloway, Graham J.; Ullmann, Jeremy F. P.; Janke, Andrew L.

    2013-01-01

    Digital Imaging Processing (DIP) requires data extraction and output from a visualization tool to be consistent. Data handling and transmission between the server and a user is a systematic process in service interpretation. The use of integrated medical services for management and viewing of imaging data, in combination with a mobile visualization tool, can be greatly facilitated by data analysis and interpretation. This paper presents an integrated mobile application and DIP service, called M-DIP. The objectives of the system are to (1) automate the direct data tiling, conversion, and pre-tiling of brain images from Medical Imaging NetCDF (MINC) and Neuroimaging Informatics Technology Initiative (NIFTI) to RAW formats; (2) speed up querying of imaging measurements; and (3) display images in three dimensions in real-world coordinates at a high level of detail. In addition, M-DIP works on a mobile or tablet device without any software installation, using web-based protocols. M-DIP implements a three-level architecture with a relational middle-layer database, a stand-alone DIP server, and a mobile application logic middle level realizing user interpretation for direct querying and communication. This imaging software can display biological imaging data at multiple zoom levels and increase its quality to meet users' expectations. Interpretation of bioimaging data is facilitated by an interface analogous to online mapping services using real-world coordinate browsing. This allows mobile devices to display multiple datasets simultaneously from a remote site. M-DIP can be used as a measurement repository that can be accessed from any network environment, such as a portable mobile or tablet device. In addition, this system, in combination with mobile applications, establishes a visualization tool in the neuroinformatics field to speed interpretation services. PMID:23847587

  11. A comparison of ordinary fuzzy and intuitionistic fuzzy approaches in visualizing the image of flat electroencephalography

    NASA Astrophysics Data System (ADS)

    Zenian, Suzelawati; Ahmad, Tahir; Idris, Amidora

    2017-09-01

    Medical imaging is a subfield of image processing that deals with medical images. It is crucial for visualizing body parts in a non-invasive way using appropriate image processing techniques. Generally, image processing is used to enhance the visual appearance of images for further interpretation. However, the pixel values of an image may not be precise, as uncertainty arises within the gray values of an image due to several factors. In this paper, the input and output images of Flat Electroencephalography (fEEG) of an epileptic patient at varying times are presented. Furthermore, ordinary fuzzy and intuitionistic fuzzy approaches are applied to the input images and the results of the two approaches are compared.
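    The two representations being compared can be sketched as follows: ordinary fuzzy membership by gray-level normalization, and an intuitionistic extension that adds a non-membership and a hesitation degree. The Sugeno-type complement below is one common construction, assumed here because the paper's exact operators are not given:

```python
import numpy as np

def fuzzify(img):
    """Ordinary fuzzy membership: normalize gray levels to [0, 1]."""
    g = img.astype(float)
    return (g - g.min()) / (g.max() - g.min())

def intuitionistic(mu, lam=0.5):
    """Non-membership via a Sugeno-type complement, plus the hesitation
    degree pi = 1 - mu - nu that models gray-level uncertainty."""
    nu = (1.0 - mu) / (1.0 + lam * mu)
    pi = 1.0 - mu - nu
    return nu, pi

mu = fuzzify(np.array([[0, 64], [128, 255]]))
nu, pi = intuitionistic(mu)
```

    The hesitation degree is zero at the extreme gray levels and positive in between, which is where the intuitionistic approach departs from the ordinary fuzzy one.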

  12. Information Extraction of Tourist Geological Resources Based on 3d Visualization Remote Sensing Image

    NASA Astrophysics Data System (ADS)

    Wang, X.

    2018-04-01

    Tourism geological resources are of high value for appreciation, scientific research and public education, and need to be protected and rationally utilized. In the past, most remote sensing investigations of tourism geological resources used two-dimensional remote sensing interpretation methods, which made some geological heritages difficult to interpret and led to the omission of some information. The aim of this paper is to assess the value of a method that uses three-dimensional visual remote sensing imagery to extract information on geological heritages. The Skyline software system is applied to fuse 0.36 m aerial images and a 5 m interval DEM to establish a digital earth model. Based on three-dimensional shape, color tone, shadow, texture and other image features, the distribution of tourism geological resources in Shandong Province and the locations of geological heritage sites were obtained, such as geological structures, DaiGu landforms, granite landforms, volcanic landforms, sandy landforms, waterscapes, etc. The results show that remote sensing interpretation with this method is highly effective, making the interpretation more accurate and comprehensive.

  13. Visualization of volumetric seismic data

    NASA Astrophysics Data System (ADS)

    Spickermann, Dela; Böttinger, Michael; Ashfaq Ahmed, Khawar; Gajewski, Dirk

    2015-04-01

    Mostly driven by demands of high-quality subsurface imaging, highly specialized tools and methods have been developed to support the processing, visualization and interpretation of seismic data. 3D seismic data acquisition and 4D time-lapse seismic monitoring are well-established techniques in academia and industry, producing large amounts of data to be processed, visualized and interpreted. In this context, interactive 3D visualization methods have proved valuable for the analysis of 3D seismic data cubes, especially for sedimentary environments with continuous horizons. In crystalline and hard rock environments, where hydraulic stimulation techniques may be applied to produce geothermal energy, interpretation of the seismic data is a more challenging problem. Instead of continuous reflection horizons, the imaging targets are often steeply dipping faults, causing many diffractions. Without further preprocessing, these geological structures are often hidden behind the noise in the data. In this PICO presentation we present a workflow consisting of data processing steps that enhance the signal-to-noise ratio, followed by a visualization step based on the use of the commercially available general-purpose 3D visualization system Avizo. Specifically, we have used Avizo Earth, an extension to Avizo, which supports the import of seismic data in SEG-Y format and offers easy access to state-of-the-art 3D visualization methods at interactive frame rates, even for large seismic data cubes. In seismic interpretation using visualization, interactivity is a key requirement for understanding complex 3D structures. In order to enable easy communication of the insights gained during the interactive visualization process, animations of the visualized data were created to support the spatial understanding of the data.

  14. Multiple interpretations of a pair of images of a surface

    NASA Astrophysics Data System (ADS)

    Longuet-Higgins, H. C.

    1988-07-01

    It is known that, if two optical images of a visually textured surface, projected from finitely separated viewpoints, allow more than one three-dimensional interpretation, then the surface must be part of a quadric passing through the two viewpoints. It is here shown that this quadric is either a plane or a ruled surface of a type first considered by Maybank (1985) in a study of ambiguous optic flow fields. In the latter case, three is the maximum number of distinct interpretations that the two images can sustain.

  15. Sonification of optical coherence tomography data and images

    PubMed Central

    Ahmad, Adeel; Adie, Steven G.; Wang, Morgan; Boppart, Stephen A.

    2010-01-01

    Sonification is the process of representing data as non-speech audio signals. In this manuscript, we describe the auditory presentation of OCT data and images. OCT acquisition rates frequently exceed our ability to visually analyze image-based data, and multi-sensory input may therefore facilitate rapid interpretation. This conversion will be especially valuable in time-sensitive surgical or diagnostic procedures. In these scenarios, auditory feedback can complement visual data without requiring the surgeon to constantly monitor the screen, or provide additional feedback in non-imaging procedures such as guided needle biopsies which use only axial-scan data. In this paper we present techniques to translate OCT data and images into sound based on the spatial and spatial frequency properties of the OCT data. Results obtained from parameter-mapped sonification of human adipose and tumor tissues are presented, indicating that audio feedback of OCT data may be useful for the interpretation of OCT images. PMID:20588846
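    Parameter-mapped sonification of an axial scan can be sketched by mapping depth to time and intensity to pitch. The frequency range, tone duration, and linear mapping below are illustrative assumptions, not the authors' scheme:

```python
import numpy as np

def sonify_ascan(ascan, fs=44100, dur=0.02, f_lo=200.0, f_hi=2000.0):
    """Map each axial-scan sample to a short tone: depth becomes time,
    normalized intensity becomes pitch (an illustrative mapping only)."""
    ascan = np.asarray(ascan, dtype=float)
    lo, hi = ascan.min(), ascan.max()
    norm = (ascan - lo) / (hi - lo) if hi > lo else np.zeros_like(ascan)
    t = np.arange(int(fs * dur)) / fs
    tones = [np.sin(2 * np.pi * (f_lo + v * (f_hi - f_lo)) * t) for v in norm]
    return np.concatenate(tones)

audio = sonify_ascan([0.10, 0.55, 0.90])   # three depth samples become three tones
```

    A highly scattering layer would rise in pitch relative to surrounding tissue, which is the kind of cue a surgeon could monitor without watching the screen.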

  16. Visual Culture and Literacy Online: Image Galleries as Sites of Learning

    ERIC Educational Resources Information Center

    Carpenter, B. Stephen, II; Cifuentes, Lauren

    2011-01-01

    As new media emerge in the common culture, the authors recommend that art educators adopt those media to facilitate deep understanding of visual culture and literacy. They report here on applications of an online image gallery that helps users develop ways to interpret what they see and compose. Over the past few years the authors have…

  17. Hospitalists' ability to use hand-carried ultrasound for central venous pressure estimation after a brief training intervention: a pilot study.

    PubMed

    Martin, L David; Ziegelstein, Roy C; Howell, Eric E; Martire, Carol; Hellmann, David B; Hirsch, Glenn A

    2013-12-01

    Access to hand-carried ultrasound technology for noncardiologists has increased significantly, yet development and evaluation of training programs are limited. We studied a focused program to teach hospitalists image acquisition of inferior vena cava (IVC) diameter and IVC collapsibility index with interpretation of estimated central venous pressure (CVP). Ten hospitalists completed an online educational module prior to attending a 1-day in-person training session that included directly supervised IVC imaging on volunteer subjects. In addition to making quantitative assessments, hospitalists were also asked to visually assess whether the IVC collapsed more than 50% during rapid inspiration or a sniff maneuver. Skills in image acquisition and interpretation were assessed immediately after training on volunteer patients and prerecorded images, and again on volunteer patients at least 6 weeks later. Eight of 10 hospitalists acquired adequate IVC images and interpreted them correctly on 5 of the 5 volunteer subjects and interpreted all 10 prerecorded images correctly at the end of the 1-day training session. At 7.4 ± 0.7 weeks (range, 6.9-8.6 weeks) follow-up, 9 of 10 hospitalists accurately acquired and interpreted all IVC images in 5 of 5 volunteers. Hospitalists were also able to accurately determine whether the IVC collapsibility index was more than 50% by visual assessment in 180 of 198 attempts (91% of the time). After a brief training program, hospitalists acquired adequate skills to perform and interpret hand-carried ultrasound IVC images and retained these skills in the near term. Though calculation of the IVC collapsibility index is more accurate, coupling a qualitative assessment with the IVC maximum diameter measurement may be acceptable in aiding bedside estimation of CVP. © 2013 Society of Hospital Medicine.
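    The collapsibility index and the qualitative CVP categories can be sketched as below. The cutoffs (2.1 cm diameter, 50% collapse) follow common echocardiography guidance and are illustrative, not the study's exact protocol:

```python
def collapsibility_index(d_max, d_min):
    """IVC collapsibility index: fractional decrease in diameter with a sniff."""
    return (d_max - d_min) / d_max

def estimate_cvp(d_max_cm, ci):
    """Rough CVP category from IVC size and collapse (illustrative cutoffs
    adapted from common echo guidance; not the study's exact rules)."""
    if d_max_cm <= 2.1 and ci > 0.5:
        return "normal (~0-5 mmHg)"
    if d_max_cm > 2.1 and ci < 0.5:
        return "elevated (~10-20 mmHg)"
    return "intermediate (~5-10 mmHg)"

ci = collapsibility_index(1.8, 0.6)        # about 67% inspiratory collapse
category = estimate_cvp(1.8, ci)           # a small, briskly collapsing IVC
```

    The study's observation that a visual "more than 50% collapse" call was right 91% of the time suggests the qualitative branch of this logic is often sufficient at the bedside.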

  18. Sex differences in visual attention to erotic and non-erotic stimuli.

    PubMed

    Lykins, Amy D; Meana, Marta; Strauss, Gregory P

    2008-04-01

    It has been suggested that sex differences in the processing of erotic material (e.g., memory, genital arousal, brain activation patterns) may also be reflected by differential attention to visual cues in erotic material. To test this hypothesis, we presented 20 heterosexual men and 20 heterosexual women with erotic and non-erotic images of heterosexual couples and tracked their eye movements during scene presentation. Results supported previous findings that erotic and non-erotic information was visually processed in a different manner by both men and women. Men looked at opposite sex figures significantly longer than did women, and women looked at same sex figures significantly longer than did men. Within-sex analyses suggested that men had a strong visual attention preference for opposite sex figures as compared to same sex figures, whereas women appeared to disperse their attention evenly between opposite and same sex figures. These differences, however, were not limited to erotic images but were evident in non-erotic images as well. No significant sex differences were found for attention to the contextual region of the scenes. Results were interpreted as potentially supportive of recent studies showing a greater non-specificity of sexual arousal in women. This interpretation assumes there is an erotic valence to images of the sex to which one orients, even when the image is not explicitly erotic. It also assumes a relationship between visual attention and erotic valence.

  19. An Inquiry into the Nature of Uncle Joe's Representation and Meaning.

    ERIC Educational Resources Information Center

    Muffoletto, Robert

    2001-01-01

    Addresses a "critical" or "reflective" visual literacy. Situates visual representations and their interpretation (the construction of meaning) within a context that raises questions about benefit and power. Explores four main topics: the image as text; analysis and meaning construction; visual literacy as a liberatory practice;…

  20. Geologic mapping of the Bauru Group in Sao Paulo state by LANDSAT images. [Brazil

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Godoy, A. M.

    1983-01-01

    The occurrence of the Bauru Group in Sao Paulo State was studied, with emphasis on the western plateau. Regional geological mapping was carried out at a 1:250,000 scale with the help of MSS/LANDSAT images. The visual interpretation of images consisted basically of identifying different spectral characteristics of the geological units using channels 5 and 7. Complementary studies were made using an Interactive Image (I-100) analyser to treat the data in order to facilitate the extraction of information, particularly for areas where visual interpretation proved to be difficult. Regional characteristics provided by MSS/LANDSAT images, coupled with lithostratigraphic studies carried out in the areas of occurrence of Bauru Group sediments, enabled the homogenization of criteria for the subdivision of this group. A spatial distribution of the mapped units was obtained for the entire State of Sao Paulo, and results were correlated with proposed stratigraphic divisions.

  1. Comparison of treadmill exercise stress cardiac MRI to stress echocardiography in healthy volunteers for adequacy of left ventricular endocardial wall visualization: A pilot study

    PubMed Central

    Thavendiranathan, Paaladinesh; Dickerson, Jennifer A.; Scandling, Debbie; Balasubramanian, Vijay; Pennell, Michael L.; Hinton, Alice; Raman, Subha V.; Simonetti, Orlando P.

    2013-01-01

    Purpose: To compare exercise stress cardiac magnetic resonance (cardiac MR) to echocardiography in healthy volunteers with respect to adequacy of endocardial visualization and confidence of stress study interpretation. Materials and Methods: 28 healthy volunteers (aged 28 ± 11 years, 15 males) underwent exercise stress echo and cardiac MR one week apart, randomly assigned to which test was performed first. Stress cardiac MR was performed using an MRI-compatible treadmill; stress echo was performed per routine protocol. Cardiac MR and echo images were independently reviewed and scored for adequacy of endocardial visualization and confidence in interpretation of the stress study. Results: Heart rate at the time of imaging was similar between the studies. Average time from cessation of exercise to start of imaging (21 vs. 31 seconds, p<0.001) and time to acquire stress images (20 vs. 51 seconds, p<0.001) were shorter for cardiac MR. The number of myocardial segments adequately visualized was significantly higher by cardiac MR at rest (99.8% versus 96.4%, p=0.002) and stress (99.8% versus 94.1%, p=0.001). The proportion of subjects in whom there was high confidence in the interpretation was higher for cardiac MR than echo (96% vs. 60%, p=0.005). Conclusion: Exercise stress cardiac MR to assess peak exercise wall motion is feasible and can be performed at least as rapidly as stress echo. PMID:24123562

  2. Unsupervised Neural Network Quantifies the Cost of Visual Information Processing.

    PubMed

    Orbán, Levente L; Chartier, Sylvain

    2015-01-01

    Untrained, "flower-naïve" bumblebees display behavioural preferences when presented with visual properties such as colour, symmetry, spatial frequency and others. Two unsupervised neural networks were implemented to understand the extent to which these models capture elements of bumblebees' unlearned visual preferences towards flower-like visual properties. The computational models, which are variants of Independent Component Analysis and Feature-Extracting Bidirectional Associative Memory, use images of test-patterns that are identical to ones used in behavioural studies. Each model works by decomposing images of floral patterns into meaningful underlying factors. We reconstruct the original floral image using the components and compare the quality of the reconstructed image to the original image. Independent Component Analysis matches behavioural results substantially better across several visual properties. These results are interpreted to support a hypothesis that the temporal and energetic costs of information processing by pollinators served as a selective pressure on floral displays: flowers adapted to pollinators' cognitive constraints.
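    The decompose-and-reconstruct comparison can be sketched as follows. Note that PCA via SVD stands in here for the ICA and FEBAM models the study actually used, purely to illustrate measuring reconstruction quality against the number of components retained:

```python
import numpy as np

def reconstruction_error(patterns, k):
    """Decompose flattened patterns into k components, reconstruct, and
    return mean squared error. PCA via SVD stands in for ICA/FEBAM here."""
    X = np.asarray(patterns, dtype=float)
    Xc = X - X.mean(axis=0)                      # center the patterns
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    Xk = (U[:, :k] * s[:k]) @ Vt[:k]             # rank-k reconstruction
    return float(np.mean((Xc - Xk) ** 2))

rng = np.random.default_rng(0)
patterns = rng.random((6, 16))                   # six flattened test patterns
errs = [reconstruction_error(patterns, k) for k in (1, 3, 6)]  # error falls as k grows
```

    The study's cost-of-processing argument corresponds to how quickly this error falls: patterns that reconstruct well from few components are, in this framing, cheaper to process.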

  3. Is airport baggage inspection just another medical image?

    NASA Astrophysics Data System (ADS)

    Gale, Alastair G.; Mugglestone, Mark D.; Purdy, Kevin J.; McClumpha, A.

    2000-04-01

    A similar inspection situation to medical imaging appears to be that of the airport security screener who examines X-ray images of passenger baggage. There is, however, little research overlap between the two areas. Studies of observer performance in examining medical images have led to a conceptual model which has been used successfully to understand diagnostic errors and develop appropriate training strategies. The model stresses three processes: visual search, detection of potential targets, and interpretation of these areas, with most errors being due to the latter two factors. An initial study of baggage inspection is reported, using several brief image presentations, to examine the applicability of such a medical model to this domain. The task selected was the identification of potential Improvised Explosive Devices (IEDs). Specifically investigated was the visual search behavior of inspectors. It was found that IEDs could be identified in a very brief image presentation, and performance improved with increased presentation time. Participants fixated on IEDs very early on and sometimes concentrated wholly on this part of the baggage display. When IEDs were missed, this was mainly due to interpretative factors rather than to visual search or IED detection. It is argued that the observer model can be applied successfully to this scenario.

  4. A neural marker of medical visual expertise: implications for training.

    PubMed

    Rourke, Liam; Cruikshank, Leanna C; Shapke, Larissa; Singhal, Anthony

    2016-12-01

    Researchers have identified a component of the EEG that discriminates visual experts from novices. The marker indexes a comprehensive model of visual processing, and if it is apparent in physicians, it could be used to investigate the development and training of their visual expertise. The purpose of this study was to determine whether a neural marker of visual expertise, the enhanced N170 event-related potential, is apparent in the EEGs of physicians as they interpret diagnostic images. We conducted a controlled trial with 10 cardiologists and 9 pulmonologists. Each participant completed 520 trials of a standard visual processing task involving the rapid evaluation of EKGs and CXRs indicating lung disease. Ostensibly, each participant is an expert with one type of image and competent with the other. We collected behavioral data on the participants' expertise with EKGs and CXRs and electrophysiological data on the magnitude, latency, and scalp location of their N170 ERPs as they interpreted the two types of images. Cardiologists demonstrated significantly more expertise with EKGs than CXRs, and this was reflected in an increased amplitude of their N170 ERPs while reading EKGs compared to CXRs. Pulmonologists demonstrated equal expertise with both types of images, and this was reflected in equal N170 ERP amplitudes for EKGs and CXRs. The results suggest provisionally that visual expertise has a similar substrate in medical practice as in other domains that have been studied extensively. This provides support for applying a sophisticated body of literature to questions about the training and assessment of visual expertise among physicians.

  5. Receiver-operating-characteristic analysis of an automated program for analyzing striatal uptake of 123I-ioflupane SPECT images: calibration using visual reads.

    PubMed

    Kuo, Phillip Hsin; Avery, Ryan; Krupinski, Elizabeth; Lei, Hong; Bauer, Adam; Sherman, Scott; McMillan, Natalie; Seibyl, John; Zubal, George

    2013-03-01

    A fully automated objective striatal analysis (OSA) program that quantitates dopamine transporter uptake in subjects with suspected Parkinson's disease was applied to images from clinical (123)I-ioflupane studies. The striatal binding ratios, or alternatively the specific binding ratio (SBR) of the lowest putamen uptake, were computed, and receiver-operating-characteristic (ROC) analysis was applied to 94 subjects to determine the best discriminator using this quantitative method. Ninety-four (123)I-ioflupane SPECT scans were analyzed from patients referred to our clinical imaging department and were reconstructed using the manufacturer-supplied reconstruction and filtering parameters for the radiotracer. Three trained readers conducted independent visual interpretations and reported each case as either normal or showing dopaminergic deficit (abnormal). The same images were analyzed using the OSA software, which locates the striatal and occipital structures and places regions of interest on the caudate and putamen. Additionally, the OSA places a region of interest on the occipital region that is used to calculate the background-subtracted SBR. The lower SBR of the 2 putamen regions was taken as the quantitative report. The 33 normal (bilateral comma-shaped striata) and 61 abnormal (unilateral or bilateral dopaminergic deficit) studies were analyzed to generate ROC curves. Twenty-nine of the scans were interpreted as normal and 59 as abnormal by all 3 readers. For 12 scans, the 3 readers did not unanimously agree in their interpretations (discordant). The ROC analysis, which used the visual-majority-consensus interpretation from the readers as the gold standard, yielded an area under the curve of 0.958 when using 1.08 as the threshold SBR for the lowest putamen. The sensitivity and specificity of the automated quantitative analysis were 95% and 89%, respectively. The OSA program delivers SBR quantitative values that have high sensitivity and specificity compared with visual interpretations by trained nuclear medicine readers. Such a program could be a helpful aid for readers not yet experienced with (123)I-ioflupane SPECT images and, if further adapted and validated, may be useful for assessing disease progression during pharmaceutical testing of therapies.
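    The threshold rule and its accuracy metrics can be sketched as follows; the SBR values and labels below are toy data, with only the 1.08 cutoff taken from the abstract:

```python
import numpy as np

def sens_spec(sbr, labels, threshold=1.08):
    """Sensitivity/specificity when an SBR at or below the threshold is called
    abnormal; labels: 1 = dopaminergic deficit, 0 = normal."""
    sbr = np.asarray(sbr, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pred = (sbr <= threshold).astype(int)
    tp = int(np.sum((pred == 1) & (labels == 1)))
    fn = int(np.sum((pred == 0) & (labels == 1)))
    tn = int(np.sum((pred == 0) & (labels == 0)))
    fp = int(np.sum((pred == 1) & (labels == 0)))
    return tp / (tp + fn), tn / (tn + fp)

# Toy lowest-putamen SBR values; only the 1.08 cutoff comes from the abstract.
sens, spec = sens_spec([0.52, 0.91, 1.00, 1.21, 1.48, 2.03],
                       [1, 1, 1, 0, 0, 0])
```

    Sweeping the threshold over all observed SBR values and plotting sensitivity against 1 - specificity would trace the ROC curve whose area the study reports as 0.958.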

  6. Visual Information Literacy: Reading a Documentary Photograph

    ERIC Educational Resources Information Center

    Abilock, Debbie

    2008-01-01

    Like a printed text, an architectural blueprint, a mathematical equation, or a musical score, a visual image is its own language. Visual literacy has three components: (1) learning; (2) thinking; and (3) communicating. A "literate" person is able to decipher the basic code and syntax, interpret the signs and symbols, correctly apply terms from an…

  7. [Spatial domain display for interference image dataset].

    PubMed

    Wang, Cai-Ling; Li, Yu-Shan; Liu, Xue-Bin; Hu, Bing-Liang; Jing, Juan-Juan; Wen, Jia

    2011-11-01

    The requirement for visualization of imaging interferometer data is pressing for users engaged in image interpretation and information extraction. However, conventional research on visualization has focused only on the spectral image dataset in the spectral domain. Hence, quick display of the interference spectral image dataset is one of the key problems in interference image processing. Conventional visualization of an interference dataset applies a classical spectral image display method after a Fourier transformation. In the present paper, the problem of quickly viewing interferometer imagery in the image domain is addressed, and an algorithm is proposed that simplifies the matter. The Fourier transformation is an obstacle because its computation time is very large, and the situation deteriorates further as the dataset grows. The proposed algorithm, named interference weighted envelopes, frees the display from the transformation. The authors choose three interference weighted envelopes based respectively on the Fourier transformation, the features of interference data, and the human visual system. A comparison of the proposed method with conventional methods shows a large difference in display time.

  8. Generation of oculomotor images during tasks requiring visual recognition of polygons.

    PubMed

    Olivier, G; de Mendoza, J L

    2001-06-01

    This paper concerns the contribution of mentally simulated ocular exploration to the generation of a visual mental image. In Exp. 1, repeated exploration of the outlines of an irregular decagon allowed incidental learning of the shape. Analyses showed subjects memorized their ocular movements rather than the polygon. In Exp. 2, exploration of a reversible figure such as a Necker cube was varied in opposite directions. Then both perspective possibilities were presented. Which perspective the subjects recognized depended on the way they had explored the ambiguous figure. In both experiments, during recognition the subjects recalled a visual mental image of the polygon, which they compared with the different polygons proposed for recognition. To interpret the data, hypotheses concerning common processes underlying both motor intention of ocular movements and generation of a visual image are suggested.

  9. The power of contextual effects in forensic anthropology: a study of biasability in the visual interpretations of trauma analysis on skeletal remains.

    PubMed

    Nakhaeizadeh, Sherry; Hanson, Ian; Dozzi, Nathalie

    2014-09-01

    The potential for contextual information to bias assessments in the forensic sciences has been demonstrated in several forensic disciplines. In this paper, biasability within forensic anthropology was examined by analyzing the effects of external manipulations on judgments and decision-making in visual trauma assessment. Three separate websites were created containing fourteen identical images, and participants were randomly assigned to one website. Each website provided different contextual information, to assess variation in interpretation of the same images between contexts. The results indicated a higher scoring of trauma identification responses for the mass grave context. Furthermore, a significant biasing effect was detected in the interpretation of four images. Less experienced participants were more likely to indicate the presence of trauma. This research demonstrates the impact of bias in forensic anthropological trauma assessments and highlights the importance of recognizing and limiting the cognitive vulnerabilities that forensic anthropologists might bring to the analysis. © 2014 American Academy of Forensic Sciences.

  10. Quantitative Image Analysis Techniques with High-Speed Schlieren Photography

    NASA Technical Reports Server (NTRS)

    Pollard, Victoria J.; Herron, Andrew J.

    2017-01-01

    Optical flow visualization techniques such as schlieren and shadowgraph photography are essential to understanding fluid flow when interpreting acquired wind tunnel test data. The output of standard implementations of these visualization techniques in test facilities is often limited to qualitative interpretation of the resulting images. Although various quantitative optical techniques have been developed, they often require special equipment or are focused on obtaining very precise and accurate data about the visualized flow, and such systems are not practical in small, production wind tunnel test facilities. However, high-speed photography has become a common upgrade in many test facilities to better capture images of unsteady flow phenomena such as oscillating shocks and flow separation. This paper describes novel techniques utilized by the authors to analyze high-speed schlieren and shadowgraph imagery captured during wind tunnel testing and to quantify the frequency content of the observed unsteady flow. Such techniques have applications in parametric geometry studies and in small facilities where more specialized equipment may not be available.
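    The frequency-content analysis described here can be illustrated with a minimal sketch: treat each pixel of the high-speed image stack as a time series and locate the dominant peak of its spectrum. Function and parameter names are hypothetical.

```python
import numpy as np

def dominant_frequency(frames, fps, y, x):
    """Estimate the dominant unsteady-flow frequency at pixel (y, x)
    from a stack of high-speed schlieren frames, shape (t, rows, cols)."""
    signal = frames[:, y, x].astype(float)
    signal -= signal.mean()                    # remove the steady (DC) component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return freqs[np.argmax(spectrum)]

# synthetic 200 Hz oscillation sampled at 10 kHz
fps, n = 10_000, 1024
t = np.arange(n) / fps
frames = np.sin(2 * np.pi * 200 * t)[:, None, None] * np.ones((n, 2, 2))
print(dominant_frequency(frames, fps, 0, 0))   # close to 200 Hz (within one bin)
```

    In practice one would average spectra over a region of interest to suppress camera noise before picking the peak.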

  11. Target recognition and scene interpretation in image/video understanding systems based on network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2004-08-01

    Vision is only a part of a system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, which is an interpretation of visual information in terms of these knowledge models. Such mechanisms provide reliable recognition when an object is occluded or cannot be recognized as a whole. It is hard to split the entire system apart, and reliable solutions to target recognition problems are possible only within the solution of a more generic image understanding problem. The brain reduces informational and computational complexity using implicit symbolic coding of features, hierarchical compression, and selective processing of visual information. A biologically inspired Network-Symbolic representation, in which both systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, is the most feasible basis for such models. It converts visual information into relational Network-Symbolic structures, avoiding artificial precise computations of 3-dimensional models. Network-Symbolic transformations derive abstract structures, which allows for invariant recognition of an object as an exemplar of a class. Active vision helps create consistent models. Attention, separation of figure from ground, and perceptual grouping are special kinds of network-symbolic transformations. Such image/video understanding systems will recognize targets reliably.

  12. Visualization of the variability of 3D statistical shape models by animation.

    PubMed

    Lamecker, Hans; Seebass, Martin; Lange, Thomas; Hege, Hans-Christian; Deuflhard, Peter

    2004-01-01

    Models of the 3D shape of anatomical objects, and knowledge of their statistical variability, are of great benefit in many computer-assisted medical applications such as image analysis and therapy or surgery planning. Statistical shape models have successfully been applied to automate the task of image segmentation. The generation of 3D statistical shape models requires the identification of corresponding points on two shapes, which remains a difficult problem, especially for shapes of complicated topology. In order to interpret and validate the variations encoded in a statistical shape model, visual inspection is of great importance. This work describes the generation and interpretation of statistical shape models of the liver and the pelvic bone.
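    A point distribution model of the kind described (mean shape plus principal modes of variation, with keyframes for animating one mode) can be sketched as follows, assuming landmark correspondence has already been established:

```python
import numpy as np

def build_shape_model(shapes):
    """Fit a point distribution model to corresponded shapes.
    shapes: (n_samples, n_points * dim) array of landmark coordinates.
    Returns the mean shape, the eigenmodes, and the per-mode std. dev."""
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD of the centered data matrix gives the principal modes of variation
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    stds = s / np.sqrt(len(shapes) - 1)        # std. dev. along each mode
    return mean, vt, stds

def animate_mode(mean, vt, stds, mode=0, amplitudes=(-2, 0, 2)):
    """Keyframe shapes at +/- k standard deviations along one mode."""
    return [mean + a * stds[mode] * vt[mode] for a in amplitudes]

rng = np.random.default_rng(0)
shapes = rng.normal(size=(10, 8))              # 10 toy shapes, 4 2-D landmarks
mean, vt, stds = build_shape_model(shapes)
frames = animate_mode(mean, vt, stds)
print(len(frames), frames[0].shape)            # 3 (8,)
```

    Interpolating between such keyframes and rendering them in sequence is the animation the paper uses for visual inspection of each mode.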

  13. Interpretation of remotely sensed data and its applications in oceanography

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Tanaka, K.; Inostroza, H. M.; Verdesio, J. J.

    1982-01-01

    The methodology of interpretation of remote sensing data and its oceanographic applications are described. The elements of image interpretation for different types of sensors are discussed. The sensors utilized are the multispectral scanner of LANDSAT, and the thermal infrared of NOAA and geostationary satellites. Visual and automatic data interpretation in studies of pollution, the Brazil current system, and upwelling along the southeastern Brazilian coast are compared.

  14. Eye-tracking AFROC study of the influence of experience and training on chest x-ray interpretation

    NASA Astrophysics Data System (ADS)

    Manning, David; Ethell, Susan C.; Crawford, Trevor

    2003-05-01

    Four observer groups with different levels of expertise were tested in an investigation into the comparative nature of expert performance. The radiological task was the detection and localization of significant pulmonary nodules in postero-anterior views of the adult chest. Three test banks of 40 images were used. The observer groups were radiologists, 6 experienced radiographers prior to a six-month training program in chest image interpretation, the same radiographers after their training program, and 6 fresher undergraduate radiography students. Eye tracking was carried out on all observers to demonstrate differences in visual activity, and nodule detection performance was measured with an AFROC technique. Detection performances of the four groups showed that the radiologists and the radiographers after training were measurably superior at the task. The eye-tracking parameters saccade length, number of fixations, visual coverage, and scrutiny time per film were measured for all subjects and compared. The missed nodules that were fixated and those that were not were also determined for the radiologist group. Results have shown distinct stylistic differences in the visual scanning strategies of experienced and inexperienced observers that we believe can be generalized into a description of expert versus non-expert performance. The findings will be used in an educational program of image interpretation for non-radiology practitioners.

  15. Simulators for training in ultrasound guided procedures.

    PubMed

    Farjad Sultan, Syed; Shorten, George; Iohom, Gabrielle

    2013-06-01

    The four major categories of skill sets associated with proficiency in ultrasound guided regional anaesthesia are 1) understanding device operations, 2) image optimization, 3) image interpretation and 4) visualization of needle insertion and injection of the local anaesthetic solution. Of these, visualization of needle insertion and injection of the local anaesthetic solution can be practiced using simulators and phantoms. This survey of existing simulators summarizes the advantages and disadvantages of each. Current deficits pertain to the validation process.

  16. Using Anatomic Magnetic Resonance Image Information to Enhance Visualization and Interpretation of Functional Images: A Comparison of Methods Applied to Clinical Arterial Spin Labeling Images

    PubMed Central

    Dai, Weiying; Soman, Salil; Hackney, David B.; Wong, Eric T.; Robson, Philip M.; Alsop, David C.

    2017-01-01

    Functional imaging provides hemodynamic and metabolic information and is increasingly being incorporated into clinical diagnostic and research studies. Typically, functional images have reduced signal-to-noise ratio and spatial resolution compared to the other, non-functional cross-sectional images obtained as part of a routine clinical protocol. We hypothesized that enhancing the visualization and interpretation of functional images with anatomic information could provide preferable quality and superior diagnostic value. In this work, we implemented five methods (frequency addition, frequency multiplication, wavelet transform, non-subsampled contourlet transform and intensity-hue-saturation) and a newly proposed ShArpening by Local Similarity with Anatomic images (SALSA) method to enhance the visualization of functional images while preserving the original functional contrast and quantitative signal intensity characteristics over larger spatial scales. Arterial spin labeling blood flow MR images of the brain were enhanced using anatomic images with multiple contrasts. The algorithms were validated on a numerical phantom, and their performance on images of brain tumor patients was assessed by quantitative metrics and neuroradiologist subjective ratings. The frequency multiplication method had the lowest residual error for preserving the original functional image contrast at larger spatial scales (55%–98% of the other methods with simulated data and 64%–86% with experimental data). It was also graded significantly more highly by the radiologists (p<0.005 for clear brain anatomy around the tumor). Compared to the other methods, SALSA provided 11%–133% higher similarity with ground truth images in the simulation and showed only slightly lower neuroradiologist grading scores. Most of these monochrome methods do not require any prior knowledge about the functional and anatomic image characteristics, except the acquired resolution. Hence, automatic implementation on clinical images should be readily feasible. PMID:27723582
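    The frequency-domain fusion idea (keep the functional image's coarse-scale contrast, borrow fine-scale detail from the anatomic image) can be sketched as follows. The hard radial cutoff and its value are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def fuse(functional, anatomic, cutoff=0.15):
    """Sharpen a low-resolution functional image with the high spatial
    frequencies of a co-registered anatomic image. The binary low-pass
    mask and cutoff are a simplification of the paper's methods."""
    def lowpass_mask(shape, cutoff):
        fy = np.fft.fftfreq(shape[0])[:, None]
        fx = np.fft.fftfreq(shape[1])[None, :]
        return (np.hypot(fy, fx) <= cutoff).astype(float)

    mask = lowpass_mask(functional.shape, cutoff)
    f_low = np.fft.fft2(functional) * mask        # functional contrast, coarse scales
    a_high = np.fft.fft2(anatomic) * (1 - mask)   # anatomic detail, fine scales
    return np.real(np.fft.ifft2(f_low + a_high))

func = np.outer(np.linspace(0, 1, 32), np.ones(32))   # smooth "perfusion" ramp
anat = np.random.default_rng(1).random((32, 32))      # detailed "anatomy"
fused = fuse(func, anat)
print(fused.shape)                                    # (32, 32)
```

    Because the DC and low-frequency terms come entirely from the functional image, its quantitative mean signal over large regions is preserved, which is the constraint the paper emphasizes.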

  17. Overview of machine vision methods in x-ray imaging and microtomography

    NASA Astrophysics Data System (ADS)

    Buzmakov, Alexey; Zolotov, Denis; Chukalina, Marina; Nikolaev, Dmitry; Gladkov, Andrey; Ingacheva, Anastasia; Yakimchuk, Ivan; Asadchikov, Victor

    2018-04-01

    Digital X-ray imaging has become widely used in science, medicine, and non-destructive testing, which allows modern digital image analysis to be applied for automatic information extraction and interpretation. We give a short review of applications of machine vision in scientific X-ray imaging and microtomography, including image processing, feature detection and extraction, image compression to increase camera throughput, microtomographic reconstruction, visualization, and setup adjustment.

  18. a Kml-Based Approach for Distributed Collaborative Interpretation of Remote Sensing Images in the Geo-Browser

    NASA Astrophysics Data System (ADS)

    Huang, L.; Zhu, X.; Guo, W.; Xiang, L.; Chen, X.; Mei, Y.

    2012-07-01

    Existing implementations of collaborative image interpretation have many limitations for very large satellite imagery, such as inefficient browsing and slow transmission. This article presents a KML-based approach to support distributed, real-time, synchronous collaborative interpretation of remote sensing images in the geo-browser. As an OGC standard, KML (Keyhole Markup Language) has the advantage of organizing various types of geospatial data (including imagery, annotations, geometry, etc.) in the geo-browser. Existing KML elements can be used to describe simple interpretation results indicated by vector symbols. To widen its application, this article extends KML with elements that describe complex image processing operations, including band combination, grey-level transformation, and geometric correction. The improved KML is employed to describe and share interpretation operations and results among interpreters. Further, this article develops collaboration-related services: a collaboration launch service, a perceiving service, and a communication service. The launch service creates a collaborative interpretation task and provides a unified interface for all participants. The perceiving service allows interpreters to share collaboration awareness. The communication service provides interpreters with written communication. Finally, the GeoGlobe geo-browser (an extensible and flexible geospatial platform developed in LIESMARS) is selected to perform experiments of collaborative image interpretation. The geo-browser, which manages and visualizes massive geospatial information, provides distributed users with quick browsing and transmission. Meanwhile, GIS data (for example DEM, DTM, and thematic maps) can be integrated in the geo-browser to assist in improving the accuracy of interpretation. Results show that the proposed method can support distributed collaborative interpretation of remote sensing images.
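    A single interpretation result of the kind the article shares can be serialized as a standard KML Placemark with Python's standard library; the article's extended processing elements would be vendor extensions beyond this sketch, and the example values are hypothetical:

```python
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"

def interpretation_placemark(name, lon, lat, note):
    """Serialize one interpretation result as a standard KML Placemark
    that any KML-aware geo-browser can display."""
    ET.register_namespace("", KML_NS)
    kml = ET.Element(f"{{{KML_NS}}}kml")
    doc = ET.SubElement(kml, f"{{{KML_NS}}}Document")
    pm = ET.SubElement(doc, f"{{{KML_NS}}}Placemark")
    ET.SubElement(pm, f"{{{KML_NS}}}name").text = name
    ET.SubElement(pm, f"{{{KML_NS}}}description").text = note
    point = ET.SubElement(pm, f"{{{KML_NS}}}Point")
    # KML coordinates are lon,lat,altitude
    ET.SubElement(point, f"{{{KML_NS}}}coordinates").text = f"{lon},{lat},0"
    return ET.tostring(kml, encoding="unicode")

print(interpretation_placemark("landslide scar", 114.3, 30.5,
                               "interpreted by analyst A"))
```

    Sharing such documents among participants is what makes the interpretation results portable across geo-browsers.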

  19. Viewers' Interpretations of Associational Montage: The Influence of "Visual Literacy" and Educational Background.

    ERIC Educational Resources Information Center

    Messaris, Paul; Nielsen, Karen O.

    A study examined the influence of viewers' backgrounds on their interpretation of "associational montage" in television advertising (editing which seeks to imply an analogy between the product and a juxtaposed image possessing desirable qualities). Subjects, 32 television professionals from two urban television stations and 95 customers…

  20. Exploring the potential of analysing visual search behaviour data using FROC (free-response receiver operating characteristic) method: an initial study

    NASA Astrophysics Data System (ADS)

    Dong, Leng; Chen, Yan; Dias, Sarah; Stone, William; Dias, Joseph; Rout, John; Gale, Alastair G.

    2017-03-01

    Visual search techniques and FROC analysis have been widely used in radiology to understand medical image perceptual behaviour and diagnostic performance. The potential of exploiting the advantages of both methodologies is of great interest to medical researchers. In this study, eye tracking data from eight dental practitioners were investigated; the visual search measures and their analyses are considered here. Each participant interpreted 20 dental radiographs chosen by an expert dental radiologist. Various eye movement measurements were obtained based on image area of interest (AOI) information. FROC analysis was then carried out using these eye movement measurements as a direct input source, and the performance of FROC methods using different input parameters was tested. The results showed significant differences in FROC measures based on eye movement data between groups with different experience levels: the area under the curve (AUC) score showed higher values for the experienced group for the fixation and dwell time measurements. Positive correlations were also found between AUC scores for FROC conducted on eye movement data and for rating-based FROC. FROC analysis using eye movement measurements as input variables can therefore act as a potential performance indicator for assessment in medical image interpretation and for evaluating training procedures. Such analyses lead to new ways of combining eye movement data and FROC methods, providing an alternative dimension for assessing performance and visual search behaviour in medical imaging perceptual tasks.
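    The AUC figure of merit used to compare groups can be computed directly from any scalar eye-movement measure via the Mann-Whitney statistic; the dwell-time values below are hypothetical:

```python
import numpy as np

def auc_from_scores(pos, neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive case scores
    higher than a randomly chosen negative case (ties count 0.5)."""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# hypothetical dwell times (s) on true-lesion AOIs vs normal AOIs
dwell_lesion = [1.8, 2.4, 0.9, 3.1]
dwell_normal = [0.4, 1.1, 0.7, 0.6]
print(auc_from_scores(dwell_lesion, dwell_normal))   # → 0.9375
```

    Substituting fixation counts for dwell times in the same routine reproduces the kind of per-measure AUC comparison the study reports.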

  1. Experimental evidence for improved neuroimaging interpretation using three-dimensional graphic models.

    PubMed

    Ruisoto, Pablo; Juanes, Juan Antonio; Contador, Israel; Mayoral, Paula; Prats-Galino, Alberto

    2012-01-01

    Three-dimensional (3D) or volumetric visualization is a useful resource for learning about the anatomy of the human brain. However, the effectiveness of 3D spatial visualization has not yet been assessed systematically. This report analyzes whether 3D volumetric visualization helps learners to identify and locate subcortical structures more precisely than classical cross-sectional images based on a two dimensional (2D) approach. Eighty participants were assigned to each experimental condition: 2D cross-sectional visualization vs. 3D volumetric visualization. Both groups were matched for age, gender, visual-spatial ability, and previous knowledge of neuroanatomy. Accuracy in identifying brain structures, execution time, and level of confidence in the response were taken as outcome measures. Moreover, interactive effects between the experimental conditions (2D vs. 3D) and factors such as level of competence (novice vs. expert), image modality (morphological and functional), and difficulty of the structures were analyzed. The percentage of correct answers (hit rate) and level of confidence in responses were significantly higher in the 3D visualization condition than in the 2D. In addition, the response time was significantly lower for the 3D visualization condition in comparison with the 2D. The interaction between the experimental condition (2D vs. 3D) and difficulty was significant, and the 3D condition facilitated the location of difficult images more than the 2D condition. 3D volumetric visualization helps to identify brain structures such as the hippocampus and amygdala, more accurately and rapidly than conventional 2D visualization. This paper discusses the implications of these results with regards to the learning process involved in neuroimaging interpretation. Copyright © 2012 American Association of Anatomists.

  2. Remote sensing programs and courses in engineering and water resources

    NASA Technical Reports Server (NTRS)

    Kiefer, R. W.

    1981-01-01

    The content of typical basic and advanced remote sensing and image interpretation courses are described and typical remote sensing graduate programs of study in civil engineering and in interdisciplinary environmental remote sensing and water resources management programs are outlined. Ideally, graduate programs with an emphasis on remote sensing and image interpretation should be built around a core of five courses: (1) a basic course in fundamentals of remote sensing upon which the more specialized advanced remote sensing courses can build; (2) a course dealing with visual image interpretation; (3) a course dealing with quantitative (computer-based) image interpretation; (4) a basic photogrammetry course; and (5) a basic surveying course. These five courses comprise up to one-half of the course work required for the M.S. degree. The nature of other course work and thesis requirements vary greatly, depending on the department in which the degree is being awarded.

  3. Visualization of hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Hogervorst, Maarten A.; Bijl, Piet; Toet, Alexander

    2007-04-01

    We developed four new techniques to visualize hyperspectral image data for man-in-the-loop target detection. The methods respectively: (1) display the subsequent bands as a movie ("movie"), (2) map the data onto three channels and display these as a colour image ("colour"), (3) display the correlation between the pixel signatures and a known target signature ("match") and (4) display the output of a standard anomaly detector ("anomaly"). The movie technique requires no assumptions about the target signature and involves no information loss. The colour technique produces a single image that can be displayed in real-time; a disadvantage of this technique is loss of information. A display of the match between a target signature and the pixel signatures can be interpreted easily and quickly, but this technique relies on precise knowledge of the target signature. The anomaly detector flags pixels with signatures that deviate from the (local) background. We performed a target detection experiment with human observers to determine their relative performance with the four techniques. The results show that the "match" presentation yields the best performance, followed by "movie" and "anomaly", while performance with the "colour" presentation was the poorest. Each scheme has its advantages and disadvantages and is more or less suited to real-time or post-hoc processing. The rationale is that the final interpretation is best done by a human observer. In contrast to automatic target recognition systems, the interpretation of hyperspectral imagery by the human visual system is robust to noise and image transformations and requires a minimal number of assumptions (about the signatures of target and background, target shape, etc.). When more knowledge about target and background is available, it may be used to help the observer interpret the data (aided target detection).
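    The "match" display (technique 3) amounts to a per-pixel correlation between each spectral signature and the known target signature, sketched here with illustrative data:

```python
import numpy as np

def match_image(cube, target):
    """Per-pixel correlation between spectral signatures and a known
    target signature: the 'match' display for a hyperspectral cube."""
    rows, cols, bands = cube.shape
    flat = cube.reshape(-1, bands)
    flat = flat - flat.mean(axis=1, keepdims=True)   # center each signature
    t = target - target.mean()
    num = flat @ t
    den = np.linalg.norm(flat, axis=1) * np.linalg.norm(t) + 1e-12
    return (num / den).reshape(rows, cols)           # values in [-1, 1]

rng = np.random.default_rng(2)
cube = rng.random((8, 8, 30))                        # toy 30-band cube
target = cube[3, 3].copy()                           # plant the target signature
corr = match_image(cube, target)
print(corr[3, 3])                                    # ≈ 1.0 (perfect match)
```

    Displaying `corr` as a grey-level image gives the single-frame "match" view; the planted pixel lights up while uncorrelated background stays near zero.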

  4. A "Thinking Journey" to the Planets Using Scientific Visualization Technologies: Implications to Astronomy Education.

    ERIC Educational Resources Information Center

    Yair, Yoav; Schur, Yaron; Mintz, Rachel

    2003-01-01

    Presents a novel approach to teaching astronomy and planetary sciences centered on visual images and simulations of planetary objects. Focuses on the study of the moon and the planet Mars by means of observations, interpretation, and comparison to planet Earth. (Contains 22 references.) (Author/YDS)

  5. Enhancing the quality of thermographic diagnosis in medicine

    NASA Astrophysics Data System (ADS)

    Kuklitskaya, A. G.; Olefir, G. I.

    2005-12-01

    This paper discusses the possibilities of enhancing the quality of thermographic diagnosis in medicine by increasing the objectivity of the processes of recording, visualization, and interpretation of IR images (thermograms) of patients. A test program is proposed for the diagnosis of oncopathology of the mammary glands, involving standard conditions for recording thermograms, visualization of the IR image in several versions of the color palette and shades of grey, its interpretation in accordance with a rigorously specified algorithm that takes into account the temperature regime in the Zakharin-Head zone of the heart, and the drawing of a conclusion based on a statistical analysis of literature data and the results of a survey of more than 3000 patients of the Minsk City Clinical Oncological Dispensary.

  6. Classifying the Perceptual Interpretations of a Bistable Image Using EEG and Artificial Neural Networks

    PubMed Central

    Hramov, Alexander E.; Maksimenko, Vladimir A.; Pchelintseva, Svetlana V.; Runnova, Anastasiya E.; Grubov, Vadim V.; Musatov, Vyacheslav Yu.; Zhuravlev, Maksim O.; Koronovskii, Alexey A.; Pisarchik, Alexander N.

    2017-01-01

    In order to classify different human brain states related to visual perception of ambiguous images, we use an artificial neural network (ANN) to analyze multichannel EEG. The classifier built on the basis of a multilayer perceptron achieves up to 95% accuracy in classifying EEG patterns corresponding to two different interpretations of the Necker cube. The important feature of our classifier is that trained on one subject it can be used for the classification of EEG traces of other subjects. This result suggests the existence of common features in the EEG structure associated with distinct interpretations of bistable objects. We firmly believe that the significance of our results is not limited to visual perception of the Necker cube images; the proposed experimental approach and developed computational technique based on ANN can also be applied to study and classify different brain states using neurophysiological data recordings. This may give new directions for future research in the field of cognitive and pathological brain activity, and for the development of brain-computer interfaces. PMID:29255403
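    The classification setup in this record can be illustrated with a toy two-class example; a minimal linear classifier stands in for the paper's multilayer perceptron, and the synthetic "EEG features" are assumptions:

```python
import numpy as np

def train_logistic(X, y, lr=0.5, epochs=500):
    """Minimal logistic-regression classifier for two-class feature
    vectors (a stand-in for the paper's multilayer perceptron)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid
        grad = p - y                              # cross-entropy gradient
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

# synthetic "EEG band-power features" for the two Necker-cube percepts
rng = np.random.default_rng(0)
X0 = rng.normal(-1.0, 0.5, size=(50, 4))          # percept A
X1 = rng.normal(+1.0, 0.5, size=(50, 4))          # percept B
X = np.vstack([X0, X1])
y = np.r_[np.zeros(50), np.ones(50)]
w, b = train_logistic(X, y)
acc = (((X @ w + b) > 0) == y).mean()
print(acc)                                        # high accuracy on separable data
```

    The paper's cross-subject result corresponds to training on one subject's feature vectors and evaluating on another's, which this sketch does not attempt.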

  7. User Directed Tools for Exploiting Expert Knowledge in an Immersive Segmentation and Visualization Environment

    NASA Technical Reports Server (NTRS)

    Senger, Steven O.

    1998-01-01

    Volumetric data sets have become common in medicine and many sciences through technologies such as computed x-ray tomography (CT), magnetic resonance (MR), positron emission tomography (PET), confocal microscopy and 3D ultrasound. When presented with 2D images, humans immediately and unconsciously begin a visual analysis of the scene. The viewer surveys the scene, identifying significant landmarks and building an internal mental model of the presented information. The identification of features is strongly influenced by the viewer's expectations based upon their expert knowledge of what the image should contain. While not a conscious activity, the viewer makes a series of choices about how to interpret the scene. These choices occur in parallel with viewing the scene and effectively change the way the viewer sees the image. It is this interaction of viewing and choice which is the basis of many familiar visual illusions. This is especially important in the interpretation of medical images, where it is the expert knowledge of the radiologist that interprets the image. For 3D data sets this interaction of view and choice is frustrated because choices must precede the visualization of the data set. It is not possible to visualize the data set without making some initial choices that determine how the volume of data is presented to the eye. These choices include viewpoint orientation, region identification, and color and opacity assignments. Further compounding the problem is the fact that these visualization choices are defined in terms of computer graphics rather than the language of the expert's knowledge. The long-term goal of this project is to develop an environment where the user can interact with volumetric data sets using tools which promote the utilization of expert knowledge by incorporating visualization and choice into a tight computational loop. The tools will support activities involving the segmentation of structures, construction of surface meshes and local filtering of the data set. To conform to this environment, tools should have several key attributes. First, they should rely only on computations over a local neighborhood of the probe position. Second, they should operate iteratively over time, converging towards a limit behavior. Third, they should adapt to user input, modifying their operational parameters with time.

  8. Cardiac imaging: working towards fully-automated machine analysis & interpretation.

    PubMed

    Slomka, Piotr J; Dey, Damini; Sitek, Arkadiusz; Motwani, Manish; Berman, Daniel S; Germano, Guido

    2017-03-01

    Non-invasive imaging plays a critical role in managing patients with cardiovascular disease. Although subjective visual interpretation remains the clinical mainstay, quantitative analysis facilitates objective, evidence-based management, and advances in clinical research. This has driven developments in computing and software tools aimed at achieving fully automated image processing and quantitative analysis. In parallel, machine learning techniques have been used to rapidly integrate large amounts of clinical and quantitative imaging data to provide highly personalized individual patient-based conclusions. Areas covered: This review summarizes recent advances in automated quantitative imaging in cardiology and describes the latest techniques which incorporate machine learning principles. The review focuses on the cardiac imaging techniques which are in wide clinical use. It also discusses key issues and obstacles for these tools to become utilized in mainstream clinical practice. Expert commentary: Fully-automated processing and high-level computer interpretation of cardiac imaging are becoming a reality. Application of machine learning to the vast amounts of quantitative data generated per scan and integration with clinical data also facilitates a move to more patient-specific interpretation. These developments are unlikely to replace interpreting physicians but will provide them with highly accurate tools to detect disease, risk-stratify, and optimize patient-specific treatment. However, with each technological advance, we move further from human dependence and closer to fully-automated machine interpretation.

  9. Category identification of changed land-use polygons in an integrated image processing/geographic information system

    NASA Technical Reports Server (NTRS)

    Westmoreland, Sally; Stow, Douglas A.

    1992-01-01

    A framework is proposed for analyzing ancillary data and developing procedures for incorporating ancillary data to aid interactive identification of land-use categories in land-use updates. The procedures were developed for use within an integrated image processing/geographic information system (GIS) that permits simultaneous display of digital image data with the vector land-use data to be updated. With such systems and procedures, automated techniques are integrated with visual-based manual interpretation to exploit the capabilities of both. The procedural framework developed was applied as part of a case study to update a portion of the land-use layer in a regional-scale GIS. About 75 percent of the area in the study site that experienced a change in land use was correctly labeled into 19 categories using the combination of automated and visual interpretation procedures developed in the study.

  10. The effect of a chest imaging lecture on emergency department doctors' ability to interpret chest CT images: a randomized study.

    PubMed

    Keijzers, Gerben; Sithirasenan, Vasugi

    2012-02-01

    To assess the chest computed tomography (CT) imaging interpreting skills of emergency department (ED) doctors and to study the effect of a CT chest imaging interpretation lecture on these skills. Sixty doctors in two EDs were randomized, using computerized randomization, to either attend a chest CT interpretation lecture or not to attend this lecture. Within 2 weeks of the lecture, the participants completed a questionnaire on demographic variables, anatomical knowledge, and diagnostic interpretation of 10 chest CT studies. Outcome measures included anatomical knowledge score, diagnosis score, and the combined overall score, all expressed as a percentage of correctly answered questions (0-100). Data on 58 doctors were analyzed, of which 27 were randomized to attend the lecture. The CT interpretation lecture did not have an effect on anatomy knowledge scores (72.9 vs. 70.2%), diagnosis scores (71.2 vs. 69.2%), or overall scores (71.4 vs. 69.5%). Twenty-nine percent of doctors stated that they had a systematic approach to chest CT interpretation. Overall self-perceived competency for interpreting CT imaging (brain, chest, abdomen) was low (between 3.2 and 5.2 on a 10-point Visual Analogue Scale). A single chest CT interpretation lecture did not improve chest CT interpretation by ED doctors. Less than one-third of doctors had a systematic approach to chest CT interpretation. A standardized systematic approach may improve interpretation skills.

  11. Real-world applications of artificial neural networks to cardiac monitoring using radar and recent theoretical developments

    NASA Astrophysics Data System (ADS)

    Padgett, Mary Lou; Johnson, John L.; Vemuri, V. Rao

    1997-04-01

    This paper focuses on the use of a new image filtering technique, pulse-coupled neural network (PCNN) factoring, to enhance both the analysis and visual interpretation of noisy sinusoidal time signals, such as those produced by LLNL's Micropower Impulse Radar motion sensor. Separation of a slower carrier wave from faster, finer-detailed signals and from scattered noise is illustrated. The resulting images clearly show the changes over time of simulated heart-motion patterns. Such images can potentially assist a field medic in assessing the extent of combat injuries; they can also be transmitted, or stored and retrieved, for later analysis.

  12. Using video playbacks to study visual communication in a marine fish, Salaria pavo.

    PubMed

    Gonçalves; Oliveira; Körner; Poschadel; Schlupp

    2000-09-01

    Video playbacks have been successfully applied to the study of visual communication in several groups of animals. However, the technique is controversial because video monitors are designed with the human visual system in mind. Differences between the visual capabilities of humans and other animals will lead to perceptually different interpretations of video images. We simultaneously presented males and females of the peacock blenny, Salaria pavo, with a live conspecific male and an online video image of the same individual. Video images failed to elicit appropriate responses. Males were aggressive towards the live male but not towards video images of the same male. Similarly, females courted only the live male and spent more time near this stimulus. In contrast, females of the gynogenetic poeciliid Poecilia formosa showed an equal preference for a live P. mexicana male and a video image of one, suggesting that video images can elicit responses as strong as those to live animals. We discuss differences between the species that may explain their opposite reactions to video images. Copyright 2000 The Association for the Study of Animal Behaviour.

  13. Tongue reading: comparing the interpretation of visual information from inside the mouth, from electropalatographic and ultrasound displays of speech sounds.

    PubMed

    Cleland, Joanne; McCron, Caitlin; Scobbie, James M

    2013-04-01

    Speakers possess a natural capacity for lip reading; analogous to this, there may be an intuitive ability to "tongue-read." Although the ability of untrained participants to perceive aspects of the speech signal has been explored for some visual representations of the vocal tract (e.g. talking heads), it is not yet known to what extent there is a natural ability to interpret speech information presented through two clinical phonetic tools: electropalatography (EPG) and ultrasound. This study aimed to determine whether there is any intuitive ability to interpret the images produced by these systems, and whether one tool is more conducive to this than the other. Twenty adults viewed real-time and slow-motion EPG and ultrasound silent movies of 10 different linguo-palatal consonants and 4 vowels. Participants selected which segment they perceived from four forced-choice options. Overall, participants scored above chance in the EPG and ultrasound conditions, suggesting that these images can be interpreted intuitively to some degree. This was the case for consonants in both conditions and for vowels in the EPG condition.

  14. 3D Imaging of Microbial Biofilms: Integration of Synchrotron Imaging and an Interactive Visualization Interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, Mathew; Marshall, Matthew J.; Miller, Erin A.

    2014-08-26

    Understanding the interactions of structured microbial communities known as "biofilms" with other complex matrices is possible through X-ray microtomography imaging of the biofilms. Feature detection and image processing for this type of data focus on efficiently identifying and segmenting biofilms and bacteria in the datasets. The datasets are very large and often require manual intervention because of low contrast between objects and high noise levels. Thus, new software is required for effective interpretation and analysis of the data. This work describes the development and application of tools to analyze and visualize high-resolution X-ray microtomography datasets.

  15. What's So Different about Visuals?

    ERIC Educational Resources Information Center

    Williams, Thomas R.

    1993-01-01

    Shows how visual images and text differ from one another in the extent to which they resemble their referents; kinds of concepts they evoke; precision with which they evoke them; kinds of structures they impose on the information they convey; and degree to which that information can be interpreted by perceptual as opposed to higher level cognitive…

  16. Spatial thermal radiometry contribution to the Massif Armoricain and the Massif Central (France) litho-structural study

    NASA Technical Reports Server (NTRS)

    Scanvic, J. Y. (Principal Investigator)

    1980-01-01

    Thermal zones delimited on HCMM images by visual interpretation alone were correlated with geological units, and carbonate, granitic, and volcanic rocks were individualized. Rock signature is an evolving parameter, and further distinctions were made by combining day, night, and seasonal thermal image interpretation. The analysis also demonstrated that forest cover does not mask the thermal signature of the underlying rocks. Thermal anomalies were discovered, and geological targets were defined in the Paris Basin and the Montmarault granite.

  17. Optimal spatiotemporal representation of multichannel EEG for recognition of brain states associated with distinct visual stimulus

    NASA Astrophysics Data System (ADS)

    Hramov, Alexander; Musatov, Vyacheslav Yu.; Runnova, Anastasija E.; Efremova, Tatiana Yu.; Koronovskii, Alexey A.; Pisarchik, Alexander N.

    2018-04-01

    In this paper we propose an approach based on artificial neural networks for recognizing different human brain states associated with distinct visual stimuli. Based on the developed numerical technique and the analysis of experimental multichannel EEG data, we optimize the spatiotemporal representation of the multichannel EEG to achieve close to 97% accuracy in recognizing EEG brain states during visual perception. Different interpretations of an ambiguous image produce different oscillatory patterns in the human EEG, with features that are consistent for each interpretation. Since these features are common across subjects, a single artificial neural network can classify the associated brain states of other subjects with high accuracy.
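
    The classification idea in this record can be illustrated with a minimal stand-in: a single logistic unit trained on synthetic two-state feature vectors. Everything here (the data, the dimensions, the single-neuron model) is an illustrative assumption, not the authors' network or their EEG features.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for EEG feature vectors: two "brain states" produce
# features clustered around different means (illustrative data only).
n, d = 200, 16
X = np.vstack([rng.normal(-1.0, 0.5, (n, d)),
               rng.normal(+1.0, 0.5, (n, d))])
y = np.array([0] * n + [1] * n)

# A single logistic "neuron" trained by gradient descent stands in
# for the paper's multilayer network.
w, b = np.zeros(d), 0.0
for _ in range(300):
    z = np.clip(X @ w + b, -30.0, 30.0)   # clip to avoid exp overflow
    p = 1.0 / (1.0 + np.exp(-z))          # sigmoid activation
    w -= 0.5 * (X.T @ (p - y)) / len(y)   # gradient step on weights
    b -= 0.5 * float(np.mean(p - y))      # gradient step on bias

accuracy = float(np.mean(((X @ w + b) > 0) == (y == 1)))
```

    On this well-separated synthetic data the unit reaches near-perfect accuracy; the real task, separating oscillatory EEG patterns, is of course far harder and motivates the spatiotemporal optimization the abstract describes.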

  18. Developing Matlab scripts for image analysis and quality assessment

    NASA Astrophysics Data System (ADS)

    Vaiopoulos, A. D.

    2011-11-01

    Image processing is a very helpful tool in many fields of modern science that involve digital image examination and interpretation. Processed images, however, often need to be compared with the original image to ensure that the result fulfills its purpose. Aside from visual examination, which is mandatory, image quality indices (such as the correlation coefficient, entropy, and others) are very useful when deciding which processed image is the most satisfactory. For this reason, a single program (script) was written in the Matlab language that automatically calculates eight indices by calling eight respective functions (independent function scripts). The program was tested on both fused hyperspectral (Hyperion-ALI) and multispectral (ALI, Landsat) imagery and proved to be efficient. The indices were found to agree with visual examination and statistical observations.
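
    Two of the indices named in the abstract, the correlation coefficient and entropy, can be sketched as follows. The original scripts were written in Matlab; this Python/NumPy rendition and its function names are hypothetical.

```python
import numpy as np

def correlation_index(original, processed):
    """Pearson correlation between two images of the same shape."""
    a = np.asarray(original, dtype=float).ravel()
    b = np.asarray(processed, dtype=float).ravel()
    return float(np.corrcoef(a, b)[0, 1])

def entropy_index(image, bins=256):
    """Shannon entropy (bits) of an image's grey-level histogram."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0*log 0 := 0)
    return float(-(p * np.log2(p)).sum())
```

    A processed image identical to the original scores a correlation of 1.0, while a constant image has zero entropy; between those extremes the indices help rank candidate enhancements, as the abstract describes.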

  19. Student Perceptions of Sectional CT/MRI Use in Teaching Veterinary Anatomy and the Correlation with Visual Spatial Ability: A Student Survey and Mental Rotations Test.

    PubMed

    Delisser, Peter J; Carwardine, Darren

    2017-11-29

    Diagnostic imaging technology is becoming more advanced and widely available to veterinary patients with the growing popularity of veterinary-specific computed tomography (CT) and magnetic resonance imaging (MRI). Veterinary students must, therefore, be familiar with these technologies and understand the importance of sound anatomic knowledge for interpretation of the resultant images. Anatomy teaching relies heavily on visual perception of structures and their function. In addition, visual spatial ability (VSA) positively correlates with anatomy test scores. We sought to assess the impact of including more diagnostic imaging, particularly CT/MRI, in the teaching of veterinary anatomy on the students' perceived level of usefulness and ease of understanding content. Finally, we investigated survey answers' relationship to the students' inherent baseline VSA, measured by a standard Mental Rotations Test. Students viewed diagnostic imaging as a useful inclusion that provided clear links to clinical relevance, thus improving the students' perceived benefits in its use. Use of CT and MRI images was not viewed as more beneficial, more relevant, or more useful than the use of radiographs. Furthermore, students felt that the usefulness of CT/MRI inclusion was mitigated by the lack of prior formal instruction on the basics of CT/MRI image generation and interpretation. To be of significantly greater use, addition of learning resources labeling relevant anatomy in tomographical images would improve utility of this novel teaching resource. The present study failed to find any correlation between student perceptions of diagnostic imaging in anatomy teaching and their VSA.

  20. Visualization and imaging methods for flames in microgravity

    NASA Technical Reports Server (NTRS)

    Weiland, Karen J.

    1993-01-01

    The visualization and imaging of flames has long been acknowledged as the starting point for learning about and understanding combustion phenomena. It provides an essential overall picture of the time and length scales of processes and guides the application of other diagnostics. It is perhaps even more important in microgravity combustion studies, where it is often the only non-intrusive diagnostic measurement easily implemented. Imaging also aids in the interpretation of single-point measurements, such as temperature, provided by thermocouples, and velocity, by hot-wire anemometers. This paper outlines the efforts of the Microgravity Combustion Diagnostics staff at NASA Lewis Research Center in the area of visualization and imaging of flames, concentrating on methods applicable to reduced-gravity experimentation. Several techniques are under development: intensified array camera imaging and two-dimensional temperature and species concentration measurements. A brief summary of results in these areas is presented and future plans are mentioned.

  1. Before your very eyes: the value and limitations of eye tracking in medical education.

    PubMed

    Kok, Ellen M; Jarodzka, Halszka

    2017-01-01

    Medicine is a highly visual discipline. Physicians from many specialties constantly use visual information in diagnosis and treatment. However, they are often unable to explain how they use this information. Consequently, it is unclear how to train medical students in this visual processing. Eye tracking is a research technique that may offer answers to these open questions, as it enables researchers to investigate such visual processes directly by measuring eye movements. This may help researchers understand the processes that support or hinder a particular learning outcome. In this article, we clarify the value and limitations of eye tracking for medical education researchers. For example, eye tracking can clarify how experience with medical images mediates diagnostic performance and how students engage with learning materials. Furthermore, eye tracking can also be used directly for training purposes by displaying eye movements of experts in medical images. Eye movements reflect cognitive processes, but cognitive processes cannot be directly inferred from eye-tracking data. In order to interpret eye-tracking data properly, theoretical models must always be the basis for designing experiments as well as for analysing and interpreting eye-tracking data. The interpretation of eye-tracking data is further supported by sound experimental design and methodological triangulation. © 2016 John Wiley & Sons Ltd and The Association for the Study of Medical Education.

  2. Top-down influence on the visual cortex of the blind during sensory substitution

    PubMed Central

    Murphy, Matthew C.; Nau, Amy C.; Fisher, Christopher; Kim, Seong-Gi; Schuman, Joel S.; Chan, Kevin C.

    2017-01-01

    Visual sensory substitution devices provide a non-surgical and flexible approach to vision rehabilitation in the blind. These devices convert images taken by a camera into cross-modal sensory signals that are presented as a surrogate for direct visual input. While previous work has demonstrated that the visual cortex of blind subjects is recruited during sensory substitution, the cognitive basis of this activation remains incompletely understood. To test the hypothesis that top-down input provides a significant contribution to this activation, we performed functional MRI scanning in 11 blind (7 acquired and 4 congenital) and 11 sighted subjects under two conditions: passive listening of image-encoded soundscapes before sensory substitution training and active interpretation of the same auditory sensory substitution signals after a 10-minute training session. We found that the modulation of visual cortex activity due to active interpretation was significantly stronger in the blind over sighted subjects. In addition, congenitally blind subjects showed stronger task-induced modulation in the visual cortex than acquired blind subjects. In a parallel experiment, we scanned 18 blind (11 acquired and 7 congenital) and 18 sighted subjects at rest to investigate alterations in functional connectivity due to visual deprivation. The results demonstrated that visual cortex connectivity of the blind shifted away from sensory networks and toward known areas of top-down input. Taken together, our data support the model of the brain, including the visual system, as a highly flexible task-based and not sensory-based machine. PMID:26584776

  3. Integrating visual learning within a model-based ATR system

    NASA Astrophysics Data System (ADS)

    Carlotto, Mark; Nebrich, Mark

    2017-05-01

    Automatic target recognition (ATR) systems, like human photo-interpreters, rely on a variety of visual information for detecting, classifying, and identifying manmade objects in aerial imagery. We describe the integration of a visual learning component into the Image Data Conditioner (IDC) for target/clutter and other visual classification tasks. The component is based on an implementation of a model of the visual cortex developed by Serre, Wolf, and Poggio. Visual learning in an ATR context requires the ability to recognize objects independent of location, scale, and rotation. Our method uses IDC to extract, rotate, and scale image chips at candidate target locations. A bootstrap learning method effectively extends the operation of the classifier beyond the training set and provides a measure of confidence. We show how the classifier can be used to learn other features that are difficult to compute from imagery such as target direction, and to assess the performance of the visual learning process itself.

  4. A web-based instruction module for interpretation of craniofacial cone beam CT anatomy.

    PubMed

    Hassan, B A; Jacobs, R; Scarfe, W C; Al-Rawi, W T

    2007-09-01

    To develop a web-based module for learner instruction in the interpretation and recognition of osseous anatomy on craniofacial cone-beam CT (CBCT) images. Volumetric datasets from three CBCT systems were acquired (i-CAT, NewTom 3G and AccuiTomo FPD) for various subjects using equipment-specific scanning protocols. The datasets were processed using multiple software packages to produce two-dimensional (2D) multiplanar reformatted (MPR) images (e.g. sagittal, coronal and axial) and three-dimensional (3D) visual representations (e.g. maximum intensity projection, minimum intensity projection, ray sum, surface and volume rendering). Distinct didactic modules, which illustrate the principles of CBCT systems, guided navigation of the volumetric dataset, and anatomic correlation of 3D models with 2D MPR graphics, were developed using a hybrid combination of web authoring and image analysis techniques. Interactive web multimedia instruction was facilitated by the use of dynamic highlighting and labelling and rendered video illustrations, supplemented with didactic textual material. HTML coding and JavaScript were used extensively to integrate the educational modules. An interactive, multimedia educational tool for visualizing the morphology and interrelationships of osseous craniofacial anatomy, as depicted on CBCT MPR and 3D images, was designed and implemented. The present design of a web-based instruction module may assist radiologists and clinicians in learning how to recognize and interpret the craniofacial anatomy of CBCT-based images more efficiently.

  5. Cardiac imaging: working towards fully-automated machine analysis & interpretation

    PubMed Central

    Slomka, Piotr J; Dey, Damini; Sitek, Arkadiusz; Motwani, Manish; Berman, Daniel S; Germano, Guido

    2017-01-01

    Introduction Non-invasive imaging plays a critical role in managing patients with cardiovascular disease. Although subjective visual interpretation remains the clinical mainstay, quantitative analysis facilitates objective, evidence-based management, and advances in clinical research. This has driven developments in computing and software tools aimed at achieving fully automated image processing and quantitative analysis. In parallel, machine learning techniques have been used to rapidly integrate large amounts of clinical and quantitative imaging data to provide highly personalized individual patient-based conclusions. Areas covered This review summarizes recent advances in automated quantitative imaging in cardiology and describes the latest techniques which incorporate machine learning principles. The review focuses on the cardiac imaging techniques which are in wide clinical use. It also discusses key issues and obstacles for these tools to become utilized in mainstream clinical practice. Expert commentary Fully-automated processing and high-level computer interpretation of cardiac imaging are becoming a reality. Application of machine learning to the vast amounts of quantitative data generated per scan and integration with clinical data also facilitates a move to more patient-specific interpretation. These developments are unlikely to replace interpreting physicians but will provide them with highly accurate tools to detect disease, risk-stratify, and optimize patient-specific treatment. However, with each technological advance, we move further from human dependence and closer to fully-automated machine interpretation. PMID:28277804

  6. Shaded computer graphic techniques for visualizing and interpreting analytic fluid flow models

    NASA Technical Reports Server (NTRS)

    Parke, F. I.

    1981-01-01

    Mathematical models that predict the behavior of fluid flow in different experiments are simulated using digital computers. The simulations predict values of flow parameters (pressure, temperature and velocity vector) at many points in the fluid. Visualizing the spatial variation of these parameters is important for comprehending and checking the generated data, for identifying the regions of interest in the flow, and for effectively communicating information about the flow to others. State-of-the-art imaging techniques developed in the field of three-dimensional shaded computer graphics are applied to the visualization of fluid flow. The use of an imaging technique known as 'SCAN' for visualizing fluid flow is studied and the results are presented.

  7. Smooth 2D manifold extraction from 3D image stack

    PubMed Central

    Shihavuddin, Asm; Basu, Sreetama; Rexhepaj, Elton; Delestro, Felipe; Menezes, Nikita; Sigoillot, Séverine M; Del Nery, Elaine; Selimi, Fekrije; Spassky, Nathalie; Genovesio, Auguste

    2017-01-01

    Three-dimensional fluorescence microscopy followed by image processing is routinely used to study biological objects at various scales, such as cells and tissue. However, maximum intensity projection, the most broadly used rendering tool, extracts a discontinuous layer of voxels, inadvertently creating significant artifacts and possibly misleading interpretation. Here we propose smooth manifold extraction, an algorithm that produces a continuous focused 2D extraction from a 3D volume, hence preserving local spatial relationships. We demonstrate the usefulness of our approach by applying it to various biological applications using confocal and wide-field microscopy 3D image stacks. We provide a parameter-free ImageJ/Fiji plugin that allows 2D visualization and interpretation of 3D image stacks with maximum accuracy. PMID:28561033
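
    The contrast between a maximum intensity projection and a continuous extraction can be sketched as below. This is a deliberately simplified, hypothetical rendition of the idea (pick the depth of maximum intensity per pixel, smooth that depth map so the extracted surface stays continuous, then sample along it), not the published smooth manifold extraction algorithm.

```python
import numpy as np

def max_intensity_projection(stack):
    """Classic MIP: per-pixel maximum over the z axis of a (z, y, x) stack."""
    return stack.max(axis=0)

def smooth_extraction(stack):
    """Simplified continuous extraction: smooth the per-pixel depth of
    maximum intensity with a 3x3 mean filter, then sample the stack
    along the smoothed surface."""
    z_idx = stack.argmax(axis=0).astype(float)
    padded = np.pad(z_idx, 1, mode="edge")
    h, w = z_idx.shape
    smoothed = sum(padded[dy:dy + h, dx:dx + w]
                   for dy in range(3) for dx in range(3)) / 9.0
    zz = np.clip(np.rint(smoothed).astype(int), 0, stack.shape[0] - 1)
    yy, xx = np.indices(z_idx.shape)
    return stack[zz, yy, xx]
```

    Unlike the MIP, the second function returns voxels from a spatially coherent surface, which is the property the abstract argues prevents artifacts from a discontinuous layer of voxels.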

  8. Genome image programs: visualization and interpretation of Escherichia coli microarray experiments.

    PubMed

    Zimmer, Daniel P; Paliy, Oleg; Thomas, Brian; Gyaneshwar, Prasad; Kustu, Sydney

    2004-08-01

    We have developed programs to facilitate analysis of microarray data in Escherichia coli. They fall into two categories: manipulation of microarray images and identification of known biological relationships among lists of genes. A program in the first category arranges spots from glass-slide DNA microarrays according to their position in the E. coli genome and displays them compactly in genome order. The resulting genome image is presented in a web browser with an image map that allows the user to identify genes in the reordered image. Another program in the first category aligns genome images from two or more experiments. These images assist in visualizing regions of the genome with common transcriptional control. Such regions include multigene operons and clusters of operons, which are easily identified as strings of adjacent, similarly colored spots. The images are also useful for assessing the overall quality of experiments. The second category of programs includes a database and a number of tools for displaying biological information about many E. coli genes simultaneously rather than one gene at a time, which facilitates identifying relationships among them. These programs have accelerated and enhanced our interpretation of results from E. coli DNA microarray experiments. Examples are given. Copyright 2004 Genetics Society of America
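
    The core reordering step described above, arranging spot values by genome coordinate and folding the ordered sequence into an image grid, can be sketched as follows. The helper name and padding convention are hypothetical; the original programs additionally render browser image maps with gene labels.

```python
import numpy as np

def genome_order_image(positions, values, width):
    """Order microarray spot values by genome coordinate and fold the
    ordered 1-D sequence into a 2-D image of the given width.
    Padding (NaN) fills any remainder of the last row."""
    order = np.argsort(positions)                 # genome order of spots
    ordered = np.asarray(values, dtype=float)[order]
    n_rows = -(-len(ordered) // width)            # ceiling division
    img = np.full(n_rows * width, np.nan)
    img[:len(ordered)] = ordered
    return img.reshape(n_rows, width)
```

    With the spots in genome order, co-regulated neighbors such as operon members appear as runs of adjacent, similarly colored pixels, which is what makes regions of common transcriptional control easy to spot in the reordered image.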

  9. Attention trees and semantic paths

    NASA Astrophysics Data System (ADS)

    Giusti, Christian; Pieroni, Goffredo G.; Pieroni, Laura

    2007-02-01

    In the last few decades several techniques for image content extraction, often based on segmentation, have been proposed. It has been suggested that, under the assumption of very general image content, segmentation becomes unstable and classification becomes unreliable. According to recent psychological theories, certain image regions attract the attention of human observers more than others, and the image's main meaning generally appears concentrated in those regions. Initially, regions attracting our attention are perceived as a whole and hypotheses about their content are formulated; subsequently, the components of those regions are carefully analyzed and a more precise interpretation is reached. It is interesting to observe that an image decomposition process performed according to these psychological visual-attention theories may present advantages over a traditional segmentation approach. In this paper we propose an automatic procedure that generates an image decomposition based on the detection of visual attention regions. A new clustering algorithm that takes advantage of Delaunay-Voronoi diagrams is proposed to achieve the decomposition. By applying the algorithm recursively, starting from the whole image, the image is transformed into a tree of related meaningful regions (the Attention Tree). Subsequently, a semantic interpretation of the leaf nodes is carried out using a structure of neural networks (Neural Tree) assisted by a knowledge base (Ontology Net). Starting from the leaf nodes, paths toward the root node across the Attention Tree are attempted. The task of a path is to relate the semantics of each child-parent node pair and, consequently, to merge the corresponding image regions. The relationships detected in this way between tree nodes extend the interpreted image area at each step of the path. Several Attention Trees have been constructed and partial results will be shown.

  10. GeoDash: Assisting Visual Image Interpretation in Collect Earth Online by Leveraging Big Data on Google Earth Engine

    NASA Technical Reports Server (NTRS)

    Markert, Kel; Ashmall, William; Johnson, Gary; Saah, David; Mollicone, Danilo; Diaz, Alfonso Sanchez-Paus; Anderson, Eric; Flores, Africa; Griffin, Robert

    2017-01-01

    Collect Earth Online (CEO) is a free and open online implementation of the FAO Collect Earth system for collaboratively collecting environmental data through the visual interpretation of Earth observation imagery. The primary collection mechanism in CEO is human interpretation of land surface characteristics in imagery served via Web Map Services (WMS). However, interpreters may not have enough contextual information to classify samples by only viewing the imagery served via WMS, be they high resolution or otherwise. To assist in the interpretation and collection processes in CEO, SERVIR, a joint NASA-USAID initiative that brings Earth observations to improve environmental decision making in developing countries, developed the GeoDash system, an embedded and critical component of CEO. GeoDash leverages Google Earth Engine (GEE) by allowing users to set up custom browser-based widgets that pull from GEE's massive public data catalog. These widgets can be quick looks of other satellite imagery, time series graphs of environmental variables, and statistics panels of the same. Users can customize widgets with any of GEE's image collections, such as the historical Landsat collection with data available since the 1970s, select date ranges, image stretch parameters, graph characteristics, and create custom layouts, all on-the-fly to support plot interpretation in CEO. This presentation focuses on the implementation and potential applications, including the back-end links to GEE and the user interface with custom widget building. GeoDash takes large data volumes and condenses them into meaningful, relevant information for interpreters. While designed initially with national and global forest resource assessments in mind, the system will complement disaster assessments, agriculture management, project monitoring and evaluation, and more.

  11. GeoDash: Assisting Visual Image Interpretation in Collect Earth Online by Leveraging Big Data on Google Earth Engine

    NASA Astrophysics Data System (ADS)

    Markert, K. N.; Ashmall, W.; Johnson, G.; Saah, D. S.; Anderson, E.; Flores Cordova, A. I.; Díaz, A. S. P.; Mollicone, D.; Griffin, R.

    2017-12-01

    Collect Earth Online (CEO) is a free and open online implementation of the FAO Collect Earth system for collaboratively collecting environmental data through the visual interpretation of Earth observation imagery. The primary collection mechanism in CEO is human interpretation of land surface characteristics in imagery served via Web Map Services (WMS). However, interpreters may not have enough contextual information to classify samples by only viewing the imagery served via WMS, be they high resolution or otherwise. To assist in the interpretation and collection processes in CEO, SERVIR, a joint NASA-USAID initiative that brings Earth observations to improve environmental decision making in developing countries, developed the GeoDash system, an embedded and critical component of CEO. GeoDash leverages Google Earth Engine (GEE) by allowing users to set up custom browser-based widgets that pull from GEE's massive public data catalog. These widgets can be quick looks of other satellite imagery, time series graphs of environmental variables, and statistics panels of the same. Users can customize widgets with any of GEE's image collections, such as the historical Landsat collection with data available since the 1970s, select date ranges, image stretch parameters, graph characteristics, and create custom layouts, all on-the-fly to support plot interpretation in CEO. This presentation focuses on the implementation and potential applications, including the back-end links to GEE and the user interface with custom widget building. GeoDash takes large data volumes and condenses them into meaningful, relevant information for interpreters. While designed initially with national and global forest resource assessments in mind, the system will complement disaster assessments, agriculture management, project monitoring and evaluation, and more.

  12. Improving spatial perception in 5-yr.-old Spanish children.

    PubMed

    Jiménez, Andrés Canto; Sicilia, Antonio Oña; Vera, Juan Granda

    2007-06-01

    Assimilation of distance perception was studied in 70 Spanish primary school children. This assimilation involves the generation of projective images, which are acquired through two mechanisms. One mechanism is spatial perception, wherein perceptual processes develop to ensure successful immersion in space and the acquisition of visual cues that a person may use to interpret images seen in the distance. The other mechanism is movement through space so that these images are produced. The present study evaluated the influence of using increasingly larger spaces for training sessions within a motor-skills program on improvements in spatial perception. Visual parameters were measured in relation to the capture and tracking of moving objects (ocular motility) and the speed of detection (visual reaction time). Analysis showed that for the group trained in increasingly larger spaces, ocular motility and visual reaction time were significantly improved during different phases of the program.

  13. Interaction techniques for radiology workstations: impact on users' productivity

    NASA Astrophysics Data System (ADS)

    Moise, Adrian; Atkins, M. Stella

    2004-04-01

    As radiologists progress from reading images presented on film to modern computer systems with images presented on high-resolution displays, many new problems arise. Although the digital medium has many advantages, the radiologist's job becomes cluttered with many new tasks related to image manipulation. This paper presents our solution for supporting radiologists' interpretation of digital images by automating image presentation during sequential interpretation steps. Our method supports scenario-based interpretation, which groups data temporally, according to the mental paradigm of the physician. We extended current hanging protocols with support for "stages". A stage reflects the presentation of digital information required to complete a single step within a complex task. We demonstrated the benefits of staging in a user study with 20 lay subjects engaged in a visual conjunctive search for targets, similar to the radiology task of identifying anatomical abnormalities. We designed a task and a set of stimuli that allowed us to simulate the interpretation workflow of a typical radiology scenario - reading a chest computed radiography exam when a prior study is also available. The simulation was made possible by abstracting the radiologist's task and the basic workstation navigation functionality. We introduced "Stages," an interaction technique attuned to the radiologist's interpretation task. Compared with the traditional user interface, Stages produced a 14% reduction in average interpretation time.

  14. "Relative CIR": an image enhancement and visualization technique

    USGS Publications Warehouse

    Fleming, Michael D.

    1993-01-01

    Many techniques exist to spectrally and spatially enhance digital multispectral scanner data. One technique enhances an image while keeping the colors as they would appear in a color-infrared (CIR) image. This "relative CIR" technique generates an image that is both spectrally and spatially enhanced, while displaying a maximum range of colors. The technique enables an interpreter to visualize either spectral or land cover classes by their relative CIR characteristics. A relative CIR image is generated by developing spectral statistics for each class in the classification and then, using a nonparametric approach for spectral enhancement, ranking the class means for each band. A 3 by 3 pixel smoothing filter is applied to the classification for spatial enhancement, and the classes are mapped to the representative rank for each band. Practical applications of the technique include displaying as a CIR image an image classification product that was not derived directly from a spectral image, visualizing how a land cover classification would look as a CIR image, and displaying a spectral classification or intermediate product that will be used to label spectral classes.
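
    The ranking step described above can be sketched numerically. In this minimal illustration, the toy classification map, the class labels, and the per-class band means are all invented for demonstration and are not the author's data:

```python
import numpy as np

# Minimal sketch of the "relative CIR" ranking step: class means are ranked
# per band (a nonparametric stretch), and every pixel's class is mapped to
# its per-band rank. The classification map and per-class band means below
# are invented for illustration.

def relative_cir(classification, class_means):
    """classification: 2D int array of class labels (0..K-1).
    class_means: (K, bands) per-class mean values for the display bands.
    Returns an (H, W, bands) array of rank values, one per display band."""
    ranks = np.argsort(np.argsort(class_means, axis=0), axis=0)  # rank of each class mean, per band
    return ranks[classification]  # map every pixel's class to its per-band ranks

classification = np.array([[0, 0, 1, 1],
                           [0, 2, 2, 1],
                           [2, 2, 1, 1],
                           [2, 0, 0, 1]])
class_means = np.array([[40.0, 90.0, 10.0],   # class 0
                        [70.0, 20.0, 55.0],   # class 1
                        [15.0, 60.0, 80.0]])  # class 2
img = relative_cir(classification, class_means)
```

    The paper also applies a 3 by 3 smoothing filter to the classification before mapping; that spatial step is omitted here for brevity.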

  15. Teachers' Interpretations of Texts-Image Juxtapositions in Textbooks: From the Concrete to the Abstract

    ERIC Educational Resources Information Center

    Eilam, Billie; Poyas, Yael

    2012-01-01

    The paper examined expert literature teachers' coping with a novel textbook, integrating literature with visual arts, which is a particular interdisciplinary case of text-image relations in textbooks. Examination was performed within the framework of teachers' responses to curricular changes and of theory regarding strategies of interdisciplinary…

  16. Analysis of the characteristics appearing in LANDSAT multispectral images in the geological structural mapping of the midwestern portion of the Rio Grande do Sul shield. M.S. Thesis - 25 Mar. 1982; [Brazil

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Ohara, T.

    1982-01-01

    The central-western part of the Rio Grande do Sul Shield was geologically mapped to test the use of MSS-LANDSAT data in the study of mineralized regions. Visual interpretation of the images at a scale of 1:500,000 consisted in the identification and analysis of the different tonal and textural patterns in each spectral band. After the structural geologic mapping of the area, using visual interpretation techniques, the statistical data obtained were evaluated, especially data concerning the size and direction of fractures. The IMAGE-100 system was used to enlarge and enhance certain imagery. The LANDSAT MSS data offer several advantages over conventional black and white aerial photographs for geological studies, notably their multispectral character (band 6 and the false color composite of bands 4, 5 and 7 were best suited for the study). Coverage of a large imaging area of about 35,000 sq km, giving a synoptic view, is very useful for perceiving the regional geological setting.

  17. Method for evaluation of human induced pluripotent stem cell quality using image analysis based on the biological morphology of cells.

    PubMed

    Wakui, Takashi; Matsumoto, Tsuyoshi; Matsubara, Kenta; Kawasaki, Tomoyuki; Yamaguchi, Hiroshi; Akutsu, Hidenori

    2017-10-01

    We propose an image analysis method for quality evaluation of human pluripotent stem cells based on biologically interpretable features. It is important to maintain the undifferentiated state of induced pluripotent stem cells (iPSCs) while culturing the cells during propagation. Cell culture experts visually select good quality cells exhibiting the morphological features characteristic of undifferentiated cells. Experts have empirically determined that these features comprise prominent and abundant nucleoli, less intercellular spacing, and fewer differentiating cellular nuclei. We quantified these features based on experts' visual inspection of phase contrast images of iPSCs and found that these features are effective for evaluating iPSC quality. We then developed an iPSC quality evaluation method using an image analysis technique. The method allowed accurate classification, equivalent to visual inspection by experts, of three iPSC cell lines.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Potter, T.; McClain, C.J.; Shafer, R.B.

    Ten patients were prospectively studied using 99mTc-PIPIDA imaging to evaluate the effects of fasting and parenteral alimentation on gallbladder function. Three of ten patients had initial nonvisualization of the gallbladder for up to 2 hr, yet had normal visualization on repeat imaging performed after resumption of oral intake or after parenteral alimentation was discontinued. 99mTc-PIPIDA imaging should be interpreted with caution in patients fitting into either of these groups.

  19. Exploring an optimal wavelet-based filter for cryo-ET imaging.

    PubMed

    Huang, Xinrui; Li, Sha; Gao, Song

    2018-02-07

    Cryo-electron tomography (cryo-ET) is one of the most advanced technologies for the in situ visualization of molecular machines by producing three-dimensional (3D) biological structures. However, cryo-ET imaging has two serious disadvantages, low dose and low image contrast, which result in high-resolution information being obscured by noise and image quality being degraded, causing errors in biological interpretation. The purpose of this research is to explore an optimal wavelet denoising technique to reduce noise in cryo-ET images. We perform tests using simulation data and design a filter using the optimal selected wavelet parameters (three-level decomposition, level 1 zeroed out, a subband-dependent threshold, soft thresholding, and a spline-based discrete dyadic wavelet transform (DDWT)), which we call a modified wavelet shrinkage filter; this filter is suitable for noisy cryo-ET data. When testing with real cryo-ET experimental data, higher-quality images and more accurate measures of a biological structure can be obtained with modified wavelet shrinkage filtering compared with conventional processing. Because the proposed method provides an inherent advantage when dealing with cryo-ET images, it can extend the current state-of-the-art technology in assisting all aspects of cryo-ET studies: visualization, reconstruction, structural analysis, and interpretation.
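
    The shrinkage mechanics behind such a filter can be sketched. The paper's filter uses a spline-based DDWT with three decomposition levels; this simplified stand-in uses a one-level Haar transform and a single soft threshold, so it illustrates the idea rather than reproducing the authors' method:

```python
import numpy as np

# Sketch of wavelet shrinkage denoising. The paper's filter uses a
# spline-based DDWT with three levels; this stand-in uses a one-level
# Haar transform and a single soft threshold to show the mechanics only.

def haar2(x):
    """One-level 2D Haar decomposition (dimensions must be even)."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0
    d = (x[0::2, :] - x[1::2, :]) / 2.0
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2] = ll + lh
    a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh
    d[:, 1::2] = hl - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2, :] = a + d
    x[1::2, :] = a - d
    return x

def soft(c, t):
    """Soft thresholding: shrink coefficients toward zero by t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def denoise(img, t):
    """Threshold only the detail subbands, keep the approximation."""
    ll, lh, hl, hh = haar2(img)
    return ihaar2(ll, soft(lh, t), soft(hl, t), soft(hh, t))
```

    With t = 0 the transform is perfectly invertible; increasing t removes small detail coefficients, which is where uncorrelated noise concentrates.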

  20. Is Fourier analysis performed by the visual system or by the visual investigator?

    PubMed

    Ochs, A L

    1979-01-01

    A numerical Fourier transform was made of the pincushion grid illusion and the spectral components orthogonal to the illusory lines were isolated. Their inverse transform creates a picture of the illusion. The spatial-frequency response of cortical, simple receptive field neurons similarly filters the grid. A complete set of these neurons thus approximates a two-dimensional Fourier analyzer. One cannot conclude, however, that the brain actually uses frequency-domain information to interpret visual images.
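
    The numerical procedure, transforming an image, isolating selected spectral components, and inverse-transforming them, can be sketched as follows. The 64x64 synthetic grid and the orientation mask are illustrative stand-ins for the pincushion grid stimulus:

```python
import numpy as np

# Sketch of the numerical procedure: Fourier-transform a grid image,
# isolate one oriented set of spectral components, and inverse-transform
# to picture what those components alone contribute. The synthetic grid
# and the orientation mask stand in for the pincushion grid stimulus.

grid = np.zeros((64, 64))
grid[::8, :] = 1.0                  # horizontal grid lines
grid[:, ::8] = 1.0                  # vertical grid lines

F = np.fft.fftshift(np.fft.fft2(grid))
ky, kx = np.indices(F.shape) - 32   # centered frequency coordinates
mask = np.abs(kx) <= 1              # keep components with little horizontal
                                    # variation, i.e. the horizontal lines

filtered = np.fft.ifft2(np.fft.ifftshift(F * mask)).real
```

    Keeping the full spectrum recovers the original image exactly; keeping only one oriented band of components produces a picture of just that structure, which is the logic of the inverse-transform demonstration described in the abstract.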

  1. Infrared imaging of the crime scene: possibilities and pitfalls.

    PubMed

    Edelman, Gerda J; Hoveling, Richelle J M; Roos, Martin; van Leeuwen, Ton G; Aalders, Maurice C G

    2013-09-01

    All objects radiate infrared energy invisible to the human eye, which can be imaged by infrared cameras, visualizing differences in temperature and/or emissivity of objects. Infrared imaging is an emerging technique for forensic investigators. The rapid, nondestructive, and noncontact features of infrared imaging indicate its suitability for many forensic applications, ranging from the estimation of time of death to the detection of blood stains on dark backgrounds. This paper provides an overview of the principles and instrumentation involved in infrared imaging. Difficulties concerning the image interpretation due to different radiation sources and different emissivity values within a scene are addressed. Finally, reported forensic applications are reviewed and supported by practical illustrations. When introduced in forensic casework, infrared imaging can help investigators to detect, to visualize, and to identify useful evidence nondestructively. © 2013 American Academy of Forensic Sciences.

  2. Right hemisphere performance and competence in processing mental images, in a case of partial interhemispheric disconnection.

    PubMed

    Blanc-Garin, J; Faure, S; Sabio, P

    1993-05-01

    The objective of this study was to analyze dynamic aspects of right hemisphere implementation in processing visual images. Two tachistoscopic, divided visual field experiments were carried out on a partial split-brain patient with no damage to the right hemisphere. In the first experiment, image generation performance for letters presented in the right visual field (/left hemisphere) was undeniably optimal. In the left visual field (/right hemisphere), performance was no better than chance level at first, but then improved dramatically across stimulation blocks, in each of five successive sessions. This was interpreted as revealing the progressive spontaneous activation of the right hemisphere's competence not shown initially. The aim of the second experiment was to determine some conditions under which this pattern was obtained. The experimental design contrasted stimuli (words and pictures) and representational activity (phonologic and visuo-imaged processing). The right visual field (/left hemisphere: LH) elicited higher performance than the left visual field (/right hemisphere, RH) in the three situations where verbal activity was required. No superiority could be found when visual images were to be generated from pictures: parallel and weak improvement of both hemispheres was observed across sessions. Two other patterns were obtained: improvement in RH performance (although LH performance remained superior) and an unexpectedly large decrease in RH performance. These data are discussed in terms of RH cognitive competence and hemisphere implementation.

  3. Using a visual discrimination model for the detection of compression artifacts in virtual pathology images.

    PubMed

    Johnson, Jeffrey P; Krupinski, Elizabeth A; Yan, Michelle; Roehrig, Hans; Graham, Anna R; Weinstein, Ronald S

    2011-02-01

    A major issue in telepathology is the extremely large and growing size of digitized "virtual" slides, which can require several gigabytes of storage and cause significant delays in data transmission for remote image interpretation and interactive visualization by pathologists. Compression can reduce this massive amount of virtual slide data, but reversible (lossless) methods limit data reduction to less than 50%, while lossy compression can degrade image quality and diagnostic accuracy. "Visually lossless" compression offers the potential for using higher compression levels without noticeable artifacts, but requires a rate-control strategy that adapts to image content and loss visibility. We investigated the utility of a visual discrimination model (VDM) and other distortion metrics for predicting JPEG 2000 bit rates corresponding to visually lossless compression of virtual slides for breast biopsy specimens. Threshold bit rates were determined experimentally with human observers for a variety of tissue regions cropped from virtual slides. For test images compressed to their visually lossless thresholds, just-noticeable difference (JND) metrics computed by the VDM were nearly constant at the 95th percentile level or higher, and were significantly less variable than peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics. Our results suggest that VDM metrics could be used to guide the compression of virtual slides to achieve visually lossless compression while providing 5-12 times the data reduction of reversible methods.
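
    Of the distortion metrics compared above, PSNR is the simplest to state explicitly. A minimal sketch follows; the 8-bit peak value and the test images are assumptions for illustration:

```python
import numpy as np

# Minimal PSNR implementation; the 8-bit peak value and the test images
# are assumptions for illustration, not the study's data.

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between reference and test images."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((32, 32), 100.0)
lossy = ref + 10.0                 # a uniform error of 10 gray levels
```

    A uniform 10-level error on an 8-bit scale gives roughly 28 dB. The study's point is that a fixed PSNR does not track loss visibility across tissue content, which is why a perceptual JND metric was more stable at the visually lossless threshold.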

  4. Top-down influence on the visual cortex of the blind during sensory substitution.

    PubMed

    Murphy, Matthew C; Nau, Amy C; Fisher, Christopher; Kim, Seong-Gi; Schuman, Joel S; Chan, Kevin C

    2016-01-15

    Visual sensory substitution devices provide a non-surgical and flexible approach to vision rehabilitation in the blind. These devices convert images taken by a camera into cross-modal sensory signals that are presented as a surrogate for direct visual input. While previous work has demonstrated that the visual cortex of blind subjects is recruited during sensory substitution, the cognitive basis of this activation remains incompletely understood. To test the hypothesis that top-down input provides a significant contribution to this activation, we performed functional MRI scanning in 11 blind (7 acquired and 4 congenital) and 11 sighted subjects under two conditions: passive listening of image-encoded soundscapes before sensory substitution training and active interpretation of the same auditory sensory substitution signals after a 10-minute training session. We found that the modulation of visual cortex activity due to active interpretation was significantly stronger in the blind over sighted subjects. In addition, congenitally blind subjects showed stronger task-induced modulation in the visual cortex than acquired blind subjects. In a parallel experiment, we scanned 18 blind (11 acquired and 7 congenital) and 18 sighted subjects at rest to investigate alterations in functional connectivity due to visual deprivation. The results demonstrated that visual cortex connectivity of the blind shifted away from sensory networks and toward known areas of top-down input. Taken together, our data support the model of the brain, including the visual system, as a highly flexible task-based and not sensory-based machine. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Advanced Image Processing for Defect Visualization in Infrared Thermography

    NASA Technical Reports Server (NTRS)

    Plotnikov, Yuri A.; Winfree, William P.

    1997-01-01

    Results of a defect visualization process based on pulse infrared thermography are presented. Algorithms have been developed to reduce the amount of operator participation required in the process of interpreting thermographic images. The algorithms determine the defect's depth and size from the temporal and spatial thermal distributions that exist on the surface of the investigated object following thermal excitation. A comparison of the results from thermal contrast, time derivative, and phase analysis methods for defect visualization are presented. These comparisons are based on three dimensional simulations of a test case representing a plate with multiple delaminations. Comparisons are also based on experimental data obtained from a specimen with flat bottom holes and a composite panel with delaminations.
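
    The phase analysis method mentioned above (pulse-phase thermography) can be sketched on synthetic data: the FFT of each pixel's cooling curve yields phase images in which subsurface defects appear as phase contrast. The cooling curves, sequence size, and defect location below are all invented for illustration:

```python
import numpy as np

# Pulse-phase analysis sketch on synthetic data: the FFT along time of each
# pixel's cooling curve gives phase images; a subsurface defect shows up as
# phase contrast. Cooling curves, sequence size, and defect location are
# invented, not the paper's simulation.

t = np.arange(1, 65) * 0.05                       # time after the heat pulse (s)
sound = 1.0 / np.sqrt(t)                          # ideal 1/sqrt(t) surface cooling
defect = sound + 0.2 * np.exp(-(t - 1.5) ** 2)    # delayed heat bump over a void

seq = np.broadcast_to(sound[:, None, None], (64, 8, 8)).copy()
seq[:, 3:5, 3:5] = defect[:, None, None]          # 2x2 patch above the defect

phase = np.angle(np.fft.fft(seq, axis=0))[1]      # phase image at the first bin
contrast = phase[4, 4] - phase[0, 0]              # defect vs. sound pixel
```

    The appeal of phase images, as the paper's comparison suggests, is that phase is less sensitive than raw thermal contrast to uneven heating across the surface.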

  6. Mountain building processes in the Central Andes

    NASA Technical Reports Server (NTRS)

    Bloom, A. L.; Isacks, B. L.

    1986-01-01

    False color composite images of the Thematic Mapper (TM) bands 5, 4, and 2 were examined to make visual interpretations of geological features. The use of the roam mode of image display with the International Imaging Systems (IIS) System 600 image processing package running on the IIS Model 75 was very useful. Several areas in which good comparisons with ground data existed were examined in detail. Parallel to the visual approach, image processing methods are being developed which allow the complete use of the seven TM bands. The data were organized into easily accessible files, and the quads (quarter TM scenes) were visually cataloged with preliminary registration against the best available charts for the region. The catalog has proved to be a valuable tool for the rapid scanning of quads for a specific investigation. Integration of the data into a complete approach to the problems of uplift, deformation, and magmatism in relation to the Nazca-South American plate interaction is at an initial stage.

  8. The challenges of studying visual expertise in medical image diagnosis.

    PubMed

    Gegenfurtner, Andreas; Kok, Ellen; van Geel, Koos; de Bruin, Anique; Jarodzka, Halszka; Szulewski, Adam; van Merriënboer, Jeroen Jg

    2017-01-01

    Visual expertise is the superior visual skill shown when executing domain-specific visual tasks. Understanding visual expertise is important in order to understand how the interpretation of medical images may be best learned and taught. In the context of this article, we focus on the visual skill of medical image diagnosis and, more specifically, on the methodological set-ups routinely used in visual expertise research. We offer a critique of commonly used methods and propose three challenges for future research to open up new avenues for studying characteristics of visual expertise in medical image diagnosis. The first challenge addresses theory development. Novel prospects in modelling visual expertise can emerge when we reflect on cognitive and socio-cultural epistemologies in visual expertise research, when we engage in statistical validations of existing theoretical assumptions and when we include social and socio-cultural processes in expertise development. The second challenge addresses the recording and analysis of longitudinal data. If we assume that the development of expertise is a long-term phenomenon, then it follows that future research can engage in advanced statistical modelling of longitudinal expertise data that extends the routine use of cross-sectional material through, for example, animations and dynamic visualisations of developmental data. The third challenge addresses the combination of methods. Alternatives to current practices can integrate qualitative and quantitative approaches in mixed-method designs, embrace relevant yet underused data sources and understand the need for multidisciplinary research teams. Embracing alternative epistemological and methodological approaches for studying visual expertise can lead to a more balanced and robust future for understanding superior visual skills in medical image diagnosis as well as other medical fields. © 2016 John Wiley & Sons Ltd and The Association for the Study of Medical Education.

  9. Interpretation of forest characteristics from computer-generated images.

    Treesearch

    T.M. Barrett; H.R. Zuuring; T. Christopher

    2006-01-01

    The need for effective communication in the management and planning of forested landscapes has led to a substantial increase in the use of visual information. Using forest plots from California, Oregon, and Washington, and a survey of 183 natural resource professionals in these states, we examined the use of computer-generated images to convey information about forest...

  10. Image understanding in terms of semiotics

    NASA Astrophysics Data System (ADS)

    Zakharko, E.; Kaminsky, Roman M.; Shpytko, V.

    1995-06-01

    Human perception of pictorial visual information is investigated from an iconic-sign viewpoint, and an appropriate semiotic model is discussed. Image construction (syntactics) is analyzed as a complex hierarchical system, and various types of pictorial objects, their relations, and regular configurations are represented, studied, and modeled. The relations between image syntactics, semantics, and pragmatics are investigated. The application of the research results to problems of thematic interpretation of Earth-surface remote images is illustrated.

  11. Interpreting intracorporeal landscapes: how patients visualize pathophysiology and utilize medical images in their understanding of chronic musculoskeletal illness.

    PubMed

    Moore, Andrew J; Richardson, Jane C; Bernard, Miriam; Sim, Julius

    2018-02-26

    Medical science and other sources, such as the media, increasingly inform the general public's understanding of disease. There is often discordance between this understanding and the diagnostic interpretations of health care practitioners (HCPs). In this paper - based on a supra-analysis of qualitative interview data from two studies of joint pain, including osteoarthritis - we investigate how people imagine and make sense of the pathophysiology of their illness, and how these understandings may affect self-management behavior. We then explore how HCPs' use of medical images and models can inform patients' understanding. In conceptualizing their illness to make sense of their experience of the disease, individuals often used visualizations of their inner body; these images may arise from their own lay understanding, or may be based on images provided by HCPs. When HCPs used anatomical models or medical images judiciously, patients' orientation to their illness changed. Including patients in a more collaborative diagnostic event that uses medical images and visual models to support explanations about their condition may help them to achieve a more meaningful understanding of their illness and to manage their condition more effectively. Implications for Rehabilitation: Chronic musculoskeletal pain is a leading cause of pain and years lived with disability, and despite its being common, patients and healthcare professionals often have a different understanding of the underlying disease. An individual's understanding of his or her pathophysiology plays an important role in making sense of painful joint conditions and in decision-making about self-management and care. Including patients in a more collaborative diagnostic event using medical images and anatomical models to support explanations about their symptoms may help them to better understand their condition and manage it more effectively. Using visually informed explanations and anatomical models may also help to reassure patients about the safety and effectiveness of core treatments such as physical exercise and thereby help restore or improve patients' activity levels and return to social participation.

  12. An asymmetrical relationship between verbal and visual thinking: converging evidence from behavior and fMRI

    PubMed Central

    Amit, Elinor; Hoeflin, Caitlyn; Hamzah, Nada; Fedorenko, Evelina

    2017-01-01

    Humans rely on at least two modes of thought: verbal (inner speech) and visual (imagery). Are these modes independent, or does engaging in one entail engaging in the other? To address this question, we performed a behavioral and an fMRI study. In the behavioral experiment, participants received a prompt and were asked to either silently generate a sentence or create a visual image in their mind. They were then asked to judge the vividness of the resulting representation, and of the potentially accompanying representation in the other format. In the fMRI experiment, participants had to recall sentences or images (that they were familiarized with prior to the scanning session) given prompts, or read sentences and view images, in the control, perceptual, condition. An asymmetry was observed between inner speech and visual imagery. In particular, inner speech was engaged to a greater extent during verbal than visual thought, but visual imagery was engaged to a similar extent during both modes of thought. Thus, it appears that people generate more robust verbal representations during deliberate inner speech compared to when their intent is to visualize. However, they generate visual images regardless of whether their intent is to visualize or to think verbally. One possible interpretation of these results is that visual thinking is somehow primary, given the relatively late emergence of verbal abilities during human development and in the evolution of our species. PMID:28323162

  13. An asymmetrical relationship between verbal and visual thinking: Converging evidence from behavior and fMRI.

    PubMed

    Amit, Elinor; Hoeflin, Caitlyn; Hamzah, Nada; Fedorenko, Evelina

    2017-05-15

    Humans rely on at least two modes of thought: verbal (inner speech) and visual (imagery). Are these modes independent, or does engaging in one entail engaging in the other? To address this question, we performed a behavioral and an fMRI study. In the behavioral experiment, participants received a prompt and were asked to either silently generate a sentence or create a visual image in their mind. They were then asked to judge the vividness of the resulting representation, and of the potentially accompanying representation in the other format. In the fMRI experiment, participants had to recall sentences or images (that they were familiarized with prior to the scanning session) given prompts, or read sentences and view images, in the control, perceptual, condition. An asymmetry was observed between inner speech and visual imagery. In particular, inner speech was engaged to a greater extent during verbal than visual thought, but visual imagery was engaged to a similar extent during both modes of thought. Thus, it appears that people generate more robust verbal representations during deliberate inner speech compared to when their intent is to visualize. However, they generate visual images regardless of whether their intent is to visualize or to think verbally. One possible interpretation of these results is that visual thinking is somehow primary, given the relatively late emergence of verbal abilities during human development and in the evolution of our species. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. The nature of the (visualization) game: Challenges and opportunities from computational geophysics

    NASA Astrophysics Data System (ADS)

    Kellogg, L. H.

    2016-12-01

    As the geosciences enter the era of big data, modeling and visualization become increasingly vital tools for discovery, understanding, education, and communication. Here, we focus on modeling and visualization of the structure and dynamics of the Earth's surface and interior. The past decade has seen accelerated data acquisition, including higher resolution imaging and modeling of Earth's deep interior, complex models of geodynamics, and high resolution topographic imaging of the changing surface, with an associated acceleration of computational modeling through better scientific software, increased computing capability, and the use of innovative methods of scientific visualization. The role of modeling is to describe a system, answer scientific questions, and test hypotheses; the term "model" encompasses mathematical models, computational models, physical models, conceptual models, statistical models, and visual models of a structure or process. These different uses of the term require thoughtful communication to avoid confusion. Scientific visualization is integral to every aspect of modeling. Not merely a means of communicating results, the best uses of visualization enable scientists to interact with their data, revealing the characteristics of the data and models to enable better interpretation and inform the direction of future investigation. Innovative immersive technologies like virtual reality, augmented reality, and remote collaboration techniques are being adopted more widely and are a magnet for students. Time-varying or transient phenomena are especially challenging to model and to visualize; researchers and students may need to investigate the role of initial conditions in driving phenomena, while nonlinearities in the governing equations of many Earth systems make the computations and resulting visualization especially challenging. Training students how to use, design, build, and interpret scientific modeling and visualization tools prepares them to better understand the nature of complex, multiscale geoscience data.

  15. Learning invariance from natural images inspired by observations in the primary visual cortex.

    PubMed

    Teichmann, Michael; Wiltschut, Jan; Hamker, Fred

    2012-05-01

    The human visual system has the remarkable ability to recognize objects largely invariant of their position, rotation, and scale. A good interpretation of neurobiological findings involves a computational model that simulates the signal processing of the visual cortex. In part, this is likely achieved step by step from early to late areas of visual perception. While several algorithms have been proposed for learning feature detectors, only a few studies address the biologically plausible learning of such invariance. In this study, a set of Hebbian learning rules based on calcium dynamics and homeostatic regulation of single neurons is proposed. Their performance is verified within a simple model of the primary visual cortex to learn so-called complex cells, based on a sequence of static images. As a result, the learned complex-cell responses are largely invariant to phase and position.
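
    The paper's rules are grounded in calcium dynamics and homeostatic regulation, which are beyond a short sketch. As a generic illustration of a stabilized Hebbian update, the classic Oja's rule is shown here instead, on invented anisotropic input data:

```python
import numpy as np

# Generic stand-in for a stabilized Hebbian update: Oja's rule, a classic
# Hebbian rule with built-in weight normalization. This is NOT the paper's
# calcium-based rule; the synthetic inputs (variance concentrated on the
# first axis) are invented for illustration.

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4)) * np.array([3.0, 1.0, 0.5, 0.1])

w = rng.normal(size=4)
w /= np.linalg.norm(w)
eta = 0.01
for _ in range(20):                 # repeated passes over the data
    for x in X:
        y = w @ x                   # postsynaptic response
        w += eta * y * (x - y * w)  # Hebbian growth; the -y*y*w decay
                                    # keeps the weight norm near 1

# w now points along the direction of greatest input variance (first axis)
```

    The decay term gives the rule a stable fixed point at the leading principal direction of the inputs, which is why such normalized Hebbian rules can learn selective, stable feature detectors.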

  16. Visual Image Sensor Organ Replacement

    NASA Technical Reports Server (NTRS)

    Maluf, David A.

    2014-01-01

    This innovation is a system that augments human vision through a technique called "Sensing Super-position" using a Visual Instrument Sensory Organ Replacement (VISOR) device. The VISOR device translates visual and other sensors (i.e., thermal) into sounds to enable very difficult sensing tasks. Three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g. histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. Because the human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns, the translation of images into sounds reduces the risk of accidentally filtering out important clues. The VISOR device was developed to augment the current state-of-the-art head-mounted (helmet) display systems. It provides the ability to sense beyond the human visible light range, to increase human sensing resolution, to use wider angle visual perception, and to improve the ability to sense distances. It also allows compensation for movement by the human or changes in the scene being viewed.
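
    The image-to-sound translation can be sketched as mapping image rows to sinusoid frequencies and columns to successive time slots. The sample rate, frequency range, and column duration below are invented parameters, not the VISOR device's actual mapping:

```python
import numpy as np

# "Sensing super-position" sketch: image rows map to sinusoid frequencies
# (top row = highest pitch) and columns map to time slots, so a bright
# pixel becomes a tone at its row's frequency during its column's slot.
# Sample rate, frequency range, and column duration are invented.

def image_to_sound(img, sr=8000, col_dur=0.05, fmin=200.0, fmax=2000.0):
    h, w = img.shape
    freqs = np.linspace(fmax, fmin, h)              # one frequency per row
    t = np.arange(int(sr * col_dur)) / sr           # samples in one column slot
    tones = np.sin(2 * np.pi * np.outer(freqs, t))  # (rows, samples) sinusoids
    audio = np.concatenate([img[:, j] @ tones for j in range(w)])
    return audio / (np.max(np.abs(audio)) + 1e-12)  # normalize to [-1, 1]

demo = np.zeros((4, 3))
demo[0, 0] = 1.0            # one bright pixel: highest pitch, first slot
a = image_to_sound(demo)    # 3 columns x 400 samples = 1200 samples
```

    The resulting waveform is effectively an audible spectrogram of the image, which is the sense in which the scene's spatial brightness map becomes an audio signal as a function of frequency and time.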

  17. Standardizing Quality Assessment of Fused Remotely Sensed Images

    NASA Astrophysics Data System (ADS)

    Pohl, C.; Moellmann, J.; Fries, K.

    2017-09-01

    The multitude of available operational remote sensing satellites has led to the development of many image fusion techniques that provide images of high spatial, spectral and temporal resolution. Comparing these techniques is necessary to obtain an optimized image for the different applications of remote sensing. There are two approaches to assessing image quality: 1. qualitatively, by visual interpretation, and 2. quantitatively, using image quality indices. However, an objective comparison is difficult because visual assessment is inherently subjective and quantitative assessments rely on differing criteria; depending on the criteria and indices chosen, the result varies. It is therefore necessary to standardize both processes (qualitative and quantitative assessment) in order to allow an objective evaluation of image fusion quality. Various studies have been conducted at the University of Osnabrueck (UOS) to establish a standardized process for objectively comparing fused image quality. First, established image fusion quality assessment protocols, i.e. Quality with No Reference (QNR) and Khan's protocol, were compared across various fusion experiments. Second, the process of visual quality assessment was structured and standardized with the aim of providing an evaluation protocol. This manuscript reports the results of the comparison and provides recommendations for future research.
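
    As a minimal example of the quantitative side of such an assessment, one widely used building block is the per-band correlation between a fused image and a reference. QNR and Khan's protocol combine several such measures, so this single index is only an illustration:

```python
import numpy as np

# One quantitative building block for fused-image assessment: the per-band
# Pearson correlation between fused and reference images. QNR and Khan's
# protocol combine several such measures; this index is only an
# illustration, and the random test band is invented.

def band_correlation(fused, reference):
    f = np.asarray(fused, float).ravel()
    r = np.asarray(reference, float).ravel()
    f -= f.mean()
    r -= r.mean()
    return float((f @ r) / (np.linalg.norm(f) * np.linalg.norm(r)))

rng = np.random.default_rng(1)
band = rng.normal(size=(16, 16))
```

    A value of 1 indicates the fused band preserves the reference's spatial pattern exactly; the standardization problem discussed above is precisely that different such indices weight different distortions.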

  18. Visualizing Discipline of the Body in a German Open-Air School (1923-1939): Retrospection and Introspection

    ERIC Educational Resources Information Center

    Thyssen, Geert

    2007-01-01

    This article considers how historians might use imagery in the context of an open-air school in Germany, Senne I-Bielefeld (1922-1939). In considering the "nature" of such images, issues and problems associated with their interpretation are illuminated and discussed. First, two images selected from the pre-Nazi period of the school are…

  19. Monte Carlo simulations of medical imaging modalities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Estes, G.P.

    Because continuous-energy Monte Carlo radiation transport calculations can be nearly exact simulations of physical reality (within data limitations, geometric approximations, transport algorithms, etc.), it follows that one should be able to closely approximate the results of many experiments from first-principles computations. This line of reasoning has led to various MCNP studies that involve simulations of medical imaging modalities and other visualization methods such as radiography, Anger camera, computerized tomography (CT) scans, and SABRINA particle track visualization. It is the intent of this paper to summarize some of these imaging simulations in the hope of stimulating further work, especially as computer power increases. Improved interpretation and prediction of medical images should ultimately lead to enhanced medical treatments. It is also reasonable to assume that such computations could be used to design new or more effective imaging instruments.

  20. Paintings, photographs, and computer graphics are calculated appearances

    NASA Astrophysics Data System (ADS)

    McCann, John

    2012-03-01

    Painters reproduce the appearances they see, or visualize. The entire human visual system is the first part of that process, providing extensive spatial processing. Painters have used spatial techniques since the Renaissance to render HDR scenes. Silver halide photography responds to the light falling on single film pixels. Film can only mimic the retinal response of the cones at the start of the visual process. Film cannot mimic the spatial processing in humans. Digital image processing can. This talk studies three dramatic visual illusions and uses the spatial mechanisms found in human vision to interpret their appearances.

  1. Comparison of magnetic resonance imaging sequences for depicting the subthalamic nucleus for deep brain stimulation.

    PubMed

    Nagahama, Hiroshi; Suzuki, Kengo; Shonai, Takaharu; Aratani, Kazuki; Sakurai, Yuuki; Nakamura, Manami; Sakata, Motomichi

    2015-01-01

    Electrodes are surgically implanted into the subthalamic nucleus (STN) of Parkinson's disease patients to provide deep brain stimulation. To ensure correct positioning, the anatomic location of the STN must be determined preoperatively. Magnetic resonance imaging has been used for pinpointing the location of the STN. To identify the optimal imaging sequence for identifying the STN, we compared images produced with T2 star-weighted angiography (SWAN), gradient echo T2*-weighted imaging, and fast spin echo T2-weighted imaging in 6 healthy volunteers. Our comparison involved measurement of the contrast-to-noise ratio (CNR) between the STN and the substantia nigra and a radiologist's interpretations of the images. Of the sequences examined, the CNR and qualitative scores for STN visualization were significantly higher on SWAN images than on the other images (p < 0.01). The kappa value (0.74) for SWAN images was also the highest of the three sequences for visualizing the STN. SWAN is therefore the sequence currently best suited for identifying the STN.
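
    The contrast-to-noise ratio used to compare sequences like these is typically the mean-signal difference between two regions of interest divided by a noise estimate. The abstract does not give the exact formula used, so the following is a sketch of one common definition:

```python
import numpy as np

def contrast_to_noise_ratio(roi_a, roi_b, noise_roi):
    """CNR between two regions of interest: absolute difference of
    mean signals divided by the standard deviation of a background
    (noise) region.  Definitions vary between studies; this is one
    common form, not necessarily the one used in the paper."""
    return abs(np.mean(roi_a) - np.mean(roi_b)) / np.std(noise_roi)
```

Here `roi_a` and `roi_b` would be pixel samples from, say, the STN and the substantia nigra, and `noise_roi` a sample from air or another signal-free region.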

  2. Comparing object recognition from binary and bipolar edge images for visual prostheses.

    PubMed

    Jung, Jae-Hyun; Pu, Tian; Peli, Eli

    2016-11-01

    Visual prostheses require an effective representation method because of their limited display conditions: only 2 or 3 levels of grayscale at low resolution. Edges derived from abrupt luminance changes in images carry essential information for object recognition. Typical binary (black-and-white) edge images have been used to convey these essential features. However, in scenes with complex, cluttered backgrounds, the rate at which human observers recognize binary edge images is limited, and additional information is required. The polarity of edges and cusps (black or white features on a gray background) carries important additional information; polarity may provide shape-from-shading information missing from the binary edge image. This depth information may be restored by using bipolar edges. We compared object recognition rates for 16 binary and bipolar edge images by 26 subjects to determine the possible impact of bipolar filtering in visual prostheses with 3 or more levels of grayscale. Recognition rates were higher with bipolar edge images, and the improvement was significant in scenes with complex backgrounds. The results also suggest that erroneous shape-from-shading interpretation of bipolar edges arising from pigment rather than shape boundaries may confound recognition.
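
    A bipolar edge image of the kind described can be approximated by keeping the sign of a second-derivative (Laplacian) response rather than thresholding its magnitude to binary. A simplified sketch, assuming a 4-neighbour discrete Laplacian and an arbitrary threshold; the paper's actual filtering may differ:

```python
import numpy as np

def bipolar_edges(image, threshold=0.1):
    """Ternary (bipolar) edge image: +1 (white) where a discrete
    Laplacian is strongly positive, -1 (black) where strongly
    negative, 0 (gray background) elsewhere.  A simplified stand-in
    for the bipolar filtering discussed in the paper."""
    img = image.astype(float)
    lap = np.zeros_like(img)
    # 4-neighbour discrete Laplacian on the interior pixels.
    lap[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1]
                       + img[1:-1, :-2] + img[1:-1, 2:]
                       - 4 * img[1:-1, 1:-1])
    out = np.zeros(img.shape, dtype=int)
    out[lap > threshold] = 1
    out[lap < -threshold] = -1
    return out
```

A luminance step thus produces a paired white/black edge whose polarity encodes which side of the boundary is brighter, which is the extra cue absent from a binary edge map.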

  3. Landsat 8 Multispectral and Pansharpened Imagery Processing on the Study of Civil Engineering Issues

    NASA Astrophysics Data System (ADS)

    Lazaridou, M. A.; Karagianni, A. Ch.

    2016-06-01

    Scientific and professional interests of civil engineering mainly include structures, hydraulics, geotechnical engineering, environment, and transportation issues. Topics in these areas may concern urban environment issues, urban planning, hydrological modelling, the study of hazards, and road construction. Land cover information contributes significantly to the study of these subjects. It can be acquired effectively by visual image interpretation of satellite imagery, either directly or after applying enhancement routines, and also by imagery classification. The Landsat Data Continuity Mission (LDCM - Landsat 8) is the latest satellite in the Landsat series, launched in February 2013. Landsat 8 medium-spatial-resolution multispectral imagery is of particular interest for extracting land cover because of its fine spectral resolution, its 12-bit radiometric quantization, the capability of merging the 15-meter panchromatic band with the 30-meter multispectral imagery, and the free data policy. In this paper, Landsat 8 multispectral and panchromatic imagery covering the surroundings of a lake in north-western Greece is used. Land cover information is extracted using suitable digital image processing software. The rich spectral content of the multispectral image is combined with the high spatial resolution of the panchromatic image by applying image fusion - pansharpening - facilitating visual image interpretation to delineate land cover. Further processing concerns supervised image classification: classification of the pansharpened image preceded classification of the multispectral image. Corresponding comparative considerations are also presented.
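
    Pansharpening of the kind described, merging the 15 m panchromatic band with the 30 m multispectral bands, can be illustrated with the classic Brovey transform (the paper does not state which fusion algorithm its software applied). The sketch assumes the multispectral bands have already been resampled to the panchromatic grid:

```python
import numpy as np

def brovey_pansharpen(ms, pan):
    """Brovey-transform pansharpening: each multispectral band is
    rescaled by the ratio of the panchromatic band to the
    multispectral intensity.  `ms` has shape (bands, rows, cols) and
    must already be resampled to the panchromatic grid; `pan` has
    shape (rows, cols).  One classic fusion method, shown only as
    an illustration."""
    ms = ms.astype(float)
    pan = pan.astype(float)
    intensity = ms.mean(axis=0)                    # simple band average
    ratio = pan / np.where(intensity == 0, 1, intensity)
    return ms * ratio                              # broadcast over bands
```

The Brovey transform injects the panchromatic spatial detail at the cost of some spectral distortion, which is one reason quantitative quality indices are used alongside visual interpretation when comparing fusion results.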

  4. Microreact: visualizing and sharing data for genomic epidemiology and phylogeography

    PubMed Central

    Argimón, Silvia; Abudahab, Khalil; Goater, Richard J. E.; Fedosejev, Artemij; Bhai, Jyothish; Glasner, Corinna; Feil, Edward J.; Holden, Matthew T. G.; Yeats, Corin A.; Grundmann, Hajo; Spratt, Brian G.

    2016-01-01

    Visualization is frequently used to aid our interpretation of complex datasets. Within microbial genomics, visualizing the relationships between multiple genomes as a tree provides a framework onto which associated data (geographical, temporal, phenotypic and epidemiological) are added to generate hypotheses and to explore the dynamics of the system under investigation. Selected static images are then used within publications to highlight the key findings to a wider audience. However, these images are a very inadequate way of exploring and interpreting the richness of the data. There is, therefore, a need for flexible, interactive software that presents the population genomic outputs and associated data in a user-friendly manner for a wide range of end users, from trained bioinformaticians to front-line epidemiologists and health workers. Here, we present Microreact, a web application for the easy visualization of datasets consisting of any combination of trees, geographical, temporal and associated metadata. Data files can be uploaded to Microreact directly via the web browser or by linking to their location (e.g. from Google Drive/Dropbox or via API), and an integrated visualization via trees, maps, timelines and tables provides interactive querying of the data. The visualization can be shared as a permanent web link among collaborators, or embedded within publications to enable readers to explore and download the data. Microreact can act as an end point for any tool or bioinformatic pipeline that ultimately generates a tree, and provides a simple, yet powerful, visualization method that will aid research and discovery and the open sharing of datasets. PMID:28348833

  5. Mapping landscape corridors

    Treesearch

    Peter Vogt; Kurt H. Riitters; Marcin Iwanowski; Christine Estreguil; Jacek Kozak; Pierre Soille

    2007-01-01

    Corridors are important geographic features for biological conservation and biodiversity assessment. The identification and mapping of corridors is usually based on visual interpretations of movement patterns (functional corridors) or habitat maps (structural corridors). We present a method for automated corridor mapping with morphological image processing, and...

  6. LIME: 3D visualisation and interpretation of virtual geoscience models

    NASA Astrophysics Data System (ADS)

    Buckley, Simon; Ringdal, Kari; Dolva, Benjamin; Naumann, Nicole; Kurz, Tobias

    2017-04-01

    Three-dimensional and photorealistic acquisition of surface topography, using methods such as laser scanning and photogrammetry, has become widespread across the geosciences over the last decade. With recent innovations in photogrammetric processing software, robust and automated data capture hardware, and novel sensor platforms, including unmanned aerial vehicles, obtaining 3D representations of exposed topography has never been easier. In addition to 3D datasets, fusing surface geometry with imaging sensors, such as multi/hyperspectral, thermal and ground-based InSAR, and with geophysical methods creates novel and highly visual datasets that provide a fundamental spatial framework for addressing open geoscience research questions. Although data capture and processing routines are becoming well established and widely reported in the scientific literature, challenges remain in the analysis, co-visualisation and presentation of 3D photorealistic models, especially for new users (e.g. students and scientists new to geomatics methods). Interpretation and measurement are essential for quantitative analysis of 3D datasets, and qualitative methods are valuable for presentation purposes, for planning and in education. Motivated by this background, the current contribution presents LIME, a lightweight, high-performance 3D software package for interpreting and co-visualising 3D models and related image data in geoscience applications. The software focuses on novel data integration and visualisation of 3D topography with image sources such as hyperspectral imagery, logs and interpretation panels, geophysical datasets and georeferenced maps and images. High-quality visual output can be generated for dissemination purposes, to aid researchers in communicating their results. The background of the software is described, and case studies from outcrop geology, hyperspectral mineral mapping and geophysical-geospatial data integration showcase the novel methods developed.

  7. What we see when we digitize pain: The risk of valorizing image-based representations of fibromyalgia over body and bodily experience

    PubMed Central

    Manivannan, Vyshali

    2017-01-01

    Fibromyalgia is chronic pain of unknown etiology, attended by fatigue and affective dysfunction. Unapparent to the unpracticed eye or diagnostic image, it is denied the status of “real” suffering given to visually confirmable disorders. It is my customary mode of existence: a contingent landscape of swinging bridges that may or may not give way, everything a potential threat or deprivation. I don’t express it within the framework of acute pain, but I am evaluated by traditional biomedical standards anyway. Ultimately, the diagnostic image of pain, and the medical and academic discourse used to interpret it, determines my functionality. Such a stance dismisses bodily senses and alternate ways of knowing in pursuit of the ocularcentric objectivity promised by digital health technologies, whose vision remains chained to the interpretive, discursive strategies of human operators and interpreters. A new poetics of pain is critical not only for rewriting the dominant metaphors that construct and delimit our imaginings of pain but also for rewiring the use and reading of digital technologies, wherein the digital image becomes the new site of the hermeneutic exercise, even when the suffering body lies in plain view. This facilitates a failure to listen and touch in patient care, and the imposition of a narrative based on visual evidence, translated into sanitized language, at the cost of intercorporeality. If pain strips sufferers of a voice, my body and its affects should be allowed to speak. PMID:29942598

  8. Neural network and its application to CT imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nikravesh, M.; Kovscek, A.R.; Patzek, T.W.

    We present an integrated approach to imaging the progress of air displacement by spontaneous imbibition of oil into sandstone. We combine Computerized Tomography (CT) scanning and neural network image processing. The main aspects of our approach are (I) visualization of the distribution of oil and air saturation by CT, (II) interpretation of CT scans using neural networks, and (III) reconstruction of 3-D images of oil saturation from the CT scans with a neural network model. Excellent agreement between the actual images and the neural network predictions is found.

  9. Longer-Term Investigation of the Value of 18F-FDG-PET and Magnetic Resonance Imaging for Predicting the Conversion of Mild Cognitive Impairment to Alzheimer's Disease: A Multicenter Study.

    PubMed

    Inui, Yoshitaka; Ito, Kengo; Kato, Takashi

    2017-01-01

    The value of fluorine-18-fluorodeoxyglucose positron emission tomography (18F-FDG-PET) and magnetic resonance imaging (MRI) for predicting conversion of mild cognitive impairment (MCI) to Alzheimer's disease (AD) over the longer term is unclear. Our objective was to evaluate longer-term prediction of MCI-to-AD conversion using 18F-FDG-PET and MRI in a multicenter study. One hundred fourteen patients with MCI were followed for 5 years. They underwent clinical and neuropsychological examinations, 18F-FDG-PET, and MRI at baseline. PET images were visually classified into predefined dementia patterns, and PET scores were calculated as a semiquantitative index. For structural MRI, z-scores in the medial temporal area were calculated by automated voxel-based morphometry (VBM). Overall, 72% of patients with amnestic MCI progressed to AD during the 5-year follow-up. The diagnostic accuracy of PET scores over 5 years was 60%, with 53% sensitivity and 84% specificity. Visual interpretation of PET images predicted conversion to AD with an overall diagnostic accuracy of 82%, sensitivity of 94%, and specificity of 53%. The accuracy of the VBM analysis fluctuated little over the 5 years and was highest (73%) at the 5-year follow-up, with 79% sensitivity and 63% specificity. The best performance (87.9% diagnostic accuracy, 89.8% sensitivity, and 82.4% specificity) was achieved by a combination, identified using multivariate logistic regression analysis, that included PET visual interpretation, educational level, and neuropsychological tests as predictors. 18F-FDG-PET visual assessment showed high performance for predicting conversion from MCI to AD, particularly in combination with neuropsychological tests. PET scores showed high diagnostic specificity. Structural MRI focused on the medial temporal area showed stable predictive value throughout the 5-year course.

  10. Making the invisible body visible. Bone scans, osteoporosis and women's bodily experiences.

    PubMed

    Reventlow, Susanne Dalsgaard; Hvas, Lotte; Malterud, Kirsti

    2006-06-01

    The imaging technology of bone scans allows visualization of the bone structure and determination of a numerical value. Both are subject to professional interpretation according to medical (epidemiological) evidence to estimate the individual's risk of fractures. But when bodily experience is challenged by a visual diagnosis, what effect does this have on an individual? The aim of this study was to explore women's bodily experiences after a bone scan and to analyse how the scan affects women's self-awareness, sense of bodily identity and integrity. We interviewed 16 Danish women (aged 61-63) who had had a bone scan for osteoporosis. The analysis was based on Merleau-Ponty's perspective of perception as an embodied experience in which bodily experience is understood to be the existential ground of culture and self. Women appeared to take the scan literally and planned their lives accordingly. They appeared to believe that the 'pictures' revealed some truth about themselves. The information supplied by the scan fostered a new body image. The women interpreted the scan result (a mark on a curve) to mean bodily fragility, which they incorporated into their bodily perception. The embodiment of this new body image produced new symptom interpretations and preventive actions, including caution. The result of the bone scan and its cultural interpretation triggered a reconstruction of the body self as weak with reduced capacity. Women's interpretation of the bone scan reorganized their lived space and time, and their relations with others and themselves. Technological information about osteoporosis appeared to leave most affected women more uncertain and restricted rather than empowered. The findings raise some fundamental questions concerning the use of medical technology for the prevention of asymptomatic disorders.

  11. Visualising uncertainty: interpreting quantified geoscientific inversion outputs for a diverse user community.

    NASA Astrophysics Data System (ADS)

    Reading, A. M.; Morse, P. E.; Staal, T.

    2017-12-01

    Geoscientific inversion outputs, such as seismic tomography contour images, are finding increasing use amongst scientific user communities that have limited knowledge of the impact of output parameter uncertainty on subsequent interpretations made from such images. We make use of a newly written computer application which enables seismic tomography images to be displayed in a performant 3D graphics environment. This facilitates the mapping of colour scales to the human visual sensorium for the interactive interpretation of contoured inversion results incorporating parameter uncertainty. Two case examples of seismic tomography inversions or contoured compilations are compared from the southern hemisphere continents of Australia and Antarctica. The Australian example is based on the AuSREM contoured seismic wavespeed model while the Antarctic example is a valuable but less well constrained result. Through adjusting the multiple colour gradients, layer separations, opacity, illumination, shadowing and background effects, we can optimise the insights obtained from the 3D structure in the inversion compilation or result. Importantly, we can also limit the display to show information in a way that is mapped to the uncertainty in the 3D result. Through this practical application, we demonstrate that the uncertainty in the result can be handled through a well-posed mapping of the parameter values to displayed colours in the knowledge of what is perceived visually by a typical human. We found that this approach maximises the chance of a useful tectonic interpretation by a diverse scientific user community. In general, we develop the idea that quantified inversion uncertainty can be used to tailor the way that the output is presented to the analyst for scientific interpretation.

  12. Magnetic resonance imaging evaluation after implantation of a titanium cervical disc prosthesis: a comparison of 1.5 and 3 Tesla magnet strength.

    PubMed

    Sundseth, Jarle; Jacobsen, Eva A; Kolstad, Frode; Nygaard, Oystein P; Zwart, John A; Hol, Per K

    2013-10-01

    Cervical disc prostheses induce a significant amount of artifact in magnetic resonance imaging, which may complicate radiologic follow-up after surgery. The purpose of this study was to investigate to what extent the artifact induced by the frequently used Discover(®) cervical disc prosthesis impedes interpretation of MR images at the operated and adjacent levels at 1.5 and 3 Tesla. Ten consecutive patients were investigated in both 1.5 and 3 Tesla MR with standard image sequences one year after anterior cervical discectomy with arthroplasty. Two neuroradiologists evaluated the images by consensus. Emphasis was placed on signal changes in the medulla at all levels and on visualization of the root canals at the operated and adjacent levels. A "blur artifact ratio" was calculated, defined as the height of the artifact on sagittal T1 images relative to the operated level. The artifacts induced at 1.5 and 3 Tesla were of entirely different character, and evaluation of the spinal cord at the operated level was impossible in both magnets. Artifacts also made the root canals difficult to assess at the operated level, more so at 3 Tesla. At the adjacent levels, however, the spinal cord and root canals were completely visualized in all patients. The "blur artifact" induced at the operated level was also more pronounced at 3 Tesla. The artifact induced by the Discover(®) titanium disc prosthesis at both 1.5 and 3 Tesla makes interpretation of the spinal cord impossible and visualization of the root canals difficult at the operated level. Adjusting the MR sequences to produce the least amount of artifact is important.

  13. A Classification of Remote Sensing Image Based on Improved Compound Kernels of Svm

    NASA Astrophysics Data System (ADS)

    Zhao, Jianing; Gao, Wanlin; Liu, Zili; Mou, Guifen; Lu, Lin; Yu, Lina

    The accuracy of remote sensing (RS) classification based on SVM, which is developed from statistical learning theory, is high even with a small number of training samples, which makes SVM methods well suited to RS classification. The traditional RS classification method combines visual interpretation with computer classification; an SVM-based method improves accuracy considerably while saving much of the labor and time otherwise spent interpreting images and collecting training samples. Kernel functions play an important part in the SVM algorithm. The method presented here uses an improved compound kernel function and therefore achieves a higher classification accuracy on RS images. Moreover, the compound kernel improves the generalization and learning ability of the kernel.
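
    A compound kernel in this sense is a non-negative weighted combination of base kernels, which by Mercer's theorem is itself a valid kernel. The abstract does not give the exact form or weights used, so the following combination of an RBF and a polynomial kernel is purely illustrative:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """Gaussian RBF kernel matrix between row-sample matrices X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

def poly_kernel(X, Y, degree=2, coef0=1.0):
    """Inhomogeneous polynomial kernel matrix."""
    return (X @ Y.T + coef0) ** degree

def compound_kernel(X, Y, weight=0.7):
    """Convex combination of a local (RBF) and a global (polynomial)
    kernel.  Any combination with non-negative weights is still a
    valid Mercer kernel; the form and weights here are illustrative
    choices, not those of the paper."""
    return weight * rbf_kernel(X, Y) + (1 - weight) * poly_kernel(X, Y)
```

A callable like `compound_kernel` can be passed to SVM implementations that accept custom kernels, e.g. scikit-learn's `SVC(kernel=compound_kernel)`.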

  14. Behavioral and Neural Representations of Spatial Directions across Words, Schemas, and Images.

    PubMed

    Weisberg, Steven M; Marchette, Steven A; Chatterjee, Anjan

    2018-05-23

    Modern spatial navigation requires fluency with multiple representational formats, including visual scenes, signs, and words. These formats convey different information. Visual scenes are rich and specific but contain extraneous details. Arrows, as an example of signs, are schematic representations in which the extraneous details are eliminated, but analog spatial properties are preserved. Words eliminate all spatial information and convey spatial directions in a purely abstract form. How does the human brain compute spatial directions within and across these formats? To investigate this question, we conducted two experiments on men and women: a behavioral study that was preregistered and a neuroimaging study using multivoxel pattern analysis of fMRI data to uncover similarities and differences among representational formats. Participants in the behavioral study viewed spatial directions presented as images, schemas, or words (e.g., "left"), and responded to each trial, indicating whether the spatial direction was the same or different as the one viewed previously. They responded more quickly to schemas and words than images, despite the visual complexity of stimuli being matched. Participants in the fMRI study performed the same task but responded only to occasional catch trials. Spatial directions in images were decodable in the intraparietal sulcus bilaterally but were not in schemas and words. Spatial directions were also decodable between all three formats. These results suggest that intraparietal sulcus plays a role in calculating spatial directions in visual scenes, but this neural circuitry may be bypassed when the spatial directions are presented as schemas or words. SIGNIFICANCE STATEMENT Human navigators encounter spatial directions in various formats: words ("turn left"), schematic signs (an arrow showing a left turn), and visual scenes (a road turning left). The brain must transform these spatial directions into a plan for action. 
Here, we investigate similarities and differences between neural representations of these formats. We found that bilateral intraparietal sulci represent spatial directions in visual scenes and across the three formats. We also found that participants respond quickest to schemas, then words, then images, suggesting that spatial directions in abstract formats are easier to interpret than concrete formats. These results support a model of spatial direction interpretation in which spatial directions are either computed for real world action or computed for efficient visual comparison. Copyright © 2018 the authors 0270-6474/18/384996-12$15.00/0.

  15. The magnifying glass - A feature space local expansion for visual analysis. [and image enhancement

    NASA Technical Reports Server (NTRS)

    Juday, R. D.

    1981-01-01

    The Magnifying Glass Transformation (MGT) technique is proposed as a multichannel spectral operation yielding visual imagery enhanced in a specified spectral vicinity, guided by the statistics of training samples. In one application, discrimination among spectral neighbors within an interactive display can be increased without altering the appearance of spectrally distant objects or the overall interpretation. A direct histogram specification technique is applied to the channels of the multispectral image so that a subset of the spectral domain occupies an increased fraction of the domain. The transformation is carried out by obtaining the training information, establishing the condition of the covariance matrix, determining the influenced solid, and initializing the lookup table. Finally, the image is transformed.
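
    The direct histogram specification step mentioned above can be sketched as classic CDF matching on a single channel. The MGT itself operates on multiple channels and expands only a local region of feature space; this routine illustrates only the underlying remapping idea:

```python
import numpy as np

def match_histogram(source, template):
    """Classic histogram specification: remap source pixel values so
    their empirical CDF matches that of a template image.  A
    single-channel sketch of the direct-histogram-specification
    step, not the full multichannel MGT."""
    src = source.ravel()
    # Empirical CDF of the source image.
    s_values, s_idx, s_counts = np.unique(src, return_inverse=True,
                                          return_counts=True)
    s_cdf = np.cumsum(s_counts) / src.size
    # Empirical CDF of the template image.
    t_values, t_counts = np.unique(template.ravel(), return_counts=True)
    t_cdf = np.cumsum(t_counts) / template.size
    # Map each source quantile to the template value at that quantile.
    mapped = np.interp(s_cdf, t_cdf, t_values)
    return mapped[s_idx].reshape(source.shape)
```

In the MGT the "template" is constructed so that the spectral neighborhood of interest spans a wider range of output values, which is what produces the local magnification effect.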

  16. LONI visualization environment.

    PubMed

    Dinov, Ivo D; Valentino, Daniel; Shin, Bae Cheol; Konstantinidis, Fotios; Hu, Guogang; MacKenzie-Graham, Allan; Lee, Erh-Fang; Shattuck, David; Ma, Jeff; Schwartz, Craig; Toga, Arthur W

    2006-06-01

    Over the past decade, the use of informatics to solve complex neuroscientific problems has increased dramatically. Many of these research endeavors involve examining large amounts of imaging, behavioral, genetic, neurobiological, and neuropsychiatric data. Superimposing, processing, visualizing, or interpreting such a complex cohort of datasets frequently becomes a challenge. We developed a new software environment that allows investigators to integrate multimodal imaging data, hierarchical brain ontology systems, on-line genetic and phylogenic databases, and 3D virtual data reconstruction models. The Laboratory of Neuro Imaging visualization environment (LONI Viz) consists of the following components: a sectional viewer for imaging data, an interactive 3D display for surface and volume rendering of imaging data, a brain ontology viewer, and an external database query system. The synchronization of all components according to stereotaxic coordinates, region name, hierarchical ontology, and genetic labels is achieved via a comprehensive BrainMapper functionality, which directly maps between position, structure name, database, and functional connectivity information. This environment is freely available, portable, and extensible, and may prove very useful for neurobiologists, neurogeneticists, brain mappers, and other clinical, pedagogical, and research endeavors.

  17. General Approach for Rock Classification Based on Digital Image Analysis of Electrical Borehole Wall Images

    NASA Astrophysics Data System (ADS)

    Linek, M.; Jungmann, M.; Berlage, T.; Clauser, C.

    2005-12-01

    Within the Ocean Drilling Program (ODP), image logging tools such as the Formation MicroScanner (FMS) or the Resistivity-at-the-Bit (RAB) tool have been routinely deployed. Both logging methods are based on resistivity measurements at the borehole wall and are therefore sensitive to conductivity contrasts, which are mapped in color-scale images. These images are commonly used to study the structure of sedimentary rocks and the oceanic crust (petrologic fabric, fractures, veins, etc.). So far, mapping of lithology from electrical images has been based purely on visual inspection and subjective interpretation. We apply digital image analysis to electrical borehole wall images in order to develop a method that supports objective rock identification. We focus on supervised textural pattern recognition, which studies the spatial gray-level distribution of certain rock types. FMS image intervals of rock classes known from core data are used to train the textural characteristics of each class. A so-called gray-level co-occurrence matrix is computed by counting the occurrences of pairs of gray levels that lie a certain distance apart. Once the matrix for an image interval is computed, we calculate the image contrast, homogeneity, energy, and entropy. We assign characteristic textural features to different rock types by reducing the image information to a small set of descriptive features. Once a discriminating set of texture features for each rock type is found, we are able to classify entire FMS images with respect to the trained rock types. A rock classification based on texture features enables quantitative lithology mapping and offers high repeatability, in contrast to purely visual, subjective image interpretation. We show examples of classification among breccias, pillows, massive units, and horizontally bedded tuffs based on ODP image data.
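
    The gray-level co-occurrence matrix and the four statistics named in the abstract (contrast, homogeneity, energy, entropy) can be computed directly. A minimal single-offset sketch; production use would typically symmetrize the matrix and average over several offsets and distances:

```python
import numpy as np

def glcm_features(image, levels, offset=(0, 1)):
    """Gray-level co-occurrence matrix for one pixel offset, plus
    four Haralick-style statistics.  `image` must contain integer
    gray levels in [0, levels).  A minimal sketch of the texture
    descriptors named in the abstract."""
    dr, dc = offset
    rows, cols = image.shape
    glcm = np.zeros((levels, levels))
    # Count co-occurring gray-level pairs at the given offset.
    for r in range(max(0, -dr), rows - max(0, dr)):
        for c in range(max(0, -dc), cols - max(0, dc)):
            glcm[image[r, c], image[r + dr, c + dc]] += 1
    p = glcm / glcm.sum()                      # joint probabilities
    i, j = np.indices(p.shape)
    contrast = ((i - j) ** 2 * p).sum()
    homogeneity = (p / (1.0 + np.abs(i - j))).sum()
    energy = (p ** 2).sum()
    nz = p[p > 0]
    entropy = -(nz * np.log2(nz)).sum()
    return {"contrast": contrast, "homogeneity": homogeneity,
            "energy": energy, "entropy": entropy}
```

A homogeneous rock texture concentrates mass near the GLCM diagonal (low contrast, high homogeneity), while a breccia-like texture spreads it out, which is what makes these four numbers usable as class discriminants.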

  18. The blind student’s interpretation of two-dimensional shapes in geometry

    NASA Astrophysics Data System (ADS)

    Andriyani; Budayasa, I. K.; Juniati, D.

    2018-01-01

    The blind student’s interpretation of two-dimensional shapes represents the blind student’s mental image of shapes that he cannot visualize directly, relating to illustration of the characteristics and number of edges and angles. The objective of this research is to identify the blind student’s interpretation of two-dimensional shapes. This research was an exploratory study with a qualitative approach. The subject of this research is a sixth-grade student who has been totally blind since the fifth grade of elementary school. The researchers interviewed the subject about his interpretation of two-dimensional shapes according to his thinking. The findings of this study show the uniqueness of blind students who have been totally blind since school age in knowing and illustrating the characteristics of the edges and angles of two-dimensional shapes by utilizing visual experiences obtained before becoming blind. The results can inspire teachers to design further learning for the development of blind students’ geometry concepts.

  19. Addressing the coming radiology crisis-the Society for Computer Applications in Radiology transforming the radiological interpretation process (TRIP) initiative.

    PubMed

    Andriole, Katherine P; Morin, Richard L; Arenson, Ronald L; Carrino, John A; Erickson, Bradley J; Horii, Steven C; Piraino, David W; Reiner, Bruce I; Seibert, J Anthony; Siegel, Eliot

    2004-12-01

    The Society for Computer Applications in Radiology (SCAR) Transforming the Radiological Interpretation Process (TRIP) Initiative aims to spearhead research, education, and discovery of innovative solutions to address the problem of information and image data overload. The initiative will foster interdisciplinary research on technological, environmental and human factors to better manage and exploit the massive amounts of data. TRIP will focus on the following basic objectives: improving the efficiency of interpretation of large data sets, improving the timeliness and effectiveness of communication, and decreasing medical errors. The ultimate goal of the initiative is to improve the quality and safety of patient care. Interdisciplinary research into several broad areas will be necessary to make progress in managing the ever-increasing volume of data. The six concepts involved are human perception, image processing and computer-aided detection (CAD), visualization, navigation and usability, databases and integration, and evaluation and validation of methods and performance. The result of this transformation will affect several key processes in radiology, including image interpretation; communication of imaging results; workflow and efficiency within the health care enterprise; diagnostic accuracy and a reduction in medical errors; and, ultimately, the overall quality of care.

  20. Image processing and 3D visualization in forensic pathologic examination

    NASA Astrophysics Data System (ADS)

    Oliver, William R.; Altschuler, Bruce R.

    1996-02-01

    The use of image processing is becoming increasingly important in the evaluation of violent crime. While much work has been done using these techniques for forensic purposes outside of forensic pathology, their use in the pathologic examination of wounding has been limited. We are investigating the use of image processing and three-dimensional visualization in the analysis of patterned injuries and tissue damage. While image processing will never replace classical understanding and interpretation of how injuries develop and evolve, it can be a useful tool for helping an observer notice features in an image, may help correlate surface injuries with deep tissue injuries, and can provide a metric for assessing how likely it is that a given object caused a given wound. We are also exploring methods of acquiring three-dimensional data for such measurements, which is the subject of a second paper.

  1. Atoms of recognition in human and computer vision.

    PubMed

    Ullman, Shimon; Assif, Liav; Fetaya, Ethan; Harari, Daniel

    2016-03-08

    Discovering the visual features and representations used by the brain to recognize objects is a central problem in the study of vision. Recently, neural network models of visual object recognition, including biological and deep network models, have shown remarkable progress and have begun to rival human performance in some challenging tasks. These models are trained on image examples and learn to extract features and representations and to use them for categorization. It remains unclear, however, whether the representations and learning processes discovered by current models are similar to those used by the human visual system. Here we show, by introducing and using minimal recognizable images, that the human visual system uses features and processes that are not used by current models and that are critical for recognition. We found by psychophysical studies that at the level of minimal recognizable images a minute change in the image can have a drastic effect on recognition, thus identifying features that are critical for the task. Simulations then showed that current models cannot explain this sensitivity to precise feature configurations and, more generally, do not learn to recognize minimal images at a human level. The role of the features shown here is revealed uniquely at the minimal level, where the contribution of each feature is essential. A full understanding of the learning and use of such features will extend our understanding of visual recognition and its cortical mechanisms and will enhance the capacity of computational models to learn from visual experience and to deal with recognition and detailed image interpretation.

  2. Geometry planning and image registration in magnetic particle imaging using bimodal fiducial markers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Werner, F., E-mail: f.werner@uke.de; Hofmann, M.; Them, K.

    Purpose: Magnetic particle imaging (MPI) is a quantitative imaging modality that allows the distribution of superparamagnetic nanoparticles to be visualized. Compared to other imaging techniques like x-ray radiography, computed tomography (CT), and magnetic resonance imaging (MRI), MPI only provides a signal from the administered tracer, but no additional morphological information, which complicates geometry planning and the interpretation of MP images. The purpose of the authors’ study was to develop bimodal fiducial markers that can be visualized by MPI and MRI in order to create MP–MR fusion images. Methods: A certain arrangement of three bimodal fiducial markers was developed and used in a combined MRI/MPI phantom and also during in vivo experiments in order to investigate its suitability for geometry planning and image fusion. An algorithm for automated marker extraction in both MR and MP images and rigid registration was established. Results: The developed bimodal fiducial markers can be visualized by MRI and MPI and allow for geometry planning as well as automated registration and fusion of MR–MP images. Conclusions: To date, exact positioning of the object to be imaged within the field of view (FOV) and the assignment of reconstructed MPI signals to corresponding morphological regions has been difficult. The developed bimodal fiducial markers and the automated image registration algorithm help to overcome these difficulties.

  3. Physics and psychophysics of color reproduction

    NASA Astrophysics Data System (ADS)

    Giorgianni, Edward J.

    1991-08-01

    The successful design of a color-imaging system requires knowledge of the factors used to produce and control color. This knowledge can be derived, in part, from measurements of the physical properties of the imaging system. Color itself, however, is a perceptual response and cannot be directly measured. Though the visual process begins with physics, as radiant energy reaching the eyes, it is in the mind of the observer that the stimuli produced from this radiant energy are interpreted and organized to form meaningful perceptions, including the perception of color. A comprehensive understanding of color reproduction, therefore, requires not only a knowledge of the physical properties of color-imaging systems but also an understanding of the physics, psychophysics, and psychology of the human observer. The human visual process is quite complex; in many ways the physical properties of color-imaging systems are easier to understand.

  4. Automated estimation of image quality for coronary computed tomographic angiography using machine learning.

    PubMed

    Nakanishi, Rine; Sankaran, Sethuraman; Grady, Leo; Malpeso, Jenifer; Yousfi, Razik; Osawa, Kazuhiro; Ceponiene, Indre; Nazarat, Negin; Rahmani, Sina; Kissel, Kendall; Jayawardena, Eranthi; Dailing, Christopher; Zarins, Christopher; Koo, Bon-Kwon; Min, James K; Taylor, Charles A; Budoff, Matthew J

    2018-03-23

    Our goal was to evaluate the efficacy of a fully automated method for assessing the image quality (IQ) of coronary computed tomography angiography (CCTA). The machine learning method was trained using 75 CCTA studies by mapping features (noise, contrast, misregistration scores, and un-interpretability index) to an IQ score based on manual ground truth data. The automated method was validated on a set of 50 CCTA studies and subsequently tested on a new set of 172 CCTA studies against visual IQ scores on a 5-point Likert scale. The area under the curve in the validation set was 0.96. In the 172 CCTA studies, our method yielded a Cohen's kappa statistic of 0.67 (p < 0.01) for the agreement between automated and visual IQ assessment. Among studies graded visually as good to excellent (n = 163), fair (n = 6), and poor (n = 3), 155, 5, and 2, respectively, received an automated IQ score > 50%. Fully automated assessment of the IQ of CCTA data sets by machine learning was reproducible and provided results similar to visual analysis within the limits of inter-operator variability. • The proposed method enables automated and reproducible image quality assessment. • Machine learning and visual assessments yielded comparable estimates of image quality. • Automated assessment potentially allows for more standardised image quality. • Image quality assessment enables standardisation of clinical trial results across different datasets.
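    Cohen's kappa, the agreement statistic quoted above, corrects raw rater agreement for agreement expected by chance. A minimal sketch, not the authors' implementation:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Chance-corrected agreement between two raters over the same items."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    ca, cb = Counter(a), Counter(b)
    # chance agreement from each rater's marginal label frequencies
    pe = sum(ca[k] * cb[k] for k in set(a) | set(b)) / n ** 2
    return (po - pe) / (1 - pe)
```

    Kappa is 1.0 for perfect agreement and 0.0 when agreement is no better than chance; the 0.67 reported above falls in the range conventionally read as substantial agreement.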

  5. A ganglion-cell-based primary image representation method and its contribution to object recognition

    NASA Astrophysics Data System (ADS)

    Wei, Hui; Dai, Zhi-Long; Zuo, Qing-Song

    2016-10-01

    A visual stimulus is represented by the biological visual system at several levels, from low to high: photoreceptor cells, ganglion cells (GCs), lateral geniculate nucleus cells, and visual cortical neurons. Retinal GCs at the early level need to represent raw data only once, yet must meet a wide range of diverse requests from different vision-based tasks. This means the information representation at this level is general and not task-specific. Neurobiological findings have attributed this universal adaptation to the GCs' receptive field (RF) mechanisms. To develop a highly efficient image representation method that can facilitate information processing and interpretation at later stages, we design a computational model to simulate the GC's non-classical RF. This new image representation method extracts major structural features from raw data and is consistent with other statistical measures of the image. Based on the new representation, the performance of other state-of-the-art algorithms in contour detection and segmentation can be upgraded remarkably. This work concludes that applying a sophisticated representation scheme at an early stage is an efficient and promising strategy in visual information processing.

  6. Picture Books Peek behind Cultural Curtains.

    ERIC Educational Resources Information Center

    Marantz, Sylvia; Marantz, Kenneth

    2000-01-01

    Discusses culture in picture books in three general categories: legends and histories; current life in particular areas; and the immigrant experience. Considers the translation of visual images, discusses authentic interpretations, and presents an annotated bibliography of picture books showing cultural diversity including African, Asian, Mexican,…

  7. The Aesthetics of Astrophysics: How to Make Appealing Color-composite Images that Convey the Science

    NASA Astrophysics Data System (ADS)

    Rector, Travis A.; Levay, Zoltan G.; Frattare, Lisa M.; Arcand, Kimberly K.; Watzke, Megan

    2017-05-01

    Astronomy has a rich tradition of using color photography and imaging, for visualization in research as well as for sharing scientific discoveries in formal and informal education settings (i.e., for “public outreach”). In the modern era, astronomical research has benefitted tremendously from electronic cameras that allow data and images to be generated and analyzed in a purely digital form with a level of precision that previously was not possible. Advances in image-processing software have also enabled color-composite images to be made in ways that are much more complex than with darkroom techniques, not only at optical wavelengths but across the electromagnetic spectrum. The Internet has made it possible to rapidly disseminate these images to eager audiences. Alongside these technological advances, there have been gains in understanding how to make images that are scientifically illustrative as well as aesthetically pleasing. Studies have also given insight into how the public interprets astronomical images and how those interpretations can differ from professional astronomers'. An understanding of these differences will help in the creation of images that are meaningful to both groups. In this invited review, we discuss the techniques behind making color-composite images as well as examine the factors one should consider when doing so, whether for data visualization or public consumption. We also provide a brief history of astronomical imaging with a focus on the origins of the "modern era" during which distribution of high-quality astronomical images to the public is a part of nearly every professional observatory's public outreach. We review relevant research into the expectations and misconceptions that often affect the public's interpretation of these images.

  8. BMC Ecology image competition: the winning images

    PubMed Central

    2013-01-01

    BMC Ecology announces the winning entries in its inaugural Ecology Image Competition, open to anyone affiliated with a research institute. The competition, which received more than 200 entries from international researchers at all career levels and a wide variety of scientific disciplines, was looking for striking visual interpretations of ecological processes. In this Editorial, our academic Section Editors and guest judge Dr Yan Wong explain what they found most appealing about their chosen winning entries, and highlight a few of the outstanding images that didn’t quite make it to the top prize. PMID:23517630

  9. BMC Ecology image competition: the winning images.

    PubMed

    Harold, Simon; Wong, Yan; Baguette, Michel; Bonsall, Michael B; Clobert, Jean; Royle, Nick J; Settele, Josef

    2013-03-22

    BMC Ecology announces the winning entries in its inaugural Ecology Image Competition, open to anyone affiliated with a research institute. The competition, which received more than 200 entries from international researchers at all career levels and a wide variety of scientific disciplines, was looking for striking visual interpretations of ecological processes. In this Editorial, our academic Section Editors and guest judge Dr Yan Wong explain what they found most appealing about their chosen winning entries, and highlight a few of the outstanding images that didn't quite make it to the top prize.

  10. Computer vision in cell biology.

    PubMed

    Danuser, Gaudenz

    2011-11-23

    Computer vision refers to the theory and implementation of artificial systems that extract information from images to understand their content. Although computers are widely used by cell biologists for visualization and measurement, interpretation of image content, i.e., the selection of events worth observing and the definition of what they mean in terms of cellular mechanisms, is mostly left to human intuition. This Essay attempts to outline roles computer vision may play and should play in image-based studies of cellular life. Copyright © 2011 Elsevier Inc. All rights reserved.

  11. Visualization Case Study: Eyjafjallajökull Ash (Invited)

    NASA Astrophysics Data System (ADS)

    Simmon, R.

    2010-12-01

    Although data visualization is a powerful tool in Earth science, the resulting imagery is often complex and difficult to interpret for non-experts. Students, journalists, web site visitors, or museum attendees often have difficulty understanding some of the imagery scientists create, particularly false-color imagery and data-driven maps. Many visualizations are designed for data exploration or peer communication, and often follow discipline conventions or are constrained by software defaults. Different techniques are necessary for communication with a broad audience. Data visualization combines ideas from cognitive science, graphic design, and cartography, and applies them to the challenge of presenting data clearly. Visualizers at NASA's Earth Observatory web site (earthobservatory.nasa.gov) use these techniques to craft remote sensing imagery for interested but non-expert readers. Images range from natural-color satellite images and multivariate maps to illustrations of abstract concepts. I will use imagery of the eruption of Iceland's Eyjafjallajökull volcano as a case study, showing specific applications of general design techniques. By using color carefully (including contextual data), precisely aligning disparate data sets, and highlighting important features, we crafted an image that clearly conveys the complex vertical and horizontal distribution of airborne ash.

  12. A novel iris transillumination grading scale allowing flexible assessment with quantitative image analysis and visual matching.

    PubMed

    Wang, Chen; Brancusi, Flavia; Valivullah, Zaheer M; Anderson, Michael G; Cunningham, Denise; Hedberg-Buenz, Adam; Power, Bradley; Simeonov, Dimitre; Gahl, William A; Zein, Wadih M; Adams, David R; Brooks, Brian

    2018-01-01

    To develop a sensitive scale of iris transillumination suitable for clinical and research use, with the capability of either quantitative analysis or visual matching of images. Iris transillumination photographic images were used from 70 study subjects with ocular or oculocutaneous albinism. Subjects represented a broad range of ocular pigmentation. A subset of images was subjected to image analysis and ranking by both expert and nonexpert reviewers. Quantitative ordering of images was compared with ordering by visual inspection. Images were binned to establish an 8-point scale. Ranking consistency was evaluated using the Kendall rank correlation coefficient (Kendall's tau). Visual ranking results were assessed using Kendall's coefficient of concordance (Kendall's W) analysis. There was a high degree of correlation among the image analysis, expert-based and non-expert-based image rankings. Pairwise comparisons of the quantitative ranking with each reviewer generated an average Kendall's tau of 0.83 ± 0.04 (SD). Inter-rater correlation was also high with Kendall's W of 0.96, 0.95, and 0.95 for nonexpert, expert, and all reviewers, respectively. The current standard for assessing iris transillumination is expert assessment of clinical exam findings. We adapted an image-analysis technique to generate quantitative transillumination values. Quantitative ranking was shown to be highly similar to a ranking produced by both expert and nonexpert reviewers. This finding suggests that the image characteristics used to quantify iris transillumination do not require expert interpretation. Inter-rater rankings were also highly similar, suggesting that varied methods of transillumination ranking are robust in terms of producing reproducible results.
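    Kendall's tau, the pairwise rank statistic used above, counts concordant minus discordant pairs of items. A tie-free sketch for illustration (the study's analysis presumably used a statistics package with tie handling):

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall rank correlation for two rankings without ties."""
    pairs = list(combinations(range(len(x)), 2))
    # +1 for each concordant pair (same order in both), -1 for discordant
    s = sum(1 if (x[i] - x[j]) * (y[i] - y[j]) > 0 else -1
            for i, j in pairs)
    return s / len(pairs)
```

    Identical orderings give 1.0 and fully reversed orderings give -1.0, so the average tau of 0.83 reported above indicates near-identical orderings between the quantitative analysis and the reviewers.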

  13. Recognition Alters the Spatial Pattern of fMRI Activation in Early Retinotopic Cortex

    PubMed Central

    Vul, E.; Kanwisher, N.

    2010-01-01

    Early retinotopic cortex has traditionally been viewed as containing a veridical representation of the low-level properties of the image, not imbued by high-level interpretation and meaning. Yet several recent results indicate that neural representations in early retinotopic cortex reflect not just the sensory properties of the image, but also the perceived size and brightness of image regions. Here we used functional magnetic resonance imaging pattern analyses to ask whether the representation of an object in early retinotopic cortex changes when the object is recognized compared with when the same stimulus is presented but not recognized. Our data confirmed this hypothesis: the pattern of response in early retinotopic visual cortex to a two-tone “Mooney” image of an object was more similar to the response to the full grayscale photo version of the same image when observers knew what the two-tone image represented than when they did not. Further, in a second experiment, high-level interpretations actually overrode bottom-up stimulus information, such that the pattern of response in early retinotopic cortex to an identified two-tone image was more similar to the response to the photographic version of that stimulus than it was to the response to the identical two-tone image when it was not identified. Our findings are consistent with prior results indicating that perceived size and brightness affect representations in early retinotopic visual cortex and, further, show that even higher-level information—knowledge of object identity—also affects the representation of an object in early retinotopic cortex. PMID:20071627

  14. Comparing object recognition from binary and bipolar edge images for visual prostheses

    PubMed Central

    Jung, Jae-Hyun; Pu, Tian; Peli, Eli

    2017-01-01

    Visual prostheses require an effective representation method due to their limited display conditions, which offer only 2 or 3 levels of grayscale at low resolution. Edges derived from abrupt luminance changes in images carry essential information for object recognition. Typical binary (black and white) edge images have been used to represent features and convey this essential information. However, in scenes with a complex cluttered background, the rate at which human observers recognize binary edge images is limited, and additional information is required. The polarity of edges and cusps (black or white features on a gray background) carries important additional information; polarity may provide the shape-from-shading information missing in a binary edge image. This depth information may be restored by using bipolar edges. We compared object recognition rates for 16 binary edge images and their bipolar counterparts across 26 subjects to determine the possible impact of bipolar filtering in visual prostheses with 3 or more levels of grayscale. Recognition rates were higher with bipolar edge images, and the improvement was significant in scenes with complex backgrounds. The results also suggest that erroneous shape-from-shading interpretation of bipolar edges arising from pigment rather than shape boundaries may confound recognition. PMID:28458481
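    A bipolar edge map in the sense described (black or white features on a gray background) can be approximated by sign-thresholding a discrete Laplacian; this is our illustrative stand-in, not the filter used in the study:

```python
import numpy as np

def bipolar_edges(img, thresh=1.0):
    """Three-level edge map: +1 where a pixel is brighter than its
    4-neighborhood, -1 where darker, 0 for background."""
    lap = np.zeros(img.shape, dtype=float)
    lap[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                       img[1:-1, :-2] + img[1:-1, 2:] -
                       4.0 * img[1:-1, 1:-1])
    out = np.zeros(img.shape, dtype=int)
    out[lap < -thresh] = 1   # bright (white) feature
    out[lap > thresh] = -1   # dark (black) feature
    return out
```

    Displaying +1 as white, -1 as black, and 0 as mid-gray uses exactly the 3 grayscale levels discussed above, whereas a binary edge map would collapse both polarities into one.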

  15. ART AND SCIENCE OF IMAGE MAPS.

    USGS Publications Warehouse

    Kidwell, Richard D.; McSweeney, Joseph A.

    1985-01-01

    The visual image of reflected light is influenced by the complex interplay of human color discrimination, spatial relationships, surface texture, and the spectral purity of light, dyes, and pigments. Scientific theories of image processing may not always achieve acceptable results because the many factors involved, some psychological, are partly unpredictable. The tonal relationships that affect digital image processing, and the transfer functions used to transform the continuous-tone source image into a lithographic image, can be examined for insight into where art and science fuse in the production process. The application of art and science in image map production at the U.S. Geological Survey is illustrated and discussed.

  16. Discussing Picturebooks across Perceptual, Structural and Ideological Perspectives

    ERIC Educational Resources Information Center

    Youngs, Suzette; Serafini, Frank

    2013-01-01

    Classroom discussions of multimodal texts, in particular historical fiction picturebooks, offer an interpretive space where readers are positioned to construct meanings in transaction with the written language, visual images, and design elements created by authors, illustrators and publishers (Serafini & Ladd, 2008; Sipe, 1999). This study was…

  17. Quantification of heterogeneity observed in medical images.

    PubMed

    Brooks, Frank J; Grigsby, Perry W

    2013-03-02

    There has been much recent interest in the quantification of visually evident heterogeneity within functional grayscale medical images, such as those obtained via magnetic resonance or positron emission tomography. In the case of images of cancerous tumors, variations in grayscale intensity imply variations in crucial tumor biology. Despite these considerable clinical implications, there is as yet no standardized method for measuring the heterogeneity observed via these imaging modalities. In this work, we motivate and derive a statistical measure of image heterogeneity. This statistic measures the distance-dependent average deviation from the smoothest intensity gradation feasible. We show how this statistic may be used to automatically rank images of in vivo human tumors in order of increasing heterogeneity. We test this method against the current practice of ranking images via expert visual inspection. We find that this statistic provides a means of heterogeneity quantification beyond that given by other statistics traditionally used for the same purpose. We demonstrate the effect of tumor shape upon our ranking method and find the method applicable to a wide variety of clinically relevant tumor images. We find that the automated heterogeneity rankings agree very closely with those performed visually by experts. These results indicate that our automated method may be used reliably to rank, in order of increasing heterogeneity, tumor images whether or not object shape is considered to contribute to that heterogeneity. Automated heterogeneity ranking yields objective results which are more consistent than visual rankings. Reducing variability in image interpretation will enable more researchers to better study potential clinical implications of observed tumor heterogeneity.
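    The paper's statistic (the distance-dependent average deviation from the smoothest feasible gradation) is not fully specified in the abstract. A standard distance-dependent heterogeneity measure in the same spirit is the empirical variogram, sketched here for horizontal lags only, as an illustration rather than the authors' method:

```python
import numpy as np

def empirical_variogram(img, max_lag):
    """Half the mean squared intensity difference at each horizontal lag."""
    return [float(np.mean((img[:, h:] - img[:, :-h]) ** 2)) / 2.0
            for h in range(1, max_lag + 1)]
```

    A perfectly uniform image scores zero at every lag; the faster the curve rises with distance, the more heterogeneous the image, which is the kind of ordering the automated ranking above exploits.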

  18. Blackboard architecture for medical image interpretation

    NASA Astrophysics Data System (ADS)

    Davis, Darryl N.; Taylor, Christopher J.

    1991-06-01

    There is a growing interest in using sophisticated knowledge-based systems for biomedical image interpretation. We present a principled attempt to use artificial intelligence methodologies in interpreting lateral skull x-ray images. Such radiographs are routinely used in cephalometric analysis to provide quantitative measurements useful to clinical orthodontists. Manual and interactive methods of analysis are known to be error prone and previous attempts to automate this analysis typically fail to capture the expertise and adaptability required to cope with the variability in biological structure and image quality. An integrated model-based system has been developed which makes use of a blackboard architecture and multiple knowledge sources. A model definition interface allows quantitative models, of feature appearance and location, to be built from examples as well as more qualitative modelling constructs. Visual task definition and blackboard control modules allow task-specific knowledge sources to act on information available to the blackboard in a hypothesise and test reasoning cycle. Further knowledge-based modules include object selection, location hypothesis, intelligent segmentation, and constraint propagation systems. Alternative solutions to given tasks are permitted.

  19. Linking DICOM pixel data with radiology reports using automatic semantic annotation

    NASA Astrophysics Data System (ADS)

    Pathak, Sayan D.; Kim, Woojin; Munasinghe, Indeera; Criminisi, Antonio; White, Steve; Siddiqui, Khan

    2012-02-01

    Improved access to DICOM studies by both physicians and patients is changing the ways medical imaging studies are visualized and interpreted beyond the confines of radiologists' PACS workstations. While radiologists are trained in viewing and image interpretation, a non-radiologist physician relies on the radiologists' reports. Consequently, patients historically have been informed about their imaging findings via oral communication with their physicians, even though clinical studies have shown that patients respond to a physician's advice significantly better when shown their own actual data. Our previous work on automated semantic annotation of DICOM Computed Tomography (CT) images allows us to link the radiology report with the corresponding images, bridging the gap between image data and the human-interpreted textual description of the corresponding imaging studies. The mapping of radiology text is facilitated by a natural language processing (NLP) based search application. When combined with our automated semantic annotation of images, it enables navigation of large DICOM studies by clicking hyperlinked text in the radiology reports. An added advantage of semantic annotation is the ability to render organs at their default window-level settings, eliminating another barrier to image sharing and distribution. We believe such approaches would potentially enable consumers to access their imaging data and navigate it in an informed manner.

  20. eCTG: an automatic procedure to extract digital cardiotocographic signals from digital images.

    PubMed

    Sbrollini, Agnese; Agostinelli, Angela; Marcantoni, Ilaria; Morettini, Micaela; Burattini, Luca; Di Nardo, Francesco; Fioretti, Sandro; Burattini, Laura

    2018-03-01

    Cardiotocography (CTG), consisting of the simultaneous recording of fetal heart rate (FHR) and maternal uterine contractions (UC), is a popular clinical test to assess fetal health status. Typically, CTG machines provide paper reports that are visually interpreted by clinicians. Consequently, visual CTG interpretation depends on the clinician's experience and has poor reproducibility. The lack of databases containing digital CTG signals has limited the number and importance of retrospective studies aimed at establishing procedures for automatic CTG analysis that could counter the subjectivity of visual CTG interpretation. To help overcome this problem, this study proposes an electronic procedure, termed eCTG, to extract digital CTG signals from digital CTG images, obtainable for example by scanning paper CTG reports. eCTG includes four main steps: pre-processing, Otsu's global thresholding, signal extraction, and signal calibration. It was validated by means of the "CTU-UHB Intrapartum Cardiotocography Database" from Physionet, which contains digital signals of 552 CTG recordings. Using MATLAB, each signal was plotted and saved as a digital image that was then submitted to eCTG. Digital CTG signals extracted by eCTG were then compared to the corresponding signals directly available in the database. Comparison occurred in terms of signal similarity (evaluated by the correlation coefficient ρ and the mean signal error MSE) and clinical features (including FHR baseline and variability; number, amplitude, and duration of tachycardia, bradycardia, acceleration, and deceleration episodes; number of early, variable, late, and prolonged decelerations; and UC number, amplitude, duration, and period). The value of ρ between eCTG and reference signals was 0.85 (P < 10^-560) for FHR and 0.97 (P < 10^-560) for UC. On average, the MSE was 0.00 for both FHR and UC. No CTG feature was found significantly different when measured in eCTG vs. reference signals. The eCTG procedure is a promising tool to accurately extract digital FHR and UC signals from digital CTG images. Copyright © 2018 Elsevier B.V. All rights reserved.
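    Otsu's global thresholding, the binarization step named in the eCTG pipeline, picks the gray level that maximizes between-class variance. A minimal sketch for 8-bit images (the published procedure wraps this step in pre-processing and calibration):

```python
import numpy as np

def otsu_threshold(gray):
    """Return the 8-bit threshold maximizing between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                # class-0 probability per threshold
    mu = np.cumsum(p * np.arange(256))  # cumulative mean
    mu_t = mu[-1]                       # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.argmax(np.nan_to_num(sigma_b)))
```

    Pixels at or below the returned level form one class (e.g., the plotted trace) and the rest form the background, separating ink from paper in a scanned report.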

  1. Quantifying the effect of colorization enhancement on mammogram images

    NASA Astrophysics Data System (ADS)

    Wojnicki, Paul J.; Uyeda, Elizabeth; Micheli-Tzanakou, Evangelia

    2002-04-01

    Current methods of radiological display provide only grayscale images of mammograms. Limiting the image space to grayscale leaves only luminance differences and textures as cues for object recognition within the image. However, color can be an important and significant cue in the detection of shapes and objects. Increased detection ability allows the radiologist to interpret images in more detail, improving object recognition and diagnostic accuracy. Color detection experiments using our stimulus system have demonstrated that an observer can detect only an average of 140 levels of grayscale. An optimally colorized image can allow a user to distinguish 250-1000 different levels, hence increasing potential image feature detection by 2-7 times. By implementing a colorization map that follows the luminance map of the original grayscale image, the luminance profile is preserved and color is isolated as the enhancement mechanism. The effects of this enhancement mechanism on the shape, frequency composition and statistical characteristics of the Visual Evoked Potential (VEP) are analyzed and presented. Thus, the effectiveness of the image colorization is measured quantitatively using the VEP.
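    A colorization map that follows the luminance of the original grayscale image can be sketched as below; the blue-green-red hue ramp is an illustrative choice, not the authors' map. Each candidate color is darkened or blended towards white so its Rec. 601 luminance exactly equals the input gray level:

```python
import numpy as np

REC601 = np.array([0.299, 0.587, 0.114])   # Rec.601 luminance weights

def colorize_preserve_luminance(gray):
    """Map gray levels in [0, 1] to RGB whose Rec.601 luminance equals
    the input level: a hue candidate is darkened (g <= L) or blended
    towards white (g > L), preserving the original luminance profile."""
    g = np.clip(np.asarray(gray, dtype=float), 0.0, 1.0)
    # illustrative hue ramp: blue -> green -> red as intensity rises
    rgb = np.stack([np.clip(2 * g - 1, 0, 1),
                    1 - np.abs(2 * g - 1),
                    np.clip(1 - 2 * g, 0, 1)], axis=-1)
    L = rgb @ REC601                           # candidate luminance (never 0 or 1)
    lo = rgb * (g / L)[..., None]              # darken towards black
    hi = rgb + ((g - L) / (1 - L))[..., None] * (1 - rgb)  # blend towards white
    return np.where((g <= L)[..., None], lo, hi)

gray = np.linspace(0, 1, 11)
rgb = colorize_preserve_luminance(gray)
print(float(np.abs(rgb @ REC601 - gray).max()))   # luminance error ~ 0
```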

  2. The role of the right hemisphere in form perception and visual gnosis organization.

    PubMed

    Belyi, B I

    1988-06-01

    Peculiarities of serial picture interpretation and Rorschach test results in patients with unilateral benign hemispheric tumours are discussed. It is concluded that visual perception in the right hemisphere has a hierarchical structure, with each successive area from the occipital lobe towards the frontal lobe serving a more complex function. Visual engrams are distributed over the right hemisphere in a manner similar to the way visual information is recorded in holographic systems. With any impairment of the right hemisphere, a tendency towards whole but unclear vision arises. The preservation of lower levels of visual perception provides clear vision of only small parts of the image. Thus confabulatory phenomena arise, which are specific to right-hemispheric lesions.

  3. Backward Registration Based Aspect Ratio Similarity (ARS) for Image Retargeting Quality Assessment.

    PubMed

    Zhang, Yabin; Fang, Yuming; Lin, Weisi; Zhang, Xinfeng; Li, Leida

    2016-06-28

    During the past few years, various content-aware image retargeting operators have been proposed for image resizing. However, the lack of effective objective retargeting quality assessment metrics limits the further development of image retargeting techniques. Different from traditional Image Quality Assessment (IQA), the quality degradation during image retargeting is caused by artificial retargeting modifications, and the difficulty for Image Retargeting Quality Assessment (IRQA) lies in the alteration of the image resolution and content, which makes it impossible to evaluate quality degradation directly as in traditional IQA. In this paper, we interpret image retargeting in a unified framework of resampling grid generation and forward resampling. We show that geometric change estimation is an efficient way to clarify the relationship between the images. We formulate the geometric change estimation as a backward registration problem with a Markov Random Field (MRF) and provide an effective solution. The geometric change provides evidence about how the original image is resized into the target image. Under the guidance of the geometric change, we develop a novel Aspect Ratio Similarity (ARS) metric to evaluate the visual quality of retargeted images by exploiting local block changes with a visual importance pooling strategy. Experimental results on the publicly available MIT RetargetMe and CUHK datasets demonstrate that the proposed ARS predicts the visual quality of retargeted images more accurately than state-of-the-art IRQA metrics.

  4. Informatics in radiology (infoRAD): multimedia extension of medical imaging resource center teaching files.

    PubMed

    Yang, Guo Liang; Aziz, Aamer; Narayanaswami, Banukumar; Anand, Ananthasubramaniam; Lim, C C Tchoyoson; Nowinski, Wieslaw Lucjan

    2005-01-01

    A new method has been developed for multimedia enhancement of electronic teaching files created by using the standard protocols and formats offered by the Medical Imaging Resource Center (MIRC) project of the Radiological Society of North America. The typical MIRC electronic teaching file consists of static pages only; with the new method, audio and visual content may be added to the MIRC electronic teaching file so that the entire image interpretation process can be recorded for teaching purposes. With an efficient system for encoding the audiovisual record of on-screen manipulation of radiologic images, the multimedia teaching files generated are small enough to be transmitted via the Internet with acceptable resolution. Students may respond with the addition of new audio and visual content and thereby participate in a discussion about a particular case. MIRC electronic teaching files with multimedia enhancement have the potential to augment the effectiveness of diagnostic radiology teaching. RSNA, 2005.

  5. Tele-transmission of stereoscopic images of the optic nerve head in glaucoma via Internet.

    PubMed

    Bergua, Antonio; Mardin, Christian Y; Horn, Folkert K

    2009-06-01

    The objective was to describe an inexpensive system to visualize stereoscopic photographs of the optic nerve head on computer displays and to transmit such images via the Internet for collaborative research or remote clinical diagnosis in glaucoma. Stereoscopic images of glaucoma patients were digitized and stored in a file format (joint photographic stereoimage [jps]) containing all three-dimensional information for both eyes on an Internet Web site (www.trizax.com). The size of the jps files was between 0.4 and 1.4 MB (corresponding to a diagonal stereo image size between 900 and 1400 pixels), suitable for Internet protocols. A conventional personal computer equipped with wireless stereoscopic LCD shutter glasses and a CRT monitor with a high refresh rate (120 Hz) can be used to obtain flicker-free stereo visualization of true-color images with high resolution. Modern thin-film-transistor LCD displays in combination with inexpensive red-cyan goggles achieve stereoscopic visualization with the same resolution but reduced color quality and contrast. The primary aim of our study, to transmit stereoscopic images via the Internet, was met. Additionally, we found that with both stereoscopic visualization techniques, cup depth, neuroretinal rim shape, and the slope of the inner wall of the optic nerve head can be qualitatively better perceived and interpreted than with monoscopic images. This study demonstrates high-quality, low-cost Internet transmission of stereoscopic images of the optic nerve head from glaucoma patients. The technique allows the exchange of stereoscopic images and can be applied to tele-diagnostics and glaucoma research.
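    The red-cyan goggle technique mentioned above combines the two views into a single image by channel selection: the red channel comes from the left view, green and blue from the right. A minimal sketch of that compositing step (the jps file handling itself is omitted):

```python
import numpy as np

def red_cyan_anaglyph(left, right):
    """Compose a red-cyan anaglyph from a stereo pair of RGB images:
    red channel <- left view; green and blue channels <- right view.
    Through red-cyan goggles, each eye then sees only its own view."""
    out = np.empty_like(left)
    out[..., 0] = left[..., 0]
    out[..., 1] = right[..., 1]
    out[..., 2] = right[..., 2]
    return out

# tiny synthetic stereo pair
left = np.zeros((4, 4, 3)); left[..., 0] = 0.8
right = np.zeros((4, 4, 3)); right[..., 1] = 0.5; right[..., 2] = 0.5
ana = red_cyan_anaglyph(left, right)
print(ana[0, 0])   # -> [0.8 0.5 0.5]
```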

  6. Application of MSS/LANDSAT images to the structural study of recent sedimentary areas: Campos Sedimentary Basin, Rio de Janeiro, Brazil

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Barbosa, M. P.

    1983-01-01

    Visual and computer-aided interpretation of MSS/LANDSAT data identified linear and circular features which represent the "reflexes" of the crystalline basement structures in the Cenozoic sediments of the emergent part of the Campos Sedimentary Basin.

  7. Quantitative Architectural Analysis: A New Approach to Cortical Mapping

    ERIC Educational Resources Information Center

    Schleicher, Axel; Morosan, Patricia; Amunts, Katrin; Zilles, Karl

    2009-01-01

    Results from functional imaging studies are often still interpreted using the classical architectonic brain maps of Brodmann and his successors. One obvious weakness in traditional, architectural mapping is the subjective nature of localizing borders between cortical areas by means of a purely visual, microscopical examination of histological…

  8. Lots of Fun, Not Much Work, and No Hassles: Marketing Images of Higher Education.

    ERIC Educational Resources Information Center

    Klassen, Michael L.

    2000-01-01

    Content analyzed the visual material of college viewbooks from top- and lower-ranked U.S. colleges and universities. Drawing on advertising message strategy, the results are interpreted in four parts: the "face" of the organization, the package, the promise, and the "Big Idea." (EV)

  9. Think Spatial: The Representation in Mental Rotation Is Nonvisual

    ERIC Educational Resources Information Center

    Liesefeld, Heinrich R.; Zimmer, Hubert D.

    2013-01-01

    For mental rotation, introspection, theories, and interpretations of experimental results imply a certain type of mental representation, namely, visual mental images. Characteristics of the rotated representation can be examined by measuring the influence of stimulus characteristics on rotational speed. If the amount of a given type of information…

  10. A Pictorial Visualization of Normal Mode Vibrations of the Fullerene (C[subscript 60]) Molecule in Terms of Vibrations of a Hollow Sphere

    ERIC Educational Resources Information Center

    Dunn, Janette L.

    2010-01-01

    Understanding the normal mode vibrations of a molecule is important in the analysis of vibrational spectra. However, the complicated 3D motion of large molecules can be difficult to interpret. We show how images of normal modes of the fullerene molecule C[subscript 60] can be made easier to understand by superimposing them on images of the normal…

  11. Interactive displays in medical art

    NASA Technical Reports Server (NTRS)

    Mcconathy, Deirdre Alla; Doyle, Michael

    1989-01-01

    Medical illustration is a field of visual communication with a long history. Traditional medical illustrations are static, 2-D, printed images; highly realistic depictions of the gross morphology of anatomical structures. Today medicine requires the visualization of structures and processes that have never before been seen. Complex 3-D spatial relationships require interpretation from 2-D diagnostic imagery. Pictures that move in real time have become clinical and research tools for physicians. Medical illustrators are involved with the development of interactive visual displays for three different, but not discrete, functions: as educational materials, as clinical and research tools, and as databases of standard imagery used to produce visuals. The production of interactive displays in the medical arts is examined.

  12. No-reference quality assessment based on visual perception

    NASA Astrophysics Data System (ADS)

    Li, Junshan; Yang, Yawei; Hu, Shuangyan; Zhang, Jiao

    2014-11-01

    The visual quality assessment of images/videos is an ongoing hot research topic, which has become increasingly important for numerous image and video processing applications with the rapid development of digital imaging and communication technologies. The goal of image quality assessment (IQA) algorithms is to automatically assess the quality of images/videos in agreement with human quality judgments. Up to now, two kinds of models have been used for IQA, namely full-reference (FR) and no-reference (NR) models. FR models interpret image quality as fidelity or similarity to a perfect image in some perceptual space. However, the reference image is not available in many practical applications, so an NR IQA approach is desired. Considering natural vision as optimized by millions of years of evolutionary pressure, many methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychological features of the human visual system (HVS). To reach this goal, researchers try to simulate the HVS with image sparse coding and supervised machine learning: a typical HVS captures scenes by sparse coding, and uses experienced knowledge to perceive objects. In this paper, we propose a novel IQA approach based on visual perception. Firstly, a standard model of the HVS is studied and analyzed, and the sparse representation of the image is computed with the model; then, the mapping between sparse codes and subjective quality scores is trained with least squares support vector machine (LS-SVM) regression, yielding a regressor that can predict image quality; finally, the visual quality metric of an image is predicted with the trained regressor.
We validate the performance of the proposed approach on the Laboratory for Image and Video Engineering (LIVE) database, which contains the following distortion types: 227 JPEG2000 images, 233 JPEG images, 174 White Noise images, 174 Gaussian Blur images and 174 Fast Fading images. The database includes a subjective differential mean opinion score (DMOS) for each image. The experimental results show that the proposed approach not only can assess many kinds of distorted images, but also exhibits superior accuracy and monotonicity.

  13. Criterion for Identifying Vortices in High-Pressure Flows

    NASA Technical Reports Server (NTRS)

    Bellan, Josette; Okong'o, Nora

    2007-01-01

    A study of four previously published computational criteria for identifying vortices in high-pressure flows has led to the selection of one of them as the best. This development can be expected to contribute to understanding of high-pressure flows, which occur in diverse settings, including diesel, gas turbine, and rocket engines and the atmospheres of Jupiter and other large gaseous planets. Information on the atmospheres of gaseous planets consists mainly of visual and thermal images of the flows over the planets. Also, validation of recently proposed computational models of high-pressure flows entails comparison with measurements, which are mainly of visual nature. Heretofore, the interpretation of images of high-pressure flows to identify vortices has been based on experience with low-pressure flows. However, high-pressure flows have features distinct from those of low-pressure flows, particularly in regions of high pressure gradient magnitude caused by dynamic turbulent effects and by thermodynamic mixing of chemical species. Therefore, interpretations based on low-pressure behavior may lead to misidentification of vortices and other flow structures in high-pressure flows. The study reported here was performed in recognition of the need for one or more quantitative criteria for identifying coherent flow structures - especially vortices - from previously generated flow-field data, to complement or supersede the determination of flow structures by visual inspection of instantaneous fields or flow animations. The focus in the study was on correlating visible images of flow features with various quantities computed from flow-field data.

  14. Mobile phone imaging and cloud-based analysis for standardized malaria detection and reporting.

    PubMed

    Scherr, Thomas F; Gupta, Sparsh; Wright, David W; Haselton, Frederick R

    2016-06-27

    Rapid diagnostic tests (RDTs) have been widely deployed in low-resource settings. These tests are typically read by visual inspection, and accurate record keeping and data aggregation remain a substantial challenge. A successful malaria elimination campaign will require new strategies that maximize the sensitivity of RDTs, reduce user error, and integrate results reporting tools. In this report, an unmodified mobile phone was used to photograph RDTs, which were subsequently uploaded into a globally accessible database, REDCap, and then analyzed three ways: with an automated image processing program, by visual inspection, and with a commercial lateral flow reader. The mobile phone image processing detected 20.6 malaria parasites/microliter of blood, compared to the commercial lateral flow reader, which detected 64.4 parasites/microliter. Experienced observers visually identified positive malaria cases at 12.5 parasites/microliter, but encountered reporting errors and false negatives. Visual interpretation by inexperienced users resulted in only an 80.2% true negative rate, with substantial disagreement in the lower parasitemia range. We have demonstrated that combining a globally accessible database, such as REDCap, with mobile-phone-based imaging of RDTs provides objective, secure, automated data collection and result reporting. This simple combination of existing technologies would appear to be an attractive tool for malaria elimination campaigns.

  15. Arnheim's Gestalt theory of visual balance: Examining the compositional structure of art photographs and abstract images

    PubMed Central

    McManus, I C; Stöver, Katharina; Kim, Do

    2011-01-01

    In Art and Visual Perception, Rudolf Arnheim, following on from Denman Ross's A Theory of Pure Design, proposed a Gestalt theory of visual composition. The current paper assesses a physicalist interpretation of Arnheim's theory, calculating an image's centre of mass (CoM). Three types of data are used: a large, representative collection of art photographs of recognised quality; croppings by experts and non-experts of photographs; and Ross and Arnheim's procedure of placing a frame around objects such as Arnheim's two black disks. Compared with control images, the CoM of art photographs was closer to an axis (horizontal, vertical, or diagonal), as was the case for photographic croppings. However, stronger, within-image, paired comparison studies, comparing art photographs with the CoM moved on or off an axis (the ‘gamma-ramp study’), or comparing adjacent croppings on or off an axis (the ‘spider-web study’), showed no support for the Arnheim–Ross theory. Finally, studies moving a frame around two disks, of different size, greyness, or background, did not support Arnheim's Gestalt theory. Although the detailed results did not support the Arnheim–Ross theory, several significant results were found which clearly require explanation by any adequate theory of the aesthetics of visual composition. PMID:23145250

  16. Arnheim's Gestalt theory of visual balance: Examining the compositional structure of art photographs and abstract images.

    PubMed

    McManus, I C; Stöver, Katharina; Kim, Do

    2011-01-01

    In Art and Visual Perception, Rudolf Arnheim, following on from Denman Ross's A Theory of Pure Design, proposed a Gestalt theory of visual composition. The current paper assesses a physicalist interpretation of Arnheim's theory, calculating an image's centre of mass (CoM). Three types of data are used: a large, representative collection of art photographs of recognised quality; croppings by experts and non-experts of photographs; and Ross and Arnheim's procedure of placing a frame around objects such as Arnheim's two black disks. Compared with control images, the CoM of art photographs was closer to an axis (horizontal, vertical, or diagonal), as was the case for photographic croppings. However, stronger, within-image, paired comparison studies, comparing art photographs with the CoM moved on or off an axis (the 'gamma-ramp study'), or comparing adjacent croppings on or off an axis (the 'spider-web study'), showed no support for the Arnheim-Ross theory. Finally, studies moving a frame around two disks, of different size, greyness, or background, did not support Arnheim's Gestalt theory. Although the detailed results did not support the Arnheim-Ross theory, several significant results were found which clearly require explanation by any adequate theory of the aesthetics of visual composition.
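    The physicalist reading of Arnheim's theory tested above reduces to computing an image's centre of mass and its distance to the frame's axes. A minimal sketch of that computation (the exact weighting used by the authors may differ), with coordinates normalised so the frame centre is the origin:

```python
import numpy as np

def centre_of_mass(img):
    """Centre of mass of pixel intensity, in coordinates normalised
    to [-1, 1] with the frame centre at (0, 0)."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    total = img.sum()
    cy = (ys * img).sum() / total
    cx = (xs * img).sum() / total
    ny = 2 * cy / (img.shape[0] - 1) - 1
    nx = 2 * cx / (img.shape[1] - 1) - 1
    return nx, ny

def min_axis_distance(nx, ny):
    """Distance from the CoM to the nearest compositional axis:
    horizontal, vertical, or either diagonal of the frame."""
    return min(abs(ny), abs(nx),
               abs(ny - nx) / np.sqrt(2.0), abs(ny + nx) / np.sqrt(2.0))

# mass concentrated on the horizontal midline -> CoM on an axis
img = np.zeros((101, 101))
img[50, :] = 1.0
nx, ny = centre_of_mass(img)
print(nx, ny, min_axis_distance(nx, ny))   # CoM at the centre, distance 0
```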

  17. Mobile phone imaging and cloud-based analysis for standardized malaria detection and reporting

    NASA Astrophysics Data System (ADS)

    Scherr, Thomas F.; Gupta, Sparsh; Wright, David W.; Haselton, Frederick R.

    2016-06-01

    Rapid diagnostic tests (RDTs) have been widely deployed in low-resource settings. These tests are typically read by visual inspection, and accurate record keeping and data aggregation remain a substantial challenge. A successful malaria elimination campaign will require new strategies that maximize the sensitivity of RDTs, reduce user error, and integrate results reporting tools. In this report, an unmodified mobile phone was used to photograph RDTs, which were subsequently uploaded into a globally accessible database, REDCap, and then analyzed three ways: with an automated image processing program, by visual inspection, and with a commercial lateral flow reader. The mobile phone image processing detected 20.6 malaria parasites/microliter of blood, compared to the commercial lateral flow reader, which detected 64.4 parasites/microliter. Experienced observers visually identified positive malaria cases at 12.5 parasites/microliter, but encountered reporting errors and false negatives. Visual interpretation by inexperienced users resulted in only an 80.2% true negative rate, with substantial disagreement in the lower parasitemia range. We have demonstrated that combining a globally accessible database, such as REDCap, with mobile-phone-based imaging of RDTs provides objective, secure, automated data collection and result reporting. This simple combination of existing technologies would appear to be an attractive tool for malaria elimination campaigns.

  18. A dual-channel fusion system of visual and infrared images based on color transfer

    NASA Astrophysics Data System (ADS)

    Pei, Chuang; Jiang, Xiao-yu; Zhang, Peng-wei; Liang, Hao-cong

    2013-09-01

    The increasing availability and deployment of imaging sensors operating in multiple spectra has led to a large research effort in image fusion, resulting in a plethora of pixel-level image fusion algorithms. However, most of these algorithms produce gray or false-color fusion results that are not adapted to human vision. Transferring color from a day-time reference image to obtain a natural-color fusion result is an effective way to solve this problem, but the computational cost of color transfer is high and cannot meet the requirements of real-time image processing. We developed a dual-channel infrared and visual image fusion system based on the TMS320DM642 digital signal processing chip. The system is divided into an image acquisition and registration unit, an image fusion processing unit, a system control unit and an image fusion output unit. The registration of the dual-channel images is realized by combining hardware and software methods. A false-color image fusion algorithm in RGB color space is used to obtain an R-G fused image; the system then chooses a reference image and transfers its color to the fusion result. A color lookup table based on statistical properties of the images is proposed to solve the computational complexity problem in color transfer. The mapping between the standard lookup table and the improved color lookup table is simple and needs to be computed only once for a fixed scene. Real-time fusion and natural colorization of infrared and visual images are realized by this system. The experimental results show that the color-transferred images have a natural color perception to human eyes, and can highlight targets effectively with clear background details. Human observers using this system will be able to interpret the image better and faster, thereby improving situational awareness and reducing target detection time.
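    A statistics-based color lookup table of the kind described above can be sketched per channel as a global mean/standard-deviation match against the reference image (Reinhard-style). The reference statistics below are illustrative values, not taken from the paper; once built, applying the LUT per frame is a single table lookup:

```python
import numpy as np

def colour_transfer_lut(src_channel, ref_mean, ref_std):
    """Build a 256-entry lookup table that shifts a channel's global
    mean/std to those of a reference image. Computed once per fixed
    scene, so per-frame colourisation reduces to a table lookup."""
    levels = np.arange(256, dtype=float)
    s_mean, s_std = src_channel.mean(), src_channel.std()
    lut = (levels - s_mean) * (ref_std / max(s_std, 1e-6)) + ref_mean
    return np.clip(lut, 0, 255).astype(np.uint8)

# synthetic source channel with mean ~60, std ~10; illustrative reference stats
rng = np.random.default_rng(1)
src = rng.normal(60, 10, (64, 64)).clip(0, 255)
lut = colour_transfer_lut(src, ref_mean=120.0, ref_std=20.0)
out = lut[src.astype(np.uint8)]
print(out.mean(), out.std())   # near the reference statistics
```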

  19. Developing visual images for communicating information about antiretroviral side effects to a low-literate population.

    PubMed

    Dowse, Ros; Ramela, Thato; Barford, Kirsty-Lee; Browne, Sara

    2010-09-01

    The side effects of antiretroviral (ARV) therapy are linked to altered quality of life and adherence. Poor adherence has also been associated with low health-literacy skills, with an uninformed patient more likely to make ARV-related decisions that compromise the efficacy of the treatment. Low literacy skills disempower patients in interactions with healthcare providers and preclude the use of existing written patient information materials, which are generally written at a high reading level. Visual images or pictograms used as a counselling tool or included in patient information leaflets have been shown to improve patients' knowledge, particularly in low-literate groups. The objective of this study was to design visuals or pictograms illustrating various ARV side effects and to evaluate them in a low-literate South African Xhosa population. Core images were generated either from a design workshop or from posed photos or images from textbooks. The research team worked closely with a graphic artist. Initial versions of the images were discussed and assessed in group discussions, and then modified and eventually evaluated quantitatively in individual interviews with 40 participants who each had a maximum of 10 years of schooling. The familiarity of the human body, its facial expressions, postures and actions contextualised the information and contributed to the participants' understanding. Visuals that were simple, had a clear central focus and reflected familiar body experiences (e.g. vomiting) were highly successful. The introduction of abstract elements (e.g. fever) and metaphorical images (e.g. nightmares) presented problems for interpretation, particularly to those with the lowest educational levels. 
We recommend that such visual images should be designed in collaboration with the target population and a graphic artist, taking cognisance of the audience's literacy skills and culture, and should employ a multistage iterative process of modification and evaluation.

  20. Automated reference-free detection of motion artifacts in magnetic resonance images.

    PubMed

    Küstner, Thomas; Liebgott, Annika; Mauch, Lukas; Martirosian, Petros; Bamberg, Fabian; Nikolaou, Konstantin; Yang, Bin; Schick, Fritz; Gatidis, Sergios

    2018-04-01

    Our objectives were to provide an automated method for spatially resolved detection and quantification of motion artifacts in MR images of the head and abdomen as well as a quality control of the trained architecture. T1-weighted MR images of the head and the upper abdomen were acquired in 16 healthy volunteers under rest and under motion. Images were divided into overlapping patches of different sizes achieving spatial separation. Using these patches as input data, a convolutional neural network (CNN) was trained to derive probability maps for the presence of motion artifacts. A deep visualization offers a human-interpretable quality control of the trained CNN. Results were visually assessed on probability maps and as classification accuracy on a per-patch, per-slice and per-volunteer basis. On visual assessment, a clear difference of probability maps was observed between data sets with and without motion. The overall accuracy of motion detection on a per-patch/per-volunteer basis reached 97%/100% in the head and 75%/100% in the abdomen, respectively. Automated detection of motion artifacts in MRI is feasible with good accuracy in the head and abdomen. The proposed method provides quantification and localization of artifacts as well as a visualization of the learned content. It may be extended to other anatomic areas and used for quality assurance of MR images.
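    The overlapping-patch division described above, which gives the CNN spatially resolved inputs, can be sketched as follows; the patch size and stride are illustrative choices, not those of the paper:

```python
import numpy as np

def extract_patches(img, size, stride):
    """Divide a 2-D image into overlapping square patches (stride < size
    gives overlap), returning the patch stack and each patch's top-left
    corner so per-patch predictions can be mapped back onto the image."""
    patches, coords = [], []
    for y in range(0, img.shape[0] - size + 1, stride):
        for x in range(0, img.shape[1] - size + 1, stride):
            patches.append(img[y:y + size, x:x + size])
            coords.append((y, x))
    return np.stack(patches), coords

img = np.arange(64 * 64, dtype=float).reshape(64, 64)
patches, coords = extract_patches(img, size=32, stride=16)
print(patches.shape)   # (9, 32, 32): 3 positions per axis
```

    Per-patch class probabilities accumulated at these coordinates yield the kind of probability map the abstract describes.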

  1. Visual effects of the first ladies’ Kebaya clothing on the image of Indonesian women’s appearances

    NASA Astrophysics Data System (ADS)

    Suciati

    2016-04-01

    The image of Indonesian women at the international level is partly influenced by the appearance of the First Lady. The role and position of the First Lady is representative of Indonesian women, because the First Lady, as the wife who accompanies the President (head of state), has a strong cultural background, high intellect and good personality in her daily lifestyle, including in how she dresses, and acts as an ambassador of culture and design. The fashion style of the First Lady always draws praise and criticism from the public. The purpose of this study is to reveal the visual effects of the Indonesian First Ladies' kebaya clothing style on various state occasions on the image of Indonesian women's appearance. This is a qualitative study of visual data that emphasizes the discussion of Kebaya clothing using a semiological analysis (connotative and denotative meaning) of self-image. The results showed that the style of the First Ladies' Kebaya clothing in each period of their husbands' presidency had characteristics, in both the style of clothing and the hairstyle, indicating self-image. The study concludes that the First Ladies' Kebaya clothing (national clothing) is interpreted as carrying implied messages, because clothing can be observed visually. The implications concern the construction of learning patterns for clothing, national fashion design and Nusantara ethnic clothing design.

  2. Morphometric information to reduce the semantic gap in the characterization of microscopic images of thyroid nodules.

    PubMed

    Macedo, Alessandra A; Pessotti, Hugo C; Almansa, Luciana F; Felipe, Joaquim C; Kimura, Edna T

    2016-07-01

    Many systems for medical-image processing support the extraction of image attributes, but omit information that further characterizes the images. For example, morphometry can be applied to find new information about the visual content of an image, and this extension of information may yield knowledge. The results of such mappings can then be applied to recognize exam patterns, improving the accuracy of image retrieval and allowing a better interpretation of exam results. Although successfully applied to breast lesion images, the morphometric approach is still poorly explored for thyroid lesions due to the high subjectivity of thyroid examinations. This paper presents a theoretical-practical study, involving Computer Aided Diagnosis (CAD) and morphometry, to reduce the semantic gap between medical image features and human interpretation of image content. The proposed method aggregates the content of microscopic images characterized by morphometric information and other image attributes extracted by traditional object extraction algorithms. The method carries out segmentation, feature extraction, image labeling and classification. Morphometric analysis was included as an object extraction method in order to verify the improvement in accuracy for automatic classification of microscopic images. To validate this proposal and verify the utility of morphometric information for characterizing thyroid images, a CAD system was created to classify real thyroid image exams into Papillary Cancer, Goiter and Non-Cancer. Results showed that morphometric information can improve the accuracy and precision of image retrieval and the interpretation of results in computer-aided diagnosis. For example, in the scenario where all the extractors are combined with the morphometric information, the CAD system achieved its best performance (70% precision in Papillary cases).
The results indicate that morphometric information from images can help reduce the semantic gap between human interpretation and image characterization. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  3. Efficient OCT Image Enhancement Based on Collaborative Shock Filtering

    PubMed Central

    2018-01-01

    Efficient enhancement of noisy optical coherence tomography (OCT) images is a key task for interpreting them correctly. In this paper, to better enhance the details and layered structures of a human retina image, we propose collaborative shock filtering for OCT image denoising and enhancement. The noisy OCT image is first denoised by a collaborative filtering method with a new similarity measure, and the denoised image is then sharpened by shock-type filtering for edge and detail enhancement. For dim OCT images, in order to improve image contrast for the detection of tiny lesions, a gamma transformation is first used to enhance the images within proper gray levels. The proposed method, integrating image smoothing and sharpening simultaneously, obtains better visual results in experiments. PMID:29599954

  4. Efficient OCT Image Enhancement Based on Collaborative Shock Filtering.

    PubMed

    Liu, Guohua; Wang, Ziyu; Mu, Guoying; Li, Peijin

    2018-01-01

    Efficient enhancement of noisy optical coherence tomography (OCT) images is a key task for interpreting them correctly. In this paper, to better enhance the details and layered structures of a human retina image, we propose a collaborative shock filtering approach for OCT image denoising and enhancement. The noisy OCT image is first denoised by a collaborative filtering method with a new similarity measure, and the denoised image is then sharpened by shock-type filtering for edge and detail enhancement. For dim OCT images, a gamma transformation is first applied to enhance the images within the proper gray-level range, improving contrast for the detection of tiny lesions. The proposed method, which integrates image smoothing and sharpening, obtains better visual results in experiments.
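
The gamma transformation step for dim images can be sketched as follows. This is a minimal illustration of the standard power-law transform; the gamma value of 0.5 is an arbitrary choice for the example, not a parameter reported by the authors:

```python
import numpy as np

def gamma_transform(image, gamma=0.5):
    """Apply a power-law (gamma) transformation to a grayscale image in [0, 255].

    Intensities are normalized to [0, 1], raised to the power `gamma`
    (gamma < 1 brightens dim images), and rescaled back to [0, 255].
    """
    normalized = image.astype(np.float64) / 255.0
    return (normalized ** gamma) * 255.0

# A dim pixel (intensity 64) is brightened toward mid-gray:
dim = np.array([[64.0]])
bright = gamma_transform(dim, gamma=0.5)
```

With gamma = 1 the transform is the identity; values of gamma below 1 expand the dark end of the gray scale, which is why it helps dim OCT images.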

  5. Image/video understanding systems based on network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2004-03-01

    Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. Computer simulation models are built on the basis of graphs/networks, and the human brain has been found to emulate similar graph/network models. Symbols, predicates, and grammars naturally emerge in such networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type relational structure created via multilevel hierarchical compression of visual information. Primary areas provide active fusion of image features on a spatial grid-like structure whose nodes are cortical columns; spatial logic and topology are naturally present in such structures. Mid-level vision processes, such as perceptual grouping and the separation of figure from ground, are special kinds of network transformations. They convert the primary image structure into a set of more abstract structures that represent objects and the visual scene, making them easier to analyze with higher-level knowledge structures; higher-level vision phenomena are the results of such analysis. The composition of network-symbolic models combines learning, classification, and analogy together with higher-level model-based reasoning into a single framework, and it works similarly to frames and agents. Computational intelligence methods transform images into model-based knowledge representations. Based on such principles, an image/video understanding system can convert images into knowledge models and resolve uncertainty and ambiguity, allowing the creation of intelligent computer vision systems for design and manufacturing.

  6. Effect of a concurrent auditory task on visual search performance in a driving-related image-flicker task.

    PubMed

    Richard, Christian M; Wright, Richard D; Ee, Cheryl; Prime, Steven L; Shimizu, Yujiro; Vavrik, John

    2002-01-01

    The effect of a concurrent auditory task on visual search was investigated using an image-flicker technique. Participants were undergraduate university students with normal or corrected-to-normal vision who searched for changes in images of driving scenes that involved either driving-related (e.g., traffic light) or driving-unrelated (e.g., mailbox) scene elements. The results indicated that response times were significantly slower if the search was accompanied by a concurrent auditory task. In addition, slower overall responses to scenes involving driving-unrelated changes suggest that the underlying process affected by the concurrent auditory task is strategic in nature. These results were interpreted in terms of their implications for using a cellular telephone while driving. Actual or potential applications of this research include the development of safer in-vehicle communication devices.

  7. 18F-FDG PET/CT in detection of gynecomastia in patients with hepatocellular carcinoma.

    PubMed

    Wang, Hsin-Yi; Jeng, Long-Bin; Lin, Ming-Chia; Chao, Chih-Hao; Lin, Wan-Yu; Kao, Chia-Hung

    2013-01-01

    We retrospectively investigated the prevalence of gynecomastia as a source of false-positive findings on 2-[18F]fluoro-2-deoxy-d-glucose (18F-FDG) positron emission tomography (PET)/computed tomography (CT) imaging in patients with hepatocellular carcinoma (HCC). Among 127 male HCC patients who underwent 18F-FDG PET/CT scans, the 18F-FDG uptake in the bilateral breasts of 9 patients with gynecomastia was recorded as the maximum standardized uptake value (SUVmax) and by visual interpretation in both early and delayed images. The mean early SUVmax was 1.58/1.57 (right/left breast) in the nine gynecomastia patients. The three patients with an early visual score of 3 had higher early SUVmax values. Gynecomastia is a possible cause of false-positive uptake on 18F-FDG PET/CT images. Copyright © 2013 Elsevier Inc. All rights reserved.

  8. Predicting visual semantic descriptive terms from radiological image data: preliminary results with liver lesions in CT.

    PubMed

    Depeursinge, Adrien; Kurtz, Camille; Beaulieu, Christopher; Napel, Sandy; Rubin, Daniel

    2014-08-01

    We describe a framework to model visual semantics of liver lesions in CT images in order to predict the visual semantic terms (VST) reported by radiologists in describing these lesions. Computational models of VST are learned from image data using linear combinations of high-order steerable Riesz wavelets and support vector machines (SVM). In a first step, these models are used to predict the presence of each semantic term that describes liver lesions. In a second step, the distances between all VST models are calculated to establish a nonhierarchical computationally-derived ontology of VST containing inter-term synonymy and complementarity. A preliminary evaluation of the proposed framework was carried out using 74 liver lesions annotated with a set of 18 VSTs from the RadLex ontology. A leave-one-patient-out cross-validation resulted in an average area under the ROC curve of 0.853 for predicting the presence of each VST. The proposed framework is expected to foster human-computer synergies for the interpretation of radiological images while using rotation-covariant computational models of VSTs to 1) quantify their local likelihood and 2) explicitly link them with pixel-based image content in the context of a given imaging domain.
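
The figure of merit reported above, the area under the ROC curve, can be computed directly from classifier scores via the Mann-Whitney identity. The labels and scores below are invented for illustration and unrelated to the study's data:

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive example scores
    higher than a randomly chosen negative one (ties count one half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy term-presence labels and predicted scores for six lesions:
labels = [1, 1, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.2]
auc = roc_auc(labels, scores)  # 8/9: one negative outranks one positive
```

An AUC of 0.5 corresponds to chance-level ranking and 1.0 to perfect separation, which puts the reported 0.853 in context.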

  9. CAD system for automatic analysis of CT perfusion maps

    NASA Astrophysics Data System (ADS)

    Hachaj, T.; Ogiela, M. R.

    2011-03-01

    In this article, the authors present novel algorithms developed for a computer-assisted diagnosis (CAD) system for the analysis of dynamic brain perfusion computed tomography (CT) maps: cerebral blood flow (CBF) and cerebral blood volume (CBV). These methods perform both quantitative analysis (detection, measurement, and description, with a brain anatomy atlas (AA), of potential asymmetries/lesions) and qualitative analysis (semantic interpretation of visualized symptoms). The semantic interpretation of visualized symptoms (deciding the type of lesion, ischemic or hemorrhagic, and whether the brain tissue is at risk of infarction) is performed by so-called cognitive inference processes that allow reasoning about the character of pathological regions based on specialist image knowledge. The whole system is implemented on the .NET platform (C# programming language) and can be used on any standard PC with the .NET framework installed.
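
The detection of perfusion asymmetries between hemispheres can be illustrated with a crude mirror-comparison sketch. The threshold, the assumption that the midline coincides with the image's vertical center, and the toy CBF values are all illustrative only and do not reproduce the authors' atlas-based algorithm:

```python
import numpy as np

def asymmetry_map(perfusion, threshold=0.3):
    """Flag pixels whose value differs from the mirrored (contralateral)
    pixel by more than `threshold` times the mirrored value.

    A crude stand-in for asymmetry detection on a perfusion map;
    assumes the brain midline lies on the image's vertical center line."""
    mirrored = perfusion[:, ::-1]
    rel_diff = np.abs(perfusion - mirrored) / np.maximum(mirrored, 1e-6)
    return rel_diff > threshold

# Toy CBF map: right-side values halved to mimic an ischemic region.
cbf = np.array([[50.0, 50.0, 25.0, 25.0],
                [50.0, 50.0, 25.0, 25.0]])
lesion = asymmetry_map(cbf)
```

Note that a relative difference flags both sides of an asymmetric pair; a real system would use the anatomy atlas to decide which side is pathological.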

  10. Watching film for the first time: how adult viewers interpret perceptual discontinuities in film.

    PubMed

    Schwan, Stephan; Ildirar, Sermin

    2010-07-01

    Although film, television, and video play an important role in modern societies, the extent to which the similarities of cinematographic images to natural, unmediated conditions of visual experience contribute to viewers' comprehension is largely an open question. To address this question, we compared 20 inexperienced adult viewers from southern Turkey with groups of medium- and high-experienced adult viewers from the same region. In individual sessions, each participant was shown a set of 14 film clips that included a number of perceptual discontinuities typical for film. The viewers' interpretations were recorded and analyzed. The findings show that it is not the similarity to conditions of natural perception but the presence of a familiar line of action that determines the comprehensibility of films for inexperienced viewers. In the absence of such a line of action, extended prior experience is required for appropriate interpretation of cinematographic images such as those we investigated in this study.

  11. Amplitude interpretation and visualization of three-dimensional reflection data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Enachescu, M.E.

    1994-07-01

    Digital recording and processing of modern three-dimensional surveys allow for relatively good preservation and correct spatial positioning of seismic reflection amplitude. A four-dimensional seismic reflection field matrix R(x,y,t,A), which can be computer visualized (i.e., interactively rendered, edited, and animated in real time), is now available to the interpreter. The amplitude contains encoded geological information indirectly related to lithologies and reservoir properties. The magnitude of the amplitude depends not only on the acoustic impedance contrast across a boundary but is also strongly affected by the shape of the reflective boundary. This allows the interpreter to image subtle tectonic and structural elements not obvious on time-structure maps. The use of modern workstations allows for appropriate color coding of the total available amplitude range, routine on-screen time/amplitude extraction, and the display of horizon amplitude maps (horizon slices) or complex amplitude-structure spatial visualization. Stratigraphic, structural, tectonic, fluid-distribution, and paleogeographic information is commonly obtained by displaying the amplitude variation A = A(x,y,t) associated with a particular reflective surface or seismic interval. As illustrated by several case histories, traditional structural and stratigraphic interpretation combined with a detailed amplitude study generally greatly enhances extraction of subsurface geological information from a reflection data volume. In the context of three-dimensional seismic surveys, the horizon amplitude map (horizon slice), amplitude attachment to structure, and "bright clouds" displays are very powerful tools available to the interpreter.
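
The horizon slice described above, i.e. the amplitude map A(x,y) extracted from the volume along a picked reflective surface, can be sketched as a nearest-sample lookup. The array shapes, horizon times, and sample interval below are synthetic illustration data:

```python
import numpy as np

def horizon_slice(volume, horizon_times, dt):
    """Extract the amplitude map A(x, y) along a picked horizon.

    `volume` is a seismic amplitude cube indexed (x, y, t-sample);
    `horizon_times` gives the two-way time of the horizon at each
    (x, y) trace; `dt` is the time sample interval. Nearest-sample
    lookup, with no interpolation, keeps the sketch minimal."""
    idx = np.rint(horizon_times / dt).astype(int)[..., np.newaxis]
    return np.take_along_axis(volume, idx, axis=2)[..., 0]

# 2x2 traces, 4 time samples each; horizon picked at the 3rd sample
# (8 ms two-way time with a 4 ms sample interval) everywhere.
vol = np.arange(16, dtype=float).reshape(2, 2, 4)
amp = horizon_slice(vol, horizon_times=np.full((2, 2), 8.0), dt=4.0)
```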

  12. The mother in the text: metapsychology and phantasy in the work of interpretation.

    PubMed

    Petrella, Fausto

    2008-06-01

    In this paper the author discusses some characteristics of a psychoanalytic text on the basis of two pages of Freud's essay, Delusions and dreams in Jensen's 'Gradiva' (Freud, 1906), on the concept of the return of the repressed. Analysis of the text shows that the four references (Horace, Rops, Rousseau, and a clinical vignette) occurring in it present unexpected connections both with each other and with the phenomenon they illustrate. There thus emerges a hidden scenario that reveals a concealed level of the text, relating to the maternal imago. Particular attention is devoted to the importance of the figurative apparatus and images (examples in the form of narrations and visual images, metaphors, and similes) that accompany the metapsychological and conceptual construction of Freud's text. Representation in visual form is necessary for the description and construction of the psyche and for conferring life on its conceptual formulations. However, metapsychological definition also reveals a phantasy dimension underlying the text. In addition, the author shows how certain textual constraints limit the intrinsic intuitive and arbitrary nature of interpretation. Finally, the complexity of the psychoanalytic text (with its various planes and levels) is emphasized, as well as the network of possible connections fundamental to the work of interpretation. A diagram illustrates the spatio-temporal aspects of the interpretive process, as defined by the interaction between conceptual factors and specific flights of the imagination which also have to do with unconscious affects, whether in the text, the author, or the reader.

  13. Comparison of a multimedia simulator to a human model for teaching FAST exam image interpretation and image acquisition.

    PubMed

    Damewood, Sara; Jeanmonod, Donald; Cadigan, Beth

    2011-04-01

    This study compared the effectiveness of a multimedia ultrasound (US) simulator to normal human models during the practical portion of a course designed to teach the skills of both image acquisition and image interpretation for the Focused Assessment with Sonography for Trauma (FAST) exam. This was a prospective, blinded, controlled education study using medical students as an US-naïve population. After a standardized didactic lecture on the FAST exam, trainees were separated into two groups to practice image acquisition on either a multimedia simulator or a normal human model. Four outcome measures were then assessed: image interpretation of prerecorded FAST exams, adequacy of image acquisition on a standardized normal patient, perceived confidence of image adequacy, and time to image acquisition. Ninety-two students were enrolled and separated into two groups, a multimedia simulator group (n = 44), and a human model group (n = 48). Bonferroni adjustment factor determined the level of significance to be p = 0.0125. There was no difference between those trained on the multimedia simulator and those trained on a human model in image interpretation (median 80 of 100 points, interquartile range [IQR] 71-87, vs. median 78, IQR 62-86; p = 0.16), image acquisition (median 18 of 24 points, IQR 12-18 points, vs. median 16, IQR 14-20; p = 0.95), trainee's confidence in obtaining images on a 1-10 visual analog scale (median 5, IQR 4.1-6.5, vs. median 5, IQR 3.7-6.0; p = 0.36), or time to acquire images (median 3.8 minutes, IQR 2.7-5.4 minutes, vs. median = 4.5 minutes, IQR = 3.4-5.9 minutes; p = 0.044). There was no difference in teaching the skills of image acquisition and interpretation to novice FAST examiners using the multimedia simulator or normal human models. These data suggest that practical image acquisition skills learned during simulated training can be directly applied to human models. © 2011 by the Society for Academic Emergency Medicine.

  14. A deep (learning) dive into visual search behaviour of breast radiologists

    NASA Astrophysics Data System (ADS)

    Mall, Suneeta; Brennan, Patrick C.; Mello-Thoms, Claudia

    2018-03-01

    Visual search, the process of detecting and identifying objects using eye movements (saccades) and foveal vision, has been studied to identify the root causes of errors in the interpretation of mammography. The aim of this study is to model the visual search behaviour of radiologists and their interpretation of mammograms using deep machine learning approaches. Our model is based on a deep convolutional neural network, a biologically inspired multilayer perceptron that simulates the visual cortex, and is reinforced with transfer learning techniques. Eye tracking data obtained from 8 radiologists (of varying experience levels in reading mammograms) reviewing 120 two-view digital mammography cases (59 cancers) were used to train the model, which was pre-trained on the ImageNet dataset for transfer learning. Areas of the mammogram that received direct (foveally fixated), indirect (peripherally fixated) or no (never fixated) visual attention were extracted from the radiologists' visual search maps (obtained by a head-mounted eye tracking device). These areas, along with the radiologists' assessment (including confidence) of suspected malignancy, were used to model: 1) the radiologists' decision; 2) the radiologists' confidence in that decision; and 3) the attentional level (i.e. foveal, peripheral or none) received by an area of the mammogram. Our results indicate high accuracy and low misclassification in modelling these behaviours.

  15. EEG Topographic Mapping of Visual and Kinesthetic Imagery in Swimmers.

    PubMed

    Wilson, V E; Dikman, Z; Bird, E I; Williams, J M; Harmison, R; Shaw-Thornton, L; Schwartz, G E

    2016-03-01

    This study investigated differences in QEEG measures between kinesthetic and visual imagery of a 100-m swim in 36 elite competitive swimmers. Background information and post-trial checks controlled for the modality of imagery, swimming skill level, preferred imagery style, intensity of the image, and task equality. Measures of EEG relative magnitude in theta, low (7-9 Hz) and high (8-10 Hz) alpha, and low and high beta were taken from 19 scalp sites during baseline, visual, and kinesthetic imagery. QEEG magnitudes in the low alpha band during the visual and kinesthetic conditions were attenuated from baseline, but no changes were seen in any other band. Swimmers produced more low alpha EEG magnitude during visual than during kinesthetic imagery, which was interpreted as the swimmers having greater efficiency at producing visual imagery. Participants who reported a strong, rather than a weaker, feeling of the image (kinesthetic) had less low alpha magnitude, i.e., they used more cortical resources; this was not seen in the visual condition. These data suggest that low band (7-9 Hz) alpha distinguishes imagery modalities from baseline, that visual imagery requires fewer cortical resources than kinesthetic imagery, and that intense feelings of swimming require more brain activity than less intense feelings.
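
A simplified reading of "relative magnitude" in a frequency band, as that band's share of the FFT magnitude spectrum, can be sketched as follows. The sampling rate, band edges, and test signal are illustrative, not the study's actual recording parameters or processing pipeline:

```python
import numpy as np

def relative_magnitude(signal, fs, band):
    """Fraction of total spectral magnitude falling in `band` (Hz).

    A simplified stand-in for an EEG relative-magnitude measure:
    the magnitude spectrum from a real FFT is summed over the band
    and divided by the sum over all frequencies up to Nyquist."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mags = np.abs(np.fft.rfft(signal))
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return mags[in_band].sum() / mags.sum()

# A pure 8 Hz sine puts (nearly) all of its magnitude in the 7-9 Hz band.
fs = 128
t = np.arange(fs) / fs
alpha = np.sin(2 * np.pi * 8 * t)
frac = relative_magnitude(alpha, fs, band=(7, 9))
```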

  16. Developing the Use of Visual Representations to Explain Basic Astronomy Phenomena

    ERIC Educational Resources Information Center

    Galano, Silvia; Colantonio, Arturo; Leccia, Silvio; Marzoli, Irene; Puddu, Emanuella; Testa, Italo

    2018-01-01

    [This paper is part of the Focused Collection on Astronomy Education Research.] Several decades of research have contributed to our understanding of students' reasoning about astronomical phenomena. Some authors have pointed out the difficulty in reading and interpreting images used in school textbooks as factors that may justify the persistence…

  17. Developing Creativity and Abstraction in Representing Data

    ERIC Educational Resources Information Center

    South, Andy

    2012-01-01

    Creating charts and graphs is all about visual abstraction: the process of representing aspects of data with imagery that can be interpreted by the reader. Children may need help making the link between the "real" and the image. This abstraction can be achieved using symbols, size, colour and position. Where the representation is close to what…

  18. The Versatility of Photo CD Technology in the Classroom.

    ERIC Educational Resources Information Center

    Mustoe, Myles

    The Kodak Photo CD (compact disk) system provides a fun, new, very accessible way to integrate images into geography classroom presentations. Graphicacy deals with spatial information that can only be expressed by a graph, map, or photograph. The importance for geography students to develop visual observation and graphic interpretive skills is…

  19. Creating Meaning through Multimodality: Multiliteracies Assessment and Photo Projects for Online Portfolios

    ERIC Educational Resources Information Center

    Schmerbeck, Nicola; Lucht, Felecia

    2017-01-01

    Actively engaged in online media, learners today are surrounded by texts overtly and covertly transmitted by visual images, sound effects, and voices as well as the written word. Language learning portfolios can engage students in the literacy-oriented learning processes of interpretation, collaboration, and problem solving as outlined by Kern…

  20. Validation of Clay Modeling as a Learning Tool for the Periventricular Structures of the Human Brain

    ERIC Educational Resources Information Center

    Akle, Veronica; Peña-Silva, Ricardo A.; Valencia, Diego M.; Rincón-Perez, Carlos W.

    2018-01-01

    Visualizing anatomical structures and functional processes in three dimensions (3D) are important skills for medical students. However, contemplating 3D structures mentally and interpreting biomedical images can be challenging. This study examines the impact of a new pedagogical approach to teaching neuroanatomy, specifically how building a…

  1. Media Literacy: What, Why, and How?

    ERIC Educational Resources Information Center

    Grace, Donna J.

    2005-01-01

    Literacy has traditionally been associated with the printed word. But today, print literacy is not enough. Children and youth need to learn to "read" and interpret visual images as well. Film, television, videos, DVDs, computer games, and the Internet all hold a prominent and pervasive place in one's culture. Its presence in people's lives is only…

  2. Using Astronomical Photographs to Investigate Misconceptions about Galaxies and Spectra: Question Development for Clicker Use

    ERIC Educational Resources Information Center

    Lee, Hyunju; Schneider, Stephen E.

    2015-01-01

    Many topics in introductory astronomy at the college or high-school level rely implicitly on using astronomical photographs and visual data in class. However, students bring many preconceptions to their understanding of these materials that ultimately lead to misconceptions, and research about students' interpretation of astronomical images has…

  3. High resolution esophageal manometry--the switch from "intuitive" visual interpretation to Chicago classification.

    PubMed

    Srinivas, M; Balakumaran, T A; Palaniappan, S; Srinivasan, Vijaya; Batcha, M; Venkataraman, Jayanthi

    2014-03-01

    High resolution esophageal manometry (HREM) has all along been interpreted by visual interpretation of color plots, until the recent introduction of the Chicago classification, which categorizes HREM using objective measurements. This study compares the HREM diagnosis of esophageal motor disorders by visual interpretation and by the Chicago classification. Using software Trace 1.2v, 77 consecutive tracings diagnosed by visual interpretation were re-analyzed by the Chicago classification, and the findings were compared for concordance between the two systems of interpretation. The kappa agreement rate between the two observations was determined. There were 57 males (74 %), and the cohort median age was 41 years (range: 14-83 years). The majority of referrals were for gastroesophageal reflux disease, dysphagia, and achalasia. By "intuitive" visual interpretation, the tracings were reported as normal in 45 (58.4 %), achalasia in 14 (18.2 %), ineffective esophageal motility in 3 (3.9 %), nutcracker esophagus in 11 (14.3 %), and nonspecific motility changes in 4 (5.2 %). By the Chicago classification, there was 100 % agreement (kappa 1) for achalasia (type 1: 9; type 2: 5) and ineffective esophageal motility ("failed peristalsis" on visual interpretation). Normal esophageal motility, nutcracker esophagus, and nonspecific motility disorder on visual interpretation were reclassified as rapid contraction and esophagogastric junction (EGJ) outflow obstruction by the Chicago classification. The Chicago classification identified distinct clinical phenotypes, including EGJ outflow obstruction, that were not identified by visual interpretation. A significant number of tracings unclassified by visual interpretation were also classified by it.
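
The kappa agreement rate used above to compare the two interpretation systems is Cohen's kappa, which corrects raw agreement for chance. A minimal sketch in pure Python; the ratings below are made up for illustration and are not the study's data:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: chance-corrected agreement between two raters.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e is the agreement expected by chance given
    each rater's own marginal label frequencies."""
    n = len(ratings_a)
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Two readers classifying 10 tracings as normal (N) or achalasia (A):
reader_1 = list("NNNNNAAAAA")
reader_2 = list("NNNNAAAAAN")
kappa = cohens_kappa(reader_1, reader_2)  # 0.6: good, imperfect agreement
```

Kappa of 1 means perfect agreement and 0 means agreement no better than chance, which is why "kappa 1" above corresponds to the 100 % concordant achalasia calls.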

  4. A denoising algorithm for CT image using low-rank sparse coding

    NASA Astrophysics Data System (ADS)

    Lei, Yang; Xu, Dong; Zhou, Zhengyang; Wang, Tonghe; Dong, Xue; Liu, Tian; Dhabaan, Anees; Curran, Walter J.; Yang, Xiaofeng

    2018-03-01

    We propose a denoising method for CT images based on low-rank sparse coding. The proposed method constructs an adaptive dictionary of image patches and estimates the sparse coding regularization parameters using a Bayesian interpretation. A low-rank approximation approach is used to simultaneously construct the dictionary and achieve sparse representation through clustering similar image patches. A variable-splitting scheme and a quadratic optimization are used to reconstruct the CT image from the achieved sparse coefficients. We tested this denoising technique using phantom, brain, and abdominal CT images. The experimental results showed that the proposed method delivers state-of-the-art denoising performance, both in terms of objective criteria and visual quality.
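
The core low-rank approximation idea, stacking similar patches and truncating the SVD of the stack, can be sketched as follows. This omits the paper's adaptive dictionary, Bayesian parameter estimation, and variable-splitting reconstruction; the rank and the synthetic data are illustrative only:

```python
import numpy as np

def low_rank_denoise(patch_stack, rank):
    """Denoise a stack of similar patches by truncated SVD.

    Each row of `patch_stack` is a flattened patch. A group of
    similar patches forms an approximately low-rank matrix, so
    keeping only the leading `rank` singular components suppresses
    the noise spread across the remaining components."""
    u, s, vt = np.linalg.svd(patch_stack, full_matrices=False)
    s[rank:] = 0.0
    return u @ np.diag(s) @ vt

# 20 noisy copies of the same 16-pixel patch: a rank-1 signal.
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 16), (20, 1))
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = low_rank_denoise(noisy, rank=1)
```

The truncated stack is measurably closer to the clean signal than the noisy input, which is the property the clustering-plus-low-rank step exploits.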

  5. Sensitivity Profile for Orientation Selectivity in the Visual Cortex of Goggle-Reared Mice

    PubMed Central

    Yoshida, Takamasa; Ozawa, Katsuya; Tanaka, Shigeru

    2012-01-01

    It has been widely accepted that ocular dominance in the responses of visual cortical neurons can change depending on visual experience in a postnatal period. However, experience-dependent plasticity for orientation selectivity, which is another important response property of visual cortical neurons, is not yet fully understood. To address this issue, using intrinsic signal imaging and two-photon calcium imaging we attempted to observe the alteration of orientation selectivity in the visual cortex of juvenile and adult mice reared with head-mounted goggles, through which animals can experience only the vertical orientation. After one week of goggle rearing, the density of neurons optimally responding to the exposed orientation increased, while that responding to unexposed orientations decreased. These changes can be interpreted as a reallocation of preferred orientations among visually responsive neurons. Our obtained sensitivity profile for orientation selectivity showed a marked peak at 5 weeks and sustained elevation at 12 weeks and later. These features indicate the existence of a critical period between 4 and 7 weeks and residual orientation plasticity in adult mice. The presence of a dip in the sensitivity profile at 10 weeks suggests that different mechanisms are involved in orientation plasticity in childhood and adulthood. PMID:22792390

  6. Reconstruction and 3D visualisation based on objective real 3D based documentation.

    PubMed

    Bolliger, Michael J; Buck, Ursula; Thali, Michael J; Bolliger, Stephan A

    2012-09-01

    Reconstructions based directly upon forensic evidence alone are called primary information. Historically this consists of documentation of findings by verbal protocols, photographs, and other visual means. Currently, modern imaging techniques such as 3D surface scanning and radiological methods (computed tomography, magnetic resonance imaging) are also applied. Secondary interpretation is based on facts and the examiner's experience. Usually such reconstructive expert reports are given in written form, and are often enhanced by sketches. However, narrative interpretations can, especially in complex courses of action, be difficult to present and can be misunderstood. In this report we demonstrate the use of graphic reconstruction of secondary interpretation with supporting pictorial evidence, applying digital visualisation (using 'Poser') or scientific animation (using '3D Studio Max', 'Maya'), and present methods of clearly distinguishing between factual documentation and examiners' interpretation, based on three cases. The first case involved a pedestrian who was initially struck by a car on a motorway and was then run over by a second car. The second case involved a suicidal gunshot to the head with a rifle, in which the trigger was pushed with a rod. The third case dealt with a collision between two motorcycles. Pictorial reconstruction of the secondary interpretation of these cases has several advantages. The images enable an immediate overview, give rise to enhanced clarity, and compel the examiner to look at all details if he or she is to create a complete image.

  7. Interpretative variability and its impact on the prognostic value of myocardial fatty acid imaging in asymptomatic hemodialysis patients in a multicenter trial in Japan.

    PubMed

    Kiriyama, Tomonari; Kumita, Shin-Ichiro; Moroi, Masao; Nishimura, Tsunehiko; Tamaki, Nagara; Hasebe, Naoyuki; Kikuchi, Kenjiro

    2015-01-01

    The severity of impaired fatty acid utilization in the myocardium can predict cardiac death in asymptomatic patients on hemodialysis. However, interpretive variability and its impact on the prognostic value of myocardial fatty acid imaging are unknown. A total of 677 patients who received hemodialysis for ≥ 20 years and had one or more cardiovascular risk factors underwent (123)I-labeled β-methyl iodophenyl-pentadecanoic acid (BMIPP) single-photon emission computed tomography (SPECT) at 48 hospitals across Japan. SPECT images were interpreted by experts at the nuclear core laboratory and by readers with varying skill levels at clinical centers, based on the standard 17-segment model and 5-point scoring systems, independently. The κ values only reached fair agreement both for overall impression (κ=0.298, normal vs. abnormal) and for categorical impression (κ=0.244, normal vs. mildly abnormal vs. severely abnormal). The normalcy rate was lower in readers at the clinical centers (60.9%) than in experts (69.9%). In contrast to the results assessed by experts, a Kaplan-Meier analysis based on the interpretation by readers at the clinical centers failed to distinguish the risk of events in patients with normal scans from that of patients with mildly abnormal scans. Considerable variability and its impact on prognostic value were observed in the visual interpretation of BMIPP SPECT images between experts and readers at the clinical centers.

  8. Focus information is used to interpret binocular images

    PubMed Central

    Hoffman, David M.; Banks, Martin S.

    2011-01-01

    Focus information—blur and accommodation—is highly correlated with depth in natural viewing. We examined the use of focus information in solving the binocular correspondence problem and in interpreting monocular occlusions. We presented transparent scenes consisting of two planes. Observers judged the slant of the farther plane, which was seen through the nearer plane. To do this, they had to solve the correspondence problem. In one condition, the two planes were presented with sharp rendering on one image plane, as is done in conventional stereo displays. In another condition, the planes were presented on two image planes at different focal distances, simulating focus information in natural viewing. Depth discrimination performance improved significantly when focus information was correct, which shows that the visual system utilizes the information contained in depth-of-field blur in solving binocular correspondence. In a second experiment, we presented images in which one eye could see texture behind an occluder that the other eye could not see. When the occluder's texture was sharp along with the occluded texture, binocular rivalry was prominent. When the occluded and occluding textures were presented with different blurs, rivalry was significantly reduced. This shows that blur aids the interpretation of scene layout near monocular occlusions. PMID:20616139

  9. Manchester visual query language

    NASA Astrophysics Data System (ADS)

    Oakley, John P.; Davis, Darryl N.; Shann, Richard T.

    1993-04-01

    We report a database language for visual retrieval which allows queries on image feature information which has been computed and stored along with images. The language is novel in that it provides facilities for dealing with feature data which has actually been obtained from image analysis. Each line in the Manchester Visual Query Language (MVQL) takes a set of objects as input and produces another, usually smaller, set as output. The MVQL constructs are mainly based on proven operators from the field of digital image analysis. An example is the Hough-group operator which takes as input a specification for the objects to be grouped, a specification for the relevant Hough space, and a definition of the voting rule. The output is a ranked list of high scoring bins. The query could be directed towards one particular image or an entire image database, in the latter case the bins in the output list would in general be associated with different images. We have implemented MVQL in two layers. The command interpreter is a Lisp program which maps each MVQL line to a sequence of commands which are used to control a specialized database engine. The latter is a hybrid graph/relational system which provides low-level support for inheritance and schema evolution. In the paper we outline the language and provide examples of useful queries. We also describe our solution to the engineering problems associated with the implementation of MVQL.
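
The Hough-group operator described above follows the classic Hough voting pattern: features cast votes into parameter-space bins, and high-scoring bins are returned in ranked order. A minimal illustrative sketch using the standard (theta, rho) line parameterisation; the bin granularity and the input points are invented for the example and do not reproduce MVQL's actual operator:

```python
import math
from collections import Counter

def hough_lines(points, n_theta=18, rho_step=1.0):
    """Vote points into (theta-index, rho-bin) cells of the line
    parameterisation rho = x*cos(theta) + y*sin(theta), then return
    the bins ranked by vote count, as in a Hough-group style query."""
    votes = Counter()
    for x, y in points:
        for i in range(n_theta):
            theta = math.pi * i / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            votes[(i, round(rho / rho_step))] += 1
    return votes.most_common()

# Five collinear points on the x-axis: the 90-degree bin (theta index 9
# of 18, rho = 0) collects all five votes and tops the ranked list.
pts = [(i, 0) for i in range(5)]
ranked = hough_lines(pts)
top_bin, top_votes = ranked[0]
```

A query directed at an image database would run this voting per image and merge the ranked bin lists, which is why the output bins can be associated with different images.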

  10. Contrast and harmonic imaging improves accuracy and efficiency of novice readers for dobutamine stress echocardiography

    NASA Technical Reports Server (NTRS)

    Vlassak, Irmien; Rubin, David N.; Odabashian, Jill A.; Garcia, Mario J.; King, Lisa M.; Lin, Steve S.; Drinko, Jeanne K.; Morehead, Annitta J.; Prior, David L.; Asher, Craig R.; hide

    2002-01-01

    BACKGROUND: Newer contrast agents as well as tissue harmonic imaging enhance left ventricular (LV) endocardial border delineation, and therefore, improve LV wall-motion analysis. Interpretation of dobutamine stress echocardiography is observer-dependent and requires experience. This study was performed to evaluate whether these new imaging modalities would improve endocardial visualization and enhance accuracy and efficiency of the inexperienced reader interpreting dobutamine stress echocardiography. METHODS AND RESULTS: Twenty-nine consecutive patients with known or suspected coronary artery disease underwent dobutamine stress echocardiography. Both fundamental (2.5 MHz) and harmonic (1.7 and 3.5 MHz) mode images were obtained in four standard views at rest and at peak stress during a standard dobutamine infusion stress protocol. Following the noncontrast images, Optison was administered intravenously in bolus (0.5-3.0 ml), and fundamental and harmonic images were obtained. The dobutamine echocardiography studies were reviewed by one experienced and one inexperienced echocardiographer. LV segments were graded for image quality and function. Time for interpretation also was recorded. Contrast with harmonic imaging improved the diagnostic concordance of the novice reader to the expert reader by 7.1%, 7.5%, and 12.6% (P < 0.001) as compared with harmonic imaging, fundamental imaging, and fundamental imaging with contrast, respectively. For the novice reader, reading time was reduced by 47%, 55%, and 58% (P < 0.005) as compared with the time needed for fundamental, fundamental contrast, and harmonic modes, respectively. With harmonic imaging, the image quality score was 4.6% higher (P < 0.001) than for fundamental imaging. Image quality scores were not significantly different for noncontrast and contrast images. CONCLUSION: Harmonic imaging with contrast significantly improves the accuracy and efficiency of the novice dobutamine stress echocardiography reader. The use of harmonic imaging reduces the frequency of nondiagnostic wall segments.

  11. How Strong Is Your Coffee? The Influence of Visual Metaphors and Textual Claims on Consumers’ Flavor Perception and Product Evaluation

    PubMed Central

    Fenko, Anna; de Vries, Roxan; van Rompay, Thomas

    2018-01-01

    This study investigates the relative impact of textual claims and visual metaphors displayed on the product’s package on consumers’ flavor experience and product evaluation. For consumers, strength is one of the most important sensory attributes of coffee. The 2 × 3 between-subjects experiment (N = 123) compared the effects of visual metaphor of strength (an image of a lion located either on top or on the bottom of the package of coffee beans) and the direct textual claim (“extra strong”) on consumers’ responses to coffee, including product expectation, flavor evaluation, strength perception and purchase intention. The results demonstrate that both the textual claim and the visual metaphor can be efficient in communicating the product attribute of strength. The presence of the image positively influenced consumers’ product expectations before tasting. The textual claim increased the perception of strength of coffee and the purchase intention of the product. The location of the image also played an important role in flavor perception and purchase intention. The image located on the bottom of the package increased the perceived strength of coffee and purchase intention of the product compared to the image on top of the package. This result could be interpreted from the perspective of the grounded cognition theory, which suggests that a picture in the lower part of the package would automatically activate the “strong is heavy” metaphor. As heavy objects are usually associated with a position on the ground, this would explain why perceiving a visually heavy package would lead to the experience of a strong coffee. Further research is needed to better understand the relationships between a metaphorical image and its spatial position in food packaging design. PMID:29459840

  12. 2D and 3D visualization methods of endoscopic panoramic bladder images

    NASA Astrophysics Data System (ADS)

    Behrens, Alexander; Heisterklaus, Iris; Müller, Yannick; Stehle, Thomas; Gross, Sebastian; Aach, Til

    2011-03-01

    While several mosaicking algorithms have been developed to compose endoscopic images of the internal urinary bladder wall into panoramic images, the quantitative evaluation of these output images in terms of geometric distortion has often not been discussed. However, visualization of the distortion level is highly desirable for an objective image-based medical diagnosis. We therefore present in this paper a method to create quality maps from the characteristics of the transformation parameters that were applied to the endoscopic images during the registration step of the mosaicking algorithm. For a global first impression, the quality maps are overlaid on the panoramic image and highlight image regions in pseudo-colors according to their local distortion. This illustration helps surgeons identify geometrically distorted structures in the panoramic image, allowing more objective medical interpretation of tumor tissue shape and size. Aside from introducing quality maps in 2-D, we also discuss a visualization method that maps panoramic images onto a 3-D spherical bladder model. Reference points are manually selected by the surgeon in the panoramic image and the 3-D model. The panoramic image is then mapped onto the 3-D surface by the Hammer-Aitoff equal-area projection using texture mapping. Finally, the textured bladder model can be moved freely in a virtual environment for inspection. Using a two-hemisphere bladder representation, references between panoramic image regions and their corresponding spatial coordinates within the bladder model are reconstructed. This additional 3-D spatial information thus assists the surgeon in navigation, documentation, and surgical planning.

  13. Fusion of infrared and visible images based on saliency scale-space in frequency domain

    NASA Astrophysics Data System (ADS)

    Chen, Yanfei; Sang, Nong; Dan, Zhiping

    2015-12-01

    A fusion algorithm for infrared and visible images based on saliency scale-space in the frequency domain is proposed. Human attention is directed towards salient targets, which convey the most important information in the image. For the given registered infrared and visible images, visual features are first extracted to form the input hypercomplex matrix. Secondly, the Hypercomplex Fourier Transform (HFT) is used to obtain the salient regions of the infrared and visible images respectively: the amplitude spectrum of the input hypercomplex matrix is convolved with a low-pass Gaussian kernel of an appropriate scale, which is equivalent to an image saliency detector. The saliency maps are obtained by reconstructing the 2-D signal from the original phase and the filtered amplitude spectrum, at a scale selected by minimizing saliency-map entropy. Thirdly, the salient regions are fused with adaptive weighting fusion rules, and the non-salient regions are fused with a rule based on region energy (RE) and region sharpness (RS), yielding the fused image. Experimental results show that the presented algorithm preserves the rich spectral information of the visible image while effectively capturing thermal target information at different scales from the infrared image.
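    The frequency-domain saliency step in this record (smooth the amplitude spectrum, keep the phase, reconstruct) can be sketched for a single grayscale channel. This is a simplified stand-in under stated assumptions: the paper uses a hypercomplex FFT over several feature channels and an entropy-based scale selection, whereas the sketch below uses a plain 2-D FFT, one fixed Gaussian scale, and made-up function names.

```python
import numpy as np

def _gauss_blur(a, sigma):
    """Separable Gaussian convolution ('same' mode, zero-padded edges)."""
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    a = np.apply_along_axis(np.convolve, 0, a, k, mode="same")
    return np.apply_along_axis(np.convolve, 1, a, k, mode="same")

def spectral_saliency(img, sigma=2.0):
    """Low-pass the amplitude spectrum, keep the original phase, and
    reconstruct; the (post-blurred) squared magnitude is the saliency map."""
    F = np.fft.fft2(img)
    amp, phase = np.abs(F), np.angle(F)
    amp_s = _gauss_blur(amp, sigma)                 # Gaussian kernel on amplitude
    recon = np.fft.ifft2(amp_s * np.exp(1j * phase))
    sal = _gauss_blur(np.abs(recon) ** 2, sigma)
    return sal / (sal.max() + 1e-12)
```

    Smoothing the amplitude spectrum suppresses the repeated background structure while the preserved phase keeps isolated targets localized, so a small bright target on a flat background lights up in the resulting map.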

  14. A database for assessment of effect of lossy compression on digital mammograms

    NASA Astrophysics Data System (ADS)

    Wang, Jiheng; Sahiner, Berkman; Petrick, Nicholas; Pezeshk, Aria

    2018-03-01

    With widespread use of screening digital mammography, efficient storage of the vast amounts of data has become a challenge. While lossless image compression causes no risk to the interpretation of the data, it does not allow for high compression rates. Lossy compression and the associated higher compression ratios are therefore more desirable. The U.S. Food and Drug Administration (FDA) currently interprets the Mammography Quality Standards Act as prohibiting lossy compression of digital mammograms for primary image interpretation, image retention, or transfer to the patient or her designated recipient. Previous work has used reader studies to determine proper usage criteria for evaluating lossy image compression in mammography, and utilized different measures and metrics to characterize medical image quality. The drawback of such studies is that they rely on a threshold on compression ratio as the fundamental criterion for preserving the quality of images. However, compression ratio is not a useful indicator of image quality. On the other hand, many objective image quality metrics (IQMs) have shown excellent performance for natural image content for consumer electronic applications. In this paper, we create a new synthetic mammogram database with several unique features. We compare and characterize the impact of image compression on several clinically relevant image attributes such as perceived contrast and mass appearance for different kinds of masses. We plan to use this database to develop a new objective IQM for measuring the quality of compressed mammographic images to help determine the allowed maximum compression for different kinds of breasts and masses in terms of visual and diagnostic quality.

  15. A Relationship Between Visual Complexity and Aesthetic Appraisal of Car Front Images: An Eye-Tracker Study.

    PubMed

    Chassy, Philippe; Lindell, Trym A E; Jones, Jessica A; Paramei, Galina V

    2015-01-01

    Image aesthetic pleasure (AP) is conjectured to be related to image visual complexity (VC). The aim of the present study was to investigate whether (a) two image attributes, AP and VC, are reflected in eye-movement parameters; and (b) subjective measures of AP and VC are related. Participants (N=26) explored car front images (M=50) while their eye movements were recorded. Following each image exposure (10 seconds), its VC and AP were rated. Fixation count was found to positively correlate with subjective VC and its objective proxy, JPEG compression size, suggesting that this eye-movement parameter can be considered an objective behavioral measure of VC. AP, in comparison, positively correlated with average dwell time. Subjective measures of AP and VC were related too, following an inverted U-shape function best fit by a quadratic equation. In addition, AP was found to be modulated by car prestige. Our findings reveal a close relationship between subjective and objective measures of complexity and aesthetic appraisal, which is interpreted within a prototype-based theory framework. © The Author(s) 2015.
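    The objective complexity proxy mentioned in this record, JPEG compression size, rests on a simple idea: a visually busier image compresses to more bytes. A minimal, dependency-free sketch of that idea follows; to keep it self-contained it uses zlib in place of an actual JPEG encoder, which is an assumption — only the ordering (more complex image, larger compressed size) is the point.

```python
import zlib
import numpy as np

def complexity_proxy(gray, level=6):
    """Compressed byte count of an 8-bit grayscale image as a crude
    visual-complexity proxy (the study used JPEG file size)."""
    data = np.asarray(gray, dtype=np.uint8).tobytes()
    return len(zlib.compress(data, level))
```

    A flat gray image collapses to a handful of bytes, while pixel noise is nearly incompressible, so the proxy separates the two extremes cleanly.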

  16. Irrigated rice area estimation using remote sensing techniques: Project's proposal and preliminary results. [Rio Grande do Sul, Brazil

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Deassuncao, G. V.; Moreira, M. A.; Novaes, R. A.

    1984-01-01

    The development of a methodology for annual estimates of irrigated rice crop in the State of Rio Grande do Sul, Brazil, using remote sensing techniques is proposed. The project involves interpretation, digital analysis, and sampling techniques of LANDSAT imagery. Results are discussed from a preliminary phase for identifying and evaluating irrigated rice crop areas in four counties of the State, for the crop year 1982/1983. This first phase involved just visual interpretation techniques of MSS/LANDSAT images.

  17. Fetal magnetic resonance imaging (MRI): a tool for a better understanding of normal and abnormal brain development.

    PubMed

    Saleem, Sahar N

    2013-07-01

    Knowledge of the anatomy of the developing fetal brain is essential to detect abnormalities and understand their pathogenesis. Capability of magnetic resonance imaging (MRI) to visualize the brain in utero and to differentiate between its various tissues makes fetal MRI a potential diagnostic and research tool for the developing brain. This article provides an approach to understand the normal and abnormal brain development through schematic interpretation of fetal brain MR images. MRI is a potential screening tool in the second trimester of pregnancies in fetuses at risk for brain anomalies and helps in describing new brain syndromes with in utero presentation. Accurate interpretation of fetal MRI can provide valuable information that helps genetic counseling, facilitates management decisions, and guides therapy. Fetal MRI can help in better understanding the pathogenesis of fetal brain malformations and can support research that could lead to disease-specific interventions.

  18. Intelligent platforms for disease assessment: novel approaches in functional echocardiography.

    PubMed

    Sengupta, Partho P

    2013-11-01

    Accelerating trends in the dynamic digital era (from 2004 onward) have resulted in the emergence of novel parametric imaging tools that allow easy and accurate extraction of quantitative information from cardiac images. This review principally attempts to heighten awareness of newer emerging paradigms that may advance the acquisition, visualization, and interpretation of the large functional data sets obtained during cardiac ultrasound imaging. Incorporation of innovative cognitive software that allows advanced pattern recognition and disease forecasting will likely transform the human-machine interface and the interpretation process to achieve a more efficient and effective work environment. Novel technologies for automation and big-data analytics that are already active in other fields need to be rapidly adapted to the health care environment, with new academic-industry collaborations to enrich and accelerate the delivery of new decision-making tools for enhancing patient care. Copyright © 2013. Published by Elsevier Inc.

  19. Envisioning disaster in the 1910 Paris flood.

    PubMed

    Jackson, Jeffrey H

    2011-01-01

    This article uncovers the visual narratives embedded within the photography of the 1910 Paris flood. Images offered Parisians multiple ways to understand and construe the significance of the flood and provided interpretive frameworks to decide the meaning of this event. Investigating three interlocking narratives of ruin, beauty, and fraternité, the article shows how photographs of Paris under water allowed residents to make sense of the destruction but also to imagine the city’s reconstruction. The article concludes with a discussion of the role of visual culture in recovering from urban disasters.

  20. Object segmentation controls image reconstruction from natural scenes

    PubMed Central

    2017-01-01

    The structure of the physical world projects images onto our eyes. However, those images are often poorly representative of environmental structure: well-defined boundaries within the eye may correspond to irrelevant features of the physical world, while critical features of the physical world may be nearly invisible at the retinal projection. The challenge for the visual cortex is to sort these two types of features according to their utility in ultimately reconstructing percepts and interpreting the constituents of the scene. We describe a novel paradigm that enabled us to selectively evaluate the relative role played by these two feature classes in signal reconstruction from corrupted images. Our measurements demonstrate that this process is quickly dominated by the inferred structure of the environment, and only minimally controlled by variations of raw image content. The inferential mechanism is spatially global and its impact on early visual cortex is fast. Furthermore, it retunes local visual processing for more efficient feature extraction without altering the intrinsic transduction noise. The basic properties of this process can be partially captured by a combination of small-scale circuit models and large-scale network architectures. Taken together, our results challenge compartmentalized notions of bottom-up/top-down perception and suggest instead that these two modes are best viewed as an integrated perceptual mechanism. PMID:28827801

  1. Stereo study as an aid to visual analysis of ERTS and Skylab images

    NASA Technical Reports Server (NTRS)

    Vangenderen, J. L. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. The parallax on ERTS and Skylab images is sufficiently large for exploitation by human photointerpreters. The ability to view the imagery stereoscopically improves the effective signal-to-noise ratio. Stereoscopic examination of orbital data can contribute to studies of spatial, spectral, and temporal variations in the imagery. The combination of true stereo parallax and shadow parallax offers many possibilities to human interpreters for making meaningful analyses of orbital imagery.

  2. Guidance of attention by information held in working memory.

    PubMed

    Calleja, Marissa Ortiz; Rich, Anina N

    2013-05-01

    Information held in working memory (WM) can guide attention during visual search. The authors of recent studies have interpreted the effect of holding verbal labels in WM as guidance of visual attention by semantic information. In a series of experiments, we tested how attention is influenced by visual features versus category-level information about complex objects held in WM. Participants either memorized an object's image or its category. While holding this information in memory, they searched for a target in a four-object search display. On exact-match trials, the memorized item reappeared as a distractor in the search display. On category-match trials, another exemplar of the memorized item appeared as a distractor. On neutral trials, none of the distractors were related to the memorized object. We found attentional guidance in visual search on both exact-match and category-match trials in Experiment 1, in which the exemplars were visually similar. When we controlled for visual similarity among the exemplars by using four possible exemplars (Exp. 2) or by using two exemplars rated as being visually dissimilar (Exp. 3), we found attentional guidance only on exact-match trials when participants memorized the object's image. The same pattern of results held when the target was invariant (Exps. 2-3) and when the target was defined semantically and varied in visual features (Exp. 4). The findings of these experiments suggest that attentional guidance by WM requires active visual information.

  3. On vegetation mapping in Alaska using LANDSAT imagery with primary concerns for method and purpose in satellite image-based vegetation and land-use mapping and the visual interpretation of imagery in photographic format

    NASA Technical Reports Server (NTRS)

    Anderson, J. H. (Principal Investigator)

    1976-01-01

    The author has identified the following significant results. A simulated color infrared LANDSAT image covering the western Seward Peninsula was used for identifying and mapping vegetation by direct visual examination. The 1:1,083,400 scale print used was prepared by a color additive process using positive transparencies from MSS bands 4, 5, and 7. Seven color classes were recognized. A vegetation map of a 3200 sq km area just west of Fairbanks, Alaska, was made. Five colors were recognized on the image and matched to vegetation types roughly equivalent to formations in the UNESCO classification: orange - broadleaf deciduous forest; gray - needleleaf evergreen forest; light violet - subarctic alpine tundra vegetation; violet - broadleaf deciduous shrub thicket; and dull violet - bog vegetation.

  4. Automated detection of analyzable metaphase chromosome cells depicted on scanned digital microscopic images

    NASA Astrophysics Data System (ADS)

    Qiu, Yuchen; Wang, Xingwei; Chen, Xiaodong; Li, Yuhua; Liu, Hong; Li, Shibo; Zheng, Bin

    2010-02-01

    Visually searching for analyzable metaphase chromosome cells under a microscope is time-consuming and difficult. To improve detection efficiency, consistency, and diagnostic accuracy, an automated microscopic image scanning system was developed and tested to directly acquire digital images with sufficient spatial resolution for clinical diagnosis. A computer-aided detection (CAD) scheme was also developed and integrated into the image scanning system to search for and detect the regions of interest (ROIs) that contain analyzable metaphase chromosome cells in the large volume of scanned images acquired from one specimen. Thus, the cytogeneticists need observe and interpret only a limited number of ROIs. In this study, the high-resolution microscopic image scanning and CAD performance was investigated and evaluated using nine sets of images scanned from either bone marrow (three) or blood (six) specimens for the diagnosis of leukemia. The automated CAD-selection results were compared with visual selection. In the experiment, the cytogeneticists first visually searched for analyzable metaphase chromosome cells in the specimens under microscopes. The specimens were then automatically scanned, and the CAD scheme was applied to detect and save ROIs containing analyzable cells while deleting the others. The automatically selected ROIs were then examined by a panel of three cytogeneticists. From the scanned images, CAD selected more analyzable cells than the cytogeneticists' initial visual examinations in both blood and bone marrow specimens. In general, CAD performed better on blood specimens. Even in the three bone marrow specimens, CAD selected 50, 22, and 9 ROIs, respectively. Besides matching the 9, 7, and 5 analyzable cells of the initial visual selection in these three specimens, the cytogeneticists also found 41, 15, and 4 new analyzable cells that had been missed in the initial visual search. This experiment showed the feasibility of applying this CAD-guided high-resolution microscopic image scanning system to prescreen and select ROIs that may contain analyzable metaphase chromosome cells. The success and further improvement of this automated scanning system may have great impact on future clinical practice in genetic laboratories for detecting and diagnosing disease.

  5. Neural dynamics of image representation in the primary visual cortex

    PubMed Central

    Yan, Xiaogang; Khambhati, Ankit; Liu, Lei; Lee, Tai Sing

    2013-01-01

    Horizontal connections in the primary visual cortex have been hypothesized to play a number of computational roles: association field for contour completion, surface interpolation, surround suppression, and saliency computation. Here, we argue that horizontal connections might also serve a critical role of computing the appropriate codes for image representation. That the early visual cortex or V1 explicitly represents the image we perceive has been a common assumption in computational theories of efficient coding (Olshausen and Field 1996), yet such a framework for understanding the circuitry in V1 has not been seriously entertained in the neurophysiological community. In fact, a number of recent fMRI and neurophysiological studies cast doubt on the neural validity of such an isomorphic representation (Cornelissen et al. 2006, von der Heydt et al. 2003). In this study, we investigated, neurophysiologically, how V1 neurons respond to uniform color surfaces and show that spiking activities of neurons can be decomposed into three components: a bottom-up feedforward input, an articulation of color tuning, and a contextual modulation signal that is inversely proportional to the distance from the bounding contrast border. We demonstrate through computational simulations that the behaviors of a model for image representation are consistent with many aspects of our neural observations. We conclude that the hypothesis of isomorphic representation of images in V1 remains viable, and this hypothesis suggests an additional new interpretation of the functional roles of horizontal connections in the primary visual cortex. PMID:22944076

  6. Images by the vineyard: images of addiction and substance users in the media and other culture sites/sights.

    PubMed

    Allamani, Allaman; Mattiacci, Silvia

    2015-03-01

    This article constitutes a discovery journey into the world of drinking images, the pleasures and harms related to consuming alcoholic beverages, as well as the relationships between drinking and spirituality. These aspects are described historically and globally, over time through a series of snapshots and mini-discussions about both visual and mental images from art, classical literature and operatic music.The images are interpreted according to how they represent the drinking culture within which they were created and sustained, and how they are able to involve the spectator and the user in terms of either empathizing, accepting and including or distancing, stigmatizing and marginalizing the user.

  7. A Comparative Study of Video Presentation Modes in Relation to L2 Listening Success

    ERIC Educational Resources Information Center

    Li, Chen-Hong

    2016-01-01

    Video comprehension involves interpreting both sounds and images. Research has shown that processing an aural text with relevant pictorial information effectively enhances second/foreign language (L2) listening comprehension. A hypothesis underlying this mixed-methods study is that a visual-only silent film used as an advance organiser to activate…

  8. Geologic interpretation of LANDSAT satellite images for the Qattara Depression area, Egypt

    NASA Technical Reports Server (NTRS)

    Elshazly, E. M.; Abdel-Hady, M. A.; Elghawaby, M. A.; Khawasik, S. M.; Elshazly, M. M. (Principal Investigator)

    1976-01-01

    The author has identified the following significant results. For the first time the regional geological units are given. Faults, fractures, and folds are included, as well as drainage lines which help to visualize the environmental impact of the Qattara project for electric power generation and to assess the regional questions involved in its implementation.

  9. Moccasin on One Foot, High Heel on the Other: Life Story Reflections of Coreen Gladue

    ERIC Educational Resources Information Center

    Vannini, April; Gladue, Coreen

    2009-01-01

    Drawing from life history interviews with Coreen Gladue--a Cree/Metis woman resident of British Columbia, Canada--this article uses poetic representation and visual images to tell stories about Coreen's sense of self and identity, family relations, education, and interpretation of the meanings of Canada's "Indian Act". Poems and…

  10. Elementary Teachers' Selection and Use of Visual Models

    NASA Astrophysics Data System (ADS)

    Lee, Tammy D.; Gail Jones, M.

    2018-02-01

    As science grows in complexity, science teachers face an increasing challenge of helping students interpret models that represent complex science systems. Little is known about how teachers select and use models when planning lessons. This mixed methods study investigated the pedagogical approaches and visual models used by elementary in-service and preservice teachers in the development of a science lesson about a complex system (e.g., water cycle). Sixty-seven elementary in-service and 69 elementary preservice teachers completed a card sort task designed to document the types of visual models (e.g., images) that teachers choose when planning science instruction. Quantitative and qualitative analyses were conducted to analyze the card sort task. Semistructured interviews were conducted with a subsample of teachers to elicit the rationale for image selection. Results from this study showed that both experienced in-service teachers and novice preservice teachers tended to select similar models and use similar rationales for images to be used in lessons. Teachers tended to select models that were aesthetically pleasing and simple in design and illustrated specific elements of the water cycle. The results also showed that teachers were not likely to select images that represented the less obvious dimensions of the water cycle. Furthermore, teachers selected visual models more as a pedagogical tool to illustrate specific elements of the water cycle and less often as a tool to promote student learning related to complex systems.

  11. Use of digital Munsell color space to assist interpretation of imaging spectrometer data: Geologic examples from the northern Grapevine Mountains, California and Nevada

    NASA Technical Reports Server (NTRS)

    Kruse, F. A.; Knepper, D. H., Jr.; Clark, R. N.

    1986-01-01

    Techniques using Munsell color transformations were developed for reducing 128 channels (or fewer) of Airborne Imaging Spectrometer (AIS) data to a single color-composite image suitable for both visual interpretation and digital analysis. Using AIS data acquired in 1984 and 1985, limestone and dolomite roof pendants, as well as sericite-illite and other clay minerals related to alteration, were mapped in a quartz monzonite stock in the northern Grapevine Mountains of California and Nevada. Field studies and laboratory spectral measurements verify the mineralogical distributions mapped from the AIS data.
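    The idea behind such a transformation is to map three derived per-pixel parameters onto perceptual color axes (hue, value, chroma) and then back to RGB for display. The sketch below is an illustrative stand-in under stated assumptions: it uses the HSV space from Python's standard colorsys module rather than the authors' digital Munsell transform, and the three input parameters (and the function name) are hypothetical.

```python
import colorsys
import numpy as np

def hvc_composite(hue_p, value_p, chroma_p):
    """Map three per-pixel parameters in [0, 1] onto hue/value/chroma-like
    axes (via HSV as a stand-in) and return an RGB image in [0, 1]."""
    h = np.clip(hue_p, 0.0, 1.0)
    v = np.clip(value_p, 0.0, 1.0)
    s = np.clip(chroma_p, 0.0, 1.0)    # chroma stands in for saturation
    rgb = np.empty(h.shape + (3,))
    for idx in np.ndindex(h.shape):
        rgb[idx] = colorsys.hsv_to_rgb(h[idx], s[idx], v[idx])
    return rgb
```

    In a geologic application, one might drive hue with a diagnostic band ratio, value with albedo, and chroma with a spectral-contrast measure, so that mineralogically distinct pixels separate in color while staying in a single composite image.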

  12. [Artificial sight: recent developments].

    PubMed

    Zeitz, O; Keserü, M; Hornig, R; Richard, G

    2009-03-01

    The implantation of electronic retina stimulators appears to be a future possibility for restoring vision, at least partially, in patients with retinal degeneration. The idea of such visual prostheses is not new, but general technical progress has made it more likely that a functioning implant will reach the market soon. Visual prostheses may be integrated into the visual system at various points: there are subretinal and epiretinal implants, as well as implants connected directly to the optic nerve or the visual cortex. The epiretinal approach is the most promising at the moment, but the problem of appropriately modulating the image information remains unsolved; this will be necessary to provide interpretable visual information to the brain. The present article summarises these concepts and includes some of the latest information from recent conferences.

  13. Active vision and image/video understanding with decision structures based on the network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2003-08-01

    Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding, i.e., an interpretation of visual information in terms of such knowledge models. The human brain is found to emulate knowledge structures in the form of network-symbolic models, which implies an important paradigm shift in our understanding of the brain, from neural networks to "cortical software". Symbols, predicates, and grammars emerge naturally in such active multilevel hierarchical networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type decision structure created via multilevel hierarchical compression of visual information. Mid-level vision processes such as clustering, perceptual grouping, and separation of figure from ground are special kinds of graph/network transformations. They convert low-level image structure into a set of more abstract structures, which represent objects and the visual scene, making them easy to analyze with higher-level knowledge structures. Higher-level vision phenomena are the results of such analysis. Composition of network-symbolic models works similarly to frames and agents, combining learning, classification, and analogy with higher-level model-based reasoning in a single framework. Such models do not require supercomputers. Based on these principles, and using methods of computational intelligence, an image understanding system can convert images into network-symbolic knowledge models and effectively resolve uncertainty and ambiguity, providing a unifying representation for perception and cognition. This allows the creation of new intelligent computer-vision systems for the robotics and defense industries.

  14. Evaluation of quantitative image analysis criteria for the high-resolution microendoscopic detection of neoplasia in Barrett's esophagus

    NASA Astrophysics Data System (ADS)

    Muldoon, Timothy J.; Thekkek, Nadhi; Roblyer, Darren; Maru, Dipen; Harpaz, Noam; Potack, Jonathan; Anandasabapathy, Sharmila; Richards-Kortum, Rebecca

    2010-03-01

Early detection of neoplasia in patients with Barrett's esophagus is essential to improve outcomes. The aim of this ex vivo study was to evaluate the ability of high-resolution microendoscopic imaging and quantitative image analysis to identify neoplastic lesions in patients with Barrett's esophagus. Nine patients with pathologically confirmed Barrett's esophagus underwent endoscopic examination with biopsies or endoscopic mucosal resection. Resected fresh tissue was imaged with fiber-bundle microendoscopy; images were analyzed by visual interpretation or by quantitative image analysis to predict whether the imaged sites were non-neoplastic or neoplastic. The best-performing pair of quantitative features was chosen based on its ability to correctly classify the data into the two groups. Predictions were compared to the gold standard of histopathology. Subjective analysis of the images by expert clinicians achieved average sensitivity and specificity of 87% and 61%, respectively. The best-performing quantitative classification algorithm relied on two image textural features and achieved a sensitivity and specificity of 87% and 85%, respectively. This ex vivo pilot trial demonstrates that quantitative analysis of images obtained with a simple microendoscope system can distinguish neoplasia in Barrett's esophagus with good sensitivity and specificity when compared to histopathology and to subjective image interpretation.
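    The classification step described above, thresholding a pair of texture features against histopathology labels, can be sketched as follows. The feature values, weights, and threshold here are hypothetical placeholders, not the study's actual algorithm:

```python
import numpy as np

# Hypothetical two-feature classifier in the spirit of the study: each imaged
# site is described by two texture features, and a linear decision boundary
# separates non-neoplastic (0) from neoplastic (1) sites.
def classify(features, weights, threshold):
    """Label a site neoplastic (1) when the weighted feature score exceeds threshold."""
    return (features @ weights > threshold).astype(int)

def sensitivity_specificity(pred, truth):
    """Compare predictions to the histopathology 'gold standard' labels."""
    tp = np.sum((pred == 1) & (truth == 1))
    tn = np.sum((pred == 0) & (truth == 0))
    fp = np.sum((pred == 1) & (truth == 0))
    fn = np.sum((pred == 0) & (truth == 1))
    return tp / (tp + fn), tn / (tn + fp)

# Toy data: four sites, truth labels from histopathology.
features = np.array([[0.2, 0.1], [0.9, 0.8], [0.8, 0.9], [0.1, 0.3]])
truth = np.array([0, 1, 1, 0])
pred = classify(features, weights=np.array([1.0, 1.0]), threshold=1.0)
sens, spec = sensitivity_specificity(pred, truth)
```

    In practice the feature pair and boundary would be selected on training data and validated against held-out histopathology, as the study does.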

  15. Bone age maturity assessment using hand-held device

    NASA Astrophysics Data System (ADS)

    Ratib, Osman M.; Gilsanz, Vicente; Liu, Xiaodong; Boechat, M. I.

    2004-04-01

Purpose: Assessment of bone maturity is traditionally performed through visual comparison of a hand and wrist radiograph with existing reference images in textbooks. Our goal was to develop a digital index based on idealized hand X-ray images that can be incorporated in a hand-held computer and used for visual assessment of bone age in patients. Material and methods: Because of the large variability of bone maturation in normal subjects, we generated a set of "ideal" images obtained by computer combinations of images from our normal reference data sets. Software for hand-held PDA devices was developed for easy navigation through the set of images and visual selection of matching images. A formula based on our statistical analysis provides the standard deviation from normal based on the chronological age of the patient. The accuracy of the program was compared to traditional interpretation by two radiologists in a double-blind reading of 200 normal Caucasian children (100 boys, 100 girls). Results: Strong correlations were present between chronological age and bone age (r > 0.9), with no statistical difference between the digital and traditional assessment methods. Determinations of carpal bone maturity in adolescents were slightly more accurate using the digital system. Users praised the convenience and effectiveness of the digital Palm Index in clinical practice. Conclusion: An idealized digital Palm Bone Age Index provides a convenient and effective alternative to conventional atlases for the assessment of skeletal maturity.
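    The deviation-from-normal formula mentioned above can be illustrated with a minimal sketch. The SD-by-age values below are invented placeholders; the actual statistics come from the authors' normal reference data sets:

```python
# Hypothetical sketch of a bone-age deviation score. The true reference
# statistics belong to the study's normal data sets, not these numbers.
SD_BY_AGE = {6: 0.8, 8: 0.9, 10: 1.1, 12: 1.3}  # invented SD of bone age (years)

def bone_age_deviation(bone_age_yr, chron_age_yr):
    """Deviation of the visually matched bone age from chronological age, in SD units."""
    sd = SD_BY_AGE[chron_age_yr]
    return (bone_age_yr - chron_age_yr) / sd

# A 10-year-old matched to the 11.1-year "ideal" image sits about 1 SD above normal.
z = bone_age_deviation(bone_age_yr=11.1, chron_age_yr=10)
```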

  16. Development of a 3-D X-ray system

    NASA Astrophysics Data System (ADS)

    Evans, James Paul Owain

The interpretation of standard two-dimensional x-ray images by humans is often very difficult. This is due to the lack of visual cues to depth in an image that has been produced by transmitted radiation. The solution put forward in this research is to introduce binocular parallax, a powerful physiological depth cue, into the resultant shadowgraph x-ray image. This has been achieved by developing a binocular stereoscopic x-ray imaging technique, which can be used both for visual inspection by human observers and for the extraction of three-dimensional co-ordinate information. The technique is implemented in the design and development of two experimental x-ray systems and in the development of measurement algorithms. The first experimental machine is based on standard linear x-ray detector arrays and was designed as an optimum configuration for visual inspection by human observers. However, it was felt that a combination of the 3-D visual inspection capability together with a measurement facility would enhance the usefulness of the technique. Therefore, both a theoretical and an empirical analysis of the co-ordinate measurement capability of the machine has been carried out. The measurement is based on close-range photogrammetric techniques. The accuracy of the measurement has been found to be of the order of 4 mm in x, 3 mm in y and 6 mm in z. A second experimental machine was developed, based on the same technique as that used for the first machine. However, a major departure has been the introduction of a dual-energy linear x-ray detector array, which will allow, in general, discrimination between organic and inorganic substances. The second design is a compromise between ease of visual inspection for human observers and optimum three-dimensional co-ordinate measurement capability. The system is part of an ongoing research programme into the possibility of introducing psychological depth cues into the resultant x-ray images.
The research presented in this thesis was initiated to enhance the visual interpretation of complex x-ray images, specifically in response to problems encountered in the routine screening of freight by HM Customs and Excise. This phase of the work culminated in the development of the first experimental machine. During this work the security industry was starting to adopt a new type of x-ray detector, namely the dual-energy x-ray sensor. The Department of Transport made funding available to the Police Scientific Development Branch (P.S.D.B.), part of the Home Office Science and Technology Group, to investigate the possibility of utilising the dual-energy sensor in a 3-D x-ray screening system. This phase of the work culminated in the development of the second experimental machine.
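    The binocular-parallax principle behind the co-ordinate measurement can be illustrated with a simplified pinhole stereo model. The actual systems rely on calibrated close-range photogrammetric techniques, so the relation below is only a first-order sketch with made-up numbers:

```python
def depth_from_parallax(baseline_mm, focal_mm, disparity_mm):
    """Simplified stereo relation z = B * f / d: depth is inversely
    proportional to the parallax d measured between the two views."""
    if disparity_mm <= 0:
        raise ValueError("corresponding points must exhibit positive parallax")
    return baseline_mm * focal_mm / disparity_mm

# A 100 mm baseline, a 1000 mm effective focal distance, and a measured
# 10 mm parallax place the feature 10 m from the sensor plane.
z = depth_from_parallax(100.0, 1000.0, 10.0)
```

    The inverse dependence on disparity also explains why the z accuracy (6 mm) is worse than x and y: small parallax errors are amplified for distant points.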

  17. Storage and retrieval of digital images in dermatology.

    PubMed

    Bittorf, A; Krejci-Papa, N C; Diepgen, T L

    1995-11-01

Differential diagnosis in dermatology relies on the interpretation of visual information in the form of clinical and histopathological images. Up until now, reference images have had to be retrieved from textbooks and/or appropriate journals. To overcome the inherent limitations of those storage media with respect to the number of images stored, display, and available search parameters, we designed a computer-based database of digitized dermatologic images. Images were taken from the photo archive of the Dermatological Clinic of the University of Erlangen. A database was designed using the Entity-Relationship approach. It was implemented on a PC-Windows platform using MS Access® and MS Visual Basic®. A Sparc 10 workstation running the CERN Hypertext-Transfer-Protocol-Daemon (httpd) 3.0 pre 6 software was used as the WWW server. For compressed storage on a hard drive, a quality factor of 60 allowed on-screen differential diagnosis and corresponded to a compression factor of 1:35 for clinical images and 1:40 for histopathological images. Hierarchical keys of clinical or histopathological criteria permitted multi-criteria searches. A script using the Common Gateway Interface (CGI) enabled remote search and image retrieval via the World-Wide-Web (W3). A dermatologic image database featuring clinical and histopathological images was constructed, which allows for multi-parameter searches and world-wide remote access.

  18. Visual design: a step towards multicultural health care.

    PubMed

    Alvarez, Juliana

    2014-02-01

Standing at the crossroads of anthropology, communication, industrial design and new technology theories, this article describes the communication challenges posed during hospital emergencies by linguistic and cultural differences between health care professionals and patients. In order to overcome communication barriers, the proposal of a visual solution was analyzed. Likewise, the problem was studied based on the concepts of perception, comprehension, interpretation and graphic representation according to visual culture and semiotics theories. One hundred and fifty images showing symptoms were analyzed in order to identify a pluricultural iconographic code. The results enabled the development of a list of design criteria and the creation of the application "My Symptoms Translator" as an option for overcoming verbal language barriers and cultural differences.

  19. Using visual art and collaborative reflection to explore medical attitudes toward vulnerable persons

    PubMed Central

    Kidd, Monica; Nixon, Lara; Rosenal, Tom; Jackson, Roberta; Pereles, Laurie; Mitchell, Ian; Bendiak, Glenda; Hughes, Lisa

    2016-01-01

    Background Vulnerable persons often face stigma-related barriers while seeking health care. Innovative education and professional development methods are needed to help change this. Method We describe an interdisciplinary group workshop designed around a discomfiting oil portrait, intended to trigger provocative conversations among health care students and practitioners, and we present our mixed methods analysis of participant reflections. Results After the workshop, participants were significantly more likely to endorse the statements that the observation and interpretive skills involved in viewing visual art are relevant to patient care and that visual art should be used in medical education to improve students’ observational skills, narrative skills, and empathy with their patients. Subsequent to the workshop, significantly more participants agreed that art interpretation should be required curriculum for health care students. Qualitative comments from two groups from two different education and professional contexts were examined for themes; conversations focused on issues of power, body image/self-esteem, and lessons for clinical practice. Conclusions We argue that difficult conversations about affective responses to vulnerable persons are possible in a collaborative context using well-chosen works of visual art that can stand in for a patient. PMID:27103949

  20. Using visual art and collaborative reflection to explore medical attitudes toward vulnerable persons.

    PubMed

    Kidd, Monica; Nixon, Lara; Rosenal, Tom; Jackson, Roberta; Pereles, Laurie; Mitchell, Ian; Bendiak, Glenda; Hughes, Lisa

    2016-01-01

    Vulnerable persons often face stigma-related barriers while seeking health care. Innovative education and professional development methods are needed to help change this. We describe an interdisciplinary group workshop designed around a discomfiting oil portrait, intended to trigger provocative conversations among health care students and practitioners, and we present our mixed methods analysis of participant reflections. After the workshop, participants were significantly more likely to endorse the statements that the observation and interpretive skills involved in viewing visual art are relevant to patient care and that visual art should be used in medical education to improve students' observational skills, narrative skills, and empathy with their patients. Subsequent to the workshop, significantly more participants agreed that art interpretation should be required curriculum for health care students. Qualitative comments from two groups from two different education and professional contexts were examined for themes; conversations focused on issues of power, body image/self-esteem, and lessons for clinical practice. We argue that difficult conversations about affective responses to vulnerable persons are possible in a collaborative context using well-chosen works of visual art that can stand in for a patient.

  1. Walking modulates speed sensitivity in Drosophila motion vision.

    PubMed

    Chiappe, M Eugenia; Seelig, Johannes D; Reiser, Michael B; Jayaraman, Vivek

    2010-08-24

    Changes in behavioral state modify neural activity in many systems. In some vertebrates such modulation has been observed and interpreted in the context of attention and sensorimotor coordinate transformations. Here we report state-dependent activity modulations during walking in a visual-motor pathway of Drosophila. We used two-photon imaging to monitor intracellular calcium activity in motion-sensitive lobula plate tangential cells (LPTCs) in head-fixed Drosophila walking on an air-supported ball. Cells of the horizontal system (HS)--a subgroup of LPTCs--showed stronger calcium transients in response to visual motion when flies were walking rather than resting. The amplified responses were also correlated with walking speed. Moreover, HS neurons showed a relatively higher gain in response strength at higher temporal frequencies, and their optimum temporal frequency was shifted toward higher motion speeds. Walking-dependent modulation of HS neurons in the Drosophila visual system may constitute a mechanism to facilitate processing of higher image speeds in behavioral contexts where these speeds of visual motion are relevant for course stabilization. Copyright 2010 Elsevier Ltd. All rights reserved.

  2. Correlation mapping for visualizing propagation of pulsatile CSF motion in intracranial space based on magnetic resonance phase contrast velocity images: preliminary results.

    PubMed

    Yatsushiro, Satoshi; Hirayama, Akihiro; Matsumae, Mitsunori; Kajiwara, Nao; Abdullah, Afnizanfaizal; Kuroda, Kagayaki

    2014-01-01

Correlation time mapping based on magnetic resonance (MR) velocimetry has been applied to pulsatile cerebrospinal fluid (CSF) motion to visualize the pressure transmission between CSF at different locations and/or between CSF and arterial blood flow. Healthy-volunteer experiments demonstrated that the technique visualized pulsatile CSF motion propagating from the CSF space in the vicinity of blood vessels with short delays and relatively high correlation coefficients. Experiments with patients indicated that the properties of their CSF motion differed from those of the healthy volunteers. The resultant images in healthy volunteers implied slight individual differences in the locations of the CSF driving sources. Clinical interpretation of these preliminary results is required before the present technique can be applied to classifying the status of hydrocephalus.
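    Per voxel pair, correlation time mapping of this kind reduces to estimating the delay and peak coefficient of a normalized cross-correlation between two velocity waveforms. A minimal NumPy sketch (not the authors' implementation; the sampling rate and waveforms are synthetic):

```python
import numpy as np

def correlation_map(ref, sig, fs):
    """Return (delay_seconds, peak_correlation) between two sampled waveforms.
    Positive delay means sig lags ref."""
    ref0 = ref - ref.mean()
    sig0 = sig - sig.mean()
    xc = np.correlate(sig0, ref0, mode="full")
    xc /= np.linalg.norm(ref0) * np.linalg.norm(sig0)  # normalize to [-1, 1]
    lag = np.argmax(xc) - (len(ref) - 1)               # index 0 is lag -(N-1)
    return lag / fs, xc.max()

# Synthetic test: one cardiac-like cycle sampled at 100 Hz, and a copy
# delayed by 5 samples (50 ms).
t = np.linspace(0, 1, 100, endpoint=False)
ref = np.sin(2 * np.pi * t)
sig = np.roll(ref, 5)
delay, r = correlation_map(ref, sig, fs=100)
```

    Mapping `delay` and `r` over all voxels against a reference waveform would give the delay/correlation images described in the abstract.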

  3. Mirion--a software package for automatic processing of mass spectrometric images.

    PubMed

    Paschke, C; Leisner, A; Hester, A; Maass, K; Guenther, S; Bouschen, W; Spengler, B

    2013-08-01

Mass spectrometric imaging (MSI) techniques are of growing interest for the life sciences. In recent years, the development of new instruments employing ion sources tailored for spatial scanning has allowed the acquisition of large data sets. Subsequent data processing, however, is still a bottleneck in the analytical process, as manual data interpretation is impossible within a reasonable time frame. The transformation of mass spectrometric data into spatial distribution images of detected compounds has turned out to be the most appropriate method to visualize the results of such scans, as humans are able to interpret images faster and more easily than plain numbers. Image generation is thus a complex and time-consuming, yet highly effective, task. The free software package "Mirion," presented in this paper, allows the handling and analysis of data sets acquired by mass spectrometry imaging. Mirion can be used for image processing of MSI data obtained from many different sources, as it uses the HUPO-PSI-based standard data format imzML, which is implemented in the proprietary software of most of the mass spectrometer companies. Different graphical representations of the recorded data are available. Furthermore, automatic calculation and overlay of mass spectrometric images promotes direct comparison of different analytes for data evaluation. The program also includes tools for image processing and image analysis.
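    The core transformation such software performs, turning per-pixel mass spectra into a spatial distribution image for a chosen m/z, can be sketched as below. The imzML parsing itself is omitted; the dictionary is a hypothetical stand-in for spectra already read from such a file:

```python
import numpy as np

def ion_image(pixels, mz_target, tol, shape):
    """Build a 2-D ion image by summing, at each pixel, the intensities of
    all peaks within mz_target +/- tol.

    pixels: {(x, y): (mz_array, intensity_array)} -- stand-in for parsed imzML data.
    """
    img = np.zeros(shape)
    for (x, y), (mzs, intensities) in pixels.items():
        mask = np.abs(mzs - mz_target) <= tol
        img[y, x] = intensities[mask].sum()
    return img

# Toy data set: two pixels on a 1x2 grid, each with a tiny spectrum.
pixels = {
    (0, 0): (np.array([100.00, 200.0]), np.array([5.0, 1.0])),
    (1, 0): (np.array([100.05, 300.0]), np.array([2.0, 4.0])),
}
img = ion_image(pixels, mz_target=100.0, tol=0.1, shape=(1, 2))
```

    Overlaying several such images (one per analyte) gives the direct analyte comparison the abstract describes.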

  4. Preliminary remote sensing assessment of the catastrophic avalanche in Langtang Valley induced by the 2015 Gorkha earthquake, Nepal

    NASA Astrophysics Data System (ADS)

    Nagai, Hiroto; Watanabe, Manabu; Tomii, Naoya

    2016-04-01

A major earthquake, measuring 7.8 Mw, occurred on April 25, 2015, in Lamjung district, central Nepal, causing more than 9,000 deaths and 23,000 injuries. During the event, termed the 2015 Gorkha earthquake, the most catastrophic collapse of the mountain side was reported in the Langtang Valley, located 60 km north of Kathmandu. In this collapse, a huge boulder-rich avalanche and a sudden air pressure wave traveled from a steep south-facing slope to the bottom of a U-shaped valley, resulting in more than 170 deaths. Accurate in-situ surveys are necessary to investigate such events, and to find out ways to avoid similar catastrophic events in the future. Geospatial information obtained from multiple satellite observations is invaluable for such surveys in remote mountain regions. In this study, we (1) identify the collapsed sediment using synthetic aperture radar, (2) conduct detailed mapping using high-resolution optical imagery, and (3) estimate sediment volumes from digital surface models in order to quantify the immediate situation of the avalanched sediment. (1) Visual interpretation and coherence calculations using Phased Array type L-band Synthetic Aperture Radar-2 (PALSAR-2) images give a consistent area of sediment cover. Emergency observation was carried out the day after the earthquake, using the PALSAR-2 onboard the Advanced Land Observing Satellite-2 (ALOS-2, "DAICHI-2"). Visual interpretation of orthorectified backscatter amplitude images revealed completely altered surface features, over which the identifiable sediment cover extended for 0.73 km^2 (28°13'N, 85°30'E). Additionally, measuring the decrease in normalized coherence quantifies the similarity between the pre- and post-event surface features, after the removal of numerous noise patches by focal statistics. Calculations within the study area revealed high-value areas corresponding to the visually identified sediment area. 
Visual interpretation of the amplitude images and the coherence calculations thus produce similar extractions of collapse sediment. (2) Visual interpretation of high-resolution satellite imagery suggests multiple layers of sediment with different physical properties. A DigitalGlobe satellite, WorldView-3, observed the Langtang Valley on May 8, 2015, using a panchromatic sensor with a spatial resolution of 0.3 m. Identification and mapping of avalanche-induced surface features were performed manually. The surface features were classified into 15 segments on the basis of sediment features, including darkness, the dominance of scattering or flowing features, and the recognition of boulders. Together, these characteristics suggest various combinations of physical properties, such as viscosity, density, and ice and snow content. (3) Altitude differences between the pre- and post-quake digital surface models (DSM) suggest the deposition of 5.2×10^5 m^3 of sediment, mainly along the river bed. A 5 m-grid pre-event DSM was generated from PRISM stereo-pair images acquired on October 12, 2008. A 2 m-grid post-event DSM was generated from WorldView-3 images acquired on May 8, 2015. Comparing the two DSMs, a vertical difference of up to 22±13 m is observed, mainly along the river bed. Estimates of the total avalanched volume reach 5.2×10^5 m^3, with a possible range of 3.7×10^5 to 10.7×10^5 m^3.
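    The volume estimate in (3) amounts to summing positive elevation differences between co-registered pre- and post-event DSMs and multiplying by the grid-cell area. A minimal sketch, assuming both DSMs have already been resampled to a common grid (the study's DSMs were 5 m and 2 m, so resampling is required first):

```python
import numpy as np

def deposited_volume(dsm_pre, dsm_post, cell_size_m):
    """Estimate deposited sediment volume (m^3) from two co-registered,
    equally gridded DSMs: sum positive elevation gains times cell area."""
    dz = dsm_post - dsm_pre
    return np.sum(dz[dz > 0]) * cell_size_m ** 2

# Toy 3x3 grids with 2 m cells: 2 m of fill in one cell, 1 m in another.
pre = np.zeros((3, 3))
post = np.array([[2.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0],
                 [0.0, 0.0, 0.0]])
vol = deposited_volume(pre, post, cell_size_m=2.0)
```

    Propagating the ±13 m vertical uncertainty through the same sum is what produces the wide possible range quoted in the abstract.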

  5. A new standard of visual data representation for imaging mass spectrometry.

    PubMed

    O'Rourke, Matthew B; Padula, Matthew P

    2017-03-01

MALDI imaging MS (IMS) is principally used for cancer diagnostics. In our own experience with publishing IMS data, we have been requested to modify our protocols with respect to the areas of the tissue that are imaged in order to comply with the wider literature. In light of this, we have determined that current methodologies lack effective controls and can potentially introduce bias by only imaging specific areas of the targeted tissue. Experimental design: a previously imaged sample was selected and then cropped in different ways to show the potential effect of only imaging targeted areas. By using a model sample, we were able to show how selective imaging of samples can lead to misinterpretation of tissue features, and how changing the areas that are acquired, according to our new standard, introduces an effective internal control. Current IMS sampling convention relies on the assumption that sample preparation has been performed correctly. This prevents users from checking whether molecules have moved beyond the borders of the tissue due to delocalization, and consequently the products of improper sample preparation could be interpreted as biological features of critical importance when encountered in a visual diagnostic. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Image-based diagnostic aid for interstitial lung disease with secondary data integration

    NASA Astrophysics Data System (ADS)

    Depeursinge, Adrien; Müller, Henning; Hidki, Asmâa; Poletti, Pierre-Alexandre; Platon, Alexandra; Geissbuhler, Antoine

    2007-03-01

Interstitial lung diseases (ILDs) are a relatively heterogeneous group of around 150 illnesses with often very unspecific symptoms. The most complete imaging method for the characterisation of ILDs is high-resolution computed tomography (HRCT) of the chest, but a correct interpretation of these images is difficult even for specialists, as many of the diseases are rare and thus little experience exists. Moreover, interpreting HRCT images requires knowledge of the context defined by the clinical data of the studied case. A computerised diagnostic aid tool based on HRCT images with associated medical data to retrieve similar cases of ILDs from a dedicated database can provide quick and valuable information, for example for emergency radiologists. Experience from a pilot project highlighted the need for a detailed database containing high-quality annotations in addition to clinical data. The state of the art is reviewed to identify requirements for image-based diagnostic aid for interstitial lung disease with secondary data integration, and the data-acquisition steps are detailed. The most relevant clinical parameters are selected in collaboration with lung specialists, based on the current literature and on the knowledge bases of computer-based diagnostic decision support systems. In order to perform high-quality annotations of the interstitial lung tissue in the HRCT images, annotation software with its own file format was implemented for DICOM images. A multimedia database was implemented to store ILD cases with clinical data and annotated image series. Cases from the University & University Hospitals of Geneva (HUG) are retrospectively and prospectively collected to populate the database. Currently, 59 cases with certified diagnoses and their clinical parameters are stored in the database, as well as 254 image series, of which 26 have their regions of interest annotated. 
The available data was used to test primary visual features for the classification of lung tissue patterns. These features show good discriminative properties for the separation of five classes of visual observations.

  7. Utility of shallow-water ATRIS images in defining biogeologic processes and self-similarity in skeletal scleractinia, Florida reefs

    USGS Publications Warehouse

    Lidz, B.H.; Brock, J.C.; Nagle, D.B.

    2008-01-01

A recently developed remote-sensing instrument acquires high-quality digital photographs in shallow-marine settings within water depths of 15 m. The technology, known as the Along-Track Reef-Imaging System, provides remarkably clear, georeferenced imagery that allows visual interpretation of benthic class (substrates, organisms) for mapping coral reef habitats, as intended. Unforeseen, however, are functions new to the initial technologic purpose: interpretable evidence for real-time biogeologic processes and for perception of scaled-up skeletal self-similarity of scleractinian microstructure. Florida reef sea trials lacked the grid structure required to map contiguous habitat and submarine topography. Thus, only general observations could be made relative to times and sites of imagery. Degradation of corals was nearly universal; absence of reef fish was profound. However, ~1% of more than 23,600 sea-trial images examined provided visual evidence for local environs and processes. Clarity in many images was so exceptional that small tracks left by organisms traversing fine-grained carbonate sand were visible. Other images revealed a compelling sense, not yet fully understood, of the microscopic wall structure characteristic of scleractinian corals. Conclusions drawn from classifiable images are that demersal marine animals, where imaged, are oblivious to the equipment and that the technology has strong capabilities beyond mapping habitat. 
Imagery acquired along predetermined transects that cross a variety of geomorphic features within depth limits will (1) facilitate construction of accurate contour maps of habitat and bathymetry without need for ground-truthing, (2) contain a strong geologic component of interpreted real-time processes as they relate to imaged topography and regional geomorphology, and (3) allow cost-effective monitoring of regional- and local-scale changes in an ecosystem by use of existing-image global-positioning system coordinates to re-image areas. Details revealed in the modern setting have taphonomic implications for what is often found in the geologic record.

  8. Sensing Super-position: Visual Instrument Sensor Replacement

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Schipper, John F.

    2006-01-01

The coming decade of fast, cheap and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome limitations of the human senses. This project addresses the technical feasibility of augmenting human vision through Sensing Super-position using a Visual Instrument Sensory Organ Replacement (VISOR). The current implementation of the VISOR device translates visual and other passive or active sensory instruments into sounds, which become relevant when the visual resolution is insufficient for very difficult and particular sensing tasks. A successful Sensing Super-position system meets many human and pilot-vehicle system requirements. The system can be further developed into a cheap, portable, and low-power device, taking into account the limited capabilities of the human user as well as the typical characteristics of the user's dynamic environment. The system operates in real time, providing the desired information for the particular augmented sensing tasks. The Sensing Super-position device increases perceived image resolution by means of an auditory representation alongside the visual representation. Auditory mapping is performed to distribute an image in time. The three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g. histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns. 
The known capabilities of the human hearing system to learn and understand complicated auditory patterns provided the basic motivation for developing an image-to-sound mapping system.
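    The auditory mapping described, distributing an image in time column by column with row position mapped to frequency and brightness to amplitude, can be sketched as follows. The sample rate, column duration, and frequency range are illustrative choices, not the VISOR device's actual parameters:

```python
import numpy as np

def image_to_audio(image, fs=8000, col_duration=0.05, f_lo=200.0, f_hi=2000.0):
    """Scan image columns left to right: each row drives a sinusoid whose
    frequency rises with row height and whose amplitude is the pixel brightness."""
    n_rows, n_cols = image.shape
    freqs = np.linspace(f_hi, f_lo, n_rows)  # top row -> highest pitch
    n = int(fs * col_duration)               # samples per column
    t = np.arange(n) / fs
    chunks = []
    for c in range(n_cols):
        chunk = np.zeros(n)
        for amplitude, f in zip(image[:, c], freqs):
            chunk += amplitude * np.sin(2 * np.pi * f * t)
        chunks.append(chunk)
    return np.concatenate(chunks)

# A 2x2 image with one bright pixel in the top-left corner: the output is a
# short high-pitched tone followed by silence for the dark second column.
audio = image_to_audio(np.array([[1.0, 0.0], [0.0, 0.0]]))
```

    A real system would add the histogram normalization and multi-spectral processing steps the abstract mentions before this mapping.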

  9. Image understanding systems based on the unifying representation of perceptual and conceptual information and the solution of mid-level and high-level vision problems

    NASA Astrophysics Data System (ADS)

    Kuvychko, Igor

    2001-10-01

Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. A computer vision system based on such principles requires a unifying representation of perceptual and conceptual information. Computer simulation models are built on the basis of graphs/networks. The human brain's ability to emulate similar graph/network models has been identified, which marks a very important paradigm shift in our knowledge about the brain: from neural networks to "cortical software". Starting from the primary visual areas, the brain analyzes an image as a graph-type spatial structure. The primary areas provide active fusion of image features on a spatial grid-like structure whose nodes are cortical columns. The spatial combination of different neighboring features cannot be described as a statistical/integral characteristic of the analyzed region, but uniquely characterizes the region itself. Spatial logic and topology are naturally present in such structures. Mid-level vision processes such as clustering, perceptual grouping, multilevel hierarchical compression, and separation of figure from ground are special kinds of graph/network transformations. They convert the low-level image structure into a set of more abstract structures, which represent objects and the visual scene, making them easy to analyze with higher-level knowledge structures. Higher-level vision phenomena such as shape from shading and occlusion are the results of such analysis. This approach offers the opportunity not only to explain frequently unexplainable results in cognitive science, but also to create intelligent computer vision systems that simulate perceptual processes in both the "what" and "where" visual pathways. Such systems can open new horizons for the robotics and computer vision industries.

  10. Visualization rhetoric: framing effects in narrative visualization.

    PubMed

    Hullman, Jessica; Diakopoulos, Nicholas

    2011-12-01

Narrative visualizations combine conventions of communicative and exploratory information visualization to convey an intended story. We demonstrate visualization rhetoric as an analytical framework for understanding how design techniques that prioritize particular interpretations in visualizations that "tell a story" can significantly affect end-user interpretation. We draw a parallel between narrative visualization interpretation and evidence from framing studies in political messaging, decision-making, and literary studies. Devices for understanding the rhetorical nature of narrative information visualizations are presented, informed by the rigorous application of concepts from critical theory, semiotics, journalism, and political theory. We draw attention to how design tactics represent additions or omissions of information at various levels (the data, the visual representation, textual annotations, and interactivity) and how visualizations denote and connote phenomena with reference to unstated viewing conventions and codes. Classes of rhetorical techniques identified via a systematic analysis of recent narrative visualizations are presented and characterized according to their rhetorical contribution to the visualization. We describe how designers and researchers can benefit from the potentially positive aspects of visualization rhetoric in designing engaging, layered narrative visualizations, and how our framework can shed light on how a visualization design prioritizes specific interpretations. We identify areas where future inquiry into visualization rhetoric can improve understanding of visualization interpretation. © 2011 IEEE.

  11. Hazardous Continuation Backward in Time in Nonlinear Parabolic Equations, and an Experiment in Deblurring Nonlinearly Blurred Imagery

    PubMed Central

    Carasso, Alfred S

    2013-01-01

    Identifying sources of ground water pollution, and deblurring nanoscale imagery as well as astronomical galaxy images, are two important applications involving numerical computation of parabolic equations backward in time. Surprisingly, very little is known about backward continuation in nonlinear parabolic equations. In this paper, an iterative procedure originating in spectroscopy in the 1930s is adapted into a useful tool for solving a wide class of 2D nonlinear backward parabolic equations. In addition, previously unsuspected difficulties are uncovered that may preclude useful backward continuation in parabolic equations deviating too strongly from the linear, autonomous, self-adjoint, canonical model. This paper explores backward continuation in selected 2D nonlinear equations by creating fictitious blurred images, obtained by using several sharp images as initial data in these equations and capturing the corresponding solutions at some positive time T. Successful backward continuation from t = T to t = 0 would recover the original sharp image. Visual recognition provides meaningful evaluation of the degree of success or failure in the reconstructed solutions. Instructive examples are developed, illustrating the unexpected influence of certain types of nonlinearities. Visually and statistically indistinguishable blurred images are presented, with vastly different deblurring results. These examples indicate that how an image is nonlinearly blurred is critical, in addition to the amount of blur. The equations studied represent nonlinear generalizations of Brownian motion, and the blurred images may be interpreted as visually expressing the results of novel stochastic processes. PMID:26401430
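
    The hazard in the linear, canonical case can be made concrete with the textbook heat-equation computation (a standard illustration of ill-posedness, not taken from the paper, which treats the harder nonlinear case):

```latex
% 1D heat equation u_t = u_{xx} on (0,\pi) with u = 0 at the endpoints.
% Forward solution from initial data u(x,0) = \sum_n a_n \sin(nx):
u(x,t) = \sum_{n=1}^{\infty} a_n \, e^{-n^2 t} \sin(nx).
% Backward continuation from data g(x) = u(x,T) = \sum_n b_n \sin(nx)
% must invert the decay factor:
a_n = e^{n^2 T} \, b_n .
% A measurement error \varepsilon in the coefficient b_n therefore produces
% an error \varepsilon \, e^{n^2 T} in the recovered initial data: high
% frequencies are amplified catastrophically, which is why backward
% continuation is ill-posed even in this simplest linear model.
```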

  13. Using Scientific Visualizations to Enhance Scientific Thinking In K-12 Geoscience Education

    NASA Astrophysics Data System (ADS)

    Robeck, E.

    2016-12-01

    The same scientific visualizations, animations, and images that are powerful tools for geoscientists can serve an important role in K-12 geoscience education by encouraging students to communicate in ways that help them develop habits of thought that are similar to those used by scientists. Resources such as those created by NASA's Scientific Visualization Studio (SVS), which are intended to inform researchers and the public about NASA missions, can be used in classrooms to promote thoughtful, engaged learning. Instructional materials that make use of those visualizations have been developed and are being used in K-12 classrooms in ways that demonstrate the vitality of the geosciences. For example, the Center for Geoscience and Society at the American Geosciences Institute (AGI) helped to develop a publication that outlines an inquiry-based approach to introducing students to the interpretation of scientific visualizations, even when they have had little to no prior experience with such media. To facilitate these uses, the SVS team worked with Center staff and others to adapt the visualizations, primarily by removing most of the labels and annotations. Engaging with these visually compelling resources serves as an invitation for students to ask questions, interpret data, draw conclusions, and make use of other processes that are key components of scientific thought. This presentation will share specific resources for K-12 teaching (all of which are available online, from NASA, and/or from AGI), as well as the instructional principles that they incorporate.

  14. A review of computer aided interpretation technology for the evaluation of radiographs of aluminum welds

    NASA Technical Reports Server (NTRS)

    Lloyd, J. F., Sr.

    1987-01-01

    Industrial radiography is a well established, reliable means of providing nondestructive structural integrity information. The majority of industrial radiographs are interpreted by trained human eyes using transmitted light and various visual aids. Hundreds of miles of radiographic information are evaluated, documented and archived annually. In many instances, there are serious considerations in terms of interpreter fatigue, subjectivity and limited archival space. Quite often it is difficult to quickly retrieve radiographic information for further analysis or investigation. Methods of improving the quality and efficiency of the radiographic process are being explored, developed and incorporated whenever feasible. High resolution cameras, digital image processing, and mass digital data storage offer interesting possibilities for improving the industrial radiographic process. A review is presented of computer aided radiographic interpretation technology in terms of how it could be used to enhance the radiographic interpretation process in evaluating radiographs of aluminum welds.

  15. The Influence of Visual and Spatial Reasoning in Interpreting Simulated 3D Worlds.

    ERIC Educational Resources Information Center

    Lowrie, Tom

    2002-01-01

    Explores ways in which 6-year-old children make sense of screen-based images on the computer. Uses both static and relatively dynamic software programs in the investigation. Suggests that young children should be exposed to activities that establish explicit links between 2D and 3D objects away from the computer before attempting difficult links…

  16. Visualizing Airborne and Satellite Imagery

    NASA Technical Reports Server (NTRS)

    Bierwirth, Victoria A.

    2011-01-01

    Remote sensing provides information about Earth that helps us better understand its processes and monitor its resources. The Cloud Absorption Radiometer (CAR) is one remote sensing instrument dedicated to collecting data on anthropogenic influences on Earth and to assisting scientists in understanding land-surface and atmospheric interactions. Landsat is a satellite program dedicated to collecting repetitive coverage of the continental Earth surfaces in seven regions of the electromagnetic spectrum. Combining these airborne and satellite remote sensing instruments provides a detailed and comprehensive data collection that can improve predictions of future changes. This project acquired, interpreted, and created composite images from satellite data acquired from the Landsat 4-5 Thematic Mapper (TM) and Landsat 7 Enhanced Thematic Mapper Plus (ETM+). Landsat images were processed for areas covered by CAR during the Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS), Cloud and Land Surface Interaction Campaign (CLASIC), Intercontinental Chemical Transport Experiment-Phase B (INTEX-B), and Southern African Regional Science Initiative (SAFARI) 2000 missions. The acquisition of Landsat data will provide supplemental information to assist in visualizing and interpreting airborne and satellite imagery.

  17. Image-Processing Techniques for the Creation of Presentation-Quality Astronomical Images

    NASA Astrophysics Data System (ADS)

    Rector, Travis A.; Levay, Zoltan G.; Frattare, Lisa M.; English, Jayanne; Pu'uohau-Pummill, Kirk

    2007-02-01

    The quality of modern astronomical data and the agility of current image-processing software enable the visualization of data in a way that exceeds the traditional definition of an astronomical image. Two developments in particular have led to a fundamental change in how astronomical images can be assembled. First, the availability of high-quality multiwavelength and narrowband data allows for images that do not correspond to the wavelength sensitivity of the human eye, thereby introducing ambiguity in the usage and interpretation of color. Second, many image-processing software packages now use a layering metaphor that allows any number of astronomical data sets to be combined into a color image. With this technique, images with as many as eight data sets have been produced. Each data set is intensity-scaled and colorized independently, creating an immense parameter space that can be used to assemble the image. Since such images are intended for data visualization, scaling and color schemes must be chosen that best illustrate the science. A practical guide is presented on how to use the layering metaphor to generate publication-ready astronomical images from as many data sets as desired. A methodology is also given on how to use intensity scaling, color, and composition to create contrasts in an image that highlight the scientific detail. Examples of image creation are discussed.
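
    The layering metaphor lends itself to a short numerical sketch. The asinh stretch, the synthetic Gaussian "datasets", and the band-to-tint assignments below are illustrative assumptions, not choices taken from the article:

```python
import numpy as np

def stretch(data, scale=10.0):
    """Normalize to [0, 1] and apply an asinh intensity stretch
    (one common scaling choice for astronomical data)."""
    d = np.asarray(data, dtype=float)
    lo, hi = d.min(), d.max()
    d = (d - lo) / (hi - lo) if hi > lo else np.zeros_like(d)
    return np.arcsinh(scale * d) / np.arcsinh(scale)

def compose(layers):
    """Sum independently scaled and colorized layers into one RGB image.

    `layers` is a list of (2D array, (r, g, b) tint) pairs -- one per data
    set, mirroring the layering metaphor: each data set is intensity-scaled
    and colorized on its own, then the layers are combined and clipped.
    """
    height, width = layers[0][0].shape
    rgb = np.zeros((height, width, 3))
    for data, tint in layers:
        rgb += stretch(data)[:, :, None] * np.asarray(tint, dtype=float)
    return np.clip(rgb, 0.0, 1.0)

# Three synthetic Gaussian blobs standing in for narrowband exposures.
y, x = np.mgrid[0:64, 0:64]
ha   = np.exp(-((x - 20) ** 2 + (y - 20) ** 2) / 200.0)  # "H-alpha" -> red
oiii = np.exp(-((x - 40) ** 2 + (y - 40) ** 2) / 200.0)  # "[O III]" -> teal
sii  = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 400.0)  # "[S II]"  -> gold

image = compose([(ha,   (1.0, 0.1, 0.1)),
                 (oiii, (0.0, 0.8, 0.8)),
                 (sii,  (0.9, 0.7, 0.1))])
```

    Because each layer carries its own scaling and tint, adding a fourth or an eighth data set is just another entry in the list, which is the practical appeal of the metaphor.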

  18. [Three-dimensional reconstruction of functional brain images].

    PubMed

    Inoue, M; Shoji, K; Kojima, H; Hirano, S; Naito, Y; Honjo, I

    1999-08-01

    We consider PET (positron emission tomography) measurement with SPM (Statistical Parametric Mapping) analysis to be one of the most useful methods of identifying activated areas of the brain involved in language processing. SPM is an effective analytical method that detects markedly activated areas over the whole brain. However, conventional presentations of these functional brain images, such as horizontal slices, three-directional projections, or brain surface coloring, make it difficult to understand and interpret the positional relationships among various brain areas. Therefore, we developed three-dimensionally reconstructed versions of these functional brain images to improve their interpretation. The subjects were 12 normal volunteers. After PET images recorded during daily dialog listening were analyzed by SPM, the following three types of images were constructed: 1) routine SPM images, 2) three-dimensional static images, and 3) three-dimensional dynamic images. Both the three-dimensional static and dynamic images were created with the volume rendering method of VTK (The Visualization Toolkit). Since the functional brain images did not include the original brain anatomy, we synthesized SPM and MRI brain images with custom C++ programs. The three-dimensional dynamic images were made by sequencing static images with available software. Both image types were produced on a personal computer. Our newly created images showed clearer positional relationships among activated brain areas than the conventional method. To date, functional brain images have been employed in fields such as neurology and neurosurgery; however, they may also be useful in otorhinolaryngology, to assess hearing and speech. Exact three-dimensional images based on functional brain images are important for exact and intuitive interpretation, and may lead to new developments in brain science.
Currently, the surface model is the most common method of three-dimensional display. However, the volume rendering method may be more effective for imaging regions such as the brain.

  19. Remote sensing data applied to the evaluation of soil erosion caused by land-use. Ribeirao Anhumas Basin Area: A case study. [Brazil

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Dosanjosferreirapinto, S.; Kux, H. J. H.

    1980-01-01

    Formerly covered by tropical forest, the study area was deforested in the early 1940s for coffee plantations and cattle raising, which caused intense gully erosion problems. To develop a method for analyzing the relationship between land use and soil erosion, visual interpretations of aerial photographs (scale 1:25,000) and MSS-LANDSAT imagery (scale 1:250,000), as well as automatic interpretation of computer compatible tapes by the IMAGE-100 system, were carried out. From visual interpretation the following data were obtained: land use and cover types, slope classes, ravine frequency, and a texture sketch map. During field work, soil samples were collected for texture and X-ray analysis. The texture sketch map indicates that areas with higher slope angles have a higher susceptibility to the development of gullies. Also, the overgrazing of pastureland, together with the very friable lithologies (mainly sandstone) occurring in the area, seem to be the main factors influencing the catastrophic extension of ravines in the study site.

  20. NASA/NOAA: Earth Science Electronic Theater 1999. Earth Science Observations, Analysis and Visualization: Roots in the 60s - Vision for the Next Millennium

    NASA Technical Reports Server (NTRS)

    Hasler, A. Fritz

    1999-01-01

    The Etheater presents visualizations which span the period from the original Suomi/Hasler animations of the first ATS-1 GEO weather satellite images in 1966 to the latest 1999 NASA Earth Science Vision for the next 25 years. Hot off the SGI-Onyx graphics supercomputer are NASA's visualizations of Hurricanes Mitch, Georges, Fran, and Linda. These storms have recently been featured on the covers of National Geographic, Time, Newsweek, and Popular Science. Highlights will be shown from the NASA hurricane visualization resource video tape that has been used repeatedly this season on national and international network TV. Results will be presented from a new paper on automatic wind measurements in Hurricane Luis from 1-min GOES images that appeared in the November BAMS. The visualizations are produced by the NASA Goddard Visualization & Analysis Laboratory and Scientific Visualization Studio, as well as other Goddard and NASA groups, using NASA, NOAA, ESA, and NASDA Earth science datasets. Visualizations will be shown from the Earth Science Etheater 1999 recently presented in Tokyo, Paris, Munich, Sydney, Melbourne, Honolulu, Washington, New York, and Dallas. The presentation Jan 11-14 at the AMS meeting in Dallas used a 4-CPU SGI/CRAY Onyx Infinite Reality super graphics workstation with 8 GB RAM and a terabyte disk at 3840 x 1024 resolution, with triple synchronized BarcoReality 9200 projectors on a 60-ft-wide screen. Visualizations will also be featured from the new Earth Today Exhibit, which was opened by Vice President Gore on July 2, 1998 at the Smithsonian Air & Space Museum in Washington, as well as those presented for possible use at the American Museum of Natural History (NYC), Disney EPCOT, and other venues. New methods are demonstrated for visualizing, interpreting, comparing, organizing, and analyzing immense HyperImage remote sensing datasets and three-dimensional numerical model results.
We call the data from many new Earth sensing satellites HyperImage datasets, because they have such high resolution in the spectral, temporal, spatial, and dynamic range domains. The traditional numerical spreadsheet paradigm has been extended to develop a scientific visualization approach for processing HyperImage datasets and 3D model results interactively. The advantages of extending the powerful spreadsheet style of computation to multiple sets of images and organizing image processing were demonstrated using the Distributed Image SpreadSheet (DISS). The DISS is being used as a high-performance Next Generation Internet (NGI) testbed for visual analysis of: 1) El Nino SSTs and NDVI response; 2) the latest GOES 10 5-min rapid scans, a 26-day, 5000-frame movie of March and April 1998 weather and tornadic storms; 3) TRMM rainfall and lightning; 4) GOES 9 satellite images/winds and NOAA aircraft radar of Hurricane Luis; 5) lightning detector data merged with GOES image sequences; 6) Japanese GMS, TRMM, and ADEOS data; 7) Chinese FY2 data; 8) Meteosat and ERS/ATSR data; and 9) synchronized manipulation of multiple 3D numerical model views. The Image SpreadSheet has been highly successful in producing Earth science visualizations for public outreach.

  1. Tracking with the mind's eye

    NASA Technical Reports Server (NTRS)

    Krauzlis, R. J.; Stone, L. S.

    1999-01-01

    The two components of voluntary tracking eye-movements in primates, pursuit and saccades, are generally viewed as relatively independent oculomotor subsystems that move the eyes in different ways using independent visual information. Although saccades have long been known to be guided by visual processes related to perception and cognition, only recently have psychophysical and physiological studies provided compelling evidence that pursuit is also guided by such higher-order visual processes, rather than by the raw retinal stimulus. Pursuit and saccades also do not appear to be entirely independent anatomical systems, but involve overlapping neural mechanisms that might be important for coordinating these two types of eye movement during the tracking of a selected visual object. Given that the recovery of objects from real-world images is inherently ambiguous, guiding both pursuit and saccades with perception could represent an explicit strategy for ensuring that these two motor actions are driven by a single visual interpretation.

  2. PACS-based interface for 3D anatomical structure visualization and surgical planning

    NASA Astrophysics Data System (ADS)

    Koehl, Christophe; Soler, Luc; Marescaux, Jacques

    2002-05-01

    The interpretation of radiological images is routine, but it remains a rather difficult task for physicians. It requires complex mental processes that permit translation from 2D slices into 3D localization and volume determination of visible diseases. Easier and more extensive visualization and exploitation of medical images can be reached through computer-based systems that provide real help from patient admission to post-operative follow-up. To this end, we have developed a 3D visualization interface linked to a PACS database that allows manipulation of, and interaction with, virtual organs delineated from CT scans or MRI. This software provides 3D real-time surface rendering of anatomical structures, accurate evaluation of volumes and distances, and improvement of radiological image analysis and exam annotation through a negatoscope tool. It also provides a tool for surgical planning, allowing the positioning of an interactive laparoscopic instrument and organ resection. The software system could revolutionize the field of computerized imaging technology. Indeed, it provides a handy and portable tool for pre-operative and intra-operative analysis of anatomy and pathology in various medical fields. This constitutes the first step in the future development of augmented reality and surgical simulation systems.

  3. GEOBIA For Land Use Mapping Using Worldview2 Image In Bengkak Village Coastal, Banyuwangi Regency, East Java

    NASA Astrophysics Data System (ADS)

    Alrassi, Fitzastri; Salim, Emil; Nina, Anastasia; Alwi, Luthfi; Danoedoro, Projo; Kamal, Muhammad

    2016-11-01

    The east coast of Banyuwangi regency has a diverse variety of land uses, such as ponds, mangroves, agricultural fields, and settlements. WorldView-2 provides high-spatial-resolution multispectral imagery that can display detailed land use information. The Geographic Object-Based Image Analysis (GEOBIA) classification technique uses object segments as the smallest unit of analysis. The segmentation and classification process is based not only on the spectral values of the image but also on other elements of image interpretation. This presents both opportunities and challenges for GEOBIA in the mapping and monitoring of land use. This research aims to assess the GEOBIA classification method for generating a land use classification of the coastal areas of Banyuwangi. The result of this study is a land use classification map produced by GEOBIA. We verified the accuracy of the resulting map by comparing it with the result of visual interpretation of the image, validated through field surveys. Land use on most of the east coast of Banyuwangi regency is dominated by mangroves, agricultural fields, mixed farms, settlements, and ponds.

  4. Radiology image perception and observer performance: How does expertise and clinical information alter interpretation? Stroke detection explored through eye-tracking

    NASA Astrophysics Data System (ADS)

    Cooper, Lindsey; Gale, Alastair; Darker, Iain; Toms, Andoni; Saada, Janak

    2009-02-01

    Historically, radiology research has been dominated by chest and breast screening. Few studies have examined complex interpretative tasks such as the reading of multidimensional brain CT or MRI scans. Additionally, no studies at the time of writing have explored the interpretation of stroke images, from novices through to experienced practitioners, using eye movement analysis. Finally, there appears to be a lack of evidence on the clinical effects of radiology reports and their influence on image appraisal and clinical diagnosis. A computer-based eye-tracking study was designed to assess diagnostic accuracy and interpretation in stroke CT and MR imagery. Eight predetermined clinical cases, five images per case, were presented to participants (novices, trainees, and radiologists; n=8). The presence or absence of abnormalities was rated on a five-point Likert scale and their locations reported. Half of the cases were accompanied by clinical information and half were not, to assess the impact of information on observer performance. Results highlight differences in visual search patterns amongst novice, trainee, and expert observers; the most marked differences occurred between novice readers and experts. Experts spent more time in challenging areas of interest (AOIs) than novices and trainees, and were more confident unless a lesion was large and obvious. The time to first AOI fixation differed by size, shape, and clarity of lesion. 'Time to lesion' dropped significantly when recognition appeared to occur between slices. The influence of clinical information was minimal.

  5. Estimation of the sugar cane cultivated area from LANDSAT images using the two phase sampling method

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Cappelletti, C. A.; Mendonca, F. J.; Lee, D. C. L.; Shimabukuro, Y. E.

    1982-01-01

    A two-phase sampling method and the optimal sampling segment dimensions for estimating sugar cane cultivated area were developed. This technique employs visual interpretations of LANDSAT images and panchromatic aerial photographs, the latter considered the ground truth. The estimates, as a mean value of 100 simulated samples, represent 99.3% of the true value with a CV of approximately 1%; the relative efficiency of the two-phase design was 157% compared with a one-phase aerial photograph sample.
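
    The two-phase design can be sketched with a generic regression estimator for double sampling; the notation and the toy numbers below are illustrative assumptions, not the authors' data or exact procedure:

```python
def two_phase_regression_estimate(x_phase1, x_phase2, y_phase2):
    """Regression estimator of the mean of y under two-phase (double) sampling.

    x_phase1: cheap auxiliary values (e.g. LANDSAT visual interpretation)
              for the large first-phase sample of segments.
    x_phase2: the same auxiliary values for the second-phase subsample.
    y_phase2: "ground truth" values (e.g. aerial-photograph interpretation)
              for the second-phase subsample.
    """
    n1, n2 = len(x_phase1), len(x_phase2)
    xbar1 = sum(x_phase1) / n1
    xbar2 = sum(x_phase2) / n2
    ybar2 = sum(y_phase2) / n2
    # Least-squares slope of y on x within the second-phase subsample.
    sxy = sum((x - xbar2) * (y - ybar2) for x, y in zip(x_phase2, y_phase2))
    sxx = sum((x - xbar2) ** 2 for x in x_phase2)
    slope = sxy / sxx
    # Adjust the subsample mean toward the more precise phase-1 mean of x.
    return ybar2 + slope * (xbar1 - xbar2)

# Toy example: 10 segments interpreted on LANDSAT, 3 revisited with photos.
landsat = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
photo_x = [2.0, 4.0, 6.0]
photo_y = [4.0, 8.0, 12.0]   # here y is exactly 2x by construction
estimate = two_phase_regression_estimate(landsat, photo_x, photo_y)
# estimate = 2 * mean(landsat) = 11.0, since the regression fit is exact
```

    The gain in efficiency reported in the abstract comes from exactly this adjustment: the cheap phase-1 sample pins down the mean of the auxiliary variable, and the small ground-truth subsample only has to estimate the regression relationship.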

  6. Phase contrast imaging of buccal mucosa tissues-Feasibility study

    NASA Astrophysics Data System (ADS)

    Fatima, A.; Tripathi, S.; Shripathi, T.; Kulkarni, V. K.; Banda, N. R.; Agrawal, A. K.; Sarkar, P. S.; Kashyap, Y.; Sinha, A.

    2015-06-01

    The Phase Contrast Imaging (PCI) technique has been used to interpret physical parameters obtained from an image of normal buccal mucosa tissue extracted from the cheek of a patient. The advantages of this method over conventional imaging techniques are discussed. PCI uses the X-ray phase shift at edges differentiated by very minute density differences, and the edge-enhanced, high-contrast images reveal details of soft tissues. The contrast in the images is related to changes in the X-ray refractive index of the tissues, resulting in higher clarity compared with conventional absorption-based X-ray imaging. The results show that this type of imaging is better able to visualize microstructures of biological soft tissues with good contrast, which can lead to the diagnosis of lesions at an early stage of disease.
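
    The contrast mechanism alluded to above follows from the standard complex refractive index for hard X-rays; the relations below are textbook physics, not values from this study:

```latex
% Complex refractive index of tissue for X-rays:
n = 1 - \delta + i\beta ,
% where \delta (the refractive decrement) controls the phase shift and
% \beta controls absorption. After traversing the tissue along z, the
% accumulated phase is
\phi(x, y) = -\frac{2\pi}{\lambda} \int \delta(x, y, z)\, \mathrm{d}z .
% For soft tissue at hard X-ray energies \delta exceeds \beta by roughly
% three orders of magnitude, so edges with minute density differences
% imprint measurable phase gradients even when absorption contrast is
% negligible; this is the basis of the edge enhancement described above.
```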

  7. Remote Sensing as a First Step in Geothermal Exploration in the Xilingol Volcanic Field in NE China

    NASA Astrophysics Data System (ADS)

    Peng, F.; Huang, S.; Xiong, Y.

    2013-12-01

    Geothermal energy is a renewable and low-carbon energy source independent of climate change. It is most abundant in Cenozoic volcanic areas, where high temperatures can be reached at relatively shallow depths. Geological structures play an important role in the transfer and storage of geothermal energy. Like other geological resources, geothermal resource prospecting and exploration require a good understanding of the host media. Remote sensing (RS) has the advantages of high spatial and temporal resolution and broad spatial coverage over conventional geological and geophysical prospecting techniques, while geographical information systems (GIS) offer intuitive, flexible, and convenient data handling. In this study, RS and GIS techniques are used to prospect the geothermal energy potential of Xilingol, a Cenozoic volcanic area in eastern Inner Mongolia, NE China. Landsat TM/ETM+ multi-temporal images taken under clear-sky conditions, digital elevation model (DEM) data, and other auxiliary data, including geological maps at 1:2,500,000 and 1:200,000 scales, are used in this study. The land surface temperature (LST) of the study area is retrieved from the Landsat images with a single-channel algorithm. Prior to the LST retrieval, the imagery data are preprocessed to eliminate abnormal values by reference to the normalized difference vegetation index (NDVI) and the modified normalized difference water index (MNDWI) on the ENVI platform developed by ITT Visual Information Solutions. Linear and circular geological structures are then inferred through visual interpretation of the LST maps, with reference to the existing geological maps, in conjunction with automatically extracted features such as lineament frequency, lineament density, and lineament intersections.
Several useful techniques, such as principal component analysis (PCA), image classification, vegetation suppression, multi-temporal comparative analysis, and 3D surface views based on DEM data, are used to further enable a better visual geologic interpretation of the Landsat imagery of Xilingol. Several major volcanism-controlling faults and Cenozoic volcanic eruption centers have been recognized from the linear and circular structures in the remote sensing images. The results show that the major faults in the study area are mainly NEE oriented. Hidden faults and deep structures are inferred from the distribution of linear and circular structures. In particular, the swarms of craters northwest of Dalinuoer Lake appear to be controlled by NEE-trending hidden basement fractures. Areas where NEE-trending linear structures intersect NW-trending structures and are overlapped by circular structures are the most favorable regions for geothermal resources. Seven areas have been preliminarily identified as targets for further geothermal prospecting based on visual interpretation of the geological structures. The study shows that RS and GIS have great application potential in geothermal exploration in volcanic areas and will promote the exploration of renewable energy resources of great potential.
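
    The two screening indices named above are simple band ratios. A minimal sketch, assuming the conventional Landsat TM/ETM+ band roles (red, near-infrared, green, shortwave-infrared reflectances) and a zero MNDWI water threshold; the numbers are illustrative only:

```python
def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - red) / (NIR + red)."""
    return (nir - red) / (nir + red)

def mndwi(green, swir):
    """Modified normalized difference water index: (green - SWIR) / (green + SWIR)."""
    return (green - swir) / (green + swir)

def looks_like_water(green, swir, threshold=0.0):
    """Simple per-pixel water screen: MNDWI above the threshold flags water,
    so such pixels can be excluded before land-surface-temperature retrieval."""
    return mndwi(green, swir) > threshold

# Example reflectances (illustrative numbers only):
veg_ndvi = ndvi(nir=0.5, red=0.1)              # healthy vegetation -> high NDVI
is_water = looks_like_water(green=0.3, swir=0.1)  # water is bright in green, dark in SWIR
```

    In a preprocessing chain like the one described, these per-pixel values are computed over whole bands so that vegetated and water pixels can be masked before fitting the single-channel LST algorithm.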

  8. Novel algorithm to identify and differentiate specific digital signature of breath sound in patients with diffuse parenchymal lung disease.

    PubMed

    Bhattacharyya, Parthasarathi; Mondal, Ashok; Dey, Rana; Saha, Dipanjan; Saha, Goutam

    2015-05-01

    Auscultation is an important part of the clinical examination in different lung diseases. Objective analysis of lung sounds based on their underlying characteristics, and subsequent automatic interpretation, may aid clinical practice. We collected breath sounds from 8 normal subjects and 20 diffuse parenchymal lung disease (DPLD) patients using a newly developed instrument, and then filtered off the heart sounds using a novel technology. The collected sounds were then analysed digitally for several characteristics, such as dynamical complexity, texture information, and regularity index, to find and define unique digital signatures that differentiate normality from abnormality. For convenience of testing, these characteristic signatures of normal and DPLD lung sounds were transformed into coloured visual representations. The predictive power of these images was validated by six independent observers, including three physicians. The proposed method gives a classification accuracy of 100% on composite features for both normal subjects and lung sound signals from DPLD patients. When tested by the independent observers on the visually transformed images, the positive predictive value for diagnosing normality and DPLD remained 100%. Lung sounds from normal and DPLD subjects could thus be differentiated and expressed according to their digital signatures. On visual transformation to coloured images, they retain 100% predictive power. This technique may assist physicians in diagnosing DPLD from visual images bearing the digital signature of the condition. © 2015 Asian Pacific Society of Respirology.
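
    The abstract does not specify its complexity measure, so as an illustration, sample entropy (a standard regularity/complexity measure for physiological signals) can stand in for the kind of "dynamical complexity" feature described; the sketch below implements that generic measure, not the authors' algorithm:

```python
import math

def sample_entropy(signal, m=2, r_factor=0.2):
    """Sample entropy: -ln(A/B), where B counts pairs of length-m templates
    that match within tolerance r, and A counts length-(m+1) matches.
    Lower values indicate a more regular signal."""
    n = len(signal)
    mean = sum(signal) / n
    sd = math.sqrt(sum((s - mean) ** 2 for s in signal) / n)
    r = r_factor * sd  # tolerance scaled to the signal's variability

    def count_matches(length):
        count = 0
        for i in range(n - length):
            for j in range(i + 1, n - length + 1):
                if all(abs(signal[i + k] - signal[j + k]) <= r
                       for k in range(length)):
                    count += 1
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -math.log(a / b) if a and b else float("inf")
```

    A perfectly periodic signal scores near zero, while an irregular one scores much higher, so a feature of this kind could plausibly separate regular breath sounds from more disordered ones, which is the spirit of the digital-signature approach described above.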

  9. Creating 3D visualizations of MRI data: A brief guide.

    PubMed

    Madan, Christopher R

    2015-01-01

    While magnetic resonance imaging (MRI) data are themselves 3D, it is often difficult to adequately present the results in papers and slides in 3D. As a result, findings of MRI studies are often presented in 2D instead. A solution is to create figures that include perspective and can convey 3D information; such figures can sometimes be produced by standard functional magnetic resonance imaging (fMRI) analysis packages and related specialty programs. However, many options cannot provide functionality such as visualizing activation clusters that are both cortical and subcortical (i.e., a 3D glass brain), producing several statistical maps with an identical perspective in the 3D rendering, or animated renderings. Here I detail an approach for creating 3D visualizations of MRI data that satisfies all of these criteria. Though a 3D 'glass brain' rendering can sometimes be difficult to interpret, it is useful in showing a more overall representation of the results, whereas traditional slices show a more local view. Combined, presenting both 2D and 3D representations of MR images can provide a more comprehensive view of a study's findings.

  10. How 3D immersive visualization is changing medical diagnostics

    NASA Astrophysics Data System (ADS)

    Koning, Anton H. J.

    2011-03-01

    Originally the only way to look inside the human body without opening it up was by means of two-dimensional (2D) images obtained using X-ray equipment. The fact that human anatomy is inherently three-dimensional leads to ambiguities in interpretation and problems of occlusion. Three-dimensional (3D) imaging modalities such as CT, MRI and 3D ultrasound remove these drawbacks and are now part of routine medical care. While most hospitals 'have gone digital', meaning that the images are no longer printed on film, the images are still being viewed on 2D screens. In this way, however, valuable depth information is lost, and some interactions become unnecessarily complex or even infeasible. Using a virtual reality (VR) system to present volumetric data means that depth information is presented to the viewer and 3D interaction is made possible. At the Erasmus MC we have developed V-Scope, an immersive volume visualization system for visualizing a variety of (bio-)medical volumetric datasets, ranging from 3D ultrasound, via CT and MRI, to confocal microscopy, OPT and 3D electron-microscopy data. In this talk we will address the advantages of such a system for both medical diagnostics and (bio)medical research.

  11. Creating 3D visualizations of MRI data: A brief guide

    PubMed Central

    Madan, Christopher R.

    2015-01-01

    While magnetic resonance imaging (MRI) data is itself 3D, it is often difficult to adequately present the results in papers and slides in 3D. As a result, findings of MRI studies are often presented in 2D instead. A solution is to create figures that include perspective and can convey 3D information; such figures can sometimes be produced by standard functional magnetic resonance imaging (fMRI) analysis packages and related specialty programs. However, many options cannot provide functionality such as visualizing activation clusters that are both cortical and subcortical (i.e., a 3D glass brain), producing several statistical maps with an identical perspective in the 3D rendering, or animating renderings. Here I detail an approach for creating 3D visualizations of MRI data that satisfies all of these criteria. Though 3D ‘glass brain’ renderings can sometimes be difficult to interpret, they are useful in showing a more overall representation of the results, whereas the traditional slices show a more local view. Combined, presenting both 2D and 3D representations of MR images can provide a more comprehensive view of the study’s findings. PMID:26594340

  12. Spatial thermal radiometry contribution to the Massif Armoricain and the Massif Central France litho-structural study

    NASA Technical Reports Server (NTRS)

    Scanvic, J. Y. (Principal Investigator)

    1980-01-01

    Thermal zones delimited on HCMM images by visual interpretation only were correlated with geological units; carbonate, granitic, and volcanic rocks were individualized. The rock signature is an evolving parameter, and some distinctions were made by combining day, night, and seasonal thermal image interpretations. This analysis also demonstrated that forest cover does not mask the thermal signature of the underlying rocks. Thermal lineaments are associated with known tectonics, but the observed thermal variations from day to night and from one season to another represent a promising concept to be studied in relation to neotectonics and hydrogeology. The thermal anomalies discovered represent a potential interest that remains to be evaluated. Significant results were obtained in the Mont Dore area, and additional geological targets were defined in the Paris Basin and the Montmarault granite.

  13. Large Oil Spill Classification Using SAR Images Based on Spatial Histogram

    NASA Astrophysics Data System (ADS)

    Schvartzman, I.; Havivi, S.; Maman, S.; Rotman, S. R.; Blumberg, D. G.

    2016-06-01

    Among the different types of marine pollution, oil spills are a major threat to sea ecosystems. Remote sensing is used in oil spill response. Synthetic Aperture Radar (SAR) is an active microwave sensor that operates under all weather conditions, provides information about surface roughness, and covers large areas at a high spatial resolution. SAR is widely used to identify and track pollutants in the sea, which may result from a secondary effect of a large natural disaster or from a man-made one. The detection of oil spills in SAR imagery relies on the decrease in backscattering from the sea surface due to the increased viscosity, resulting in a dark formation that contrasts with the brightness of the surrounding area. Most use of SAR images for oil spill detection is done by visual interpretation: trained interpreters scan the image and mark areas of low backscatter and asymmetrical shape. It is very difficult to apply this method over a wide area. In contrast to visual interpretation, automatic detection algorithms have been suggested; they are mainly based on scanning for dark formations, extracting features, and applying big-data analysis. We propose a new algorithm that applies a nonlinear spatial filter to detect dark formations and is not susceptible to noise such as internal waves or speckle. The advantages of this algorithm lie both in run time and in the results retrieved. The algorithm was tested in simulations as well as on COSMO-SkyMed images, detecting the Deepwater Horizon oil spill in the Gulf of Mexico (which occurred on 20/4/2010). The simulation results show that even in a noisy environment, the oil spill is detected. Applied to the Deepwater Horizon oil spill, the algorithm classified the spill better than an algorithm focusing on dark formations alone. Furthermore, the results were validated against National Oceanic and Atmospheric Administration (NOAA) data.
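    The exact form of the nonlinear spatial filter is not given in the abstract. The sketch below illustrates only the general idea of dark-formation detection with speckle suppression; the median-based background estimate, the threshold, and the neighbour test are all assumptions for illustration, not the authors' algorithm:

    ```python
    import numpy as np

    def detect_dark_formations(img, drop=0.6, min_neighbors=3):
        """Flag pixels well below the robust sea background, then
        suppress isolated detections (speckle-like noise).

        img  : 2-D array of SAR backscatter intensities
        drop : fraction of the background level below which a pixel is dark
        """
        background = np.median(img)          # robust sea-surface level
        dark = img < drop * background
        # Count dark 8-neighbours of every pixel using shifted copies.
        padded = np.pad(dark, 1)
        neighbors = np.zeros(img.shape, dtype=int)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dy == 0 and dx == 0:
                    continue
                neighbors += padded[1 + dy:1 + dy + img.shape[0],
                                    1 + dx:1 + dx + img.shape[1]]
        return dark & (neighbors >= min_neighbors)

    # Synthetic sea surface with a dark 'slick' in the centre.
    rng = np.random.default_rng(1)
    sea = rng.uniform(0.8, 1.2, size=(64, 64))
    sea[24:40, 24:40] *= 0.3
    mask = detect_dark_formations(sea)
    ```

    The neighbour test keeps contiguous dark formations while discarding single-pixel outliers, a crude stand-in for the noise robustness the paper claims for its filter.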

  14. Self-development of visual space perception by learning from the hand

    NASA Astrophysics Data System (ADS)

    Chung, Jae-Moon; Ohnishi, Noboru

    1998-10-01

    Animals are thought to develop the ability to interpret images captured on their retinas gradually from birth, without any external supervisor. We argue that this visual function is acquired together with the development of hand reaching and grasping operations, which are executed through active interaction with the environment. From the viewpoint that the hand teaches the eye, this paper shows how visual space perception develops in a simulated robot. The robot has a simplified human-like structure used for hand-eye coordination. The experimental results may help validate this account of how visual space perception develops in biological systems. In addition, the description suggests a way to self-calibrate the vision of an intelligent robot in a learn-by-doing manner, without external supervision.

  15. SimITK: visual programming of the ITK image-processing library within Simulink.

    PubMed

    Dickinson, Andrew W L; Abolmaesumi, Purang; Gobbi, David G; Mousavi, Parvin

    2014-04-01

    The Insight Segmentation and Registration Toolkit (ITK) is a software library used for image analysis, visualization, and image-guided surgery applications. ITK is a collection of C++ classes that poses the challenge of a steep learning curve should the user not have appropriate C++ programming experience. To remove the programming complexities and facilitate rapid prototyping, an implementation of ITK within a higher-level visual programming environment is presented: SimITK. ITK functionalities are automatically wrapped into "blocks" within Simulink, the visual programming environment of MATLAB, where these blocks can be connected to form workflows: visual schematics that closely represent the structure of a C++ program. The heavily templated C++ nature of ITK does not facilitate direct interaction between Simulink and ITK; an intermediary is required to convert respective data types and allow intercommunication. As such, a SimITK "Virtual Block" has been developed that serves as a wrapper around an ITK class and is capable of resolving the ITK data types to native Simulink data types. Part of the challenge surrounding this implementation involves automatically capturing and storing the pertinent class information that needs to be refined from an initial state prior to being reflected within the final block representation. The primary result from the SimITK wrapping procedure is multiple Simulink block libraries. From these libraries, blocks are selected and interconnected to demonstrate two examples: a 3D segmentation workflow and a 3D multimodal registration workflow. Compared to their pure-code equivalents, the workflows highlight ITK usability through an alternative visual interpretation of the code that abstracts away potentially confusing technicalities.

  16. The Brightness of Colour

    PubMed Central

    Corney, David; Haynes, John-Dylan; Rees, Geraint; Lotto, R. Beau

    2009-01-01

    Background The perception of brightness depends on spatial context: the same stimulus can appear light or dark depending on what surrounds it. A less well-known but equally important contextual phenomenon is that the colour of a stimulus can also alter its brightness. Specifically, stimuli that are more saturated (i.e. purer in colour) appear brighter than stimuli that are less saturated at the same luminance. Similarly, stimuli that are red or blue appear brighter than equiluminant yellow and green stimuli. This non-linear relationship between stimulus intensity and brightness, called the Helmholtz-Kohlrausch (HK) effect, was first described in the nineteenth century but has never been explained. Here, we take advantage of the relative simplicity of this ‘illusion’ to explain it and contextual effects more generally, by using a simple Bayesian ideal observer model of the human visual ecology. We also use fMRI brain scans to identify the neural correlates of brightness without changing the spatial context of the stimulus, which has complicated the interpretation of related fMRI studies. Results Rather than modelling human vision directly, we use a Bayesian ideal observer to model human visual ecology. We show that the HK effect is a result of encoding the non-linear statistical relationship between retinal images and natural scenes that would have been experienced by the human visual system in the past. We further show that the complexity of this relationship is due to the response functions of the cone photoreceptors, which themselves are thought to represent an efficient solution to encoding the statistics of images. Finally, we show that the locus of the response to the relationship between images and scenes lies in the primary visual cortex (V1), if not earlier in the visual system, since the brightness of colours (as opposed to their luminance) accords with activity in V1 as measured with fMRI. 
Conclusions The data suggest that perceptions of brightness represent a robust visual response to the likely sources of stimuli, as determined, in this instance, by the known statistical relationship between scenes and their retinal responses. While the responses of the early visual system (receptors in this case) may represent specifically the statistics of images, post-receptor responses are more likely to represent the statistical relationship between images and scenes. A corollary of this suggestion is that the visual cortex is adapted to relate the retinal image to behaviour given the statistics of its past interactions with the sources of retinal images: the visual cortex is adapted to the signals it receives from the eyes, and not directly to the world beyond. PMID:19333398

  17. Clinical applications of textural analysis in non-small cell lung cancer.

    PubMed

    Phillips, Iain; Ajaz, Mazhar; Ezhil, Veni; Prakash, Vineet; Alobaidli, Sheaka; McQuaid, Sarah J; South, Christopher; Scuffham, James; Nisbet, Andrew; Evans, Philip

    2018-01-01

    Lung cancer is the leading cause of cancer mortality worldwide. Treatment pathways include regular cross-sectional imaging, generating large data sets which present intriguing possibilities for exploitation beyond standard visual interpretation. This additional data mining has been termed "radiomics" and includes semantic and agnostic approaches. Textural analysis (TA) is an example of the latter, and uses a range of mathematically derived features to describe an image or region of an image. Often TA is used to describe a suspected or known tumour. TA is an attractive tool as large existing image sets can be submitted to diverse techniques for data processing, presentation, interpretation and hypothesis testing with annotated clinical outcomes. There is a growing anthology of published data using different TA techniques to differentiate between benign and malignant lung nodules, differentiate tissue subtypes of lung cancer, prognosticate and predict outcome and treatment response, as well as predict treatment side effects and potentially aid radiotherapy planning. The aim of this systematic review is to summarize the current published data and understand the potential future role of TA in managing lung cancer.

  18. Re-Visioning Disability and Dyslexia down the Camera Lens: Interpretations of Representations on UK University Websites and in a UK Government Guidance Paper

    ERIC Educational Resources Information Center

    Collinson, Craig; Dunne, Linda; Woolhouse, Clare

    2012-01-01

    The focus of this article is to consider visual portrayals and representations of disability. The images selected for analysis came from online university prospectuses as well as a governmental guidance framework on the tuition of dyslexic students. Greater understanding, human rights and cultural change have been characteristic of much UK…

  19. Interpreting the Images in a Picture Book: Students Make Connections to Themselves, Their Lives and Experiences

    ERIC Educational Resources Information Center

    Mantei, Jessica; Kervin, Lisa

    2014-01-01

    Picture books are an important and accessible form of visual art for children because they offer, among other things, opportunities for making connections to personal experiences and to the values and beliefs of families and communities. This paper reports on the use of a picture book to promote Year 4 students' making of text-to-self connections,…

  20. From Elite Traditions to Middle-Class Cultures: Images of Secondary Education in the Anniversary Books of a Finnish Girls' School, 1882-2007

    ERIC Educational Resources Information Center

    Nieminen, Marjo

    2016-01-01

    This article concentrates on visual sources relating to secondary education, and asks how a collection of photographs can be understood and interpreted as part of the institutional and collective memory of one Finnish girls' school. The photographs were published in the anniversary books of the school. They construct an entirety, where public…

  1. Implementation of High-resolution Manometry in the Clinical Practice of Speech Language Pathology

    PubMed Central

    Thibeault, Susan; McCulloch, Timothy M.

    2014-01-01

    Visual imaging modalities, videofluoroscopic swallow study (VFSS) and fiberoptic endoscopic evaluation of swallow, for assessment of oropharyngeal dysphagia have been part of the speech language pathologist’s (SLP’s) armamentarium for the diagnosis and treatment of dysphagia for decades. Recently, the addition of high-resolution manometry (HRM) has enabled the SLP to evaluate pharyngeal pressures and upper esophageal sphincter relaxation. Taken together, the use of visual imaging modalities with HRM can improve interpretation of swallowing physiology and facilitate more effective treatment planning. The goal of this article is to describe a clinical paradigm using HRM as an adjunct to VFSS, by the SLP, in the assessment of complex dysphagia. Moreover, in three cases described, the value of manometric measurements in elucidating swallowing imaging studies and documenting physiologic change in response to treatment is highlighted. As technology in this area is evolving, so will the clinical use of HRM by the SLP. Limitations of current HRM systems and applications are discussed. PMID:24233810

  2. AceTree: a tool for visual analysis of Caenorhabditis elegans embryogenesis

    PubMed Central

    Boyle, Thomas J; Bao, Zhirong; Murray, John I; Araya, Carlos L; Waterston, Robert H

    2006-01-01

    Background The invariant lineage of the nematode Caenorhabditis elegans has potential as a powerful tool for the description of mutant phenotypes and gene expression patterns. We previously described procedures for the imaging and automatic extraction of the cell lineage from C. elegans embryos. That method uses time-lapse confocal imaging of a strain expressing histone-GFP fusions and a software package, StarryNite, which processes the thousands of images and produces output files that describe the location and lineage relationship of each nucleus at each time point. Results We have developed a companion software package, AceTree, which links the images and the annotations using tree representations of the lineage. This facilitates curation and editing of the lineage. AceTree also contains powerful visualization and interpretive tools, such as space filling models and tree-based expression patterning, that can be used to extract biological significance from the data. Conclusion By pairing a fast lineaging program written in C with a user interface program written in Java we have produced a powerful software suite for exploring embryonic development. PMID:16740163

  3. AceTree: a tool for visual analysis of Caenorhabditis elegans embryogenesis.

    PubMed

    Boyle, Thomas J; Bao, Zhirong; Murray, John I; Araya, Carlos L; Waterston, Robert H

    2006-06-01

    The invariant lineage of the nematode Caenorhabditis elegans has potential as a powerful tool for the description of mutant phenotypes and gene expression patterns. We previously described procedures for the imaging and automatic extraction of the cell lineage from C. elegans embryos. That method uses time-lapse confocal imaging of a strain expressing histone-GFP fusions and a software package, StarryNite, which processes the thousands of images and produces output files that describe the location and lineage relationship of each nucleus at each time point. We have developed a companion software package, AceTree, which links the images and the annotations using tree representations of the lineage. This facilitates curation and editing of the lineage. AceTree also contains powerful visualization and interpretive tools, such as space filling models and tree-based expression patterning, that can be used to extract biological significance from the data. By pairing a fast lineaging program written in C with a user interface program written in Java we have produced a powerful software suite for exploring embryonic development.

  4. Localization of Diagnostically Relevant Regions of Interest in Whole Slide Images: a Comparative Study.

    PubMed

    Mercan, Ezgi; Aksoy, Selim; Shapiro, Linda G; Weaver, Donald L; Brunyé, Tad T; Elmore, Joann G

    2016-08-01

    Whole slide digital imaging technology enables researchers to study pathologists' interpretive behavior as they view digital slides and gain new understanding of the diagnostic medical decision-making process. In this study, we propose a simple yet important analysis to extract diagnostically relevant regions of interest (ROIs) from tracking records using only pathologists' actions as they viewed biopsy specimens in the whole slide digital imaging format (zooming, panning, and fixating). We use these extracted regions in a visual bag-of-words model based on color and texture features to predict diagnostically relevant ROIs on whole slide images. Using a logistic regression classifier in a cross-validation setting on 240 digital breast biopsy slides and viewport tracking logs of three expert pathologists, we produce probability maps that show 74 % overlap with the actual regions at which pathologists looked. We compare different bag-of-words models by changing dictionary size, visual word definition (patches vs. superpixels), and training data (automatically extracted ROIs vs. manually marked ROIs). This study is a first step in understanding the scanning behaviors of pathologists and the underlying reasons for diagnostic errors.
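    A hedged sketch of the general pipeline described above, reduced to toy scale: the colour-quantisation visual words, the patch size, and the plain gradient-descent logistic classifier are illustrative assumptions; the study's actual features and dictionary are not reproduced here.

    ```python
    import numpy as np

    def visual_word(patch, bins=4):
        """Map a small RGB patch to a discrete 'visual word' by
        quantising its mean colour into bins**3 cells."""
        mean = patch.reshape(-1, 3).mean(axis=0)            # mean R, G, B in [0, 1)
        idx = np.minimum((mean * bins).astype(int), bins - 1)
        return idx[0] * bins * bins + idx[1] * bins + idx[2]

    def bow_histogram(image, patch=8, bins=4):
        """Normalised bag-of-words histogram over non-overlapping patches."""
        h, w, _ = image.shape
        words = [visual_word(image[y:y + patch, x:x + patch], bins)
                 for y in range(0, h - patch + 1, patch)
                 for x in range(0, w - patch + 1, patch)]
        hist = np.bincount(words, minlength=bins ** 3).astype(float)
        return hist / hist.sum()

    def train_logreg(X, y, lr=1.0, steps=500):
        """Plain gradient-descent logistic regression (no regularisation)."""
        w, b = np.zeros(X.shape[1]), 0.0
        for _ in range(steps):
            p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
            w -= lr * X.T @ (p - y) / len(y)
            b -= lr * np.mean(p - y)
        return w, b

    # Toy data: 'ROI' images are predominantly pink, background mostly white.
    rng = np.random.default_rng(2)
    def make(color):
        img = np.clip(color + rng.normal(0, 0.05, size=(32, 32, 3)), 0, 0.999)
        return bow_histogram(img)

    X = np.array([make([0.9, 0.5, 0.6]) for _ in range(20)] +
                 [make([0.9, 0.9, 0.9]) for _ in range(20)])
    y = np.array([1.0] * 20 + [0.0] * 20)
    w, b = train_logreg(X, y)
    probs = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    ```

    In the study the positive examples would come from viewport-tracking ROIs rather than synthetic colours, and the predicted probabilities would be assembled into a probability map over the whole slide.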

  5. Optical coherence tomography in anterior segment imaging

    PubMed Central

    Kalev-Landoy, Maya; Day, Alexander C.; Cordeiro, M. Francesca; Migdal, Clive

    2008-01-01

    Purpose To evaluate the ability of optical coherence tomography (OCT), designed primarily to image the posterior segment, to visualize the anterior chamber angle (ACA) in patients with different angle configurations. Methods In a prospective observational study, the anterior segments of 26 eyes of 26 patients were imaged using the Zeiss Stratus OCT, model 3000. Imaging of the anterior segment was achieved by adjusting the focusing control on the Stratus OCT. A total of 16 patients had abnormal angle configurations including narrow or closed angles and plateau irides, and 10 had normal angle configurations as determined by prior full ophthalmic examination, including slit-lamp biomicroscopy and gonioscopy. Results In all cases, OCT provided high-resolution information regarding iris configuration. The ACA itself was clearly visualized in patients with narrow or closed angles, but not in patients with open angles. Conclusions Stratus OCT offers a non-contact, convenient and rapid method of assessing the configuration of the anterior chamber. Despite its limitations, it may be of help during the routine clinical assessment and treatment of patients with glaucoma, particularly when gonioscopy is not possible or difficult to interpret. PMID:17355288

  6. Enhancing radiological volumes with symbolic anatomy using image fusion and collaborative virtual reality.

    PubMed

    Silverstein, Jonathan C; Dech, Fred; Kouchoukos, Philip L

    2004-01-01

    Radiological volumes are typically reviewed by surgeons using cross-sections and iso-surface reconstructions. Applications that combine collaborative stereo volume visualization with symbolic anatomic information and data fusion would expand surgeons' capabilities in interpretation of data and in planning treatment. Such an application has not been seen clinically. We are developing methods to systematically combine symbolic anatomy (term hierarchies and iso-surface atlases) with patient data using data fusion. We describe our progress toward integrating these methods into our collaborative virtual reality application. The fully combined application will be a feature-rich stereo collaborative volume visualization environment for use by surgeons in which DICOM datasets will self-report underlying anatomy with visual feedback. Using hierarchical navigation of SNOMED-CT anatomic terms integrated with our existing Tele-immersive DICOM-based volumetric rendering application, we will display polygonal representations of anatomic systems on the fly from menus that query a database. The methods and tools involved in this application development are SNOMED-CT, DICOM, VISIBLE HUMAN, volumetric fusion and C++ on a Tele-immersive platform. This application will allow us to identify structures and display polygonal representations from atlas data overlaid with the volume rendering. First, atlas data is automatically translated, rotated, and scaled to the patient data during loading using a public domain volumetric fusion algorithm. This generates a modified symbolic representation of the underlying canonical anatomy. Then, through the use of collision detection or intersection testing of various transparent polygonal representations, the polygonal structures are highlighted into the volumetric representation while the SNOMED names are displayed. Thus, structural names and polygonal models are associated with the visualized DICOM data. 
This novel juxtaposition of information promises to expand surgeons' abilities to interpret images and plan treatment.

  7. Automated sub-5 nm image registration in integrated correlative fluorescence and electron microscopy using cathodoluminescence pointers

    NASA Astrophysics Data System (ADS)

    Haring, Martijn T.; Liv, Nalan; Zonnevylle, A. Christiaan; Narvaez, Angela C.; Voortman, Lenard M.; Kruit, Pieter; Hoogenboom, Jacob P.

    2017-03-01

    In the biological sciences, data from fluorescence and electron microscopy is correlated to allow fluorescence biomolecule identification within the cellular ultrastructure and/or ultrastructural analysis following live-cell imaging. High-accuracy (sub-100 nm) image overlay requires the addition of fiducial markers, which makes overlay accuracy dependent on the number of fiducials present in the region of interest. Here, we report an automated method for light-electron image overlay at high accuracy, i.e. below 5 nm. Our method relies on direct visualization of the electron beam position in the fluorescence detection channel using cathodoluminescence pointers. We show that image overlay using cathodoluminescence pointers corrects for image distortions, is independent of user interpretation, and does not require fiducials, allowing image correlation with molecular precision anywhere on a sample.
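    At its core, fiducial-free overlay of this kind reduces to fitting a transform between known electron-beam positions and their observed cathodoluminescence spots in the fluorescence channel. A minimal least-squares affine fit is sketched below; this is an illustration of the principle, not the authors' implementation, which also corrects image distortions:

    ```python
    import numpy as np

    def fit_affine(src, dst):
        """Least-squares affine transform mapping src points to dst points.

        src, dst : (N, 2) arrays of corresponding coordinates, N >= 3.
        Returns a 2x3 matrix A such that dst ~= [x, y, 1] @ A.T
        """
        n = len(src)
        homo = np.hstack([src, np.ones((n, 1))])        # (N, 3) homogeneous coords
        sol, *_ = np.linalg.lstsq(homo, dst, rcond=None)  # (3, 2) solution
        return sol.T

    # Synthetic pointer grid: a known rotation, scale, and shift.
    rng = np.random.default_rng(3)
    src = rng.uniform(0, 100, size=(12, 2))              # electron-beam positions
    theta = 0.1
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    dst = 1.05 * src @ R.T + np.array([4.0, -2.0])       # observed CL spots

    A = fit_affine(src, dst)
    mapped = np.hstack([src, np.ones((12, 1))]) @ A.T
    residual = np.max(np.abs(mapped - dst))              # ~0 for exact affine data
    ```

    Because the beam positions themselves act as the reference points, no fiducial markers are needed anywhere on the sample, which is the key advantage the paper reports.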

  8. Automated sub-5 nm image registration in integrated correlative fluorescence and electron microscopy using cathodoluminescence pointers.

    PubMed

    Haring, Martijn T; Liv, Nalan; Zonnevylle, A Christiaan; Narvaez, Angela C; Voortman, Lenard M; Kruit, Pieter; Hoogenboom, Jacob P

    2017-03-02

    In the biological sciences, data from fluorescence and electron microscopy is correlated to allow fluorescence biomolecule identification within the cellular ultrastructure and/or ultrastructural analysis following live-cell imaging. High-accuracy (sub-100 nm) image overlay requires the addition of fiducial markers, which makes overlay accuracy dependent on the number of fiducials present in the region of interest. Here, we report an automated method for light-electron image overlay at high accuracy, i.e. below 5 nm. Our method relies on direct visualization of the electron beam position in the fluorescence detection channel using cathodoluminescence pointers. We show that image overlay using cathodoluminescence pointers corrects for image distortions, is independent of user interpretation, and does not require fiducials, allowing image correlation with molecular precision anywhere on a sample.

  9. Automated sub-5 nm image registration in integrated correlative fluorescence and electron microscopy using cathodoluminescence pointers

    PubMed Central

    Haring, Martijn T.; Liv, Nalan; Zonnevylle, A. Christiaan; Narvaez, Angela C.; Voortman, Lenard M.; Kruit, Pieter; Hoogenboom, Jacob P.

    2017-01-01

    In the biological sciences, data from fluorescence and electron microscopy is correlated to allow fluorescence biomolecule identification within the cellular ultrastructure and/or ultrastructural analysis following live-cell imaging. High-accuracy (sub-100 nm) image overlay requires the addition of fiducial markers, which makes overlay accuracy dependent on the number of fiducials present in the region of interest. Here, we report an automated method for light-electron image overlay at high accuracy, i.e. below 5 nm. Our method relies on direct visualization of the electron beam position in the fluorescence detection channel using cathodoluminescence pointers. We show that image overlay using cathodoluminescence pointers corrects for image distortions, is independent of user interpretation, and does not require fiducials, allowing image correlation with molecular precision anywhere on a sample. PMID:28252673

  10. Development and Analysis of New 3D Tactile Materials for the Enhancement of STEM Education for the Blind and Visually Impaired

    NASA Astrophysics Data System (ADS)

    Gonzales, Ashleigh

    Blind and visually impaired individuals have historically demonstrated a low participation in the fields of science, engineering, mathematics, and technology (STEM). This low participation is reflected in both their education and career choices. Despite the establishment of the Americans with Disabilities Act (ADA) and the Individuals with Disabilities Education Act (IDEA), blind and visually impaired (BVI) students continue to academically fall below the level of their sighted peers in the areas of science and math. Although this deficit is created by many factors, this study focuses on the lack of adequate accessible image-based materials. Traditional methods for creating accessible image materials for the vision impaired have included detailed verbal descriptions accompanying an image or conversion into a simplified tactile graphic. It is very common that no substitute materials will be provided to students within STEM courses because they are image-rich disciplines and often include a large number of images, diagrams and charts. Additionally, images that are translated into text or simplified into basic line drawings are frequently inadequate because they rely on the interpretations of resource personnel who do not have expertise in STEM. Within this study, a method to create a new type of tactile 3D image was developed using High Density Polyethylene (HDPE) and Computer Numeric Control (CNC) milling. These tactile image boards preserve high levels of detail when compared to the original print image. To determine the discernibility and effectiveness of tactile images, these customizable boards were tested in various university classrooms as well as in participation studies which included BVI and sighted students. Results from these studies indicate that tactile images are discernable and were found to improve performance in lab exercises as much as 60% for those with visual impairment. 
Incorporating tactile HDPE 3D images into a classroom setting was shown to increase the interest, participation and performance of BVI students suggesting that this type of 3D tactile image should be incorporated into STEM classes to increase the participation of these students and improve the level of training they receive in science and math.

  11. Transferring cognitive tasks between brain imaging modalities: implications for task design and results interpretation in FMRI studies.

    PubMed

    Warbrick, Tracy; Reske, Martina; Shah, N Jon

    2014-09-22

    As cognitive neuroscience methods develop, established experimental tasks are used with emerging brain imaging modalities. Here we consider transferring a paradigm with a long history of behavioral and electroencephalography (EEG) experiments, the visual oddball task, to a functional magnetic resonance imaging (fMRI) experiment. The aims of this paper are to briefly describe fMRI and when its use is appropriate in cognitive neuroscience; to illustrate how task design can influence the results of an fMRI experiment, particularly when that task is borrowed from another imaging modality; and to explain the practical aspects of performing an fMRI experiment. It is demonstrated that manipulating the task demands in the visual oddball task results in different patterns of blood oxygen level dependent (BOLD) activation. The nature of the fMRI BOLD measure means that many brain regions are found to be active in a particular task. Determining the functions of these areas of activation depends heavily on task design and analysis. The complex nature of many fMRI tasks means that the details of the task and its requirements need careful consideration when interpreting data. The data show that this is particularly important in tasks relying on a motor response as well as cognitive elements, and that covert and overt responses should be considered where possible. Furthermore, the data show that transferring an EEG paradigm to an fMRI experiment needs careful consideration: it cannot be assumed that the same paradigm will work equally well across imaging modalities. It is therefore recommended that the design of an fMRI study be pilot tested behaviorally to establish the effects of interest, and then pilot tested in the fMRI environment to ensure appropriate design, implementation and analysis for the effects of interest.

  12. Physics of fractional imaging in biomedicine.

    PubMed

    Sohail, Ayesha; Bég, O A; Li, Zhiwu; Celik, Sebahattin

    2018-03-12

    The mathematics of imaging is a growing field of research that is evolving in parallel with the field of imaging itself. Imaging, a sub-field of biomedical engineering, considers novel approaches to visualizing biological tissues with the general goal of improving health. Medical imaging research provides improved diagnostic tools in clinical settings and supports the development of drugs and other therapies. Data acquisition and diagnostic interpretation with minimum error are the key technical aspects of medical imaging, and image quality and resolution are critical in portraying the internal aspects of a patient's body. Although there are several user-friendly resources for processing image features, such as enhancement, colour manipulation and compression, the development of new processing methods is still worth the effort. In this article we aim to present the role of fractional calculus in imaging with the aid of practical examples. Copyright © 2018 Elsevier Ltd. All rights reserved.
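    As a concrete illustration of how fractional calculus enters image processing, the sketch below implements a Grünwald-Letnikov fractional derivative along one dimension; the function names, the unit step size, and the truncation length are illustrative choices, not taken from the article.

```python
import numpy as np

def gl_weights(alpha, n_terms):
    """Grunwald-Letnikov weights w_k = (-1)^k * C(alpha, k), via the
    standard recurrence w_0 = 1, w_k = w_{k-1} * (k - 1 - alpha) / k."""
    w = np.empty(n_terms)
    w[0] = 1.0
    for k in range(1, n_terms):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return w

def gl_derivative(signal, alpha, n_terms=16):
    """Approximate order-alpha fractional derivative of a 1-D signal (unit step).
    For alpha = 1 this reduces to a backward first difference."""
    w = gl_weights(alpha, n_terms)
    # Causal convolution: out[n] = sum_k w[k] * signal[n - k]
    return np.convolve(signal, w)[:len(signal)]
```

    Applied row-wise and column-wise with 0 < alpha < 1, such an operator gives an edge response intermediate between the original image and its gradient, which is the kind of enhancement fractional operators are typically used for.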

  13. Identifying regions of interest in medical images using self-organizing maps.

    PubMed

    Teng, Wei-Guang; Chang, Ping-Lin

    2012-10-01

    Advances in data acquisition, processing and visualization techniques have had a tremendous impact on medical imaging in recent years. However, the interpretation of medical images is still almost always performed by radiologists. Developments in artificial intelligence and image processing have shown the increasingly great potential of computer-aided diagnosis (CAD). Nevertheless, it has remained challenging to develop a general approach to process various commonly used types of medical images (e.g., X-ray, MRI, and ultrasound images). To facilitate diagnosis, we recommend the use of image segmentation to discover regions of interest (ROI) using self-organizing maps (SOM). We devise a two-stage SOM approach that can be used to precisely identify the dominant colors of a medical image and then segment it into several small regions. In addition, by appropriately conducting the recursive merging steps to merge smaller regions into larger ones, radiologists can usually identify one or more ROIs within a medical image.
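    The two-stage idea can be sketched as follows: a small SOM first learns the dominant colors of the image, and each pixel is then labeled by its nearest prototype. This is a minimal illustration with hypothetical parameters and function names, not the authors' implementation (which adds recursive region merging on top of the segmentation):

```python
import numpy as np

def train_som_colors(pixels, n_units=8, epochs=5, lr0=0.5, sigma0=2.0, seed=0):
    """Tiny 1-D SOM: learns n_units prototype colors from an (N, 3) pixel array."""
    rng = np.random.default_rng(seed)
    weights = pixels[rng.choice(len(pixels), n_units, replace=False)].astype(float)
    n_steps = epochs * len(pixels)
    step = 0
    for _ in range(epochs):
        for p in pixels[rng.permutation(len(pixels))]:
            t = step / n_steps
            lr, sigma = lr0 * (1 - t), sigma0 * (1 - t) + 0.5   # decaying schedules
            bmu = np.argmin(((weights - p) ** 2).sum(axis=1))   # best-matching unit
            dist = np.abs(np.arange(n_units) - bmu)             # grid distance
            h = np.exp(-(dist ** 2) / (2 * sigma ** 2))         # neighborhood kernel
            weights += lr * h[:, None] * (p - weights)
            step += 1
    return weights

def segment(image, weights):
    """Label each pixel with the index of its nearest SOM prototype color."""
    flat = image.reshape(-1, 3).astype(float)
    d = ((flat[:, None, :] - weights[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1).reshape(image.shape[:2])
```

    Merging the resulting small regions into larger ones, as the paper describes, would then operate on this label map.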

  14. Information measures for terrain visualization

    NASA Astrophysics Data System (ADS)

    Bonaventura, Xavier; Sima, Aleksandra A.; Feixas, Miquel; Buckley, Simon J.; Sbert, Mateu; Howell, John A.

    2017-02-01

    Many quantitative and qualitative studies in geoscience research are based on digital elevation models (DEMs) and 3D surfaces to aid understanding of natural and anthropogenically-influenced topography. As well as their quantitative uses, the visual representation of DEMs can add valuable information for identifying and interpreting topographic features. However, choice of viewpoints and rendering styles may not always be intuitive, especially when terrain data are augmented with digital image texture. In this paper, an information-theoretic framework for object understanding is applied to terrain visualization and terrain view selection. From a visibility channel between a set of viewpoints and the component polygons of a 3D terrain model, we obtain three polygonal information measures. These measures are used to visualize the information associated with each polygon of the terrain model. In order to enhance the perception of the terrain's shape, we explore the effect of combining the calculated information measures with the supplementary digital image texture. From polygonal information, we also introduce a method to select a set of representative views of the terrain model. Finally, we evaluate the behaviour of the proposed techniques using example datasets. A publicly available framework for both the visualization and the view selection of a terrain has been created, making it possible to analyse any terrain model.
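    The kind of per-view measure such a visibility channel supports can be illustrated with the classic viewpoint entropy, computed from the projected areas of the polygons visible from a viewpoint. The functions and inputs below are hypothetical stand-ins for the paper's polygonal information measures:

```python
import math

def viewpoint_entropy(projected_areas):
    """Shannon entropy (bits) of the visible-polygon area distribution for one
    viewpoint; higher entropy means a more balanced view of the polygons."""
    total = sum(projected_areas)
    h = 0.0
    for a in projected_areas:
        if a > 0:
            p = a / total
            h -= p * math.log2(p)
    return h

def best_view(area_table):
    """Pick the viewpoint index whose entropy is highest.
    area_table[i] holds the projected polygon areas seen from viewpoint i."""
    return max(range(len(area_table)), key=lambda i: viewpoint_entropy(area_table[i]))
```

    A view that sees every polygon with equal projected area maximizes the entropy; a view dominated by one large polygon scores near zero.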

  15. Visualization of postoperative anterior cruciate ligament reconstruction bone tunnels

    PubMed Central

    2011-01-01

    Background and purpose: Non-anatomic bone tunnel placement is the most common cause of a failed ACL reconstruction. Accurate and reproducible methods to visualize and document bone tunnel placement are therefore important. We evaluated the reliability of standard radiographs, CT scans, and a 3-dimensional (3D) virtual reality (VR) approach in visualizing and measuring ACL reconstruction bone tunnel placement. Methods: 50 consecutive patients who underwent single-bundle ACL reconstructions were evaluated postoperatively by standard radiographs, CT scans, and 3D VR images. Tibial and femoral tunnel positions were measured by 2 observers using the traditional methods of Amis, Aglietti, Hoser, Stäubli, and the method of Benereau for the VR approach. Results: The tunnel was visualized in 50–82% of the standard radiographs and in 100% of the CT scans and 3D VR images. Using the intraclass correlation coefficient (ICC), the inter- and intraobserver agreement was between 0.39 and 0.83 for the standard femoral and tibial radiographs. CT scans showed an ICC range of 0.49–0.76 for the inter- and intraobserver agreement. The agreement in 3D VR was almost perfect, with an ICC of 0.83 for the femur and 0.95 for the tibia. Interpretation: CT scans and 3D VR images are more reliable in assessing postoperative bone tunnel placement following ACL reconstruction than standard radiographs. PMID:21999625

  16. Usefulness of tumor blood flow imaging by intraoperative indocyanine green videoangiography in hemangioblastoma surgery.

    PubMed

    Hojo, Masato; Arakawa, Yoshiki; Funaki, Takeshi; Yoshida, Kazumichi; Kikuchi, Takayuki; Takagi, Yasushi; Araki, Yoshio; Ishii, Akira; Kunieda, Takeharu; Takahashi, Jun C; Miyamoto, Susumu

    2014-01-01

    Hemangioblastomas remain a surgical challenge because of their arteriovenous malformation-like character. Recently, indocyanine green (ICG) videoangiography has been applied to neurosurgical vascular surgery. The aim of this study was to evaluate the usefulness of tumor blood flow imaging by intraoperative ICG videoangiography in surgery for hemangioblastomas. Twenty intraoperative ICG videoangiography procedures were performed in 12 patients with hemangioblastomas. Seven lesions were located in the cerebellum, two lesions were in the medulla oblongata, and three lesions were in the spinal cord. Ten procedures were performed before or during dissection, and 10 procedures were performed after tumor resection. ICG videoangiography could provide dynamic images of blood flow in the tumor and its related vessels under surgical view. Interpretation of these dynamic images of tumor blood flow was useful for discrimination of transit feeders (feeders en passage) and also for estimation of unexposed feeders covered with brain parenchyma. Postresection ICG videoangiography could confirm complete tumor resection and normalized blood flow in surrounding vessels. In surgery for hemangioblastomas, careful interpretation of dynamic ICG images can provide useful information on transit feeders and unexposed hidden vessels that cannot be directly visualized by ICG. Copyright © 2014 Elsevier Inc. All rights reserved.

  17. Constraints on the Geometry of the Farallon Slab from the Joint Interpretation of All Available Imaging Results from the Earthscope USArray Deployment in the Lower 48 States

    NASA Astrophysics Data System (ADS)

    Esker, A.; Pavlis, G. L.

    2017-12-01

    We assembled all available seismic tomography models distributed through the IRIS DMC and other sources. We combined these images with our own results using 3D plane wave migration of P to S conversion data derived from the USArray data set and other broadband seismic stations in the lower 48 states. All the tomography models were converted into SEGY format and interpolated onto a regular grid in a UTM reference frame. That innovation makes joint interpretation feasible using seismic interpretation software (Petrel) because we treat both the tomography models and scattered wave image results as if they were 3D seismic reflection data. The carefully designed interface of a modern exploration package makes exploring a range of interpretations much faster and allowed us to produce a more comprehensive interpretation of all available data. The tomography models are nearly an order of magnitude smoother than the scattered wave images, so we use the tomography models as a cross-validation in interpretation unless the scattered wave images are ambiguous. The focus of this study is testing a conjecture in an earlier paper (Pavlis, 2011) for the presence of a single continuous horizon interpreted as the top of the Farallon Slab. As in the previous paper we constrained the western edge of this surface with the location of the Cascadia trench as well as a virtual edge from a back projection of the Mendocino triple junction using Pacific-North America motion over the past 30 Ma. We also simulated crustal multiple effects on the plane wave migration results using crustal geometry estimates produced by the Earthscope Automated Receiver Survey (EARS). This confirmed the scattered wave images were not reliable in the upper mantle at depths shallower than 200 km due to contamination by crustal multiples.
Most tomography models show a steep dip in the slab immediately east of the volcanic arc, and our surface follows the average geometry defined by a visual comparison of all the models. In eastern Oregon and northern Nevada the tomography models consistently show a general flattening of the slab over the 410 km discontinuity. A consistent horizon is observed in the most recent plane wave imaging, and we use that horizon to define the top of the slab there. Our interpretations also confirmed a sharp increase in dip of the slab in eastern Wyoming and Montana.

  18. NASA/NOAA: Earth Science Electronic Theater 1999

    NASA Technical Reports Server (NTRS)

    Hasler, A. Fritz

    1999-01-01

    The Electronic Theater (E-theater) presents visualizations which span the period from the original Suomi/Hasler animations of the first ATS-1 GEO weather satellite images in 1966 to the latest 1999 NASA Earth Science Vision for the next 25 years. Hot off the SGI-Onyx Graphics-Supercomputer are NASA's visualizations of Hurricanes Mitch, Georges, Fran and Linda. These storms have been recently featured on the covers of National Geographic, Time, Newsweek and Popular Science. Highlights will be shown from the NASA hurricane visualization resource video tape that has been used repeatedly this season on National and International network TV. Results will be presented from a new paper on automatic wind measurements in Hurricane Luis from 1-min GOES images that appeared in the November BAMS. The visualizations are produced by the NASA Goddard Visualization and Analysis Laboratory (VAL/912), and Scientific Visualization Studio (SVS/930), as well as other Goddard and NASA groups using NASA, NOAA, ESA, and NASDA Earth science datasets. Visualizations will be shown from the Earth Science E-Theater 1999 recently presented in Tokyo, Paris, Munich, Sydney, Melbourne, Honolulu, Washington, New York, and Dallas. The presentation Jan 11-14 at the AMS meeting in Dallas used a 4-CPU SGI/CRAY Onyx Infinite Reality Super Graphics Workstation with 8 GB RAM and a Terabyte Disk at 3840 X 1024 resolution with triple synchronized BarcoReality 9200 projectors on a 60ft wide screen. Visualizations will also be featured from the new Earth Today Exhibit which was opened by Vice President Gore on July 2, 1998 at the Smithsonian Air & Space museum in Washington, as well as those presented for possible use at the American Museum of Natural History (NYC), Disney EPCOT, and other venues. New methods are demonstrated for visualizing, interpreting, comparing, organizing and analyzing immense HyperImage remote sensing datasets and three dimensional numerical model results. 
We call the data from many new Earth sensing satellites HyperImage datasets because they have such high resolution in the spectral, temporal, spatial, and dynamic range domains. The traditional numerical spreadsheet paradigm has been extended to develop a scientific visualization approach for processing HyperImage datasets and 3D model results interactively. The advantages of extending the powerful spreadsheet style of computation to multiple sets of images and organizing image processing were demonstrated using the Distributed Image SpreadSheet (DISS). The DISS is being used as a high-performance Next Generation Internet (NGI) testbed for VisAnalysis of: 1) El Nino SSTs and NDVI response; 2) the latest GOES 10 5-min rapid scans, a 26-day, 5000-frame movie of March and April '98 weather and tornadic storms; 3) TRMM rainfall and lightning; 4) GOES 9 satellite images/winds and NOAA aircraft radar of Hurricane Luis; 5) lightning detector data merged with GOES image sequences; 6) Japanese GMS, TRMM, and ADEOS data; 7) Chinese FY2 data; 8) Meteosat and ERS/ATSR data; and 9) synchronized manipulation of multiple 3D numerical model views; among others. The Image SpreadSheet has been highly successful in producing Earth science visualizations for public outreach. Many of these visualizations have been widely disseminated through the world wide web pages of the HPCC/LTP/RSD program, which can be found at http://rsd.gsfc.nasa.gov/rsd. The one-minute-interval animations of Hurricane Luis on ABC Nightline and the color perspective rendering of Hurricane Fran published by TIME, LIFE, Newsweek, Popular Science, National Geographic, Scientific American, and the "Weekly Reader" are some of the examples which will be shown.

  19. Visualizing Tensions in an Ethnographic Moment: Images and Intersubjectivity.

    PubMed

    Crowder, Jerome W

    2017-01-01

    Images function as sources of data and influence our thinking about fieldwork, representation, and intersubjectivity. In this article, I show how both the ethnographic relationships and the working method of photography lead to a more nuanced understanding of a healing event. I systematically analyze 33 photographs made over a 15-minute period during the preparation and application of a poultice (topical cure) in a rural Andean home. The images chronicle the event, revealing my initial reaction and the decisions I made when tripping the shutter. By unpacking the relationship between ethnographer and subject, I reveal the constant negotiation of positions, assumptions, and expectations that make up intersubjectivity. For transparency, I provide thumbnails of all images, including metadata, so that readers may consider alternative interpretations of the images and event.

  20. Microscopic Imaging and Spectroscopy with Scattered Light

    PubMed Central

    Boustany, Nada N.; Boppart, Stephen A.; Backman, Vadim

    2012-01-01

    Optical contrast based on elastic scattering interactions between light and matter can be used to probe cellular structure and dynamics, and image tissue architecture. The quantitative nature and high sensitivity of light scattering signals to subtle alterations in tissue morphology, as well as the ability to visualize unstained tissue in vivo, has recently generated significant interest in optical scatter based biosensing and imaging. Here we review the fundamental methodologies used to acquire and interpret optical scatter data. We report on recent findings in this field and present current advances in optical scatter techniques and computational methods. Cellular and tissue data enabled by current advances in optical scatter spectroscopy and imaging stand to impact a variety of biomedical applications including clinical tissue diagnosis, in vivo imaging, drug discovery and basic cell biology. PMID:20617940

  1. Beating heart mitral valve repair with integrated ultrasound imaging

    NASA Astrophysics Data System (ADS)

    McLeod, A. Jonathan; Moore, John T.; Peters, Terry M.

    2015-03-01

    Beating heart valve therapies rely extensively on image guidance to treat patients who would be considered inoperable with conventional surgery. Mitral valve repair techniques, including the MitraClip, NeoChord, and emerging transcatheter mitral valve replacement techniques, rely on transesophageal echocardiography for guidance. These images are often difficult to interpret because the tool causes shadowing artifacts that occlude tissue near the target site. Here, we integrate ultrasound imaging directly into the NeoChord device. This provides an unobstructed imaging plane that can visualize the valve leaflets as they are engaged by the device and can aid in achieving both a proper bite and proper spacing between the neochordae implants. A user study in a phantom environment provides a proof of concept for this device.

  2. Speckle attenuation by adaptive singular value shrinking with generalized likelihood matching in optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Chen, Huaiguang; Fu, Shujun; Wang, Hong; Lv, Hongli; Zhang, Caiming

    2018-03-01

    As a high-resolution imaging mode for biological tissues and materials, optical coherence tomography (OCT) is widely used in medical diagnosis and analysis. However, OCT images are often degraded by speckle noise inherent in the imaging process. Employing a bilateral sparse representation, an adaptive singular value shrinking method is proposed for its highly sparse approximation of image data. Adopting the generalized likelihood ratio as the similarity criterion for block matching and an adaptive feature-oriented backward projection strategy, the proposed algorithm can better restore the underlying layered structures and details of the OCT image with effective speckle attenuation. The experimental results demonstrate that the proposed algorithm achieves state-of-the-art despeckling performance in terms of both quantitative measurement and visual interpretation.
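    The core operation, shrinking the singular values of a matrix of similar patches, can be sketched as below. The fixed soft threshold is a simple stand-in for the paper's adaptive, likelihood-matched shrinkage rule, and the additive-noise setup and all names are illustrative (speckle is multiplicative in reality):

```python
import numpy as np

def svd_shrink(patch_matrix, tau):
    """Soft-threshold the singular values of a stack of similar patches and
    rebuild the stack; low-rank structure survives, noise modes are suppressed."""
    u, s, vt = np.linalg.svd(patch_matrix, full_matrices=False)
    return (u * np.maximum(s - tau, 0.0)) @ vt

# Usage: a rank-1 "clean" patch stack plus additive noise.
rng = np.random.default_rng(0)
clean = np.outer(np.ones(16), np.linspace(0.0, 1.0, 64))   # 16 similar patches
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
denoised = svd_shrink(noisy, tau=0.8)
```

    Because the clean stack is (near) low-rank while noise spreads across all singular values, thresholding removes most of the noise energy at the cost of a small bias on the retained modes.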

  3. Deep Filter Banks for Texture Recognition, Description, and Segmentation.

    PubMed

    Cimpoi, Mircea; Maji, Subhransu; Kokkinos, Iasonas; Vedaldi, Andrea

    Visual textures have played a key role in image understanding because they convey important semantics of images, and because texture representations that pool local image descriptors in an orderless manner have had a tremendous impact in diverse applications. In this paper we make several contributions to texture understanding. First, instead of focusing on texture instance and material category recognition, we propose a human-interpretable vocabulary of texture attributes to describe common texture patterns, complemented by a new describable texture dataset for benchmarking. Second, we look at the problem of recognizing materials and texture attributes in realistic imaging conditions, including when textures appear in clutter, developing corresponding benchmarks on top of the recently proposed OpenSurfaces dataset. Third, we revisit classic texture representations, including bag-of-visual-words and Fisher vectors, in the context of deep learning and show that these have excellent efficiency and generalization properties if the convolutional layers of a deep model are used as filter banks. We obtain in this manner state-of-the-art performance in numerous datasets well beyond textures, an efficient method to apply deep features to image regions, as well as benefits in transferring features from one domain to another.

  4. Looking back to inform the future: The role of cognition in forest disturbance characterization from remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Bianchetti, Raechel Anne

    Remotely sensed images have become a ubiquitous part of our daily lives. From novice users, aiding in search and rescue missions using tools such as TomNod, to trained analysts, synthesizing disparate data to address complex problems like climate change, imagery has become central to geospatial problem solving. Expert image analysts are continually faced with rapidly developing sensor technologies and software systems. In response to these cognitively demanding environments, expert analysts develop specialized knowledge and analytic skills to address increasingly complex problems. This study identifies the knowledge, skills, and analytic goals of expert image analysts tasked with identification of land cover and land use change. Analysts participating in this research are currently working as part of a national level analysis of land use change, and are well versed with the use of TimeSync, forest science, and image analysis. The results of this study benefit current analysts by improving their awareness of the mental processes they use during image interpretation. The study can also be generalized to understand the types of knowledge and visual cues that analysts use when reasoning with imagery for purposes beyond land use change studies. Here a Cognitive Task Analysis framework is used to organize evidence from qualitative knowledge elicitation methods for characterizing the cognitive aspects of the TimeSync image analysis process. Using a combination of content analysis, diagramming, semi-structured interviews, and observation, the study highlights the perceptual and cognitive elements of expert remote sensing interpretation. Results show that image analysts perform several standard cognitive processes, but flexibly employ these processes in response to various contextual cues. Expert image analysts' ability to think flexibly during their analysis process was directly related to their amount of image analysis experience.
Additionally, results show that the basic Image Interpretation Elements continue to be important despite technological augmentation of the interpretation process. These results are used to derive a set of design guidelines for developing geovisual analytic tools and training to support image analysis.

  5. Rethinking Reader Response with Fifth Graders' Semiotic Interpretations

    ERIC Educational Resources Information Center

    Barone, Diane; Barone, Rebecca

    2017-01-01

    Fifth graders interpreted the book "Doll Bones" by Holly Black through visual representations from the beginning to the end of the book. Each visual representation was analyzed to determine how students responded. Most frequently, they moved to inferential ways of understanding. Students often visually interpreted emotional plot elements…

  6. Aviation spatial orientation in relationship to head position and attitude interpretation.

    PubMed

    Patterson, F R; Cacioppo, A J; Gallimore, J J; Hinman, G E; Nalepka, J P

    1997-06-01

    Conventional wisdom describing aviation spatial awareness assumes that pilots view a moving horizon through the windscreen. This assumption presupposes head alignment with the cockpit "Z" axis during both visual (VMC) and instrument (IMC) maneuvers. Even though this visual paradigm is widely accepted, its accuracy has not been verified. The purpose of this research was to determine if a visually induced neck reflex causes pilots to align their heads toward the horizon, rather than the cockpit vertical axis. Based on literature describing reflexive head orientation in terrestrial environments, it was hypothesized that during simulated VMC aircraft maneuvers, pilots would align their heads toward the horizon. Fourteen military pilots completed two simulated flights in a stationary dome simulator. The flight profile consisted of five separate tasks, four of which evaluated head tilt during exposure to unique visual conditions and one examined occurrences of disorientation during unusual attitude recovery. During simulated visual flight maneuvers, pilots tilted their heads toward the horizon (p < 0.0001). Under IMC, pilots maintained head alignment with the vertical axis of the aircraft. During VMC maneuvers pilots reflexively tilt their heads toward the horizon, away from the Gz axis of the cockpit. Presumably, this behavior stabilizes the retinal image of the horizon (the primary visual-spatial cue), against which peripheral images of the cockpit (a secondary visual-spatial cue) appear to move. Spatial disorientation, airsickness, and control reversal error may be related to shifts in visual-vestibular sensory alignment during visual transitions between VMC (head tilt) and IMC (Gz head stabilized) conditions.

  7. In vivo time-gated fluorescence imaging with biodegradable luminescent porous silicon nanoparticles.

    PubMed

    Gu, Luo; Hall, David J; Qin, Zhengtao; Anglin, Emily; Joo, Jinmyoung; Mooney, David J; Howell, Stephen B; Sailor, Michael J

    2013-01-01

    Fluorescence imaging is one of the most versatile and widely used visualization methods in biomedical research. However, tissue autofluorescence is a major obstacle confounding interpretation of in vivo fluorescence images. The unusually long emission lifetime (5-13 μs) of photoluminescent porous silicon nanoparticles can allow the time-gated imaging of tissues in vivo, completely eliminating shorter-lived (<10 ns) emission signals from organic chromophores or tissue autofluorescence. Here using a conventional animal imaging system not optimized for such long-lived excited states, we demonstrate improvement of signal to background contrast ratio by >50-fold in vitro and by >20-fold in vivo when imaging porous silicon nanoparticles. Time-gated imaging of porous silicon nanoparticles accumulated in a human ovarian cancer xenograft following intravenous injection is demonstrated in a live mouse. The potential for multiplexing of images in the time domain by using separate porous silicon nanoparticles engineered with different excited state lifetimes is discussed.
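    The arithmetic behind time gating is straightforward to sketch: integrate each mono-exponential decay over the detector gate. The amplitudes and lifetimes below are purely illustrative (nanosecond-scale autofluorescence versus microsecond-scale porous silicon), not measured values from the study:

```python
import numpy as np

def gated_signal(amplitude, lifetime, gate_start, gate_end):
    """Emission collected between gate_start and gate_end (same time units),
    for a mono-exponential decay I(t) = amplitude * exp(-t / lifetime)."""
    return amplitude * lifetime * (np.exp(-gate_start / lifetime)
                                   - np.exp(-gate_end / lifetime))

# Hypothetical numbers: bright 10 ns autofluorescence, weak 10 us probe.
af    = gated_signal(1.00, 10e-9, 0.0,    50e-6)   # ungated background
psi   = gated_signal(0.01, 10e-6, 0.0,    50e-6)   # ungated probe
af_g  = gated_signal(1.00, 10e-9, 100e-9, 50e-6)   # gated: background ~gone
psi_g = gated_signal(0.01, 10e-6, 100e-9, 50e-6)   # gated: probe survives
```

    Delaying acquisition by only 100 ns costs the long-lived probe almost nothing but suppresses the short-lived background by many orders of magnitude, which is the contrast-ratio gain the paper exploits.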

  8. Evaluation of areas prepared for planting using LANDSAT data. M.S. Thesis; [Ribeirao Preto, Brazil

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Deassuncao, G. V.; Duarte, V.

    1983-01-01

    Three different algorithms (SINGLE-CELL, MAXVER and MEDIA K) were used to automatically interpret data from LANDSAT observations of an area of Ribeirao Preto, Brazil. Photographic transparencies were obtained, projected and visually interpreted. The results show that: (1) the MAXVER algorithm presented a better classification performance; (2) verification of the changes in cultivated areas using data from the three different acquisition dates was possible; (3) the water bodies, degraded lands, urban areas, and fallow fields were frequently mistaken for cultivated soils; and (4) the use of projected photographic transparencies furnished satisfactory results, besides reducing the time spent on the Image-100 system.

  9. Anomaly clustering in hyperspectral images

    NASA Astrophysics Data System (ADS)

    Doster, Timothy J.; Ross, David S.; Messinger, David W.; Basener, William F.

    2009-05-01

    The topological anomaly detection algorithm (TAD) differs from other anomaly detection algorithms in that it uses a topological/graph-theoretic model for the image background instead of modeling the image with a Gaussian normal distribution. In the construction of the model, TAD produces a hard threshold separating anomalous pixels from background in the image. We build on this feature of TAD by extending the algorithm so that it gives a measure of the number of anomalous objects, rather than the number of anomalous pixels, in a hyperspectral image. This is done by identifying, and integrating, clusters of anomalous pixels via a graph theoretical method combining spatial and spectral information. The method is applied to a cluttered HyMap image and combines small groups of pixels containing like materials, such as those corresponding to rooftops and cars, into individual clusters. This improves visualization and interpretation of objects.
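    The grouping step, going from anomalous pixels to anomalous objects, can be illustrated with a plain 4-connected component pass over TAD's binary anomaly mask. This spatial-only sketch is a simplification: the paper's merging combines spatial and spectral information, and the function name is hypothetical.

```python
import numpy as np

def cluster_anomalies(mask):
    """Group anomalous pixels (True in mask) into 4-connected components.
    Returns a label image (0 = background, 1..n = clusters) and the count n."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue                      # already assigned to a cluster
        current += 1
        stack = [start]
        labels[start] = current
        while stack:                      # iterative flood fill
            r, c = stack.pop()
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]
                        and mask[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = current
                    stack.append((nr, nc))
    return labels, current
```

    The returned count is a measure of the number of anomalous objects rather than anomalous pixels; a spectral-similarity check on cluster means could then merge components containing like materials.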

  10. A boosting framework for visuality-preserving distance metric learning and its application to medical image retrieval.

    PubMed

    Yang, Liu; Jin, Rong; Mummert, Lily; Sukthankar, Rahul; Goode, Adam; Zheng, Bin; Hoi, Steven C H; Satyanarayanan, Mahadev

    2010-01-01

    Similarity measurement is a critical component in content-based image retrieval systems, and learning a good distance metric can significantly improve retrieval performance. However, despite extensive study, there are several major shortcomings with the existing approaches for distance metric learning that can significantly affect their application to medical image retrieval. In particular, "similarity" can mean very different things in image retrieval: resemblance in visual appearance (e.g., two images that look like one another) or similarity in semantic annotation (e.g., two images of tumors that look quite different yet are both malignant). Current approaches for distance metric learning typically address only one goal without consideration of the other. This is problematic for medical image retrieval where the goal is to assist doctors in decision making. In these applications, given a query image, the goal is to retrieve similar images from a reference library whose semantic annotations could provide the medical professional with greater insight into the possible interpretations of the query image. If the system were to retrieve images that did not look like the query, then users would be less likely to trust the system; on the other hand, retrieving images that appear superficially similar to the query but are semantically unrelated is undesirable because that could lead users toward an incorrect diagnosis. Hence, learning a distance metric that preserves both visual resemblance and semantic similarity is important. We emphasize that, although our study is focused on medical image retrieval, the problem addressed in this work is critical to many image retrieval systems. We present a boosting framework for distance metric learning that aims to preserve both visual and semantic similarities. 
The boosting framework first learns a binary representation using side information, in the form of labeled pairs, and then computes the distance as a weighted Hamming distance using the learned binary representation. A boosting algorithm is presented to efficiently learn the distance function. We evaluate the proposed algorithm on a mammographic image reference library with an Interactive Search-Assisted Decision Support (ISADS) system and on the medical image data set from ImageCLEF. Our results show that the boosting framework compares favorably to state-of-the-art approaches for distance metric learning in retrieval accuracy, with much lower computational cost. Additional evaluation with the COREL collection shows that our algorithm works well for regular image data sets.
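    The distance computation the framework ends with, a weighted Hamming distance over a learned binary code, is simple to sketch; the bit codes and weights below are hypothetical values standing in for what the boosting algorithm would learn from labeled pairs:

```python
import numpy as np

def weighted_hamming(a_bits, b_bits, weights):
    """Distance between two binary codes: the sum of the weights of the bit
    positions where the codes differ."""
    return float(np.sum(weights * (a_bits != b_bits)))

# Usage with illustrative learned values.
w = np.array([0.5, 1.5, 1.0, 2.0])   # per-bit weights from boosting
a = np.array([1, 0, 1, 1])           # binary code of the query image
b = np.array([1, 1, 1, 0])           # binary code of a library image
d = weighted_hamming(a, b, w)        # differs at bits 1 and 3 -> 1.5 + 2.0 = 3.5
```

    Because each bit encodes side information, weighting the disagreements lets the metric reflect semantic as well as visual similarity.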

  11. Dissociable effects of inter-stimulus interval and presentation duration on rapid face categorization.

    PubMed

    Retter, Talia L; Jiang, Fang; Webster, Michael A; Rossion, Bruno

    2018-04-01

    Fast periodic visual stimulation combined with electroencephalography (FPVS-EEG) has unique sensitivity and objectivity in measuring rapid visual categorization processes. It constrains image processing time by presenting stimuli rapidly through brief stimulus presentation durations and short inter-stimulus intervals. However, the selective impact of these temporal parameters on visual categorization is largely unknown. Here, we presented natural images of objects at a rate of 10 or 20 per second (10 or 20 Hz), with faces appearing once per second (1 Hz), leading to two distinct frequency-tagged EEG responses. Twelve observers were tested with three squarewave image presentation conditions: 1) with an ISI, a traditional 50% duty cycle at 10 Hz (50-ms stimulus duration separated by a 50-ms ISI); 2) removing the ISI and matching the rate, a 100% duty cycle at 10 Hz (100-ms duration with 0-ms ISI); 3) removing the ISI and matching the stimulus presentation duration, a 100% duty cycle at 20 Hz (50-ms duration with 0-ms ISI). The face categorization response was significantly decreased in the 20 Hz 100% condition. The conditions at 10 Hz showed similar face-categorization responses, peaking maximally over the right occipito-temporal (ROT) cortex. However, the onset of the 10 Hz 100% response was delayed by about 20 ms over the ROT region relative to the 10 Hz 50% condition, likely due to immediate forward-masking by preceding images. Taken together, these results help to interpret how the FPVS-EEG paradigm sets temporal constraints on visual image categorization. Copyright © 2018 Elsevier Ltd. All rights reserved.
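    Frequency tagging works because periodic responses concentrate at known frequencies in the EEG spectrum. A minimal simulation (all amplitudes, rates, and noise levels hypothetical) shows how a 1 Hz face-categorization response and a 10 Hz base response would be read off the amplitude spectrum:

```python
import numpy as np

fs, dur = 512, 20                            # sampling rate (Hz), duration (s)
t = np.arange(fs * dur) / fs
rng = np.random.default_rng(1)

# Simulated channel: base response at 10 Hz, face tag at 1 Hz, plus noise.
eeg = (2.0 * np.sin(2 * np.pi * 10 * t)
       + 0.8 * np.sin(2 * np.pi * 1 * t)
       + rng.standard_normal(t.size))

spectrum = np.abs(np.fft.rfft(eeg)) / (t.size / 2)   # amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def amp_at(f):
    """Amplitude at the spectral bin nearest frequency f."""
    return spectrum[np.argmin(np.abs(freqs - f))]
```

    With 20 s of data the bin spacing is 0.05 Hz, so both tag frequencies fall exactly on bins and the noise spreads thinly across the remaining bins, which is what gives FPVS its sensitivity.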

  12. Ontology-based image navigation: exploring 3.0-T MR neurography of the brachial plexus using AIM and RadLex.

    PubMed

    Wang, Kenneth C; Salunkhe, Aditya R; Morrison, James J; Lee, Pearlene P; Mejino, José L V; Detwiler, Landon T; Brinkley, James F; Siegel, Eliot L; Rubin, Daniel L; Carrino, John A

    2015-01-01

    Disorders of the peripheral nervous system have traditionally been evaluated using clinical history, physical examination, and electrodiagnostic testing. In selected cases, imaging modalities such as magnetic resonance (MR) neurography may help further localize or characterize abnormalities associated with peripheral neuropathies, and the clinical importance of such techniques is increasing. However, MR image interpretation with respect to peripheral nerve anatomy and disease often presents a diagnostic challenge because the relevant knowledge base remains relatively specialized. Using the radiology knowledge resource RadLex®, a series of RadLex queries, the Annotation and Image Markup standard for image annotation, and a Web services-based software architecture, the authors developed an application that allows ontology-assisted image navigation. The application provides an image browsing interface, allowing users to visually inspect the imaging appearance of anatomic structures. By interacting directly with the images, users can access additional structure-related information that is derived from RadLex (eg, muscle innervation, muscle attachment sites). These data also serve as conceptual links to navigate from one portion of the imaging atlas to another. With 3.0-T MR neurography of the brachial plexus as the initial area of interest, the resulting application provides support to radiologists in the image interpretation process by allowing efficient exploration of the MR imaging appearance of relevant nerve segments, muscles, bone structures, vascular landmarks, anatomic spaces, and entrapment sites, and the investigation of neuromuscular relationships. RSNA, 2015

  13. Ontology-based Image Navigation: Exploring 3.0-T MR Neurography of the Brachial Plexus Using AIM and RadLex

    PubMed Central

    Salunkhe, Aditya R.; Morrison, James J.; Lee, Pearlene P.; Mejino, José L. V.; Detwiler, Landon T.; Brinkley, James F.; Siegel, Eliot L.; Rubin, Daniel L.; Carrino, John A.

    2015-01-01

    Disorders of the peripheral nervous system have traditionally been evaluated using clinical history, physical examination, and electrodiagnostic testing. In selected cases, imaging modalities such as magnetic resonance (MR) neurography may help further localize or characterize abnormalities associated with peripheral neuropathies, and the clinical importance of such techniques is increasing. However, MR image interpretation with respect to peripheral nerve anatomy and disease often presents a diagnostic challenge because the relevant knowledge base remains relatively specialized. Using the radiology knowledge resource RadLex®, a series of RadLex queries, the Annotation and Image Markup standard for image annotation, and a Web services–based software architecture, the authors developed an application that allows ontology-assisted image navigation. The application provides an image browsing interface, allowing users to visually inspect the imaging appearance of anatomic structures. By interacting directly with the images, users can access additional structure-related information that is derived from RadLex (eg, muscle innervation, muscle attachment sites). These data also serve as conceptual links to navigate from one portion of the imaging atlas to another. With 3.0-T MR neurography of the brachial plexus as the initial area of interest, the resulting application provides support to radiologists in the image interpretation process by allowing efficient exploration of the MR imaging appearance of relevant nerve segments, muscles, bone structures, vascular landmarks, anatomic spaces, and entrapment sites, and the investigation of neuromuscular relationships. ©RSNA, 2015 PMID:25590394

  14. [Myocardial perfusion scintigraphy - short form of the German guideline].

    PubMed

    Lindner, O; Burchert, W; Hacker, M; Schaefer, W; Schmidt, M; Schober, O; Schwaiger, M; vom Dahl, J; Zimmermann, R; Schäfers, M

    2013-01-01

    This guideline is a short summary of the guideline for myocardial perfusion scintigraphy published by the Association of the Scientific Medical Societies in Germany (AWMF). The purpose of this guideline is to provide practical assistance for indications and examination procedures as well as image analysis, and to present the state of the art of myocardial perfusion scintigraphy. After a short introduction on the fundamentals of imaging, precise and detailed information is given on indications, patient preparation, stress testing, radiopharmaceuticals, examination protocols and techniques, radiation exposure, and data reconstruction, as well as on visual and quantitative image analysis and interpretation. In addition, possible pitfalls, artefacts and key elements of reporting are described.

  15. Dissolution-Enlarged Fractures Imaging Using Electrical Resistivity Tomography (ERT)

    NASA Astrophysics Data System (ADS)

    Siami-Irdemoosa, Elnaz

    In recent years, electrical imaging techniques have been widely applied to geotechnical and environmental investigations. These techniques have proven to be the best geophysical methods for site investigations in karst terrain, particularly when the overburden soil is clay-dominated. Karst is terrain with a distinctive landscape and hydrological system developed by the dissolution of rocks, particularly carbonate rocks such as limestone and dolomite, in which fractures are enlarged into underground conduits that can grow into caverns and, in some cases, collapse to form sinkholes. Bedding planes, joints, and faults are the principal structural guides for underground flow and dissolution in almost all karstified rocks. Despite the important role of fractures in karst development, the geometry of dissolution-enlarged fractures remains poorly known. These features exhibit a strong contrast with the surrounding formations in physical properties such as electrical resistivity. Electrical resistivity tomography (ERT) was used as the primary geophysical tool to image the subsurface in a karst terrain in Greene County, Missouri. The pattern, orientation, and density of the joint sets were interpreted from ERT data at the investigation site. The Multi-channel Analysis of Surface Waves (MASW) method and coring were employed to validate the interpretation results. Two sets of orthogonal, visually prominent joints were identified at the investigation site: north-south trending joint sets and west-east trending joint sets. However, most of the visually prominent joint sets are associated with either cultural features that concentrate runoff or natural surface drainage features.

  16. The Role of Visual "Literacy" in Film Communication.

    ERIC Educational Resources Information Center

    Messaris, Paul

    The term "visual literacy" generally refers to the interpretation of the formal structure of film or television and carries with it the notion that the interpreter has knowledge of the use of camera angles, lighting, flashbacks, and so forth. However, many visual conventions encountered in movies or television may be interpreted even by…

  17. A Comparison of Rapid-Scanning X-Ray Fluorescence Mapping And Magnetic Resonance Imaging to Localize Brain Iron Distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCrea, R.P.E.; Harder, S.L.; Martin, M.

    2009-05-26

    The clinical diagnosis of many neurodegenerative disorders relies primarily or exclusively on observed behaviors rather than measurable physical tests. One of the hallmarks of Alzheimer disease (AD) is the presence of amyloid-containing plaques associated with deposits of iron, copper and/or zinc. Work in other laboratories has shown that iron-rich plaques can be seen in the mouse brain in vivo with magnetic resonance imaging (MRI) using a high-field-strength magnet, but this iron cannot be visualized in humans using clinical magnets. To improve the interpretation of MRI, we correlated iron accumulation visualized by X-ray fluorescence spectroscopy, an element-specific technique, with T1, T2, and susceptibility-weighted (SWI) MR imaging in a mouse model of AD. We show that SWI best depicts areas of increased iron accumulation when compared to standard sequences.

  18. Predicting Visual Semantic Descriptive Terms from Radiological Image Data: Preliminary Results with Liver Lesions in CT

    PubMed Central

    Depeursinge, Adrien; Kurtz, Camille; Beaulieu, Christopher F.; Napel, Sandy; Rubin, Daniel L.

    2014-01-01

    We describe a framework to model visual semantics of liver lesions in CT images in order to predict the visual semantic terms (VST) reported by radiologists in describing these lesions. Computational models of VST are learned from image data using high-order steerable Riesz wavelets and support vector machines (SVM). The organization of scales and directions that are specific to every VST are modeled as linear combinations of directional Riesz wavelets. The models obtained are steerable, which means that any orientation of the model can be synthesized from linear combinations of the basis filters. The latter property is leveraged to model VST independently from their local orientation. In a first step, these models are used to predict the presence of each semantic term that describes liver lesions. In a second step, the distances between all VST models are calculated to establish a non-hierarchical, computationally derived ontology of VST containing inter-term synonymy and complementarity. A preliminary evaluation of the proposed framework was carried out using 74 liver lesions annotated with a set of 18 VSTs from the RadLex ontology. A leave-one-patient-out cross-validation resulted in an average area under the ROC curve of 0.853 for predicting the presence of each VST when using SVMs in a feature space combining the magnitudes of the steered models with CT intensities. Likelihood maps are created for each VST, which enables high transparency of the information modeled. The computationally derived ontology obtained from the VST models was found to be consistent with the underlying semantics of the visual terms. It was found to be complementary to the RadLex ontology, and constitutes a potential method to link the image content to visual semantics. The proposed framework is expected to foster human-computer synergies for the interpretation of radiological images while using rotation-covariant computational models of VSTs to (1) quantify their local likelihood and (2) explicitly link them with pixel-based image content in the context of a given imaging domain. PMID:24808406

  19. Usefulness of myocardial parametric imaging to evaluate myocardial viability in experimental and in clinical studies.

    PubMed

    Korosoglou, G; Hansen, A; Bekeredjian, R; Filusch, A; Hardt, S; Wolf, D; Schellberg, D; Katus, H A; Kuecherer, H

    2006-03-01

    To evaluate whether myocardial parametric imaging (MPI) is superior to visual assessment for the evaluation of myocardial viability. Myocardial contrast echocardiography (MCE) was assessed in 11 pigs before, during, and after left anterior descending coronary artery occlusion and in 32 patients with ischaemic heart disease by using intravenous SonoVue administration. In experimental studies perfusion defect area assessment by MPI was compared with visually guided perfusion defect planimetry. Histological assessment of necrotic tissue was the standard reference. In clinical studies viability was assessed on a segmental level by (1) visual analysis of myocardial opacification; (2) quantitative estimation of myocardial blood flow in regions of interest; and (3) MPI. Functional recovery between three and six months after revascularisation was the standard reference. In experimental studies, compared with visually guided perfusion defect planimetry, planimetric assessment of infarct size by MPI correlated more significantly with histology (r2 = 0.92 versus r2 = 0.56) and had a lower intraobserver variability (4% v 15%, p < 0.05). In clinical studies, MPI had higher specificity (66% v 43%, p < 0.05) than visual MCE and good accuracy (81%) for viability detection. It was less time consuming (3.4 (1.6) v 9.2 (2.4) minutes per image, p < 0.05) than quantitative blood flow estimation by regions of interest and increased the agreement between observers interpreting myocardial perfusion (kappa = 0.87 v kappa = 0.75, p < 0.05). MPI is useful for the evaluation of myocardial viability both in animals and in patients. It is less time consuming than quantification analysis by regions of interest and less observer dependent than visual analysis. Thus, strategies incorporating this technique may be valuable for the evaluation of myocardial viability in clinical routine.

  20. Color image analysis technique for measuring of fat in meat: an application for the meat industry

    NASA Astrophysics Data System (ADS)

    Ballerini, Lucia; Hogberg, Anders; Lundstrom, Kerstin; Borgefors, Gunilla

    2001-04-01

    Intramuscular fat content in meat influences some important meat quality characteristics. The aim of the present study was to develop and apply image processing techniques to quantify intramuscular fat content in beef, together with the visual appearance of fat in meat (marbling). Color images of M. longissimus dorsi meat samples with varying intramuscular fat content and marbling were captured. Image analysis software was specially developed for the interpretation of these images. In particular, a segmentation algorithm (i.e., classification of the different substances: fat, muscle and connective tissue) was optimized in order to obtain a proper classification and perform subsequent analysis. Segmentation of muscle from fat was achieved based on their characteristics in the 3D color space and on the intrinsic fuzzy nature of these structures. The method is fully automatic and combines a fuzzy clustering algorithm, the Fuzzy c-Means Algorithm, with a Genetic Algorithm. The percentages of the various colors (i.e., substances) within the sample are then determined, and the number, size distribution, and spatial distribution of the extracted fat flecks are measured. Measurements are correlated with chemical and sensory properties. Results so far show that advanced image analysis is useful for quantifying the visual appearance of meat.
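
The fuzzy c-means step this record relies on can be sketched in plain NumPy. This is a minimal, generic FCM on RGB pixel vectors, not the authors' software: the genetic-algorithm component they combine it with is omitted, and the toy colors standing in for fat, muscle and connective tissue are invented.

```python
import numpy as np

def fuzzy_c_means(pixels, c=3, m=2.0, iters=50, seed=0):
    """Plain fuzzy c-means on an N x 3 array of RGB pixels.

    Returns (centers, memberships); memberships sum to 1 per pixel.
    """
    rng = np.random.default_rng(seed)
    u = rng.random((len(pixels), c))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(iters):
        um = u ** m                                    # fuzzified memberships
        centers = (um.T @ pixels) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        u = 1.0 / (d ** (2.0 / (m - 1.0)))             # standard FCM update
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

# Toy "meat image": three color clusters standing in for fat, muscle, connective tissue.
rng = np.random.default_rng(1)
fat = rng.normal([220, 210, 200], 5, (100, 3))
muscle = rng.normal([150, 40, 50], 5, (100, 3))
tissue = rng.normal([200, 170, 170], 5, (100, 3))
pixels = np.vstack([fat, muscle, tissue])
centers, u = fuzzy_c_means(pixels, c=3)
labels = u.argmax(axis=1)  # hard segmentation from the soft memberships
```

The soft memberships are what let the method respect the "intrinsic fuzzy nature" of fat/muscle boundaries before a hard label is taken.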

  1. A dataset of stereoscopic images and ground-truth disparity mimicking human fixations in peripersonal space

    PubMed Central

    Canessa, Andrea; Gibaldi, Agostino; Chessa, Manuela; Fato, Marco; Solari, Fabio; Sabatini, Silvio P.

    2017-01-01

    Binocular stereopsis is the ability of a visual system, belonging to a live being or a machine, to interpret the different visual information deriving from two eyes/cameras for depth perception. From this perspective, the ground-truth information about three-dimensional visual space, which is hardly available, is an ideal tool both for evaluating human performance and for benchmarking machine vision algorithms. In the present work, we implemented a rendering methodology in which the camera pose mimics realistic eye pose for a fixating observer, thus including convergent eye geometry and cyclotorsion. The virtual environment we developed relies on highly accurate 3D virtual models, and its full controllability allows us to obtain the stereoscopic pairs together with the ground-truth depth and camera pose information. We thus created a stereoscopic dataset: GENUA PESTO—GENoa hUman Active fixation database: PEripersonal space STereoscopic images and grOund truth disparity. The dataset aims to provide a unified framework useful for a number of problems relevant to human and computer vision, from scene exploration and eye movement studies to 3D scene reconstruction. PMID:28350382

  2. Three-dimensional Talairach-Tournoux brain atlas

    NASA Astrophysics Data System (ADS)

    Fang, Anthony; Nowinski, Wieslaw L.; Nguyen, Bonnie T.; Bryan, R. Nick

    1995-04-01

    The Talairach-Tournoux Stereotaxic Atlas of the human brain is a frequently consulted resource in stereotaxic neurosurgery and computer-based neuroradiology. Its primary application lies in the 2-D analysis and interpretation of neurological images. However, for the purpose of the analysis and visualization of shapes and forms, accurate mensuration of volumes, or 3-D model matching, a 3-D representation of the atlas is essential. This paper proposes and describes, along with its difficulties, a 3-D geometric extension of the atlas. We introduce a 'zero-potential' surface smoothing technique, along with a space-dependent convolution kernel and space-dependent normalization. The mesh-based atlas structures are hierarchically organized and anatomically conform to the original atlas. Structures and their constituents can be independently selected and manipulated in real time within an integrated system. The extended atlas may be navigated by itself, or interactively registered with patient data using the proportional grid system (piecewise linear) transformation. Visualization of the geometric atlas along with patient data gives a remarkable visual 'feel' of the biological structures, not usually perceivable to the untrained eye in conventional 2-D atlas-to-image analysis.

  3. Differences in neural responses to ipsilateral stimuli in wide-view fields between face- and house-selective areas

    PubMed Central

    Li, Ting; Niu, Yan; Xiang, Jie; Cheng, Junjie; Liu, Bo; Zhang, Hui; Yan, Tianyi; Kanazawa, Susumu; Wu, Jinglong

    2018-01-01

    Category-selective brain areas exhibit varying levels of neural activity to ipsilaterally presented stimuli. However, in face- and house-selective areas, the neural responses evoked by ipsilateral stimuli in the peripheral visual field remain unclear. In this study, we displayed face and house images using a wide-view visual presentation system while performing functional magnetic resonance imaging (fMRI). The face-selective areas (fusiform face area (FFA) and occipital face area (OFA)) exhibited intense neural responses to ipsilaterally presented images, whereas the house-selective areas (parahippocampal place area (PPA) and transverse occipital sulcus (TOS)) exhibited substantially smaller and even negative neural responses to the ipsilaterally presented images. We also found that the category preferences of the contralateral and ipsilateral neural responses were similar. Interestingly, the face- and house-selective areas exhibited neural responses to ipsilateral images that were smaller than the responses to the contralateral images. Multi-voxel pattern analysis (MVPA) was implemented to evaluate the difference between the contralateral and ipsilateral responses. The classification accuracies were much greater than those expected by chance. The classification accuracies in the FFA were smaller than those in the PPA and TOS. The closer eccentricities elicited greater classification accuracies in the PPA and TOS. We propose that these ipsilateral neural responses might be interpreted by interhemispheric communication through intrahemispheric connectivity of white matter connection and interhemispheric connectivity via the corpus callosum and occipital white matter connection. Furthermore, the PPA and TOS likely have weaker interhemispheric communication than the FFA and OFA, particularly in the peripheral visual field. PMID:29451872

  4. Wide-Field Fundus Autofluorescence for Retinitis Pigmentosa and Cone/Cone-Rod Dystrophy.

    PubMed

    Oishi, Akio; Oishi, Maho; Ogino, Ken; Morooka, Satoshi; Yoshimura, Nagahisa

    2016-01-01

    Retinitis pigmentosa and cone/cone-rod dystrophy are inherited retinal diseases characterized by the progressive loss of rod and/or cone photoreceptors. To evaluate the status of rod/cone photoreceptors and visual function, visual acuity and visual field tests, electroretinogram, and optical coherence tomography are typically used. In addition to these examinations, fundus autofluorescence (FAF) has recently garnered attention. FAF visualizes the intrinsic fluorescent material in the retina, which is mainly lipofuscin contained within the retinal pigment epithelium. While conventional devices offer limited viewing angles in FAF, the recently developed Optos machine enables recording of wide-field FAF. With wide-field analysis, an association between abnormal FAF areas and visual function was demonstrated in retinitis pigmentosa and cone-rod dystrophy. In addition, the presence of "patchy" hypoautofluorescent areas was found to be correlated with symptom duration. Although physicians should be cautious when interpreting wide-field FAF results because the peripheral parts of the image are magnified significantly, this examination method provides previously unavailable information.

  5. Defining intrahepatic biliary anatomy in living liver transplant donor candidates at mangafodipir trisodium-enhanced MR cholangiography versus conventional T2-weighted MR cholangiography.

    PubMed

    Lee, Vivian S; Krinsky, Glenn A; Nazzaro, Carol A; Chang, Jerry S; Babb, James S; Lin, Jennifer C; Morgan, Glyn R; Teperman, Lewis W

    2004-12-01

    To compare three-dimensional (3D) mangafodipir trisodium-enhanced T1-weighted magnetic resonance (MR) cholangiography with conventional T2-weighted MR cholangiography for depiction and definition of intrahepatic biliary anatomy in liver transplant donor candidates. One hundred eight healthy liver transplant donor candidates were examined with two MR cholangiographic methods. All candidates gave written informed consent, and the study was approved by the institutional review board. First, breath-hold transverse and coronal half-Fourier single-shot turbo spin-echo and breath-hold oblique coronal heavily T2-weighted turbo spin-echo sequences were performed. Second, mangafodipir trisodium-enhanced breath-hold fat-suppressed 3D gradient-echo sequences were performed through the ducts (oblique coronal plane) and through the entire liver (transverse plane). Interpretation of biliary anatomy findings, particularly variants affecting right liver lobe biliary drainage, and degree of interpretation confidence at both 3D mangafodipir trisodium-enhanced MR cholangiography and T2-weighted MR cholangiography were recorded and compared by using the Wilcoxon signed rank test. Then, consensus interpretations of both MR image sets together were performed. Intraoperative cholangiography was the reference-standard examination for 51 subjects who underwent right lobe hepatectomy. The McNemar test was used to compare the accuracies of the individual MR techniques with that of the consensus interpretation of both image sets together and to compare each technique with intraoperative cholangiography. Biliary anatomy was visualized with mangafodipir trisodium enhancement in all patients. Mangafodipir trisodium-enhanced image findings agreed with findings seen at combined interpretations significantly more often than did T2-weighted image findings (in 107 [99%] vs 88 [82%] of 108 donor candidates, P < .001). 
Confidence was significantly higher with the mangafodipir trisodium-enhanced images than with the T2-weighted images (mean confidence score, 4.5 vs 3.4; P < .001). In the 51 candidates who underwent intraoperative cholangiography, mangafodipir trisodium-enhanced imaging correctly depicted the biliary anatomy more often than did T2-weighted imaging (in 47 [92%] vs 43 [84%] donor candidates, P = .14), whereas the two MR imaging techniques combined correctly depicted the anatomy in 48 (94%) candidates. Mangafodipir trisodium-enhanced 3D MR cholangiography depicts intrahepatic biliary anatomy, especially right duct variants, more accurately than does conventional T2-weighted MR cholangiography. (c) RSNA, 2004.

  6. Effect of color visualization and display hardware on the visual assessment of pseudocolor medical images

    PubMed Central

    Zabala-Travers, Silvina; Choi, Mina; Cheng, Wei-Chung

    2015-01-01

    Purpose: Even though the use of color in the interpretation of medical images has increased significantly in recent years, the ad hoc manner in which color is handled and the lack of standard approaches have been associated with suboptimal and inconsistent diagnostic decisions with a negative impact on patient treatment and prognosis. The purpose of this study is to determine if the choice of color scale and display device hardware affects the visual assessment of patterns that have the characteristics of functional medical images. Methods: Perfusion magnetic resonance imaging (MRI) was the basis for designing and performing experiments. Synthetic images resembling brain dynamic-contrast enhanced MRI consisting of scaled mixtures of white, lumpy, and clustered backgrounds were used to assess the performance of a rainbow (“jet”), a heated black-body (“hot”), and a gray (“gray”) color scale with display devices of different quality on the detection of small changes in color intensity. The authors used a two-alternative, forced-choice design where readers were presented with 600 pairs of images. Each pair consisted of two images of the same pattern flipped along the vertical axis with a small difference in intensity. Readers were asked to select the image with the highest intensity. Three differences in intensity were tested on four display devices: a medical-grade three-million-pixel display, a consumer-grade monitor, a tablet device, and a phone. Results: The estimates of percent correct show that jet outperformed hot and gray in the high and low range of the color scales for all devices with a maximum difference in performance of 18% (confidence intervals: 6%, 30%). Performance with hot was different for high and low intensity, comparable to jet for the high range, and worse than gray for lower intensity values. Similar performance was seen between devices using jet and hot, while gray performance was better for handheld devices. 
Time of performance was shorter with jet. Conclusions: Our findings demonstrate that the choice of color scale and display hardware affects the visual comparative analysis of pseudocolor images. Follow-up studies in clinical settings are being considered to confirm the results with patient images. PMID:26127048
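
The percent-correct figures and confidence intervals this record compares across color scales can be computed from the raw two-alternative forced-choice counts. The sketch below uses a simple Wald (normal-approximation) interval as an illustration; the study's actual statistical procedure is not specified in the abstract, and the counts shown are invented.

```python
import math

def percent_correct_ci(n_correct, n_trials, z=1.96):
    """Percent correct in a 2AFC task with a Wald 95% confidence interval."""
    p = n_correct / n_trials
    half = z * math.sqrt(p * (1 - p) / n_trials)
    return 100 * p, 100 * max(0.0, p - half), 100 * min(1.0, p + half)

# Hypothetical reader: 480 correct choices out of 600 image pairs.
pc, lo, hi = percent_correct_ci(480, 600)
print(f"{pc:.1f}% ({lo:.1f}%, {hi:.1f}%)")  # 80.0% (76.8%, 83.2%)
```

Reporting the interval alongside the point estimate is what allows statements like "a maximum difference in performance of 18% (confidence intervals: 6%, 30%)" to be made.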

  7. Managed Clearings: an Unaccounted Land-cover in Urbanizing Regions

    NASA Astrophysics Data System (ADS)

    Singh, K. K.; Madden, M.; Meentemeyer, R. K.

    2016-12-01

    Managed clearings (MC), such as lawns, public parks and grassy transportation medians, are a common and ecologically important land cover type in urbanizing regions, especially those characterized by sprawl. We hypothesize that MC is underrepresented in land cover classification schemes and data products such as the NLCD (National Land Cover Database), which may impact environmental assessments and models of urban ecosystems. We visually interpreted and mapped fine-scale land cover, with special attention to MC, using 2012 NAIP (National Agriculture Imagery Program) images and compared the output with NLCD data. The areas sampled were 50 randomly distributed 1 x 1 km blocks of land in three cities of the Char-lanta mega-region (Atlanta, Charlotte, and Raleigh). We estimated the abundance of MC relative to other land cover types, and the proportion of land cover types in the NLCD data that are similar to MC. We also assessed whether the designations of recreation, transportation, and utility in MC inform the problem differently than simply tallying MC as a whole. A total of 610 ground points, collected using Google Earth, were used to evaluate the accuracy of the NLCD data and the visual interpretation for consistency. Overall accuracy of the visual interpretation and the NLCD data was 78% and 58%, respectively. Compared with the visual interpretation, the NLCD data underestimated forest and MC by 14.4 km2 and 6.4 km2, respectively, while overestimating impervious surfaces by 10.2 km2. MC was the second most dominant land cover after forest (40.5%), covering about 28% of the total area, roughly 13 percentage points more than impervious surfaces. Results also suggested that recreation constitutes up to 90% of MC area, followed by transportation and utility. Due to the prevalence of MC in urbanizing regions, adding MC to the synthesis of land-cover data can help delineate realistic cover types and area proportions that could inform ecologic/hydrologic models and allow for accurate prediction of ecological phenomena.
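
The overall-accuracy figures in this record (78% vs 58%) come from comparing mapped classes against reference ground points. A minimal sketch of that computation, with invented class labels standing in for the study's categories:

```python
def overall_accuracy(reference, predicted):
    """Fraction of ground points whose mapped class matches the reference class."""
    matches = sum(r == p for r, p in zip(reference, predicted))
    return matches / len(reference)

# Hypothetical ground points: reference class vs the class a land-cover map assigned.
ref  = ["forest", "mc", "impervious", "forest", "mc"]
pred = ["forest", "mc", "mc",         "forest", "impervious"]
print(overall_accuracy(ref, pred))  # 0.6
```

Running the same reference points against two different maps (here, visual interpretation vs NLCD) is what makes their accuracies directly comparable.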

  8. What Do Geoscience Experts and Novices Look At and What Do They See When Viewing and Interpreting Data Visualizations?

    NASA Astrophysics Data System (ADS)

    Kastens, K. A.; Shipley, T. F.; Boone, A.

    2012-12-01

    When geoscience experts look at data visualizations, they can "see" structures, and processes and traces of Earth history. When students look at those same visualizations, they may see only blotches of color, dots or squiggles. What are those experts doing, and how can students learn to do the same? We report on a study in which experts (>10 years of geoscience research experience) and novices (undergrad psychology students) examine shaded-relief/color-coded images of topography/bathymetry, while answering questions aloud and being eye-tracked. Images were a global map, two high-res images of continental terrain and two of oceanic terrain, with hi-res localities chosen to display distinctive traces of important earth processes. The differences in what they look at as recorded by eye-tracking are relatively subtle. On the global image, novices tend to focus on continents, whereas experts distribute their attention more evenly across continents and oceans. Experts universally access the available scale information (distance scale, lat/long axes), whereas most students do not. Novices do attend substantially and spontaneously to the salient geomorphological features in the high-res images: seamounts, mid-ocean ridge/transform intersection, erosional river channels, and compressional ridges and valley system. The more marked differences come in what respondents see, as captured in video recordings of their words and gestures in response to experimenter's questions. When their attention is directed to a small and distinctive part of a high-res image and they are asked to "….describe what you see…", experts typically produce richly detailed descriptions that may include the regional depth/altitude, local relief, shape and spatial distribution of major features, symmetry or lack thereof, cross-cutting relationships, presence of lineations and their orientations, and similar geomorphological details. 
Following or interwoven with these rich descriptions, some experts also offer interpretations of causal Earth processes. We identified four types of novice answers: (a) "flat" answers, in which the student describes the patches of color on the screen with no mention of shape or relief; (b) "thing" answers, in which the student mentions an inappropriate object, such as "the Great Wall of China," (c) geomorphology answers, in which the student talks about depth/altitude, relief, or shapes of landforms, and (d) process answers, in which student talks about earth processes, such as earthquakes, erosion, or plate tectonics. Novice "geomorphology" (c) answers resemble expert responses, but lack the rich descriptive detail. The "process" (d) category includes many interpretations that lack any grounding in the evidentiary base available in the viewed data. These findings suggest that instruction around earth data should include an emphasis on thoroughly and accurately describing the features that are present in the data--a skill that our experts display and our novices mostly lack. It is unclear, though, how best to sequence the teaching of descriptive and interpretive skills, since the experts' attention to empirical features in the data is steered by their knowledge of which features have causal significance.

  9. Applications of Sentinel-2 data for agriculture and forest monitoring using the absolute difference (ZABUD) index derived from the AgroEye software (ESA)

    NASA Astrophysics Data System (ADS)

    de Kok, R.; WeŻyk, P.; PapieŻ, M.; Migo, L.

    2017-10-01

    To convince new users of the advantages of the Sentinel-2 sensor, a simplification of classic remote sensing tools makes it possible to create a platform of communication among domain specialists in agricultural analysis, visual image interpreters, and remote sensing programmers. An index value, known in the remote sensing user domain as "Zabud", was selected to represent, in color, the essentials of a time series analysis. The color index, used in a color atlas, offers a working platform for agricultural field control. This creates a database of test and training areas that enables rapid anomaly detection in the agricultural domain. The use cases and simplifications now function as an introduction to Sentinel-2 based remote sensing in an area that previously relied on VHR imagery and aerial data, serving mainly visual interpretation. The database extension with detected anomalies allows developers of open source software to design solutions for further agricultural control with remote sensing.
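The abstract does not give the exact ZABUD formulation, so as an assumption this sketch only illustrates the general idea of an absolute-difference change index: a per-pixel absolute change between two acquisition dates of a band time series, which can then be color-coded for visual field control.

```python
import numpy as np

def absolute_difference_index(band_t1, band_t2):
    """Per-pixel absolute change between two acquisition dates.

    Hypothetical illustration of an absolute-difference index; the
    actual ZABUD definition may differ.
    """
    return np.abs(band_t2.astype(float) - band_t1.astype(float))

# Two toy 2x2 reflectance grids for the same field at two dates.
t1 = np.array([[0.2, 0.4], [0.6, 0.8]])
t2 = np.array([[0.2, 0.1], [0.9, 0.8]])
diff = absolute_difference_index(t1, t2)  # zero where nothing changed
```

Thresholding or color-coding `diff` would then flag candidate anomaly parcels for visual inspection.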

  10. V-Sipal - a Virtual Laboratory for Satellite Image Processing and Analysis

    NASA Astrophysics Data System (ADS)

    Buddhiraju, K. M.; Eeti, L.; Tiwari, K. K.

    2011-09-01

    In this paper a virtual laboratory for Satellite Image Processing and Analysis (v-SIPAL) being developed at the Indian Institute of Technology Bombay is described. v-SIPAL comprises a set of experiments that are normally carried out by students learning digital processing and analysis of satellite images using commercial software. Currently, the experiments that are available on the server include Image Viewer, Image Contrast Enhancement, Image Smoothing, Edge Enhancement, Principal Component Transform, Texture Analysis by Co-occurrence Matrix method, Image Indices, Color Coordinate Transforms, Fourier Analysis, Mathematical Morphology, Unsupervised Image Classification, Supervised Image Classification and Accuracy Assessment. The virtual laboratory includes a theory module for each option of every experiment, a description of the procedure to perform each experiment, the menu to choose and perform the experiment, a module on interpretation of results when performed with a given image and pre-specified options, bibliography, links to useful internet resources and user-feedback. The user can upload his/her own images for performing the experiments and can also reuse outputs of one experiment in another experiment where applicable. Some of the other experiments currently under development include georeferencing of images, data fusion, feature evaluation by divergence and J-M distance, image compression, wavelet image analysis and change detection. Additions to the theory module include self-assessment quizzes, audio-video clips on selected concepts, and a discussion of elements of visual image interpretation. v-SIPAL is at the stage of internal evaluation within IIT Bombay and will soon be open to selected educational institutions in India for evaluation.
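One of the listed experiments, the Principal Component Transform, can be sketched for a multiband image as an eigendecomposition of the band covariance matrix. This is a generic textbook formulation on synthetic data, not v-SIPAL's actual implementation.

```python
import numpy as np

def principal_components(image):
    """Principal Component Transform of a (rows, cols, bands) image.

    Returns PC bands ordered by decreasing variance. A minimal sketch
    of the standard covariance/eigenvector formulation.
    """
    rows, cols, bands = image.shape
    x = image.reshape(-1, bands).astype(float)
    x -= x.mean(axis=0)                     # center each band
    cov = np.cov(x, rowvar=False)           # band-by-band covariance
    vals, vecs = np.linalg.eigh(cov)        # eigh returns ascending order
    order = np.argsort(vals)[::-1]          # largest variance first
    return (x @ vecs[:, order]).reshape(rows, cols, bands)

# Synthetic 3-band image purely for illustration.
rng = np.random.default_rng(0)
img = rng.normal(size=(8, 8, 3))
pcs = principal_components(img)
```

The first PC band concentrates most of the inter-band variance, which is why the transform is useful for enhancement and compression exercises.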

  11. Panoramic autofluorescence: highlighting retinal pathology.

    PubMed

    Slotnick, Samantha; Sherman, Jerome

    2012-05-01

    Recent technological advances in fundus autofluorescence (FAF) are providing new opportunities for insight into retinal physiology and pathophysiology. FAF provides distinctly different imaging information than standard photography or color separation. A review of the basis for this imaging technology is included to help the clinician understand how to interpret FAF images. Cases are presented to illustrate image interpretation. Optos, which manufactures equipment for simultaneous panoramic imaging, has recently outfitted several units with AF capabilities. Six cases are presented in which panoramic autofluorescent (PAF) images highlight retinal pathology, using Optos' Ultra-Widefield technology. Supportive imaging technologies, such as Optomap® images and spectral domain optical coherence tomography (SD-OCT), are used to assist in the clinical interpretation of retinal pathology detected on PAF. Hypofluorescent regions on FAF are found to occur together with a disruption in the photoreceptors and/or retinal pigment epithelium, as borne out on SD-OCT. Hyperfluorescent regions on FAF occur at the advancing zones of retinal degeneration, indicating impending damage. PAF enables such inferences to be made in retinal areas that lie beyond the reach of SD-OCT imaging. PAF also enhances clinical pattern recognition over a large area and in comparison with the fellow eye. Symmetric retinal degenerations often occur with genetic conditions, such as retinitis pigmentosa, and may impel the clinician to recommend genetic testing. Autofluorescent ophthalmoscopy is a non-invasive procedure that can detect changes in metabolic activity at the retinal pigment epithelium before they are apparent on clinical ophthalmoscopy. Already, AF is being used as an adjunct technology to fluorescein angiography in cases of age-related macular degeneration. Both hyper- and hypoautofluorescent changes are indicative of pathology. Peripheral retinal abnormalities may precede central retinal involvement, potentially providing early signs for intervention before visual acuity is affected. The panoramic image enhances clinical pattern recognition over a large area and in comparison between eyes. Optos' Ultra-Widefield technology is capable of capturing high-resolution images of the peripheral retina without requiring dilation.

  12. Oblique synoptic images, produced from digital data, display strong evidence of a "new" caldera in southwestern Guatemala

    USGS Publications Warehouse

    Duffield, W.; Heiken, G.; Foley, D.; McEwen, A.

    1993-01-01

    The synoptic view of broad regions of the Earth's surface as displayed in Landsat and other satellite images has greatly aided in the recognition of calderas, ignimbrite plateaus and other geologic landforms. Remote-sensing images that include visual representation of depth are an even more powerful tool for geologic interpretation of landscapes, but their use has been largely restricted to the exploration of planets other than Earth. By combining Landsat images with digitized topography, we have generated regional oblique views that display compelling evidence for a previously undocumented late-Cenozoic caldera within the active volcanic zone of southwestern Guatemala. This "new" caldera, herein called Xela, is a depression about 30 km wide and 400-600 m deep, which includes the Quezaltenango basin. The caldera depression is breached only by a single river canyon. The caldera outline is broadly circular, but a locally scalloped form suggests the occurrence of multiple caldera-collapse events, or local slumping of steep caldera walls, or both. Within its northern part, Xela caldera contains a toreva block, about 500 m high and 2 km long, that may be incompletely foundered pre-caldera bedrock. Xela contains several post-caldera volcanoes, some of which are active. A Bouguer gravity low, tens of milligals in amplitude, is approximately co-located with the proposed caldera. The oblique images also display an extensive plateau that dips about 2° away from the north margin of Xela caldera. We interpret this landform to be underlain by pyroclastic outflow from Xela and nearby Atitlán calderas. Field mapping by others has documented a voluminous rhyolitic pumiceous fallout deposit immediately east of Xela caldera. We speculate that Xela caldera was the source of this deposit. If so, the age of at least part of the caldera is between about 84 ka and 126 ka, the ages of deposits that stratigraphically bracket this fallout. 
Most of the floor of Xela caldera is covered with Los Chocoyos pyroclastics, 84-ka deposits erupted from Atitlán caldera. Oblique images produced from digital data are unique tools that can greatly facilitate initial geologic interpretation of morphologically young volcanic (and other) terrains where field access is limited, especially because conventional visual representations commonly lack depth perspective and may cover only part of the region of interest. © 1993.

  13. THE EFFECT OF PHOTOPIGMENT BLEACHING ON FUNDUS AUTOFLUORESCENCE IN ACUTE CENTRAL SEROUS CHORIORETINOPATHY.

    PubMed

    Choi, Kwang-Eon; Yun, Cheolmin; Kim, Young-Ho; Kim, Seong-Woo; Oh, Jaeryung; Huh, Kuhl

    2017-03-01

    To evaluate the effect of photobleaching on fundus autofluorescence (FAF) images in acute central serous chorioretinopathy. We obtained prephotobleaching and postphotobleaching images using an Optomap 200Tx, and photobleaching was induced with a Heidelberg Retina Angiograph 2. Degrees of photobleaching were assessed as grayscale values in Optomap images. Concordances among the three kinds of images were analyzed. Hyper-AF lesions in prephotobleaching images were classified as Type 1 (changed to normal-AF after photobleaching) and Type 2 (unchanged after photobleaching). The FAF composite patterns of central serous chorioretinopathy lesions were classified as diffuse or mottled. Initial and final best-corrected visual acuity, central retinal thickness, and disease duration were compared according to fovea FAF type. Forty-one eyes of 41 patients were analyzed. The lesion brightness of postphotobleaching Optomap FAF showed greater concordance with Heidelberg Retina Angiograph 2 FAF (94.74%) than the prephotobleaching Optomap FAF (80.49%). Eyes with Type 1 fovea had greater initial and final best-corrected visual acuity (20/23 vs. 20/41, 20/21 vs. 20/32, P < 0.0001, P = 0.001, respectively) and shorter disease duration (19.68 ± 12.98 vs. 51.55 ± 44.98 days, P = 0.043) than those with Type 2 fovea. However, eyes with diffuse Type 2 fovea had only lower initial and final best-corrected visual acuity (20/23 vs. 20/45, 20/21 vs. 20/36, P < 0.0001, P < 0.0001, respectively) than those with Type 1 fovea. Understanding the photobleaching effect is necessary for the accurate interpretation of FAF images. Furthermore, comparing prephotobleaching and postphotobleaching FAF images may be helpful for estimation of lesion status in central serous chorioretinopathy.

  14. A framework for farmland parcels extraction based on image classification

    NASA Astrophysics Data System (ADS)

    Liu, Guoying; Ge, Wenying; Song, Xu; Zhao, Hongdan

    2018-03-01

    It is very important for the government to build an accurate national basic cultivated land database, and farmland parcel extraction is one of the basic steps in this work. However, in the past, people had to spend much time determining whether an area was a farmland parcel, since they could understand remote sensing images only through visual interpretation. To overcome this problem, this study proposes a method to extract farmland parcels by means of image classification. In the proposed method, farmland areas and ridge areas of the classification map are semantically processed independently, and the results are fused together to form the final farmland parcels. Experiments on high-spatial-resolution remote sensing images have shown the effectiveness of the proposed method.

  15. Development and implementation of software systems for imaging spectroscopy

    USGS Publications Warehouse

    Boardman, J.W.; Clark, R.N.; Mazer, A.S.; Biehl, L.L.; Kruse, F.A.; Torson, J.; Staenz, K.

    2006-01-01

    Specialized software systems have played a crucial role throughout the twenty-five year course of the development of the new technology of imaging spectroscopy, or hyperspectral remote sensing. By their very nature, hyperspectral data place unique and demanding requirements on the computer software used to visualize, analyze, process and interpret them. Often described as a marriage of the two technologies of reflectance spectroscopy and airborne/spaceborne remote sensing, imaging spectroscopy, in fact, produces data sets with unique qualities, unlike previous remote sensing or spectrometer data. Because of these unique spatial and spectral properties hyperspectral data are not readily processed or exploited with legacy software systems inherited from either of the two parent fields of study. This paper provides brief reviews of seven important software systems developed specifically for imaging spectroscopy.

  16. Control of Wind Tunnel Operations Using Neural Net Interpretation of Flow Visualization Records

    NASA Technical Reports Server (NTRS)

    Buggele, Alvin E.; Decker, Arthur J.

    1994-01-01

    Neural net control of operations in a small subsonic/transonic/supersonic wind tunnel at Lewis Research Center is discussed. The tunnel and the layout for neural net control or control by other parallel processing techniques are described. The tunnel is an affordable, multiuser platform for testing instrumentation and components, as well as parallel processing and control strategies. Neural nets have already been tested on archival schlieren and holographic visualizations from this tunnel as well as recent supersonic and transonic shadowgraphs. This paper discusses the performance of neural nets for interpreting shadowgraph images in connection with a recent exercise for tuning the tunnel in a subsonic/transonic cascade mode of operation. That mode was operated for performing wake surveys in connection with NASA's Advanced Subsonic Technology (AST) noise reduction program. The shadowgraphs were presented to the neural nets as 60 by 60 pixel arrays. The outputs were tunnel parameters such as valve settings or tunnel state identifiers for selected tunnel operating points, conditions, or states. The neural nets were very sensitive, perhaps too sensitive, to shadowgraph pattern detail. However, the nets exhibited good immunity to variations in brightness, to noise, and to changes in contrast. The nets are fast enough that ten or more can be combined per control operation to interpret flow visualization data, point sensor data, and model calculations. The pattern sensitivity of the nets will be utilized and tested to control wind tunnel operations at Mach 2.0 based on shock wave patterns.

  17. 2D and 3D MALDI-imaging: conceptual strategies for visualization and data mining.

    PubMed

    Thiele, Herbert; Heldmann, Stefan; Trede, Dennis; Strehlow, Jan; Wirtz, Stefan; Dreher, Wolfgang; Berger, Judith; Oetjen, Janina; Kobarg, Jan Hendrik; Fischer, Bernd; Maass, Peter

    2014-01-01

    3D imaging has a significant impact on many challenges in the life sciences, because biology is a three-dimensional phenomenon. Current 3D imaging technologies (various types of MRI, PET, SPECT) are labeled, i.e. they trace the localization of a specific compound in the body. In contrast, 3D MALDI mass spectrometry imaging (MALDI-MSI) is a label-free method that images the spatial distribution of molecular compounds. It complements labeled 3D imaging methods, immunohistochemistry, and genetics-based methods. However, 3D MALDI-MSI cannot tap its full potential due to the lack of statistical methods for the analysis and interpretation of large and complex 3D datasets. To overcome this, we established a complete and robust 3D MALDI-MSI pipeline combined with efficient computational data analysis methods for 3D edge-preserving image denoising, 3D spatial segmentation, and finding colocalized m/z values, which are reviewed here in detail. Furthermore, we explain why the integration and correlation of the MALDI imaging data with other imaging modalities enhances the interpretation of the molecular data and provides visualization of molecular patterns that may otherwise not be apparent. Therefore, a 3D data acquisition workflow is described that generates a set of three 3D image modalities representing the same anatomies. First, an in-vitro MRI measurement is performed, which results in a three-dimensional image modality representing the 3D structure of the measured object. After sectioning the 3D object into N consecutive slices, all N slices are scanned using an optical digital scanner, enabling the MS measurements. Scanning the individual sections results in low-resolution images, which define the base coordinate system for the whole pipeline. 
The scanned images link the information from the spatial (MRI) and the mass spectrometric (MALDI-MSI) dimensions and are used for the spatial three-dimensional reconstruction of the object, performed by image registration techniques. Different strategies for automatic serial image registration applied to MS datasets are outlined in detail. The third image modality is histology driven, i.e. a high-resolution digital scan of the histologically stained slices. After fusion of the reconstructed scan images and the MRI, the slice-related coordinates of the mass spectra can be propagated into 3D space. After image registration of the scan images and the histologically stained images, the anatomical information from histology is fused with the mass spectra from MALDI-MSI. As a result of the described pipeline, we have a set of three 3D image modalities representing the same anatomies: the reconstructed slice scans, the spectral images with corresponding clustering results, and the acquired MRI. Great emphasis is put on the fact that the co-registered MRI, providing anatomical details, improves the interpretation of 3D MALDI images. The ability to relate mass-spectrometry-derived molecular information with in vivo and in vitro imaging has potentially important implications. This article is part of a Special Issue entitled: Computational Proteomics in the Post-Identification Era. Guest Editors: Martin Eisenacher and Christian Stephan. Copyright © 2013. Published by Elsevier B.V.

  18. Stereoscopic augmented reality using ultrasound volume rendering for laparoscopic surgery in children

    NASA Astrophysics Data System (ADS)

    Oh, Jihun; Kang, Xin; Wilson, Emmanuel; Peters, Craig A.; Kane, Timothy D.; Shekhar, Raj

    2014-03-01

    In laparoscopic surgery, live video provides visualization of the exposed organ surfaces in the surgical field, but is unable to show internal structures beneath those surfaces. Laparoscopic ultrasound is often used to visualize the internal structures, but its use is limited to intermittent confirmation because of the need for an extra hand to maneuver the ultrasound probe. Other limitations of using ultrasound are the difficulty of interpretation and the need for an extra port. The size of the ultrasound transducer may also be too large for use in small children. In this paper, we report on an augmented reality (AR) visualization system that features continuous hands-free volumetric ultrasound scanning of the surgical anatomy and video imaging from a stereoscopic laparoscope. The acquisition of the volumetric ultrasound image is realized by precisely controlling a back-and-forth movement of an ultrasound transducer mounted on a linear slider, and the ultrasound volume is refreshed several times per minute. The scanner sits outside the body in the envisioned use scenario and could even be integrated into the operating table. Overlaying the maximum intensity projection (MIP) of the ultrasound volume on the laparoscopic stereo video through geometric transformations yields an AR visualization system particularly suitable for children, because ultrasound is radiation-free and provides higher-quality images in small patients. The proposed AR representation promises to be better than one using ultrasound slice data.
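The maximum intensity projection used for the overlay can be sketched as a per-ray maximum through the intensity volume. A minimal sketch on synthetic data, assuming the viewing direction coincides with one of the volume's axes (the full system would first apply the geometric transformations mentioned above).

```python
import numpy as np

def max_intensity_projection(volume, axis=0):
    """Collapse a 3D intensity volume to a 2D image by taking the
    maximum along each ray parallel to `axis`."""
    return volume.max(axis=axis)

# Toy ultrasound volume (depth, height, width) with one bright voxel.
vol = np.zeros((4, 5, 6))
vol[2, 3, 1] = 7.0
mip = max_intensity_projection(vol)  # 2D image of shape (5, 6)
```

The resulting 2D image is what would be blended into each eye of the stereo laparoscopic video.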

  19. Developing laser-based therapy monitoring of early caries in pediatric dental settings

    NASA Astrophysics Data System (ADS)

    Zhou, Yaxuan; Jiang, Yang; Kim, Amy S.; Xu, Zheng; Berg, Joel H.; Seibel, Eric J.

    2017-02-01

    Optical imaging modalities and therapy monitoring protocols are required for the emergence of non-surgical interventions for treating infections in teeth to remineralize the enamel. Current standard of visual inspection, tactile probing and radiograph for caries detection is not highly sensitive, quantitative, and safe. Furthermore, the latter two are not viable options for interproximal caries. We present preliminary results of multimodal laser-based imaging and uorescence spectroscopy in a blinded clinical study comparing two topical therapies of early interproximal caries in children. With a spacer placed interproximally both at baseline and followup examinations, the 405-nm excited red porphyrin uorescence imaging with green auto uorescence is measured and compared to a 12-month follow-up. 405-nm laser-induced uorescence spectroscopy is also measured from the center of selected multimodal video imaging frames. These results of three subjects are analyzed both qualitatively by comparing spectra and quantitatively based on uorescence region segmentation, and then are compared to the standard of care(visual examination and radiograph interpretation). Furthermore, this study points out challenges associated with optically monitoring non-surgical dental interventions over long periods of time in clinical practice and also indicates future direction for improvement on the protocol.

  20. Image processing and 3D visualization in the interpretation of patterned injury of the skin

    NASA Astrophysics Data System (ADS)

    Oliver, William R.; Altschuler, Bruce R.

    1995-09-01

    The use of image processing is becoming increasingly important in the evaluation of violent crime. While much work has been done in the use of these techniques for forensic purposes outside of forensic pathology, its use in the pathologic examination of wounding has been limited. We are investigating the use of image processing in the analysis of patterned injuries and tissue damage. Our interests are currently concentrated on 1) the use of image processing techniques to aid the investigator in observing and evaluating patterned injuries in photographs, 2) measurement of the 3D shape characteristics of surface lesions, and 3) correlation of patterned injuries with deep tissue injury as a problem in 3D visualization. We are beginning investigations in data-acquisition problems for performing 3D scene reconstructions from the pathology perspective of correlating tissue injury to scene features and trace evidence localization. Our primary tool for correlation of surface injuries with deep tissue injuries has been the comparison of processed surface injury photographs with 3D reconstructions from antemortem CT and MRI data. We have developed a prototype robot for the acquisition of 3D wound and scene data.

  1. Evaluating a CCD film digitizer: comparing interpretation accuracy with original film readings

    NASA Astrophysics Data System (ADS)

    Gitlin, Joseph N.; Scott, William W., Jr.; Bell, Kathryn; Narayan, Anand

    2002-04-01

    The purpose of the study was to determine if there were differences between the interpretations of radiographic images resulting from digitizing films using a recently developed CCD unit, and the readings of the original films. The general hypothesis to be tested was that there were no significant differences in the measures of accuracy, sensitivity, specificity and ROC analyses when the interpretations related to the two modes were compared. The authors selected 120 radiographic examinations for the study from departmental teaching files, which included chest, abdomen, extremity and other cases that were considered difficult to interpret. The authors also selected six specific abnormalities visualized on 60 of the cases as true positives to classify the reports. After anonymizing the patient identification, the films were digitized and independently interpreted by four board-certified radiologists. Each reader read all of the examinations, half on film alternators and the other half on a high-resolution soft-copy workstation. No reader interpreted the same examination more than once. As of this date, the preliminary results indicate that the hypothesis will be accepted, but more analyses of the data must be performed to confirm the early findings. The additional work will include complete verification of data entry and classification of interpretations, a detailed review of perceived image quality and completion of the ROC analysis by pairs of readers. If the results are confirmed, radiologists, other physicians and administrators will have a reliable alternative to conventional film practice, with increased access to remote primary diagnosis and consultation using high-speed telecommunication media.
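The ROC analysis mentioned above can be sketched via the rank-statistic form of the area under the ROC curve: the probability that a randomly chosen positive case receives a higher reader confidence score than a randomly chosen negative case. The scores below are hypothetical, not the study's data.

```python
import numpy as np

def roc_auc(scores_pos, scores_neg):
    """Area under the ROC curve as the probability that a positive case
    outscores a negative one, with ties counted as 0.5 (the
    Wilcoxon/Mann-Whitney formulation)."""
    pos = np.asarray(scores_pos, float)[:, None]
    neg = np.asarray(scores_neg, float)[None, :]
    return float(np.mean((pos > neg) + 0.5 * (pos == neg)))

# Hypothetical reader confidence ratings for diseased vs. normal cases.
auc = roc_auc([0.9, 0.8, 0.6], [0.7, 0.3, 0.2])
```

An AUC near 1.0 indicates the reader separates the two case groups well; 0.5 corresponds to chance performance.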

  2. Is the Charcot and Bernard case (1883) of loss of visual imagery really based on neurological impairment?

    PubMed

    Zago, Stefano; Allegri, Nicola; Cristoffanini, Marta; Ferrucci, Roberta; Porta, Mauro; Priori, Alberto

    2011-11-01

    INTRODUCTION. The Charcot and Bernard case of visual imagery, Monsieur X, is a classic case in the history of neuropsychology. Published in 1883, it has been considered the first case of visual imagery loss due to brain injury, and even in recent times it has been given a neurological valence. However, the presence of analogous cases of loss of visual imagery in the psychiatric field has led us to hypothesise functional rather than organic origins. METHODS. To assess the validity of such an inference, we compared the symptomatology of Monsieur X with that found in cases of loss of visual mental images, both psychiatric and neurological, presented in the literature. RESULTS. The clinical findings show strong assonances between the Monsieur X case and the symptoms manifested over time by patients with functionally based loss of visual imagery. CONCLUSION. Although Monsieur X's damage was initially interpreted as neurological, reports of similar symptoms in the psychiatric field lead us to postulate a functional cause for his impairment as well.

  3. Comparison of visual and automated Deki Reader interpretation of malaria rapid diagnostic tests in rural Tanzanian military health facilities.

    PubMed

    Kalinga, Akili K; Mwanziva, Charles; Chiduo, Sarah; Mswanya, Christopher; Ishengoma, Deus I; Francis, Filbert; Temu, Lucky; Mahikwano, Lucas; Mgata, Saidi; Amoo, George; Anova, Lalaine; Wurrapa, Eyako; Zwingerman, Nora; Ferro, Santiago; Bhat, Geeta; Fine, Ian; Vesely, Brian; Waters, Norman; Kreishman-Deitrick, Mara; Hickman, Mark; Paris, Robert; Kamau, Edwin; Ohrt, Colin; Kavishe, Reginald A

    2018-05-29

    Although microscopy is the gold standard diagnostic tool for malaria, it is infrequently used in poor resource settings because of the unavailability of laboratory facilities and the absence of skilled readers. Malaria rapid diagnostic tests (RDTs) are currently used instead of, or as an adjunct to, microscopy. However, at very low parasitaemia (usually < 100 asexual parasites/µl), the test line on malaria rapid diagnostic tests can be faint and consequently hard to visualize, and this may affect the interpretation of the test results. Fio Corporation (Canada) developed an automated RDT reader named Deki Reader™ for automatic analysis and interpretation of rapid diagnostic tests. This study aimed to compare visual assessment and automated Deki Reader evaluation of malaria rapid diagnostic tests against microscopy. Unlike previous studies, in which expert laboratory technicians interpreted the test results visually and operated the device, this study employed low-cadre health care workers who had not attended any formal professional training in laboratory sciences. Finger-prick blood from 1293 outpatients with fever was tested for malaria using RDTs and Giemsa-stained microscopy of thick and thin blood smears. Blood samples for RDTs were processed according to the manufacturer's instructions and analysed automatically in the Deki Reader. Malaria diagnoses were compared among visual RDT reading, automated Deki Reader RDT reading, and microscopy. The sensitivity of malaria rapid diagnostic test results interpreted by the Deki Reader was 94.1% and that of visual interpretation was 93.9%. The specificity of the Deki Reader results was 71.8% and that of human interpretation was 72.0%. The positive predictive values of malaria RDT results by the Deki Reader and visual interpretation were 75.8 and 75.4%, respectively, while the negative predictive values were 92.8 and 92.4%, respectively. 
The accuracy of RDTs as interpreted by the Deki Reader and visually was 82.6 and 82.1%, respectively. There was no significant difference in the performance of RDTs interpreted either automatically by the Deki Reader or visually by unskilled health workers. However, despite the similarities in performance parameters, the device has proven useful because it provides stepwise guidance on RDT processing, data transfer, and reporting.
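The sensitivity, specificity, and predictive values reported above follow from a standard 2×2 confusion table against the microscopy gold standard. A minimal sketch with hypothetical counts (not the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Return (sensitivity, specificity, PPV, NPV) as fractions from a
    2x2 confusion table against a gold-standard diagnosis."""
    sensitivity = tp / (tp + fn)  # true positives among diseased
    specificity = tn / (tn + fp)  # true negatives among healthy
    ppv = tp / (tp + fp)          # reliability of a positive call
    npv = tn / (tn + fn)          # reliability of a negative call
    return sensitivity, specificity, ppv, npv

# Hypothetical counts chosen only to illustrate the calculation.
sens, spec, ppv, npv = diagnostic_metrics(tp=470, fp=150, fn=30, tn=380)
```

The study's comparison amounts to computing this tuple once for the Deki Reader readings and once for the visual readings, both against microscopy.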

  4. Exploring the Relationship Between Eye Movements and Electrocardiogram Interpretation Accuracy

    NASA Astrophysics Data System (ADS)

    Davies, Alan; Brown, Gavin; Vigo, Markel; Harper, Simon; Horseman, Laura; Splendiani, Bruno; Hill, Elspeth; Jay, Caroline

    2016-12-01

    Interpretation of electrocardiograms (ECGs) is a complex task involving visual inspection. This paper aims to improve understanding of how practitioners perceive ECGs, and to determine whether visual behaviour can indicate differences in interpretation accuracy. A group of healthcare practitioners (n = 31) who interpret ECGs as part of their clinical role were shown 11 commonly encountered ECGs on a computer screen. The participants' eye movement data were recorded as they viewed the ECGs and attempted interpretation. The Jensen-Shannon distance was computed between two Markov chains constructed from the transition matrices (visual shifts from and to ECG leads) of the correct and incorrect interpretation groups for each ECG. A permutation test was then used to compare this distance against 10,000 randomly shuffled groups made up of the same participants. The results demonstrated a statistically significant (α = 0.05) result in 5 of the 11 stimuli, demonstrating that the gaze shift between the ECG leads differs between the groups making correct and incorrect interpretations and is therefore a factor in interpretation accuracy. The results shed further light on the relationship between visual behaviour and ECG interpretation accuracy, providing information that can be used to improve both human and automated interpretation approaches.
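The analysis above can be sketched as: pool lead-to-lead gaze transitions into a distribution per group, take the Jensen-Shannon distance between the two groups, and compare it against randomly relabelled groups. This is a simplified reading of the method on hypothetical gaze sequences, not the paper's code or data.

```python
import numpy as np

def transition_dist(sequences, n_leads):
    """Joint distribution over lead-to-lead gaze transitions,
    pooled over all sequences in one group."""
    counts = np.zeros((n_leads, n_leads))
    for seq in sequences:
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
    return counts.ravel() / counts.sum()

def js_distance(p, q):
    """Jensen-Shannon distance: square root of the JS divergence
    (natural log); 0 for identical distributions."""
    m = 0.5 * (p + q)
    def kl(x, y):
        nz = x > 0  # where x > 0, m >= x/2 > 0, so the ratio is safe
        return np.sum(x[nz] * np.log(x[nz] / y[nz]))
    return np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

def permutation_p(group_a, group_b, n_leads, n_perm=10_000, seed=0):
    """Fraction of random relabellings whose group distance is at
    least as large as the observed one."""
    rng = np.random.default_rng(seed)
    observed = js_distance(transition_dist(group_a, n_leads),
                           transition_dist(group_b, n_leads))
    pooled = list(group_a) + list(group_b)
    k = len(group_a)
    hits = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        a = [pooled[i] for i in idx[:k]]
        b = [pooled[i] for i in idx[k:]]
        hits += js_distance(transition_dist(a, n_leads),
                            transition_dist(b, n_leads)) >= observed
    return hits / n_perm
```

A small p-value then indicates the correct and incorrect groups shift their gaze between leads in genuinely different patterns.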

  5. Automated recognition of microcalcification clusters in mammograms

    NASA Astrophysics Data System (ADS)

    Bankman, Isaac N.; Christens-Barry, William A.; Kim, Dong W.; Weinberg, Irving N.; Gatewood, Olga B.; Brody, William R.

    1993-07-01

    The widespread and increasing use of mammographic screening for early breast cancer detection is placing a significant strain on clinical radiologists. Large numbers of radiographic films have to be visually interpreted in fine detail to determine the subtle hallmarks of cancer that may be present. We developed an algorithm for detecting microcalcification clusters, the most common and useful signs of early, potentially curable breast cancer. We describe this algorithm, which utilizes contour map representations of digitized mammographic films, and discuss its benefits in overcoming difficulties often encountered in algorithmic approaches to radiographic image processing. We present experimental analyses of mammographic films employing this contour-based algorithm and discuss practical issues relevant to its use in an automated film interpretation instrument.

  6. Colour in digital pathology: a review.

    PubMed

    Clarke, Emily L; Treanor, Darren

    2017-01-01

    Colour is central to the practice of pathology because of the use of coloured histochemical and immunohistochemical stains to visualize tissue features. Our reliance upon histochemical stains and light microscopy has evolved alongside a wide variation in slide colour, with little investigation into the implications of colour variation. However, the introduction of the digital microscope and whole-slide imaging has highlighted the need for further understanding and control of colour. This is because the digitization process itself introduces further colour variation, which may affect diagnosis, and image analysis algorithms often use colour or intensity measures to detect or measure tissue features. The US Food and Drug Administration has recently released guidance stating the need to develop a method of controlling colour reproduction throughout the digitization process in whole-slide imaging for primary diagnostic use. This comprehensive review introduces applied basic colour physics and colour interpretation by the human visual system, before discussing the importance of colour in pathology. The process of colour calibration and its application to pathology are also included, as well as a summary of the current guidelines and recommendations regarding colour in digital pathology. © 2016 John Wiley & Sons Ltd.

  7. Mapping lava flow textures using three-dimensional measures of surface roughness

    NASA Astrophysics Data System (ADS)

    Mallonee, H. C.; Kobs-Nawotniak, S. E.; McGregor, M.; Hughes, S. S.; Neish, C.; Downs, M.; Delparte, D.; Lim, D. S. S.; Heldmann, J. L.

    2016-12-01

    Lava flow emplacement conditions are reflected in the surface textures of a lava flow; unravelling these conditions is crucial to understanding the eruptive history and characteristics of basaltic volcanoes. Mapping lava flow textures from visual imagery alone is an inherently subjective process: the images generally lack the resolution needed to make these determinations, and transitional textures such as rubbly and slabby pāhoehoe are especially challenging to identify because they are similar in appearance and defined qualitatively. This is particularly problematic for interpreting planetary lava flow textures, where we have more limited data. We present a tool to objectively classify lava flow textures based on quantitative measures of roughness, including the 2D Hurst exponent, RMS height, and 2D:3D surface area ratio. We collected aerial images at Craters of the Moon National Monument (COTM) using Unmanned Aerial Vehicles (UAVs) in 2015 and 2016 as part of the FINESSE (Field Investigations to Enable Solar System Science and Exploration) and BASALT (Biologic Analog Science Associated with Lava Terrains) research projects. The aerial images were stitched together to create Digital Terrain Models (DTMs) with resolutions on the order of centimeters. The DTMs were evaluated by the classification tool described above, with output compared against field assessment of the texture. Further, the DTMs were downsampled and reevaluated to assess the efficacy of the classification tool at data resolutions similar to current datasets from other planetary bodies. This tool allows objective classification of lava flow texture, which enables more accurate interpretations of flow characteristics. This work also gives context for interpretations of flows with comparatively low data resolutions, such as those on the Moon and Mars.
Textural maps based on quantitative measures of roughness are a valuable asset for studies of lava flows on Earth and other planetary bodies.
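    Two of the roughness measures named in this record can be illustrated concretely. The sketch below computes RMS height and a 2D:3D surface-area ratio for a small gridded DTM; it is a toy implementation under simplifying assumptions (uniform cell size, per-cell triangulation), not the team's actual tool, and the Hurst-exponent estimation is omitted.

```python
import math

def rms_height(dtm):
    """RMS deviation of elevations from the mean elevation (roughness proxy)."""
    z = [v for row in dtm for v in row]
    mean = sum(z) / len(z)
    return math.sqrt(sum((v - mean) ** 2 for v in z) / len(z))

def surface_area_ratio(dtm, cell=1.0):
    """3D surface area (two triangles per grid cell) over planimetric 2D area."""
    def tri_area(p, q, r):
        ux, uy, uz = (q[i] - p[i] for i in range(3))
        vx, vy, vz = (r[i] - p[i] for i in range(3))
        cx, cy, cz = uy*vz - uz*vy, uz*vx - ux*vz, ux*vy - uy*vx
        return 0.5 * math.sqrt(cx*cx + cy*cy + cz*cz)
    rows, cols = len(dtm), len(dtm[0])
    area3d = 0.0
    for i in range(rows - 1):
        for j in range(cols - 1):
            p00 = (j*cell, i*cell, dtm[i][j])
            p01 = ((j+1)*cell, i*cell, dtm[i][j+1])
            p10 = (j*cell, (i+1)*cell, dtm[i+1][j])
            p11 = ((j+1)*cell, (i+1)*cell, dtm[i+1][j+1])
            area3d += tri_area(p00, p01, p11) + tri_area(p00, p11, p10)
    area2d = (rows - 1) * (cols - 1) * cell * cell
    return area3d / area2d
```

    A perfectly flat DTM gives RMS height 0 and ratio 1; rougher textures push the ratio above 1, which is the property the classification exploits.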

  8. Ultrasound Images of the Tongue: A Tutorial for Assessment and Remediation of Speech Sound Errors.

    PubMed

    Preston, Jonathan L; McAllister Byun, Tara; Boyce, Suzanne E; Hamilton, Sarah; Tiede, Mark; Phillips, Emily; Rivera-Campos, Ahmed; Whalen, Douglas H

    2017-01-03

    Diagnostic ultrasound imaging has been a common tool in medical practice for several decades. It provides a safe and effective method for imaging structures internal to the body. There has been a recent increase in the use of ultrasound technology to visualize the shape and movements of the tongue during speech, both in typical speakers and in clinical populations. Ultrasound imaging of speech has greatly expanded our understanding of how sounds articulated with the tongue (lingual sounds) are produced. Such information can be particularly valuable for speech-language pathologists. Among other advantages, ultrasound images can be used during speech therapy to provide (1) illustrative models of typical (i.e. "correct") tongue configurations for speech sounds, and (2) a source of insight into the articulatory nature of deviant productions. The images can also be used as an additional source of feedback for clinical populations learning to distinguish their better productions from their incorrect productions, en route to establishing more effective articulatory habits. Ultrasound feedback is increasingly used by scientists and clinicians as both the expertise of the users increases and the expense of the equipment declines. In this tutorial, procedures are presented for collecting ultrasound images of the tongue in a clinical context. We illustrate these procedures in an extended example featuring one common error sound, American English /r/. Images of correct and distorted /r/ are used to demonstrate (1) how to interpret ultrasound images, (2) how to assess tongue shape during production of speech sounds, (3) how to categorize tongue shape errors, and (4) how to provide visual feedback to elicit a more appropriate and functional tongue shape. We present a sample protocol for using real-time ultrasound images of the tongue for visual feedback to remediate speech sound errors. Additionally, example data are shown to illustrate outcomes with the procedure.

  9. High-throughput neuroimaging-genetics computational infrastructure

    PubMed Central

    Dinov, Ivo D.; Petrosyan, Petros; Liu, Zhizhong; Eggert, Paul; Hobel, Sam; Vespa, Paul; Woo Moon, Seok; Van Horn, John D.; Franco, Joseph; Toga, Arthur W.

    2014-01-01

    Many contemporary neuroscientific investigations face significant challenges in terms of data management, computational processing, data mining, and results interpretation. These four pillars define the core infrastructure necessary to plan, organize, orchestrate, validate, and disseminate novel scientific methods, computational resources, and translational healthcare findings. Data management includes protocols for data acquisition, archival, query, transfer, retrieval, and aggregation. Computational processing involves the necessary software, hardware, and networking infrastructure required to handle large amounts of heterogeneous neuroimaging, genetics, clinical, and phenotypic data and meta-data. Data mining refers to the process of automatically extracting data features, characteristics and associations, which are not readily visible by human exploration of the raw dataset. Results interpretation includes scientific visualization, community validation, and reproducibility of findings. In this manuscript we describe the novel high-throughput neuroimaging-genetics computational infrastructure available at the Institute for Neuroimaging and Informatics (INI) and the Laboratory of Neuro Imaging (LONI) at University of Southern California (USC). INI and LONI include ultra-high-field and standard-field MRI brain scanners along with an imaging-genetics database for storing the complete provenance of the raw and derived data and meta-data. In addition, the institute provides a large number of software tools for image and shape analysis, mathematical modeling, genomic sequence processing, and scientific visualization. A unique feature of this architecture is the Pipeline environment, which integrates the data management, processing, transfer, and visualization. 
Through its client-server architecture, the Pipeline environment provides a graphical user interface for designing, executing, monitoring, validating, and disseminating complex protocols that utilize diverse suites of software tools and web-services. These pipeline workflows are represented as portable XML objects that transfer the execution instructions and user specifications from the client user machine to remote pipeline servers for distributed computing. Using Alzheimer's and Parkinson's data, we provide several examples of translational applications built on this infrastructure. PMID:24795619

  10. A new integrated dual time-point amyloid PET/MRI data analysis method.

    PubMed

    Cecchin, Diego; Barthel, Henryk; Poggiali, Davide; Cagnin, Annachiara; Tiepolt, Solveig; Zucchetta, Pietro; Turco, Paolo; Gallo, Paolo; Frigo, Anna Chiara; Sabri, Osama; Bui, Franco

    2017-11-01

    In the initial evaluation of patients with suspected dementia and Alzheimer's disease, there is no consensus on how to perform semiquantification of amyloid in such a way that it: (1) facilitates visual qualitative interpretation, (2) takes the kinetic behaviour of the tracer into consideration particularly with regard to at least partially correcting for blood flow dependence, (3) analyses the amyloid load based on accurate parcellation of cortical and subcortical areas, (4) includes partial volume effect correction (PVEC), (5) includes MRI-derived topographical indexes, (6) enables application to PET/MRI images and PET/CT images with separately acquired MR images, and (7) allows automation. A method with all of these characteristics was retrospectively tested in 86 subjects who underwent amyloid (18F-florbetaben) PET/MRI in a clinical setting (using images acquired 90-110 min after injection, 53 were classified visually as amyloid-negative and 33 as amyloid-positive). Early images after tracer administration were acquired between 0 and 10 min after injection, and later images were acquired between 90 and 110 min after injection. PVEC of the PET data was carried out using the geometric transfer matrix method. Parametric images and some regional output parameters, including two innovative "dual time-point" indexes, were obtained. Subjects classified visually as amyloid-positive showed a sparse tracer uptake in the primary sensory, motor and visual areas in accordance with the isocortical stage of the topographic distribution of the amyloid plaque (Braak stages V/VI). In patients classified visually as amyloid-negative, the method revealed detectable levels of tracer uptake in the basal portions of the frontal and temporal lobes, areas that are known to be sites of early deposition of amyloid plaques that probably represented early accumulation (Braak stage A) that is typical of normal ageing. 
There was a strong correlation between age and the indexes of the new dual time-point amyloid imaging method in amyloid-negative patients. The method can be considered a valuable tool in both routine clinical practice and in the research setting as it will standardize data regarding amyloid deposition. It could potentially also be used to identify early amyloid plaque deposition in younger subjects in whom treatment could theoretically be more effective.

  11. Earth Science Multimedia Theater

    NASA Technical Reports Server (NTRS)

    Hasler, A. F.

    1998-01-01

    The presentation will begin with the latest 1998 NASA Earth Science Vision for the next 25 years. A compilation of the 10 days of Hurricane Georges animations that NASA supplied daily to network television will be shown, along with NASA's visualizations of Hurricane Bonnie, which appeared in the Sept. 7, 1998 issue of TIME magazine. Highlights will be shown from the NASA hurricane visualization resource video tape that has been used repeatedly this season on network TV. Results will be presented from a new paper on automatic wind measurements in Hurricane Luis from 1-min GOES images that will appear in the October BAMS. The visualizations are produced by the Goddard Visualization & Analysis Laboratory, and Scientific Visualization Studio, as well as other Goddard and NASA groups using NASA, NOAA, ESA, and NASDA Earth science datasets. Visualizations will be shown from the "Digital-HyperRes-Panorama" Earth Science ETheater'98 recently presented in Tokyo, Paris and Phoenix. The presentation in Paris used a SGI/CRAY Onyx Infinite Reality Super Graphics Workstation at 2560 X 1024 resolution with dual synchronized video Epson 7100 projectors on a 20ft wide screen. Earth Science Electronic Theater '99 is being prepared for a December 1st showing at NASA HQ in Washington and a January presentation at the AMS meetings in Dallas. The 1999 version of the Etheater will be triple wide with a resolution of 3840 X 1024 on a 60 ft wide screen. Visualizations will also be featured from the new Earth Today Exhibit which was opened by Vice President Gore on July 2, 1998 at the Smithsonian Air & Space Museum in Washington, as well as those presented for possible use at the American Museum of Natural History (NYC), Disney EPCOT, and other venues. New methods are demonstrated for visualizing, interpreting, comparing, organizing and analyzing immense Hyperimage remote sensing datasets and three dimensional numerical model results. 
We call the data from many new Earth-sensing satellites "Hyperimage" datasets because they have such high resolution in the spectral, temporal, spatial, and dynamic range domains. The traditional numerical spreadsheet paradigm has been extended to develop a scientific visualization approach for processing Hyperimage datasets and 3D model results interactively. The advantages of extending the powerful spreadsheet style of computation to multiple sets of images and organizing image processing were demonstrated using the Distributed Image SpreadSheet (DISS).

  12. How lateral inhibition and fast retinogeniculo-cortical oscillations create vision: A new hypothesis.

    PubMed

    Jerath, Ravinder; Cearley, Shannon M; Barnes, Vernon A; Nixon-Shapiro, Elizabeth

    2016-11-01

    The role of the physiological processes involved in human vision escapes clarification in current literature. Many unanswered questions about vision include: 1) whether there is more to lateral inhibition than previously proposed, 2) the role of the discs in rods and cones, 3) how inverted images on the retina are converted to erect images for visual perception, 4) what portion of the image formed on the retina is actually processed in the brain, 5) the reason we have an after-image with antagonistic colors, and 6) how we remember space. This theoretical article attempts to clarify some of the physiological processes involved with human vision. The global integration of visual information is conceptual; therefore, we include illustrations to present our theory. Universally, the eyeball is 2.4 cm and works together with membrane potential, correspondingly representing the retinal layers, photoreceptors, and cortex. Images formed within the photoreceptors must first be converted into chemical signals on the photoreceptors' individual discs and the signals at each disc are transduced from light photons into electrical signals. We contend that the discs code the electrical signals into accurate distances, as shown in our figures. The pre-existing oscillations among the various cortices including the striate and parietal cortex, and the retina work in unison to create an infrastructure of visual space that functionally "places" the objects within this "neural" space. The horizontal layers integrate all discs accurately to create a retina that is pre-coded for distance. Our theory suggests image inversion never takes place on the retina, but rather images fall onto the retina as compressed and coiled, then amplified through lateral inhibition through intensification and amplification on the OFF-center cones. The intensified and amplified images are decompressed and expanded in the brain, which become the images we perceive as external vision. 
This is a theoretical article presenting a novel hypothesis about the physiological processes in vision, and expounds upon the visual aspect of two of our previously published articles, "A unified 3D default space consciousness model combining neurological and physiological processes that underlie conscious experience", and "Functional representation of vision within the mind: A visual consciousness model based in 3D default space." Currently, neuroscience teaches that visual images are initially inverted on the retina, processed in the brain, and then conscious perception of vision happens in the visual cortex. Here, we propose that inversion of visual images never takes place because images enter the retina as coiled and compressed graded potentials that are intensified and amplified in OFF-center photoreceptors. Once they reach the brain, they are decompressed and expanded to the original size of the image, which is perceived by the brain as the external image. We adduce that pre-existing oscillations (alpha, beta, and gamma) among the various cortices in the brain (including the striate and parietal cortex) and the retina, work together in unison to create an infrastructure of visual space that functionally "places" the objects within a "neural" space. These fast oscillations "bring" the faculties of the cortical activity to the retina, creating the infrastructure of the space within the eye where visual information can be immediately recognized by the brain. By this we mean that the visual (striate) cortex synchronizes the information with the photoreceptors in the retina, and the brain instantaneously receives the already processed visual image, thereby relinquishing the eye from being required to send the information to the brain to be interpreted before it can rise to consciousness. The visual system is a heavily studied area of neuroscience yet very little is known about how vision occurs. 
We believe that our novel hypothesis provides new insights into how vision becomes part of consciousness, helps to reconcile various previously proposed models, and further elucidates current questions in vision based on our unified 3D default space model. Illustrations are provided to aid in explaining our theory. Copyright © 2016. Published by Elsevier Ltd.

  13. Night vision: requirements and possible roadmap for FIR and NIR systems

    NASA Astrophysics Data System (ADS)

    Källhammer, Jan-Erik

    2006-04-01

    A night vision system must increase visibility in situations where only low beam headlights can be used today. As pedestrians and animals have the highest risk increase in night time traffic due to darkness, the ability of detecting those objects should be the main performance criterion, and the system must remain effective when facing the headlights of oncoming vehicles. Far infrared (FIR) systems have been shown to be superior to near infrared (NIR) systems in terms of pedestrian detection distance. Near infrared images were rated to have significantly higher visual clutter compared with far infrared images. Visual clutter has been shown to correlate with reduction in detection distance of pedestrians. Far infrared images are perceived as being more unusual and therefore more difficult to interpret, although the image appearance is likely related to the lower visual clutter. However, the main issue in comparing the two technologies should be how well they solve the driver's problem of insufficient visibility under low beam conditions, especially of pedestrians and other vulnerable road users. With the addition of an automatic detection aid, a main issue will be whether the advantage of FIR systems will vanish given NIR systems with well-performing automatic pedestrian detection functionality. The first night vision introductions did not generate the sales volumes initially expected. A renewed interest in night vision systems is, however, to be expected after the release of night vision systems by BMW, Mercedes and Honda, the latter with automatic pedestrian detection.

  14. A computational theory of visual receptive fields.

    PubMed

    Lindeberg, Tony

    2013-12-01

    A receptive field constitutes a region in the visual field where a visual cell or a visual operator responds to visual stimuli. This paper presents a theory for what types of receptive field profiles can be regarded as natural for an idealized vision system, given a set of structural requirements on the first stages of visual processing that reflect symmetry properties of the surrounding world. These symmetry properties include (i) covariance properties under scale changes, affine image deformations, and Galilean transformations of space-time as occur for real-world image data as well as specific requirements of (ii) temporal causality implying that the future cannot be accessed and (iii) a time-recursive updating mechanism of a limited temporal buffer of the past as is necessary for a genuine real-time system. Fundamental structural requirements are also imposed to ensure (iv) mutual consistency and a proper handling of internal representations at different spatial and temporal scales. It is shown how a set of families of idealized receptive field profiles can be derived by necessity regarding spatial, spatio-chromatic, and spatio-temporal receptive fields in terms of Gaussian kernels, Gaussian derivatives, or closely related operators. Such image filters have been successfully used as a basis for expressing a large number of visual operations in computer vision, regarding feature detection, feature classification, motion estimation, object recognition, spatio-temporal recognition, and shape estimation. Hence, the associated so-called scale-space theory constitutes a framework for expressing visual operations that is both theoretically well-founded and general. There are very close similarities between receptive field profiles predicted from this scale-space theory and receptive field profiles found by cell recordings in biological vision. 
Among the family of receptive field profiles derived by necessity from the assumptions, idealized models with very good qualitative agreement are obtained for (i) spatial on-center/off-surround and off-center/on-surround receptive fields in the fovea and the LGN, (ii) simple cells with spatial directional preference in V1, (iii) spatio-chromatic double-opponent neurons in V1, (iv) space-time separable spatio-temporal receptive fields in the LGN and V1, and (v) non-separable space-time tilted receptive fields in V1, all within the same unified theory. In addition, the paper presents a more general framework for relating and interpreting these receptive fields conceptually and possibly predicting new receptive field profiles as well as for pre-wiring covariance under scaling, affine, and Galilean transformations into the representations of visual stimuli. This paper describes the basic structure of the necessity results concerning receptive field profiles regarding the mathematical foundation of the theory and outlines how the proposed theory could be used in further studies and modelling of biological vision. It is also shown how receptive field responses can be interpreted physically, as the superposition of relative variations of surface structure and illumination variations, given a logarithmic brightness scale, and how receptive field measurements will be invariant under multiplicative illumination variations and exposure control mechanisms.
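    The Gaussian-derivative receptive-field models that the theory derives can be sampled directly. Below is a minimal, self-contained sketch of 1D Gaussian and Gaussian-derivative kernels together with a naive convolution; it uses finite sampling and truncation rather than the continuous scale-space operators of the paper, so it is illustrative only.

```python
import math

def gaussian_kernel(sigma, order=0):
    """Sampled 1D Gaussian (order 0) or its first/second derivative,
    truncated at 4*sigma."""
    r = int(math.ceil(4 * sigma))
    xs = list(range(-r, r + 1))
    g = [math.exp(-x * x / (2 * sigma * sigma)) /
         (sigma * math.sqrt(2 * math.pi)) for x in xs]
    if order == 0:
        s = sum(g)
        return [v / s for v in g]          # normalize to unit sum
    if order == 1:
        return [-x / sigma**2 * v for x, v in zip(xs, g)]
    if order == 2:
        return [(x * x - sigma**2) / sigma**4 * v for x, v in zip(xs, g)]
    raise ValueError("order must be 0, 1 or 2")

def convolve(signal, kernel):
    """'Same'-size 1D convolution (kernel flipped) with zero padding."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = i - (k - r)                # true convolution: flip the kernel
            if 0 <= j < len(signal):
                acc += w * signal[j]
        out.append(acc)
    return out
```

    Convolving a step edge with the first-derivative kernel produces a response peaked at the edge, the classic behaviour of the simple-cell-like directional operators discussed above.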

  15. In Vivo Time-gated Fluorescence Imaging with Biodegradable Luminescent Porous Silicon Nanoparticles

    PubMed Central

    Gu, Luo; Hall, David J.; Qin, Zhengtao; Anglin, Emily; Joo, Jinmyoung; Mooney, David J.; Howell, Stephen B.; Sailor, Michael J.

    2014-01-01

    Fluorescence imaging is one of the most versatile and widely used visualization methods in biomedical research. However, tissue autofluorescence is a major obstacle confounding interpretation of in vivo fluorescence images. The unusually long emission lifetime (5-13 μs) of photoluminescent porous silicon nanoparticles can allow the time-gated imaging of tissues in vivo, completely eliminating shorter-lived (< 10 ns) emission signals from organic chromophores or tissue autofluorescence. Here, using a conventional animal imaging system not optimized for such long-lived excited states, we demonstrate improvement of signal-to-background contrast ratio by > 50-fold in vitro and by > 20-fold in vivo when imaging porous silicon nanoparticles. Time-gated imaging of porous silicon nanoparticles accumulated in a human ovarian cancer xenograft following intravenous injection is demonstrated in a live mouse. The potential for multiplexing of images in the time domain by using separate porous silicon nanoparticles engineered with different excited state lifetimes is discussed. PMID:23933660
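    The arithmetic behind time gating is simple: assuming idealized mono-exponential decays, the fraction of emission collected after a gate delay t_d is exp(-t_d/τ), so a microsecond-lifetime probe survives a ~100 ns gate essentially intact while nanosecond autofluorescence is suppressed by many orders of magnitude. The sketch below is a back-of-envelope illustration; the lifetimes are taken from the abstract (5-13 μs vs. < 10 ns) and the specific gate delay is an assumed example, not a parameter from the paper.

```python
import math

def fraction_after_gate(tau_ns, delay_ns):
    """Fraction of a mono-exponential decay's total emission collected
    after the gate opens at delay_ns."""
    return math.exp(-delay_ns / tau_ns)

def contrast_gain(tau_probe_ns, tau_background_ns, delay_ns):
    """Signal-to-background improvement from rejecting early photons."""
    return (fraction_after_gate(tau_probe_ns, delay_ns) /
            fraction_after_gate(tau_background_ns, delay_ns))
```

    For example, with an 8 μs probe lifetime, a 5 ns autofluorescence lifetime, and a 100 ns gate, the probe retains roughly 99% of its signal while the background is attenuated by a factor of about e^20, an enormous theoretical contrast gain; instrument limitations explain why the measured improvements are more modest.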

  16. MSL: Facilitating automatic and physical analysis of published scientific literature in PDF format.

    PubMed

    Ahmed, Zeeshan; Dandekar, Thomas

    2015-01-01

    Published scientific literature contains millions of figures, including information about the results obtained from different scientific experiments e.g. PCR-ELISA data, microarray analysis, gel electrophoresis, mass spectrometry data, DNA/RNA sequencing, diagnostic imaging (CT/MRI and ultrasound scans), and medicinal imaging like electroencephalography (EEG), magnetoencephalography (MEG), electrocardiography (ECG), positron-emission tomography (PET) images. The importance of biomedical figures has been widely recognized in scientific and medical communities, as they play a vital role in providing major original data, experimental and computational results in concise form. One major challenge for implementing a system for scientific literature analysis is extracting and analyzing text and figures from published PDF files by physical and logical document analysis. Here we present a product-line-architecture-based bioinformatics tool, 'Mining Scientific Literature (MSL)', which supports the extraction of text and images by interpreting all kinds of published PDF files using advanced data mining and image processing techniques. It provides modules for the marginalization of extracted text based on different coordinates and keywords, visualization of extracted figures, and extraction of embedded text from all kinds of biological and biomedical figures using applied Optical Character Recognition (OCR). Moreover, for further analysis and usage, it generates the system's output in different formats including text, PDF, XML and image files. Hence, MSL is an easy-to-install and easy-to-use analysis tool for interpreting published scientific literature in PDF format.

  17. Arabic word recognizer for mobile applications

    NASA Astrophysics Data System (ADS)

    Khanna, Nitin; Abdollahian, Golnaz; Brame, Ben; Boutin, Mireille; Delp, Edward J.

    2011-03-01

    When traveling in a region where the local language is not written using a "Roman alphabet," translating written text (e.g., documents, road signs, or placards) is a particularly difficult problem since the text cannot be easily entered into a translation device or searched using a dictionary. To address this problem, we are developing the "Rosetta Phone," a handheld device (e.g., PDA or mobile telephone) capable of acquiring an image of the text, locating the region (word) of interest within the image, and producing both an audio and a visual English interpretation of the text. This paper presents a system targeted for interpreting words written in Arabic script. The goal of this work is to develop an autonomous, segmentation-free Arabic phrase recognizer, with computational complexity low enough to deploy on a mobile device. A prototype of the proposed system has been deployed on an iPhone with a suitable user interface. The system was tested on a number of noisy images, in addition to the images acquired from the iPhone's camera. It identifies Arabic words or phrases by extracting appropriate features and assigning "codewords" to each word or phrase. On a dictionary of 5,000 words, the system uniquely mapped (word-image to codeword) 99.9% of the words. The system has an 82% recognition accuracy on images of words captured using the iPhone's built-in camera.

  18. High pitch third generation dual-source CT: Coronary and Cardiac Visualization on Routine Chest CT

    PubMed Central

    Sandfort, Veit; Ahlman, Mark; Jones, Elizabeth; Selwaness, Mariana; Chen, Marcus; Folio, Les; Bluemke, David A.

    2016-01-01

    Background: Chest CT scans are frequently performed in radiology departments but have not previously contained detailed depiction of cardiac structures. Objectives: To evaluate myocardial and coronary visualization on high-pitch non-gated CT of the chest using 3rd generation dual-source computed tomography (CT). Methods: Cardiac anatomy of patients who had 3rd generation, non-gated high-pitch contrast-enhanced chest CT and who also had prior conventional (low pitch) chest CT as part of a chest-abdomen-pelvis exam was evaluated. Cardiac image features were scored by reviewers blinded to diagnosis and pitch. Paired analysis was performed. Results: 3862 coronary segments and 2220 cardiac structures were evaluated by two readers in 222 CT scans. Most patients (97.2%) had chest CT for oncologic evaluation. The median pitch was 2.34 (IQR 2.05, 2.65) in high pitch and 0.8 (IQR 0.8, 0.8) in low pitch scans (p<0.001). High pitch CT showed higher image visualization scores for all cardiovascular structures compared with conventional pitch scans (p<0.0001). Coronary arteries were visualized in 9 coronary segments per exam in high pitch scans versus 2 segments for conventional pitch (p<0.0001). Radiation exposure was lower in the high pitch group compared with the conventional pitch group (median CTDIvol 10.83 vs. 12.36 mGy and DLP 790 vs. 827 mGycm respectively, p <0.01 for both) with comparable image noise (p=0.43). Conclusion: Myocardial structure and coronary arteries are frequently visualized on non-gated 3rd generation chest CT. These results raise the question of whether the heart and coronary arteries should be routinely interpreted on routine chest CT that is otherwise obtained for non-cardiac indications. PMID:27133589

  19. Astronomy textbook images: do they really help students?

    NASA Astrophysics Data System (ADS)

    Testa, Italo; Leccia, Silvio; Puddu, Emanuella

    2014-05-01

    In this paper we present a study on the difficulties secondary school students experience in interpreting textbook images of elementary astronomical phenomena, namely, the changing of the seasons, solar and lunar eclipses, and Moon phases. Six images from a commonly used textbook in Italian secondary schools were selected. Interviews of 45 min about the astronomical concepts related to the images were carried out with eighteen students attending the last year of secondary school (aged 17-18). Students’ responses were analyzed through a semiotic framework based on the different types of visual representation structures. We found that the wide range of difficulties shown by students stems from naïve or alternative ideas rooted in incorrect or inadequate geometric models of the addressed phenomena. As a primary implication of this study, we suggest that teachers should pay attention to specific iconic features of the discussed images, e.g., the compositional structure and the presence of real/symbolic elements.

  20. Secure steganographic communication algorithm based on self-organizing patterns.

    PubMed

    Saunoriene, Loreta; Ragulskis, Minvydas

    2011-11-01

    A secure steganographic communication algorithm based on patterns evolving in a Beddington-de Angelis-type predator-prey model with self- and cross-diffusion is proposed in this paper. Small perturbations of initial states of the system around the state of equilibrium result in the evolution of self-organizing patterns. Small differences between initial perturbations also result in slight differences in the evolving patterns. It is shown that the generation of interpretable target patterns cannot be considered a secure means of communication, because contours of the secret image can be retrieved from the cover image using statistical techniques if it represents only small perturbations of the initial states of the system. An alternative approach, in which the cover image is the self-organizing pattern that has evolved from initial states perturbed using the dot-skeleton representation of the secret image, can be considered a safe visual communication technique protecting both the secret image and the communicating parties.
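    The principle at work — tiny, secret-dependent perturbations of an equilibrium state that a chaotic pattern-forming system amplifies into a macroscopic pattern — can be illustrated with a toy coupled logistic-map lattice. This is a deliberately simplified stand-in for the Beddington-DeAngelis predator-prey model of the paper, not the authors' scheme: the dot-skeleton embedding is reduced to one grid cell per secret bit, and all parameters are assumptions for illustration.

```python
def perturb_initial_state(secret_bits, size, eps=1e-3, base=0.5):
    """Nudge one grid cell per secret bit around the equilibrium value
    `base` (a crude stand-in for the dot-skeleton embedding)."""
    grid = [[base] * size for _ in range(size)]
    for k, bit in enumerate(secret_bits):
        i, j = divmod(k, size)
        grid[i][j] += eps if bit else -eps
    return grid

def evolve(grid, steps=50, d=0.1, r=3.9):
    """Coupled logistic-map lattice with diffusive coupling: a toy
    pattern-forming system with sensitive dependence on initial states."""
    size = len(grid)
    g = [row[:] for row in grid]
    f = lambda x: r * x * (1 - x)          # chaotic local reaction
    for _ in range(steps):
        fg = [[f(v) for v in row] for row in g]
        g = [[(1 - d) * fg[i][j] + d / 4 * (
                fg[(i + 1) % size][j] + fg[(i - 1) % size][j] +
                fg[i][(j + 1) % size] + fg[i][(j - 1) % size])
              for j in range(size)] for i in range(size)]
    return g
```

    Flipping a single secret bit changes the evolved pattern macroscopically, while re-running the same secret reproduces it exactly; this is the sensitivity that makes the evolved pattern usable as a cover while hiding the tiny embedded perturbation.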

  1. Extraction of Rocky Desertification from Disp Imagery: a Case Study of Liupanshui, Guizhou, China

    NASA Astrophysics Data System (ADS)

    Zhou, G.; Wu, Z.; Wang, W.; Shi, Y.; Mao, G.; Huang, Y.; Jia, B.; Gao, G.; Chen, P.

    2017-09-01

    Karst rocky desertification is a typical type of land degradation in Guizhou Province, China, with serious ecological and economic implications for the local people. This paper utilized declassified intelligence satellite photography (DISP) from the 1960s to extract karst rocky desertification areas and analyze the early situation of karst rocky desertification in Liupanshui, Guizhou, China. Because ground control points and satellite parameters are lacking, a polynomial orthographic correction model that accounts for altitude differences is proposed for the orthorectification of DISP imagery. With the proposed model, 96 DISP images from four missions were orthorectified and assembled into a seamless image map of the karst area of Guizhou, China. The assembled image map was converted into a thematic map of karst rocky desertification for Liupanshui city by visual interpretation, and the extraction of rocky desertification was conducted on this basis.
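    The polynomial-correction idea can be illustrated with a generic least-squares 2-D polynomial fit to tie points. The paper's altitude-difference term and exact model form are not given in the abstract, so this is a minimal sketch under those assumptions; `fit_polynomial_warp` and the synthetic distortion are hypothetical.

    ```python
    import numpy as np

    def fit_polynomial_warp(src, dst, order=2):
        """Least-squares fit of a 2-D polynomial mapping src -> dst.

        src, dst: (N, 2) arrays of (x, y) tie-point coordinates in the
        raw image and on the ground, respectively.
        """
        def design(pts):
            x, y = pts[:, 0], pts[:, 1]
            return np.stack([x**i * y**j
                             for i in range(order + 1)
                             for j in range(order + 1 - i)], axis=1)
        coeffs, *_ = np.linalg.lstsq(design(src), dst, rcond=None)
        return lambda pts: design(np.asarray(pts, float)) @ coeffs

    # synthetic tie points with a known quadratic distortion
    rng = np.random.default_rng(1)
    src = rng.uniform(0, 100, (20, 2))
    dst = src + 0.01 * src**2
    warp = fit_polynomial_warp(src, dst)
    ```

    In practice the fitted `warp` would be applied to every pixel coordinate of a DISP frame to resample it onto the ground grid.
    
    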

  2. Object localization in handheld thermal images for fireground understanding

    NASA Astrophysics Data System (ADS)

    Vandecasteele, Florian; Merci, Bart; Jalalvand, Azarakhsh; Verstockt, Steven

    2017-05-01

    Despite the broad application of handheld thermal imaging cameras in firefighting, their use is mostly limited to subjective interpretation by the person carrying the device. To overcome this limitation, object localization and classification mechanisms could assist fireground understanding and help with the automated localization, characterization and spatio-temporal (spreading) analysis of the fire. Automated understanding of thermal images can enrich conventional knowledge-based firefighting techniques with information from data- and sensing-driven approaches. In this work, transfer learning is applied to multi-label convolutional neural network architectures for object localization and recognition in monocular visual, infrared and multispectral dynamic images. Furthermore, the possibility of analyzing fire scene images is studied and the current limitations are discussed. Finally, understanding of the room configuration (i.e., object locations) for indoor localization in reduced-visibility environments, and the link with Building Information Models (BIM), are investigated.

  3. The Ovine Cerebral Venous System: Comparative Anatomy, Visualization, and Implications for Translational Research

    PubMed Central

    Nitzsche, Björn; Lobsien, Donald; Seeger, Johannes; Schneider, Holm; Boltze, Johannes

    2014-01-01

    Cerebrovascular diseases are significant causes of death and disability in humans. Improving diagnostic and therapeutic approaches strongly relies on the adequate gyrencephalic large animal models demanded by translational research. Ovine stroke models may represent a promising approach but are currently limited by insufficient knowledge regarding the venous system of the cerebral angioarchitecture. The present study was intended to provide a comprehensive anatomical analysis of the intracranial venous system in sheep as a reliable basis for the interpretation of experimental results in such ovine models. We used corrosion casts as well as contrast-enhanced magnetic resonance venography to scrutinize blood drainage from the brain. This combined approach yielded detailed and, to some extent, novel findings. In particular, we provide evidence for chordae Willisii and lateral venous lacunae, and report on connections between the dorsal and ventral sinuses in this species. For the first time, we also describe venous confluences in the deep cerebral venous system and an ‘anterior condylar confluent’ as seen in humans. This report provides a detailed reference for the interpretation of venous diagnostic imaging findings in sheep, including an assessment of structure detectability by in vivo (imaging) versus ex vivo (corrosion cast) visualization methods. Moreover, it features a comprehensive interspecies comparison of the venous cerebral angioarchitecture in man, rodents, canines and sheep as a relevant large animal model species, and describes possible implications for translational cerebrovascular research. PMID:24736654

  4. A new false color composite technique for dust enhancement and point source determination in Middle East

    NASA Astrophysics Data System (ADS)

    Karimi, Khadijeh; Taheri Shahraiyni, Hamid; Habibi Nokhandan, Majid; Hafezi Moghaddas, Naser; Sanaeifar, Melika

    2011-11-01

    Dust storms occur in the Middle East with very high frequency, and given their effects it is vital to study them. The first step in such a study is the enhancement of dust storms and the determination of their point sources. In this paper, a new false color composite (FCC) map for dust storm enhancement and point source determination in the Middle East has been developed. Twenty-eight Terra-MODIS images from 2008 and 2009 were utilized in this study. We tried to replace the red, green and blue bands in RGB maps with bands or maps that enhance dust storms. Hence, well-known indices for dust storm detection (NDDI, D and BTD) were generated using different bands of the MODIS images. These indices, together with some MODIS bands, were used to generate FCC maps with different combinations. Among the different combinations, the four best FCC maps were selected and compared using visual interpretation. The results of the visual interpretations showed that the best FCC map for the enhancement of dust storms in the Middle East is a particular combination of the three indices (Red: D, Green: BTD, Blue: NDDI). Therefore, we utilized this new FCC method for the enhancement of dust storms and the determination of point sources in the Middle East.
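    The band-replacement step can be sketched directly: the three dust indices are scaled and stacked into the R, G and B channels. The abstract does not specify the scaling used, so the per-scene min-max normalization and the name `dust_fcc` below are illustrative assumptions.

    ```python
    import numpy as np

    def dust_fcc(d_index, btd, nddi):
        """Compose the false-color map with R = D, G = BTD, B = NDDI.

        Each index is min-max scaled to [0, 1] per scene before stacking
        (one plausible choice, not necessarily the authors' own).
        """
        def scale(a):
            a = np.asarray(a, dtype=float)
            return (a - a.min()) / (a.max() - a.min() + 1e-12)
        return np.dstack([scale(d_index), scale(btd), scale(nddi)])

    # toy index fields standing in for MODIS-derived D, BTD and NDDI
    rng = np.random.default_rng(0)
    rgb = dust_fcc(rng.normal(size=(4, 5)),
                   rng.normal(size=(4, 5)),
                   rng.normal(size=(4, 5)))
    ```

    The result is an (H, W, 3) array in [0, 1], ready to display or export as the FCC image.
    
    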

  5. Informatics in radiology (infoRAD): navigating the fifth dimension: innovative interface for multidimensional multimodality image navigation.

    PubMed

    Rosset, Antoine; Spadola, Luca; Pysher, Lance; Ratib, Osman

    2006-01-01

    The display and interpretation of images obtained by combining three-dimensional data acquired with two different modalities (eg, positron emission tomography and computed tomography) in the same subject require complex software tools that allow the user to adjust the image parameters. With the current fast imaging systems, it is possible to acquire dynamic images of the beating heart, which add a fourth dimension of visual information-the temporal dimension. Moreover, images acquired at different points during the transit of a contrast agent or during different functional phases add a fifth dimension-functional data. To facilitate real-time image navigation in the resultant large multidimensional image data sets, the authors developed a Digital Imaging and Communications in Medicine-compliant software program. The open-source software, called OsiriX, allows the user to navigate through multidimensional image series while adjusting the blending of images from different modalities, image contrast and intensity, and the rate of cine display of dynamic images. The software is available for free download at http://homepage.mac.com/rossetantoine/osirix. (c) RSNA, 2006.

  6. What Do You See?

    ERIC Educational Resources Information Center

    Coleman, Julianne Maner; Goldston, M. Jenice

    2011-01-01

    When students draw observations or interpret and draw a diagram, they're communicating their understandings of science and demonstrating visual literacy abilities. Visual literacy includes skills needed to accurately interpret and produce visual and graphical information such as drawings, diagrams, tables, charts, maps, and graphs. Communication…

  7. Modeling of electron-specimen interaction in scanning electron microscope for e-beam metrology and inspection: challenges and perspectives

    NASA Astrophysics Data System (ADS)

    Suzuki, Makoto; Kameda, Toshimasa; Doi, Ayumi; Borisov, Sergey; Babin, Sergey

    2018-03-01

    The interpretation of scanning electron microscopy (SEM) images of the latest semiconductor devices is not intuitive and requires comparison with computed images based on theoretical modeling and simulations. For quantitative image prediction and geometrical reconstruction of the specimen structure, the accuracy of the physical model is essential. In this paper, we review current models of electron-solid interaction and discuss their accuracy. We compare simulated results with our experiments on SEM overlay of under-layers, grain imaging of copper interconnects, and hole-bottom visualization by angular-selective detectors, and show that our model reproduces the experimental results well. Remaining issues for quantitative simulation are also discussed, including the accuracy of the charge dynamics, the treatment of the beam skirt, and the explosive increase in computing time.

  8. WebGIVI: a web-based gene enrichment analysis and visualization tool.

    PubMed

    Sun, Liang; Zhu, Yongnan; Mahmood, A S M Ashique; Tudor, Catalina O; Ren, Jia; Vijay-Shanker, K; Chen, Jian; Schmidt, Carl J

    2017-05-04

    A major challenge of high-throughput transcriptome studies is presenting the data to researchers in an interpretable format. In many cases, the outputs of such studies are gene lists, which are then examined for enriched biological concepts. One approach to help the researcher interpret large gene datasets is to associate genes with informative terms (iTerms) obtained from the biomedical literature using the eGIFT text-mining system. However, examining large lists of iTerm and gene pairs is a daunting task. We have developed WebGIVI, an interactive web-based visualization tool ( http://raven.anr.udel.edu/webgivi/ ) to explore gene:iTerm pairs. WebGIVI was built with the Cytoscape and Data-Driven Documents (D3) JavaScript libraries and can be used to relate genes to iTerms and then visualize gene and iTerm pairs. WebGIVI can accept a gene list that is used to retrieve the gene symbols and the corresponding iTerm list. This list can be submitted to visualize the gene:iTerm pairs using two distinct methods: a Concept Map or a Cytoscape Network Map. In addition, WebGIVI supports uploading and visualizing any two-column tab-separated data. WebGIVI provides an interactive, integrated network graph of genes and iTerms that allows filtering, sorting, and grouping, which can aid biologists in developing hypotheses based on the input gene lists. In addition, WebGIVI can visualize hundreds of nodes and generate the high-resolution images important for most research publications. The source code can be freely downloaded at https://github.com/sunliang3361/WebGIVI . The WebGIVI tutorial is available at http://raven.anr.udel.edu/webgivi/tutorial.php .
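    The two-column tab-separated input that WebGIVI accepts can be parsed into a bipartite gene-to-iTerm adjacency map along the following lines. This is an illustrative parser, not WebGIVI's own code; `load_pairs` is a hypothetical helper.

    ```python
    from collections import defaultdict

    def load_pairs(tsv_text):
        """Parse gene<TAB>iTerm lines into a bipartite adjacency map,
        the shape of data behind a Concept Map or network view."""
        graph = defaultdict(set)
        for line in tsv_text.strip().splitlines():
            gene, iterm = line.split("\t")
            graph[gene].add(iterm)
        return dict(graph)

    pairs = "BRCA1\tDNA repair\nBRCA1\ttumor suppressor\nTP53\tapoptosis"
    graph = load_pairs(pairs)
    ```

    Each key is a gene node and each set holds the iTerm nodes it connects to, which is sufficient for filtering, sorting, or grouping before rendering.
    
    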

  9. Progress in high-level exploratory vision

    NASA Astrophysics Data System (ADS)

    Brand, Matthew

    1993-08-01

    We have been exploring the hypothesis that vision is an explanatory process, in which causal and functional reasoning about potential motion plays an intimate role in mediating the activity of low-level visual processes. In particular, we have explored two of the consequences of this view for the construction of purposeful vision systems: Causal and design knowledge can be used to (1) drive focus of attention, and (2) choose between ambiguous image interpretations. An important result of visual understanding is an explanation of the scene's causal structure: How action is originated, constrained, and prevented, and what will happen in the immediate future. In everyday visual experience, most action takes the form of motion, and most causal analysis takes the form of dynamical analysis. This is even true of static scenes, where much of a scene's interest lies in how possible motions are arrested. This paper describes our progress in developing domain theories and visual processes for the understanding of various kinds of structured scenes, including structures built out of children's constructive toys and simple mechanical devices.

  10. Motion Direction Biases and Decoding in Human Visual Cortex

    PubMed Central

    Wang, Helena X.; Merriam, Elisha P.; Freeman, Jeremy

    2014-01-01

    Functional magnetic resonance imaging (fMRI) studies have relied on multivariate analysis methods to decode visual motion direction from measurements of cortical activity. Above-chance decoding has been commonly used to infer the motion-selective response properties of the underlying neural populations. Moreover, patterns of reliable response biases across voxels that underlie decoding have been interpreted to reflect maps of functional architecture. Using fMRI, we identified a direction-selective response bias in human visual cortex that: (1) predicted motion-decoding accuracy; (2) depended on the shape of the stimulus aperture rather than the absolute direction of motion, such that response amplitudes gradually decreased with distance from the stimulus aperture edge corresponding to motion origin; and (3) was present in V1, V2, V3, but not evident in MT+, explaining the higher motion-decoding accuracies reported previously in early visual cortex. These results demonstrate that fMRI-based motion decoding has little or no dependence on the underlying functional organization of motion selectivity. PMID:25209297

  11. More visual mind wandering occurrence during visual task performance: Modality of the concurrent task affects how the mind wanders.

    PubMed

    Choi, HeeSun; Geden, Michael; Feng, Jing

    2017-01-01

    Mind wandering has been considered a mental process that is either independent of the concurrent task or regulated like a secondary task. These accounts predict that the form of mind wandering (i.e., images or words) should be either unaffected by or different from the modality (i.e., visual or auditory) of the concurrent task. Findings from this study challenge these accounts. We measured the rate and the form of mind wandering in three task conditions: fixation, visual 2-back, and auditory 2-back. Contrary to the general expectation, we found that mind wandering was more likely to occur in the same form as the task. This result can be interpreted in light of recent findings on overlapping brain activations during internally and externally oriented processes. Our result highlights the importance of considering the unique interplay between internal and external mental processes and of measuring mind wandering as a multifaceted rather than a unitary construct.

  12. Digital Museum of Retinal Ganglion Cells with Dense Anatomy and Physiology.

    PubMed

    Bae, J Alexander; Mu, Shang; Kim, Jinseop S; Turner, Nicholas L; Tartavull, Ignacio; Kemnitz, Nico; Jordan, Chris S; Norton, Alex D; Silversmith, William M; Prentki, Rachel; Sorek, Marissa; David, Celia; Jones, Devon L; Bland, Doug; Sterling, Amy L R; Park, Jungman; Briggman, Kevin L; Seung, H Sebastian

    2018-05-17

    When 3D electron microscopy and calcium imaging are used to investigate the structure and function of neural circuits, the resulting datasets pose new challenges of visualization and interpretation. Here, we present a new kind of digital resource that encompasses almost 400 ganglion cells from a single patch of mouse retina. An online "museum" provides a 3D interactive view of each cell's anatomy, as well as graphs of its visual responses. The resource reveals two aspects of the retina's inner plexiform layer: an arbor segregation principle governing structure along the light axis and a density conservation principle governing structure in the tangential plane. Structure is related to visual function; ganglion cells with arbors near the layer of ganglion cell somas are more sustained in their visual responses on average. Our methods are potentially applicable to dense maps of neuronal anatomy and physiology in other parts of the nervous system. Copyright © 2018 Elsevier Inc. All rights reserved.

  13. Falcons pursue prey using visual motion cues: new perspectives from animal-borne cameras

    PubMed Central

    Kane, Suzanne Amador; Zamani, Marjon

    2014-01-01

    This study reports on experiments on falcons wearing miniature videocameras mounted on their backs or heads while pursuing flying prey. Videos of hunts by a gyrfalcon (Falco rusticolus), gyrfalcon (F. rusticolus)/Saker falcon (F. cherrug) hybrids and peregrine falcons (F. peregrinus) were analyzed to determine apparent prey positions on their visual fields during pursuits. These video data were then interpreted using computer simulations of pursuit steering laws observed in insects and mammals. A comparison of the empirical and modeling data indicates that falcons use cues due to the apparent motion of prey on the falcon's visual field to track and capture flying prey via a form of motion camouflage. The falcons also were found to maintain their prey's image at visual angles consistent with using their shallow fovea. These results should prove relevant for understanding the co-evolution of pursuit and evasion, as well as the development of computer models of predation and the integration of sensory and locomotion systems in biomimetic robots. PMID:24431144

  14. Falcons pursue prey using visual motion cues: new perspectives from animal-borne cameras.

    PubMed

    Kane, Suzanne Amador; Zamani, Marjon

    2014-01-15

    This study reports on experiments on falcons wearing miniature videocameras mounted on their backs or heads while pursuing flying prey. Videos of hunts by a gyrfalcon (Falco rusticolus), gyrfalcon (F. rusticolus)/Saker falcon (F. cherrug) hybrids and peregrine falcons (F. peregrinus) were analyzed to determine apparent prey positions on their visual fields during pursuits. These video data were then interpreted using computer simulations of pursuit steering laws observed in insects and mammals. A comparison of the empirical and modeling data indicates that falcons use cues due to the apparent motion of prey on the falcon's visual field to track and capture flying prey via a form of motion camouflage. The falcons also were found to maintain their prey's image at visual angles consistent with using their shallow fovea. These results should prove relevant for understanding the co-evolution of pursuit and evasion, as well as the development of computer models of predation and the integration of sensory and locomotion systems in biomimetic robots.

  15. More visual mind wandering occurrence during visual task performance: Modality of the concurrent task affects how the mind wanders

    PubMed Central

    Choi, HeeSun; Geden, Michael

    2017-01-01

    Mind wandering has been considered a mental process that is either independent of the concurrent task or regulated like a secondary task. These accounts predict that the form of mind wandering (i.e., images or words) should be either unaffected by or different from the modality (i.e., visual or auditory) of the concurrent task. Findings from this study challenge these accounts. We measured the rate and the form of mind wandering in three task conditions: fixation, visual 2-back, and auditory 2-back. Contrary to the general expectation, we found that mind wandering was more likely to occur in the same form as the task. This result can be interpreted in light of recent findings on overlapping brain activations during internally and externally oriented processes. Our result highlights the importance of considering the unique interplay between internal and external mental processes and of measuring mind wandering as a multifaceted rather than a unitary construct. PMID:29240817

  16. 'You see?' Teaching and learning how to interpret visual cues during surgery.

    PubMed

    Cope, Alexandra C; Bezemer, Jeff; Kneebone, Roger; Lingard, Lorelei

    2015-11-01

    The ability to interpret visual cues is important in many medical specialties, including surgery, in which poor outcomes are largely attributable to errors of perception rather than poor motor skills. However, we know little about how trainee surgeons learn to make judgements in the visual domain. We explored how trainees learn visual cue interpretation in the operating room. A multiple case study design was used. Participants were postgraduate surgical trainees and their trainers. Data included observer field notes, and integrated video- and audio-recordings from 12 cases representing more than 11 hours of observation. A constant comparative methodology was used to identify dominant themes. Visual cue interpretation was a recurrent feature of trainer-trainee interactions and was achieved largely through the pedagogic mechanism of co-construction. Co-construction was a dialogic sequence between trainer and trainee in which they explored what they were looking at together to identify and name structures or pathology. Co-construction took two forms: 'guided co-construction', in which the trainer steered the trainee to see what the trainer was seeing, and 'authentic co-construction', in which neither trainer nor trainee appeared certain of what they were seeing and pieced together the information collaboratively. Whether the co-construction activity was guided or authentic appeared to be influenced by case difficulty and trainee seniority. Co-construction was shown to occur verbally, through discussion, and also through non-verbal exchanges in which gestures made with laparoscopic instruments contributed to the co-construction discourse. In the training setting, learning visual cue interpretation occurs in part through co-construction. Co-construction is a pedagogic phenomenon that is well recognised in the context of learning to interpret verbal information. 
In articulating the features of co-construction in the visual domain, this work enables the development of explicit pedagogic strategies for maximising trainees' learning of visual cue interpretation. This is relevant to multiple medical specialties in which judgements must be based on visual information. © 2015 John Wiley & Sons Ltd.

  17. Software complex for geophysical data visualization

    NASA Astrophysics Data System (ADS)

    Kryukov, Ilya A.; Tyugin, Dmitry Y.; Kurkin, Andrey A.; Kurkina, Oxana E.

    2013-04-01

    The effectiveness of current research in geophysics is largely determined by the degree to which modern information technology is used for data processing and visualization. Realistic and informative visualization of the results of three-dimensional modeling of geophysical processes contributes significantly to the naturalness of physical modeling and to a detailed view of the phenomena. The main difficulty is interpreting the results of the calculations: one must be able to observe the various parameters of the three-dimensional models, build sections on different planes to evaluate certain characteristics, and make rapid assessments. Programs for the interpretation and visualization of simulations are widespread, for example software systems such as ParaView, Golden Software Surfer, Voxler, FlowVision and others. However, it is not always possible to solve the visualization problem with a single software package. Preprocessing, data transfer between packages and setting up a uniform visualization style can turn into long and routine work. In addition, special display modes are sometimes required for specific data, and existing products tend to offer mostly common features that are not always fully applicable to special cases. Rendering of dynamic data may require scripting languages, which does not relieve the user from writing code. Therefore, the task was to develop a new, original software complex for visualizing simulation results. Let us briefly list its primary features. The software complex is a graphical application with a convenient, simple user interface that displays the results of the simulation. The complex can also manage the image interactively, resize the image without loss of quality, apply two-dimensional and three-dimensional regular grids, set coordinate axes with data labels, and slice the data.
    A distinctive feature of geophysical data is its size. The detailed maps used in the simulations are large, so real-time rendering can be a difficult task even for powerful modern computers. Therefore, the performance of the software complex is an important aspect of this work. The complex is based on the latest version of the Microsoft DirectX 11 graphics API, which reduces overhead and harnesses the power of modern hardware. Each geophysical calculation is an adjustment of the mathematical model for a particular case, so the architecture of the visualization complex was created with scalability and the ability to customize visualization objects, for better visibility and comfort. In the present study, the software complex 'GeoVisual' was developed. One of the main features of this research is the use of bleeding-edge computer-graphics techniques in scientific visualization. The research was supported by the Ministry of Education and Science of the Russian Federation, project 14.B37.21.0642.

  18. Criteria for the optimal selection of remote sensing optical images to map event landslides

    NASA Astrophysics Data System (ADS)

    Fiorucci, Federica; Giordan, Daniele; Santangelo, Michele; Dutto, Furio; Rossi, Mauro; Guzzetti, Fausto

    2018-01-01

    Landslides leave discernible signs on the land surface, most of which can be captured in remote sensing images. Trained geomorphologists analyse remote sensing images and map landslides through heuristic interpretation of photographic and morphological characteristics. Despite the wide use of remote sensing images for landslide mapping, no attempt has been made to evaluate how image characteristics influence landslide identification and mapping. This paper presents an experiment to determine the effects of optical image characteristics, such as spatial resolution, spectral content and image type (monoscopic or stereoscopic), on landslide mapping. We considered eight maps of the same landslide in central Italy: (i) six maps obtained through expert heuristic visual interpretation of remote sensing images, (ii) one map obtained through a reconnaissance field survey, and (iii) one map obtained through a real-time kinematic (RTK) differential global positioning system (dGPS) survey, which served as a benchmark. The eight maps were compared pairwise and to the benchmark. The mismatch between each map pair was quantified by the error index, E. Results show that the map closest to the benchmark delineation of the landslide was obtained using the higher-resolution image where the landslide signature was primarily photographical (in the landslide source and transport area). Conversely, where the landslide signature was mainly morphological (in the landslide deposit), the best mapping result was obtained using the stereoscopic images. Although conducted on a single landslide, the results of the experiment are general, and provide useful information for deciding on the optimal imagery for the production of event, seasonal and multi-temporal landslide inventory maps.
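    A mismatch measure between two landslide maps rasterized as boolean masks can be sketched as follows. The abstract does not define E, so this uses symmetric difference over union as one plausible formulation; `error_index` is a hypothetical name, not necessarily the authors' definition.

    ```python
    import numpy as np

    def error_index(map_a, map_b):
        """Mismatch between two landslide delineations given as boolean
        rasters: area mapped by exactly one map, divided by the area
        mapped by either (0 = identical maps, 1 = disjoint maps)."""
        a = np.asarray(map_a, dtype=bool)
        b = np.asarray(map_b, dtype=bool)
        union = np.logical_or(a, b).sum()
        if union == 0:
            return 0.0
        return float(np.logical_xor(a, b).sum() / union)

    # toy delineations: one overlapping pair, one disjoint pair
    ref = np.zeros((10, 10), dtype=bool); ref[2:6, 2:6] = True
    far = np.zeros((10, 10), dtype=bool); far[6:9, 6:9] = True
    ```

    Applied pairwise across the eight maps, such an index yields the kind of comparison matrix the experiment relies on.
    
    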

  19. Life in unexpected places: Employing visual thinking strategies in global health training.

    PubMed

    Allison, Jill; Mulay, Shree; Kidd, Monica

    2017-01-01

    The desire to make meaning out of images, metaphor, and other representations indicates higher-order cognitive skills that can be difficult to teach, especially in complex and unfamiliar environments like those encountered in many global health experiences. Because reflecting on art can help develop medical students' imaginative and interpretive skills, we used visual thinking strategies (VTS) during an immersive 4-week global health elective for medical students to help them construct new understanding of the social determinants of health in a low-resource setting. We were aware of no previous formal efforts to use art in global health training. We assembled a group of eight medical students in front of a street mural in Kathmandu and used VTS methods to interpret the scene with respect to the social determinants of health. We recorded and transcribed the conversation and conducted a thematic analysis of student responses. Students shared observations about the mural in a supportive, nonjudgmental fashion. Two main themes emerged from their observations: human-environment interactions (specifically community dynamics, subsistence land use, resources, and health) and entrapment/control, particularly relating to the expectations of, and demands on, women in traditional farming communities. They used the images, as well as their experience in Nepali communities, to consolidate complex community health concepts. VTS helped students articulate their deepening understanding of the social determinants of health in Nepal, suggesting that reflection on visual art can help learners apply, analyze, and evaluate complex concepts in global health. We demonstrate the relevance of drawing upon many aspects of cultural learning, regarding art as a kind of text that holds valuable information. These findings may help provide innovative opportunities for teaching and evaluating global health training in the future.

  20. Application Of Empirical Phase Diagrams For Multidimensional Data Visualization Of High Throughput Microbatch Crystallization Experiments.

    PubMed

    Klijn, Marieke E; Hubbuch, Jürgen

    2018-04-27

    Protein phase diagrams are a tool for investigating the causes and consequences of solution conditions on protein phase behavior. The effects are scored according to aggregation morphologies such as crystals or amorphous precipitates. Solution conditions affect morphological features, such as crystal size, as well as kinetic features, such as crystal growth time. Commonly used data visualization techniques include individual line graphs and symbol-based phase diagrams. These techniques have limitations in terms of handling large datasets, comprehensiveness, or completeness. To eliminate these limitations, morphological and kinetic features obtained from crystallization images generated in high-throughput microbatch experiments have been visualized with radar charts in combination with the empirical phase diagram (EPD) method. Morphological features (crystal size, shape, and number, as well as precipitate size) and kinetic features (crystal and precipitate onset and growth times) were extracted for 768 solutions with varying chicken egg white lysozyme concentration, salt type, ionic strength and pH. Image-based aggregation morphology and kinetic features were compiled into a single, easily interpretable figure, showing that the EPD method can support high-throughput crystallization experiments in terms of both data volume and data complexity. Copyright © 2018. Published by Elsevier Inc.
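    The per-condition feature vectors feeding a radar chart can be sketched as a min-max scaling of each feature across all solution conditions. The EPD method's actual scoring and color mapping are not reproduced here; `radar_vectors` and the toy values are illustrative assumptions.

    ```python
    import numpy as np

    def radar_vectors(features):
        """Scale each morphological/kinetic feature to [0, 1] across
        all solution conditions, yielding one radar-chart vector per
        condition.  `features` has shape (conditions, features)."""
        f = np.asarray(features, dtype=float)
        lo, hi = f.min(axis=0), f.max(axis=0)
        span = np.where(hi > lo, hi - lo, 1.0)   # guard constant columns
        return (f - lo) / span

    # toy data: 3 conditions x 2 features (e.g. crystal size, onset time)
    vectors = radar_vectors([[1.0, 10.0],
                             [3.0, 30.0],
                             [2.0, 20.0]])
    ```

    Each row can then be drawn as one polygon on a shared radar axis, one polygon per solution condition.
    
    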

  1. Serial dependence in the perception of attractiveness.

    PubMed

    Xia, Ye; Leib, Allison Yamanashi; Whitney, David

    2016-12-01

    The perception of attractiveness is essential for choices of food, object, and mate preference. Like perception of other visual features, perception of attractiveness is stable despite constant changes of image properties due to factors like occlusion, visual noise, and eye movements. Recent results demonstrate that perception of low-level stimulus features and even more complex attributes like human identity are biased towards recent percepts. This effect is often called serial dependence. Some recent studies have suggested that serial dependence also exists for perceived facial attractiveness, though there is also concern that the reported effects are due to response bias. Here we used an attractiveness-rating task to test the existence of serial dependence in perceived facial attractiveness. Our results demonstrate that perceived face attractiveness was pulled by the attractiveness level of facial images encountered up to 6 s prior. This effect was not due to response bias and did not rely on the previous motor response. This perceptual pull increased as the difference in attractiveness between previous and current stimuli increased. Our results reconcile previously conflicting findings and extend previous work, demonstrating that sequential dependence in perception operates across different levels of visual analysis, even at the highest levels of perceptual interpretation.

  2. Building simple multiscale visualizations of outcrop geology using virtual reality modeling language (VRML)

    NASA Astrophysics Data System (ADS)

    Thurmond, John B.; Drzewiecki, Peter A.; Xu, Xueming

    2005-08-01

    Geological data collected from outcrop are inherently three-dimensional (3D) and span a variety of scales, from the megascopic to the microscopic. This presents challenges in both interpreting and communicating observations. The Virtual Reality Modeling Language provides an easy way for geoscientists to construct complex visualizations that can be viewed with free software. Field data in tabular form can be used to generate hierarchical multi-scale visualizations of outcrops, which can convey the complex relationships between a variety of data types simultaneously. An example from carbonate mud-mounds in southeastern New Mexico illustrates the embedding of three orders of magnitude of observation into a single visualization, for the purpose of interpreting depositional facies relationships in three dimensions. This type of raw data visualization can be built without software tools, yet is incredibly useful for interpreting and communicating data. Even simple visualizations can aid in the interpretation of complex 3D relationships that are frequently encountered in the geosciences.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daniela Ushizima, Wes Bethel

    Quant-CT is currently a plugin to ImageJ, designed as a Java class that provides a control mechanism for the user to choose volumes of interest within porous material, followed by the selection of image subsamples for automated tuning of parameters for filters and classifiers, and finally measurement of material geometry, porosity, and visualization. Denoising is mandatory before any image interpretation, and we implemented new 3D Java code that performs bilateral filtering of the data. Segmentation of the dense material is essential before any quantification of geological sample structure, and we devised new schemes to deal with over-segmentation when using the statistical region merging algorithm to pull out the grains that compose the imaged material. It makes use of the ImageJ API and other standard and third-party APIs. Quant-CT's conception started in 2011 under SciDAC-e sponsorship, and details of the first prototype were documented in the publications below. While it is currently used for microtomography images, it can potentially be used by anybody with 3D image data obtained by experiment or produced by simulation.

  4. On the importance of mathematical methods for analysis of MALDI-imaging mass spectrometry data.

    PubMed

    Trede, Dennis; Kobarg, Jan Hendrik; Oetjen, Janina; Thiele, Herbert; Maass, Peter; Alexandrov, Theodore

    2012-03-21

    In the last decade, matrix-assisted laser desorption/ionization (MALDI) imaging mass spectrometry (IMS), also called MALDI-imaging, has proven its potential in proteomics and was successfully applied to various types of biomedical problems, in particular to histopathological label-free analysis of tissue sections. In histopathology, MALDI-imaging is used as a general analytic tool revealing the functional proteomic structure of tissue sections, and as a discovery tool for detecting new biomarkers discriminating a region annotated by an experienced histologist, in particular for cancer studies. A typical MALDI-imaging data set contains 10⁸ to 10⁹ intensity values occupying more than 1 GB. Analysis and interpretation of such a huge amount of data is a mathematically, statistically and computationally challenging problem. In this paper we overview some computational methods for analysis of MALDI-imaging data sets. We discuss the importance of data preprocessing, which typically includes normalization, baseline removal and peak picking, and highlight the importance of image denoising when visualizing IMS data.
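The preprocessing chain mentioned above (normalization, baseline removal, peak picking) can be sketched on a single synthetic spectrum. The TIC normalization, rolling-minimum baseline, and threshold rule below are simplified stand-ins for the methods surveyed in the paper, and the spectrum itself is invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic mass spectrum: a few Gaussian peaks on a slowly decaying
# baseline plus noise (an illustrative stand-in for one imaging pixel).
mz = np.linspace(1000, 3000, 4000)
peaks_true = [1200.0, 1850.0, 2400.0]
spectrum = sum(50 * np.exp(-0.5 * ((mz - p) / 2.0) ** 2) for p in peaks_true)
spectrum += 20 * np.exp(-(mz - 1000) / 800)     # chemical baseline
spectrum += rng.normal(0, 0.5, mz.size)         # detector noise

# 1) Normalization: divide by the total ion count (TIC).
norm = spectrum / spectrum.sum()

# 2) Baseline removal: subtract a rolling minimum (a crude substitute
# for the morphological/top-hat filters used in practice).
win = 200
baseline = np.array([norm[max(0, i - win):i + win].min()
                     for i in range(norm.size)])
flat = norm - baseline

# 3) Peak picking: local maxima above a noise-derived threshold.
thr = flat.mean() + 5 * flat.std()
is_peak = (flat[1:-1] > flat[:-2]) & (flat[1:-1] > flat[2:]) & (flat[1:-1] > thr)
picked = mz[1:-1][is_peak]
```

In a real MALDI-imaging data set this pipeline runs once per pixel, which is exactly why the computational cost discussed in the paper matters.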

  5. On the Importance of Mathematical Methods for Analysis of MALDI-Imaging Mass Spectrometry Data.

    PubMed

    Trede, Dennis; Kobarg, Jan Hendrik; Oetjen, Janina; Thiele, Herbert; Maass, Peter; Alexandrov, Theodore

    2012-03-01

    In the last decade, matrix-assisted laser desorption/ionization (MALDI) imaging mass spectrometry (IMS), also called MALDI-imaging, has proven its potential in proteomics and was successfully applied to various types of biomedical problems, in particular to histopathological label-free analysis of tissue sections. In histopathology, MALDI-imaging is used as a general analytic tool revealing the functional proteomic structure of tissue sections, and as a discovery tool for detecting new biomarkers discriminating a region annotated by an experienced histologist, in particular for cancer studies. A typical MALDI-imaging data set contains 10⁸ to 10⁹ intensity values occupying more than 1 GB. Analysis and interpretation of such a huge amount of data is a mathematically, statistically and computationally challenging problem. In this paper we overview some computational methods for analysis of MALDI-imaging data sets. We discuss the importance of data preprocessing, which typically includes normalization, baseline removal and peak picking, and highlight the importance of image denoising when visualizing IMS data.

  6. Complex noise suppression using a sparse representation and 3D filtering of images

    NASA Astrophysics Data System (ADS)

    Kravchenko, V. F.; Ponomaryov, V. I.; Pustovoit, V. I.; Palacios-Enriquez, A.

    2017-08-01

    A novel method for the filtering of images corrupted by complex noise composed of randomly distributed impulses and additive Gaussian noise has been substantiated for the first time. The method consists of three main stages: the detection and filtering of pixels corrupted by impulsive noise, the subsequent image processing to suppress the additive noise based on 3D filtering and a sparse representation of signals in a basis of wavelets, and the concluding image processing procedure to clean the final image of the errors that emerged at the previous stages. A physical interpretation of the filtering method under complex noise conditions is given. A filtering block diagram has been developed in accordance with the novel approach. Simulations of the novel image filtering method have shown an advantage of the proposed filtering scheme in terms of generally recognized criteria, such as the structural similarity index measure and the peak signal-to-noise ratio, and when visually comparing the filtered images.
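The first stage of such a scheme, detecting and replacing impulse-corrupted pixels before any additive-noise filtering, can be illustrated with a simple median-based detector. This is a toy stand-in for the paper's detector, with an invented test image and threshold:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy image with complex noise: a smooth ramp plus Gaussian noise,
# then 5% of pixels replaced by salt-and-pepper impulses.
clean = np.tile(np.linspace(0, 1, 64), (64, 1))
noisy = clean + rng.normal(0, 0.02, clean.shape)
mask = rng.random(clean.shape) < 0.05
noisy[mask] = rng.choice([0.0, 1.0], size=mask.sum())

def local_median3(img):
    """3x3 median of each pixel, built from the 9 shifted copies."""
    p = np.pad(img, 1, mode="edge")
    shifts = [p[i:i + img.shape[0], j:j + img.shape[1]]
              for i in range(3) for j in range(3)]
    return np.median(np.stack(shifts), axis=0)

# Impulse stage: flag pixels far from their local median and replace
# only those, leaving uncorrupted pixels untouched for the next stage.
med = local_median3(noisy)
impulses = np.abs(noisy - med) > 0.3
restored = np.where(impulses, med, noisy)

# The additive-noise stage (3D collaborative / wavelet-domain filtering
# in the paper) would operate on `restored`; it is omitted here.
mse_before = np.mean((noisy - clean) ** 2)
mse_after = np.mean((restored - clean) ** 2)
```

Separating the impulse stage from the Gaussian stage matters because wavelet shrinkage alone smears impulses into the surrounding pixels instead of removing them.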

  7. Anima: Modular Workflow System for Comprehensive Image Data Analysis

    PubMed Central

    Rantanen, Ville; Valori, Miko; Hautaniemi, Sampsa

    2014-01-01

    Modern microscopes produce vast amounts of image data, and computational methods are needed to analyze and interpret these data. Furthermore, a single image analysis project may require tens or hundreds of analysis steps starting from data import and pre-processing to segmentation and statistical analysis, and ending with visualization and reporting. To manage such large-scale image data analysis projects, we present here a modular workflow system called Anima. Anima is designed for comprehensive and efficient image data analysis development, and it contains several features that are crucial in high-throughput image data analysis: programming language independence, batch processing, easily customized data processing, interoperability with other software via application programming interfaces, and advanced multivariate statistical analysis. The utility of Anima is shown with two case studies focusing on testing different algorithms developed in different imaging platforms and an automated prediction of alive/dead C. elegans worms by integrating several analysis environments. Anima is fully open source and available with documentation at www.anduril.org/anima. PMID:25126541

  8. High-volume image quality assessment systems: tuning performance with an interactive data visualization tool

    NASA Astrophysics Data System (ADS)

    Bresnahan, Patricia A.; Pukinskis, Madeleine; Wiggins, Michael

    1999-03-01

    Image quality assessment systems differ greatly with respect to the number and types of images they need to evaluate, and their overall architectures. Managers of these systems, however, all need to be able to tune and evaluate system performance, requirements often overlooked or under-designed during project planning. Performance tuning tools allow users to define acceptable quality standards for image features and attributes by adjusting parameter settings. Performance analysis tools allow users to evaluate and/or predict how well a system performs in a given parameter state. While image assessment algorithms are becoming quite sophisticated, duplicating or surpassing the human decision-making process in their speed and reliability, they often require a greater investment in 'training' or fine tuning of parameters in order to achieve optimum performance. This process may involve the analysis of hundreds or thousands of images, generating a large database of files and statistics that can be difficult to sort through and interpret. Compounding the difficulty is the fact that personnel charged with tuning and maintaining the production system may not have the statistical or analytical background required for the task. Meanwhile, hardware innovations have greatly increased the volume of images that can be handled in a given time frame, magnifying the consequences of running a production site with an inadequately tuned system. In this paper, some general requirements for a performance evaluation and tuning data visualization system are discussed. A custom-engineered solution to the tuning and evaluation problem is then presented, developed within the context of a high-volume image quality assessment, data entry, OCR, and image archival system. A key factor influencing the design of the system was the context-dependent definition of image quality, as perceived by a human interpreter. This led to the development of a five-level, hierarchical approach to image quality evaluation. Lower-level pass-fail conditions and decision rules were coded into the system. Higher-level image quality states were defined by allowing the users to interactively adjust the system's sensitivity to various image attributes by manipulating graphical controls. Results were presented in easily interpreted bar graphs. These graphs were mouse-sensitive, allowing the user to more fully explore the subsets of data indicated by various color blocks. In order to simplify the performance evaluation and tuning process, users could choose to view the results of (1) the existing system parameter state, (2) the results of any arbitrary parameter values they chose, or (3) the results of a quasi-optimum parameter state, derived by applying a decision rule to a large set of possible parameter states. Giving managers easy-to-use tools for defining the more subjective aspects of quality resulted in a system that responded to contextual cues that are difficult to hard-code. It had the additional advantage of allowing the definition of quality to evolve over time, as users became more knowledgeable as to the strengths and limitations of an automated quality inspection system.
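The quasi-optimum parameter state described in point (3) can be sketched as a brute-force sweep over a parameter grid, scoring each state against human pass/fail judgments. The attributes, thresholds, and decision rule below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative tuning problem: each image has two quality attributes
# (say, contrast and skew); a human interpreter labeled each pass/fail.
n = 500
contrast = rng.uniform(0, 1, n)
skew = rng.uniform(0, 1, n)
human_pass = (contrast > 0.4) & (skew < 0.7)       # hidden "ground truth"

# Sweep a grid of parameter states; the decision rule here is simply
# "maximize agreement with the human labels".
best = (-1.0, None)
for c_thr in np.linspace(0, 1, 21):
    for s_thr in np.linspace(0, 1, 21):
        system_pass = (contrast > c_thr) & (skew < s_thr)
        agreement = np.mean(system_pass == human_pass)
        if agreement > best[0]:
            best = (agreement, (round(float(c_thr), 2), round(float(s_thr), 2)))
```

A production tool would visualize the whole agreement surface rather than only its argmax, which is exactly what makes the interactive bar graphs described above useful for non-statistician operators.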

  9. SPECT in patients with cortical visual loss.

    PubMed

    Silverman, I E; Galetta, S L; Gray, L G; Moster, M; Atlas, S W; Maurer, A H; Alavi, A

    1993-09-01

    Single-photon emission computed tomography (SPECT) with 99mTc-hexamethylpropyleneamine oxime (HMPAO) was used to investigate changes in cerebral blood flow in seven patients with cortical visual impairment. Traumatic brain injury (TBI) was the cause of cortical damage in two patients, cerebral ischemia in two patients, and carbon monoxide (CO) poisoning, status epilepticus, and Alzheimer's disease (AD) each in one of the three remaining patients. The SPECT scans of the seven patients were compared to T2-weighted magnetic resonance imaging (MRI) scans of the brain to determine the correlation between functional and anatomical findings. In six of the seven patients, the qualitative interpretation of the SPECT studies supported the clinical findings (i.e., the visual field defect) by revealing altered regional cerebral blood flow (rCBF) in the appropriate regions of the visual pathway. MR scans in all of the patients, on the other hand, were either normal or disclosed smaller lesions than those detected by SPECT. We conclude that SPECT may reveal altered rCBF in patients with cortical visual impairment of various etiologies, even when MRI studies are normal or nondiagnostic.

  10. Super-resolved thickness maps of thin film phantoms and in vivo visualization of tear film lipid layer using OCT

    PubMed Central

    dos Santos, Valentin Aranha; Schmetterer, Leopold; Triggs, Graham J.; Leitgeb, Rainer A.; Gröschl, Martin; Messner, Alina; Schmidl, Doreen; Garhofer, Gerhard; Aschinger, Gerold; Werkmeister, René M.

    2016-01-01

    In optical coherence tomography (OCT), the axial resolution is directly linked to the coherence length of the employed light source. It is currently unclear if OCT allows measuring thicknesses below its axial resolution value. To investigate spectral-domain OCT imaging in the super-resolution regime, we derived a signal model and compared it with the experiment. Several island thin film samples of known refractive indices and thicknesses in the range 46–163 nm were fabricated and imaged. Reference thickness measurements were performed using a commercial atomic force microscope. In vivo measurements of the tear film were performed in 4 healthy subjects. Our results show that quantitative super-resolved thickness measurement can be performed using OCT. In addition, we report repeatable tear film lipid layer visualization. Our results provide a novel interpretation of the OCT axial resolution limit and open a perspective to deeper extraction of the information hidden in the coherence volume. PMID:27446696
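One way to see how a thickness below the axial resolution can still be estimated is to treat it as model fitting: a thin film produces spectral fringes whose phase encodes its optical thickness. The sketch below uses a simplified two-interface cosine model, not the authors' actual signal model, with invented source and film parameters:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simplified thin-film model: the spectral interferogram of a film of
# optical thickness n*d carries fringes cos(2*n*d*k) even when d is far
# below the axial resolution of the source bandwidth used here.
n_film = 1.5
d_true = 120e-9                                # 120 nm film (assumed)
lam = np.linspace(770e-9, 870e-9, 512)         # assumed source spectrum
k = 2 * np.pi / lam
fringes = np.cos(2 * n_film * d_true * k) + rng.normal(0, 0.05, k.size)

# With the signal model known, sub-resolution thickness becomes a
# parameter-estimation problem: least squares over candidate thicknesses.
cands = np.arange(20e-9, 300e-9, 1e-9)
errs = [np.sum((np.cos(2 * n_film * d * k) - fringes) ** 2) for d in cands]
d_hat = cands[int(np.argmin(errs))]
```

The fitted thickness lands within a nanometer or two of the true value, illustrating why the coherence length bounds image resolution but not model-based parameter estimation.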

  11. Marginalizing Women: Images of Pregnancy in Williams Obstetrics

    PubMed Central

    Smith, Sheila A.; Condit, Deirdre M.

    2000-01-01

    This research analyzes the historical development of the medical construction of the pregnant body in 17 of 20 editions of Williams Obstetrics, an obstetrical textbook published continually from 1904 to 1997. Examination of the visual imagery of these works produced three key findings. First, depictions of the healthy or “normal” pregnant body are virtually absent throughout the series. Second, visual depictions of women's full bodies adhere to a race-based hierarchy of presentation. Finally, the fundamental discourse about pregnant and female bodies communicated to physicians (primarily) by these images is one of pathology and fragmentation. We conclude that the resulting social and medical construction of the pregnant and female body presented in the Williams series is one of disembodiment, abjection, and ultimately marginality. These findings support recent feminist research that criticizes both the increasing erasure of the person of the woman from the medical interpretation of pregnancy and the concomitant decrease in women's perceived sense of empowerment as pregnant beings. PMID:17273202

  12. Visual-search model observer for assessing mass detection in CT

    NASA Astrophysics Data System (ADS)

    Karbaschi, Zohreh; Gifford, Howard C.

    2017-03-01

    Our aim is to devise model observers (MOs) to evaluate acquisition protocols in medical imaging. To optimize protocols for human observers, an MO must reliably interpret images containing quantum and anatomical noise under aliasing conditions. In this study of sampling parameters for simulated lung CT, the lesion-detection performance of human observers was compared with that of visual-search (VS) observers, a channelized nonprewhitening (CNPW) observer, and a channelized Hotelling (CH) observer. Scans of a mathematical torso phantom modeled single-slice parallel-hole CT with varying numbers of detector pixels and angular projections. Circular lung lesions had a fixed radius. Two-dimensional FBP reconstructions were performed. A localization ROC study was conducted with the VS, CNPW and human observers, while the CH observer was applied in a location-known ROC study. Changing the sampling parameters had negligible effect on the CNPW and CH observers, whereas several VS observers demonstrated a sensitivity to sampling artifacts that was in agreement with how the humans performed.
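For readers unfamiliar with the CH observer, the sketch below implements a generic channelized Hotelling observer on a toy location-known detection task. The channel set, signal, and white-noise background are illustrative assumptions, not the study's phantom or reconstruction model:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy signal-known-exactly task: a Gaussian blob in white noise.
size, n_img = 32, 400
yy, xx = np.mgrid[:size, :size] - size // 2
signal = 2.0 * np.exp(-(xx ** 2 + yy ** 2) / (2 * 3.0 ** 2))

# Difference-of-Gaussians channels: four radial frequency bands that
# reduce each image to a 4-element feature vector.
r2 = xx ** 2 + yy ** 2
chans = []
for s in (1.0, 2.0, 4.0, 8.0):
    dog = np.exp(-r2 / (2 * s ** 2)) - np.exp(-r2 / (2 * (2 * s) ** 2))
    chans.append(dog.ravel())
U = np.stack(chans, axis=1)                      # (pixels, channels)

absent = rng.normal(0, 1, (n_img, size * size))
present = absent + signal.ravel()                # paired noise fields

v0, v1 = absent @ U, present @ U                 # channel outputs
K = np.cov(v0.T)                                 # 4x4 channel covariance
w = np.linalg.solve(K, v1.mean(0) - v0.mean(0))  # Hotelling template
t0, t1 = v0 @ w, v1 @ w                          # test statistics

# Two-sample AUC: fraction of (present, absent) pairs ranked correctly.
auc = np.mean(t1[:, None] > t0[None, :])
```

Because the template is fixed once the channel statistics are estimated, an observer of this kind cannot refocus on local artifacts the way the visual-search observers in the study do, which is one intuition for why it was insensitive to the sampling parameters.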

  13. SVGMap: configurable image browser for experimental data.

    PubMed

    Rafael-Palou, Xavier; Schroeder, Michael P; Lopez-Bigas, Nuria

    2012-01-01

    Spatial data visualization is very useful to represent biological data and quickly interpret the results. For instance, to show the expression pattern of a gene in different tissues of a fly, an intuitive approach is to draw the fly with the corresponding tissues and color the expression of the gene in each of them. However, the creation of these visual representations may be a burdensome task. Here we present SVGMap, a Java application that automates the generation of high-quality graphics for singular data items (e.g. genes) and biological conditions. SVGMap contains a browser that allows the user to navigate the different images created and can be used as a web-based results publishing tool. SVGMap is freely available as a precompiled Java package as well as source code at http://bg.upf.edu/svgmap. It requires Java 6 and any recent web browser with JavaScript enabled. The software can be run on Linux, Mac OS X and Windows systems. nuria.lopez@upf.edu

  14. Applied Imagistics of Ischaemic Heart: a Survey. From the Epidemiology of Stable Angina in Order to Better Prevent Sudden Cardiac Death

    NASA Astrophysics Data System (ADS)

    Petruse, Radu Emanuil; Batâr, Sergiu; Cojan, Adela; Maniţiu, Ioan

    2014-11-01

    Coronary computed tomography angiography (CCTA) allows coronary artery visualization and the detection of coronary stenoses. In addition, it has been suggested as a novel, noninvasive modality for coronary atherosclerotic plaque detection, characterization, and quantification. Accurate identification of coronary plaques is challenging, especially for the noncalcified plaques, due to many factors such as the small size of coronary arteries, reconstruction artifacts caused by irregular heartbeats, beam hardening, and partial volume averaging. The development of 16, 32, 64 and the latest 320 row multidetector CT not only increases the spatial and the temporal resolution significantly, but also increases the number of images to be interpreted by radiologists substantially. Radiologists have to visually examine each coronary artery for suspicious stenosis using visualization tools such as multiplanar reformatting (MPR) and curved planar reformatting (CPR) provided by the review workstation in clinical practice.

  15. Neural Dynamics Underlying Target Detection in the Human Brain

    PubMed Central

    Bansal, Arjun K.; Madhavan, Radhika; Agam, Yigal; Golby, Alexandra; Madsen, Joseph R.

    2014-01-01

    Sensory signals must be interpreted in the context of goals and tasks. To detect a target in an image, the brain compares input signals and goals to elicit the correct behavior. We examined how target detection modulates visual recognition signals by recording intracranial field potential responses from 776 electrodes in 10 epileptic human subjects. We observed reliable differences in the physiological responses to stimuli when a cued target was present versus absent. Goal-related modulation was particularly strong in the inferior temporal and fusiform gyri, two areas important for object recognition. Target modulation started after 250 ms post stimulus, considerably after the onset of visual recognition signals. While broadband signals exhibited increased or decreased power, gamma frequency power showed predominantly increases during target presence. These observations support models where task goals interact with sensory inputs via top-down signals that influence the highest echelons of visual processing after the onset of selective responses. PMID:24553944

  16. Ambiguous science and the visual representation of the real

    NASA Astrophysics Data System (ADS)

    Newbold, Curtis Robert

    The emergence of visual media as prominent and even expected forms of communication in nearly all disciplines, including those scientific, has raised new questions about how the art and science of communication epistemologically affect the interpretation of scientific phenomena. In this dissertation I explore how the influence of aesthetics in visual representations of science inevitably creates ambiguous meanings. As a means to improve visual literacy in the sciences, I call awareness to the ubiquity of visual ambiguity and its importance and relevance in scientific discourse. To do this, I conduct a literature review that spans interdisciplinary research in communication, science, art, and rhetoric. Furthermore, I create a paradoxically ambiguous taxonomy, which functions to exploit the nuances of visual ambiguities and their role in scientific communication. I then extrapolate the taxonomy of visual ambiguity and from it develop an ambiguous, rhetorical heuristic, the Tetradic Model of Visual Ambiguity. The Tetradic Model is applied to a case example of a scientific image as a demonstration of how scientific communicators may increase their awareness of the epistemological effects of ambiguity in the visual representations of science. I conclude by demonstrating how scientific communicators may make productive use of visual ambiguity, even in communications of objective science, and I argue how doing so strengthens scientific communicators' visual literacy skills and their ability to communicate more ethically and effectively.

  17. Preliminary study of near surface detections at geothermal field using optic and SAR imageries

    NASA Astrophysics Data System (ADS)

    Kurniawahidayati, Beta; Agoes Nugroho, Indra; Syahputra Mulyana, Reza; Saepuloh, Asep

    2017-12-01

    Current remote sensing technologies show that surface manifestations of a geothermal system can be detected with optical and SAR remote sensing, but assessing targets beneath the near-surface layer with surficial methods needs further study. This study presents a preliminary result using optical and SAR remote sensing imagery to detect near-surface geothermal manifestations at and around Mt. Papandayan, West Java, Indonesia. The data used in this study were Landsat-8 OLI/TIRS for delineating the geothermal manifestation prospect area and Advanced Land Observing Satellite (ALOS) Phased Array type L-band Synthetic Aperture Radar (PALSAR) level 1.1 for extracting lineaments and their density. We assumed that the lineaments correlate with near-surface structures owing to the long L-band wavelength of about 23.6 cm. Near-surface manifestation prospect areas were delineated using visual comparison between the Landsat-8 RGB True Colour Composite of bands 4, 3, 2 (TCC), the False Colour Composite of bands 5, 6, 7 (FCC), and the lineament density map of ALOS PALSAR. Visual properties of ground objects were distinguished from the interaction of electromagnetic radiation with the objects, whether they reflect, scatter, absorb, or emit it, based on the characteristics of their molecular composition and their macroscopic scale and geometry. The TCC and FCC composites produced 6 and 7 surface manifestation zones according to visual classification, respectively. The classified images were then compared to a Normalized Difference Vegetation Index (NDVI) to obtain the influence of surface vegetation on the image. Geothermal areas were classified based on the vegetation index from NDVI. The TCC image is more sensitive to vegetation than the FCC image; the latter composite produced a better result for visually identifying geothermal manifestations, shown by its more detailed detected zones. According to the lineament density analysis, the high-density area is located on the peak of Papandayan, overlapping zones 1 and 2 of the FCC. Comparing with the extracted lineament density, we interpreted that the near-surface manifestation is located at zones 1 and 2 of the FCC image.
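The NDVI referred to above is the standard ratio (NIR − red)/(NIR + red); for Landsat-8 this means bands 5 and 4. A minimal sketch with made-up reflectance values:

```python
import numpy as np

# NDVI from Landsat-8 reflectance: band 5 (NIR) and band 4 (red).
# Toy 2x2 reflectance values; real inputs would be calibrated rasters.
nir = np.array([[0.45, 0.40], [0.10, 0.30]])
red = np.array([[0.05, 0.08], [0.09, 0.20]])

ndvi = (nir - red) / (nir + red + 1e-12)   # epsilon guards zero denominators

# Dense vegetation pushes NDVI toward 1; bare or thermally altered ground
# near geothermal manifestations tends toward low or negative values.
veg_mask = ndvi > 0.4                      # threshold chosen for illustration
```

Comparing a mask like `veg_mask` against the TCC/FCC zone maps is one simple way to separate vegetation response from genuine surface alteration.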

  18. Visual Representations of DNA Replication: Middle Grades Students' Perceptions and Interpretations

    ERIC Educational Resources Information Center

    Patrick, Michelle D.; Carter, Glenda; Wiebe, Eric N.

    2005-01-01

    Visual representations play a critical role in the communication of science concepts for scientists and students alike. However, recent research suggests that novice students experience difficulty extracting relevant information from representations. This study examined students' interpretations of visual representations of DNA replication. Each…

  19. Intraoperative efficiency of fluorescence imaging by Visually Enhanced Lesion Scope (VELscope) in patients with bisphosphonate related osteonecrosis of the jaw (BRONJ).

    PubMed

    Assaf, Alexandre T; Zrnc, Tomislav A; Riecke, Björn; Wikner, Johannes; Zustin, Jozef; Friedrich, Reinhard E; Heiland, Max; Smeets, Ralf; Gröbe, Alexander

    2014-07-01

    The purpose of this study was to determine the potential of tissue fluorescence imaging using the Visually Enhanced Lesion Scope (VELscope) for the detection of osteonecrosis of the jaw induced by bisphosphonates (BRONJ). We investigated 20 patients (11 females and 9 males; mean age 74 years, standard deviation ± 6.4 years) over a period of 18 months with the diagnosis of BRONJ in this prospective cohort study. All patients received doxycycline as a fluorescing marker for osseous structures. VELscope was used intraoperatively, using the loss of fluorescence to detect the presence of osteonecrosis. Osseous biopsies were taken to confirm the definite histopathological diagnosis of BRONJ in each case. Diagnosis of BRONJ was confirmed for every patient. In all patients except one, VELscope was sufficient to differentiate between healthy and necrotic bone by visual fluorescence retention (VFR) and visual fluorescence loss (VFL). 19 cases out of a total of 20 showed no signs of recurrence of BRONJ during follow-up (mean 12 months, range 4-18 months). VELscope examination is a suitable tool to visualize necrotic areas of the bone in patients with bisphosphonate related osteonecrosis of the jaw. Loss of fluorescence in necrotic bone areas is useful intraoperatively as a tool for fluorescence-guided bone resection with relevant clinical interpretation. Copyright © 2013 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  20. Refinement of ground reference data with segmented image data

    NASA Technical Reports Server (NTRS)

    Robinson, Jon W.; Tilton, James C.

    1991-01-01

    One of the ways to determine ground reference data (GRD) for satellite remote sensing data is to photo-interpret low altitude aerial photographs and then digitize the cover types on a digitizing tablet and register them to 7.5 minute U.S.G.S. maps (that were themselves digitized). The resulting GRD can be registered to the satellite image, or vice versa. Unfortunately, there are many opportunities for error when using a digitizing tablet, and the resolution of the edges for the GRD depends on the spacing of the points selected on the digitizing tablet. One of the consequences of this is that when overlaid on the image, errors and missed detail in the GRD become evident. An approach is discussed for correcting these errors and adding detail to the GRD through the use of a highly interactive, visually oriented process. This process involves the use of overlaid visual displays of the satellite image data, the GRD, and a segmentation of the satellite image data. Several prototype programs were implemented which provide means of taking a segmented image and using the edges from the reference data to mask out those segment edges that are beyond a certain distance from the reference data edges. Then, using the reference data edges as a guide, those segment edges that remain and that are judged not to be image versions of the reference edges are manually marked and removed. The prototype programs that were developed and the algorithmic refinements that facilitate execution of this task are described.

  1. The effects of altered intrathoracic pressure on resting cerebral blood flow and its response to visual stimulation

    PubMed Central

    Hayen, Anja; Herigstad, Mari; Kelly, Michael; Okell, Thomas W.; Murphy, Kevin; Wise, Richard G.; Pattinson, Kyle T.S.

    2013-01-01

    Investigating how intrathoracic pressure changes affect cerebral blood flow (CBF) is important for a clear interpretation of neuroimaging data in patients with abnormal respiratory physiology, intensive care patients receiving mechanical ventilation and in research paradigms that manipulate intrathoracic pressure. Here, we investigated the effect of experimentally increased and decreased intrathoracic pressures upon CBF and the stimulus-evoked CBF response to visual stimulation. Twenty healthy volunteers received intermittent inspiratory and expiratory loads (plus or minus 9 cmH2O for 270 s) and viewed an intermittent 2 Hz flashing checkerboard, while maintaining stable end-tidal CO2. CBF was recorded with transcranial Doppler sonography (TCD) and whole-brain pseudo-continuous arterial spin labeling magnetic resonance imaging (PCASL MRI). Application of inspiratory loading (negative intrathoracic pressure) showed an increase in TCD-measured CBF of 4% and a PCASL-measured increase in grey matter CBF of 5%, but did not alter mean arterial pressure (MAP). Expiratory loading (positive intrathoracic pressure) did not alter CBF, while MAP increased by 3%. Neither loading condition altered the perfusion response to visual stimulation in the primary visual cortex. In both loading conditions localized CBF increases were observed in the somatosensory and motor cortices, and in the cerebellum. Altered intrathoracic pressures, whether induced experimentally, therapeutically or through a disease process, have possible significant effects on CBF and should be considered as a potential systematic confound in the interpretation of perfusion-based neuroimaging data. PMID:23108273

  2. Concrete bridge deck early problem detection and mitigation using robotics

    NASA Astrophysics Data System (ADS)

    Gucunski, Nenad; Yi, Jingang; Basily, Basily; Duong, Trung; Kim, Jinyoung; Balaguru, Perumalsamy; Parvardeh, Hooman; Maher, Ali; Najm, Husam

    2015-04-01

    More economical management of bridges can be achieved through early problem detection and mitigation. The paper describes the development and implementation of two fully automated (robotic) systems for nondestructive evaluation (NDE) and minimally invasive rehabilitation of concrete bridge decks. The NDE system named RABIT was developed with support from the Federal Highway Administration (FHWA). It implements multiple NDE technologies, namely: electrical resistivity (ER), impact echo (IE), ground-penetrating radar (GPR), and ultrasonic surface waves (USW). In addition, the system utilizes advanced vision to substitute traditional visual inspection. The RABIT system collects data at significantly higher speeds than is possible with traditional NDE equipment. The associated platform for the enhanced interpretation of condition assessment in concrete bridge decks utilizes data integration, fusion, and deterioration and defect visualization. The interpretation and visualization platform specifically addresses data integration and fusion from the four NDE technologies. The data visualization platform facilitates an intuitive presentation of the main deterioration due to: corrosion, delamination, and concrete degradation, by integrating NDE survey results and high resolution deck surface imaging. The rehabilitation robotic system was developed with support from the National Institute of Standards and Technology-Technology Innovation Program (NIST-TIP). The system utilizes advanced robotics and novel materials to repair problems in concrete decks, primarily early stage delamination and internal cracking, using a minimally invasive approach. Since both systems use global positioning systems for navigation, some of the current efforts concentrate on their coordination for the most effective joint evaluation and rehabilitation.

  3. Visualization-by-Sketching: An Artist's Interface for Creating Multivariate Time-Varying Data Visualizations.

    PubMed

    Schroeder, David; Keefe, Daniel F

    2016-01-01

    We present Visualization-by-Sketching, a direct-manipulation user interface for designing new data visualizations. The goals are twofold: First, make the process of creating real, animated, data-driven visualizations of complex information more accessible to artists, graphic designers, and other visual experts with traditional, non-technical training. Second, support and enhance the role of human creativity in visualization design, enabling visual experimentation and workflows similar to what is possible with traditional artistic media. The approach is to conceive of visualization design as a combination of processes that are already closely linked with visual creativity: sketching, digital painting, image editing, and reacting to exemplars. Rather than studying and tweaking low-level algorithms and their parameters, designers create new visualizations by painting directly on top of a digital data canvas, sketching data glyphs, and arranging and blending together multiple layers of animated 2D graphics. This requires new algorithms and techniques to interpret painterly user input relative to data "under" the canvas, balance artistic freedom with the need to produce accurate data visualizations, and interactively explore large (e.g., terabyte-sized) multivariate datasets. Results demonstrate that a variety of multivariate data visualization techniques can be rapidly recreated using the interface. More importantly, results and feedback from artists support the potential for interfaces in this style to attract new, creative users to the challenging task of designing more effective data visualizations and to help these users stay "in the creative zone" as they work.

  4. Automated Interpretation of Subcellular Patterns in Fluorescence Microscope Images for Location Proteomics

    PubMed Central

    Chen, Xiang; Velliste, Meel; Murphy, Robert F.

    2010-01-01

    Proteomics, the large scale identification and characterization of many or all proteins expressed in a given cell type, has become a major area of biological research. In addition to information on protein sequence, structure and expression levels, knowledge of a protein’s subcellular location is essential to a complete understanding of its functions. Currently subcellular location patterns are routinely determined by visual inspection of fluorescence microscope images. We review here research aimed at creating systems for automated, systematic determination of location. These employ numerical feature extraction from images, feature reduction to identify the most useful features, and various supervised learning (classification) and unsupervised learning (clustering) methods. These methods have been shown to perform significantly better than human interpretation of the same images. When coupled with technologies for tagging large numbers of proteins and high-throughput microscope systems, the computational methods reviewed here enable the new subfield of location proteomics. This subfield will make critical contributions in two related areas. First, it will provide structured, high-resolution information on location to enable Systems Biology efforts to simulate cell behavior from the gene level on up. Second, it will provide tools for Cytomics projects aimed at characterizing the behaviors of all cell types before, during and after the onset of various diseases. PMID:16752421
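The pipeline the review describes (numerical feature extraction from images, then supervised classification) can be illustrated with a toy sketch. The synthetic "fluorescence" patterns, the particular features, and the random-forest classifier below are all illustrative stand-ins, not the reviewed systems' methods:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def extract_features(img):
    """A few simple numerical features (intensity statistics plus a rough
    edge/texture measure) standing in for the richer feature sets used in
    location proteomics."""
    gy, gx = np.gradient(img)
    edge = np.hypot(gx, gy)
    return np.array([img.mean(), img.std(), edge.mean(), edge.std(),
                     (img > img.mean()).mean()])

def synthetic_cell(pattern):
    """Toy stand-in for a fluorescence image: 'nuclear' = one central blob,
    'punctate' = many small bright spots."""
    img = rng.normal(0.1, 0.02, size=(64, 64))
    if pattern == "nuclear":
        y, x = np.mgrid[:64, :64]
        img += np.exp(-((y - 32.0) ** 2 + (x - 32.0) ** 2) / 200.0)
    else:
        for _ in range(30):
            cy, cx = rng.integers(2, 62, size=2)
            img[cy - 1:cy + 2, cx - 1:cx + 2] += 1.0
    return img

# 80 labeled images, classified with supervised learning as in the review.
patterns = ["nuclear", "punctate"] * 40
X = np.array([extract_features(synthetic_cell(p)) for p in patterns])
y = np.array([0 if p == "nuclear" else 1 for p in patterns])

clf = RandomForestClassifier(n_estimators=50, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(round(scores.mean(), 2))
```

On these cleanly separable synthetic patterns the cross-validated accuracy is near perfect; real subcellular location classes are far harder to separate, which is why feature selection and classifier choice matter in practice.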

  5. Mapping Nearshore Seagrass and Colonized Hard Bottom Spatial Distribution and Percent Biological Cover in Florida, USA Using Object Based Image Analysis of WorldView-2 Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Baumstark, R. D.; Duffey, R.; Pu, R.

    2016-12-01

    The offshore extent of seagrass habitat along the West Florida (USA) coast represents an important corridor for inshore-offshore migration of economically important fish and shellfish. Surviving at the fringe of light requirements, offshore seagrass beds are sensitive to changes in water clarity. Beyond and intermingled with the offshore seagrass areas are large swaths of colonized hard bottom. These offshore habitats of the West Florida coast have lacked the mapping efforts needed for status and trends monitoring. The objective of this study was to propose an object-based classification method for mapping offshore habitats and to compare results to traditional photo-interpreted maps. Benthic maps depicting the spatial distribution and percent biological cover were created from WorldView-2 satellite imagery using an Object-Based Image Analysis (OBIA) method and a visual photo-interpretation method. A logistic regression analysis identified depth and distance from shore as significant parameters for discriminating spectrally similar seagrass and colonized hard bottom features. Seagrass, colonized hard bottom and unconsolidated sediment (sand) were mapped with 78% overall accuracy using the OBIA method compared to 71% overall accuracy using the photo-interpretation method. This study presents an alternative for mapping deeper, offshore habitats that is capable of producing maps of higher thematic (percent biological cover) and spatial resolution than the traditional photo-interpretation method.
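The logistic regression step (depth and distance from shore as discriminating parameters for spectrally similar classes) can be sketched with synthetic data. The class distributions and all numbers below are invented for illustration and are not the study's measurements:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical samples: class 0 = seagrass (shallower, nearer shore),
# class 1 = colonized hard bottom (deeper, farther offshore).
n = 200
depth = np.concatenate([rng.normal(3.0, 1.0, n), rng.normal(8.0, 1.5, n)])  # m
dist = np.concatenate([rng.normal(2.0, 0.8, n), rng.normal(6.0, 1.2, n)])   # km
X = np.column_stack([depth, dist])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Fit and check that both covariates carry discriminating weight.
model = LogisticRegression().fit(X, y)
acc = model.score(X, y)
print(round(acc, 2))
```

In an OBIA workflow such a model would supply per-object class probabilities that supplement the spectral features, which is exactly where two spectrally similar habitats benefit from ancillary predictors like depth.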

  6. Value of automatic patient motion detection and correction in myocardial perfusion imaging using a CZT-based SPECT camera.

    PubMed

    van Dijk, Joris D; van Dalen, Jorn A; Mouden, Mohamed; Ottervanger, Jan Paul; Knollema, Siert; Slump, Cornelis H; Jager, Pieter L

    2018-04-01

    Correction of motion has become feasible on cadmium-zinc-telluride (CZT)-based SPECT cameras during myocardial perfusion imaging (MPI). Our aim was to quantify the motion and to determine the value of automatic correction using commercially available software. We retrospectively included 83 consecutive patients who underwent stress-rest MPI CZT-SPECT and invasive fractional flow reserve (FFR) measurement. Eight-minute stress acquisitions were reformatted into 1.0- and 20-second bins to detect respiratory motion (RM) and patient motion (PM), respectively. RM and PM were quantified and scans were automatically corrected. Total perfusion deficit (TPD) and SPECT interpretation (normal, equivocal, or abnormal) were compared between the noncorrected and corrected scans. Scans with a changed SPECT interpretation were compared with FFR, the reference standard. Average RM was 2.5 ± 0.4 mm and maximal PM was 4.5 ± 1.3 mm. RM correction influenced the diagnostic outcomes in two patients based on TPD changes ≥7% and in nine patients based on changed visual interpretation. In only four of these patients, the changed SPECT interpretation corresponded with FFR measurements. Correction for PM did not influence the diagnostic outcomes. Respiratory motion and patient motion were small. Motion correction did not appear to improve the diagnostic outcome and, hence, the added value seems limited in MPI using CZT-based SPECT cameras.

  7. From Iconic to Lingual: Interpreting Visual Statements.

    ERIC Educational Resources Information Center

    Curtiss, Deborah

    In this age of proliferating visual communications, there is a permissiveness in subject matter, content, and meaning that is exhilarating, yet overwhelming to interpret in a meaningful or consensual way. By recognizing visual statements, whether a piece of sculpture, an advertisement, a video, or a building, as communication, one can approach…

  8. Community-Acquired Pneumonia Visualized on CT Scans but Not Chest Radiographs: Pathogens, Severity, and Clinical Outcomes.

    PubMed

    Upchurch, Cameron P; Grijalva, Carlos G; Wunderink, Richard G; Williams, Derek J; Waterer, Grant W; Anderson, Evan J; Zhu, Yuwei; Hart, Eric M; Carroll, Frank; Bramley, Anna M; Jain, Seema; Edwards, Kathryn M; Self, Wesley H

    2018-03-01

    The clinical significance of pneumonia visualized on CT scan in the setting of a normal chest radiograph is uncertain. In a multicenter prospective surveillance study of adults hospitalized with community-acquired pneumonia (CAP), we compared the presenting clinical features, pathogens present, and outcomes of patients with pneumonia visualized on a CT scan but not on a concurrent chest radiograph (CT-only pneumonia) and those with pneumonia visualized on a chest radiograph. All patients underwent chest radiography; the decision to obtain CT imaging was determined by the treating clinicians. Chest radiographs and CT images were interpreted by study-dedicated thoracic radiologists blinded to the clinical data. The study population included 2,251 adults with CAP; 2,185 patients (97%) had pneumonia visualized on chest radiography, whereas 66 patients (3%) had pneumonia visualized on CT scan but not on concurrent chest radiography. Overall, these patients with CT-only pneumonia had a clinical profile similar to those with pneumonia visualized on chest radiography, including comorbidities, vital signs, hospital length of stay, prevalence of viral (30% vs 26%) and bacterial (12% vs 14%) pathogens, ICU admission (23% vs 21%), use of mechanical ventilation (6% vs 5%), septic shock (5% vs 4%), and in-hospital mortality (0 vs 2%). Adults hospitalized with CAP who had radiological evidence of pneumonia on CT scan but not on concurrent chest radiograph had pathogens, disease severity, and outcomes similar to patients who had signs of pneumonia on chest radiography. These findings support using the same management principles for patients with CT-only pneumonia and those with pneumonia seen on chest radiography.

  9. Breast cancer sentinel node scintigraphy: differences between imaging results 1 and 2 h after injection.

    PubMed

    Wondergem, Maurits; Hobbelink, Monique G G; Witkamp, Arjen J; van Hillegersberg, Richard; de Keizer, Bart

    2012-11-01

    Timing of image acquisition in breast cancer sentinel node scintigraphy remains a subject of debate. Therefore, the performance of our protocol, in which images are acquired 1 and 2 h after injection, was evaluated. The results of sentinel node scintigraphy 1 and 2 h after injection were compared with regard to the sentinel lymph nodes visualized. We studied 132 patients who were consecutively referred for sentinel lymph node biopsy. 99mTc-albumin nanocolloid (120 MBq) was injected peritumourally into patients with palpable tumours and intratumourally into patients with nonpalpable tumours. All scintigraphic images taken for the sentinel node procedure were evaluated. The number of sentinel nodes per anatomic localization and the interpretability of the images were scored. A total of 132 patients underwent sentinel node scintigraphy 1 h after injection. Of these, 117 patients also underwent sentinel node scintigraphy 2 h after injection. An axillary sentinel node was visualized in 79.5% and 95.7% of patients 1 and 2 h after injection, respectively. In 20.5% of the patients the images acquired 1 h after injection did not show a sentinel node. Furthermore, in all procedures, the images 1 h after injection were of no added value to those acquired 2 h after injection. Scintigraphic imaging 2 h after a single peritumoural or intratumoural administration of about 120 MBq 99mTc-albumin nanocolloid yields an axillary sentinel node in over 95% of cases. Imaging 1 h after injection is of no additional value and can be omitted.

  10. Computed tomographic angiography in stroke imaging: fundamental principles, pathologic findings, and common pitfalls.

    PubMed

    Gupta, Rajiv; Jones, Stephen E; Mooyaart, Eline A Q; Pomerantz, Stuart R

    2006-06-01

    The development of multidetector row computed tomography (MDCT) now permits visualization of the entire vascular tree that is relevant for the management of stroke within 15 seconds. Advances in MDCT have brought computed tomography angiography (CTA) to the frontline in evaluation of stroke. CTA is a rapid and noninvasive modality for evaluating the neurovasculature. This article describes the role of CTA in the management of stroke. Fundamentals of contrast delivery, common pathologic findings, artifacts, and pitfalls in CTA interpretation are discussed.

  11. Usefulness of myocardial parametric imaging to evaluate myocardial viability in experimental and in clinical studies

    PubMed Central

    Korosoglou, G; Hansen, A; Bekeredjian, R; Filusch, A; Hardt, S; Wolf, D; Schellberg, D; Katus, H A; Kuecherer, H

    2006-01-01

    Objective To evaluate whether myocardial parametric imaging (MPI) is superior to visual assessment for the evaluation of myocardial viability. Methods and results Myocardial contrast echocardiography (MCE) was assessed in 11 pigs before, during, and after left anterior descending coronary artery occlusion and in 32 patients with ischaemic heart disease by using intravenous SonoVue administration. In experimental studies perfusion defect area assessment by MPI was compared with visually guided perfusion defect planimetry. Histological assessment of necrotic tissue was the standard reference. In clinical studies viability was assessed on a segmental level by (1) visual analysis of myocardial opacification; (2) quantitative estimation of myocardial blood flow in regions of interest; and (3) MPI. Functional recovery between three and six months after revascularisation was the standard reference. In experimental studies, compared with visually guided perfusion defect planimetry, planimetric assessment of infarct size by MPI correlated more significantly with histology (r2  =  0.92 versus r2  =  0.56) and had a lower intraobserver variability (4% v 15%, p < 0.05). In clinical studies, MPI had higher specificity (66% v 43%, p < 0.05) than visual MCE and good accuracy (81%) for viability detection. It was less time consuming (3.4 (1.6) v 9.2 (2.4) minutes per image, p < 0.05) than quantitative blood flow estimation by regions of interest and increased the agreement between observers interpreting myocardial perfusion (κ  =  0.87 v κ  =  0.75, p < 0.05). Conclusion MPI is useful for the evaluation of myocardial viability both in animals and in patients. It is less time consuming than quantification analysis by regions of interest and less observer dependent than visual analysis. Thus, strategies incorporating this technique may be valuable for the evaluation of myocardial viability in clinical routine. PMID:15939722

  12. Comparing perceived auditory width to the visual image of a performing ensemble in contrasting bi-modal environments

    PubMed Central

    Valente, Daniel L.; Braasch, Jonas; Myrbeck, Shane A.

    2012-01-01

    Despite many studies investigating auditory spatial impressions in rooms, few have addressed the impact of simultaneous visual cues on localization and the perception of spaciousness. The current research presents an immersive audiovisual environment in which participants were instructed to make auditory width judgments in dynamic bi-modal settings. The results of these psychophysical tests suggest the importance of congruent audio visual presentation to the ecological interpretation of an auditory scene. Supporting data were accumulated in five rooms of ascending volumes and varying reverberation times. Participants were given an audiovisual matching test in which they were instructed to pan the auditory width of a performing ensemble to a varying set of audio and visual cues in rooms. Results show that both auditory and visual factors affect the collected responses and that the two sensory modalities coincide in distinct interactions. The greatest differences between the panned audio stimuli given a fixed visual width were found in the physical space with the largest volume and the greatest source distance. These results suggest, in this specific instance, a predominance of auditory cues in the spatial analysis of the bi-modal scene. PMID:22280585

  13. Partial dependence of breast tumor malignancy on ultrasound image features derived from boosted trees

    NASA Astrophysics Data System (ADS)

    Yang, Wei; Zhang, Su; Li, Wenying; Chen, Yaqing; Lu, Hongtao; Chen, Wufan; Chen, Yazhu

    2010-04-01

    Various computerized features extracted from breast ultrasound images are useful in assessing the malignancy of breast tumors. However, the underlying relationship between the computerized features and tumor malignancy may not be linear in nature. We use the decision tree ensemble trained by the cost-sensitive boosting algorithm to approximate the target function for malignancy assessment and to reflect this relationship qualitatively. Partial dependence plots are employed to explore and visualize the effect of features on the output of the decision tree ensemble. In the experiments, 31 image features are extracted to quantify the sonographic characteristics of breast tumors. Patient age is used as an external feature because of its high clinical importance. The area under the receiver-operating characteristic curve of the tree ensembles can reach 0.95 with sensitivity of 0.95 (61/64) at the associated specificity 0.74 (77/104). The partial dependence plots of the four most important features are demonstrated to show the influence of the features on malignancy, and they are in accord with the empirical observations. The results can provide visual and qualitative references on the computerized image features for physicians, and can be useful for enhancing the interpretability of computer-aided diagnosis systems for breast ultrasound.

  14. Imaging of karsts on buried carbonate platform in Central Luconia Province, Malaysia

    NASA Astrophysics Data System (ADS)

    Nur Fathiyah Jamaludin, Siti; Mubin, Mukhriz; Latiff, Abdul Halim Abdul

    2017-10-01

    Imaging carbonate rocks in the subsurface with the seismic method is always challenging because of their heterogeneity and fast velocity compared with other rock types. The existence of karst features in the carbonate rocks makes the reflectors even more complicated to interpret. Utilization of modern interpretation software such as PETREL and GeoTeric® to image karst morphology makes it possible to model the karst network within the buried carbonate platform used in this study. Using a combination of different seismic attributes, such as Variance, Conformance, Continuity, Amplitude, Frequency and Edge attributes, we are able to image the karst features present in the proven gas field in the Central Luconia Province, Malaysia. These attributes are excellent for visualizing and imaging stratigraphic features, based on differences in acoustic impedance, as well as structural features, which include karst. 2D and 3D karst models were developed to give a better understanding of the characteristics of the identified karsts. From the models, it is found that the karsts are concentrated in the top part of the carbonate reservoir (epikarst) and in the middle layer, with some becoming extensive and creating karst networks, either laterally or vertically. Most of the vertical karst networks are related to the existence of faults that displaced all the horizons in the carbonate platform.

  15. The Role of 18F-FDG PET/CT Integrated Imaging in Distinguishing Malignant from Benign Pleural Effusion.

    PubMed

    Sun, Yajuan; Yu, Hongjuan; Ma, Jingquan; Lu, Peiou

    2016-01-01

    The aim of our study was to evaluate the role of 18F-FDG PET/CT integrated imaging in differentiating malignant from benign pleural effusion. A total of 176 patients with pleural effusion who underwent 18F-FDG PET/CT examination to differentiate malignancy from benignancy were retrospectively reviewed. The images of CT imaging, 18F-FDG PET imaging and 18F-FDG PET/CT integrated imaging were visually analyzed. Suspected malignant effusion was characterized by the presence of nodular or irregular pleural thickening on CT imaging, whereas on PET imaging, pleural 18F-FDG uptake higher than mediastinal activity was interpreted as malignant effusion. Images of 18F-FDG PET/CT integrated imaging were interpreted by combining the morphologic features of the pleura on CT imaging with the degree and form of pleural 18F-FDG uptake on PET imaging. One hundred and eight patients had malignant effusion, including 86 with pleural metastasis and 22 with pleural mesothelioma, whereas 68 patients had benign effusion. The sensitivities of CT imaging, 18F-FDG PET imaging and 18F-FDG PET/CT integrated imaging in detecting malignant effusion were 75.0%, 91.7% and 93.5%, respectively, and were 69.8%, 91.9% and 93.0% in distinguishing metastatic effusion. The sensitivity of 18F-FDG PET/CT integrated imaging in detecting malignant effusion was higher than that of CT imaging (p < 0.001). For metastatic effusion, 18F-FDG PET imaging had higher sensitivity (p < 0.001) and better diagnostic consistency with 18F-FDG PET/CT integrated imaging compared with CT imaging (Kappa = 0.917 and Kappa = 0.295, respectively). The specificities of CT imaging, 18F-FDG PET imaging and 18F-FDG PET/CT integrated imaging were 94.1%, 63.2% and 92.6% in detecting benign effusion. The specificities of CT imaging and 18F-FDG PET/CT integrated imaging were higher than that of 18F-FDG PET imaging (p < 0.001 for both), and CT imaging had better diagnostic consistency with 18F-FDG PET/CT integrated imaging compared with 18F-FDG PET imaging (Kappa = 0.881 and Kappa = 0.240, respectively). 18F-FDG PET/CT integrated imaging is a more reliable modality for distinguishing malignant from benign pleural effusion than 18F-FDG PET imaging or CT imaging alone. For image interpretation of 18F-FDG PET/CT integrated imaging, the PET and CT portions play a major diagnostic role in identifying metastatic effusion and benign effusion, respectively.

  16. Interrogation of patient data delivered to the operating theatre during hepato-pancreatic surgery using high-performance computing.

    PubMed

    John, Nigel W; McCloy, Rory F; Herrman, Simone

    2004-01-01

    The Op3D visualization system allows, for the first time, a surgeon in the operating theatre to interrogate patient-specific medical data sets rendered in three dimensions using high-performance computing. The hypothesis of this research is that the success rate of hepato-pancreatic surgical resections can be improved by replacing the light box with an interactive 3D representation of the medical data in the operating theatre. A laptop serves as the client computer and an easy-to-use interface has been developed for the surgeon to interact with and interrogate the patient data. To date, 16 patients have had 3D reconstructions of their DICOM data sets, including preoperative interrogation and planning of surgery. Interrogation of the 3D images live in theatre and comparison with the surgeons' operative findings (including intraoperative ultrasound) led to the operation being abandoned in 25% of cases, adoption of an alternative surgical approach in 25% of cases, and helpful image guidance for successful resection in 50% of cases. The clinical value of the latest generation of scanners and digital imaging techniques cannot be realized unless appropriate dissemination of the images takes place. This project has succeeded in translating the image technology into a user-friendly form and delivers 3D reconstructions of patient-specific data to the "sharp end": the surgeon undertaking the tumor resection in theatre, in a manner that allows interaction and interpretation. More time interrogating the 3D data sets preoperatively would help reduce the incidence of abandoned operations; this is part of the surgeons' learning curve. We have developed one of the first practical applications to benefit from remote visualization, and certainly the first medical visualization application of this kind.

  17. Accurate analysis and visualization of cardiac (11)C-PIB uptake in amyloidosis with semiautomatic software.

    PubMed

    Kero, Tanja; Lindsjö, Lars; Sörensen, Jens; Lubberink, Mark

    2016-08-01

    (11)C-PIB PET is a promising non-invasive diagnostic tool for cardiac amyloidosis. Semiautomatic analysis of PET data is now available but it is not known how accurate these methods are for amyloid imaging. The aim of this study was to evaluate the feasibility of one semiautomatic software tool for analysis and visualization of (11)C-PIB left ventricular retention index (RI) in cardiac amyloidosis. Patients with systemic amyloidosis and cardiac involvement (n = 10) and healthy controls (n = 5) were investigated with dynamic (11)C-PIB PET. Two observers analyzed the PET studies with semiautomatic software to calculate the left ventricular RI of (11)C-PIB and to create parametric images. The mean RI at 15-25 min from the semiautomatic analysis was compared with RI based on manual analysis and showed comparable values (0.056 vs 0.054 min(-1) for amyloidosis patients and 0.024 vs 0.025 min(-1) in healthy controls; P = .78) and the correlation was excellent (r = 0.98). Inter-reader reproducibility also was excellent (intraclass correlation coefficient, ICC > 0.98). Parametric polarmaps and histograms made visual separation of amyloidosis patients and healthy controls fast and simple. Accurate semiautomatic analysis of cardiac (11)C-PIB RI in amyloidosis patients is feasible. Parametric polarmaps and histograms make visual interpretation fast and simple.
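A retention index of the kind computed above is commonly defined as late tissue activity divided by the time integral of the blood input curve. The sketch below uses that common definition with entirely hypothetical curve shapes and numbers; the paper's exact implementation and time windows may differ in detail:

```python
import numpy as np

# Hedged sketch of a cardiac retention index (RI): late myocardial activity
# divided by the integral of the arterial input function. All values are
# illustrative, not measured data.
t = np.linspace(0.0, 25.0, 251)              # minutes
blood = 100.0 * np.exp(-t / 3.0)             # hypothetical input curve (kBq/mL)
myo_late = 8.0                               # mean myocardial uptake, 15-25 min

# Trapezoid-rule integral of the input function.
integral = np.sum(0.5 * (blood[1:] + blood[:-1]) * np.diff(t))
ri = myo_late / integral                     # units: min^-1
print(round(ri, 3))
```

With numbers in this range the RI lands in the few-hundredths-per-minute regime, the same order as the 0.056 vs 0.024 min(-1) values reported for patients and controls.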

  18. Conditionally prepared photon and quantum imaging

    NASA Astrophysics Data System (ADS)

    Lvovsky, Alexander I.; Aichele, Thomas

    2004-10-01

    We discuss a classical model allowing one to visualize and characterize the optical mode of the single photon generated by means of a conditional measurement on a biphoton produced in parametric down-conversion. The model is based on Klyshko's advanced wave interpretation, but extends beyond it, providing a precise mathematical description of the advanced wave. The optical mode of the conditional photon is shown to be identical to the mode of the classical difference-frequency field generated due to nonlinear interaction of the partially coherent advanced wave with the pump pulse. With this "nonlinear advanced wave model" most coherence properties of the conditional photon become manifest, which permits one to intuitively understand many recent results, in particular, in quantum imaging.

  19. Integrated software for the detection of epileptogenic zones in refractory epilepsy.

    PubMed

    Mottini, Alejandro; Miceli, Franco; Albin, Germán; Nuñez, Margarita; Ferrándo, Rodolfo; Aguerrebere, Cecilia; Fernandez, Alicia

    2010-01-01

    In this paper we present integrated software designed to help nuclear medicine physicians in the detection of epileptogenic zones (EZ) by means of ictal-interictal SPECT and MR images. The tool was designed to be flexible, friendly and efficient. A novel detection method (a-contrario) was included along with the classical detection method (subtraction analysis). The software's performance was evaluated with two separate sets of validation studies: visual interpretation of 12 patient images by an experienced observer, and objective analysis of virtual brain phantom experiments by the proposed numerical observers. Our results support the potential use of the proposed software to help nuclear medicine physicians in the detection of EZ in clinical practice.
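The classical subtraction analysis mentioned above can be sketched in a few lines (a SISCOM-style pipeline: normalize the interictal scan to the ictal one, subtract, and threshold the difference). The volumes, the normalization rule, and the focus location below are synthetic assumptions, not the paper's implementation:

```python
import numpy as np

# Synthetic ictal/interictal SPECT volumes with a hypothetical
# hyperperfused focus added to the ictal scan.
rng = np.random.default_rng(2)
ictal = rng.normal(100.0, 10.0, size=(8, 8, 8))
interictal = rng.normal(90.0, 10.0, size=(8, 8, 8))
ictal[2:4, 2:4, 2:4] += 80.0                     # hypothetical focus

interictal *= ictal.mean() / interictal.mean()   # global-mean normalization
diff = ictal - interictal
mask = diff > diff.mean() + 2.0 * diff.std()     # candidate EZ voxels
print(bool(mask[2:4, 2:4, 2:4].any()))
```

In practice the two scans would first be spatially co-registered (and overlaid on MR), and the threshold choice is exactly the kind of parameter the a-contrario method aims to replace with a statistically grounded detection criterion.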

  20. Structural Image Analysis of the Brain in Neuropsychology Using Magnetic Resonance Imaging (MRI) Techniques.

    PubMed

    Bigler, Erin D

    2015-09-01

    Magnetic resonance imaging (MRI) of the brain provides exceptional image quality for visualization and neuroanatomical classification of brain structure. A variety of image analysis techniques provide both qualitative as well as quantitative methods to relate brain structure with neuropsychological outcome and are reviewed herein. Of particular importance are more automated methods that permit analysis of a broad spectrum of anatomical measures including volume, thickness and shape. The challenge for neuropsychology is which metric to use, for which disorder and the timing of when image analysis methods are applied to assess brain structure and pathology. A basic overview is provided as to the anatomical and pathoanatomical relations of different MRI sequences in assessing normal and abnormal findings. Some interpretive guidelines are offered including factors related to similarity and symmetry of typical brain development along with size-normalcy features of brain anatomy related to function. The review concludes with a detailed example of various quantitative techniques applied to analyzing brain structure for neuropsychological outcome studies in traumatic brain injury.

  1. Ultrasonography in diagnosing chronic pancreatitis: New aspects

    PubMed Central

    Dimcevski, Georg; Erchinger, Friedemann G; Havre, Roald; Gilja, Odd Helge

    2013-01-01

    The course and outcome are poor for most patients with pancreatic diseases. Advances in pancreatic imaging are important for detecting pancreatic diseases at early stages. Ultrasonography as a diagnostic tool has undergone what amounts to a technical revolution in medical imaging in the new millennium. It has not only become the preferred method for first-line imaging, but is also increasingly used to clarify the interpretation of other imaging modalities and support efficient clinical decisions. We review ultrasonography modalities, focusing on advanced pancreatic imaging and its potential to substantially improve diagnosis of pancreatic diseases at earlier stages. In the first section, we describe scanning techniques and examination protocols. Their consequences for image quality and the ability to obtain complete and detailed visualization of the pancreas are discussed. In the second section we outline ultrasonographic characteristics of pancreatic diseases, with emphasis on chronic pancreatitis. Finally, new developments in ultrasonography of the pancreas, such as contrast-enhanced ultrasound and elastography, are highlighted. PMID:24259955

  2. Single-phase dual-energy CT allows for characterization of renal masses as benign or malignant.

    PubMed

    Graser, Anno; Becker, Christoph R; Staehler, Michael; Clevert, Dirk A; Macari, Michael; Arndt, Niko; Nikolaou, Konstantin; Sommer, Wieland; Stief, Christian; Reiser, Maximilian F; Johnson, Thorsten R C

    2010-07-01

    To evaluate the diagnostic accuracy of dual-energy CT (DECT) in renal mass characterization using a single-phase acquisition. A total of 202 patients (148 males, 54 females; 63 +/- 13 years) with ultrasound-based suspicion of a renal mass underwent unenhanced single energy and nephrographic phase DECT on a dual source scanner (Siemens Somatom Definition Dual Source, n = 174; Somatom Definition Flash, n = 28). Scan parameters for DECT were: tube potential, 80/100 and 100/Sn140 kVp; exposure, 404/300 and 96/232 effective mAs; collimation, 14 x 1.2/32 x 0.6 mm. Two abdominal radiologists assessed DECT and SECT image quality and noise on a 5-point visual analogue scale. Using solely the DE acquisition including virtual nonenhanced (VNE) and color coded iodine images that enable direct visualization of iodine, masses were characterized as benign or malignant. In a second reading session after 34 to 72 (average: 55) days, the same assessment was again performed using both the true nonenhanced (TNE) and nephrographic phase scans thereby simulating conventional single-energy CT. Sensitivities, specificities, diagnostic accuracies, and interpretation times and were recorded for both reading paradigms. Dose reduction of a single-phase over a dual-phase protocol was calculated. Results were tested for statistical significance using the paired Wilcoxon signed rank test and student t test. Differences in sensitivities were tested for significance using the McNemar test. Of the 202 patients, 115 (56.9%) underwent surgical resection of renal masses. Histopathology showed malignancy in 99 and benign tumors in 18 patients, in 48 patients (23.7%), follow-up imaging showed size stability of lesions diagnosed as benign, and 37 patients (18.3%) had no mass. Based on DECT only, 95/99 (96.0%) patients with malignancy and 96/103 (93.2%) patients without malignancy were correctly identified, for an overall accuracy of 94.6%. 
The dual-phase approach identified 96/99 (97.0%) and 98/103 (95.1%), accuracy 96.0%, P > 0.05 for both. Mean interpretation time was 2.2 +/- 0.8 minutes for DECT, and 3.5 +/- 1.0 minutes for the dual-phase protocol, P < 0.001. Mean VNE/TNE image quality was 1.68 +/- 0.65/1.30 +/- 0.59, noise was 2.03 +/- 0.57/1.18 +/- 0.29, P < 0.001 for both. Omission of the true unenhanced phase led to a 48.9 +/- 7.0% dose reduction. DECT allows for fast and accurate characterization of renal masses in a single-phase acquisition. Interpretation of color-coded images significantly reduces interpretation time. Omission of a nonenhanced acquisition can reduce radiation exposure by almost 50%.
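The headline performance figures follow directly from the patient counts reported in the abstract; a quick sketch (counts taken from the abstract, variable names illustrative) reproduces them:

```python
# Diagnostic performance of the DECT-only reading, from the abstract's counts.
tp, n_malignant = 95, 99      # malignancies correctly identified / total malignant
tn, n_benign = 96, 103        # non-malignant patients correctly identified / total

sensitivity = tp / n_malignant                      # 95/99
specificity = tn / n_benign                         # 96/103
accuracy = (tp + tn) / (n_malignant + n_benign)     # 191/202

print(f"sensitivity {sensitivity:.1%}, "
      f"specificity {specificity:.1%}, accuracy {accuracy:.1%}")
# -> sensitivity 96.0%, specificity 93.2%, accuracy 94.6%
```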

  3. Platform-independent software for medical image processing on the Internet

    NASA Astrophysics Data System (ADS)

    Mancuso, Michael E.; Pathak, Sayan D.; Kim, Yongmin

    1997-05-01

We have developed a software tool for image processing over the Internet. The tool is a general-purpose, easy-to-use, flexible, platform-independent image processing software package with functions most commonly used in medical image processing. It provides for processing of medical images located either remotely on the Internet or locally. The software was written in Java - the new programming language developed by Sun Microsystems. It was compiled and tested using Microsoft's Visual Java 1.0 and Microsoft's Just in Time Compiler 1.00.6211. The software is simple and easy to use. In order to use the tool, the user needs to download the software from our site before running it using any Java interpreter, such as those supplied by Sun, Symantec, Borland or Microsoft. Future versions of the operating systems supplied by Sun, Microsoft, Apple, IBM, and others will include Java interpreters. The software is then able to access and process any image on the Internet or on the local computer. Using a 512 X 512 X 8-bit image, a 3 X 3 convolution took 0.88 seconds on an Intel Pentium Pro PC running at 200 MHz with 64 Mbytes of memory. A window/level operation took 0.38 seconds, while a 3 X 3 median filter took 0.71 seconds. These performance numbers demonstrate the feasibility of using this software interactively on desktop computers. Our software tool supports various image processing techniques commonly used in medical image processing and can run without the need for any specialized hardware. It can become an easily accessible resource over the Internet to promote the learning and understanding of image processing algorithms. Also, it could facilitate sharing of medical image databases and collaboration amongst researchers and clinicians, regardless of location.
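The timing figures above refer to standard neighborhood operations. The original tool was written in Java; as a minimal stand-in, here is a NumPy sketch of the benchmarked 3 X 3 convolution on a 512 X 512 X 8-bit image (function and variable names are illustrative, not the tool's API):

```python
import numpy as np

def conv3x3(img, kernel):
    """Naive 3x3 neighborhood convolution with edge replication.
    (Strictly a correlation; identical to convolution for symmetric kernels.)"""
    padded = np.pad(img.astype(float), 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return out

# A 512 x 512 x 8-bit test image, as in the abstract's benchmark:
img = np.random.randint(0, 256, (512, 512), dtype=np.uint8)
smoothed = conv3x3(img, np.full((3, 3), 1 / 9))  # 3x3 mean filter
```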

  4. Automatic transperineal ultrasound probe positioning based on CT scan for image guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Camps, S. M.; Verhaegen, F.; Paiva Fonesca, G.; de With, P. H. N.; Fontanarosa, D.

    2017-03-01

Image interpretation is crucial during ultrasound image acquisition. A skilled operator is typically needed to verify that the correct anatomical structures are all visualized and with sufficient quality. The need for this operator is one of the major reasons why presently ultrasound is not widely used in radiotherapy workflows. To solve this issue, we introduce an algorithm that uses anatomical information derived from a CT scan to automatically provide the operator with a patient-specific ultrasound probe setup. The first application we investigated, for its relevance to radiotherapy, is 4D transperineal ultrasound image acquisition for prostate cancer patients. As an initial test, the algorithm was applied to a CIRS multi-modality pelvic phantom. Probe setups were calculated in order to allow visualization of the prostate and adjacent edges of bladder and rectum, as clinically required. Five of the proposed setups were reproduced using a precision robotic arm and ultrasound volumes were acquired. A gel-filled probe cover was used to ensure proper acoustic coupling, while taking into account possible tilted positions of the probe with respect to the flat phantom surface. Visual inspection of the acquired volumes revealed that clinical requirements were fulfilled. Preliminary quantitative evaluation was also performed. The mean absolute distance (MAD) was calculated between actual anatomical structure positions and positions predicted by the CT-based algorithm. This resulted in a MAD of (2.8±0.4) mm for prostate, (2.5±0.6) mm for bladder and (2.8±0.6) mm for rectum. These results show that no significant systematic errors due to, e.g., probe misplacement were introduced.
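The mean absolute distance metric used above is simply the average of per-point distances between predicted and actual positions. A minimal sketch, assuming corresponding 3D landmark points per structure (the study may instead average over structure surfaces; all coordinates below are invented):

```python
import numpy as np

def mean_absolute_distance(predicted, actual):
    """MAD (and its SD) between corresponding predicted and actual
    3D positions, given as N x 3 arrays in mm."""
    d = np.linalg.norm(np.asarray(predicted) - np.asarray(actual), axis=1)
    return d.mean(), d.std()

# Hypothetical landmark positions (mm) on a structure of interest:
predicted = np.array([[10.0, 5.0, 3.0], [12.0, 6.0, 2.0]])
actual    = np.array([[11.0, 5.5, 3.2], [11.5, 7.0, 2.4]])
mad, sd = mean_absolute_distance(predicted, actual)
```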

  5. iPhone-based teleradiology for the diagnosis of acute cervico-dorsal spine trauma.

    PubMed

    Modi, Jayesh; Sharma, Pranshu; Earl, Alex; Simpson, Mark; Mitchell, J Ross; Goyal, Mayank

    2010-11-01

To assess the feasibility of iPhone-based teleradiology as a potential solution for the diagnosis of acute cervico-dorsal spine trauma. We have developed a solution that allows visualization of images on the iPhone. Our system allows rapid, remote, secure visualization of medical images without storing patient data on the iPhone. This retrospective study comprises cervico-dorsal computed tomography (CT) scan examinations of 75 consecutive patients with clinically suspected cervico-dorsal spine fracture. Two radiologists reviewed CT scan images on the iPhone. CT spine scans were analyzed for vertebral body fractures and posterior element fractures, any associated subluxation-dislocation and cord lesion. The total time taken from the launch of the viewing application on the iPhone until interpretation was recorded. The results were compared with those of a diagnostic workstation monitor. Inter-rater agreement was assessed. The sensitivity and accuracy of detecting vertebral body fractures were 80% and 97% by both readers using the iPhone system, with perfect inter-rater agreement (kappa: 1). The sensitivity and accuracy of detecting posterior element fractures were 75% and 98% for Reader 1 and 50% and 97% for Reader 2 using the iPhone. There was good inter-rater agreement (kappa: 0.66) between both readers. No statistically significant difference was noted between time on the workstation and on the iPhone system. The iPhone-based teleradiology system is accurate in the diagnosis of acute cervico-dorsal spinal trauma. It allows rapid, remote, secure visualization of medical images without storing patient data on the iPhone.
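Inter-rater agreement of the kind reported above (kappa: 1 and kappa: 0.66) is conventionally computed with Cohen's kappa. A minimal sketch of the standard formula (not the study's own analysis code; the reader labels below are invented):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same cases (any hashable labels)."""
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n    # observed agreement
    p_exp = sum((rater_a.count(l) / n) * (rater_b.count(l) / n)  # chance agreement
                for l in labels)
    return (p_obs - p_exp) / (1 - p_exp)

# Toy fracture calls (1 = fracture, 0 = none) for two hypothetical readers:
reader1 = [1, 1, 0, 0, 1, 0, 1, 0]
reader2 = [1, 1, 0, 0, 1, 0, 0, 0]
kappa = cohens_kappa(reader1, reader2)
```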

  6. [Heinrich Hoffmann's Der Struwwelpeter (1845/1859): a parody on the romantic cult of childhood].

    PubMed

    Wesseling, Lies

    2006-01-01

This article analyzes the cultural dynamics of the construction and deconstruction of childhood images, by means of a case study of Heinrich Hoffmann's classic picture book, Der Struwwelpeter (1845/1859). Childhood images are the joint product of the sciences (especially anthropology, pedagogy and developmental psychology) and the arts (especially painting, photography and (children's) literature). These images are historically variable, because childhood is the permanent target of idealization and demystification. This article interprets Der Struwwelpeter as a demystification of Romantic idealizations of childhood as propounded by Romantic Naturphilosophie and, more specifically, the pedagogy of Friedrich Fröbel (1772-1852). In my view, this picture book satirizes, through its verbal and visual components, the developmentalism and the pastoral idyll which informed the Romantic image of childhood. As I argue at length, this satire directly bears upon leading scientific and political controversies of Hoffmann's time.

  7. [Anatomy of the skull base and the cranial nerves in slice imaging].

    PubMed

    Bink, A; Berkefeld, J; Zanella, F

    2009-07-01

Computed tomography (CT) and magnetic resonance imaging (MRI) are suitable methods for examination of the skull base. Whereas CT is used mainly to evaluate bone destruction, e.g. for planning surgical therapy, MRI is used to show pathologies in the soft tissue and bone invasion. High resolution and thin slice thickness are indispensable for both modalities of skull base imaging. Detailed anatomical knowledge is necessary even for correct planning of the examination procedures, and this knowledge is required to recognize and interpret pathologies. MRI is the method of choice for examining the cranial nerves. The total path of a cranial nerve can be visualized by choosing different sequences taking into account the tissue surrounding the nerve. This article summarizes examination methods of the skull base in CT and MRI, gives a detailed description of the anatomy and illustrates it with image examples.

  8. Variability of manual ciliary muscle segmentation in optical coherence tomography images.

    PubMed

    Chang, Yu-Cherng; Liu, Keke; Cabot, Florence; Yoo, Sonia H; Ruggeri, Marco; Ho, Arthur; Parel, Jean-Marie; Manns, Fabrice

    2018-02-01

Optical coherence tomography (OCT) offers new options for imaging the ciliary muscle, allowing direct in vivo visualization. However, variation in image quality along the length of the muscle prevents accurate delineation and quantification of the muscle. Quantitative analyses of the muscle are accompanied by variability in segmentation between examiners and between sessions for the same examiner. In processes such as accommodation, where changes in muscle thickness may be tens of microns, the equivalent of a small number of image pixels, differences in segmentation can influence the magnitude and potentially the direction of thickness change. A detailed analysis of variability in ciliary muscle thickness measurements was performed to serve as a benchmark for the extent of this variability in studies on the ciliary muscle. Variation between sessions and between examiners was found to be insignificant, but the magnitude of variation should be considered when interpreting ciliary muscle results.

  9. Automated three-dimensional quantification of myocardial perfusion and brain SPECT.

    PubMed

    Slomka, P J; Radau, P; Hurwitz, G A; Dey, D

    2001-01-01

    To allow automated and objective reading of nuclear medicine tomography, we have developed a set of tools for clinical analysis of myocardial perfusion tomography (PERFIT) and Brain SPECT/PET (BRASS). We exploit algorithms for image registration and use three-dimensional (3D) "normal models" for individual patient comparisons to composite datasets on a "voxel-by-voxel basis" in order to automatically determine the statistically significant abnormalities. A multistage, 3D iterative inter-subject registration of patient images to normal templates is applied, including automated masking of the external activity before final fit. In separate projects, the software has been applied to the analysis of myocardial perfusion SPECT, as well as brain SPECT and PET data. Automatic reading was consistent with visual analysis; it can be applied to the whole spectrum of clinical images, and aid physicians in the daily interpretation of tomographic nuclear medicine images.
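The voxel-by-voxel comparison against a normal model described above is commonly implemented as a z-score map over the registered volume. A simplified sketch under assumed inputs (array names and the z threshold are illustrative, not PERFIT/BRASS internals):

```python
import numpy as np

def abnormality_map(patient, normal_mean, normal_sd, z_thresh=3.0):
    """Voxel-wise z-scores of a registered patient volume against a
    composite normal model; voxels beyond the threshold are flagged
    as statistically abnormal."""
    z = (patient - normal_mean) / np.maximum(normal_sd, 1e-6)
    return z, np.abs(z) > z_thresh

# Toy volumes standing in for registered SPECT data:
mean = np.zeros((16, 16, 16))
sd = np.ones((16, 16, 16))
patient = mean.copy()
patient[4, 5, 6] = 5.0          # one artificially abnormal voxel
z, flagged = abnormality_map(patient, mean, sd)
```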

  10. A Cognitive Approach to Teaching a Graduate-Level GEOBIA Course

    NASA Astrophysics Data System (ADS)

    Bianchetti, Raechel A.

    2016-06-01

Remote sensing image analysis training occurs both in the classroom and in the research lab. Education in the classroom for traditional pixel-based image analysis has been standardized across college curriculums. However, with the increasing interest in Geographic Object-Based Image Analysis (GEOBIA), there is a need to develop classroom instruction for this method of image analysis. While traditional remote sensing courses emphasize the expansion of skills and knowledge related to the use of computer-based analysis, GEOBIA courses should examine the cognitive factors underlying visual interpretation. This paper provides an initial analysis of the development, implementation, and outcomes of a GEOBIA course that considers not only the computational methods of GEOBIA, but also the cognitive factors of expertise that such software attempts to replicate. Finally, a reflection on the first instantiation of this course is presented, in addition to plans for development of an open-source repository for course materials.

  11. Play dough as an educational tool for visualization of complicated cerebral aneurysm anatomy.

    PubMed

    Eftekhar, Behzad; Ghodsi, Mohammad; Ketabchi, Ebrahim; Ghazvini, Arman Rakan

    2005-05-10

Imagining the three-dimensional (3D) structure of cerebral vascular lesions from two-dimensional (2D) angiograms is one of the skills that neurosurgical residents should achieve during their training. Although ongoing progress in computer software and digital imaging systems has facilitated viewing and interpretation of cerebral angiograms enormously, these facilities are not always available. We have presented the use of play dough as an adjunct to the teaching armamentarium for training in visualization of cerebral aneurysms in some cases. The advantages of play dough are its low cost, availability and simplicity of use; it is more efficient and realistic for training the less experienced resident than simple drawings, or even angiographic views from different angles, and it requires no computers or similar equipment. The disadvantages include the psychological resistance of residents to the use of something in surgical training that is usually considered a toy, and that it is not as clean as drawings or computerized images. Although technology and computerized software using the patients' own imaging data seem likely to become more advanced in the future, use of play dough in some complicated cerebral aneurysm cases may be helpful in 3D reconstruction of the real situation.

  12. Pharmacological imaging as a tool to visualise dopaminergic neurotoxicity.

    PubMed

    Schrantee, A; Reneman, L

    2014-09-01

    Dopamine abnormalities underlie a wide variety of psychopathologies, including ADHD and schizophrenia. A new imaging technique, pharmacological magnetic resonance imaging (phMRI), is a promising non-invasive technique to visualize the dopaminergic system in the brain. In this review we explore the clinical potential of phMRI in detecting dopamine dysfunction or neurotoxicity, assess its strengths and weaknesses and identify directions for future research. Preclinically, phMRI is able to detect severe dopaminergic abnormalities quite similar to conventional techniques such as PET and SPECT. phMRI benefits from its high spatial resolution and the possibility to visualize both local and downstream effects of dopaminergic neurotransmission. In addition, it allows for repeated measurements and assessments in vulnerable populations. The major challenge is the complex interpretation of phMRI results. Future studies in patients with dopaminergic abnormalities need to confirm the currently reviewed preclinical findings to validate the technique in a clinical setting. Eventually, based on the current review we expect that phMRI can be of use in a clinical setting involving vulnerable populations (such as children and adolescents) for diagnosis and monitoring treatment efficacy. This article is part of the Special Issue Section entitled 'Neuroimaging in Neuropharmacology'. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. Hierarchical layered and semantic-based image segmentation using ergodicity map

    NASA Astrophysics Data System (ADS)

    Yadegar, Jacob; Liu, Xiaoqing

    2010-04-01

Image segmentation plays a foundational role in image understanding and computer vision. Although great strides have been made and progress achieved on automatic/semi-automatic image segmentation algorithms, designing a generic, robust, and efficient image segmentation algorithm is still challenging. Human vision is still far superior to computer vision, especially in interpreting semantic meanings/objects in images. We present a hierarchical/layered semantic image segmentation algorithm that can automatically and efficiently segment images into hierarchical layered/multi-scaled semantic regions/objects with contextual topological relationships. The proposed algorithm bridges the gap between high-level semantics and low-level visual features/cues (such as color, intensity, edge, etc.) by utilizing a layered/hierarchical ergodicity map, where ergodicity is computed based on a space-filling fractal concept and used as a region dissimilarity measurement. The algorithm applies a highly scalable, efficient, and adaptive Peano-Cesaro triangulation/tiling technique to decompose the given image into a set of similar/homogeneous regions based on low-level visual cues in a top-down manner. The layered/hierarchical ergodicity map is built through a bottom-up region dissimilarity analysis. The recursive fractal sweep associated with the Peano-Cesaro triangulation provides efficient local multi-resolution refinement to any level of detail. The generated binary decomposition tree also provides efficient neighbor retrieval mechanisms for contextual topological object/region relationship generation. Experiments have been conducted within the maritime image environment, where the segmented layered semantic objects include basic level objects (i.e. sky/land/water) and deeper level objects in the sky/land/water surfaces.
Experimental results demonstrate the proposed algorithm has the capability to robustly and efficiently segment images into layered semantic objects/regions with contextual topological relationships.

  14. Cassini/VIMS hyperspectral observations of the HUYGENS landing site on Titan

    USGS Publications Warehouse

    Rodriguez, S.; Le Mouelic, S.; Sotin, Christophe; Clenet, H.; Clark, R.N.; Buratti, B.; Brown, R.H.; McCord, T.B.; Nicholson, P.D.; Baines, K.H.

    2006-01-01

Titan is one of the primary scientific objectives of the NASA-ESA-ASI Cassini-Huygens mission. Scattering by haze particles in Titan's atmosphere and numerous methane absorptions dramatically veil Titan's surface in the visible range, though it can be studied more easily in some narrow infrared windows. The Visual and Infrared Mapping Spectrometer (VIMS) instrument onboard the Cassini spacecraft successfully imaged its surface in the atmospheric windows, taking hyperspectral images in the range 0.4-5.2 µm. On 26 October (TA flyby) and 13 December 2004 (TB flyby), the Cassini-Huygens mission flew over Titan at an altitude lower than 1200 km at closest approach. We report here on the analysis of VIMS images of the Huygens landing site acquired at TA and TB, with a spatial resolution ranging from 16 to 14.4 km/pixel. The pure atmospheric backscattering component is corrected by using both an empirical method and a first-order theoretical model. Both approaches provide consistent results. After the removal of scattering, ratio images reveal subtle surface heterogeneities. A particularly contrasted structure appears in ratio images involving the 1.59 and 2.03 µm images north of the Huygens landing site. Although pure water ice cannot be the only component exposed at Titan's surface, this area is consistent with a local enrichment in exposed water ice and seems to be consistent with DISR/Huygens images and spectra interpretations. The images also show a morphological structure that can be interpreted as a 150 km diameter impact crater with a central peak. © 2006 Elsevier Ltd. All rights reserved.

  15. CNNEDGEPOT: CNN based edge detection of 2D near surface potential field data

    NASA Astrophysics Data System (ADS)

    Aydogan, D.

    2012-09-01

All anomalies are important in the interpretation of gravity and magnetic data because they indicate important structural features. One of the advantages of using gravity or magnetic data to search for contacts is the ability to detect buried structures whose signs cannot be seen at the surface. In this paper, a general view of the cellular neural network (CNN) method with a large-scale nonlinear circuit is presented, focusing on its image processing applications. The proposed CNN model is used consecutively in order to extract bodies and body edges. The algorithm is a stochastic image processing method based on the close neighborhood relationship of the cells and optimization of the A, B and I matrices, termed cloning template operators. Setting up a CNN (continuous-time cellular neural network (CTCNN) or discrete-time cellular neural network (DTCNN)) for a particular task needs a proper selection of the cloning templates, which determine the dynamics of the method. The proposed algorithm is used for image enhancement and edge detection. The method is applied to synthetic and field data generated for edge detection of near-surface geological bodies that mask each other at various depths and dimensions. The program, named CNNEDGEPOT, is a set of functions written in MATLAB. The GUI helps the user to easily change all the required CNN model parameters. A visual evaluation of the outputs due to DTCNN and CTCNN is carried out and the results are compared with each other. These examples demonstrate that, in detecting geological features, the CNN model can be used for visual interpretation of near-surface gravity or magnetic anomaly maps.
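A discrete-time cellular neural network update of the kind the abstract describes can be sketched as follows. The A/B/I cloning-template values below are illustrative placeholders for an edge-like operation, not the paper's optimized templates, and the original program is in MATLAB rather than Python:

```python
import numpy as np

def dtcnn_step(x, u, A, B, I):
    """One discrete-time cellular neural network (DTCNN) update.
    A: 3x3 feedback template over cell outputs, B: 3x3 control template
    over the input image, I: scalar bias."""
    y = np.clip(x, -1.0, 1.0)          # standard piecewise-linear CNN output
    yp = np.pad(y, 1, mode="edge")
    up = np.pad(u, 1, mode="edge")
    h, w = x.shape
    nxt = np.full((h, w), float(I))
    for dy in range(3):
        for dx in range(3):
            nxt += A[dy, dx] * yp[dy:dy + h, dx:dx + w]
            nxt += B[dy, dx] * up[dy:dy + h, dx:dx + w]
    return nxt

# Illustrative edge-like templates (placeholder values):
A = np.zeros((3, 3))
B = np.array([[-1.0, -1.0, -1.0],
              [-1.0,  8.0, -1.0],
              [-1.0, -1.0, -1.0]])
u = (np.random.rand(64, 64) > 0.5).astype(float) * 2 - 1   # bipolar input map
x = np.zeros_like(u)
for _ in range(10):                    # iterate toward a stable edge map
    x = dtcnn_step(x, u, A, B, I=-1.0)
edges = np.clip(x, -1, 1)
```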

  16. MSL: Facilitating automatic and physical analysis of published scientific literature in PDF format

    PubMed Central

    Ahmed, Zeeshan; Dandekar, Thomas

    2018-01-01

Published scientific literature contains millions of figures, including information about the results obtained from different scientific experiments, e.g. PCR-ELISA data, microarray analysis, gel electrophoresis, mass spectrometry data, DNA/RNA sequencing, diagnostic imaging (CT/MRI and ultrasound scans), and medical imaging like electroencephalography (EEG), magnetoencephalography (MEG), electrocardiography (ECG), and positron-emission tomography (PET) images. The importance of biomedical figures has been widely recognized in the scientific and medical communities, as they play a vital role in providing major original data and experimental and computational results in concise form. One major challenge in implementing a system for scientific literature analysis is extracting and analyzing text and figures from published PDF files by physical and logical document analysis. Here we present a product line architecture based bioinformatics tool ‘Mining Scientific Literature (MSL)’, which supports the extraction of text and images by interpreting all kinds of published PDF files using advanced data mining and image processing techniques. It provides modules for the marginalization of extracted text based on different coordinates and keywords, visualization of extracted figures and extraction of embedded text from all kinds of biological and biomedical figures using applied Optical Character Recognition (OCR). Moreover, for further analysis and usage, it generates the system’s output in different formats including text, PDF, XML and image files. Hence, MSL is an easy-to-install and easy-to-use analysis tool to interpret published scientific literature in PDF format. PMID:29721305

  17. Incorporating 3-dimensional models in online articles.

    PubMed

    Cevidanes, Lucia H S; Ruellas, Antonio C O; Jomier, Julien; Nguyen, Tung; Pieper, Steve; Budin, Francois; Styner, Martin; Paniagua, Beatriz

    2015-05-01

    The aims of this article are to introduce the capability to view and interact with 3-dimensional (3D) surface models in online publications, and to describe how to prepare surface models for such online 3D visualizations. Three-dimensional image analysis methods include image acquisition, construction of surface models, registration in a common coordinate system, visualization of overlays, and quantification of changes. Cone-beam computed tomography scans were acquired as volumetric images that can be visualized as 3D projected images or used to construct polygonal meshes or surfaces of specific anatomic structures of interest. The anatomic structures of interest in the scans can be labeled with color (3D volumetric label maps), and then the scans are registered in a common coordinate system using a target region as the reference. The registered 3D volumetric label maps can be saved in .obj, .ply, .stl, or .vtk file formats and used for overlays, quantification of differences in each of the 3 planes of space, or color-coded graphic displays of 3D surface distances. All registered 3D surface models in this study were saved in .vtk file format and loaded in the Elsevier 3D viewer. In this study, we describe possible ways to visualize the surface models constructed from cone-beam computed tomography images using 2D and 3D figures. The 3D surface models are available in the article's online version for viewing and downloading using the reader's software of choice. These 3D graphic displays are represented in the print version as 2D snapshots. Overlays and color-coded distance maps can be displayed using the reader's software of choice, allowing graphic assessment of the location and direction of changes or morphologic differences relative to the structure of reference. The interpretation of 3D overlays and quantitative color-coded maps requires basic knowledge of 3D image analysis. 
When submitting manuscripts, authors can now upload 3D models that will allow readers to interact with or download them. Such interaction with 3D models in online articles will now give readers and authors a better understanding and visualization of the results. Copyright © 2015 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.

  18. Learning semantic and visual similarity for endomicroscopy video retrieval.

    PubMed

    Andre, Barbara; Vercauteren, Tom; Buchner, Anna M; Wallace, Michael B; Ayache, Nicholas

    2012-06-01

Content-based image retrieval (CBIR) is a valuable computer vision technique which is increasingly being applied in the medical community for diagnosis support. However, traditional CBIR systems only deliver visual outputs, i.e., images having a similar appearance to the query, which is not directly interpretable by the physicians. Our objective is to provide a system for endomicroscopy video retrieval which delivers both visual and semantic outputs that are consistent with each other. In a previous study, we developed an adapted bag-of-visual-words method for endomicroscopy retrieval, called "Dense-Sift," that computes a visual signature for each video. In this paper, we present a novel approach to complement visual similarity learning with semantic knowledge extraction, in the field of in vivo endomicroscopy. We first leverage a semantic ground truth based on eight binary concepts, in order to transform these visual signatures into semantic signatures that reflect how much the presence of each semantic concept is expressed by the visual words describing the videos. Using cross-validation, we demonstrate that, in terms of semantic detection, our intuitive Fisher-based method transforming visual-word histograms into semantic estimations outperforms support vector machine (SVM) methods with statistical significance. In a second step, we propose to improve retrieval relevance by learning an adjusted similarity distance from a perceived similarity ground truth. As a result, our distance learning method yields a statistically significant improvement in the correlation with the perceived similarity. We also demonstrate that, in terms of perceived similarity, the recall performance of the semantic signatures is close to that of the visual signatures and significantly better than those of several state-of-the-art CBIR methods.
The semantic signatures are thus able to communicate high-level medical knowledge while being consistent with, and much shorter than, the low-level visual signatures. In our resulting retrieval system, we use visual signatures for perceived similarity learning and retrieval, and semantic signatures to output additional information, expressed in the endoscopist's own language, which provides a relevant semantic translation of the visual retrieval outputs.
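The transformation from visual-word histograms to per-concept semantic estimations can be illustrated with a simple linear projection. This is a didactic stand-in, not the authors' Fisher-based method, and the concept-weight values are invented:

```python
import numpy as np

def semantic_signature(visual_hist, concept_weights):
    """Project a normalized visual-word histogram onto per-concept weights.
    concept_weights[c, w]: how strongly visual word w expresses semantic
    concept c (hypothetical values)."""
    h = visual_hist / visual_hist.sum()
    return concept_weights @ h      # one estimation per semantic concept

hist = np.array([4.0, 1.0, 0.0, 5.0])     # counts of 4 visual words in a video
W = np.array([[0.9, 0.1, 0.0, 0.2],       # concept 1 weights
              [0.0, 0.3, 0.8, 0.6]])      # concept 2 weights
sig = semantic_signature(hist, W)
```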

  19. Accuracy of fluorodeoxyglucose-PET imaging for differentiating benign from malignant pleural effusions: a meta-analysis.

    PubMed

    Porcel, José M; Hernández, Paula; Martínez-Alonso, Montserrat; Bielsa, Silvia; Salud, Antonieta

    2015-02-01

The role of fluorodeoxyglucose (FDG)-PET imaging for diagnosing malignant pleural effusions is not well defined. The aim of this study was to summarize the evidence for its use in ruling in or out the malignant origin of a pleural effusion or thickening. A meta-analysis was conducted of diagnostic accuracy studies published in the Cochrane Library, PubMed, and Embase (inception to June 2013) without language restrictions. Two investigators selected studies that had evaluated the performance of FDG-PET imaging in patients with pleural effusions or thickening, using pleural cytopathology or histopathology as the reference standard for malignancy. Subgroup analyses were conducted according to FDG-PET imaging interpretation (qualitative or semiquantitative), PET imaging equipment (PET vs integrated PET-CT imaging), and/or target population (known lung cancer or malignant pleural mesothelioma). Study quality was assessed using Quality Assessment of Diagnostic Accuracy Studies-2. We used a bivariate random-effects model for the analysis and pooling of diagnostic performance measures across studies. Fourteen studies at non-high risk of bias, comprising 407 patients with malignant and 232 with benign pleural conditions, met the inclusion criteria. Semiquantitative PET imaging readings had a significantly lower sensitivity for diagnosing malignant effusions than visual assessments (82% vs 91%; P = .026). The pooled test characteristics of integrated PET-CT imaging systems using semiquantitative interpretations for identifying malignant effusions were: sensitivity, 81%; specificity, 74%; positive likelihood ratio (LR), 3.22; negative LR, 0.26; and area under the curve, 0.838. Resultant data were heterogeneous, and spectrum bias should be considered when appraising FDG-PET imaging operating characteristics. The moderate accuracy of PET-CT imaging using semiquantitative readings precludes its routine recommendation for discriminating malignant from benign pleural effusions.
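The reported likelihood ratios relate to pooled sensitivity and specificity through the standard definitions. Note that the abstract's values come from a bivariate random-effects model, so the simple point-estimate ratios below only approximate the pooled LR+:

```python
def likelihood_ratios(sens, spec):
    """Standard positive and negative likelihood ratios
    from sensitivity and specificity."""
    return sens / (1 - spec), (1 - sens) / spec

# Pooled semiquantitative PET-CT estimates from the abstract:
lr_pos, lr_neg = likelihood_ratios(0.81, 0.74)
# simple ratios give LR+ ~ 3.1 and LR- ~ 0.26;
# the abstract's bivariate model reports 3.22 and 0.26
```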

  20. Improving visual observation skills through the arts to aid radiographic interpretation in veterinary practice: A pilot study.

    PubMed

    Beck, Cathy; Gaunt, Heather; Chiavaroli, Neville

    2017-09-01

Radiographic interpretation is a perceptual and cognitive skill. Recently, core veterinary radiology textbooks have focused on the cognitive component (i.e., the clinical aspects of radiographic interpretation) rather than on the features of visual observation that improve identification of abnormalities. As a result, the skill of visual observation is underemphasized and thus often underdeveloped by trainees. The study of the arts in medical education has been used to train and improve visual observation and empathy. The use of the arts to improve visual observation skills in veterinary science has not been previously described. The objectives of this pilot study were to adapt the existing Visual Arts in Health Education Program for medical and dental students at the University of Melbourne, Australia to third year Doctor of Veterinary Medicine students and to evaluate their perceptions regarding the program's effects on visual observation skills and confidence with respect to radiographic interpretation. This adaptation took the form of a single seminar given to third year Doctor of Veterinary Medicine students. Following the seminar, students reported an improved approach to radiographic interpretation and felt they had gained skills which would assist them throughout their careers. In the year following the seminar, written reports of the students who attended the seminar were compared with reports from a matched cohort of students who did not attend. This comparison demonstrated increased identification of abnormalities and greater description of the abnormalities identified. Findings indicated that explicit training in visual observation may be a valuable adjunct to the radiology training of Doctor of Veterinary Medicine students. © 2017 American College of Veterinary Radiology.

  1. Presentation of laboratory test results in patient portals: influence of interface design on risk interpretation and visual search behaviour.

    PubMed

    Fraccaro, Paolo; Vigo, Markel; Balatsoukas, Panagiotis; van der Veer, Sabine N; Hassan, Lamiece; Williams, Richard; Wood, Grahame; Sinha, Smeeta; Buchan, Iain; Peek, Niels

    2018-02-12

Patient portals are considered valuable instruments for self-management of long-term conditions; however, there are concerns over how patients might interpret and act on the clinical information they access. We hypothesized that visual cues improve patients' ability to correctly interpret laboratory test results presented through patient portals. We also assessed, by applying eye-tracking methods, the relationship between risk interpretation and visual search behaviour. We conducted a controlled study with 20 kidney transplant patients. Participants viewed three different graphical presentations in each of low, medium, and high risk clinical scenarios composed of results for 28 laboratory tests. After viewing each clinical scenario, patients were asked how they would have acted in real life if the results were their own, as a proxy for their risk interpretation. They could choose between: 1) calling their doctor immediately (high interpreted risk); 2) trying to arrange an appointment within the next 4 weeks (medium interpreted risk); 3) waiting for the next appointment in 3 months (low interpreted risk). For each presentation, we assessed the accuracy of patients' risk interpretation, and employed eye tracking to assess and compare visual search behaviour. Misinterpretation of risk was common, with 65% of participants underestimating the need for action across all presentations at least once. Participants found it particularly difficult to interpret medium risk clinical scenarios. Participants who consistently understood when action was needed showed higher visual search efficiency, suggesting a better strategy for coping with information overload that helped them focus on the laboratory tests most relevant to their condition. This study confirms patients' difficulties in interpreting laboratory test results, with many patients underestimating the need for action even when abnormal values were highlighted or grouped together. Our findings raise patient safety concerns and may limit the potential of patient portals to actively involve patients in their own healthcare.

  2. Programmable Remapper with Single Flow Architecture

    NASA Technical Reports Server (NTRS)

    Fisher, Timothy E. (Inventor)

    1993-01-01

An apparatus for image processing comprising a camera for receiving an original visual image and transforming it into an analog image, a first converter for transforming the analog image from the camera into a digital image, a processor having a single flow architecture for receiving the digital image and producing, with a single algorithm, an output image, a second converter for transforming the digital image from the processor into an analog image, and a viewer for receiving that analog image and transforming it into a transformed visual image so that the transformations applied to the original visual image can be observed. The processor comprises one or more subprocessors that receive the digital image in parallel and produce an output matrix of the transformed visual image. More particularly, the processor comprises a plurality of subprocessors for receiving and transforming the digital image in parallel to produce a matrix of the transformed visual image, and an output interface means for receiving the respective portions of the transformed visual image from the respective subprocessors and producing an output matrix of the transformed visual image.

  3. Workflow Dynamics and the Imaging Value Chain: Quantifying the Effect of Designating a Nonimage-Interpretive Task Workflow.

    PubMed

    Lee, Matthew H; Schemmel, Andrew J; Pooler, B Dustin; Hanley, Taylor; Kennedy, Tabassum A; Field, Aaron S; Wiegmann, Douglas; Yu, John-Paul J

To assess the impact of separate non-image interpretive task (NIT) and image-interpretive task (IIT) workflows in an academic neuroradiology practice. A prospective, randomized, observational investigation of a centralized academic neuroradiology reading room was performed. The primary reading room fellow was observed over a one-month period using a time-and-motion methodology, recording the frequency and duration of tasks performed. Tasks were categorized into separate IIT and NIT workflows. Post-intervention observation of the primary fellow was repeated following the implementation of a consult assistant (CA) responsible for NITs. Pre- and post-intervention data were compared. Following separation of the IIT and NIT workflows, time spent on IITs by the primary fellow increased from 53.8% to 73.2%, while time spent on NITs decreased from 20.4% to 4.4%. The mean duration of image interpretation nearly doubled, from 05:44 to 11:01 (p = 0.002). Decreases in specific NITs, including phone calls/paging (2.86/hr versus 0.80/hr), in-room consultations (1.36/hr versus 0.80/hr), and protocoling (0.99/hr versus 0.10/hr), were observed. The CA experienced 29.4 task switching events (TSEs) per hour. Rates of specific NITs for the CA were 6.41/hr for phone calls/paging, 3.60/hr for in-room consultations, and 3.83/hr for protocoling. Separating responsibilities into NIT and IIT workflows substantially increased image interpretation time and decreased TSEs for the primary fellow. Consolidation of NITs into a separate workflow may allow for more efficient task completion. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. How does c-view image quality compare with conventional 2D FFDM?

    PubMed

    Nelson, Jeffrey S; Wells, Jered R; Baker, Jay A; Samei, Ehsan

    2016-05-01

The FDA approved the use of digital breast tomosynthesis (DBT) in 2011 as an adjunct to 2D full field digital mammography (FFDM), with the constraint that all DBT acquisitions must be paired with a 2D image to ensure adequate interpretive information is provided. Recently, manufacturers have developed methods to provide a synthesized 2D image generated from the DBT data in the hope of sparing patients the radiation exposure of the FFDM acquisition. While this much-needed alternative effectively reduces the total radiation burden, differences in image quality must also be considered. The goal of this study was to compare the intrinsic image quality of synthesized 2D c-view and 2D FFDM images in terms of resolution, contrast, and noise. Two phantoms were utilized in this study: the American College of Radiology mammography accreditation phantom (ACR phantom) and a novel 3D-printed anthropomorphic breast phantom. Both phantoms were imaged using a Hologic Selenia Dimensions 3D system. Analysis of the ACR phantom included both visual inspection and objective automated analysis using in-house software. Analysis of the 3D anthropomorphic phantom included visual assessment of resolution and Fourier analysis of the noise. Using ACR-defined scoring criteria for the ACR phantom, the FFDM images scored statistically higher than c-view according to both the average observer and the automated scores. In addition, between 50% and 70% of c-view images failed to meet the nominal minimum ACR accreditation requirements, primarily due to fiber breaks. Software analysis demonstrated that c-view provided enhanced visualization of medium and large microcalcification objects; however, the benefits diminished for smaller high-contrast objects and all low-contrast objects. Visual analysis of the anthropomorphic phantom showed a measurable loss of resolution in the c-view image (11 lp/mm FFDM, 5 lp/mm c-view) and a loss in detection of small microcalcification objects. Spectral analysis of the anthropomorphic phantom showed higher total noise magnitude in the FFDM image compared with c-view. Whereas the FFDM image contained an approximately white noise texture, the c-view image exhibited marked noise reduction at mid and high frequencies with far less noise suppression at low frequencies, resulting in a mottled noise appearance. Our analysis demonstrates many instances where c-view image quality differs from FFDM. Compared to FFDM, c-view offers better depiction of objects of certain sizes and contrasts, but provides poorer overall resolution and noise properties. Based on these findings, the utilization of c-view images in the clinical setting requires careful consideration, especially if discontinuation of FFDM imaging is being considered. Not explicitly explored in this study is how the combination of DBT + c-view performs relative to DBT + FFDM or FFDM alone.

  5. Study on identifying deciduous forest by the method of feature space transformation

    NASA Astrophysics Data System (ADS)

    Zhang, Xuexia; Wu, Pengfei

    2009-10-01

Extraction of thematic information from remotely sensed imagery remains one of the persistent challenges facing remote sensing science, and many remote sensing researchers devote themselves to this domain. Thematic information extraction methods fall into two kinds, visual interpretation and computer interpretation, and are developing toward intelligent and modular approaches. This paper develops an intelligent feature space transformation method for extracting deciduous forest thematic information in Changping district of Beijing. China-Brazil Earth Resources Satellite images received in 2005 are used to extract the deciduous forest coverage area with the feature space transformation method and a linear spectral decomposition method; the result from remote sensing is consistent with the woodland resource census data published by the Chinese forestry bureau in 2004.
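The linear spectral decomposition step can be sketched as least-squares unmixing: each pixel's spectrum is modeled as a linear mixture of endmember spectra, and the fitted abundance of the forest endmember estimates its fractional cover. The endmember values and band count below are hypothetical, for illustration only; real values would be derived from the imagery itself.

```python
import numpy as np

# Hypothetical endmember spectra (rows: deciduous forest, bare soil, water)
# across 4 illustrative bands; not values from the paper.
endmembers = np.array([
    [0.05, 0.08, 0.45, 0.30],  # deciduous forest
    [0.20, 0.25, 0.30, 0.35],  # bare soil
    [0.03, 0.02, 0.01, 0.01],  # water
])

def unmix(pixel_spectrum, endmembers):
    """Solve pixel = A @ abundances in the least-squares sense, then
    clip negatives and renormalize so the abundances sum to one."""
    A = endmembers.T                                   # (bands, endmembers)
    abund, *_ = np.linalg.lstsq(A, pixel_spectrum, rcond=None)
    abund = np.clip(abund, 0.0, None)
    return abund / abund.sum()

# A pixel that is 70% forest and 30% soil should unmix accordingly.
mixed = 0.7 * endmembers[0] + 0.3 * endmembers[1]
frac = unmix(mixed, endmembers)
```

For a whole scene, the same solve is applied per pixel and the forest fractions are summed into an area estimate.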

  6. Integration of Satellite Tracking Data and Satellite Images for Detailed Characteristics of Wildlife Habitats

    NASA Astrophysics Data System (ADS)

    Dobrynin, D. V.; Rozhnov, V. V.; Saveliev, A. A.; Sukhova, O. V.; Yachmennikova, A. A.

    2017-12-01

Methods for analyzing the results obtained from satellite tracking of large terrestrial mammals differ in their level of integration with additional geographic data. A reliable fine-scale cartographic basis for assessing specific wildlife habitats can be developed through the interpretation of multispectral remote sensing data and extrapolation of the results to the entire estimated species range. Topographic maps were ordinated according to classified features using self-organizing maps (Kohonen's SOM). The satellite image of the Ussuriiskyi Nature Reserve area was interpreted for the analysis of movement conditions for seven wild Amur tigers ( Panthera tigris altaica) equipped with GPS collars. 225 SOM classes for cartographic visualization were sufficient for detailed mapping of all the natural complexes identified by the interpretation. During snow-free periods, tigers preferred deciduous and shrub associations at lower elevations, as well as mixed forests in the valleys of streams adjacent to sparse forests and shrub watersheds in the mountain ranges; during heavy snow periods, the animals preferred the entire range of plant communities in different relief types, except for open sites in meadows and abandoned fields at foothills. The border zones of different biotopes were typically used by the tigers during all seasons. Amur tigers preferred coniferous forests for long-term movements.

  7. Quantitative inference of population response properties across eccentricity from motion-induced maps in macaque V1

    PubMed Central

    Chen, Ming; Wu, Si; Lu, Haidong D.; Roe, Anna W.

    2013-01-01

    Interpreting population responses in the primary visual cortex (V1) remains a challenge especially with the advent of techniques measuring activations of large cortical areas simultaneously with high precision. For successful interpretation, a quantitatively precise model prediction is of great importance. In this study, we investigate how accurate a spatiotemporal filter (STF) model predicts average response profiles to coherently drifting random dot motion obtained by optical imaging of intrinsic signals in V1 of anesthetized macaques. We establish that orientation difference maps, obtained by subtracting orthogonal axis-of-motion, invert with increasing drift speeds, consistent with the motion streak effect. Consistent with perception, the speed at which the map inverts (the critical speed) depends on cortical eccentricity and systematically increases from foveal to parafoveal. We report that critical speeds and response maps to drifting motion are excellently reproduced by the STF model. Our study thus suggests that the STF model is quantitatively accurate enough to be used as a first model of choice for interpreting responses obtained with intrinsic imaging methods in V1. We show further that this good quantitative correspondence opens the possibility to infer otherwise not easily accessible population receptive field properties from responses to complex stimuli, such as drifting random dot motions. PMID:23197457

  8. How visual search relates to visual diagnostic performance: a narrative systematic review of eye-tracking research in radiology.

    PubMed

    van der Gijp, A; Ravesloot, C J; Jarodzka, H; van der Schaaf, M F; van der Schaaf, I C; van Schaik, J P J; Ten Cate, Th J

    2017-08-01

Eye tracking research has been conducted for decades to gain understanding of visual diagnosis, such as in radiology. For educational purposes, it is important to identify visual search patterns that are related to high perceptual performance and to identify effective teaching strategies. This review of eye-tracking literature in the radiology domain aims to identify visual search patterns associated with high perceptual performance. The databases PubMed, EMBASE, ERIC, PsycINFO, Scopus and Web of Science were searched using 'visual perception' OR 'eye tracking' AND 'radiology' and synonyms. Two authors independently screened search results and included eye tracking studies concerning visual skills in radiology published between January 1, 1994 and July 31, 2015. Two authors independently assessed study quality with the Medical Education Research Study Quality Instrument, and extracted study data with respect to design, participant and task characteristics, and variables. A thematic analysis was conducted to extract and arrange study results, and a textual narrative synthesis was applied for data integration and interpretation. The search resulted in 22 relevant full-text articles. Thematic analysis resulted in six themes that informed the relation between visual search and level of expertise: (1) time on task, (2) eye movement characteristics of experts, (3) differences in visual attention, (4) visual search patterns, (5) search patterns in cross-sectional stack imaging, and (6) teaching visual search strategies. Expert search was found to be characterized by a global-focal search pattern, which represents an initial global impression followed by a detailed, focal search-to-find mode. Specific task-related search patterns, like drilling through CT scans and systematic search in chest X-rays, were found to be related to high expert levels. One study investigated the teaching of visual search strategies and did not find a significant effect on perceptual performance. The eye tracking literature in radiology indicates that several search patterns are related to high levels of expertise, but teaching novices to search as an expert may not be effective. Experimental research is needed to find out which search strategies can improve image perception in learners.

  9. Visual search behaviour during laparoscopic cadaveric procedures

    NASA Astrophysics Data System (ADS)

    Dong, Leng; Chen, Yan; Gale, Alastair G.; Rees, Benjamin; Maxwell-Armstrong, Charles

    2014-03-01

Laparoscopic surgery provides a very complex example of medical image interpretation. The task entails visually examining a display that portrays the laparoscopic procedure from a varying viewpoint; eye-hand coordination; complex 3D interpretation of the 2D display imagery; and efficient and safe usage of appropriate surgical tools, as well as other factors. Training in laparoscopic surgery typically entails practice using surgical simulators. Another approach is to use cadavers. Viewing previously recorded laparoscopic operations is also a viable additional approach, and to examine this a study was undertaken to determine what differences exist between where surgeons look during actual operations and where they look when simply viewing the same pre-recorded operations. It was hypothesised that there would be differences related to the different experimental conditions; however, the relative nature of such differences was unknown. The visual search behaviour of two experienced surgeons was recorded as they performed three types of laparoscopic operations on a cadaver. The operations were also digitally recorded. Subsequently, the surgeons viewed the recordings of their operations, again whilst their eye movements were monitored. Differences were found in various eye movement parameters between when the two surgeons performed the operations and when they simply watched the recordings of those operations. It is argued that this reflects the different perceptual motor skills pertinent to the different situations. The relevance of this for surgical training is explored.

  10. Analyser-based mammography using single-image reconstruction.

    PubMed

    Briedis, Dahliyani; Siu, Karen K W; Paganin, David M; Pavlov, Konstantin M; Lewis, Rob A

    2005-08-07

We implement an algorithm that is able to decode a single analyser-based x-ray phase-contrast image of a sample, converting it into an equivalent conventional absorption-contrast radiograph. The algorithm assumes the projection approximation for x-ray propagation in a single-material object embedded in a substrate of approximately uniform thickness. Unlike the phase-contrast images, which have both directional bias and a bias towards edges present in the sample, the reconstructed images are directly interpretable in terms of the projected absorption coefficient of the sample. The technique was applied to a Leeds TOR[MAM] phantom, which is designed to test mammogram quality by the inclusion of simulated microcalcifications, filaments and circular discs. This phantom was imaged at varying doses using three modalities: analyser-based synchrotron phase-contrast imaging converted to equivalent absorption radiographs using our algorithm, slot-scanned synchrotron imaging and imaging using a conventional mammography unit. Features in the resulting images were then assigned a quality score by volunteers. The single-image reconstruction method achieved higher scores at equivalent and lower doses than the conventional mammography images, but showed no improvement in visualization of the simulated microcalcifications, and some degradation in image quality at reduced doses for filament features.

  11. Measurement of irrigated acreage in Western Kansas from LANDSAT images

    NASA Astrophysics Data System (ADS)

    Keene, K. M.; Conley, C. D.

    1980-03-01

In the past four decades, irrigated acreage in western Kansas has increased rapidly. Optimum utilization of vital groundwater supplies requires implementation of long-term water-management programs. One important variable in such programs is up-to-date information on acreage under irrigation. Conventional ground survey methods of estimating irrigated acreage are too slow to be of maximum use in water-management programs. Visual interpretation of LANDSAT images permits more rapid measurement of irrigated acreage, but the procedures are tedious and still relatively slow. For example, using a LANDSAT false-color composite image in areas of western Kansas with few landmarks, it is impossible to keep track of fields by examination under a low-power microscope. Irrigated fields are more easily delineated on a photographically enlarged false-color composite and are traced on an overlay for measurement. Interpretation and measurement required 6 weeks for a four-county (3140 mi2, 8133 km2) test area. Video image-analysis equipment permits rapid measurement of irrigated acreage. Spectral response of irrigated summer crops in western Kansas on MSS band 5 (visible red, 0.6-0.7 μm) images is low, in contrast to the high response from harvested and fallow fields and from common soil types. Therefore, irrigated acreage in western Kansas can be uniquely discriminated by video image analysis. The area of irrigated crops in a given area of view is measured directly. Sources of error are small in western Kansas. After preliminary preparation of the images, the time required to measure irrigated acreage was 1 h per county (average area, 876 mi2 or 2269 km2).
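The discrimination step described above amounts to thresholding the band-5 array and converting the pixel count to area. A minimal sketch, assuming a nominal 79 m LANDSAT MSS pixel footprint and an illustrative threshold and toy scene (none of these numbers are from the paper):

```python
import numpy as np

def irrigated_area_km2(band5, threshold, pixel_area_m2):
    """Count pixels whose MSS band-5 response falls below the threshold
    (irrigated crops are dark in the visible red) and convert to km^2."""
    n_irrigated = int(np.count_nonzero(band5 < threshold))
    return n_irrigated * pixel_area_m2 / 1e6

# Toy 4x4 "image": values below 40 stand for dark irrigated fields.
scene = np.array([
    [30, 90, 85, 35],
    [95, 32, 88, 91],
    [33, 87, 31, 93],
    [89, 94, 36, 92],
])
# Hypothetical 79 m x 79 m pixels (nominal MSS footprint).
area = irrigated_area_km2(scene, threshold=40, pixel_area_m2=79 * 79)
```

On real imagery the threshold would be chosen from the observed contrast between irrigated crops and harvested, fallow, and bare-soil fields.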

  12. Figure-ground organization and the emergence of proto-objects in the visual cortex.

    PubMed

    von der Heydt, Rüdiger

    2015-01-01

    A long history of studies of perception has shown that the visual system organizes the incoming information early on, interpreting the 2D image in terms of a 3D world and producing a structure that provides perceptual continuity and enables object-based attention. Recordings from monkey visual cortex show that many neurons, especially in area V2, are selective for border ownership. These neurons are edge selective and have ordinary classical receptive fields (CRF), but in addition their responses are modulated (enhanced or suppressed) depending on the location of a 'figure' relative to the edge in their receptive field. Each neuron has a fixed preference for location on one side or the other. This selectivity is derived from the image context far beyond the CRF. This paper reviews evidence indicating that border ownership selectivity reflects the formation of early object representations ('proto-objects'). The evidence includes experiments showing (1) reversal of border ownership signals with change of perceived object structure, (2) border ownership specific enhancement of responses in object-based selective attention, (3) persistence of border ownership signals in accordance with continuity of object perception, and (4) remapping of border ownership signals across saccades and object movements. Findings 1 and 2 can be explained by hypothetical grouping circuits that sum contour feature signals in search of objectness, and, via recurrent projections, enhance the corresponding low-level feature signals. Findings 3 and 4 might be explained by assuming that the activity of grouping circuits persists and can be remapped. Grouping, persistence, and remapping are fundamental operations of vision. Finding these operations manifest in low-level visual areas challenges traditional views of visual processing. New computational models need to be developed for a comprehensive understanding of the function of the visual cortex.

  14. The “Ice Age” of Anatomy and Obstetrics:

    PubMed Central

    Al-Gailani, Salim

    2016-01-01

In the late nineteenth century anatomists claimed that a new technique, slicing frozen corpses into sections, translated the three-dimensional complexity of the human body into flat, visually striking, and unprecedentedly accurate images. Traditionally hostile to visual aids, elite anatomists controversially claimed frozen sections had replaced dissection as the “true anatomy.” Some obstetricians adopted frozen sectioning to challenge anatomists’ authority and reform how clinicians made and used pictures. To explain the successes and failures of the technique, this article reconstructs the debates through which practitioners learned to make and interpret, and to promote or denigrate, frozen sections in teaching and research. Focusing on Britain, the author shows that attempts to introduce frozen sectioning into anatomy and obstetrics shaped and were shaped by negotiations over the epistemological standing of hand and eye in medicine.

  15. Automatic Perceptual Color Map Generation for Realistic Volume Visualization

    PubMed Central

    Silverstein, Jonathan C.; Parsad, Nigel M.; Tsirline, Victor

    2008-01-01

Advances in computed tomography imaging technology and inexpensive high-performance computer graphics hardware are making high-resolution, full color (24-bit) volume visualizations commonplace. However, many of the color maps used in volume rendering provide questionable value in knowledge representation and are non-perceptual, thus biasing data analysis or even obscuring information. These drawbacks, coupled with our need for realistic anatomical volume rendering for teaching and surgical planning, have motivated us to explore the auto-generation of color maps that combine natural colorization with the perceptual discriminating capacity of grayscale. As evidenced by the examples shown, which were created by the algorithm described, the merging of perceptually accurate and realistically colorized virtual anatomy appears to insightfully interpret and impartially enhance volume-rendered patient data. PMID:18430609
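The core idea, combining a natural hue with the monotonic brightness ramp that gives grayscale its discriminating power, can be sketched very simply: scale a base color so that its luma rises linearly across the map. This is a much simplified illustration of the concept, not the paper's algorithm; the base color and Rec. 601 luma weights are assumptions for the example.

```python
import numpy as np

# Rec. 601 luma weights, a common approximation of perceived brightness.
LUMA = np.array([0.299, 0.587, 0.114])

def natural_luminance_map(base_rgb, n=256):
    """Build an n-entry color map that keeps the hue of base_rgb but
    forces luma to rise linearly from 0 up to the luma of base_rgb,
    preserving a grayscale-like discriminating ramp."""
    base = np.asarray(base_rgb, dtype=float)
    target_luma = np.linspace(0.0, base @ LUMA, n)
    scale = target_luma / (base @ LUMA)   # 0..1 intensity scaling
    return np.outer(scale, base)          # shape (n, 3), values in [0, 1]

# Hypothetical reddish "tissue" base color.
cmap = natural_luminance_map([0.8, 0.4, 0.3])
lumas = cmap @ LUMA
```

Because luma increases strictly monotonically along the map, equal data steps map to equal brightness steps, which is the perceptual property the paper's generated maps aim to preserve.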

  16. Live dynamic imaging and analysis of developmental cardiac defects in mouse models with optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Lopez, Andrew L.; Wang, Shang; Garcia, Monica; Valladolid, Christian; Larin, Kirill V.; Larina, Irina V.

    2015-03-01

Understanding mouse embryonic development is an invaluable resource for our interpretation of normal human embryology and congenital defects. Our research focuses on developing methods for live imaging and dynamic characterization of early embryonic development in mouse models of human diseases. Using multidisciplinary methods, including optical coherence tomography (OCT), live mouse embryo manipulation and static embryo culture, molecular biology, advanced image processing, and computational modeling, we aim to understand developmental processes. We have developed an OCT-based approach to image live early mouse embryos (E8.5 - E9.5) cultured on an imaging stage, visualize developmental events with a spatial resolution of a few micrometers (less than the size of an individual cell) at frame rates of up to hundreds of frames per second, and reconstruct cardiodynamics in 4D (3D + time). We are now using these methods to study how specific embryonic lethal mutations affect cardiac morphology and function during early development.

  17. Geothermal Prospecting with Remote Sensing and Geographical Information System Technologies in Xilingol Volcanic Field in the Eastern Inner Mongolia, NE China

    NASA Astrophysics Data System (ADS)

    Peng, F.; Huang, S.; Xiong, Y.; Zhao, Y.; Cheng, Y.

    2013-05-01

Geothermal energy is a renewable and low-carbon energy source independent of climate change. It is most abundant in Cenozoic volcanic areas, where high temperatures can be reached at relatively shallow depths. Like other geological resources, geothermal resource prospecting and exploration require a good understanding of the host media. Remote sensing (RS) has the advantages of high spatial and temporal resolution and broad spatial coverage over conventional geological and geophysical prospecting, while geographical information systems (GIS) are intuitive, flexible, and convenient. In this study, we apply RS and GIS techniques to prospecting the geothermal energy potential of Xilingol, a Cenozoic volcanic field in the eastern Inner Mongolia, NE China. Landsat TM/ETM+ multi-temporal images taken under clear-sky conditions, digital elevation model (DEM) data, and other auxiliary data, including geological maps at 1:2,500,000 and 1:200,000 scales, are used in this study. The land surface temperature (LST) of the study area is retrieved from the Landsat images with the single-channel algorithm on the ENVI platform developed by ITT Visual Information Solutions. Information on linear and circular geological structures is then extracted from the LST maps and compared to the existing geological data. Several useful technologies, such as principal component analysis (PCA), vegetation suppression, multi-temporal comparative analysis, and 3D surface views based on DEM data, are used to further enable better visual geologic interpretation of the Landsat imagery of Xilingol. Preliminary results show that the major faults in the study area are mainly NE and NNE oriented. Several major volcanism-controlling faults and Cenozoic volcanic eruption centers have been recognized from the linear and circular structures in the remote sensing images. Seven areas have been identified as potential targets for further geothermal prospecting based on visual interpretation of the geological structures. The study shows that GIS and RS have great application potential in geothermal exploration in volcanic areas and will promote the exploration of this promising renewable energy resource.
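A first step in any Landsat LST retrieval, including the single-channel algorithm, is converting thermal-band radiance to at-sensor brightness temperature by inverting Planck's law with the band calibration constants. The sketch below uses the published Landsat 5 TM band-6 constants (K1 = 607.76 W/(m² sr µm), K2 = 1260.56 K); the full single-channel algorithm additionally corrects for emissivity and atmospheric effects, which are omitted here, and the example radiance is illustrative.

```python
import math

# Landsat 5 TM band-6 thermal calibration constants (published values).
K1 = 607.76   # W / (m^2 sr um)
K2 = 1260.56  # K

def brightness_temperature(radiance):
    """At-sensor brightness temperature via inverted Planck's law:
    T = K2 / ln(K1 / L + 1)."""
    return K2 / math.log(K1 / radiance + 1.0)

# An illustrative mid-range band-6 radiance of 10 W/(m^2 sr um) maps
# to roughly 300 K at sensor, before emissivity/atmospheric correction.
t = brightness_temperature(10.0)
```

Relative temperature anomalies in such maps, rather than absolute values, are what drive the lineament and eruption-center interpretation described above.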

  18. Super-resolution mapping using multi-viewing CHRIS/PROBA data

    NASA Astrophysics Data System (ADS)

    Dwivedi, Manish; Kumar, Vinay

    2016-04-01

High-spatial-resolution remote sensing (RS) data provide detailed information that supports high-definition visual analysis of earth surface features as well as improved information extraction at a fine scale. To improve the spatial resolution of coarser-resolution RS data, Super Resolution Reconstruction (SRR) techniques operating on multi-angular image sequences have become widely acknowledged. In this study, multi-angle CHRIS/PROBA data of the Kutch area are used for SR image reconstruction to enhance the spatial resolution from 18 m to 6 m, in the hope of obtaining a better land cover classification. Several SR approaches were chosen for this study: Projection onto Convex Sets (POCS), Robust, Iterative Back Projection (IBP), Non-Uniform Interpolation, and Structure-Adaptive Normalized Convolution (SANC). Subjective assessment through visual interpretation shows substantial improvement in land cover detail. Quantitative measures, including peak signal-to-noise ratio and structural similarity, are used to evaluate image quality. The SANC SR technique, using the Vandewalle algorithm for low-resolution image registration, outperformed the other techniques. An SVM-based classifier was then used to classify the SRR data and data resampled to 6 m spatial resolution using bi-cubic interpolation. A comparative analysis between the classified bi-cubic interpolated and SR-derived CHRIS/PROBA images showed a significant improvement of 10-12% in overall accuracy for the SR-derived data. The results demonstrate that SR methods are able to improve the spatial detail of multi-angle images as well as the classification accuracy.
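The peak signal-to-noise ratio used to score the reconstructions has a standard definition: 10·log10(peak²/MSE), in decibels. A minimal sketch, assuming images normalized to [0, 1] (the random test image below is illustrative, not CHRIS/PROBA data):

```python
import numpy as np

def psnr(reference, test, peak=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE).
    Higher values mean the reconstruction is closer to the reference."""
    ref = np.asarray(reference, dtype=float)
    tst = np.asarray(test, dtype=float)
    mse = np.mean((ref - tst) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return float(10.0 * np.log10(peak ** 2 / mse))

# Illustrative check: add Gaussian noise to a synthetic image and score it.
rng = np.random.default_rng(0)
ref = rng.random((32, 32))
noisy = np.clip(ref + rng.normal(0.0, 0.05, ref.shape), 0.0, 1.0)
score = psnr(ref, noisy)
```

Structural similarity (SSIM), the other measure cited, additionally compares local luminance, contrast, and structure rather than raw pixel error.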

  19. Anatomical study of cranial nerve emergence and skull foramina in the horse using magnetic resonance imaging and computed tomography.

    PubMed

    Gonçalves, Rita; Malalana, Fernando; McConnell, James Fraser; Maddox, Thomas

    2015-01-01

    For accurate interpretation of magnetic resonance (MR) images of the equine brain, knowledge of the normal cross-sectional anatomy of the brain and associated structures (such as the cranial nerves) is essential. The purpose of this prospective cadaver study was to describe and compare the MRI and computed tomography (CT) anatomy of the cranial nerve origins and associated skull foramina in a sample of five horses. All horses were presented for euthanasia for reasons unrelated to the head. Heads were collected posteuthanasia, and T2-weighted MR images were obtained in the transverse, sagittal, and dorsal planes. Thin-slice MR images were also acquired using transverse 3D-CISS sequences that allowed multiplanar reformatting. Transverse thin-slice CT images were acquired, and multiplanar reformatting was used to create comparative images. Magnetic resonance imaging consistently allowed visualization of cranial nerves II, V, VII, VIII, and XII in all horses. Cranial nerves III, IV, and VI were identifiable as a group, despite difficulties in identifying individual nerves. The group of cranial nerves IX, X, and XI was identified in 4/5 horses, although the region where they exited the skull was identified in all cases. The course of nerves II and V could be followed on several slices, and the main divisions of cranial nerve V could be distinguished in all cases. In conclusion, CT allowed clear visualization of the skull foramina and occasionally the nerves themselves, facilitating identification of the nerves for comparison with MR images. © 2015 American College of Veterinary Radiology.

  20. Common capacity-limited neural mechanisms of selective attention and spatial working memory encoding

    PubMed Central

    Fusser, Fabian; Linden, David E J; Rahm, Benjamin; Hampel, Harald; Haenschel, Corinna; Mayer, Jutta S

    2011-01-01

    One characteristic feature of visual working memory (WM) is its limited capacity, and selective attention has been implicated as a limiting factor. A possible reason why attention constrains the number of items that can be encoded into WM is that the two processes share limited neural resources. Functional magnetic resonance imaging (fMRI) studies have indeed demonstrated commonalities between the neural substrates of WM and attention. Here we investigated whether such overlapping activations reflect interacting neural mechanisms that could result in capacity limitations. To independently manipulate the demands on attention and WM encoding within a single task, we combined visual search and delayed discrimination of spatial locations. Participants were presented with a search array and performed easy or difficult visual search in order to encode one, three or five positions of target items into WM. Our fMRI data revealed colocalised activation for attention-demanding visual search and WM encoding in distributed posterior and frontal regions. However, further analysis yielded two patterns of results. Activity in prefrontal regions increased additively with increased demands on WM and attention, indicating regional overlap without functional interaction. Conversely, the WM load-dependent activation in visual, parietal and premotor regions was severely reduced during high attentional demand. We interpret this interaction as indicating the sites of shared capacity-limited neural resources. Our findings point to differential contributions of prefrontal and posterior regions to the common neural mechanisms that support spatial WM encoding and attention, providing new imaging evidence for attention-based models of WM encoding. PMID:21781193

  1. When hawks attack: animal-borne video studies of goshawk pursuit and prey-evasion strategies

    PubMed Central

    Kane, Suzanne Amador; Fulton, Andrew H.; Rosenthal, Lee J.

    2015-01-01

    Video filmed by a camera mounted on the head of a Northern Goshawk (Accipiter gentilis) was used to study how the raptor used visual guidance to pursue prey and land on perches. A combination of novel image analysis methods and numerical simulations of mathematical pursuit models was used to determine the goshawk's pursuit strategy. The goshawk flew to intercept targets by fixing the prey at a constant visual angle, using classical pursuit for stationary prey, lures or perches, and usually using constant absolute target direction (CATD) for moving prey. Visual fixation was better maintained along the horizontal than vertical direction. In some cases, we observed oscillations in the visual fix on the prey, suggesting that the goshawk used finite-feedback steering. Video filmed from the ground gave similar results. In most cases, it showed goshawks intercepting prey using a trajectory consistent with CATD, then turning rapidly to attack by classical pursuit; in a few cases, it showed them using curving non-CATD trajectories. Analysis of the prey's evasive tactics indicated that only sharp sideways turns caused the goshawk to lose visual fixation on the prey, supporting a sensory basis for the surprising frequency and effectiveness of this tactic found by previous studies. The dynamics of the prey's looming image also suggested that the goshawk used a tau-based interception strategy. We interpret these results in the context of a concise review of pursuit–evasion in biology, and conjecture that some prey deimatic ‘startle’ displays may exploit tau-based interception. PMID:25609783

  2. Hierarchical neural network model of the visual system determining figure/ground relation

    NASA Astrophysics Data System (ADS)

    Kikuchi, Masayuki

    2017-07-01

    One of the most important functions of visual perception in the brain is figure/ground interpretation of input images: figural regions in a 2D image, corresponding to objects in 3D space, are distinguished from the background region extending behind the objects. The author previously proposed a neural network model of figure/ground separation built on the principle that local geometric features, such as curvatures and outer angles at corners, are extracted and propagated along the input contour in a single-layer network (Kikuchi & Akashi, 2001). However, this processing principle has the defect that signal propagation requires many iterations, despite the fact that the actual visual system determines the figure/ground relation within a short period (Zhou et al., 2000). To speed up figure/ground determination, this study incorporates a hierarchical architecture into the previous model and confirms the effect of hierarchization on computation time by simulation. As the number of layers increased, the required computation time decreased; however, this speed-up effect saturated once the number of layers grew beyond a certain point. The study explains this saturation effect using the notion of average distance between vertices from the field of complex networks, and succeeded in reproducing the saturation effect by computer simulation.

  3. Flow visualization in superfluid helium-4 using He2 molecular tracers

    NASA Astrophysics Data System (ADS)

    Guo, Wei

    Flow visualization in superfluid helium is challenging, yet crucial for attaining a detailed understanding of quantum turbulence. Two problems have impeded progress: finding and introducing suitable tracers that are small yet visible; and unambiguous interpretation of the tracer motion. We show that metastable He2 triplet molecules are outstanding tracers compared with other particles used in helium. These molecular tracers have small size and relatively simple behavior in superfluid helium: they follow the normal fluid motion above 1 K and bind to quantized vortex lines below about 0.6 K. A laser-induced fluorescence technique has been developed for imaging the He2 tracers. We will present our recent experimental work on studying the normal-fluid motion by tracking thin lines of He2 tracers created via femtosecond laser-field ionization in helium. We will also discuss a newly launched experiment on visualizing vortex lines in a magnetically levitated superfluid helium drop by imaging the He2 tracers trapped on the vortex cores. This experiment will enable unprecedented insight into the behavior of a rotating superfluid drop and will untangle several key issues in quantum turbulence research. We acknowledge the support from the National Science Foundation under Grant No. DMR-1507386 and the US Department of Energy under Grant No. DE-FG02 96ER40952.

  4. A 3D-printed anatomical pancreas and kidney phantom for optimizing SPECT/CT reconstruction settings in beta cell imaging using 111In-exendin.

    PubMed

    Woliner-van der Weg, Wietske; Deden, Laura N; Meeuwis, Antoi P W; Koenrades, Maaike; Peeters, Laura H C; Kuipers, Henny; Laanstra, Geert Jan; Gotthardt, Martin; Slump, Cornelis H; Visser, Eric P

    2016-12-01

    Quantitative single photon emission computed tomography (SPECT) is challenging, especially for pancreatic beta cell imaging with 111In-exendin, due to high uptake in the kidneys versus much lower uptake in the nearby pancreas. We therefore designed a three-dimensionally (3D) printed phantom representing the pancreas and kidneys to mimic the human situation in beta cell imaging. The phantom was used to assess the effect of different reconstruction settings on quantification of pancreas uptake for two commercially available software packages. 3D-printed, hollow pancreas and kidney compartments were inserted into the National Electrical Manufacturers Association (NEMA) NU2 image quality phantom casing. These organs and the background compartment were filled with activities simulating the relatively high and low pancreatic 111In-exendin uptake of, respectively, healthy humans and type 1 diabetes patients. Images were reconstructed using Siemens Flash 3D and Hermes Hybrid Recon, with varying numbers of iterations, subsets, and corrections. Images were assessed visually for homogeneity and artefacts, and quantitatively via the pancreas-to-kidney activity concentration ratio. Phantom images were similar to clinical images and showed comparable artefacts. All corrections were required to clearly visualize the pancreas. Increased numbers of subsets and iterations improved the quantitative performance but decreased homogeneity in both the pancreas and the background. Based on the phantom analyses, the Hybrid Recon reconstruction with 6 iterations and 16 subsets was found to be most suitable for clinical use. This work strongly contributed to quantification of pancreatic 111In-exendin uptake: it showed how clinical 111In-exendin images can be interpreted and enabled selection of the most appropriate protocol for clinical use.

  5. Utilizing visual art to enhance the clinical observation skills of medical students.

    PubMed

    Jasani, Sona K; Saks, Norma S

    2013-07-01

    Clinical observation is fundamental in practicing medicine, but these skills are rarely taught. Currently no evidence-based exercises/courses exist for medical student training in observation skills. The goal was to develop and teach a visual arts-based exercise for medical students, and to evaluate its usefulness in enhancing observation skills in clinical diagnosis. A pre- and posttest and evaluation survey were developed for a three-hour exercise presented to medical students just before starting clerkships. Students were provided with questions to guide discussion of both representational and non-representational works of art. Quantitative analysis revealed that the mean number of observations between pre- and posttests was not significantly different (n=70: 8.63 vs. 9.13, p=0.22). Qualitative analysis of written responses identified four themes: (1) use of subjective terminology, (2) scope of interpretations, (3) speculative thinking, and (4) use of visual analogies. Evaluative comments indicated that students felt the exercise enhanced both mindfulness and skills. Using visual art images with guided questions can train medical students in observation skills. This exercise can be replicated without specially trained personnel or art museum partnerships.

  6. Instruction-Based Clinical Eye-Tracking Study on the Visual Interpretation of Divergence: How Do Students Look at Vector Field Plots?

    ERIC Educational Resources Information Center

    Klein, P.; Viiri, J.; Mozaffari, S.; Dengel, A.; Kuhn, J.

    2018-01-01

    Relating mathematical concepts to graphical representations is a challenging task for students. In this paper, we introduce two visual strategies to qualitatively interpret the divergence of graphical vector field representations. One strategy is based on the graphical interpretation of partial derivatives, while the other is based on the flux…

  7. A systematic review of visual image theory, assessment, and use in skin cancer and tanning research.

    PubMed

    McWhirter, Jennifer E; Hoffman-Goetz, Laurie

    2014-01-01

    Visual images increase attention, comprehension, and recall of health information and influence health behaviors. Health communication campaigns on skin cancer and tanning often use visual images, but little is known about how such images are selected or evaluated. A systematic review of peer-reviewed, published literature on skin cancer and tanning was conducted to determine (a) what visual communication theories were used, (b) how visual images were evaluated, and (c) how visual images were used in the research studies. Seven databases were searched (PubMed/MEDLINE, EMBASE, PsycINFO, Sociological Abstracts, Social Sciences Full Text, ERIC, and ABI/INFORM) resulting in 5,330 citations. Of those, 47 met the inclusion criteria. Only one study specifically identified a visual communication theory guiding the research. No standard instruments for assessing visual images were reported. Most studies lacked, to varying degrees, comprehensive image description, image pretesting, full reporting of image source details, adequate explanation of image selection or development, and example images. The results highlight the need for greater theoretical and methodological attention to visual images in health communication research in the future. To this end, the authors propose a working definition of visual health communication.

  8. Cortical connective field estimates from resting state fMRI activity.

    PubMed

    Gravel, Nicolás; Harvey, Ben; Nordhjem, Barbara; Haak, Koen V; Dumoulin, Serge O; Renken, Remco; Curčić-Blake, Branislava; Cornelissen, Frans W

    2014-01-01

    One way to study connectivity in visual cortical areas is by examining spontaneous neural activity. In the absence of visual input, such activity remains shaped by the underlying neural architecture and, presumably, may still reflect visuotopic organization. Here, we applied population connective field (CF) modeling to estimate the spatial profile of functional connectivity in the early visual cortex during resting state functional magnetic resonance imaging (RS-fMRI). This model-based analysis estimates the spatial integration between blood-oxygen level dependent (BOLD) signals in distinct cortical visual field maps using fMRI. Just as population receptive field (pRF) mapping predicts the collective neural activity in a voxel as a function of response selectivity to stimulus position in visual space, CF modeling predicts the activity of voxels in one visual area as a function of the aggregate activity in voxels in another visual area. In combination with pRF mapping, CF locations on the cortical surface can be interpreted in visual space, thus enabling reconstruction of visuotopic maps from resting state data. We demonstrate that V1 ➤ V2 and V1 ➤ V3 CF maps estimated from resting state fMRI data show visuotopic organization. Therefore, we conclude that, despite some variability in CF estimates between RS scans, neural properties such as CF maps and CF size can be derived from resting state data.

  9. Evaluation of perception performance in neck dissection planning using eye tracking and attention landscapes

    NASA Astrophysics Data System (ADS)

    Burgert, Oliver; Örn, Veronika; Velichkovsky, Boris M.; Gessat, Michael; Joos, Markus; Strauß, Gero; Tietjen, Christian; Preim, Bernhard; Hertel, Ilka

    2007-03-01

    Neck dissection is a surgical intervention in which cervical lymph node metastases are removed. Accurate surgical planning is of high importance, because wrong judgment of the situation can cause severe harm to the patient. Diagnostic perception of radiological images by a surgeon is an acquired skill that can be enhanced by training and experience. To improve the accuracy with which newcomers and less experienced professionals detect pathological lymph nodes, it is essential to understand how surgical experts solve the relevant visual and recognition tasks. Using eye tracking, and especially the newly developed attention landscape visualizations, it could be determined whether visualization options, for example 3D models instead of CT data, help increase the accuracy and speed of neck dissection planning. Thirteen ORL surgeons with different levels of expertise participated in this study. They inspected different visualizations of 3D models and original CT datasets of patients. Among other methods, we used scanpath analysis and attention landscapes to interpret the inspection strategies. It was possible to distinguish different patterns of visual exploratory activity. The experienced surgeons concentrated their attention on a limited number of areas of interest and made fewer saccadic eye movements, indicating better orientation.

  10. Enhancement of PET Images

    NASA Astrophysics Data System (ADS)

    Davis, Paul B.; Abidi, Mongi A.

    1989-05-01

    PET is the only imaging modality that provides doctors with early analytic and quantitative biochemical assessment and precise localization of pathology. In PET images, boundary information and local pixel intensities are both crucial for manual and/or automated feature tracing, extraction, and identification. Unfortunately, present PET technology does not provide the image quality from which such precise analytic and quantitative measurements can be made. PET images suffer from significantly high levels of radial noise, present in the form of streaks caused by the inexactness of the models used in image reconstruction. In this paper, our objective is to model PET noise and remove it without altering dominant features in the image. The ultimate goal is to enhance these dominant features to allow automatic computer interpretation and classification of PET images, by developing techniques that take into consideration PET signal characteristics, data collection, and data reconstruction. We have modeled the noise streaks in PET images in both rectangular and polar representations and have shown, both analytically and through computer simulation, that they exhibit consistent mapping patterns. A class of filters was designed and applied successfully. Visual inspection of the filtered images shows clear enhancement over the originals.

  11. Polarimetric imaging of biological tissues based on the indices of polarimetric purity.

    PubMed

    Van Eeckhout, Albert; Lizana, Angel; Garcia-Caurel, Enric; Gil, José J; Sansa, Adrià; Rodríguez, Carla; Estévez, Irene; González, Emilio; Escalera, Juan C; Moreno, Ignacio; Campos, Juan

    2018-04-01

    We highlight the value of using the indices of polarimetric purity (IPPs) for the inspection of biological tissues. The IPPs were recently proposed in the literature and provide a further synthesis of the depolarizing properties of samples. Compared with standard polarimetric images of biological samples, IPP-based images yield larger image contrast for some biological structures and a further physical interpretation of the depolarizing mechanisms inherent to the samples. In addition, unlike other methods, their calculation does not require advanced algebraic operations (as is the case for polar decompositions), and they result in three easily implemented indicators. We also propose a pseudo-colored encoding of the IPP information that leads to improved visualization of samples. This technique opens the possibility of tailored adjustment of tissue contrast by using customized pseudo-colored images. The potential of the IPP approach is experimentally demonstrated throughout the manuscript by studying three different ex-vivo samples. A significant image contrast enhancement is obtained with the IPP-based methods compared to standard polarimetric images. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
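    To illustrate why the IPPs are "easily implemented indicators": in one standard formulation (due to Gil and co-workers, sketched here as an assumption rather than the authors' exact implementation), P1, P2 and P3 are simple differences of the trace-normalized eigenvalues of the 4x4 coherency (covariance) matrix associated with a Mueller matrix:

```python
import numpy as np

def ipp(H):
    """Indices of polarimetric purity P1, P2, P3 from a 4x4 Hermitian
    coherency (covariance) matrix H associated with a Mueller matrix."""
    lam = np.linalg.eigvalsh(H)[::-1]   # eigenvalues, sorted descending
    lam = lam / lam.sum()               # normalise by the trace
    P1 = lam[0] - lam[1]
    P2 = lam[0] + lam[1] - 2 * lam[2]
    P3 = lam[0] + lam[1] + lam[2] - 3 * lam[3]
    return P1, P2, P3

# A non-depolarizing (pure) system has a rank-1 coherency matrix -> (1, 1, 1)
pure = np.zeros((4, 4))
pure[0, 0] = 1.0

# An ideal depolarizer has a fully mixed coherency matrix -> (0, 0, 0)
ideal_depolarizer = np.eye(4) / 4
```

    The three indices satisfy 0 <= P1 <= P2 <= P3 <= 1 and each value can be mapped directly to a grey-level or pseudo-color channel per pixel, which is what makes IPP-based imaging cheap compared with polar decompositions.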

  12. System Description and First Application of an FPGA-Based Simultaneous Multi-Frequency Electrical Impedance Tomography

    PubMed Central

    Aguiar Santos, Susana; Robens, Anne; Boehm, Anna; Leonhardt, Steffen; Teichmann, Daniel

    2016-01-01

    A new prototype of a multi-frequency electrical impedance tomography system is presented. The system uses a field-programmable gate array as its main controller and is configured to measure at different frequencies simultaneously through a composite waveform. Both real and imaginary components of the data are computed for each frequency and sent to a personal computer over an ethernet connection, where both time-difference and frequency-difference images are reconstructed and visualized. The system has been tested for both time-difference and frequency-difference imaging, for diverse sets of frequency pairs, in a resistive/capacitive test unit and in self-experiments. To our knowledge, this is the first work to show preliminary frequency-difference images of in-vivo experiments. Results of time-difference imaging were compared with simulation results and showed that the new prototype performs well at all frequencies in the tested range of 60 kHz–960 kHz. For frequency-difference images, further development of the algorithms and an improved normalization process are required to correctly reconstruct and interpret the resulting images. PMID:27463715
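    The distinction between the two imaging modes can be sketched in a few lines. The following is an illustrative NumPy sketch of two commonly used data normalizations in EIT, the simple time-difference ratio and a weighted frequency difference, not the authors' algorithm; the weighting scheme shown here is one standard choice from the EIT literature:

```python
import numpy as np

def time_difference(v_t, v_ref):
    """Normalised time-difference data: change relative to a reference frame."""
    return (v_t - v_ref) / v_ref

def weighted_frequency_difference(v_hi, v_lo):
    """Weighted frequency-difference data: subtract the best scalar
    projection of the low-frequency frame from the high-frequency frame,
    which suppresses frequency-independent modelling errors."""
    alpha = np.dot(v_hi, v_lo) / np.dot(v_lo, v_lo)
    return v_hi - alpha * v_lo

# If the two frequency frames differ only by a scale factor (no spectral
# contrast anywhere in the body), the weighted difference vanishes.
v_lo = np.array([1.0, 2.0, 3.0])
assert np.allclose(weighted_frequency_difference(2.5 * v_lo, v_lo), 0.0)
```

    This illustrates why frequency-difference imaging needs a careful normalization step: unlike time-difference imaging, there is no "empty" reference frame, so scaling between frequencies must be removed before reconstruction.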

  13. Improved reconstruction and sensing techniques for personnel screening in three-dimensional cylindrical millimeter-wave portal scanning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fernandes, Justin L.; Rappaport, Carey M.; Sheen, David M.

    2011-05-01

    The cylindrical millimeter-wave imaging technique, developed at Pacific Northwest National Laboratory (PNNL) and commercialized by L-3 Communications/Safeview in the ProVision system, is currently being deployed in airports and other high-security locations to meet person-borne weapon and explosive detection requirements. While this system is efficient and effective in its current form, there are a number of areas in which detection performance may be improved through different reconstruction algorithms and sensing configurations. PNNL and Northeastern University have teamed together to investigate higher-order imaging artifacts produced by the current cylindrical millimeter-wave imaging technique using full-wave forward modeling and laboratory experimentation. Based on imaging results and scattered-field visualizations using the full-wave forward model, a new imaging system is proposed. The new system combines a multistatic sensor configuration with the generalized synthetic aperture focusing technique (GSAFT). Initial results show an improved ability to image areas of the body where target shading and specular and higher-order reflections make images produced by the monostatic system difficult to interpret.

  14. Remote sensing: a tool for park planning and management

    USGS Publications Warehouse

    Draeger, William C.; Pettinger, Lawrence R.

    1981-01-01

    Remote sensing may be defined as the science of imaging or measuring objects from a distance. More commonly, however, the term is used in reference to the acquisition and use of photographs, photo-like images, and other data acquired from aircraft and satellites. Thus, remote sensing includes the use of such diverse materials as photographs taken by hand from a light aircraft, conventional aerial photographs obtained with a precision mapping camera, satellite images acquired with sophisticated scanning devices, radar images, and magnetic and gravimetric data that may not even be in image form. Remotely sensed images may be color or black and white, can vary in scale from those that cover only a few hectares of the earth's surface to those that cover tens of thousands of square kilometers, and they may be interpreted visually or with the assistance of computer systems. This article attempts to describe several of the commonly available types of remotely sensed data, to discuss approaches to data analysis, and to demonstrate (with image examples) typical applications that might interest managers of parks and natural areas.

  15. HSI-Find: A Visualization and Search Service for Terascale Spectral Image Catalogs

    NASA Astrophysics Data System (ADS)

    Thompson, D. R.; Smith, A. T.; Castano, R.; Palmer, E. E.; Xing, Z.

    2013-12-01

    Imaging spectrometers are remote sensing instruments commonly deployed on aircraft and spacecraft. They provide surface reflectance in hundreds of wavelength channels, creating data cubes known as hyperspectral images. The rich compositional information they provide makes them powerful tools for planetary and terrestrial science. These data products can be challenging to interpret because they contain datapoints numbering in the thousands (Dawn VIR) or millions (AVIRIS-C). Cross-image studies or exploratory searches involving more than one scene are rare; data volumes are often tens of GB per image, and typical consumer-grade computers cannot store more than a handful of images in RAM. Visualizing the information in a single scene is challenging, since the human eye can only distinguish three color channels out of the hundreds available. To date, analysis has been performed mostly on single images using purpose-built software tools that require extensive training and commercial licenses. The HSIFind software suite provides a scalable distributed solution to the problem of visualizing and searching large catalogs of spectral image data. It consists of a RESTful web service that communicates with a javascript-based browser client. The software provides basic visualization through an intuitive visual interface, allowing users with minimal training to explore the images or view selected spectra. Users can accumulate a library of spectra from one or more images and use these to search for similar materials. The result appears as an intensity map showing the extent of a spectral feature in a scene. Continuum removal can isolate diagnostic absorption features. The server-side mapping algorithm uses an efficient matched filter algorithm that can process a megapixel image cube in just a few seconds. This enables real-time interaction, leading to a new way of interacting with the data: the user can launch a search with a single mouse click and see the resulting map in seconds.
This allows the user to quickly explore each image, ascertain the main units of surface material, localize outliers, and develop an understanding of the various materials' spectral characteristics. The HSIFind software suite is currently in beta testing at the Planetary Science Institute and a process is underway to release it under an open source license to the broader community. We believe it will benefit instrument operations during remote planetary exploration, where tactical mission decisions demand rapid analysis of each new dataset. The approach also holds potential for public spectral catalogs where its shallow learning curve and portability can make these datasets accessible to a much wider range of researchers. Acknowledgements: The HSIFind project acknowledges the NASA Advanced MultiMission Operating System (AMMOS) and the Multimission Ground Support Services (MGSS). E. Palmer is with the Planetary Science Institute, Tucson, AZ. Other authors are with the Jet Propulsion Laboratory, Pasadena, CA. This work was carried out at the Jet Propulsion Laboratory, California Institute of Technology under a contract with the National Aeronautics and Space Administration. Copyright 2013, California Institute of Technology.
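    The matched filter mentioned above is a standard tool in spectral target detection. The sketch below is an illustrative NumPy version of the classic covariance-whitened matched filter, not the HSIFind implementation; the toy scene and fill fraction are assumptions:

```python
import numpy as np

def matched_filter_scores(cube, target):
    """Classic matched filter for spectral target detection.

    cube   : (n_pixels, n_bands) array of spectra
    target : (n_bands,) target spectrum
    Returns a per-pixel abundance-like score."""
    mu = cube.mean(axis=0)
    X = cube - mu
    cov = X.T @ X / (cube.shape[0] - 1)
    w = np.linalg.solve(cov, target - mu)        # cov^-1 (t - mu)
    return X @ w / ((target - mu) @ w)           # normalised projection

# Toy scene: background noise plus a few pixels containing the target.
rng = np.random.default_rng(0)
bands = 8
cube = rng.normal(0.0, 0.1, size=(500, bands))
target = np.ones(bands)
cube[:10] += 0.5 * target                        # implant target at 50% fill
scores = matched_filter_scores(cube, target)
print(scores[:10].mean() > scores[10:].mean())   # target pixels score higher
```

    Because the per-pixel work reduces to one dot product after a single covariance solve, a megapixel cube can indeed be mapped in seconds, which is what enables the click-and-see interaction described above.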

  16. Performance analysis of automated evaluation of Crithidia luciliae-based indirect immunofluorescence tests in a routine setting - strengths and weaknesses.

    PubMed

    Hormann, Wymke; Hahn, Melanie; Gerlach, Stefan; Hochstrate, Nicola; Affeldt, Kai; Giesen, Joyce; Fechner, Kai; Damoiseaux, Jan G M C

    2017-11-27

    Antibodies directed against dsDNA are a highly specific diagnostic marker for the presence of systemic lupus erythematosus and of particular importance in its diagnosis. To assess anti-dsDNA antibodies, the Crithidia luciliae-based indirect immunofluorescence test (CLIFT) is considered one of the best choices. To overcome the drawback of subjective result interpretation inherent to indirect immunofluorescence assays in general, automated systems have been introduced to the market in recent years. Among these systems is the EUROPattern Suite, an advanced automated fluorescence microscope equipped with different software packages, capable of automated pattern interpretation and result suggestion for ANA, ANCA and CLIFT analysis. We analyzed the performance of the EUROPattern Suite with its automated fluorescence interpretation for CLIFT in a routine setting, reflecting the everyday life of a diagnostic laboratory. Three hundred and twelve consecutive samples, sent to the Central Diagnostic Laboratory of the Maastricht University Medical Centre with a request for anti-dsDNA analysis, were collected over a period of 7 months. Agreement between EUROPattern assay analysis and the visual read was 93.3%. Sensitivity and specificity were 94.1% and 93.2%, respectively. The EUROPattern Suite performed reliably and greatly supported result interpretation. Automated image acquisition is readily performed, and automated image classification gives the operator a reliable recommendation for assay evaluation. The EUROPattern Suite optimizes workflow and contributes to standardization between different operators or laboratories.

  17. Going beyond a First Reader: A Machine Learning Methodology for Optimizing Cost and Performance in Breast Ultrasound Diagnosis.

    PubMed

    Venkatesh, Santosh S; Levenback, Benjamin J; Sultan, Laith R; Bouzghar, Ghizlane; Sehgal, Chandra M

    2015-12-01

    The goal of this study was to devise a machine learning methodology as a viable low-cost alternative to a second reader, to help augment physicians' interpretations of breast ultrasound images in differentiating benign and malignant masses. Two independent feature sets, one consisting of visual features based on a radiologist's interpretation of images and the other of computer-extracted features, were used as first and second readers and combined by adaptive boosting (AdaBoost) and a pruning classifier. This resulted in a very high level of diagnostic performance (area under the receiver operating characteristic curve = 0.98) at the cost of pruning a fraction (20%) of the cases for further evaluation by independent methods. AdaBoost also improved the diagnostic performance of the individual human observers and increased the agreement between their analyses. Pairing AdaBoost with selective pruning is a principled methodology for achieving high diagnostic performance without the added cost of an additional reader for differentiating solid breast masses by ultrasound. Copyright © 2015 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
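    The general pattern of fusing two readers' scores and "pruning" the least confident cases can be illustrated schematically. This is not the authors' implementation: the averaging rule, the 0.5 decision threshold, and the distance-from-threshold confidence measure are all assumptions made for the sketch:

```python
import numpy as np

def fuse_and_prune(scores_a, scores_b, prune_frac=0.20, threshold=0.5):
    """Average two readers' malignancy scores and defer ('prune') the
    least confident fraction of cases for independent evaluation.

    Returns (decisions, deferred_mask); decisions are NaN where deferred."""
    fused = (scores_a + scores_b) / 2.0
    confidence = np.abs(fused - threshold)       # distance from the boundary
    cutoff = np.quantile(confidence, prune_frac)
    deferred = confidence <= cutoff
    decisions = np.where(fused >= threshold, 1.0, 0.0)
    decisions[deferred] = np.nan
    return decisions, deferred

# Cases near 0.5 (ambiguous under both readers) are the ones deferred.
a = np.array([0.9, 0.1, 0.52, 0.48, 0.8])
b = np.array([0.8, 0.2, 0.50, 0.50, 0.9])
decisions, deferred = fuse_and_prune(a, b, prune_frac=0.4)
print(deferred)  # the two borderline cases are flagged
```

    The point of the design is that accuracy is reported only on the retained cases, so a small pruning fraction buys a large gain in diagnostic performance at the cost of a second-look workload.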

  18. Advanced Neuroimaging in Traumatic Brain Injury

    PubMed Central

    Edlow, Brian L.; Wu, Ona

    2013-01-01

    Advances in structural and functional neuroimaging have occurred at a rapid pace over the past two decades. Novel techniques for measuring cerebral blood flow, metabolism, white matter connectivity, and neural network activation have great potential to improve the accuracy of diagnosis and prognosis for patients with traumatic brain injury (TBI), while also providing biomarkers to guide the development of new therapies. Several of these advanced imaging modalities are currently being implemented into clinical practice, whereas others require further development and validation. Ultimately, for advanced neuroimaging techniques to reach their full potential and improve clinical care for the many civilians and military personnel affected by TBI, it is critical for clinicians to understand the applications and methodological limitations of each technique. In this review, we examine recent advances in structural and functional neuroimaging and the potential applications of these techniques to the clinical care of patients with TBI. We also discuss pitfalls and confounders that should be considered when interpreting data from each technique. Finally, given the vast amounts of advanced imaging data that will soon be available to clinicians, we discuss strategies for optimizing data integration, visualization and interpretation. PMID:23361483

  19. X-ray imaging for security applications

    NASA Astrophysics Data System (ADS)

    Evans, J. Paul

    2004-01-01

    The X-ray screening of luggage by aviation security personnel may be badly hindered by the lack of visual cues to depth in an image that has been produced by transmitted radiation. Two-dimensional "shadowgraphs" with "organic" and "metallic" objects encoded using two different colors (usually orange and blue) are still in common use. In the context of luggage screening there are no reliable cues to depth present in individual shadowgraph X-ray images. Therefore, the screener is required to convert the 'zero depth resolution' shadowgraph into a three-dimensional mental picture to be able to interpret the relative spatial relationship of the objects under inspection. Consequently, additional cognitive processing is required, e.g., integration, inference and memory. However, these processes can lead to serious misinterpretations of the actual physical structure being examined. This paper describes the development of a stereoscopic imaging technique enabling the screener to utilise binocular stereopsis and kinetic depth to enhance their interpretation of the actual nature of the objects under examination. Further work has led to the development of a technique to combine parallax data (to calculate the thickness of a target material) with the results of a basis material subtraction technique to approximate the target's effective atomic number and density. This has been achieved in preliminary experiments with a novel spatially interleaved dual-energy sensor which reduces the number of scintillation elements required by 50% in comparison to conventional sensor configurations.
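    One common way to implement the organic/metallic color coding described above is to compare attenuation at two X-ray energies: low-Z "organic" material attenuates low-energy X-rays only modestly more than high-energy ones, while high-Z "metallic" material attenuates them far more. A hedged sketch of that ratio test; the intensities and threshold are invented and not taken from the paper:

```python
import math

def classify_material(i0, i_low, i_high, threshold=1.5):
    """i0: unattenuated beam intensity; i_low/i_high: transmitted
    intensities at the low and high energies. Returns a coarse label."""
    # Ratio of attenuation line integrals; grows with atomic number.
    r = math.log(i0 / i_low) / math.log(i0 / i_high)
    return "metallic" if r > threshold else "organic"

label = classify_material(i0=1000.0, i_low=50.0, i_high=400.0)
```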

  20. Interpretative bias in spider phobia: Perception and information processing of ambiguous schematic stimuli.

    PubMed

    Haberkamp, Anke; Schmidt, Filipp

    2015-09-01

    This study investigates the interpretative bias in spider phobia with respect to rapid visuomotor processing. We compared perception, evaluation, and visuomotor processing of ambiguous schematic stimuli between spider-fearful and control participants. Stimuli were produced by gradually morphing schematic flowers into spiders. Participants rated these stimuli related to their perceptual appearance and to their feelings of valence, disgust, and arousal. Also, they responded to the same stimuli within a response priming paradigm that measures rapid motor activation. Spider-fearful individuals showed an interpretative bias (i.e., ambiguous stimuli were perceived as more similar to spiders) and rated spider-like stimuli as more unpleasant, disgusting, and arousing. However, we observed no differences between spider-fearful and control participants in priming effects for ambiguous stimuli. For non-ambiguous stimuli, we observed a similar enhancement for phobic pictures as has been reported previously for natural images. We discuss our findings with respect to the visual representation of morphed stimuli and to perceptual learning processes. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Visual Access in Interpreter-Mediated Learning Situations for Deaf and Hard-of-Hearing High School Students Where an Artifact Is in Use

    PubMed Central

    Thomassen, Gøril

    2016-01-01

    This article highlights interpreter-mediated learning situations for deaf high school students where such mediated artifacts as technical machines, models, and computer graphics are used by the teacher to illustrate his or her teaching. In these situations, the teacher’s situated gestures and utterances, and the artifacts will contribute independent pieces of information. However, the deaf student can only have his or her visual attention focused on one source at a time. The problem to be addressed is how the interpreter coordinates the mediation when it comes to deaf students’ visual orientation. The presented discourse analysis is based on authentic video recordings from inclusive learning situations in Norway. The theoretical framework consists of concepts of role, footing, and face-work (Goffman, E. (1959). The presentation of self in everyday life. London, UK: Penguin Books). The findings point out dialogical impediments to visual access in interpreter-mediated learning situations, and the article discusses the roles and responsibilities of teachers and educational interpreters. PMID:26681267

  2. The Role of 18F-FDG PET/CT Integrated Imaging in Distinguishing Malignant from Benign Pleural Effusion

    PubMed Central

    Sun, Yajuan; Yu, Hongjuan; Ma, Jingquan

    2016-01-01

    Objective The aim of our study was to evaluate the role of 18F-FDG PET/CT integrated imaging in differentiating malignant from benign pleural effusion. Methods A total of 176 patients with pleural effusion who underwent 18F-FDG PET/CT examination to differentiate malignancy from benignancy were retrospectively reviewed. The images of CT imaging, 18F-FDG PET imaging and 18F-FDG PET/CT integrated imaging were visually analyzed. Suspected malignant effusion was characterized by the presence of nodular or irregular pleural thickening on CT imaging. On PET imaging, pleural 18F-FDG uptake higher than mediastinal activity was interpreted as malignant effusion. Images of 18F-FDG PET/CT integrated imaging were interpreted by combining the morphologic feature of pleura on CT imaging with the degree and form of pleural 18F-FDG uptake on PET imaging. Results One hundred and eight patients had malignant effusion, including 86 with pleural metastasis and 22 with pleural mesothelioma, whereas 68 patients had benign effusion. The sensitivities of CT imaging, 18F-FDG PET imaging and 18F-FDG PET/CT integrated imaging in detecting malignant effusion were 75.0%, 91.7% and 93.5%, respectively, and were 69.8%, 91.9% and 93.0% in distinguishing metastatic effusion. The sensitivity of 18F-FDG PET/CT integrated imaging in detecting malignant effusion was higher than that of CT imaging (p < 0.001). For metastatic effusion, 18F-FDG PET imaging had higher sensitivity (p < 0.001) and better diagnostic consistency with 18F-FDG PET/CT integrated imaging compared with CT imaging (Kappa = 0.917 and Kappa = 0.295, respectively). The specificities of CT imaging, 18F-FDG PET imaging and 18F-FDG PET/CT integrated imaging were 94.1%, 63.2% and 92.6% in detecting benign effusion. The specificities of CT imaging and 18F-FDG PET/CT integrated imaging were higher than that of 18F-FDG PET imaging (p < 0.001 for both), and CT imaging had better diagnostic consistency with 18F-FDG PET/CT integrated imaging compared with 18F-FDG PET imaging (Kappa = 0.881 and Kappa = 0.240, respectively). Conclusion 18F-FDG PET/CT integrated imaging is a more reliable modality in distinguishing malignant from benign pleural effusion than 18F-FDG PET imaging and CT imaging alone. For image interpretation of 18F-FDG PET/CT integrated imaging, the PET and CT portions play a major diagnostic role in identifying metastatic effusion and benign effusion, respectively. PMID:27560933
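    The Kappa values quoted above are Cohen's kappa, chance-corrected agreement between two reads over a 2x2 table. A minimal sketch of the computation; the counts are illustrative, not the study's data:

```python
def cohens_kappa(a, b, c, d):
    """2x2 agreement table: a = both positive, b = rater1+/rater2-,
    c = rater1-/rater2+, d = both negative."""
    n = a + b + c + d
    p_observed = (a + d) / n
    p_yes = ((a + b) / n) * ((a + c) / n)  # chance agreement on 'positive'
    p_no = ((c + d) / n) * ((b + d) / n)   # chance agreement on 'negative'
    p_expected = p_yes + p_no
    return (p_observed - p_expected) / (1 - p_expected)

kappa = cohens_kappa(a=40, b=5, c=5, d=50)  # invented counts
```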

  3. Visual Image Sensor Organ Replacement: Implementation

    NASA Technical Reports Server (NTRS)

    Maluf, A. David (Inventor)

    2011-01-01

    Method and system for enhancing or extending visual representation of a selected region of a visual image, where visual representation is interfered with or distorted, by supplementing a visual signal with at least one audio signal having one or more audio signal parameters that represent one or more visual image parameters, such as vertical and/or horizontal location of the region; region brightness; dominant wavelength range of the region; change in a parameter value that characterizes the visual image, with respect to a reference parameter value; and time rate of change in a parameter value that characterizes the visual image. Region dimensions can be changed to emphasize change with time of a visual image parameter.
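    The core of the patent is a mapping from visual image parameters of a region to audio signal parameters. A hedged sketch of one such mapping; the specific constants and parameter choices are invented for illustration and are not specified by the abstract:

```python
def region_to_audio(x_frac, y_frac, brightness):
    """x_frac, y_frac: region centre as fractions of image width/height;
    brightness in [0, 1]. Returns (pan, pitch_hz, volume)."""
    pan = 2.0 * x_frac - 1.0                    # horizontal position -> stereo pan [-1, 1]
    pitch_hz = 220.0 + 660.0 * (1.0 - y_frac)   # higher region -> higher pitch
    volume = brightness                         # region brightness -> loudness
    return pan, pitch_hz, volume

pan, pitch, vol = region_to_audio(x_frac=0.75, y_frac=0.25, brightness=0.8)
```

    Temporal derivatives of these parameters (the abstract's "time rate of change") could be sonified the same way, e.g., as vibrato or tremolo depth.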

  4. Real-time processing of dual band HD video for maintaining operational effectiveness in degraded visual environments

    NASA Astrophysics Data System (ADS)

    Parker, Steve C. J.; Hickman, Duncan L.; Smith, Moira I.

    2015-05-01

    Effective reconnaissance, surveillance and situational awareness, using dual band sensor systems, require the extraction, enhancement and fusion of salient features, with the processed video being presented to the user in an ergonomic and interpretable manner. HALO™ is designed to meet these requirements and provides an affordable, real-time, and low-latency image fusion solution on a low size, weight and power (SWAP) platform. The system has been progressively refined through field trials to increase its operating envelope and robustness. The result is a video processor that improves detection, recognition and identification (DRI) performance, whilst lowering operator fatigue and reaction times in complex and highly dynamic situations. This paper compares the performance of HALO™, both qualitatively and quantitatively, with conventional blended fusion for operation in degraded visual environments (DVEs), such as those experienced during ground and air-based operations. Although image blending provides a simple fusion solution, which explains its common adoption, the results presented demonstrate that its performance is poor compared to the HALO™ fusion scheme in DVE scenarios.
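    The "conventional blended fusion" baseline referred to above is a fixed per-pixel weighted average of the visible and infrared channels. A minimal sketch with illustrative intensity values (HALO™'s own scheme is proprietary and not reproduced here):

```python
def blend(visible, infrared, alpha=0.5):
    """Fuse two equal-length intensity sequences:
    alpha * visible + (1 - alpha) * infrared, per pixel."""
    return [alpha * v + (1.0 - alpha) * ir for v, ir in zip(visible, infrared)]

fused = blend([100, 200, 50], [180, 40, 50], alpha=0.5)
```

    Note the weakness this exposes: a feature present in only one band has its contrast halved in the blend, which is one reason salience-driven fusion schemes outperform simple blending in degraded visual environments.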

  5. Utility of three-dimensional and multiplanar reformatted computed tomography for evaluation of pediatric congenital spine abnormalities.

    PubMed

    Newton, Peter O; Hahn, Gregory W; Fricka, Kevin B; Wenger, Dennis R

    2002-04-15

    A retrospective radiographic review of 31 patients with congenital spine abnormalities who underwent conventional radiography and advanced imaging studies was conducted. To analyze the utility of three-dimensional computed tomography with multiplanar reformatted images for congenital spine anomalies, as compared with plain radiographs and axial two-dimensional computed tomography imaging. Conventional radiographs of congenital spine disorders are often difficult to interpret because of the patient's small size, the complexity of the disorder, a deformity not in the plane of the radiographs, superimposed structures, and difficulty in forming a mental three-dimensional image. Multiplanar reformatted and three-dimensional computed tomographic imaging offers many potential advantages for defining congenital spine anomalies including visualization of the deformity in any plane, from any angle, with the overlying structures subtracted. The imaging studies of patients who had undergone three-dimensional computed tomography for congenital deformities of the spine between 1992 and 1998 were reviewed (31 cases). All plain radiographs and axial two-dimensional computed tomography images performed before the three-dimensional computed tomography were reviewed and the findings documented. This was repeated for the three-dimensional reconstructions and, when available, the multiplanar reformatted images (15 cases). In each case, the utility of the advanced imaging was graded as one of the following: Grade A (substantial new information obtained), Grade B (confirmatory with improved visualization and understanding of the deformity), and Grade C (no added useful information obtained). In 17 of 31 cases, the multiplanar reformatted and three-dimensional images allowed identification of unrecognized malformations. In nine additional cases, the advanced imaging was helpful in better visualizing and understanding previously identified deformities. 
In five cases, no new information was gained. The standard and curved multiplanar reformatted images were best for defining the occiput-C1-C2 anatomy and the extent of segmentation defects. The curved multiplanar reformatted images were especially helpful in keeping the spine from "coming in" and "going out" of the plane of the image when there was significant spine deformity in the sagittal or coronal plane. The three-dimensional reconstructions proved valuable in defining failures of formation. Advanced computed tomography imaging (three-dimensional computed tomography and curved/standard multiplanar reformatted images) allows better definition of congenital spine anomalies. More than 50% of the cases showed additional abnormalities not appreciated on plain radiographs or axial two-dimensional computed tomography images. Curved multiplanar reformatted images allowed imaging in the coronal and sagittal planes of the entire deformity.

  6. Integrated biophotonics in endoscopic oncology

    NASA Astrophysics Data System (ADS)

    Muguruma, Naoki; DaCosta, Ralph S.; Wilson, Brian C.; Marcon, Norman E.

    2009-02-01

    Gastrointestinal endoscopy has made great progress during the last decade. Diagnostic accuracy can be enhanced by better training, improved dye-contrast techniques, and the development of new image processing technologies. However, diagnosis using conventional endoscopy with white-light optical imaging is essentially limited by its basis in morphological changes and/or visual attributes (hue, saturation, and intensity), whose interpretation depends on the endoscopist's eye and brain. For microlesions in the gastrointestinal tract, we still ultimately rely on the histopathological diagnosis from biopsy specimens. Autofluorescence imaging has been applied to lesions that are difficult to recognize morphologically or are indistinct under the conventional endoscope, and this approach has potential application for the diagnosis of dysplastic lesions and early cancers in the gastrointestinal tract, supplementing the information from white light endoscopy. This system has the advantage of requiring no administration of a photosensitive agent, making it suitable as a screening method for the early detection of neoplastic tissues. Narrow band imaging (NBI) is a novel endoscopic technique which can distinguish neoplastic and non-neoplastic lesions without chromoendoscopy. Magnifying endoscopy in combination with NBI has an obvious advantage, namely analysis of the epithelial pit pattern and the vascular network. This new technique allows a detailed visualization in early neoplastic lesions of esophagus, stomach and colon. However, problems remain: how to combine these technologies into an optimal diagnostic strategy, how to incorporate them into the algorithm for therapeutic decision-making, and how to standardize the several classifications surrounding them. 'Molecular imaging' is a concept representing the most novel imaging methods in medicine, although the definition of the word is still controversial. 
    In the field of gastrointestinal endoscopy, the future of endoscopic diagnosis is likely to be shaped by a combination of biomarkers and technology, and 'endoscopic molecular imaging' should be defined as "visualization of molecular characteristics with endoscopy". These innovations will allow us not only to locate a tumor or dysplastic lesion but also to visualize its molecular characteristics (e.g., DNA mutations and polymorphisms, gene and/or protein expression), and the activity of specific molecules and biological processes that affect tumor behavior and/or its response to therapy. In the near future, these promising technologies are likely to play a central role in gastrointestinal oncology.

  7. Computer vision cracks the leaf code

    PubMed Central

    Wilf, Peter; Zhang, Shengping; Chikkerur, Sharat; Little, Stefan A.; Wing, Scott L.; Serre, Thomas

    2016-01-01

    Understanding the extremely variable, complex shape and venation characters of angiosperm leaves is one of the most challenging problems in botany. Machine learning offers opportunities to analyze large numbers of specimens, to discover novel leaf features of angiosperm clades that may have phylogenetic significance, and to use those characters to classify unknowns. Previous computer vision approaches have primarily focused on leaf identification at the species level. It remains an open question whether learning and classification are possible among major evolutionary groups such as families and orders, which usually contain hundreds to thousands of species each and exhibit many times the foliar variation of individual species. Here, we tested whether a computer vision algorithm could use a database of 7,597 leaf images from 2,001 genera to learn features of botanical families and orders, then classify novel images. The images are of cleared leaves, specimens that are chemically bleached, then stained to reveal venation. Machine learning was used to learn a codebook of visual elements representing leaf shape and venation patterns. The resulting automated system learned to classify images into families and orders with a success rate many times greater than chance. Of direct botanical interest, the responses of diagnostic features can be visualized on leaf images as heat maps, which are likely to prompt recognition and evolutionary interpretation of a wealth of novel morphological characters. With assistance from computer vision, leaves are poised to make numerous new contributions to systematic and paleobotanical studies. PMID:26951664
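    The "codebook of visual elements" described above works by assigning each local descriptor to its nearest codeword, so an image becomes a histogram of codeword counts that a classifier can consume. A toy sketch with 2-D points standing in for real descriptors (the paper learns its codebook from cleared-leaf images, which is not reproduced here):

```python
def nearest_codeword(desc, codebook):
    """Index of the codeword closest to the descriptor (squared Euclidean)."""
    return min(range(len(codebook)),
               key=lambda k: sum((d - c) ** 2 for d, c in zip(desc, codebook[k])))

def bow_histogram(descriptors, codebook):
    """Quantize each descriptor and count codeword occurrences."""
    hist = [0] * len(codebook)
    for desc in descriptors:
        hist[nearest_codeword(desc, codebook)] += 1
    return hist

# Toy codebook and one image's descriptors.
codebook = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
image_descriptors = [(0.1, 0.1), (0.9, 1.0), (0.2, 0.0), (0.1, 0.9)]
hist = bow_histogram(image_descriptors, codebook)
```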

  8. Clinical impact of migraine for the management of glaucoma patients.

    PubMed

    Nguyen, Bao N; Lek, Jia Jia; Vingrys, Algis J; McKendrick, Allison M

    2016-03-01

    Migraine is a common and debilitating primary headache disorder that affects 10-15% of the general population, particularly people of working age. Migraine is relevant to providers of clinical eye-care because migraine attacks are associated with a range of visual sensory symptoms, and because of growing evidence that the results of standard tests of visual function necessary for the diagnosis and monitoring of glaucoma (visual fields, electrophysiology, ocular imaging) can be abnormal due to migraine. These abnormalities are measurable between migraine events (the interictal period), despite patients being asymptomatic and otherwise healthy. This picture is further complicated by epidemiological data that suggests an increased prevalence of migraine in patients with glaucoma, particularly in patients with normal tension glaucoma. We discuss how migraine, as a co-morbidity, can confound the results and interpretation of clinical tests that form part of contemporary glaucoma evaluation, and provide practical evidence-based recommendations for the clinical testing and management of patients with migraine who attend eye-care settings. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Audiovisual quality estimation of mobile phone video cameras with interpretation-based quality approach

    NASA Astrophysics Data System (ADS)

    Radun, Jenni E.; Virtanen, Toni; Olives, Jean-Luc; Vaahteranoksa, Mikko; Vuori, Tero; Nyman, Göte

    2007-01-01

    We present an effective method for comparing subjective audiovisual quality and the features related to the quality changes of different video cameras. Both quantitative estimation of overall quality and qualitative description of critical quality features are achieved by the method. The aim was to combine two image quality evaluation methods, the quantitative Absolute Category Rating (ACR) method with hidden reference removal and the qualitative Interpretation-Based Quality (IBQ) method, in order to see how they complement each other in audiovisual quality estimation tasks. Twenty-six observers estimated the audiovisual quality of six different cameras, mainly mobile phone video cameras. In order to achieve an efficient subjective estimation of audiovisual quality, only two contents with different quality requirements were recorded with each camera. The results show that the subjectively important quality features were more related to the overall estimations of cameras' visual video quality than to the features related to sound. The data demonstrated two significant quality dimensions related to visual quality: darkness and sharpness. We conclude that the qualitative methodology can complement quantitative quality estimations also with audiovisual material. The IBQ approach is especially valuable when the induced quality changes are multidimensional.

  10. Phenotype analysis of early risk factors from electronic medical records improves image-derived diagnostic classifiers for optic nerve pathology

    NASA Astrophysics Data System (ADS)

    Chaganti, Shikha; Nabar, Kunal P.; Nelson, Katrina M.; Mawn, Louise A.; Landman, Bennett A.

    2017-03-01

    We examine imaging and electronic medical records (EMR) of 588 subjects over five major disease groups that affect optic nerve function. An objective evaluation of the role of imaging and EMR data in diagnosis of these conditions would improve understanding of these diseases and help in early intervention. We developed an automated image processing pipeline that identifies the orbital structures within the human eyes from computed tomography (CT) scans, calculates structural size, and performs volume measurements. We customized the EMR-based phenome-wide association study (PheWAS) to derive diagnostic EMR phenotypes that occur at least two years prior to the onset of the conditions of interest from a separate cohort of 28,411 ophthalmology patients. We used random forest classifiers to evaluate the predictive power of image-derived markers, EMR phenotypes, and clinical visual assessments in identifying disease cohorts from a control group of 763 patients without optic nerve disease. Image-derived markers showed more predictive power than clinical visual assessments or EMR phenotypes. However, the addition of EMR phenotypes to the imaging markers improves the classification accuracy against controls: the AUC improved from 0.67 to 0.88 for glaucoma, 0.73 to 0.78 for intrinsic optic nerve disease, 0.72 to 0.76 for optic nerve edema, 0.72 to 0.77 for orbital inflammation, and 0.81 to 0.85 for thyroid eye disease. This study illustrates the importance of diagnostic context for interpretation of image-derived markers and the proposed PheWAS technique provides a flexible approach for learning salient features of patient history and incorporating these data into traditional machine learning analyses.
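    The AUC figures quoted above can be estimated with the rank-based (Mann-Whitney) formulation: the probability that a randomly chosen disease case scores higher than a randomly chosen control. A minimal sketch; the classifier scores below are invented solely to show how adding a feature set can raise the AUC:

```python
def auc(pos_scores, neg_scores):
    """Probability a random positive case outranks a random negative one;
    ties count as half. O(n*m) pairwise estimator."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical scores: image-derived markers alone vs. image + EMR phenotypes.
image_only = auc([0.7, 0.6, 0.4], [0.5, 0.3, 0.2])
image_plus_emr = auc([0.8, 0.7, 0.6], [0.5, 0.3, 0.2])
```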

  11. Generating descriptive visual words and visual phrases for large-scale image applications.

    PubMed

    Zhang, Shiliang; Tian, Qi; Hua, Gang; Huang, Qingming; Gao, Wen

    2011-09-01

    Bag-of-visual-words (BoW) representation has been applied to various problems in the fields of multimedia and computer vision. The basic idea is to represent images as visual documents composed of repeatable and distinctive visual elements, which are comparable to text words. Notwithstanding its great success and wide adoption, a visual vocabulary created from single-image local descriptors is often shown to be not as effective as desired. In this paper, descriptive visual words (DVWs) and descriptive visual phrases (DVPs) are proposed as the visual correspondences to text words and phrases, where visual phrases refer to frequently co-occurring visual word pairs. Since images are the carriers of visual objects and scenes, a descriptive visual element set can be composed of the visual words and their combinations which are effective in representing certain visual objects or scenes. Based on this idea, a general framework is proposed for generating DVWs and DVPs for image applications. In a large-scale image database containing 1506 object and scene categories, the visual words and visual word pairs descriptive to certain objects or scenes are identified and collected as the DVWs and DVPs. Experiments show that the DVWs and DVPs are informative and descriptive and, thus, are more comparable with text words than the classic visual words. We apply the identified DVWs and DVPs in several applications including large-scale near-duplicated image retrieval, image search re-ranking, and object recognition. The combination of DVW and DVP performs better than the state of the art in large-scale near-duplicated image retrieval in terms of accuracy, efficiency and memory consumption. The proposed image search re-ranking algorithm, DWPRank, outperforms the state-of-the-art algorithm by 12.4% in mean average precision and is about 11 times faster.
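    The phrase-mining step behind DVPs can be sketched as frequent-pair counting over the visual words present in each image. This toy version ignores the spatial proximity constraint the paper uses (real visual phrases require the words to co-occur nearby in the image); word IDs and the support threshold are invented:

```python
from collections import Counter
from itertools import combinations

def frequent_pairs(images, min_support=2):
    """images: list of sets of visual-word IDs present in each image.
    Returns the set of word pairs occurring in >= min_support images."""
    counts = Counter()
    for words in images:
        for pair in combinations(sorted(words), 2):
            counts[pair] += 1
    return {pair for pair, c in counts.items() if c >= min_support}

images = [{1, 2, 3}, {1, 2}, {2, 3, 4}, {1, 2, 4}]
phrases = frequent_pairs(images)
```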

  12. Structured Sparse Principal Components Analysis With the TV-Elastic Net Penalty.

    PubMed

    de Pierrefeu, Amicie; Lofstedt, Tommy; Hadj-Selem, Fouad; Dubois, Mathieu; Jardri, Renaud; Fovet, Thomas; Ciuciu, Philippe; Frouin, Vincent; Duchesnay, Edouard

    2018-02-01

    Principal component analysis (PCA) is an exploratory tool widely used in data analysis to uncover the dominant patterns of variability within a population. Despite its ability to represent a data set in a low-dimensional space, PCA's interpretability remains limited. Indeed, the components produced by PCA are often noisy or exhibit no visually meaningful patterns. Furthermore, the fact that the components are usually non-sparse may also impede interpretation, unless arbitrary thresholding is applied. However, in neuroimaging, it is essential to uncover clinically interpretable phenotypic markers that would account for the main variability in the brain images of a population. Recently, some alternatives to the standard PCA approach, such as sparse PCA (SPCA), have been proposed, their aim being to limit the density of the components. Nonetheless, sparsity alone does not entirely solve the interpretability problem in neuroimaging, since it may yield scattered and unstable components. We hypothesized that the incorporation of prior information regarding the structure of the data may lead to improved relevance and interpretability of brain patterns. We therefore present a simple extension of the popular PCA framework that adds structured sparsity penalties on the loading vectors in order to identify the few stable regions in the brain images that capture most of the variability. Such structured sparsity can be obtained by combining, e.g., ℓ1 and total variation (TV) penalties, where the TV regularization encodes information on the underlying structure of the data. This paper presents the structured SPCA (denoted SPCA-TV) optimization framework and its resolution. We demonstrate SPCA-TV's effectiveness and versatility on three different data sets. It can be applied to any kind of structured data, such as N-dimensional array images or meshes of cortical surfaces. The gains of SPCA-TV over unstructured approaches (such as SPCA and ElasticNet PCA) or a structured approach (such as GraphNet PCA) are significant, since SPCA-TV reveals the variability within a data set in the form of intelligible brain patterns that are easier to interpret and more stable across different samples.
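    The paper's SPCA-TV solver is not reproduced here, but the effect of a sparsity penalty on a loading vector can be sketched with generic ℓ1-penalized power iteration: soft-threshold the loadings at each update so that weakly contributing coordinates are driven exactly to zero. The covariance matrix is a toy example and the TV term is omitted:

```python
def soft_threshold(v, t):
    """Elementwise l1 proximal step: shrink toward zero by t."""
    return [max(abs(x) - t, 0.0) * (1.0 if x >= 0 else -1.0) for x in v]

def sparse_leading_vector(cov, t=0.1, iters=200):
    """Power iteration for the leading direction of `cov`,
    with soft-thresholding to encourage sparse loadings."""
    v = [1.0] * len(cov)
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(len(v))) for i in range(len(cov))]
        w = soft_threshold(w, t)
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Toy covariance: one dominant coordinate, two weakly varying ones.
cov = [[4.0, 0.2, 0.0],
       [0.2, 0.5, 0.0],
       [0.0, 0.0, 0.3]]
v = sparse_leading_vector(cov)
```

    The weakly varying third coordinate is zeroed out exactly, which is the interpretability gain sparse loadings buy; SPCA-TV additionally forces the surviving loadings into spatially contiguous regions.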

  13. GPR image analysis to locate water leaks from buried pipes by applying variance filters

    NASA Astrophysics Data System (ADS)

    Ocaña-Levario, Silvia J.; Carreño-Alvarado, Elizabeth P.; Ayala-Cabrera, David; Izquierdo, Joaquín

    2018-05-01

    Nowadays, there is growing interest in controlling and reducing the amount of water lost through leakage in water supply systems (WSSs). Leakage is, in fact, one of the biggest problems faced by the managers of these utilities. This work addresses the problem of leakage in WSSs by using GPR (Ground Penetrating Radar) as a non-destructive method. The main objective is to identify and extract features from GPR images, such as leaks and components, under controlled laboratory conditions by a methodology based on second order statistical parameters and, using the obtained features, to create 3D models that allow quick visualization of components and leaks in WSSs from GPR image analysis and subsequent interpretation. This methodology has been used before in other fields and provided promising results. The results obtained with the proposed methodology are presented, analyzed, interpreted and compared with the results obtained by using a well-established multi-agent based methodology. These results show that the variance filter is capable of highlighting the characteristics of components and anomalies, in an intuitive manner, which can be identified by non-highly qualified personnel using the 3D models we develop. This research intends to pave the way towards future intelligent detection systems that enable the automatic detection of leaks in WSSs.
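    A variance filter of the kind used here replaces each pixel by the variance of its local neighbourhood, so homogeneous background goes to zero while reflectors and anomalies light up. A minimal sketch with a 3x3 window on a toy 2-D grid (edges are skipped for brevity; a real GPR B-scan would be much larger):

```python
def variance_filter(img):
    """Return an image of 3x3 local variances; border pixels stay 0."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = [img[i + di][j + dj] for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            mean = sum(window) / 9.0
            out[i][j] = sum((x - mean) ** 2 for x in window) / 9.0
    return out

# Uniform background with one bright reflector.
img = [[1.0] * 6 for _ in range(5)]
img[2][2] = 10.0
filtered = variance_filter(img)
```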

  14. Visual perception and stereoscopic imaging: an artist's perspective

    NASA Astrophysics Data System (ADS)

    Mason, Steve

    2015-03-01

    This paper continues my February 2014 IS&T/SPIE convention exploration into the relationship of stereoscopic vision and consciousness (90141F-1). It was proposed then that by using stereoscopic imaging people may consciously experience, or see, what they are viewing, and thereby become more aware of the way their brains manage and interpret visual information. Environmental imaging was suggested as a way to accomplish this. This paper is the result of further investigation, research, and follow-up imaging. A show of images resulting from this research allows viewers to experience for themselves the effects of stereoscopy on consciousness. Creating dye-infused aluminum prints while employing ChromaDepth® 3D glasses, I hope not only to raise awareness of visual processing but also to explore the differences and similarities between the artist and scientist―art increases right-brain spatial consciousness, not only empirical thinking, while furthering the viewer's cognizance of the process of seeing. The artist must abandon preconceptions and expectations, despite what evidence and experience may indicate, in order to see what is happening in his work and to allow it to develop in ways he/she could never anticipate. This process is then revealed to the viewer in a show of work. It is in the experiencing, not just the thinking, that insight is achieved. Directing the viewer's awareness during the experience using stereoscopic imaging allows for further understanding of the brain's function in the visual process. A cognitive transformation occurs, the proverbial "left/right brain shift," in order for viewers to "see" the space. Using what we know from recent brain research, these images will draw from certain parts of the brain when viewed in two dimensions and different ones when viewed stereoscopically, a shift, if one is looking for it, which is quite noticeable. 
People who have experienced these images in the context of examining their own visual process have been startled by the effect the images have on how they perceive the world around them. For instance, when viewing the mountains on a trip to Montana, one woman exclaimed, "I could no longer see just mountains, but also so many amazing colors and shapes": she could see beyond her preconceptions of mountains to realize more of the beauty that was really there, not just the objects she "thought" to be there. The awareness gained from experiencing the artist's perspective will help with creative thinking in particular and research in general. Perceiving the space in these works, completely removing the picture plane by use of the 3D glasses, making a conscious connection between feeling and visual content, and thus gaining a deeper appreciation of the visual process will all contribute to understanding how our thinking, our left-brain domination, gets in the way of our seeing what is right in front of us. We fool ourselves with concept and memory; experiencing these prints may help some come a little closer to reality.

  15. Visualization techniques to aid in the analysis of multi-spectral astrophysical data sets

    NASA Technical Reports Server (NTRS)

    Brugel, Edward W.; Domik, Gitta O.; Ayres, Thomas R.

    1993-01-01

The goal of this project was to support the scientific analysis of multi-spectral astrophysical data by means of scientific visualization. Scientific visualization offers its greatest value not as a method separate from or alternative to other data analysis methods, but in addition to them. Together with quantitative analysis of data, such as that offered by statistical analysis or image and signal processing, visualization attempts to explore all information inherent in astrophysical data in the most effective way. Data visualization is one aspect of data analysis. Our taxonomy, as developed in Section 2, includes identification of and access to existing information, preprocessing and quantitative analysis of data, visual representation, and the user interface as major components of the software environment for astrophysical data analysis. In pursuing our goal to provide methods and tools for scientific visualization of multi-spectral astrophysical data, we therefore treated scientific data analysis as one whole process, adding visualization tools to an already existing environment and integrating the various components that define a scientific data analysis environment. As long as the software development process of each component is separate from all other components, users of data analysis software are constantly interrupted in their scientific work in order to convert from one data format to another, to move from one storage medium to another, or to switch from one user interface to another. We also took an in-depth look at scientific visualization and its underlying concepts, current visualization systems, their contributions, and their shortcomings. The role of data visualization is to stimulate mental processes different from those of quantitative data analysis, such as the perception of spatial relationships or the discovery of patterns or anomalies while browsing through large data sets.
Visualization often leads to an intuitive understanding of the meaning of data values and their relationships, at the cost of some accuracy in interpreting those values. To be accurate in interpretation, data values must be measured, computed on, and compared to theoretical or empirical models (quantitative analysis). If visualization software hampers quantitative analysis (as happens with some commercial visualization products), its usefulness for astrophysical data analysis is greatly diminished. The software system STAR (Scientific Toolkit for Astrophysical Research) was developed as a prototype during the course of the project to better understand the pragmatic concerns raised in the project. STAR led to a better understanding of the importance of collaboration between astrophysicists and computer scientists.

  16. Radiomics: Images Are More than Pictures, They Are Data

    PubMed Central

    Kinahan, Paul E.; Hricak, Hedvig

    2016-01-01

In the past decade, the field of medical image analysis has grown exponentially, with an increased number of pattern recognition tools and increasing data set sizes. These advances have facilitated the development of processes for high-throughput extraction of quantitative features that result in the conversion of images into mineable data and the subsequent analysis of these data for decision support; this practice is termed radiomics. This is in contrast to the traditional practice of treating medical images as pictures intended solely for visual interpretation. Radiomic data contain first-, second-, and higher-order statistics. These data are combined with other patient data and are mined with sophisticated bioinformatics tools to develop models that may potentially improve diagnostic, prognostic, and predictive accuracy. Because radiomics analyses are intended to be conducted with standard-of-care images, it is conceivable that conversion of digital images to mineable data will eventually become routine practice. This report describes the process of radiomics, its challenges, and its potential power to facilitate better clinical decision making, particularly in the care of patients with cancer. PMID:26579733
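The abstract above notes that radiomic data contain first-, second-, and higher-order statistics. As a rough illustration of what such features look like, the sketch below computes two first-order histogram statistics and one second-order co-occurrence (GLCM) feature from a toy grayscale image. The 4-level quantization, the horizontal-only neighbor offset, and the feature names are illustrative assumptions, not a standard radiomics pipeline.

```python
# Minimal sketch of first- and second-order image features, stdlib only.
import math

def first_order_features(image):
    """Mean, variance, and skewness of the intensity histogram."""
    pixels = [p for row in image for p in row]
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    std = math.sqrt(var)
    skew = (sum((p - mean) ** 3 for p in pixels) / n) / std ** 3 if std else 0.0
    return {"mean": mean, "variance": var, "skewness": skew}

def glcm_contrast(image, levels=4):
    """Second-order feature: contrast of the horizontal co-occurrence matrix."""
    lo = min(p for row in image for p in row)
    hi = max(p for row in image for p in row)
    span = (hi - lo) or 1
    # Quantize intensities to `levels` gray levels.
    q = [[min(levels - 1, (p - lo) * levels // span) for p in row] for row in image]
    glcm = [[0] * levels for _ in range(levels)]
    for row in q:
        for a, b in zip(row, row[1:]):  # horizontal neighbor pairs only
            glcm[a][b] += 1
    total = sum(map(sum, glcm)) or 1
    return sum((i - j) ** 2 * glcm[i][j] / total
               for i in range(levels) for j in range(levels))

image = [[10, 10, 50, 50],
         [10, 20, 50, 60],
         [20, 20, 60, 60],
         [10, 20, 50, 60]]
features = first_order_features(image)
features["glcm_contrast"] = glcm_contrast(image)
```

In a real pipeline these values, computed over a segmented region of interest, would form one row of the "mineable data" table that is then combined with other patient data.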

  17. Optimizing morphology through blood cell image analysis.

    PubMed

    Merino, A; Puigví, L; Boldú, L; Alférez, S; Rodellar, J

    2018-05-01

Morphological review of the peripheral blood smear remains a crucial diagnostic aid, as it provides information relevant to the diagnosis and is important for the selection of additional techniques. Nevertheless, the distinctive cytological characteristics of blood cells are subjective and influenced by the reviewer's interpretation; because of that, translating subjective morphological examination into objective parameters is a challenge. The use of digital microscopy systems has become widespread in clinical laboratories. Because automatic analyzers have limitations in detecting abnormal or neoplastic cells, it is of interest to identify, through digital image analysis, quantitative features that describe the morphological characteristics of different cells. Three main classes of features are used: geometric, color, and texture. Geometric parameters (nucleus/cytoplasm ratio, cellular area, nucleus perimeter, cytoplasmic profile, RBC proximity, and others) are familiar to pathologists, as they are related to visual cell patterns. Different color spaces can be used to investigate the rich amount of information that color may offer to describe abnormal lymphoid or blast cells. Texture is related to spatial patterns of color or intensity, which can be visually detected and quantitatively represented using statistical tools. This study reviews current and new quantitative features that can contribute to optimizing morphology through blood cell digital image processing techniques. © 2018 John Wiley & Sons Ltd.
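To make the geometric class of features concrete, the sketch below computes a nucleus/cytoplasm area ratio and a nucleus perimeter from toy binary masks. The masks, the 4-connectivity perimeter definition, and the feature names are illustrative assumptions, not the exact measurements used in the study.

```python
# Minimal sketch of geometric blood-cell features from binary masks
# (1 = pixel belongs to the region), stdlib only.

def area(mask):
    """Number of foreground pixels in the mask."""
    return sum(sum(row) for row in mask)

def perimeter(mask):
    """4-connectivity perimeter: foreground pixel edges facing background."""
    rows, cols = len(mask), len(mask[0])
    edges = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c]:
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if not (0 <= nr < rows and 0 <= nc < cols) or not mask[nr][nc]:
                        edges += 1
    return edges

# Toy cell: the cell mask covers the whole cell; the nucleus is a subregion.
cell =    [[0, 1, 1, 1, 0],
           [1, 1, 1, 1, 1],
           [1, 1, 1, 1, 1],
           [0, 1, 1, 1, 0]]
nucleus = [[0, 0, 0, 0, 0],
           [0, 1, 1, 0, 0],
           [0, 1, 1, 0, 0],
           [0, 0, 0, 0, 0]]

# Nucleus/cytoplasm ratio: nucleus area over the cytoplasm-only area.
nc_ratio = area(nucleus) / (area(cell) - area(nucleus))
```

Color and texture features would be computed analogously from the pixel values inside each mask, e.g. per-channel histograms in a chosen color space, or co-occurrence statistics of intensities.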

  18. Play dough as an educational tool for visualization of complicated cerebral aneurysm anatomy

    PubMed Central

    Eftekhar, Behzad; Ghodsi, Mohammad; Ketabchi, Ebrahim; Ghazvini, Arman Rakan

    2005-01-01

Background Imagining the three-dimensional (3D) structure of cerebral vascular lesions from two-dimensional (2D) angiograms is one of the skills that neurosurgical residents must acquire during their training. Although ongoing progress in computer software and digital imaging systems has enormously facilitated the viewing and interpretation of cerebral angiograms, these facilities are not always available. Methods We present the use of play dough as an adjunct to the teaching armamentarium for training in the visualization of cerebral aneurysms in some cases. Results The advantages of play dough are its low cost, availability, and simplicity of use; it is more efficient and realistic than simple drawings, or even angiographic views from different angles, in training less experienced residents, and it requires no computers or similar equipment. The disadvantages include residents' psychological resistance to using something in surgical training that is usually considered a toy, and the fact that it is not as clean as drawings or computerized images. Conclusion Although computerized software using patients' own imaging data seems likely to become more advanced in the future, the use of play dough in some complicated cerebral aneurysm cases may be helpful for 3D reconstruction of the real situation. PMID:15885141

  19. Interpretation of HCMM images: A regional study

    NASA Technical Reports Server (NTRS)

    1982-01-01

Potential users of HCMM data, especially those with only a cursory background in thermal remote sensing, are familiarized with the kinds of information contained in the images that can be extracted with some reliability solely from inspection of such standard products as those generated at NASA/GSFC and now archived in the National Space Science Data Center. Visual analysis of photoimagery is prone to various misimpressions and outright errors brought on by unawareness of the influence of physical factors, as well as by sometimes misleading tonal patterns introduced during photoprocessing. The quantitative approach, which relies on computer processing of digital HCMM data, field measurements, and integration of rigorous mathematical models, can usually identify, compensate for, or correct the contributions from at least some of the natural factors and those associated with photoprocessing. Color composite, day-IR, night-IR, and visible images of California and Nevada are examined.

  20. An evaluation of EREP (Skylab) and ERTS imagery for integrated natural resources survey

    NASA Technical Reports Server (NTRS)

    Vangenderen, J. L. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. An experimental procedure has been devised and is being tested for natural resource surveys to cope with the problems of interpreting and processing the large quantities of data provided by Skylab and ERTS. Some basic aspects of orbital imagery such as scale, the role of repetitive coverage, and types of sensors are being examined in relation to integrated surveys of natural resources and regional development planning. Extrapolation away from known ground conditions, a fundamental technique for mapping resources, becomes very effective when used on orbital imagery supported by field mapping. Meaningful boundary delimitations can be made on orbital images using various image enhancement techniques. To meet the needs of many developing countries, this investigation into the use of satellite imagery for integrated resource surveys involves the analysis of the images by means of standard visual photointerpretation methods.
