An infrared/video fusion system for military robotics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, A.W.; Roberts, R.S.
1997-08-05
Sensory information is critical to the telerobotic operation of mobile robots. In particular, visual sensors are a key component of the sensor package on a robot engaged in urban military operations. Visual sensors provide the robot operator with a wealth of information for tasks including robot navigation and threat assessment. However, simple countermeasures such as darkness, smoke, or blinding by a laser can easily neutralize visual sensors. In order to provide a robust visual sensing system, an infrared sensor is required to augment the primary visual sensor. An infrared sensor can acquire useful imagery in conditions that incapacitate a visual sensor. A simple approach to incorporating an infrared sensor into the visual sensing system is to display two images to the operator: side-by-side visual and infrared images. However, dual images might overwhelm the operator with information and result in degraded robot performance. A better solution is to combine the visual and infrared images into a single image that maximizes scene information. Fusing visual and infrared images into a single image demands balancing the mixture of visual and infrared information. Humans are accustomed to viewing and interpreting visual images; they are not accustomed to viewing or interpreting infrared images. Hence, the infrared image must be used to enhance the visual image, not obfuscate it.
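A minimal sketch of the luminance-blending approach this abstract advocates, in which the infrared image enhances rather than replaces the visible image; the blending rule, the weight alpha, and all function names are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def fuse_visible_ir(visible, infrared, alpha=0.3):
    """Blend an IR image into the luminance of a visible RGB image.

    visible: HxWx3 float array in [0, 1]; infrared: HxW float array in [0, 1].
    A small alpha keeps the familiar visible appearance and uses IR
    only as an enhancement, per the abstract's design goal.
    """
    # Split luminance from color so chromaticity is preserved.
    luma = (0.299 * visible[..., 0] + 0.587 * visible[..., 1]
            + 0.114 * visible[..., 2])
    fused_luma = np.clip((1.0 - alpha) * luma + alpha * infrared, 0.0, 1.0)
    # Rescale the RGB channels to carry the fused luminance.
    scale = fused_luma / np.maximum(luma, 1e-6)
    return np.clip(visible * scale[..., None], 0.0, 1.0)
```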
A proposed intracortical visual prosthesis image processing system.
Srivastava, N R; Troyk, P
2005-01-01
It has been a goal of neuroprosthesis researchers to develop a system which could provide artificial vision to a large population of individuals with blindness. Earlier research has demonstrated that electrically stimulating the visual cortex can evoke spatial visual percepts, i.e. phosphenes. The goal of a visual cortex prosthesis is to stimulate the visual cortex and generate visual perception in real time to restore vision. Even though the normal working of the visual system is not completely understood, the existing knowledge has inspired research groups to develop strategies for visual cortex prostheses that can help blind patients in their daily activities. A major challenge in this work is the development of an image processing system for converting an electronic image, as captured by a camera, into a real-time data stream for stimulation of the implanted electrodes. This paper proposes a system which captures the image using a camera and uses a dedicated real-time hardware image processor to deliver electrical pulses to intracortical electrodes. This system has to be flexible enough to adapt to individual patients and to various strategies of image reconstruction. Here we consider a preliminary architecture for this system.
Visual information mining in remote sensing image archives
NASA Astrophysics Data System (ADS)
Pelizzari, Andrea; Descargues, Vincent; Datcu, Mihai P.
2002-01-01
The present article focuses on the development of interactive exploratory tools for visually mining the image content in large remote sensing archives. Two aspects are treated: the iconic visualization of the global information in the archive and the progressive visualization of image details. The proposed methods are integrated in the Image Information Mining (I2M) system. The images and image structures in the I2M system are indexed based on a probabilistic approach. The resulting links are managed by a relational database. Both the intrinsic complexity of the observed images and the diversity of user requests result in a great number of associations in the database. Thus new tools have been designed to visualize, in iconic representation, the relationships created during a query or information mining operation: visualization of the query results positioned on the geographical map, a quick-looks gallery, visualization of the measure of goodness of the query, and visualization of the image space for statistical evaluation purposes. Additionally, the I2M system is enhanced with progressive detail visualization in order to allow better access for operator inspection. I2M is a three-tier Java architecture and is optimized for the Internet.
Pilot Task Profiles, Human Factors, And Image Realism
NASA Astrophysics Data System (ADS)
McCormick, Dennis
1982-06-01
Computer Image Generation (CIG) visual systems provide real-time scenes for state-of-the-art flight training simulators. To produce an effective and efficient training scene, a CIG visual system requires a greater understanding of training tasks, human factors, and the concept of image realism than is required by other types of visual systems. Image realism must be defined in terms of pilot visual information requirements. Human factors analysis of training and perception is necessary to determine the pilot's information requirements. System analysis then determines how the CIG and display device can best provide essential information to the pilot. This analysis procedure ensures optimum training effectiveness and system performance.
NASA Astrophysics Data System (ADS)
Wan, Qianwen; Panetta, Karen; Agaian, Sos
2017-05-01
Autonomous facial recognition systems are widely used in real-life applications, such as homeland border security, law enforcement identification and authentication, and video-based surveillance analysis. Issues like low image quality and non-uniform illumination, as well as variations in pose and facial expression, can impair the performance of recognition systems. To address the non-uniform illumination challenge, we present a novel robust autonomous facial recognition system based on a so-called logarithmical image visualization technique inspired by the human visual system. In this paper, the proposed method, for the first time, couples the logarithmical image visualization technique with the local binary pattern to perform discriminative feature extraction for facial recognition. The Yale database, the Yale-B database, and the ATT database are used for computer simulation of accuracy and efficiency. The extensive computer simulation demonstrates the method's efficiency, accuracy, and robustness to illumination variation for facial recognition.
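A sketch of the two stages this abstract names, assuming a simple logarithmic compression step for illumination normalization and a basic 8-neighbor local binary pattern (LBP) descriptor; the paper's actual logarithmical image visualization technique is more elaborate.

```python
import numpy as np

def log_visualize(image):
    """Logarithmic dynamic-range compression (one common HVS-inspired form)."""
    image = image.astype(np.float64)
    return np.log1p(image) / np.log1p(image.max())

def lbp_histogram(gray):
    """Histogram of basic 8-neighbor LBP codes for a 2-D grayscale array."""
    c = gray[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(c.shape, dtype=np.int64)
    h, w = gray.shape
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbor >= c).astype(np.int64) << bit
    return np.bincount(codes.ravel(), minlength=256)

# Usage: features = lbp_histogram(log_visualize(face_image))
```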
Space shuttle visual simulation system design study
NASA Technical Reports Server (NTRS)
1973-01-01
The current and near-future state-of-the-art in visual simulation equipment technology is related to the requirements of the space shuttle visual system. Image source, image sensing, and displays are analyzed on a subsystem basis, and the principal conclusions are used in the formulation of a recommended baseline visual system. Perceptibility and visibility are also analyzed.
Bio-inspired display of polarization information using selected visual cues
NASA Astrophysics Data System (ADS)
Yemelyanov, Konstantin M.; Lin, Shih-Schon; Luis, William Q.; Pugh, Edward N., Jr.; Engheta, Nader
2003-12-01
For imaging systems, the polarization of electromagnetic waves carries much potentially useful information about features of the world such as surface shape, material content, and local curvature of objects, as well as about the relative locations of the source, object, and imaging system. The imaging system of the human eye, however, is "polarization-blind" and cannot utilize the polarization of light without the aid of an artificial, polarization-sensitive instrument. Therefore, polarization information captured by a man-made polarimetric imaging system must be displayed to a human observer in the form of visual cues that are naturally processed by the human visual system, while essentially preserving the other important non-polarization information (such as spectral and intensity information) in an image. In other words, some form of sensory substitution is needed for representing polarization "signals" without affecting other visual information such as color and brightness. We are investigating several bio-inspired representational methodologies for mapping polarization information into visual cues readily perceived by the human visual system, and determining which mappings are most suitable for specific applications such as object detection, navigation, sensing, scene classification, and surface deformation. The visual cues and strategies we are exploring are: the use of coherently moving dots superimposed on an image to represent various ranges of polarization signals; overlaying textures with spatial and/or temporal signatures to segregate regions of an image with differing polarization; modulating luminance and/or color contrast of scenes in terms of certain aspects of polarization values; and fusing polarization images into intensity-only images. In this talk, we will present samples of our findings in this area.
Visual Image Sensor Organ Replacement: Implementation
NASA Technical Reports Server (NTRS)
Maluf, A. David (Inventor)
2011-01-01
Method and system for enhancing or extending visual representation of a selected region of a visual image, where visual representation is interfered with or distorted, by supplementing a visual signal with at least one audio signal having one or more audio signal parameters that represent one or more visual image parameters, such as vertical and/or horizontal location of the region; region brightness; dominant wavelength range of the region; change in a parameter value that characterizes the visual image, with respect to a reference parameter value; and time rate of change in a parameter value that characterizes the visual image. Region dimensions can be changed to emphasize change with time of a visual image parameter.
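An illustrative sketch of the patent's core idea of mapping visual-region parameters onto audio-signal parameters; the specific mapping chosen here (horizontal location to stereo pan, brightness to amplitude, dominant wavelength to pitch) is our assumption for demonstration, not the patented mapping.

```python
import numpy as np

def region_to_tone(x_norm, brightness, hue_norm,
                   duration=0.25, sample_rate=44100):
    """Return a stereo tone (N, 2) encoding one image region.

    x_norm: horizontal location in [0, 1]; brightness in [0, 1];
    hue_norm: normalized dominant wavelength in [0, 1].
    """
    t = np.linspace(0, duration, int(duration * sample_rate), endpoint=False)
    freq = 220.0 + 660.0 * hue_norm            # dominant wavelength -> pitch
    tone = brightness * np.sin(2 * np.pi * freq * t)  # brightness -> amplitude
    left, right = (1 - x_norm) * tone, x_norm * tone  # location -> stereo pan
    return np.stack([left, right], axis=1)
```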
Phototaxis and the origin of visual eyes
Randel, Nadine
2016-01-01
Vision allows animals to detect spatial differences in environmental light levels. High-resolution image-forming eyes evolved from low-resolution eyes via increases in photoreceptor cell number, improvements in optics and changes in the neural circuits that process spatially resolved photoreceptor input. However, the evolutionary origins of the first low-resolution visual systems have been unclear. We propose that the lowest resolving (two-pixel) visual systems could initially have functioned in visual phototaxis. During visual phototaxis, such elementary visual systems compare light on either side of the body to regulate phototactic turns. Another, even simpler and non-visual strategy is characteristic of helical phototaxis, mediated by sensory–motor eyespots. The recent mapping of the complete neural circuitry (connectome) of an elementary visual system in the larva of the annelid Platynereis dumerilii sheds new light on the possible paths from non-visual to visual phototaxis and to image-forming vision. We outline an evolutionary scenario focusing on the neuronal circuitry to account for these transitions. We also present a comprehensive review of the structure of phototactic eyes in invertebrate larvae and assign them to the non-visual and visual categories. We propose that non-visual systems may have preceded visual phototactic systems in evolution that in turn may have repeatedly served as intermediates during the evolution of image-forming eyes. PMID:26598725
A low-cost and versatile system for projecting wide-field visual stimuli within fMRI scanners
Greco, V.; Frijia, F.; Mikellidou, K.; Montanaro, D.; Farini, A.; D’Uva, M.; Poggi, P.; Pucci, M.; Sordini, A.; Morrone, M. C.; Burr, D. C.
2016-01-01
We have constructed and tested a custom-made magnetic-imaging-compatible visual projection system designed to project on a very wide visual field (~80°). A standard projector was modified with a coupling lens, projecting images into the termination of an image fiber. The other termination of the fiber was placed in the 3-T scanner room with a projection lens, which projected the images relayed by the fiber onto a screen over the head coil, viewed by a participant wearing magnifying goggles. To validate the system, wide-field stimuli were presented in order to identify retinotopic visual areas. The results showed that this low-cost and versatile optical system may be a valuable tool to map visual areas in the brain that process peripheral receptive fields. PMID:26092392
Survey of computer vision technology for UAV navigation
NASA Astrophysics Data System (ADS)
Xie, Bo; Fan, Xiang; Li, Sijian
2017-11-01
Navigation based on computer vision technology, which has the characteristics of strong independence and high precision and is not susceptible to electrical interference, has attracted more and more attention in the field of UAV navigation research. Early navigation projects based on computer vision technology were mainly applied to autonomous ground robots. In recent years, visual navigation systems have been widely applied to unmanned aerial vehicles, deep space probes, and underwater robots, which has further stimulated research into integrated navigation algorithms based on computer vision. In China, with the development of many types of UAVs and the start of the third phase of the lunar exploration program, there has been significant progress in the study of visual navigation. The paper reviews the development of vision-based navigation in the field of UAV research and concludes that visual navigation is mainly applied to three aspects. (1) Acquisition of UAV navigation parameters. Parameters including UAV attitude, position, and velocity can be obtained from the relationship between sensor images and the carrier's attitude, the relationship between instantly matched images and reference images, and the relationship between the carrier's velocity and characteristics of sequential images. (2) Autonomous obstacle avoidance. There are many ways to achieve obstacle avoidance in UAV navigation; the methods based on computer vision, including feature matching, template matching, and image-frame methods, are mainly introduced. (3) Target tracking and positioning. Using the acquired images, UAV position is calculated by optical flow methods, the MeanShift and CamShift algorithms, Kalman filtering, and particle filter algorithms. The paper then describes three kinds of mainstream visual systems. (1) High-speed visual systems, which use a parallel structure in which image detection and processing are carried out at high speed; these are applied in rapid-response systems. (2) Distributed-network visual systems, in which several discrete image acquisition sensors at different locations transmit image data to a node processor to increase the sampling rate. (3) Visual systems combined with observers, which combine image sensors with external observers to compensate for the shortcomings of the visual equipment. To some degree, these systems overcome the weaknesses of early visual systems, including low frame rates, low processing efficiency, and strong noise. Finally, the difficulties of vision-based navigation in practical applications are briefly discussed: (1) due to the huge workload of image operations, real-time performance is poor; (2) due to large environmental influences, anti-interference ability is poor; and (3) because such systems are designed to work in particular environments, their adaptability is poor.
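As a concrete instance of the optical-flow approach this survey lists for navigation-parameter estimation, the following sketch estimates the mean image displacement between consecutive frames with OpenCV's dense Farneback optical flow; the parameter values are typical defaults, not taken from the survey.

```python
import cv2

def mean_image_motion(prev_gray, next_gray):
    """Mean (dx, dy) pixel displacement between two grayscale frames."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        0.5,     # pyramid scale
        3,       # pyramid levels
        15,      # averaging window size
        3,       # iterations per level
        5, 1.2,  # polynomial expansion size and sigma
        0)       # flags
    # flow has shape HxWx2; averaging gives a crude ego-motion estimate.
    return float(flow[..., 0].mean()), float(flow[..., 1].mean())
```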
NASA Technical Reports Server (NTRS)
Kaiser, Mary K. (Inventor); Adelstein, Bernard D. (Inventor); Anderson, Mark R. (Inventor); Beutter, Brent R. (Inventor); Ahumada, Albert J., Jr. (Inventor); McCann, Robert S. (Inventor)
2014-01-01
A method and apparatus for reducing the visual blur of an object being viewed by an observer experiencing vibration. In various embodiments of the present invention, the visual blur is reduced through stroboscopic image modulation (SIM). A SIM device is operated in an alternating "on/off" temporal pattern according to a SIM drive signal (SDS) derived from the vibration being experienced by the observer. A SIM device (controlled by a SIM control system) operating according to the SDS serves to reduce visual blur by "freezing" the visual image of the viewed object (or reducing the image's motion to a slow drift). In various embodiments, the SIM device is selected from the group consisting of illuminator(s), shutter(s), display control system(s), and combinations of the foregoing (including the use of multiple illuminators, shutters, and display control systems).
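A hedged sketch of how an SDS might be derived from a measured vibration signal, opening the strobe for a short window at a fixed phase of each vibration cycle; the zero-crossing scheme and the duty-cycle parameter are illustrative assumptions, not the patented method.

```python
import numpy as np

def sim_drive_signal(vibration, duty=0.1):
    """Boolean on/off strobe pattern, one sample per vibration sample."""
    v = vibration - np.mean(vibration)
    # Upward zero crossings mark the start of each vibration cycle.
    starts = np.flatnonzero((v[:-1] < 0) & (v[1:] >= 0))
    sds = np.zeros(v.size, dtype=bool)
    for a, b in zip(starts[:-1], starts[1:]):
        window = max(1, int(duty * (b - a)))
        sds[a:a + window] = True   # strobe "on" early in every cycle
    return sds
```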
NASA Astrophysics Data System (ADS)
Müller, Henning; Kalpathy-Cramer, Jayashree; Kahn, Charles E., Jr.; Hersh, William
2009-02-01
Content-based visual information (or image) retrieval (CBIR) has been an extremely active research domain within medical imaging over the past ten years, with the goal of improving the management of visual medical information. Many technical solutions have been proposed, and application scenarios for image retrieval as well as image classification have been set up. However, in contrast to medical information retrieval using textual methods, visual retrieval has only rarely been applied in clinical practice. This is despite the large amount and variety of visual information produced in hospitals every day. This information overload imposes a significant burden upon clinicians, and CBIR technologies have the potential to help the situation. However, in order for CBIR to become an accepted clinical tool, it must demonstrate a higher level of technical maturity than it has to date. Since 2004, the ImageCLEF benchmark has included a task for the comparison of visual information retrieval algorithms for medical applications. In 2005, a task for medical image classification was introduced and both tasks have been run successfully for the past four years. These benchmarks allow an annual comparison of visual retrieval techniques based on the same data sets and the same query tasks, enabling the meaningful comparison of various retrieval techniques. The datasets used from 2004-2007 contained images and annotations from medical teaching files. In 2008, however, the dataset used was made up of 67,000 images (along with their associated figure captions and the full text of their corresponding articles) from two Radiological Society of North America (RSNA) scientific journals. This article describes the results of the medical image retrieval task of the ImageCLEF 2008 evaluation campaign. We compare the retrieval results of both visual and textual information retrieval systems from 15 research groups on the aforementioned data set. The results show clearly that, currently, visual retrieval alone does not achieve the performance necessary for real-world clinical applications. Most of the common visual retrieval techniques have a MAP (Mean Average Precision) of around 2-3%, which is much lower than that achieved using textual retrieval (MAP=29%). Advanced machine learning techniques, together with good training data, have been shown to improve the performance of visual retrieval systems in the past. Multimodal retrieval (basing retrieval on both visual and textual information) can achieve better results than purely visual, but only when carefully applied. In many cases, multimodal retrieval systems performed even worse than purely textual retrieval systems. On the other hand, some multimodal retrieval systems demonstrated significantly increased early precision, which has been shown to be a desirable behavior in real-world systems.
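For reference, the MAP figures quoted above are the mean over queries of average precision; a minimal sketch (function and variable names are ours):

```python
def average_precision(ranked_ids, relevant_ids):
    """AP for one query: mean precision at each relevant retrieved item."""
    relevant = set(relevant_ids)
    hits, precisions = 0, []
    for rank, doc_id in enumerate(ranked_ids, start=1):
        if doc_id in relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / max(len(relevant), 1)

def mean_average_precision(runs):
    """runs: iterable of (ranked_ids, relevant_ids) pairs, one per query."""
    runs = list(runs)
    return sum(average_precision(r, q) for r, q in runs) / len(runs)
```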
A GUI visualization system for airborne lidar image data to reconstruct 3D city model
NASA Astrophysics Data System (ADS)
Kawata, Yoshiyuki; Koizumi, Kohei
2015-10-01
A visualization toolbox system with graphical user interfaces (GUIs) was developed for the analysis of LiDAR point cloud data, as a compound object-oriented widget application in IDL (Interactive Data Language). The main features of our system include file input and output, data conversion from ASCII-formatted LiDAR point cloud data to LiDAR image data whose pixel values correspond to the altitude measured by LiDAR, visualization of 2D/3D images at various processing steps, and automatic reconstruction of a 3D city model. The performance and advantages of our GUI visualization system for LiDAR data are demonstrated.
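A sketch of the data-conversion step this abstract describes, gridding LiDAR points into a raster whose pixel values are altitudes; the cell-size parameter and the keep-highest-return rule are our assumptions for illustration.

```python
import numpy as np

def points_to_altitude_image(points, cell_size=1.0):
    """points: (N, 3) array of x, y, z; returns a 2-D altitude raster."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    col = ((x - x.min()) / cell_size).astype(int)
    row = ((y - y.min()) / cell_size).astype(int)
    image = np.full((row.max() + 1, col.max() + 1), np.nan)
    for r, c, h in zip(row, col, z):
        # Keep the highest return per cell (empty cells stay NaN).
        if np.isnan(image[r, c]) or h > image[r, c]:
            image[r, c] = h
    return image
```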
Visual Communications and Image Processing
NASA Astrophysics Data System (ADS)
Hsing, T. Russell
1987-07-01
This special issue of Optical Engineering is concerned with visual communications and image processing. The increase in communication of visual information over the past several decades has resulted in many new image processing and visual communication systems being put into service. The growth of this field has been rapid in both commercial and military applications. The objective of this special issue is to intermix recent advances in visual communications and image processing with ideas generated from industry, universities, and users through both invited and contributed papers. The 15 papers of this issue are organized into four categories: image compression and transmission, image enhancement, image analysis and pattern recognition, and image processing in medical applications.
NASA Astrophysics Data System (ADS)
Regmi, Raju; Mohan, Kavya; Mondal, Partha Pratim
2014-09-01
Visualization of intracellular organelles is achieved using a newly developed high-throughput imaging cytometry system. This system interrogates the microfluidic channel using a sheet of light rather than the existing point-based scanning techniques. The advantages of the developed system are many, including single-shot scanning of specimens flowing through the microfluidic channel at flow rates ranging from micro- to nanoliters per minute. Moreover, this opens up in-vivo imaging of sub-cellular structures and simultaneous cell counting in an imaging cytometry system. We recorded a maximum count of 2400 cells/min at a flow rate of 700 nl/min, and simultaneous visualization of the fluorescently-labeled mitochondrial network in HeLa cells during flow. The developed imaging cytometry system may find immediate application in biotechnology, fluorescence microscopy, and nano-medicine.
Background Oriented Schlieren Using Celestial Objects
NASA Technical Reports Server (NTRS)
Haering, Edward, A., Jr. (Inventor); Hill, Michael A (Inventor)
2017-01-01
The present invention is a system and method of visualizing fluid flow around an object, such as an aircraft or wind turbine, by aligning the object between an imaging system and a celestial object having a speckled background, taking images, and comparing those images to obtain fluid flow visualization.
NASA Astrophysics Data System (ADS)
Bates, Lisa M.; Hanson, Dennis P.; Kall, Bruce A.; Meyer, Frederic B.; Robb, Richard A.
1998-06-01
An important clinical application of biomedical imaging and visualization techniques is the provision of image-guided neurosurgical planning and navigation using interactive computer display systems in the operating room. Current systems provide interactive display of orthogonal images and 3D surface or volume renderings integrated with and guided by the location of a surgical probe. However, structures in the 'line-of-sight' path which lead to the surgical target cannot be directly visualized, presenting difficulty in obtaining a full understanding of the 3D volumetric anatomic relationships necessary for effective neurosurgical navigation below the cortical surface. Complex vascular relationships and histologic boundaries like those found in arteriovenous malformations (AVMs) also contribute to the difficulty in determining optimal approaches prior to actual surgical intervention. These difficulties demonstrate the need for interactive oblique imaging methods to provide 'line-of-sight' visualization. Capabilities for 'line-of-sight' interactive oblique sectioning are present in several current neurosurgical navigation systems. However, our implementation is novel in that it utilizes a completely independent software toolkit, AVW (A Visualization Workshop), developed at the Mayo Biomedical Imaging Resource, integrated with a current neurosurgical navigation system, the COMPASS stereotactic system at Mayo Foundation. The toolkit is a comprehensive, C-callable imaging toolkit containing over 500 optimized imaging functions and structures. The powerful functionality and versatility of the AVW imaging toolkit allowed facile integration and implementation of the desired interactive oblique sectioning using a finite set of functions. The implementation of the AVW-based code resulted in higher-level functions for complete 'line-of-sight' visualization.
Imaging anatomy of the vestibular and visual systems.
Gunny, Roxana; Yousry, Tarek A
2007-02-01
This review will outline the imaging anatomy of the vestibular and visual pathways, using computed tomography and magnetic resonance imaging, with emphasis on the more recent developments in neuroimaging. Technical advances in computed tomography and magnetic resonance imaging, such as the advent of multislice computed tomography and newer magnetic resonance imaging techniques such as T2-weighted magnetic resonance cisternography, have improved the imaging of the vestibular and visual pathways, allowing better visualization of the end organs and peripheral nerves. Higher field strength magnetic resonance imaging is a promising tool, which has been used to evaluate and resolve fine anatomic detail in vitro, as in the labyrinth. Advanced magnetic resonance imaging techniques such as functional magnetic resonance imaging and diffusion tractography have been used to identify cortical areas of activation and associated white matter pathways, and show potential for the future identification of complex neuronal relays involved in integrating these pathways. The assessment of the various components of the vestibular and the visual systems has improved with more detailed research on the imaging anatomy of these systems, the advent of high field magnetic resonance scanners and multislice computerized tomography, and the wider use of specific techniques such as tractography which displays white matter tracts not directly accessible until now.
Weighted feature selection criteria for visual servoing of a telerobot
NASA Technical Reports Server (NTRS)
Feddema, John T.; Lee, C. S. G.; Mitchell, O. R.
1989-01-01
Because of the continually changing environment of a space station, visual feedback is a vital element of a telerobotic system. A real time visual servoing system would allow a telerobot to track and manipulate randomly moving objects. Methodologies for the automatic selection of image features to be used to visually control the relative position between an eye-in-hand telerobot and a known object are devised. A weighted criteria function with both image recognition and control components is used to select the combination of image features which provides the best control. Simulation and experimental results of a PUMA robot arm visually tracking a randomly moving carburetor gasket with a visual update time of 70 milliseconds are discussed.
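A sketch of the weighted criteria idea in this abstract: each candidate combination of image features is scored by weighted recognition and control terms, and the best combination is selected. The scoring functions and weights here are placeholders, not the authors' actual criteria.

```python
import itertools

def select_features(features, recognition_score, control_score,
                    w_recognition=0.5, w_control=0.5, k=3):
    """Return the k-feature combination maximizing the weighted criteria.

    recognition_score / control_score: callables scoring a feature tuple;
    both are assumed to return comparable (e.g. normalized) values.
    """
    best, best_score = None, float("-inf")
    for combo in itertools.combinations(features, k):
        score = (w_recognition * recognition_score(combo)
                 + w_control * control_score(combo))
        if score > best_score:
            best, best_score = combo, score
    return best
```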
Flight simulator with spaced visuals
NASA Technical Reports Server (NTRS)
Gilson, Richard D. (Inventor); Thurston, Marlin O. (Inventor); Olson, Karl W. (Inventor); Ventola, Ronald W. (Inventor)
1980-01-01
A flight simulator arrangement wherein a conventional, movable-base flight trainer is combined with a visual cue display surface spaced a predetermined distance from an eye position within the trainer. Thus, three degrees of motive freedom (roll, pitch, and crab) are provided for a visual, proprioceptive, and vestibular cue system by the trainer, while the remaining geometric visual cue image alterations are developed by a video system. A geometric approach to computing the runway image eliminates the need to electronically compute trigonometric functions, while utilization of a line generator and a designated vanishing point at the video system raster permits facile development of the images of the longitudinal edges of the runway.
Cooper, Emily A.; Norcia, Anthony M.
2015-01-01
The nervous system has evolved in an environment with structure and predictability. One of the ubiquitous principles of sensory systems is the creation of circuits that capitalize on this predictability. Previous work has identified predictable non-uniformities in the distributions of basic visual features in natural images that are relevant to the encoding tasks of the visual system. Here, we report that the well-established statistical distributions of visual features -- such as visual contrast, spatial scale, and depth -- differ between bright and dark image components. Following this analysis, we go on to trace how these differences in natural images translate into different patterns of cortical input that arise from the separate bright (ON) and dark (OFF) pathways originating in the retina. We use models of these early visual pathways to transform natural images into statistical patterns of cortical input. The models include the receptive fields and non-linear response properties of the magnocellular (M) and parvocellular (P) pathways, with their ON and OFF pathway divisions. The results indicate that there are regularities in visual cortical input beyond those that have previously been appreciated from the direct analysis of natural images. In particular, several dark/bright asymmetries provide a potential account for recently discovered asymmetries in how the brain processes visual features, such as violations of classic energy-type models. On the basis of our analysis, we expect that the dark/bright dichotomy in natural images plays a key role in the generation of both cortical and perceptual asymmetries. PMID:26020624
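A minimal sketch of the style of bright/dark analysis reported here, splitting an image into above-mean (ON-like) and below-mean (OFF-like) deviations and comparing a simple statistic between them; the choice of statistic (RMS contrast) is ours, for illustration only.

```python
import numpy as np

def on_off_contrast(image):
    """RMS contrast of bright (ON) vs. dark (OFF) deviations from the mean."""
    lum = np.asarray(image, dtype=np.float64)
    mean = lum.mean()                  # assumes a positive-luminance image
    on = lum[lum > mean] - mean        # bright component
    off = mean - lum[lum < mean]       # dark component
    rms = lambda v: np.sqrt(np.mean((v / mean) ** 2)) if v.size else 0.0
    return rms(on), rms(off)
```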
A dual-channel fusion system of visual and infrared images based on color transfer
NASA Astrophysics Data System (ADS)
Pei, Chuang; Jiang, Xiao-yu; Zhang, Peng-wei; Liang, Hao-cong
2013-09-01
The increasing availability and deployment of imaging sensors operating in multiple spectra has led to a large research effort in image fusion, resulting in a plethora of pixel-level image fusion algorithms. However, most of these algorithms produce gray or false-color fusion results which are not adapted to human vision. Transferring color from a daytime reference image to obtain a natural-color fusion result is an effective way to solve this problem, but the computational cost of color transfer is expensive and cannot meet the requirements of real-time image processing. We developed a dual-channel infrared and visual image fusion system based on the TMS320DM642 digital signal processing chip. The system is divided into an image acquisition and registration unit, an image fusion processing unit, a system control unit, and an image fusion output unit. The registration of the dual-channel images is realized by combining hardware and software methods in the system. A false-color image fusion algorithm in RGB color space is used to obtain an R-G fused image; the system then chooses a reference image and transfers its color to the fusion result. A color lookup table based on statistical properties of images is proposed to solve the computational complexity problem in color transfer. The mapping calculation between the standard lookup table and the improved color lookup table is simple and is performed only once for a fixed scene. Real-time fusion and natural colorization of infrared and visual images are realized by this system. The experimental results show that the color-transferred images have a natural color perception to human eyes and can highlight targets effectively with clear background details. Human observers with this system are able to interpret the image better and faster, thereby improving situational awareness and reducing target detection time.
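A sketch of a statistics-based color lookup table of the kind this abstract proposes: per-channel mean/standard-deviation matching to a reference image, precomputed once per fixed scene as a 256-entry LUT. Working per RGB channel is a simplification; the paper's color space and statistics may differ.

```python
import numpy as np

def channel_transfer_lut(source, reference):
    """256-entry LUT mapping a uint8 channel's mean/std onto the reference's."""
    s_mean, s_std = source.mean(), source.std()
    r_mean, r_std = reference.mean(), reference.std()
    levels = np.arange(256, dtype=np.float64)
    lut = (levels - s_mean) * (r_std / max(s_std, 1e-6)) + r_mean
    return np.clip(lut, 0, 255).astype(np.uint8)

def color_transfer(fused_rgb, reference_rgb):
    """Apply per-channel LUTs (uint8 images); LUTs are built once per scene."""
    out = np.empty_like(fused_rgb)
    for ch in range(3):
        lut = channel_transfer_lut(fused_rgb[..., ch], reference_rgb[..., ch])
        out[..., ch] = lut[fused_rgb[..., ch]]   # table lookup per pixel
    return out
```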
ARIES: Enabling Visual Exploration and Organization of Art Image Collections.
Crissaff, Lhaylla; Wood Ruby, Louisa; Deutch, Samantha; DuBois, R Luke; Fekete, Jean-Daniel; Freire, Juliana; Silva, Claudio
2018-01-01
Art historians have traditionally used physical light boxes to prepare exhibits or curate collections. On a light box, they can place slides or printed images, move the images around at will, group them as desired, and visually compare them. The transition to digital images has rendered this workflow obsolete. Now, art historians lack well-designed, unified interactive software tools that effectively support the operations they perform with physical light boxes. To address this problem, we designed ARIES (ARt Image Exploration Space), an interactive image manipulation system that enables the exploration and organization of fine digital art. The system allows images to be compared in multiple ways, offering dynamic overlays analogous to a physical light box, and supporting advanced image comparisons and feature-matching functions, available through computational image processing. We demonstrate the effectiveness of our system in supporting art historians' tasks through real use cases.
Evaluation of visual acuity with Gen 3 night vision goggles
NASA Technical Reports Server (NTRS)
Bradley, Arthur; Kaiser, Mary K.
1994-01-01
Using laboratory simulations, visual performance was measured at luminance and night vision imaging system (NVIS) radiance levels typically encountered in the natural nocturnal environment. Comparisons were made between visual performance with unaided vision and that observed with subjects using image intensification. An Amplified Night Vision Imaging System (ANVIS6) binocular image intensifier was used. Light levels available in the experiments (using video display technology and filters) were matched to those of reflecting objects illuminated by representative night-sky conditions (e.g., full moon, starlight). Results show that as expected, the precipitous decline in foveal acuity experienced with decreasing mesopic luminance levels is effectively shifted to much lower light levels by use of an image intensification system. The benefits of intensification are most pronounced foveally, but still observable at 20 deg eccentricity. Binocularity provides a small improvement in visual acuity under both intensified and unintensified conditions.
BIM-Sim: Interactive Simulation of Broadband Imaging Using Mie Theory
NASA Astrophysics Data System (ADS)
Berisha, Sebastian; van Dijk, Thomas; Bhargava, Rohit; Carney, P. Scott; Mayerich, David
2017-02-01
Understanding the structure of a scattered electromagnetic (EM) field is critical to improving the imaging process. Mechanisms such as diffraction, scattering, and interference affect an image, limiting the resolution and potentially introducing artifacts. Simulation and visualization of scattered fields thus plays an important role in imaging science. However, the calculation of scattered fields is extremely time-consuming on desktop systems and computationally challenging on task-parallel systems such as supercomputers and cluster systems. In addition, EM fields are high-dimensional, making them difficult to visualize. In this paper, we present a framework for interactively computing and visualizing EM fields scattered by micro and nano-particles. Our software uses graphics hardware for evaluating the field both inside and outside of these particles. We then use Monte-Carlo sampling to reconstruct and visualize the three-dimensional structure of the field, spectral profiles at individual points, the structure of the field at the surface of the particle, and the resulting image produced by an optical system.
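For context, the core quantities such a simulator evaluates are the Mie expansion coefficients for a sphere of size parameter x and relative refractive index m; this is the standard Bohren & Huffman form (with Riccati-Bessel functions ψ_n and ξ_n), given here as background rather than as the paper's own notation:

```latex
a_n = \frac{m\,\psi_n(mx)\,\psi_n'(x) - \psi_n(x)\,\psi_n'(mx)}
           {m\,\psi_n(mx)\,\xi_n'(x) - \xi_n(x)\,\psi_n'(mx)},
\qquad
b_n = \frac{\psi_n(mx)\,\psi_n'(x) - m\,\psi_n(x)\,\psi_n'(mx)}
           {\psi_n(mx)\,\xi_n'(x) - m\,\xi_n(x)\,\psi_n'(mx)}
```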
Image Fusion Algorithms Using Human Visual System in Transform Domain
NASA Astrophysics Data System (ADS)
Vadhi, Radhika; Swamy Kilari, Veera; Samayamantula, Srinivas Kumar
2017-08-01
The endeavor of digital image fusion is to combine the important visual parts from various sources to enhance the visual quality of the image. The fused image has higher visual quality than any source image. In this paper, Human Visual System (HVS) weights are used in the transform domain to select appropriate information from various source images and then to attain a fused image. In this process, mainly two steps are involved. First, the DWT is applied to the registered source images. Then, qualitative sub-bands are identified using HVS weights. Hence, qualitative sub-bands are selected from different sources to form a high-quality HVS-based fused image. The quality of the HVS-based fused image is evaluated with general fusion metrics. The results show its superiority among state-of-the-art multi-resolution transforms (MRT) such as the Discrete Wavelet Transform (DWT), Stationary Wavelet Transform (SWT), Contourlet Transform (CT), and Non-Subsampled Contourlet Transform (NSCT) using the maximum-selection fusion rule.
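A minimal sketch of single-level DWT fusion with the maximum-selection rule used as a baseline above, via PyWavelets; the HVS weighting step of the proposed method is omitted here, and detail coefficients are selected by absolute magnitude. Inputs are assumed to be equally sized, registered grayscale arrays.

```python
import numpy as np
import pywt

def dwt_max_fusion(img_a, img_b, wavelet="db1"):
    """Fuse two registered grayscale images in the single-level DWT domain."""
    ca_a, (ch_a, cv_a, cd_a) = pywt.dwt2(img_a, wavelet)
    ca_b, (ch_b, cv_b, cd_b) = pywt.dwt2(img_b, wavelet)
    pick = lambda p, q: np.where(np.abs(p) >= np.abs(q), p, q)
    fused = (0.5 * (ca_a + ca_b),            # average the approximation band
             (pick(ch_a, ch_b), pick(cv_a, cv_b), pick(cd_a, cd_b)))
    return pywt.idwt2(fused, wavelet)
```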
NASA Astrophysics Data System (ADS)
Irisawa, Kaku; Murakoshi, Dai; Hashimoto, Atsushi; Yamamoto, Katsuya; Hayakawa, Toshiro
2017-03-01
Visualization of the tips of medical devices such as needles and catheters under ultrasound imaging has been a continuing topic since the early 1980s. In this study, a needle tip visualization system utilizing photoacoustic effects is proposed. In order to visualize the needle tip, an optical fiber was inserted into a needle. The optical fiber tip is placed on the needle bevel and affixed with black glue. The pulsed laser light from a laser diode is transferred through the optical fiber and converted to ultrasound due to laser light absorption of the black glue and the subsequent photoacoustic effect. The ultrasound is detected by a transducer array and reconstructed into photoacoustic images in the ultrasound unit. The photoacoustic image is displayed superposed on an ultrasound B-mode image. As a system evaluation, the needle was punctured into bovine meat and the needle tip observed with commercialized conventional linear or convex transducers. The needle tip was visualized clearly at 7 and 12 cm depths with linear and convex probes, respectively, even with a steep needle puncture angle of around 90 degrees. Laser and acoustic outputs, and the thermal rise at the needle tip, were measured and were well below the limits of the safety standards. Compared with existing needle tip visualization technologies, the photoacoustic needle tip visualization system has potentially distinguishing features for clinical procedures involving needle puncture and injection.
[Image processing system of visual prostheses based on digital signal processor DM642].
Xie, Chengcheng; Lu, Yanyu; Gu, Yun; Wang, Jing; Chai, Xinyu
2011-09-01
This paper employed a DSP platform to create the real-time and portable image processing system, and introduced a series of commonly used algorithms for visual prostheses. The results of performance evaluation revealed that this platform could afford image processing algorithms to be executed in real time.
NASA Astrophysics Data System (ADS)
Sanghavi, Foram; Agaian, Sos
2017-05-01
The goals of this paper are to (a) test a nuclei-based computer-aided cancer detection system using a human-visual-system-based approach on histopathology images and (b) compare the results of the proposed system with local binary pattern and modified Fibonacci-p pattern systems. The system performance is evaluated using different parameters such as accuracy, specificity, sensitivity, positive predictive value, and negative predictive value on 251 prostate histopathology images. An accuracy of 96.69% was observed for cancer detection using the proposed human-visual-based system, compared to 87.42% and 94.70% observed for local binary patterns and the modified Fibonacci-p patterns, respectively.
Cognitive issues in searching images with visual queries
NASA Astrophysics Data System (ADS)
Yu, ByungGu; Evens, Martha W.
1999-01-01
In this paper, we propose our image indexing technique and visual query processing technique. Our mental images are different from the actual retinal images and many things, such as personal interests, personal experiences, perceptual context, the characteristics of spatial objects, and so on, affect our spatial perception. These private differences are propagated into our mental images and so our visual queries become different from the real images that we want to find. This is a hard problem and few people have tried to work on it. In this paper, we survey the human mental imagery system, the human spatial perception, and discuss several kinds of visual queries. Also, we propose our own approach to visual query interpretation and processing.
A knowledge based system for scientific data visualization
NASA Technical Reports Server (NTRS)
Senay, Hikmet; Ignatius, Eve
1992-01-01
A knowledge-based system, called visualization tool assistant (VISTA), which was developed to assist scientists in the design of scientific data visualization techniques, is described. The system derives its knowledge from several sources which provide information about data characteristics, visualization primitives, and effective visual perception. The design methodology employed by the system is based on a sequence of transformations which decomposes a data set into a set of data partitions, maps this set of partitions to visualization primitives, and combines these primitives into a composite visualization technique design. Although the primary function of the system is to generate an effective visualization technique design for a given data set by using principles of visual perception the system also allows users to interactively modify the design, and renders the resulting image using a variety of rendering algorithms. The current version of the system primarily supports visualization techniques having applicability in earth and space sciences, although it may easily be extended to include other techniques useful in other disciplines such as computational fluid dynamics, finite-element analysis and medical imaging.
Extraction of skin-friction fields from surface flow visualizations as an inverse problem
NASA Astrophysics Data System (ADS)
Liu, Tianshu
2013-12-01
Extraction of high-resolution skin-friction fields from surface flow visualization images as an inverse problem is discussed from a unified perspective. The surface flow visualizations used in this study are luminescent oil-film visualization and heat-transfer and mass-transfer visualizations with temperature- and pressure-sensitive paints (TSPs and PSPs). The theoretical foundations of these global methods are the thin-oil-film equation and the limiting forms of the energy- and mass-transport equations at a wall, which are projected onto the image plane to provide the relationships between a skin-friction field and the relevant quantities measured by using an imaging system. Since these equations can be re-cast in the same mathematical form as the optical flow equation, they can be solved by using the variational method in the image plane to extract relative or normalized skin-friction fields from images. Furthermore, in terms of instrumentation, essentially the same imaging system for measurements of luminescence can be used in these surface flow visualizations. Examples are given to demonstrate the applications of these methods in global skin-friction diagnostics of complex flows.
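In our notation, the common mathematical form referred to above is a transport equation on the image plane linking the measured quantity g (oil-film thickness, or a TSP/PSP-derived flux) to the skin-friction vector τ, solved with Horn-Schunck-type variational regularization; this is a schematic rendering of that structure under our assumptions, not the paper's exact equations:

```latex
\frac{\partial g}{\partial t} + \nabla \cdot (g\,\boldsymbol{\tau}) = f ,
\qquad
\min_{\boldsymbol{\tau}} \int_{\Omega}
  \Bigl( \frac{\partial g}{\partial t} + \nabla \cdot (g\,\boldsymbol{\tau}) - f \Bigr)^{2}
  + \alpha \bigl( \lvert \nabla \tau_{1} \rvert^{2}
                + \lvert \nabla \tau_{2} \rvert^{2} \bigr)\, d\Omega
```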
OIPAV: an integrated software system for ophthalmic image processing, analysis and visualization
NASA Astrophysics Data System (ADS)
Zhang, Lichun; Xiang, Dehui; Jin, Chao; Shi, Fei; Yu, Kai; Chen, Xinjian
2018-03-01
OIPAV (Ophthalmic Images Processing, Analysis and Visualization) is a cross-platform software system specially oriented to ophthalmic images. It provides a wide range of functionalities including data I/O, image processing, interaction, ophthalmic disease detection, data analysis, and visualization to help researchers and clinicians deal with various ophthalmic images such as optical coherence tomography (OCT) images and color fundus photographs. It enables users to easily access ophthalmic image data produced by different imaging devices, facilitates workflows for processing ophthalmic images, and improves quantitative evaluations. In this paper, we present the system design and functional modules of the platform and demonstrate various applications. With satisfying function scalability and expandability, we believe that the software can be widely applied in the ophthalmology field.
NASA Astrophysics Data System (ADS)
Kimpe, Tom; Rostang, Johan; Avanaki, Ali; Espig, Kathryn; Xthona, Albert; Cocuranu, Ioan; Parwani, Anil V.; Pantanowitz, Liron
2014-03-01
Digital pathology systems typically consist of a slide scanner, processing software, visualization software, and finally a workstation with a display for visualization of the digital slide images. This paper studies whether digital pathology images can look different when presented on different display systems, and whether these visual differences can result in different perceived contrast of clinically relevant features. By analyzing a set of four digital pathology images of different subspecialties on three different display systems, it was concluded that pathology images do look different when visualized on different display systems. The importance of these visual differences is elucidated when they are located in areas of the digital slide that contain clinically relevant features. Based on a calculation of dE2000 differences between background and clinically relevant features, it was clear that perceived contrast of clinically relevant features is influenced by the choice of display system. Furthermore, the specific calibration target chosen for the display system has an important effect on the perceived contrast of clinically relevant features. Preliminary results suggest that calibrating to the DICOM GSDF performed slightly worse than sRGB, while a new experimental calibration target, CSDF, performed better than both DICOM GSDF and sRGB. This result is promising, as it suggests that further research could lead to a better-defined, optimized calibration target for digital pathology images, with a positive effect on clinical performance.
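The dE2000 (CIEDE2000) contrast measure used in this study can be computed with scikit-image; a small sketch, assuming RGB patch values in [0, 1] and the library's default Lab conversion:

```python
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

def feature_background_de2000(feature_rgb, background_rgb):
    """Perceptual color difference between two RGB colors (floats in [0, 1])."""
    lab_f = rgb2lab(np.asarray(feature_rgb, float).reshape(1, 1, 3))
    lab_b = rgb2lab(np.asarray(background_rgb, float).reshape(1, 1, 3))
    return float(deltaE_ciede2000(lab_f, lab_b))
```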
The medium and the message: a revisionist view of image quality
NASA Astrophysics Data System (ADS)
Ferwerda, James A.
2010-02-01
In his book "Understanding Media" social theorist Marshall McLuhan declared: "The medium is the message." The thesis of this paper is that with respect to image quality, imaging system developers have taken McLuhan's dictum too much to heart. Efforts focus on improving the technical specifications of the media (e.g. dynamic range, color gamut, resolution, temporal response) with little regard for the visual messages the media will be used to communicate. We present a series of psychophysical studies that investigate the visual system's ability to "see through" the limitations of imaging media to perceive the messages (object and scene properties) the images represent. The purpose of these studies is to understand the relationships between the signal characteristics of an image and the fidelity of the visual information the image conveys. The results of these studies provide a new perspective on image quality that shows that images that may be very different in "quality", can be visually equivalent as realistic representations of objects and scenes.
Visual Motion Perception and Visual Attentive Processes.
1988-04-01
Visual Motion Perception and Visual Attentive Processes. George Sperling, New York University. Grant AFOSR 85-0364. Related publications: Sperling, HIPS: A Unix-based image processing system. Computer Vision, Graphics, and Image Processing, 1984, 25, 331-347 (HIPS is the Human Information Processing Laboratory's Image Processing System); van Santen, Jan P. H., and George Sperling, Elaborated Reichardt detectors. Journal of the Optical Society of America A, 1985.
Fixational Eye Movements in the Earliest Stage of Metazoan Evolution
Bielecki, Jan; Høeg, Jens T.; Garm, Anders
2013-01-01
All known photoreceptor cells adapt to constant light stimuli, fading the retinal image when exposed to an immobile visual scene. Counter strategies are therefore necessary to prevent blindness, and in mammals this is accomplished by fixational eye movements. Cubomedusae occupy a key position for understanding the evolution of complex visual systems and their eyes are assumedly subject to the same adaptive problems as the vertebrate eye, but lack motor control of their visual system. The morphology of the visual system of cubomedusae ensures a constant orientation of the eyes and a clear division of the visual field, but thereby also a constant retinal image when exposed to stationary visual scenes. Here we show that bell contractions used for swimming in the medusae refresh the retinal image in the upper lens eye of Tripedalia cystophora. This strongly suggests that strategies comparable to fixational eye movements have evolved at the earliest metazoan stage to compensate for the intrinsic property of the photoreceptors. Since the timing and amplitude of the rhopalial movements concur with the spatial and temporal resolution of the eye it circumvents the need for post processing in the central nervous system to remove image blur. PMID:23776673
Applying a visual language for image processing as a graphical teaching tool in medical imaging
NASA Astrophysics Data System (ADS)
Birchman, James J.; Tanimoto, Steven L.; Rowberg, Alan H.; Choi, Hyung-Sik; Kim, Yongmin
1992-05-01
Typical user interaction in image processing is through command line entries, pull-down menus, or text menu selections from a list, and as such is not generally graphical in nature. Although applying these interactive methods to construct more sophisticated algorithms from a series of simple image processing steps may be clear to engineers and programmers, it may not be clear to clinicians. A solution to this problem is to implement a visual programming language that uses visual representations to express image processing algorithms. Visual representations promote a more natural and rapid understanding of image processing algorithms by providing more visual insight into what the algorithms do than the interactive methods mentioned above can provide. Individuals accustomed to dealing with images will be more likely to understand an algorithm that is represented visually. This is especially true of referring physicians, such as surgeons in an intensive care unit. With the increasing acceptance of picture archiving and communication system (PACS) workstations and the trend toward increasing clinical use of image processing, referring physicians will need to learn more sophisticated concepts than simply image access and display. If the procedures that they perform commonly, such as window width and window level adjustment and image enhancement using unsharp masking, are depicted visually in an interactive environment, it will be easier for them to learn and apply these concepts. The software described in this paper is a visual programming language for image processing which has been implemented on the NeXT computer using NeXTstep user interface development tools and other tools in an object-oriented environment. The concept is based upon the description of a visual language titled 'Visualization of Vision Algorithms' (VIVA). Iconic representations of simple image processing steps are placed on a workbench screen and connected together into a dataflow path by the user. As the user creates and edits a dataflow path, more complex algorithms can be built on the screen. Once the algorithm is built, it can be executed, its results can be reviewed, and operator parameters can be interactively adjusted until an optimized output is produced. The optimized algorithm can then be saved and added to the system as a new operator. This system has been evaluated as a graphical teaching tool for window width and window level adjustment, image enhancement using unsharp masking, and other techniques.
FAST: framework for heterogeneous medical image computing and visualization.
Smistad, Erik; Bozorgi, Mohammadmehdi; Lindseth, Frank
2015-11-01
Computer systems are becoming increasingly heterogeneous in the sense that they consist of different processors, such as multi-core CPUs and graphic processing units. As the amount of medical image data increases, it is crucial to exploit the computational power of these processors. However, this is currently difficult due to several factors, such as driver errors, processor differences, and the need for low-level memory handling. This paper presents a novel FrAmework for heterogeneouS medical image compuTing and visualization (FAST). The framework aims to make it easier to simultaneously process and visualize medical images efficiently on heterogeneous systems. FAST uses common image processing programming paradigms and hides the details of memory handling from the user, while enabling the use of all processors and cores on a system. The framework is open-source, cross-platform and available online. Code examples and performance measurements are presented to show the simplicity and efficiency of FAST. The results are compared to the insight toolkit (ITK) and the visualization toolkit (VTK) and show that the presented framework is faster with up to 20 times speedup on several common medical imaging algorithms. FAST enables efficient medical image computing and visualization on heterogeneous systems. Code examples and performance evaluations have demonstrated that the toolkit is both easy to use and performs better than existing frameworks, such as ITK and VTK.
Photoacoustic imaging of lymphatic pumping
NASA Astrophysics Data System (ADS)
Forbrich, Alex; Heinmiller, Andrew; Zemp, Roger J.
2017-10-01
The lymphatic system is responsible for fluid homeostasis and immune cell trafficking and has been implicated in several diseases, including obesity, diabetes, and cancer metastasis. Despite its importance, the lack of suitable in vivo imaging techniques has hampered our understanding of the lymphatic system. This is, in part, due to the limited contrast of lymphatic fluids and structures. Photoacoustic imaging, in combination with optically absorbing dyes or nanoparticles, has great potential for noninvasively visualizing the lymphatic vessels deep in tissues. Multispectral photoacoustic imaging is capable of separating the components; however, the slow wavelength switching speed of most laser systems is inadequate for imaging lymphatic pumping without motion artifacts being introduced into the processed images. We investigate two approaches for visualizing lymphatic processes in vivo. First, single-wavelength differential photoacoustic imaging is used to visualize lymphatic pumping in the hindlimb of a mouse in real time. Second, a fast-switching multiwavelength photoacoustic imaging system was used to assess the propulsion profile of dyes through the lymphatics in real time. These approaches may have profound impacts in noninvasively characterizing and investigating the lymphatic system.
2013-09-01
...existing MR scanning systems, providing the ability to visualize structures that are impossible with current methods. Using techniques to concurrently stain... and a unique system for analysis of affected brain regions, coupled with other imaging techniques and molecular measurements, holds significant...
Visual Exploration of Genetic Association with Voxel-based Imaging Phenotypes in an MCI/AD Study
Kim, Sungeun; Shen, Li; Saykin, Andrew J.; West, John D.
2010-01-01
Neuroimaging genomics is a new transdisciplinary research field, which aims to examine genetic effects on the brain via integrated analyses of high-throughput neuroimaging and genomic data. We report our recent work on (1) developing an imaging genomic browsing system that allows for whole-genome and entire-brain analyses based on visual exploration and (2) applying the system to the imaging genomic analysis of an existing MCI/AD cohort. Voxel-based morphometry is used to define imaging phenotypes. ANCOVA is employed to evaluate the effect of the interaction of genotypes and diagnosis in relation to imaging phenotypes while controlling for relevant covariates. Encouraging experimental results suggest that the proposed system has substantial potential for enabling discovery of imaging genomic associations through visual evaluation and for localizing candidate imaging regions and genomic regions for refined statistical modeling. PMID:19963597
Visible digital watermarking system using perceptual models
NASA Astrophysics Data System (ADS)
Cheng, Qiang; Huang, Thomas S.
2001-03-01
This paper presents a visible watermarking system using perceptual models. A watermark image is overlaid translucently onto a primary image, for the purposes of immediate claim of copyright, instantaneous recognition of owner or creator, or deterrence to piracy of digital images or video. The watermark is modulated by exploiting combined DCT-domain and DWT-domain perceptual models, so that the watermark is visually uniform. The resulting watermarked image is visually pleasing and unobtrusive. The location, size, and strength of the watermark vary randomly with the underlying image. The randomization makes automatic removal of the watermark difficult even though the algorithm is publicly known, as long as the key to the random sequence generator remains secret. The experiments demonstrate that the watermarked images have a pleasant visual effect and strong robustness. The watermarking system can be used in copyright notification and protection.
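The paper's combined DCT/DWT perceptual models are not reproduced here; a much-simplified sketch of the underlying idea (modulating the watermark's local strength by the underlying luminance so the overlay looks visually uniform) might be:

```python
import numpy as np

def overlay_visible_watermark(image, mark, base_alpha=0.15):
    """Translucent visible watermark whose local strength follows a crude
    luminance-based perceptual weight (brighter regions tolerate a stronger mark).
    A simplification of the DCT/DWT-domain models used in the paper; `image`
    and `mark` are assumed to be uint8 arrays of the same shape."""
    img = image.astype(float) / 255.0
    wm = mark.astype(float) / 255.0
    weight = base_alpha * (0.5 + img)            # simple luminance-masking proxy
    out = (1.0 - weight) * img + weight * wm
    return (np.clip(out, 0, 1) * 255).astype(np.uint8)
```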
Fox, Christopher J; Barton, Jason J S
2007-01-05
The neural representation of facial expression within the human visual system is not well defined. Using an adaptation paradigm, we examined aftereffects on expression perception produced by various stimuli. Adapting to a face, which was used to create morphs between two expressions, substantially biased expression perception within the morphed faces away from the adapting expression. This adaptation was not based on low-level image properties, as a different image of the same person displaying that expression produced equally robust aftereffects. Smaller but significant aftereffects were generated by images of different individuals, irrespective of gender. Non-face visual, auditory, or verbal representations of emotion did not generate significant aftereffects. These results suggest that adaptation affects at least two neural representations of expression: one specific to the individual (not the image), and one that represents expression across different facial identities. The identity-independent aftereffect suggests the existence of a 'visual semantic' for facial expression in the human visual system.
NASA Astrophysics Data System (ADS)
Balbin, Jessie R.; Dela Cruz, Jennifer C.; Camba, Clarisse O.; Gozo, Angelo D.; Jimenez, Sheena Mariz B.; Tribiana, Aivje C.
2017-06-01
Acne vulgaris, commonly called acne, is a skin problem that occurs when oil and dead skin cells clog a person's pores; it develops when hormonal changes make the skin oilier. The problem is that people do not have a reliable assessment of their skin's sensitivity in terms of the facial fluid development that tends to produce acne vulgaris, which leads to further complications. This research aims to assess acne vulgaris using a luminescent visualization system through optical imaging and the integration of image processing algorithms. Specifically, it aims to design a prototype for facial fluid analysis using a luminescent visualization system through optical imaging and the integration of a fluorescent imaging system, and to classify the different facial fluids present in each person. Throughout the process, some structures and layers of the face are excluded, leaving only a mapped facial structure with acne regions. Facial fluid regions are distinguished from the acne region as they are characterized differently.
The implementation of thermal image visualization by HDL based on pseudo-color
NASA Astrophysics Data System (ADS)
Zhu, Yong; Zhang, JiangLing
2004-11-01
The pseudo-color method, which maps sampled data to intuitively perceived colors, is a powerful visualization technique. This paper describes a complete pseudo-color visualization system for thermal images, covering the basic principle, the model, and an HDL (Hardware Description Language) implementation. Thermal images, whose signal is modulated as video, reflect the temperature distribution of the measured object, so the data are voluminous and must be handled in real time. The solution is as follows. First, a sound architecture must be adopted, namely the combination of global pseudo-color visualization with accurate measurement of selected local areas. Then, HDL pseudo-color algorithms implemented in a SoC (System on Chip) realize the system and ensure real-time operation. Finally, the key HDL algorithms for direct gray-level connection coding, proportional gray-level map coding, and enhanced gray-level map coding are presented, together with their simulation results. The HDL-based pseudo-color visualization of thermal images described here has found effective application in electric power equipment testing and medical diagnosis.
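As a software analogue of one of the coding schemes named above, a sketch of proportional gray-level map coding via an RGB lookup table (the color ramp is illustrative, not the paper's; an HDL implementation would realize the same LUT in ROM):

```python
import numpy as np

def proportional_pseudocolor(gray, lut_size=256):
    """Map gray values proportionally through a cold-to-hot RGB lookup table,
    as a ROM-based LUT would in hardware."""
    x = np.linspace(0.0, 1.0, lut_size)
    lut = np.stack([np.clip(1.5 * x - 0.5, 0, 1),      # red ramps up late
                    1.0 - np.abs(2.0 * x - 1.0),       # green peaks mid-scale
                    np.clip(1.0 - 1.5 * x, 0, 1)],     # blue ramps down early
                   axis=1)
    denom = float(gray.max()) or 1.0                   # guard against blank frames
    idx = (gray.astype(float) / denom * (lut_size - 1)).astype(int)
    return (lut[idx] * 255).astype(np.uint8)           # H x W x 3 false-color image
```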
Data Visualization and Animation Lab (DVAL) overview
NASA Technical Reports Server (NTRS)
Stacy, Kathy; Vonofenheim, Bill
1994-01-01
The general capabilities of the Langley Research Center Data Visualization and Animation Laboratory are described. These capabilities include digital image processing, 3-D interactive computer graphics, data visualization and analysis, video-rate acquisition and processing of video images, photo-realistic modeling and animation, video report generation, and color hardcopies. A specialized video image processing system is also discussed.
Image pattern recognition supporting interactive analysis and graphical visualization
NASA Technical Reports Server (NTRS)
Coggins, James M.
1992-01-01
Image Pattern Recognition attempts to infer properties of the world from image data. Such capabilities are crucial for making measurements from satellite or telescope images related to Earth and space science problems. Such measurements can be the required product itself, or the measurements can be used as input to a computer graphics system for visualization purposes. At present, the field of image pattern recognition lacks a unified scientific structure for developing and evaluating image pattern recognition applications. The overall goal of this project is to begin developing such a structure. This report summarizes results of a 3-year research effort in image pattern recognition addressing the following three principal aims: (1) to create a software foundation for the research and identify image pattern recognition problems in Earth and space science; (2) to develop image measurement operations based on Artificial Visual Systems; and (3) to develop multiscale image descriptions for use in interactive image analysis.
Visual Equivalence and Amodal Completion in Cuttlefish
Lin, I-Rong; Chiao, Chuan-Chin
2017-01-01
Modern cephalopods are notably the most intelligent invertebrates and this is accompanied by keen vision. Despite extensive studies investigating the visual systems of cephalopods, little is known about their visual perception and object recognition. In the present study, we investigated the visual processing of the cuttlefish Sepia pharaonis, including visual equivalence and amodal completion. Cuttlefish were trained to discriminate images of shrimp and fish using the operant conditioning paradigm. After cuttlefish reached the learning criteria, a series of discrimination tasks were conducted. In the visual equivalence experiment, several transformed versions of the training images were used, such as images reduced in size, images reduced in contrast, sketches of the images, the contours of the images, and silhouettes of the images. In the amodal completion experiment, partially occluded views of the original images were used. The results showed that cuttlefish were able to treat the size-reduced training images and sketches as visually equivalent to the originals. Cuttlefish were also capable of recognizing partially occluded versions of the training image. Furthermore, individual differences in performance suggest that some cuttlefish may be able to recognize objects when visual information is partly removed. These findings support the hypothesis that the visual perception of cuttlefish involves both visual equivalence and amodal completion. The results from this research also provide insights into the visual processing mechanisms used by cephalopods. PMID:28220075
Measuring the performance of visual to auditory information conversion.
Tan, Shern Shiou; Maul, Tomás Henrique Bode; Mennie, Neil Russell
2013-01-01
Visual-to-auditory conversion systems have been in existence for several decades. Besides being among the front runners in providing visual capabilities to blind users, the auditory cues generated by image sonification systems are still easier to learn and adapt to compared to other similar techniques. Other advantages include low cost, easy customizability, and universality. However, every system developed so far has its own set of strengths and weaknesses. In order to improve these systems further, we propose an automated and quantitative method to measure the performance of such systems. With these quantitative measurements, it is possible to gauge the relative strengths and weaknesses of different systems and rank the systems accordingly. Performance is measured by both the interpretability and the information preservation of visual-to-auditory conversions. Interpretability is measured by computing the correlation of inter-image distance (IID) and inter-sound distance (ISD), whereas information preservation is computed by applying information theory to measure the entropy of both visual and corresponding auditory signals. These measurements provide a basis and some insights into how the systems work. With an automated interpretability measure as a standard, more image sonification systems can be developed, compared, and then improved. Even though the measure does not test systems as thoroughly as carefully designed psychological experiments, a quantitative measurement like the one proposed here can compare systems to a certain degree without incurring much cost. Underlying this research is the hope that a major breakthrough in image sonification systems will allow blind users to cost-effectively regain enough visual function to lead secure and productive lives.
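A sketch of the interpretability measure as described, correlating pairwise inter-image distances with the corresponding inter-sound distances; the feature vectors here are random stand-ins:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr

def interpretability(image_feats, sound_feats):
    """Correlate inter-image distances (IID) with inter-sound distances (ISD):
    a faithful image-to-sound mapping should preserve the distance structure."""
    iid = pdist(image_feats)        # condensed pairwise distance vectors
    isd = pdist(sound_feats)
    r, _ = pearsonr(iid, isd)
    return r

rng = np.random.default_rng(1)
imgs = rng.random((20, 64))             # stand-in image feature vectors
snds = imgs @ rng.random((64, 32))      # hypothetical structure-preserving sonification
print(interpretability(imgs, snds))
```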
NASA Astrophysics Data System (ADS)
Fink, Wolfgang; You, Cindy X.; Tarbell, Mark A.
2010-01-01
It is difficult to predict exactly what blind subjects with camera-driven visual prostheses (e.g., retinal implants) can perceive. Thus, it is prudent to offer them a wide variety of image processing filters and the capability to engage these filters repeatedly in any user-defined order to enhance their visual perception. To attain true portability, we employ a commercial off-the-shelf battery-powered general purpose Linux microprocessor platform to create the microcomputer-based artificial vision support system (μAVS2) for real-time image processing. Truly standalone, μAVS2 is smaller than a deck of playing cards, lightweight, fast, and equipped with USB, RS-232 and Ethernet interfaces. Image processing filters on μAVS2 operate in a user-defined linear sequential-loop fashion, resulting in vastly reduced memory and CPU requirements during execution. μAVS2 imports raw video frames from a USB or IP camera, performs image processing, and issues the processed data over an outbound Internet TCP/IP or RS-232 connection to the visual prosthesis system. Hence, μAVS2 affords users of current and future visual prostheses independent mobility and the capability to customize the visual perception generated. Additionally, μAVS2 can easily be reconfigured for other prosthetic systems. Testing of μAVS2 with actual retinal implant carriers is envisioned in the near future.
2018-01-01
Background Structural and functional brain images are essential imaging modalities for medical experts to study brain anatomy. These images are typically visually inspected by experts. To analyze images without any bias, they must be first converted to numeric values. Many software packages are available to process the images, but they are complex and difficult to use. The software packages are also hardware intensive. The results obtained after processing vary depending on the native operating system used and its associated software libraries; data processed in one system cannot typically be combined with data on another system. Objective The aim of this study was to fulfill the neuroimaging community’s need for a common platform to store, process, explore, and visualize their neuroimaging data and results using Neuroimaging Web Services Interface: a series of processing pipelines designed as a cyber physical system for neuroimaging and clinical data in brain research. Methods Neuroimaging Web Services Interface accepts magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and functional magnetic resonance imaging. These images are processed using existing and custom software packages. The output is then stored as image files, tabulated files, and MySQL tables. The system, made up of a series of interconnected servers, is password-protected and is securely accessible through a Web interface and allows (1) visualization of results and (2) downloading of tabulated data. Results All results were obtained using our processing servers in order to maintain data validity and consistency. The design is responsive and scalable. The processing pipeline started from a FreeSurfer reconstruction of structural magnetic resonance imaging images. The FreeSurfer and regional standardized uptake value ratio calculations were validated using Alzheimer’s Disease Neuroimaging Initiative input images, and the results were posted at the Laboratory of Neuro Imaging data archive. Notable leading researchers in the field of Alzheimer’s Disease and epilepsy have used the interface to access and process the data and visualize the results. Tabulated results with unique visualization mechanisms help guide more informed diagnosis and expert rating, providing a truly unique multimodal imaging platform that combines magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and resting state functional magnetic resonance imaging. A quality control component was reinforced through expert visual rating involving at least 2 experts. Conclusions To our knowledge, there is no validated Web-based system offering all the services that Neuroimaging Web Services Interface offers. The intent of Neuroimaging Web Services Interface is to create a tool for clinicians and researchers with a keen interest in multimodal neuroimaging. More importantly, Neuroimaging Web Services Interface significantly augments the Alzheimer’s Disease Neuroimaging Initiative data, especially since our data contain a large cohort of Hispanic normal controls and Alzheimer’s Disease patients. The obtained results could be scrutinized visually or through the tabulated forms, informing researchers on subtle changes that characterize the different stages of the disease. PMID:29699962
Low Cost Embedded Stereo System for Underwater Surveys
NASA Astrophysics Data System (ADS)
Nawaf, M. M.; Boï, J.-M.; Merad, D.; Royer, J.-P.; Drap, P.
2017-11-01
This paper provides details of both the hardware and software conception and realization of a hand-held stereo embedded system for underwater imaging. The designed system can run most image processing techniques smoothly in real time. The developed functions provide direct visual feedback on the quality of the captured images, which helps the operator take appropriate actions in terms of movement speed and lighting conditions. The proposed functionalities can be easily customized or upgraded, and new functions can be easily added thanks to the available supported libraries. Furthermore, by connecting the designed system to a more powerful computer, real-time visual odometry can run on the captured images to provide live navigation and a site coverage map. We use a visual odometry method adapted to systems with low computational resources and long autonomy. The system was tested in a real context and showed its robustness and promising further perspectives.
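The abstract does not specify the on-board quality checks; one cheap sharpness cue that such direct visual feedback could be built on is the variance of the Laplacian (a sketch, not the authors' implementation):

```python
import numpy as np
from scipy.ndimage import laplace

def sharpness_score(gray_frame):
    """Variance of the Laplacian: low values suggest motion blur or defocus,
    prompting the operator to slow down or adjust lighting."""
    return float(laplace(gray_frame.astype(float)).var())

def feedback(frame, blur_threshold=50.0):
    # The threshold is illustrative and would need tuning per camera/scene.
    return "ok" if sharpness_score(frame) > blur_threshold else "too blurry: slow down"
```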
Measuring and Predicting Tag Importance for Image Retrieval.
Li, Shangwen; Purushotham, Sanjay; Chen, Chen; Ren, Yuzhuo; Kuo, C-C Jay
2017-12-01
Textual data such as tags and sentence descriptions are combined with visual cues to reduce the semantic gap for image retrieval applications in today's Multimodal Image Retrieval (MIR) systems. However, all tags are treated as equally important in these systems, which may result in misalignment between visual and textual modalities during MIR training. This further leads to degraded retrieval performance at query time. To address this issue, we investigate the problem of tag importance prediction, where the goal is to automatically predict the tag importance and use it in image retrieval. To achieve this, we first propose a method to measure the relative importance of object and scene tags from image sentence descriptions. Using this as the ground truth, we present a tag importance prediction model that jointly exploits visual, semantic, and context cues. The Structural Support Vector Machine (SSVM) formulation is adopted to ensure efficient training of the prediction model. Then, Canonical Correlation Analysis (CCA) is employed to learn the relation between the image visual feature and tag importance to obtain robust retrieval performance. Experimental results on three real-world datasets show a significant performance improvement of the proposed MIR with Tag Importance Prediction (MIR/TIP) system over other MIR systems.
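A minimal sketch of the final CCA step relating visual features to tag importance, using scikit-learn; the feature matrices are placeholders:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
X = rng.random((200, 128))      # visual features, one row per image (placeholder)
Y = rng.random((200, 20))       # predicted tag-importance vectors (placeholder)

cca = CCA(n_components=10)
cca.fit(X, Y)
Xc, Yc = cca.transform(X, Y)    # project both modalities into the shared space
# At query time, retrieval would rank images by similarity in this shared space.
```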
Fu, Kun; Jin, Junqi; Cui, Runpeng; Sha, Fei; Zhang, Changshui
2017-12-01
Recent progress on automatic generation of image captions has shown that it is possible to describe the most salient information conveyed by images with accurate and meaningful sentences. In this paper, we propose an image captioning system that exploits the parallel structures between images and sentences. In our model, the process of generating the next word, given the previously generated ones, is aligned with the visual perception experience where the attention shifts among the visual regions; such transitions impose a thread of ordering in visual perception. This alignment characterizes the flow of latent meaning, which encodes what is semantically shared by both the visual scene and the text description. Our system also makes another novel modeling contribution by introducing scene-specific contexts that capture higher-level semantic information encoded in an image. The contexts adapt language models for word generation to specific scene types. We benchmark our system and contrast it to published results on several popular datasets, using both automatic evaluation metrics and human evaluation. We show that either region-based attention or scene-specific contexts improves over systems lacking those components. Furthermore, combining these two modeling ingredients attains state-of-the-art performance.
Signal amplification of FISH for automated detection using image cytometry.
Truong, K; Boenders, J; Maciorowski, Z; Vielh, P; Dutrillaux, B; Malfoy, B; Bourgeois, C A
1997-05-01
The purpose of this study was to improve the detection of FISH signals so that spot counting by a fully automated image cytometer would be comparable to that obtained visually under the microscope. Two systems of spot scoring, visual and automated counting, were investigated in parallel on stimulated human lymphocytes with FISH using a biotinylated centromeric probe for chromosome 3. Signal characteristics were first analyzed on images recorded with a charge-coupled device (CCD) camera. Numbers of spots per nucleus were scored visually on these recorded images versus automatically with a DISCOVERY image analyzer. Several fluorochromes, amplification systems, and pretreatments were tested. Our results for both visual and automated scoring show that the tyramide signal amplification (TSA) system gives the best amplification of signal if pepsin treatment is applied prior to FISH. Accuracy of the automated scoring, however, remained low (58% of nuclei containing two spots) compared to the visual scoring because of the high intranuclear variation between FISH spots.
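A sketch of the kind of automated spot scoring such a system performs (thresholding followed by connected-component labeling per nucleus); the parameters are illustrative:

```python
import numpy as np
from scipy import ndimage

def count_spots(nucleus_img, rel_threshold=0.5, min_area=4):
    """Count FISH spots in a nucleus image: threshold relative to the brightest
    signal, label connected components, and discard tiny noise blobs."""
    mask = nucleus_img > rel_threshold * nucleus_img.max()
    labels, n = ndimage.label(mask)
    areas = ndimage.sum(mask, labels, index=range(1, n + 1))
    return int(np.sum(np.asarray(areas) >= min_area))
```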
Floating aerial 3D display based on the freeform-mirror and the improved integral imaging system
NASA Astrophysics Data System (ADS)
Yu, Xunbo; Sang, Xinzhu; Gao, Xin; Yang, Shenwu; Liu, Boyang; Chen, Duo; Yan, Binbin; Yu, Chongxiu
2018-09-01
A floating aerial three-dimensional (3D) display based on a freeform mirror and an improved integral imaging system is demonstrated. In traditional integral imaging (II), distortion originating from lens aberration warps the elemental images and severely degrades the visual effect. To correct the distortion of the observed pixels and to improve the image quality, a directional diffuser screen (DDS) is introduced. However, the improved integral imaging system can hardly present realistic images with large off-screen depth, which limits the floating aerial visual experience. To display the 3D image in free space, an off-axis reflection system with a freeform mirror is designed. By combining the improved II and the designed freeform optical element, a floating aerial 3D image is presented.
Automated daily quality control analysis for mammography in a multi-unit imaging center.
Sundell, Veli-Matti; Mäkelä, Teemu; Meaney, Alexander; Kaasalainen, Touko; Savolainen, Sauli
2018-01-01
Background The high requirements for mammography image quality necessitate a systematic quality assurance process. Digital imaging allows automation of the image quality analysis, which can potentially improve repeatability and objectivity compared to a visual evaluation made by the users. Purpose To develop an automatic image quality analysis software for daily mammography quality control in a multi-unit imaging center. Material and Methods An automated image quality analysis software using the discrete wavelet transform and multiresolution analysis was developed for the American College of Radiology accreditation phantom. The software was validated by analyzing 60 randomly selected phantom images from six mammography systems and 20 phantom images with different dose levels from one mammography system. The results were compared to a visual analysis made by four reviewers. Additionally, long-term image quality trends of a full-field digital mammography system and a computed radiography mammography system were investigated. Results The automated software produced feature detection levels comparable to visual analysis. The agreement was good in the case of fibers, while the software detected somewhat more microcalcifications and characteristic masses. Long-term follow-up via a quality assurance web portal demonstrated the feasibility of using the software for monitoring the performance of mammography systems in a multi-unit imaging center. Conclusion Automated image quality analysis enables monitoring the performance of digital mammography systems in an efficient, centralized manner.
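The paper's wavelet algorithm is not detailed in the abstract; a toy sketch of the general approach (multiresolution decomposition, then tracking detail-band energy at the scales where phantom features live), assuming the pywt package:

```python
import numpy as np
import pywt

def detail_energy(phantom_img, wavelet="db2", level=3):
    """Discrete wavelet multiresolution analysis of a phantom image: return the
    energy of the detail coefficients at each level. A drift in fine-scale
    energy over time could flag degrading microcalcification visibility."""
    coeffs = pywt.wavedec2(phantom_img.astype(float), wavelet, level=level)
    energies = []
    for cH, cV, cD in coeffs[1:]:            # skip the approximation band
        energies.append(float(np.sum(cH**2 + cV**2 + cD**2)))
    return energies                          # coarsest-to-finest detail energies
```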
Image gathering and restoration - Information and visual quality
NASA Technical Reports Server (NTRS)
Mccormick, Judith A.; Alter-Gartenberg, Rachel; Huck, Friedrich O.
1989-01-01
A method is investigated for optimizing the end-to-end performance of image gathering and restoration for visual quality. To achieve this objective, one must inevitably confront the problems that the visual quality of restored images depends on perceptual rather than mathematical considerations and that these considerations vary with the target, the application, and the observer. The method adopted in this paper is to optimize image gathering informationally and to restore images interactively to obtain the visually preferred trade-off among fidelity, resolution, sharpness, and clarity. The results demonstrate that this method leads to significant improvements over the visual quality obtained by traditional digital processing methods. These traditional methods allow a significant loss of visual quality to occur because they treat the design of the image-gathering system and the formulation of the image-restoration algorithm as two separate tasks and fail to account for the transformations between the continuous and the discrete representations in image gathering and reconstruction.
Visual perception system and method for a humanoid robot
NASA Technical Reports Server (NTRS)
Chelian, Suhas E. (Inventor); Linn, Douglas Martin (Inventor); Wampler, II, Charles W. (Inventor); Bridgwater, Lyndon (Inventor); Wells, James W. (Inventor); Mc Kay, Neil David (Inventor)
2012-01-01
A robotic system includes a humanoid robot with robotic joints each moveable using an actuator(s), and a distributed controller for controlling the movement of each of the robotic joints. The controller includes a visual perception module (VPM) for visually identifying and tracking an object in the field of view of the robot under threshold lighting conditions. The VPM includes optical devices for collecting an image of the object, a positional extraction device, and a host machine having an algorithm for processing the image and positional information. The algorithm visually identifies and tracks the object, and automatically adapts an exposure time of the optical devices to prevent feature data loss of the image under the threshold lighting conditions. A method of identifying and tracking the object includes collecting the image, extracting positional information of the object, and automatically adapting the exposure time to thereby prevent feature data loss of the image.
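A simplified sketch of the exposure-adaptation logic the patent describes (adjusting exposure time from the image histogram so clipping does not destroy feature data); the thresholds are invented:

```python
import numpy as np

def adapt_exposure(image, exposure_ms, sat_limit=0.02, dark_limit=0.25):
    """Crude auto-exposure for an 8-bit image: if too many pixels saturate,
    shorten the exposure; if the image is mostly dark, lengthen it. Prevents
    feature data loss under threshold lighting. Limits are illustrative,
    not taken from the patent."""
    frac_saturated = np.mean(image >= 250)
    frac_dark = np.mean(image <= 10)
    if frac_saturated > sat_limit:
        return exposure_ms * 0.7
    if frac_dark > dark_limit:
        return exposure_ms * 1.4
    return exposure_ms
```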
Augmented Reality Imaging System: 3D Viewing of a Breast Cancer.
Douglas, David B; Boone, John M; Petricoin, Emanuel; Liotta, Lance; Wilson, Eugene
2016-01-01
To display images of breast cancer from a dedicated breast CT using Depth 3-Dimensional (D3D) augmented reality. A case of breast cancer imaged using contrast-enhanced breast CT (computed tomography) was viewed with the augmented reality imaging system, which uses a head display unit (HDU) and a joystick control interface. The augmented reality system demonstrated 3D viewing of the breast mass with head position tracking, stereoscopic depth perception, focal point convergence, the use of a 3D cursor, and joystick-enabled fly-through with visualization of the spiculations extending from the breast cancer. The augmented reality system provided 3D visualization of the breast cancer with depth perception and visualization of the mass's spiculations. The augmented reality system should be further researched to determine its utility in clinical practice.
Conceptual design study for an advanced cab and visual system, volume 1
NASA Technical Reports Server (NTRS)
Rue, R. J.; Cyrus, M. L.; Garnett, T. A.; Nachbor, J. W.; Seery, J. A.; Starr, R. L.
1980-01-01
A conceptual design study was conducted to define requirements for an advanced cab and visual system. The rotorcraft system integration simulator is intended for engineering studies in the area of mission-associated vehicle handling qualities. Principally, a technology survey and assessment of existing and proposed simulator visual display systems, image generation systems, modular cab designs, and simulator control station designs were performed and are discussed. State-of-the-art survey data were used to synthesize a set of preliminary visual display system concepts, of which five candidate display configurations were selected for further evaluation. Basic display concepts incorporated in these configurations included: real image projection, using either periscopes, fiber optic bundles, or scanned laser optics; and virtual imaging with helmet-mounted displays. These display concepts were integrated in the study with a simulator cab concept employing a modular base for aircraft controls, crew seating, and instrumentation (or other) displays. A simple concept to induce vibration in the various modules was developed and is described. Results of evaluations and trade-offs related to the candidate system concepts are given, along with a suggested weighting scheme for numerically comparing visual system performance characteristics.
NASA Astrophysics Data System (ADS)
Jobson, Daniel J.; Rahman, Zia-ur; Woodell, Glenn A.; Hines, Glenn D.
2006-05-01
Aerial images from the Follow-On Radar, Enhanced and Synthetic Vision Systems Integration Technology Evaluation (FORESITE) flight tests with the NASA Langley Research Center's research Boeing 757 were acquired during severe haze and haze/mixed clouds visibility conditions. These images were enhanced using the Visual Servo (VS) process that makes use of the Multiscale Retinex. The images were then quantified with visual quality metrics used internally within the VS. One of these metrics, the Visual Contrast Measure, has been computed for hundreds of FORESITE images and for major classes of imaging: terrestrial (consumer), orbital Earth observations, orbital Mars surface imaging, NOAA aerial photographs, and underwater imaging. The metric quantifies both the degree of visual impairment of the original, un-enhanced images as well as the degree of visibility improvement achieved by the enhancement process. The large aggregate data exhibits trends relating to the degree of atmospheric visibility attenuation and its impact on the limits of enhancement performance for the various image classes. Overall results support the idea that in most cases that do not involve extreme reduction in visibility, large gains in visual contrast are routinely achieved by VS processing. Additionally, for very poor visibility imaging, lesser, but still substantial, gains in visual contrast are also routinely achieved. Further, the data suggest that these visual quality metrics can be used as external standalone metrics for establishing performance parameters.
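The Multiscale Retinex at the core of the VS process has a standard published form; a compact sketch with illustrative scale choices:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_retinex(image, sigmas=(15, 80, 250)):
    """Multiscale Retinex: average of log(image) - log(Gaussian-smoothed image)
    over several scales, lightening haze and shadows while compressing
    dynamic range. Scale choices here are illustrative."""
    img = image.astype(float) + 1.0          # avoid log(0)
    msr = np.zeros_like(img)
    for s in sigmas:
        msr += np.log(img) - np.log(gaussian_filter(img, s) + 1.0)
    msr /= len(sigmas)
    # Stretch the result back to a displayable range.
    msr = (msr - msr.min()) / (msr.max() - msr.min() + 1e-9)
    return (msr * 255).astype(np.uint8)
```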
NASA Astrophysics Data System (ADS)
Hanhart, Philippe; Ebrahimi, Touradj
2014-03-01
Crosstalk and vergence-accommodation rivalry negatively impact the quality of experience (QoE) provided by stereoscopic displays. However, exploiting visual attention and adapting the 3D rendering process on the fly can reduce these drawbacks. In this paper, we propose and evaluate two different approaches that exploit visual attention to improve 3D QoE on stereoscopic displays: an offline system, which uses a saliency map to predict gaze position, and an online system, which uses a remote eye tracking system to measure real-time gaze positions. The gaze points were used in conjunction with the disparity map to extract the disparity of the object of interest. Horizontal image translation was performed to bring the fixated object onto the screen plane. The user preference between the standard 3D mode and the two proposed systems was evaluated through a subjective evaluation. Results show that exploiting visual attention significantly improves image quality and visual comfort, with a slight advantage for real-time gaze determination. Depth quality is also improved, but the difference is not significant.
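A minimal sketch of the horizontal image translation step (shifting the stereo pair so the disparity at the gaze point becomes zero, placing the fixated object on the screen plane); sign conventions vary between systems:

```python
import numpy as np

def retarget_to_screen_plane(left, right, disparity_map, gaze_xy):
    """Shift the two views in opposite directions by half the disparity at the
    gaze point, so the fixated object ends up at zero disparity (screen plane)."""
    gx, gy = gaze_xy
    d = int(round(disparity_map[gy, gx]))
    half = d // 2
    left_shifted = np.roll(left, -half, axis=1)        # simple wrap-around shift;
    right_shifted = np.roll(right, d - half, axis=1)   # real systems crop/pad edges
    return left_shifted, right_shifted
```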
Novel approach to multispectral image compression on the Internet
NASA Astrophysics Data System (ADS)
Zhu, Yanqiu; Jin, Jesse S.
2000-10-01
Still image coding techniques such as JPEG have always been applied to intra-plane images, and coding fidelity is typically used to measure the performance of intra-plane coding methods. In many imaging applications it is increasingly necessary to deal with multi-spectral images, such as color images. In this paper, a novel approach to multi-spectral image compression is proposed that uses transformations among planes for further compression of the spectral planes. Moreover, a mechanism for incorporating the human visual system into the transformation is provided to exploit psychovisual redundancy. The new technique for multi-spectral image compression, which is designed to be compatible with the JPEG standard, is demonstrated by extracting correlation among planes based on the human visual system. The scheme achieves a high degree of compactness in the data representation and strong compression.
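The paper's inter-plane transformation is not specified in the abstract; a standard example of removing inter-plane spectral redundancy ahead of JPEG-style intra-plane coding is the RGB-to-YCbCr transform:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Decorrelate color planes before intra-plane coding: Y carries most of the
    visually important energy, so Cb/Cr can be coded more coarsely
    (JPEG/JFIF-compatible coefficients)."""
    m = np.array([[ 0.299,   0.587,   0.114],
                  [-0.1687, -0.3313,  0.5   ],
                  [ 0.5,    -0.4187, -0.0813]])
    ycc = rgb.astype(float) @ m.T
    ycc[..., 1:] += 128.0        # offset chroma planes as in JFIF
    return ycc
```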
Perceptually lossless fractal image compression
NASA Astrophysics Data System (ADS)
Lin, Huawu; Venetsanopoulos, Anastasios N.
1996-02-01
According to the collage theorem, the encoding distortion for fractal image compression is directly related to the metric used in the encoding process. In this paper, we introduce a perceptually meaningful distortion measure based on the human visual system's nonlinear response to luminance and the visual masking effects. Blackwell's psychophysical raw data on contrast threshold are first interpolated as a function of background luminance and visual angle, and are then used as an error upper bound for perceptually lossless image compression. For a variety of images, experimental results show that the algorithm produces a compression ratio of 8:1 to 10:1 without introducing visual artifacts.
Demehri, S; Muhit, A; Zbijewski, W; Stayman, J W; Yorkston, J; Packard, N; Senn, R; Yang, D; Foos, D; Thawait, G K; Fayad, L M; Chhabra, A; Carrino, J A; Siewerdsen, J H
2015-06-01
To assess visualization tasks using cone-beam CT (CBCT) compared to multi-detector CT (MDCT) for musculoskeletal extremity imaging. Ten cadaveric hands and ten knees were examined using a dedicated CBCT prototype and a clinical multi-detector CT using nominal protocols (80 kVp / 108 mAs for CBCT; 120 kVp / 300 mAs for MDCT). Soft tissue and bone visualization tasks were assessed by four radiologists using five-point satisfaction (for CBCT and MDCT individually) and five-point preference (side-by-side CBCT versus MDCT image quality comparison) rating tests. Ratings were analyzed using Kruskal-Wallis and Wilcoxon signed-rank tests, and observer agreement was assessed using the kappa statistic. Knee CBCT images were rated "excellent" or "good" (median scores 5 and 4) for "bone" and "soft tissue" visualization tasks. Hand CBCT images were rated "excellent" or "adequate" (median scores 5 and 3) for "bone" and "soft tissue" visualization tasks. Preference tests rated CBCT equivalent or superior to MDCT for bone visualization and favoured the MDCT for soft tissue visualization tasks. Intraobserver agreement for CBCT satisfaction tests was fair to almost perfect (κ ~ 0.26-0.92), and interobserver agreement was fair to moderate (κ ~ 0.27-0.54). CBCT provided excellent image quality for bone visualization and adequate image quality for soft tissue visualization tasks. • CBCT provided adequate image quality for diagnostic tasks in extremity imaging. • CBCT images were "excellent" for "bone" and "good/adequate" for "soft tissue" visualization tasks. • CBCT image quality was equivalent/superior to MDCT for bone visualization tasks.
Handa, T; Ishikawa, H; Shimizu, K; Kawamura, R; Nakayama, H; Sawada, K
2009-11-01
Virtual reality has recently been highlighted as a promising medium for visual presentation and entertainment. A novel apparatus for testing binocular visual function using a hemispherical visual display system, 'CyberDome', has been developed and tested. Subjects comprised 40 volunteers (mean age, 21.63 years) with corrected visual acuity of -0.08 (LogMAR) or better and stereoacuity better than 100 s of arc on the Titmus stereo test. Subjects experienced the sensation of being surrounded by visual images, a feature of the 'CyberDome' hemispherical visual display system. Visual images for the right and left eyes were projected and superimposed on the dome screen, allowing test images to be seen independently by each eye using polarizing glasses. The hemispherical visual display was 1.4 m in diameter. Three test parameters were evaluated: simultaneous perception (subjective angle of strabismus), motor fusion amplitude (convergence and divergence), and stereopsis (binocular disparity at 1260, 840, and 420 s of arc). Testing was performed in volunteer subjects with normal binocular vision, and results were compared with those obtained using a major amblyoscope. Subjective angle of strabismus and motor fusion amplitude showed a significant correlation between our test and the major amblyoscope. All subjects could perceive the stereoscopic target with a binocular disparity of 420 s of arc. Our novel apparatus using the CyberDome, a hemispherical visual display system, was able to quantitatively evaluate binocular function. This apparatus offers clinical promise in the evaluation of binocular function.
Automatic face recognition in HDR imaging
NASA Astrophysics Data System (ADS)
Pereira, Manuela; Moreno, Juan-Carlos; Proença, Hugo; Pinheiro, António M. G.
2014-05-01
The growing popularity of new High Dynamic Range (HDR) imaging systems is raising new privacy issues caused by the methods used for visualization. HDR images require tone mapping methods for appropriate visualization on conventional, inexpensive LDR displays. These methods can produce completely different visualizations, raising several privacy concerns. In fact, some visualization methods allow perceptual recognition of the individuals, while others do not reveal any identity. Although perceptual recognition might be possible, a natural question that arises is how computer-based recognition will perform on tone-mapped images. In this paper, a study is presented in which automatic face recognition based on sparse representation is tested on images produced by common tone mapping operators applied to HDR images, and its ability to recognize face identity is described. Furthermore, typical LDR images are used for face recognition training.
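Tone mapping operators differ widely, which is the paper's point; one standard global operator (Reinhard) illustrates the HDR-to-LDR step that precedes recognition:

```python
import numpy as np

def reinhard_global(hdr_luminance, key=0.18):
    """Reinhard global tone mapping: scale by the log-average luminance, then
    compress with L/(1+L). Different operators change the face appearance
    that a recognizer ultimately sees."""
    eps = 1e-6
    log_avg = np.exp(np.mean(np.log(hdr_luminance + eps)))
    L = key * hdr_luminance / log_avg
    return L / (1.0 + L)          # display-referred luminance in [0, 1)
```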
NASA Astrophysics Data System (ADS)
Rahman, Md M.; Antani, Sameer K.; Demner-Fushman, Dina; Thoma, George R.
2015-03-01
This paper presents a novel approach to biomedical image retrieval that maps image regions to local concepts and represents images in a weighted entropy-based concept feature space. The term concept refers to perceptually distinguishable visual patches that are identified locally in image regions and can be mapped to a glossary of imaging terms. Further, the visual significance (e.g., visualness) of concepts is measured as the Shannon entropy of pixel values in image patches and is used to refine the feature vector. Moreover, the system can assist the user in interactively selecting a Region-of-Interest (ROI) and searching for similar image ROIs. Further, a spatial verification step is used as a post-processing step to improve retrieval results based on location information. The hypothesis that such approaches would improve biomedical image retrieval is validated through experiments on a data set of 450 lung CT images extracted from journal articles from four different collections.
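A sketch of the "visualness" measure as described, i.e., the Shannon entropy of pixel values within an image patch:

```python
import numpy as np

def patch_entropy(patch, bins=64):
    """Shannon entropy of a patch's pixel-value histogram: flat, featureless
    patches score low; visually salient, textured patches score high."""
    hist, _ = np.histogram(patch, bins=bins, density=False)
    p = hist.astype(float) / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))
```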
Enhancing security of fingerprints through contextual biometric watermarking.
Noore, Afzel; Singh, Richa; Vatsa, Mayank; Houck, Max M
2007-07-04
This paper presents a novel digital watermarking technique using face and demographic text data as multiple watermarks for verifying the chain of custody and protecting the integrity of a fingerprint image. The watermarks are embedded in selected texture regions of a fingerprint image using the discrete wavelet transform. Experimental results show that modifications in these locations are visually imperceptible and maintain the minutiae details. The integrity of the fingerprint image is verified through the high matching scores obtained from an automatic fingerprint identification system. There is also a high degree of visual correlation between the embedded images and the images extracted from the watermarked fingerprint. The degree of similarity is computed using pixel-based metrics and human visual system metrics. The results also show that the proposed watermarked fingerprint and the extracted images are resilient to common attacks such as compression, filtering, and noise.
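A toy sketch of the general embedding idea (additive watermark bits in wavelet detail coefficients of a selected region), assuming the pywt package; the paper's texture-region selection and strength rules are not reproduced:

```python
import numpy as np
import pywt

def embed_dwt_watermark(region, watermark_bits, alpha=2.0):
    """Embed bits by nudging mid-frequency DWT detail coefficients of a selected
    texture region; small perturbations there are visually imperceptible."""
    cA, (cH, cV, cD) = pywt.dwt2(region.astype(float), "haar")
    flat = cH.ravel()
    bits = np.asarray(watermark_bits[: flat.size])
    flat[: bits.size] += alpha * (2.0 * bits - 1.0)   # +/- alpha per bit
    cH = flat.reshape(cH.shape)
    return pywt.idwt2((cA, (cH, cV, cD)), "haar")
```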
Sensing Super-position: Visual Instrument Sensor Replacement
NASA Technical Reports Server (NTRS)
Maluf, David A.; Schipper, John F.
2006-01-01
The coming decade of fast, cheap, and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome limitations of the human senses. This project addresses the technical feasibility of augmenting human vision through Sensing Super-position using a Visual Instrument Sensory Organ Replacement (VISOR). The current implementation of the VISOR device translates visual and other passive or active sensory instruments into sounds, which become relevant when the visual resolution is insufficient for very difficult and particular sensing tasks. A successful Sensing Super-position meets many human and pilot-vehicle system requirements. The system can be further developed into a cheap, portable, and low-power device that takes into account the limited capabilities of the human user as well as the typical characteristics of the dynamic environment. The system operates in real time, giving the desired information for the particular augmented sensing tasks. The Sensing Super-position device increases perceived image resolution via an auditory representation in addition to the visual representation. Auditory mapping is performed to distribute an image in time. The three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g., histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns. The known capabilities of the human hearing system to learn and understand complicated auditory patterns provided the basic motivation for developing an image-to-sound mapping system.
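A minimal sketch of the column-scan style of auditory mapping described above (the image distributed over time, rows mapped to frequencies, brightness to amplitude); all parameters are illustrative:

```python
import numpy as np

def image_to_sound(gray, duration_s=1.0, fs=22050, f_lo=200.0, f_hi=4000.0):
    """Scan the image left-to-right over time; each row drives one sinusoid whose
    frequency depends on row position and whose amplitude follows pixel
    brightness. `gray` is assumed to be a float image scaled to [0, 1]."""
    h, w = gray.shape
    t = np.linspace(0, duration_s, int(fs * duration_s), endpoint=False)
    col_idx = np.minimum((t / duration_s * w).astype(int), w - 1)
    freqs = np.geomspace(f_hi, f_lo, h)          # top rows -> high pitch
    audio = np.zeros_like(t)
    for row in range(h):
        audio += gray[row, col_idx] * np.sin(2 * np.pi * freqs[row] * t)
    return audio / (np.abs(audio).max() + 1e-9)  # normalized mono signal
```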
Active confocal imaging for visual prostheses
Jung, Jae-Hyun; Aloni, Doron; Yitzhaky, Yitzhak; Peli, Eli
2014-01-01
There are encouraging advances in prosthetic vision for the blind, including retinal and cortical implants, and other "sensory substitution devices" that use tactile or electrical stimulation. However, they all have low resolution, limited visual field, and can display only a few gray levels (limited dynamic range), severely restricting their utility. To overcome these limitations, image processing or the imaging system could emphasize objects of interest and suppress the background clutter. We propose an active confocal imaging system based on light-field technology that will enable a blind user of any visual prosthesis to efficiently scan, focus on, and "see" only an object of interest while suppressing interference from background clutter. The system captures three-dimensional scene information using a light-field sensor and displays only an in-focus plane with objects in it. After capturing a confocal image, a de-cluttering process removes the clutter based on blur difference. In preliminary experiments we verified the positive impact of confocal-based background clutter removal on the recognition of objects in low-resolution and limited-dynamic-range simulated phosphene images. Using a custom-made multiple-camera system, we confirmed that the concept of a confocal de-cluttered image can be realized effectively using light field imaging. PMID:25448710
Visual information processing; Proceedings of the Meeting, Orlando, FL, Apr. 20-22, 1992
NASA Technical Reports Server (NTRS)
Huck, Friedrich O. (Editor); Juday, Richard D. (Editor)
1992-01-01
Topics discussed in these proceedings include nonlinear processing and communications; feature extraction and recognition; image gathering, interpolation, and restoration; image coding; and wavelet transform. Papers are presented on noise reduction for signals from nonlinear systems; driving nonlinear systems with chaotic signals; edge detection and image segmentation of space scenes using fractal analyses; a vision system for telerobotic operation; a fidelity analysis of image gathering, interpolation, and restoration; restoration of images degraded by motion; and information, entropy, and fidelity in visual communication. Attention is also given to image coding methods and their assessment, hybrid JPEG/recursive block coding of images, modified wavelets that accommodate causality, modified wavelet transform for unbiased frequency representation, and continuous wavelet transform of one-dimensional signals by Fourier filtering.
Visualizing Chemistry with Infrared Imaging
ERIC Educational Resources Information Center
Xie, Charles
2011-01-01
Almost all chemical processes release or absorb heat. The heat flow in a chemical system reflects the process it is undergoing. By showing the temperature distribution dynamically, infrared (IR) imaging provides a salient visualization of the process. This paper presents a set of simple experiments based on IR imaging to demonstrate its enormous…
Peng, Hanchuan; Tang, Jianyong; Xiao, Hang; Bria, Alessandro; Zhou, Jianlong; Butler, Victoria; Zhou, Zhi; Gonzalez-Bellido, Paloma T; Oh, Seung W; Chen, Jichao; Mitra, Ananya; Tsien, Richard W; Zeng, Hongkui; Ascoli, Giorgio A; Iannello, Giulio; Hawrylycz, Michael; Myers, Eugene; Long, Fuhui
2014-07-11
Three-dimensional (3D) bioimaging, visualization and data analysis are in strong need of powerful 3D exploration techniques. We develop virtual finger (VF) to generate 3D curves, points and regions-of-interest in the 3D space of a volumetric image with a single finger operation, such as a computer mouse stroke, or click or zoom from the 2D-projection plane of an image as visualized with a computer. VF provides efficient methods for acquisition, visualization and analysis of 3D images for roundworm, fruitfly, dragonfly, mouse, rat and human. Specifically, VF enables instant 3D optical zoom-in imaging, 3D free-form optical microsurgery, and 3D visualization and annotation of terabytes of whole-brain image volumes. VF also leads to orders of magnitude better efficiency of automated 3D reconstruction of neurons and similar biostructures over our previous systems. We use VF to generate from images of 1,107 Drosophila GAL4 lines a projectome of a Drosophila brain.
European dental students' opinions about visual and digital tooth colour determination systems.
Dozic, Alma; Kharbanda, Aron K; Kamell, Hassib; Brand, Henk S
2011-12-01
The aim of the study was to investigate students' opinions about visual and digital tooth colour determination education at different European dental schools. A cross-sectional web-based survey was created, containing nine dichotomous, multiple-choice and 5-point Likert scale questions. The questionnaire was distributed amongst students of 40 European dental schools. Seven hundred and ninety-nine completed questionnaires from students of 15 dental schools were analysed statistically. Vitapan Classical and Vitapan 3D-Master are the most frequently used visual determination systems at European dental schools. Most students responded 'neutral' regarding whether they find it easy to identify the colour of teeth with a visual determination system (range 2.8-3.6). A minority of the dental students had received education in digital imaging systems (2-47%). The Easyshade was the most frequently mentioned digital system. The majority of the students who did not receive education on digital systems would like to see this topic added to the curriculum (77-100%). The dental students who had worked with both methods found it significantly easier to determine tooth colour with a digital system than with a visual system (mean score 3.5 ± 0.8 vs. 3.0 ± 0.8). Tooth colour determination programmes show considerable variation across European dental schools. Based upon the outcomes of this study, students prefer digital imaging systems over visual systems and would like to have (more) education about digital tooth colour imaging.
2016-06-01
theories of the mammalian visual system, and exploiting descriptive text that may accompany a still image for improved inference. The focus of the Brown team was on single images. Keywords: computer vision, semantic description, street scenes, belief propagation, generative models, nonlinear filtering, sufficient statistics.
COMPARISON OF RETINAL PATHOLOGY VISUALIZATION IN MULTISPECTRAL SCANNING LASER IMAGING.
Meshi, Amit; Lin, Tiezhu; Dans, Kunny; Chen, Kevin C; Amador, Manuel; Hasenstab, Kyle; Muftuoglu, Ilkay Kilic; Nudleman, Eric; Chao, Daniel; Bartsch, Dirk-Uwe; Freeman, William R
2018-03-16
To compare retinal pathology visualization in multispectral scanning laser ophthalmoscope imaging between the Spectralis and Optos devices. This retrospective cross-sectional study included 42 eyes from 30 patients with age-related macular degeneration (19 eyes), diabetic retinopathy (10 eyes), and epiretinal membrane (13 eyes). All patients underwent retinal imaging with a color fundus camera (broad-spectrum white light), the Spectralis HRA-2 system (3-color monochromatic lasers), and the Optos P200 system (2-color monochromatic lasers). The Optos image was cropped to a similar size as the Spectralis image. Seven masked graders marked retinal pathologies in each image within a 5 × 5 grid that included the macula. The average area with detected retinal pathology in all eyes was larger in the Spectralis images compared with Optos images (32.4% larger, P < 0.0001), mainly because of better visualization of epiretinal membrane and retinal hemorrhage. The average detection rate of age-related macular degeneration and diabetic retinopathy pathologies was similar across the three modalities, whereas the epiretinal membrane detection rate was significantly higher in the Spectralis images. Spectralis tricolor multispectral scanning laser ophthalmoscope imaging had a higher rate of pathology detection than Optos bicolor multispectral scanning laser ophthalmoscope imaging, primarily because of better epiretinal membrane and retinal hemorrhage visualization.
Kawai, Nobuyuki; He, Hongshen
2016-01-01
Humans and non-human primates are extremely sensitive to snakes as exemplified by their ability to detect pictures of snakes more quickly than those of other animals. These findings are consistent with the Snake Detection Theory, which hypothesizes that as predators, snakes were a major source of evolutionary selection that favored expansion of the visual system of primates for rapid snake detection. Many snakes use camouflage to conceal themselves from both prey and their own predators, making it very challenging to detect them. If snakes have acted as a selective pressure on primate visual systems, they should be more easily detected than other animals under difficult visual conditions. Here we tested whether humans discerned images of snakes more accurately than those of non-threatening animals (e.g., birds, cats, or fish) under conditions of less perceptual information by presenting a series of degraded images with the Random Image Structure Evolution technique (interpolation of random noise). We find that participants recognize mosaic images of snakes, which were regarded as functionally equivalent to camouflage, more accurately than those of other animals under dissolved conditions. The present study supports the Snake Detection Theory by showing that humans have a visual system that accurately recognizes snakes under less discernible visual conditions.
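The RISE technique referenced here interpolates an image with random noise to produce a graded series of degraded stimuli. The sketch below uses simple linear blending under that reading; the exact interpolation schedule used in the experiments may differ, so the step count and blending are assumptions.

```python
import numpy as np

def rise_sequence(image, steps=10, seed=0):
    """Generate a RISE-style sequence: the image gradually emerges from
    random noise by linear interpolation (a simplified version of the
    Random Image Structure Evolution technique).

    image : 2D float array scaled to [0, 1]
    """
    rng = np.random.default_rng(seed)
    noise = rng.random(image.shape)
    # alpha = 0 is pure noise; alpha = 1 is the intact image
    return [(1 - a) * noise + a * image for a in np.linspace(0, 1, steps)]

img = np.zeros((64, 64)); img[20:44, 28:36] = 1.0   # toy bar stimulus
frames = rise_sequence(img)
print(len(frames), frames[0].min() >= 0, frames[-1].max() <= 1)
```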
Azizian, Mahdi; Khoshnam, Mahta; Najmaei, Nima; Patel, Rajni V
2014-09-01
Intra-operative imaging is widely used to provide visual feedback to a clinician when he/she performs a procedure. In visual servoing, surgical instruments and parts of tissue/body are tracked by processing the acquired images. This information is then used within a control loop to manoeuvre a robotic manipulator during a procedure. A comprehensive search of electronic databases was completed for the period 2000-2013 to provide a survey of the visual servoing applications in medical robotics. The focus is on medical applications where image-based tracking is used for closed-loop control of a robotic system. Detailed classification and comparative study of various contributions in visual servoing using endoscopic or direct visual images are presented and summarized in tables and diagrams. The main challenges in using visual servoing for medical robotic applications are identified and potential future directions are suggested. 'Supervised automation of medical robotics' is found to be a major trend in this field. Copyright © 2013 John Wiley & Sons, Ltd.
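For readers unfamiliar with the control loop being surveyed: classic image-based visual servoing computes a camera velocity from the error between current and desired image features via the pseudo-inverse of the point-feature interaction matrix. The following is a generic textbook sketch, not the implementation of any specific surveyed system; the gain and depths are illustrative.

```python
import numpy as np

def ibvs_velocity(features, targets, depths, lam=0.5):
    """One step of a classic image-based visual servoing law:
    v = -lambda * pinv(L) @ (s - s*), with the standard point-feature
    interaction matrix (a textbook sketch).

    features, targets : (N, 2) normalized image coordinates (x, y)
    depths            : (N,) estimated depths Z of each point
    Returns a 6-vector camera velocity (vx, vy, vz, wx, wy, wz).
    """
    rows = []
    for (x, y), Z in zip(features, depths):
        rows.append([-1 / Z, 0, x / Z, x * y, -(1 + x**2), y])
        rows.append([0, -1 / Z, y / Z, 1 + y**2, -x * y, -x])
    L = np.array(rows)
    err = (np.asarray(features) - np.asarray(targets)).ravel()
    return -lam * np.linalg.pinv(L) @ err

s = np.array([[0.10, 0.05], [-0.12, 0.08], [0.02, -0.11]])
s_star = np.zeros_like(s)
print(ibvs_velocity(s, s_star, depths=np.full(3, 0.5)))
```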
Total body photography for skin cancer screening.
Dengel, Lynn T; Petroni, Gina R; Judge, Joshua; Chen, David; Acton, Scott T; Schroen, Anneke T; Slingluff, Craig L
2015-11-01
Total body photography may aid in melanoma screening but is not widely applied due to time and cost. We hypothesized that a near-simultaneous automated skin photo-acquisition system would be acceptable to patients and could rapidly obtain total body images that enable visualization of pigmented skin lesions. From February to May 2009, a study of 20 volunteers was performed at the University of Virginia to test a prototype 16-camera imaging booth built by the research team and to guide development of special purpose software. For each participant, images were obtained before and after marking 10 lesions (five "easy" and five "difficult"), and images were evaluated to estimate visualization rates. Imaging logistical challenges were scored by the operator, and participant opinion was assessed by questionnaire. Average time for image capture was three minutes (range 2-5). All 55 "easy" lesions were visualized (sensitivity 100%, 90% CI 95-100%), and 54/55 "difficult" lesions were visualized (sensitivity 98%, 90% CI 92-100%). Operators and patients graded the imaging process favorably, with challenges identified regarding lighting and positioning. Rapid-acquisition automated skin photography is feasible with a low-cost system, with excellent lesion visualization and participant acceptance. These data provide a basis for employing this method in clinical melanoma screening. © 2014 The International Society of Dermatology.
Comparative Study of the MTFA, ICS, and SQRI Image Quality Metrics for Visual Display Systems
1991-09-01
The metrics provided reasonable image quality predictions across select display and viewing condition parameters.
Visual affective classification by combining visual and text features.
Liu, Ningning; Wang, Kai; Jin, Xin; Gao, Boyang; Dellandréa, Emmanuel; Chen, Liming
2017-01-01
Affective analysis of images in social networks has drawn much attention, and the texts surrounding images have been shown to provide valuable semantic meaning about image content, which can hardly be represented by low-level visual features. In this paper, we propose a novel approach for the visual affective classification (VAC) task. The approach combines visual representations with novel text features through a fusion scheme based on Dempster-Shafer (D-S) Evidence Theory. Specifically, we not only investigate different types of visual features and fusion methods for VAC, but also propose textual features that effectively capture emotional semantics from the short text associated with images based on word similarity. Experiments are conducted on three publicly available databases: the International Affective Picture System (IAPS), the Artistic Photos and the MirFlickr Affect set. The results demonstrate that the proposed approach combining visual and textual features provides promising results for the VAC task.
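The fusion step rests on Dempster's rule of combination. Below is a self-contained sketch of that rule for two sources, here a visual and a textual classifier over a hypothetical two-class frame of discernment; the mass values are illustrative, not the paper's.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    m1, m2 : dicts mapping frozenset (focal elements of the frame of
             discernment) to mass. A generic textbook sketch of the
             D-S fusion step, not the paper's exact scheme.
    """
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

POS, NEG = frozenset({"pos"}), frozenset({"neg"})
THETA = POS | NEG                            # ignorance: the whole frame
visual = {POS: 0.6, NEG: 0.2, THETA: 0.2}    # masses from visual features
text   = {POS: 0.5, NEG: 0.1, THETA: 0.4}    # masses from text features
print(dempster_combine(visual, text))
```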
Digital diagnosis of medical images
NASA Astrophysics Data System (ADS)
Heinonen, Tomi; Kuismin, Raimo; Jormalainen, Raimo; Dastidar, Prasun; Frey, Harry; Eskola, Hannu
2001-08-01
The popularity of digital imaging devices and PACS installations has increased in recent years. Still, images are analyzed and diagnosed using conventional techniques. Our research group began to study the requirements for digital image diagnostic methods to be applied together with PACS systems. The research focused on various image analysis procedures (e.g., segmentation, volumetry, 3D visualization, image fusion, anatomic atlas, etc.) that could be useful in medical diagnosis. We have developed Image Analysis software (www.medimag.net) to enable several image-processing applications in medical diagnosis, such as volumetry, multimodal visualization, and 3D visualizations. We have also developed a commercial scalable image archive system (ActaServer, supports DICOM) based on component technology (www.acta.fi), and several telemedicine applications. All the software and systems operate in an NT environment and are in clinical use in several hospitals. The analysis software has been applied in clinical work and utilized in numerous patient cases (500 patients). This method has been used in the diagnosis, therapy and follow-up of various diseases of the central nervous system (CNS), respiratory system (RS) and human reproductive system (HRS). In many of these diseases, e.g. Systemic Lupus Erythematosus (CNS), nasal airway diseases (RS) and ovarian tumors (HRS), these methods have been used for the first time in clinical work. According to our results, digital diagnosis improves diagnostic capabilities, and together with PACS installations it will become a standard tool during the next decade by enabling more accurate diagnosis and patient follow-up.
Real-time digital signal processing for live electro-optic imaging.
Sasagawa, Kiyotaka; Kanno, Atsushi; Tsuchiya, Masahiro
2009-08-31
We present an imaging system that enables real-time magnitude and phase detection of modulated signals and its application to a Live Electro-optic Imaging (LEI) system, which realizes instantaneous visualization of RF electric fields. The real-time acquisition of magnitude and phase images of a modulated optical signal at 5 kHz is demonstrated by imaging with a Si-based high-speed CMOS image sensor and real-time signal processing with a digital signal processor. In the LEI system, RF electric fields are probed with light via an electro-optic crystal plate and downconverted to an intermediate frequency by parallel optical heterodyning, which can be detected with the image sensor. The artifacts caused by the optics and the image sensor characteristics are corrected by image processing. As examples, we demonstrate real-time visualization of electric fields from RF circuits.
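The magnitude/phase detection described here is, at its core, per-pixel IQ demodulation of the frame stack against a reference at the intermediate frequency. A simplified numerical sketch follows; the LEI system performs this step on a dedicated DSP in real time, and the frame rate, IF, and stack size below are illustrative assumptions.

```python
import numpy as np

def demodulate_stack(frames, f_if, fs):
    """Per-pixel magnitude and phase of a modulated image stack by
    digital IQ demodulation (a generic DSP sketch of the idea, not the
    LEI system's actual processor).

    frames : array (T, H, W) of image samples
    f_if   : intermediate frequency of the modulation (Hz)
    fs     : frame rate (Hz)
    """
    T = frames.shape[0]
    t = np.arange(T) / fs
    ref = np.exp(-2j * np.pi * f_if * t)              # complex reference
    iq = np.tensordot(ref, frames, axes=(0, 0)) * (2.0 / T)
    return np.abs(iq), np.angle(iq)

# Toy stack: every pixel oscillates at a 1 kHz IF, phase varies across x.
fs, f_if, T = 5000.0, 1000.0, 100
t = np.arange(T) / fs
phase = np.linspace(0, np.pi, 16)                     # one phase per column
stack = 0.5 * np.cos(2 * np.pi * f_if * t[:, None, None] + phase[None, None, :])
stack = np.broadcast_to(stack, (T, 8, 16))
mag, ph = demodulate_stack(stack, f_if, fs)
print(mag.round(2)[0, :4], ph.round(2)[0, :4])        # amplitude 0.5, phase ramp
```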
Cha, Jaepyeong; Broch, Aline; Mudge, Scott; Kim, Kihoon; Namgoong, Jung-Man; Oh, Eugene; Kim, Peter
2018-01-01
Accurate, real-time identification and display of critical anatomic structures, such as the nerve and vasculature structures, are critical for reducing complications and improving surgical outcomes. Human vision is frequently limited in clearly distinguishing and contrasting these structures. We present a novel imaging system, which enables noninvasive visualization of critical anatomic structures during surgical dissection. Peripheral nerves are visualized by a snapshot polarimetry that calculates the anisotropic optical properties. Vascular structures, both venous and arterial, are identified and monitored in real-time using a near-infrared laser-speckle-contrast imaging. We evaluate the system by performing in vivo animal studies with qualitative comparison by contrast-agent-aided fluorescence imaging. PMID:29541506
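The near-infrared laser-speckle-contrast step admits a compact sketch: flow blurs the speckle pattern, so the local contrast K = sigma/mu drops over perfused tissue. The following spatial-window version is the standard LSCI computation, not the authors' pipeline; the window size and toy data are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(raw, win=7):
    """Spatial laser speckle contrast K = sigma/mu over a sliding window.
    Moving blood blurs speckle, so vessels show up as regions of low K.
    """
    raw = raw.astype(float)
    mean = uniform_filter(raw, win)
    sq_mean = uniform_filter(raw * raw, win)
    var = np.clip(sq_mean - mean**2, 0, None)    # guard tiny negatives
    return np.sqrt(var) / np.maximum(mean, 1e-12)

rng = np.random.default_rng(1)
static = rng.random((64, 64))                        # fully developed speckle
flowing = uniform_filter(rng.random((64, 64)), 5)    # temporally blurred region
img = static.copy(); img[20:44, 20:44] = flowing[20:44, 20:44]
K = speckle_contrast(img)
print(K[32, 32] < K[5, 5])    # "vessel" region has lower contrast
```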
CLFs-based optimization control for a class of constrained visual servoing systems.
Song, Xiulan; Miaomiao, Fu
2017-03-01
In this paper, we use the control Lyapunov function (CLF) technique to present an optimized visual servo control method for constrained eye-in-hand robot visual servoing systems. With knowledge of the camera intrinsic parameters and the depth of target changes, visual servo control laws (i.e., translation speed) with adjustable parameters are derived from image point features and a known CLF of the visual servoing system. The Fibonacci method is employed to compute online the optimal value of those adjustable parameters, which yields an optimized control law satisfying the constraints of the visual servoing system. Lyapunov's theorem and the properties of the CLF are used to establish stability of the constrained visual servoing system in closed loop with the optimized control law. One merit of the presented method is that it does not require online calculation of the pseudo-inverse of the image Jacobian matrix or of the homography matrix. Simulation and experimental results illustrate the effectiveness of the proposed method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
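The Fibonacci method used to tune the adjustable parameter online is a classic derivative-free line search over a unimodal cost. Here is a generic sketch, with a hypothetical quadratic cost standing in for the servoing objective; the interval and iteration count are arbitrary choices.

```python
def fibonacci_search(f, a, b, n=25):
    """Fibonacci method for minimizing a unimodal function f on [a, b]
    (textbook version; the cost function below is illustrative)."""
    fib = [1, 1]
    while len(fib) <= n:
        fib.append(fib[-1] + fib[-2])
    L = b - a
    x1 = a + fib[n - 2] / fib[n] * L    # interior points at Fibonacci ratios
    x2 = a + fib[n - 1] / fib[n] * L
    f1, f2 = f(x1), f(x2)
    for _ in range(n - 2):
        if f1 < f2:                     # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = a + (b - x2)           # mirror the kept point
            f1 = f(x1)
        else:                           # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = b - (x1 - a)
            f2 = f(x2)
    return (a + b) / 2

# Hypothetical servoing cost: quadratic with its minimum at gain = 0.37.
cost = lambda g: (g - 0.37) ** 2
print(round(fibonacci_search(cost, 0.0, 1.0), 3))
```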
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borovetz, H.S.; Shaffer, F.; Schaub, R.
This paper discusses a series of experiments to visualize and measure flow fields in the Novacor left ventricular assist system (LVAS). The experiments utilize a multiple-exposure optical imaging technique called fluorescent image tracking velocimetry (FITV) to track the motion of small, neutrally-buoyant particles in a flowing fluid.
Occam's razor: supporting visual query expression for content-based image queries
NASA Astrophysics Data System (ADS)
Venters, Colin C.; Hartley, Richard J.; Hewitt, William T.
2005-01-01
This paper reports the results of a usability experiment that investigated visual query formulation on three dimensions: effectiveness, efficiency, and user satisfaction. Twenty-eight evaluation sessions were conducted in order to assess the extent to which query by visual example supports visual query formulation in a content-based image retrieval environment. In order to provide a context and focus for the investigation, the study was segmented by image type, user group, and use function. The image type consisted of a set of abstract geometric device marks supplied by the UK Trademark Registry. Users were selected from the 14 UK Patent Information Network offices. The use function was limited to the retrieval of images by shape similarity. Two client interfaces were developed for comparison purposes: Trademark Image Browser Engine (TRIBE) and Shape Query Image Retrieval Systems Engine (SQUIRE).
Occam"s razor: supporting visual query expression for content-based image queries
NASA Astrophysics Data System (ADS)
Venters, Colin C.; Hartley, Richard J.; Hewitt, William T.
2004-12-01
This paper reports the results of a usability experiment that investigated visual query formulation on three dimensions: effectiveness, efficiency, and user satisfaction. Twenty eight evaluation sessions were conducted in order to assess the extent to which query by visual example supports visual query formulation in a content-based image retrieval environment. In order to provide a context and focus for the investigation, the study was segmented by image type, user group, and use function. The image type consisted of a set of abstract geometric device marks supplied by the UK Trademark Registry. Users were selected from the 14 UK Patent Information Network offices. The use function was limited to the retrieval of images by shape similarity. Two client interfaces were developed for comparison purposes: Trademark Image Browser Engine (TRIBE) and Shape Query Image Retrieval Systems Engine (SQUIRE).
Simple Smartphone-Based Guiding System for Visually Impaired People
Lin, Bor-Shing; Lee, Cheng-Che; Chiang, Pei-Ying
2017-01-01
Visually impaired people are often unaware of dangers in front of them, even in familiar environments. Furthermore, in unfamiliar environments, such people require guidance to reduce the risk of colliding with obstacles. This study proposes a simple smartphone-based guiding system for solving the navigation problems for visually impaired people and achieving obstacle avoidance to enable visually impaired people to travel smoothly from a beginning point to a destination with greater awareness of their surroundings. In this study, a computer image recognition system and smartphone application were integrated to form a simple assisted guiding system. Two operating modes, online mode and offline mode, can be chosen depending on network availability. When the system begins to operate, the smartphone captures the scene in front of the user and sends the captured images to the backend server to be processed. The backend server uses the Faster Region-based Convolutional Neural Network (Faster R-CNN) algorithm or the You Only Look Once (YOLO) algorithm to recognize multiple obstacles in every image, and it subsequently sends the results back to the smartphone. The results of obstacle recognition in this study reached 60%, which is sufficient for assisting visually impaired people in realizing the types and locations of obstacles around them. PMID:28608811
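The server-side recognition step can be sketched with an off-the-shelf pretrained detector. The snippet below uses torchvision's Faster R-CNN purely as a stand-in for the paper's backend model (it downloads pretrained weights on first use, in recent torchvision versions); the score threshold and the random test frame are assumptions, and the actual smartphone/server plumbing is omitted.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Stand-in for the paper's server-side model, not its actual code.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def detect_obstacles(image_chw, score_thresh=0.6):
    """image_chw: float tensor (3, H, W) scaled to [0, 1], e.g. a frame
    captured by the smartphone and uploaded to the server."""
    with torch.no_grad():
        out = model([image_chw])[0]   # dict with 'boxes', 'labels', 'scores'
    keep = out["scores"] >= score_thresh
    return out["boxes"][keep], out["labels"][keep], out["scores"][keep]

boxes, labels, scores = detect_obstacles(torch.rand(3, 480, 640))
print(boxes.shape, labels.tolist(), scores.tolist())
```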
Cohn, Neil
2014-01-01
How do people make sense of the sequential images in visual narratives like comics? A growing literature of recent research has suggested that this comprehension involves the interaction of multiple systems: The creation of meaning across sequential images relies on a "narrative grammar" that packages conceptual information into categorical roles organized in hierarchic constituents. These images are encapsulated into panels arranged in the layout of a physical page. Finally, how panels frame information can impact both the narrative structure and page layout. Altogether, these systems operate in parallel to construct the Gestalt whole of comprehension of this visual language found in comics.
MEMS-based system and image processing strategy for epiretinal prosthesis.
Xia, Peng; Hu, Jie; Qi, Jin; Gu, Chaochen; Peng, Yinghong
2015-01-01
Retinal prostheses have the potential to restore some level of visual function to patients suffering from retinal degeneration. In this paper, an epiretinal approach with active stimulation devices is presented. The MEMS-based processing system consists of an external micro-camera, an information processor, an implanted electrical stimulator and a microelectrode array. An image processing strategy combining image clustering and enhancement techniques was proposed and evaluated by psychophysical experiments. The results indicated that the image processing strategy improved visual performance compared with directly merging pixels to a low resolution. The image processing methods assist the epiretinal prosthesis in vision restoration.
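The baseline the strategy is compared against, direct merging of pixels to a low resolution, amounts to block-averaging the camera image onto the electrode grid and quantizing to a few stimulation levels. A minimal sketch follows; the grid size and level count are arbitrary assumptions, not the device's parameters.

```python
import numpy as np

def to_phosphene_grid(image, grid=(32, 32), levels=8):
    """Reduce a camera image to electrode-array resolution by block
    averaging, then quantize to a few stimulation amplitudes -- the
    simple pixel-merging baseline.

    image : 2D float array in [0, 1] whose size is a multiple of `grid`
    """
    h, w = image.shape
    gh, gw = grid
    blocks = image.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    return np.round(blocks * (levels - 1)).astype(int)  # level per electrode

img = np.tile(np.linspace(0, 1, 256), (256, 1))   # toy gradient scene
stim = to_phosphene_grid(img)
print(stim.shape, stim.min(), stim.max())
```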
Supervised pixel classification using a feature space derived from an artificial visual system
NASA Technical Reports Server (NTRS)
Baxter, Lisa C.; Coggins, James M.
1991-01-01
Image segmentation involves labelling pixels according to their membership in image regions. This requires an understanding of what a region is. Using supervised pixel classification, the paper investigates how groups of pixels labelled manually according to perceived image semantics map onto the feature space created by an Artificial Visual System. The multiscale structure of regions is investigated, and it is shown that pixels form clusters based on their geometric roles in the image intensity function, not by image semantics. A tentative abstract definition of a 'region' is proposed based on this behavior.
Parallel and Serial Grouping of Image Elements in Visual Perception
ERIC Educational Resources Information Center
Houtkamp, Roos; Roelfsema, Pieter R.
2010-01-01
The visual system groups image elements that belong to an object and segregates them from other objects and the background. Important cues for this grouping process are the Gestalt criteria, and most theories propose that these are applied in parallel across the visual scene. Here, we find that Gestalt grouping can indeed occur in parallel in some…
Visual Attention and Applications in Multimedia Technologies
Le Callet, Patrick; Niebur, Ernst
2013-01-01
Making technological advances in the field of human-machine interactions requires that the capabilities and limitations of the human perceptual system are taken into account. The focus of this report is an important mechanism of perception, visual selective attention, which is becoming more and more important for multimedia applications. We introduce the concept of visual attention and describe its underlying mechanisms. In particular, we introduce the concepts of overt and covert visual attention, and of bottom-up and top-down processing. Challenges related to modeling visual attention and their validation using ad hoc ground truth are also discussed. Examples of the usage of visual attention models in image and video processing are presented. We emphasize multimedia delivery, retargeting and quality assessment of image and video, medical imaging, and the field of stereoscopic 3D images applications. PMID:24489403
Image-plane processing of visual information
NASA Technical Reports Server (NTRS)
Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.
1984-01-01
Shannon's theory of information is used to optimize the optical design of sensor-array imaging systems which use neighborhood image-plane signal processing for enhancing edges and compressing dynamic range during image formation. The resultant edge-enhancement, or band-pass-filter, response is found to be very similar to that of human vision. Comparisons of traits in human vision with results from information theory suggest that: (1) Image-plane processing, like preprocessing in human vision, can improve visual information acquisition for pattern recognition when resolving power, sensitivity, and dynamic range are constrained. Improvements include reduced sensitivity to changes in light levels, reduced signal dynamic range, reduced data transmission and processing, and reduced aliasing and photosensor noise degradation. (2) Information content can be an appropriate figure of merit for optimizing the optical design of imaging systems when visual information is acquired for pattern recognition. The design trade-offs involve spatial response, sensitivity, and sampling interval.
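The band-pass, edge-enhancing response described here is commonly approximated by a difference-of-Gaussians (center-surround) filter. The sketch below shows that approximation; it is illustrative, not the authors' information-theoretically optimized design, and the two sigma values are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_enhance(image, sigma_c=1.0, sigma_s=3.0):
    """Difference-of-Gaussians band-pass filtering, a common stand-in
    for a center-surround, edge-enhancing visual response."""
    center = gaussian_filter(image, sigma_c)     # narrow 'center' blur
    surround = gaussian_filter(image, sigma_s)   # wide 'surround' blur
    return center - surround                     # enhances edges, compresses
                                                 # slowly varying intensity

img = np.zeros((64, 64)); img[:, 32:] = 1.0      # step edge
out = dog_enhance(img)
print(out[32, 28:36].round(3))                   # response peaks at the edge
```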
Display Device Color Management and Visual Surveillance of Vehicles
ERIC Educational Resources Information Center
Srivastava, Satyam
2011-01-01
Digital imaging has seen an enormous growth in the last decade. Today users have numerous choices in creating, accessing, and viewing digital image/video content. Color management is important to ensure consistent visual experience across imaging systems. This is typically achieved using color profiles. In this thesis we identify the limitations…
Penn State's Visual Image User Study
ERIC Educational Resources Information Center
Pisciotta, Henry A.; Dooris, Michael J.; Frost, James; Halm, Michael
2005-01-01
The Visual Image User Study (VIUS), an extensive needs assessment project at Penn State University, describes academic users of pictures and their perceptions. These findings outline the potential market for digital images and list the likely determinates of whether or not a system will be used. They also explain some key user requirements for…
How the blind "see" Braille: lessons from functional magnetic resonance imaging.
Sadato, Norihiro
2005-12-01
What does the visual cortex of the blind do during Braille reading? This process involves converting simple tactile information into meaningful patterns that have lexical and semantic properties. The perceptual processing of Braille might be mediated by the somatosensory system, whereas visual letter identity is accomplished within the visual system in sighted people. Recent advances in functional neuroimaging techniques, such as functional magnetic resonance imaging, have enabled exploration of the neural substrates of Braille reading. The primary visual cortex of early-onset blind subjects is functionally relevant to Braille reading, suggesting that the brain shows remarkable plasticity that potentially permits the additional processing of tactile information in the visual cortical areas.
DVV: a taxonomy for mixed reality visualization in image guided surgery.
Kersten-Oertel, Marta; Jannin, Pierre; Collins, D Louis
2012-02-01
Mixed reality visualizations are increasingly studied for use in image guided surgery (IGS) systems, yet few mixed reality systems have been introduced for daily use into the operating room (OR). This may be the result of several factors: the systems are developed from a technical perspective, are rarely evaluated in the field, and/or lack consideration of the end user and the constraints of the OR. We introduce the Data, Visualization processing, View (DVV) taxonomy which defines each of the major components required to implement a mixed reality IGS system. We propose that these components be considered and used as validation criteria for introducing a mixed reality IGS system into the OR. A taxonomy of IGS visualization systems is a step toward developing a common language that will help developers and end users discuss and understand the constituents of a mixed reality visualization system, facilitating a greater presence of future systems in the OR. We evaluate the DVV taxonomy based on its goodness of fit and completeness. We demonstrate the utility of the DVV taxonomy by classifying 17 state-of-the-art research papers in the domain of mixed reality visualization IGS systems. Our classification shows that few IGS visualization systems' components have been validated and even fewer are evaluated.
Indirect gonioscopy system for imaging iridocorneal angle of eye
NASA Astrophysics Data System (ADS)
Perinchery, Sandeep M.; Fu, Chan Yiu; Baskaran, Mani; Aung, Tin; Murukeshan, V. M.
2017-08-01
Current clinical optical imaging systems do not provide sufficient structural information about the trabecular meshwork (TM) in the iridocorneal angle (ICA) of the eye due to their low resolution. An increase in intraocular pressure (IOP) can occur due to abnormalities in the TM, which can subsequently lead to glaucoma. Here, we present an indirect gonioscopy-based imaging probe with significantly improved visualization of structures in the ICA, including the TM region, compared to currently available tools. The imaging quality of the developed system was tested on porcine samples. The improved direct, high-quality visualization of the TM region through this system can be used for laser trabeculoplasty, a primary treatment for glaucoma. The system is expected to be used complementary to angle photography and gonioscopy.
21 CFR 892.1650 - Image-intensified fluoroscopic x-ray system.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Image-intensified fluoroscopic x-ray system. 892... fluoroscopic x-ray system. (a) Identification. An image-intensified fluoroscopic x-ray system is a device intended to visualize anatomical structures by converting a pattern of x-radiation into a visible image...
21 CFR 892.1650 - Image-intensified fluoroscopic x-ray system.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Image-intensified fluoroscopic x-ray system. 892... fluoroscopic x-ray system. (a) Identification. An image-intensified fluoroscopic x-ray system is a device intended to visualize anatomical structures by converting a pattern of x-radiation into a visible image...
Method for the reduction of image content redundancy in large image databases
Tobin, Kenneth William; Karnowski, Thomas P.
2010-03-02
A method of increasing information content for content-based image retrieval (CBIR) systems includes the steps of providing a CBIR database, the database having an index for a plurality of stored digital images using a plurality of feature vectors, the feature vectors corresponding to distinct descriptive characteristics of the images. A visual similarity parameter value is calculated based on a degree of visual similarity between features vectors of an incoming image being considered for entry into the database and feature vectors associated with a most similar of the stored images. Based on said visual similarity parameter value it is determined whether to store or how long to store the feature vectors associated with the incoming image in the database.
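The patent's gating logic reduces to measuring similarity between the incoming feature vector and its most similar stored neighbor, and indexing the image only when it adds new information. Below is a sketch using cosine similarity as the visual similarity parameter; the patent does not mandate a specific measure, so the measure and threshold are assumptions.

```python
import numpy as np

def maybe_store(db, incoming, threshold=0.95):
    """Redundancy gate, sketched: compute a visual similarity parameter
    between the incoming image's feature vector and its most similar
    stored vector, and only index the image when it adds enough new
    information (here: cosine similarity below a threshold)."""
    v = incoming / np.linalg.norm(incoming)
    if db:
        M = np.array(db)
        sims = M @ v / np.linalg.norm(M, axis=1)
        if sims.max() >= threshold:
            return False    # too similar: skip (or schedule early expiry)
    db.append(incoming)
    return True

db = []
rng = np.random.default_rng(2)
f1 = rng.random(128)
print(maybe_store(db, f1))                            # True: first image stored
print(maybe_store(db, f1 + 0.01 * rng.random(128)))   # False: near-duplicate
```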
Fan, Zhencheng; Weng, Yitong; Chen, Guowen; Liao, Hongen
2017-07-01
Three-dimensional (3D) visualization of preoperative and intraoperative medical information is becoming more and more important in minimally invasive surgery. We develop a 3D interactive surgical visualization system using mobile spatial information acquisition and autostereoscopic display that lets surgeons observe the surgical target intuitively. The spatial information of regions of interest (ROIs) is captured by the mobile device and transferred to a server for further image processing. Triangular patches of intraoperative data with texture are calculated with a dimension-reduced triangulation algorithm and a projection-weighted mapping algorithm. A point-cloud-selection-based warm-start iterative closest point (ICP) algorithm is also developed for fusion of the reconstructed 3D intraoperative image and the preoperative image. The fused images are rendered for 3D autostereoscopic display using integral videography (IV) technology. Moreover, the 3D visualization of the medical image corresponding to the observer's viewing direction is updated automatically using a mutual information registration method. Experimental results show that the spatial position error between the IV-based 3D autostereoscopic fusion image and the actual object was 0.38 ± 0.92 mm (n = 5). The system can be utilized in telemedicine, operating education, surgical planning, navigation, etc., to acquire spatial information conveniently and display surgical information intuitively. Copyright © 2017 Elsevier Inc. All rights reserved.
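The registration step is a warm-start ICP. Below is a minimal point-to-point ICP core (nearest-neighbour correspondences plus an SVD rigid fit), with the warm start supplied as an initial transform; the paper's point-cloud selection and weighting are omitted, so this is a generic sketch of the algorithm rather than the authors' variant.

```python
import numpy as np

def icp(src, dst, R0=np.eye(3), t0=np.zeros(3), iters=30):
    """Minimal point-to-point ICP. R0, t0 give the warm start."""
    R, t = R0, t0
    for _ in range(iters):
        moved = src @ R.T + t
        # brute-force nearest neighbours (fine for small clouds)
        d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        corr = dst[d2.argmin(axis=1)]
        # best rigid transform aligning src -> corr (Kabsch/SVD)
        mu_s, mu_c = src.mean(0), corr.mean(0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (corr - mu_c))
        D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = mu_c - R @ mu_s
    return R, t

rng = np.random.default_rng(3)
cloud = rng.random((60, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
target = cloud @ R_true.T + np.array([0.1, -0.2, 0.05])
R, t = icp(cloud, target)
print(np.allclose(R, R_true, atol=1e-6), t.round(3))
```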
NASA Astrophysics Data System (ADS)
Qiu, Yuchen; Wang, Xingwei; Chen, Xiaodong; Li, Yuhua; Liu, Hong; Li, Shibo; Zheng, Bin
2010-02-01
Visually searching for analyzable metaphase chromosome cells under microscopes is quite time-consuming and difficult. To improve detection efficiency, consistency, and diagnostic accuracy, an automated microscopic image scanning system was developed and tested to directly acquire digital images with sufficient spatial resolution for clinical diagnosis. A computer-aided detection (CAD) scheme was also developed and integrated into the image scanning system to search for and detect the regions of interest (ROIs) that contain analyzable metaphase chromosome cells in the large volume of scanned images acquired from one specimen, so that the cytogeneticists only need to observe and interpret a limited number of ROIs. In this study, the high-resolution microscopic image scanning and CAD performance was investigated and evaluated using nine sets of images scanned from either bone marrow (three) or blood (six) specimens for the diagnosis of leukemia. The automated CAD-selection results were compared with visual selection. In the experiment, the cytogeneticists first visually searched for analyzable metaphase chromosome cells from specimens under microscopes. The specimens were also automatically scanned, and the CAD scheme was then applied to detect and save ROIs containing analyzable cells while deleting the others. The automatically selected ROIs were then examined by a panel of three cytogeneticists. From the scanned images, CAD selected more analyzable cells than the cytogeneticists' initial visual examinations in both blood and bone marrow specimens. In general, CAD performed better on blood specimens. Even in the three bone marrow specimens, CAD selected 50, 22, and 9 ROIs, respectively. Besides matching the initial visual selection of 9, 7, and 5 analyzable cells in these three specimens, the cytogeneticists also selected 41, 15 and 4 new analyzable cells, which had been missed in the initial visual search. This experiment showed the feasibility of applying this CAD-guided high-resolution microscopic image scanning system to prescreen and select ROIs that may contain analyzable metaphase chromosome cells. The success and further improvement of this automated scanning system may have great impact on future clinical practice in genetic laboratories for detecting and diagnosing diseases.
Server-based Approach to Web Visualization of Integrated Three-dimensional Brain Imaging Data
Poliakov, Andrew V.; Albright, Evan; Hinshaw, Kevin P.; Corina, David P.; Ojemann, George; Martin, Richard F.; Brinkley, James F.
2005-01-01
The authors describe a client-server approach to three-dimensional (3-D) visualization of neuroimaging data, which enables researchers to visualize, manipulate, and analyze large brain imaging datasets over the Internet. All computationally intensive tasks are done by a graphics server that loads and processes image volumes and 3-D models, renders 3-D scenes, and sends the renderings back to the client. The authors discuss the system architecture and implementation and give several examples of client applications that allow visualization and analysis of integrated language map data from single and multiple patients. PMID:15561787
Visual Image Sensor Organ Replacement
NASA Technical Reports Server (NTRS)
Maluf, David A.
2014-01-01
This innovation is a system that augments human vision through a technique called "Sensing Super-position" using a Visual Instrument Sensory Organ Replacement (VISOR) device. The VISOR device translates visual and other sensors (i.e., thermal) into sounds to enable very difficult sensing tasks. Three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g. histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. Because the human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns, the translation of images into sounds reduces the risk of accidentally filtering out important clues. The VISOR device was developed to augment the current state-of-the-art head-mounted (helmet) display systems. It provides the ability to sense beyond the human visible light range, to increase human sensing resolution, to use wider angle visual perception, and to improve the ability to sense distances. It also allows compensation for movement by the human or changes in the scene being viewed.
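The image-to-sound translation can be illustrated with a simple row-to-frequency, column-to-time scan in the style of vOICe-like sensory substitution. The actual VISOR mapping and its multi-spectral handling are richer than this sketch; the frequency range, duration, and sample rate below are assumptions.

```python
import numpy as np

def image_to_sound(image, duration=1.0, fs=8000, f_lo=200.0, f_hi=3000.0):
    """Translate an image into an audio waveform: columns are scanned
    left-to-right over time, and each row drives a sinusoid whose
    frequency rises from the bottom to the top of the image (a
    vOICe-style sketch of sensing super-position, not the exact
    VISOR mapping).
    """
    rows, cols = image.shape
    freqs = np.linspace(f_lo, f_hi, rows)[::-1]         # top row = high pitch
    t_col = np.arange(int(fs * duration / cols)) / fs   # samples per column
    tones = np.sin(2 * np.pi * freqs[:, None] * t_col[None, :])
    # brightness-weighted mix of row tones, one chunk per image column
    chunks = [(image[:, c:c + 1] * tones).sum(0) for c in range(cols)]
    audio = np.concatenate(chunks)
    return audio / (np.abs(audio).max() + 1e-12)

img = np.zeros((32, 32)); np.fill_diagonal(img, 1.0)    # descending diagonal
wave = image_to_sound(img)
print(wave.shape, float(wave.max()))
```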
Perceived visual speed constrained by image segmentation
NASA Technical Reports Server (NTRS)
Verghese, P.; Stone, L. S.
1996-01-01
Little is known about how or where the visual system parses the visual scene into objects or surfaces. However, it is generally assumed that the segmentation and grouping of pieces of the image into discrete entities is due to 'later' processing stages, after the 'early' processing of the visual image by local mechanisms selective for attributes such as colour, orientation, depth, and motion. Speed perception is also thought to be mediated by early mechanisms tuned for speed. Here we show that manipulating the way in which an image is parsed changes the way in which local speed information is processed. Manipulations that cause multiple stimuli to appear as parts of a single patch degrade speed discrimination, whereas manipulations that perceptually divide a single large stimulus into parts improve discrimination. These results indicate that processes as early as speed perception may be constrained by the parsing of the visual image into discrete entities.
Sharpening of Hierarchical Visual Feature Representations of Blurred Images.
Abdelhack, Mohamed; Kamitani, Yukiyasu
2018-01-01
The robustness of the visual system lies in its ability to perceive degraded images. This is achieved through interacting bottom-up, recurrent, and top-down pathways that process the visual input in concordance with stored prior information. The interaction mechanism by which they integrate visual input and prior information is still enigmatic. We present a new approach using deep neural network (DNN) representation to reveal the effects of such integration on degraded visual inputs. We transformed measured human brain activity resulting from viewing blurred images to the hierarchical representation space derived from a feedforward DNN. Transformed representations were found to veer toward the original nonblurred image and away from the blurred stimulus image. This indicated deblurring or sharpening in the neural representation, and possibly in our perception. We anticipate these results will help unravel the interplay mechanism between bottom-up, recurrent, and top-down pathways, leading to more comprehensive models of vision.
Stereoscopic augmented reality for laparoscopic surgery.
Kang, Xin; Azizian, Mahdi; Wilson, Emmanuel; Wu, Kyle; Martin, Aaron D; Kane, Timothy D; Peters, Craig A; Cleary, Kevin; Shekhar, Raj
2014-07-01
Conventional laparoscopes provide a flat representation of the three-dimensional (3D) operating field and are incapable of visualizing internal structures located beneath visible organ surfaces. Computed tomography (CT) and magnetic resonance (MR) images are difficult to fuse in real time with laparoscopic views due to the deformable nature of soft-tissue organs. Utilizing emerging camera technology, we have developed a real-time stereoscopic augmented-reality (AR) system for laparoscopic surgery by merging live laparoscopic ultrasound (LUS) with stereoscopic video. The system creates two new visual cues: (1) perception of true depth with improved understanding of 3D spatial relationships among anatomical structures, and (2) visualization of critical internal structures along with a more comprehensive visualization of the operating field. The stereoscopic AR system has been designed for near-term clinical translation with seamless integration into the existing surgical workflow. It is composed of a stereoscopic vision system, a LUS system, and an optical tracker. Specialized software processes streams of imaging data from the tracked devices and registers those in real time. The resulting two ultrasound-augmented video streams (one for the left and one for the right eye) give a live stereoscopic AR view of the operating field. The team conducted a series of stereoscopic AR interrogations of the liver, gallbladder, biliary tree, and kidneys in two swine. The preclinical studies demonstrated the feasibility of the stereoscopic AR system during in vivo procedures. Major internal structures could be easily identified. The system exhibited unobservable latency with acceptable image-to-video registration accuracy. We presented the first in vivo use of a complete system with stereoscopic AR visualization capability. This new capability introduces new visual cues and enhances visualization of the surgical anatomy. The system shows promise to improve the precision and expand the capacity of minimally invasive laparoscopic surgeries.
Image/video understanding systems based on network-symbolic models
NASA Astrophysics Data System (ADS)
Kuvich, Gary
2004-03-01
Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding that is an interpretation of visual information in terms of such knowledge models. Computer simulation models are built on the basis of graphs/networks, and the human brain appears to emulate similar graph/network models. Symbols, predicates and grammars naturally emerge in such networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type relational structure created via multilevel hierarchical compression of visual information. Primary areas provide active fusion of image features on a spatial grid-like structure, where nodes are cortical columns. Spatial logic and topology are naturally present in such structures. Mid-level vision processes, such as perceptual grouping and separation of figure from ground, are special kinds of network transformations. They convert the primary image structure into a set of more abstract ones, which represent objects and the visual scene, making them easier to analyze with higher-level knowledge structures. Higher-level vision phenomena are the results of such analysis. The composition of network-symbolic models combines learning, classification, and analogy together with higher-level model-based reasoning into a single framework, and it works similarly to frames and agents. Computational intelligence methods transform images into model-based knowledge representations. Based on such principles, an image/video understanding system can convert images into knowledge models and resolve uncertainty and ambiguity. This allows creating intelligent computer vision systems for design and manufacturing.
The multiple sclerosis visual pathway cohort: understanding neurodegeneration in MS.
Martínez-Lapiscina, Elena H; Fraga-Pumar, Elena; Gabilondo, Iñigo; Martínez-Heras, Eloy; Torres-Torres, Ruben; Ortiz-Pérez, Santiago; Llufriu, Sara; Tercero, Ana; Andorra, Magi; Roca, Marc Figueras; Lampert, Erika; Zubizarreta, Irati; Saiz, Albert; Sanchez-Dalmau, Bernardo; Villoslada, Pablo
2014-12-15
Multiple Sclerosis (MS) is an immune-mediated disease of the Central Nervous System with two major underlying etiopathogenic processes: inflammation and neurodegeneration. The latter determines the prognosis of this disease. MS is the main cause of non-traumatic disability in middle-aged populations. The MS-VisualPath Cohort was set up to study the neurodegenerative component of MS using advanced imaging techniques by focusing on analysis of the visual pathway in a middle-aged MS population in Barcelona, Spain. We started recruiting patients in the early phase of MS in 2010 and recruitment remains permanently open. All patients undergo a complete neurological and ophthalmological examination including measurements of physical and cognitive disability (Expanded Disability Status Scale, Multiple Sclerosis Functional Composite and neuropsychological tests), disease activity (relapses) and visual function testing (visual acuity, color vision and visual field). The MS-VisualPath protocol also assesses the presence of anxiety and depressive symptoms (Hospital Anxiety and Depression Scale), general quality of life (SF-36) and visual quality of life (25-Item National Eye Institute Visual Function Questionnaire with the 10-Item Neuro-Ophthalmic Supplement). In addition, the imaging protocol includes both retinal (Optical Coherence Tomography and Wide-Field Fundus Imaging) and brain imaging (Magnetic Resonance Imaging). Finally, multifocal Visual Evoked Potentials are used to perform neurophysiological assessment of the visual pathway. The analysis of the visual pathway with advanced imaging and electrophysiological tools, in parallel with clinical information, will provide significant new knowledge regarding neurodegeneration in MS and provide new clinical and imaging biomarkers to help monitor disease progression in these patients.
Helmet-mounted displays in long-range-target visual acquisition
NASA Astrophysics Data System (ADS)
Wilkins, Donald F.
1999-07-01
Aircrews have always sought a tactical advantage within the visual range (WVR) arena -- usually defined as 'see the opponent first.' Even with radar and identification friend or foe (IFF) systems, the pilot who visually acquires his opponent first has a significant advantage. The Helmet Mounted Cueing System (HMCS) equipped with a camera offers an opportunity to correct the problems with previous approaches. By utilizing real-time image enhancement techniques and feeding the image to the pilot on the HMD, the target can be visually acquired well beyond the range provided by the unaided eye. This paper explores the camera and display requirements for such a system and places those requirements within the context of other requirements, such as weight.
Optical images of visible and invisible percepts in the primary visual cortex of primates
Macknik, Stephen L.; Haglund, Michael M.
1999-01-01
We optically imaged a visual masking illusion in primary visual cortex (area V-1) of rhesus monkeys to ask whether activity in the early visual system more closely reflects the physical stimulus or the generated percept. Visual illusions can be a powerful way to address this question because they have the benefit of dissociating the stimulus from perception. We used an illusion in which a flickering target (a bar oriented in visual space) is rendered invisible by two counter-phase flickering bars, called masks, which flank and abut the target. The target and masks, when shown separately, each generated correlated activity on the surface of the cortex. During the illusory condition, however, optical signals generated in the cortex by the target disappeared although the image of the masks persisted. The optical image thus was correlated with perception but not with the physical stimulus. PMID:10611363
Learning receptor positions from imperfectly known motions
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Mulligan, Jeffrey B.
1990-01-01
An algorithm is described for learning image interpolation functions for sensor arrays whose sensor positions are somewhat disordered. The learning is based on failures of translation invariance, so it does not require knowledge of the images being presented to the visual system. Previously reported implementations of the method assumed the visual system to have precise knowledge of the translations. It is demonstrated that translation estimates computed from the imperfectly interpolated images can have enough accuracy to allow the learning process to converge to a correct interpolation.
Lindemann, J P; Kern, R; Michaelis, C; Meyer, P; van Hateren, J H; Egelhaaf, M
2003-03-01
A high-speed panoramic visual stimulation device is introduced which is suitable to analyse visual interneurons during stimulation with rapid image displacements as experienced by fast moving animals. The responses of an identified motion sensitive neuron in the visual system of the blowfly to behaviourally generated image sequences are very complex and hard to predict from the established input circuitry of the neuron. This finding suggests that the computational significance of visual interneurons can only be assessed if they are characterised not only by conventional stimuli as are often used for systems analysis, but also by behaviourally relevant input.
Computer-aided light sheet flow visualization using photogrammetry
NASA Technical Reports Server (NTRS)
Stacy, Kathryn; Severance, Kurt; Childers, Brooks A.
1994-01-01
A computer-aided flow visualization process has been developed to analyze video images acquired from rotating and translating light sheet visualization systems. The computer process integrates a mathematical model for image reconstruction, advanced computer graphics concepts, and digital image processing to provide a quantitative and a visual analysis capability. The image reconstruction model, based on photogrammetry, uses knowledge of the camera and light sheet locations and orientations to project two-dimensional light sheet video images into three-dimensional space. A sophisticated computer visualization package, commonly used to analyze computational fluid dynamics (CFD) results, was chosen to interactively display the reconstructed light sheet images with the numerical surface geometry for the model or aircraft under study. The photogrammetric reconstruction technique and the image processing and computer graphics techniques and equipment are described. Results of the computer-aided process applied to both a wind tunnel translating light sheet experiment and an in-flight rotating light sheet experiment are presented. The capability to compare reconstructed experimental light sheet images with CFD solutions in the same graphics environment is also demonstrated.
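The photogrammetric core of the reconstruction is a ray-plane intersection: each video pixel defines a camera ray, which is intersected with the known light-sheet plane to recover a 3D point. A sketch with an idealized pinhole camera follows; lens distortion and the full calibration workflow are omitted, and the intrinsics below are made up for illustration.

```python
import numpy as np

def pixel_to_sheet_point(pixel, K, cam_pos, cam_R, plane_point, plane_normal):
    """Project one video pixel into 3D by intersecting its camera ray
    with the known light-sheet plane (idealized pinhole geometry).

    pixel        : (u, v) image coordinates
    K            : 3x3 camera intrinsic matrix
    cam_pos      : camera centre in world coordinates
    cam_R        : 3x3 rotation, camera-to-world
    plane_point  : any point on the light sheet
    plane_normal : unit normal of the light sheet
    """
    uv1 = np.array([pixel[0], pixel[1], 1.0])
    ray = cam_R @ np.linalg.solve(K, uv1)    # ray direction in world frame
    s = plane_normal @ (plane_point - cam_pos) / (plane_normal @ ray)
    return cam_pos + s * ray                 # 3D point on the sheet

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
p = pixel_to_sheet_point((400, 200), K,
                         cam_pos=np.zeros(3), cam_R=np.eye(3),
                         plane_point=np.array([0, 0, 2.0]),
                         plane_normal=np.array([0, 0, 1.0]))
print(p.round(3))   # lies on the z = 2 light-sheet plane
```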
Computer-Aided Light Sheet Flow Visualization
NASA Technical Reports Server (NTRS)
Stacy, Kathryn; Severance, Kurt; Childers, Brooks A.
1993-01-01
A computer-aided flow visualization process has been developed to analyze video images acquired from rotating and translating light sheet visualization systems. The computer process integrates a mathematical model for image reconstruction, advanced computer graphics concepts, and digital image processing to provide a quantitative and visual analysis capability. The image reconstruction model, based on photogrammetry, uses knowledge of the camera and light sheet locations and orientations to project two-dimensional light sheet video images into three-dimensional space. A sophisticated computer visualization package, commonly used to analyze computational fluid dynamics (CFD) data sets, was chosen to interactively display the reconstructed light sheet images, along with the numerical surface geometry for the model or aircraft under study. A description is provided of the photogrammetric reconstruction technique, and the image processing and computer graphics techniques and equipment. Results of the computer aided process applied to both a wind tunnel translating light sheet experiment and an in-flight rotating light sheet experiment are presented. The capability to compare reconstructed experimental light sheet images and CFD solutions in the same graphics environment is also demonstrated.
Fuzzy Classification of High Resolution Remote Sensing Scenes Using Visual Attention Features.
Li, Linyi; Xu, Tingbao; Chen, Yun
2017-01-01
In recent years the spatial resolutions of remote sensing images have improved greatly. However, a higher spatial resolution image does not always lead to a better automatic scene classification result. Visual attention is an important characteristic of the human visual system, which can effectively help to classify remote sensing scenes. In this study, a novel visual attention feature extraction algorithm was proposed, which extracts visual attention features through a multiscale process. A fuzzy classification method using visual attention features (FC-VAF) was then developed to perform high resolution remote sensing scene classification. FC-VAF was evaluated using remote sensing scenes from widely used high resolution remote sensing images, including IKONOS, QuickBird, and ZY-3 images. FC-VAF achieved more accurate classification results than the reference methods according to the quantitative accuracy evaluation indices. We also discuss the role and impact of different decomposition levels and different wavelets on classification accuracy. FC-VAF improves the accuracy of high resolution scene classification and therefore advances research in digital image analysis and applications of high resolution remote sensing images.
Cohn, Neil
2014-01-01
How do people make sense of the sequential images in visual narratives like comics? A growing literature of recent research has suggested that this comprehension involves the interaction of multiple systems: The creation of meaning across sequential images relies on a “narrative grammar” that packages conceptual information into categorical roles organized in hierarchic constituents. These images are encapsulated into panels arranged in the layout of a physical page. Finally, how panels frame information can impact both the narrative structure and page layout. Altogether, these systems operate in parallel to construct the Gestalt whole of comprehension of this visual language found in comics. PMID:25071651
Rahman, Md Mahmudur; Antani, Sameer K; Demner-Fushman, Dina; Thoma, George R
2015-10-01
This article presents an approach to biomedical image retrieval by mapping image regions to local concepts, where images are represented in a weighted entropy-based concept feature space. The term "concept" refers to perceptually distinguishable visual patches that are identified locally in image regions and can be mapped to a glossary of imaging terms. The visual significance (e.g., visualness) of concepts is measured as the Shannon entropy of pixel values in image patches and is used to refine the feature vector. Moreover, the system can assist the user in interactively selecting a region-of-interest (ROI) and searching for similar image ROIs. Finally, a spatial verification step is used as a postprocessing step to improve retrieval results based on location information. The hypothesis that such approaches would improve biomedical image retrieval is validated through experiments on two different data sets, which are collected from open access biomedical literature. PMID:26730398
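The entropy weighting described above is straightforward to sketch. A minimal Python version follows; the bin count is chosen arbitrarily, since the paper's binning is not specified here.

```python
import numpy as np

def patch_entropy(patch, bins=32):
    """Shannon entropy (bits) of the pixel values in an image patch."""
    hist, _ = np.histogram(patch, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins before the log
    return float(-np.sum(p * np.log2(p)))

# Weight a concept's contribution by the "visualness" of its patch,
# analogous to the weighted concept feature space described above.
patch = np.random.rand(16, 16)
weight = patch_entropy(patch)
```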
[Spatial domain display for interference image dataset].
Wang, Cai-Ling; Li, Yu-Shan; Liu, Xue-Bin; Hu, Bing-Liang; Jing, Juan-Juan; Wen, Jia
2011-11-01
The need for visualization of imaging interferometer data is pressing for users engaged in image interpretation and information extraction. However, conventional research on visualization has focused only on spectral image datasets in the spectral domain, so the quick display of interference spectral image datasets remains one of the key nodes in interference image processing. The conventional visualization of interference datasets applies classical spectral image display methods after a Fourier transformation. In the present paper, the problem of quickly viewing imaging interferometer data in the image domain is addressed and an algorithm is proposed that simplifies the matter. The Fourier transformation is an obstacle because its computation time is large, and the situation deteriorates further as the dataset size increases. The proposed algorithm, named interference weighted envelopes, frees the dataset from that transformation. The authors choose three interference weighted envelopes based respectively on the Fourier transformation, the features of interference data, and the human visual system. A comparison of the proposed method with conventional methods shows a substantial reduction in display time.
Instrumentation in molecular imaging.
Wells, R Glenn
2016-12-01
In vivo molecular imaging is a challenging task and no single type of imaging system provides an ideal solution. Nuclear medicine techniques like SPECT and PET provide excellent sensitivity but have poor spatial resolution. Optical imaging has excellent sensitivity and spatial resolution, but light photons interact strongly with tissues and so only small animals and targets near the surface can be accurately visualized. CT and MRI have exquisite spatial resolution, but greatly reduced sensitivity. To overcome the limitations of individual modalities, molecular imaging systems often combine individual cameras together, for example, merging nuclear medicine cameras with CT or MRI to allow the visualization of molecular processes with both high sensitivity and high spatial resolution.
NASA Astrophysics Data System (ADS)
Pahlevaninezhad, Hamid; Lee, Anthony; Hohert, Geoffrey; Schwartz, Carley; Shaipanich, Tawimas; Ritchie, Alexander J.; Zhang, Wei; MacAulay, Calum E.; Lam, Stephen; Lane, Pierre M.
2016-03-01
In this work, we present multimodal imaging of peripheral airways in vivo using an endoscopic imaging system capable of co-registered optical coherence tomography and autofluorescence imaging (OCT-AFI). This system employs a 0.9 mm diameter double-clad fiber-optic-based catheter for endoscopic imaging of small peripheral airways. Optical coherence tomography (OCT) can visualize detailed airway morphology in the lung periphery, and autofluorescence imaging (AFI) can visualize fluorescent tissue components such as collagen and elastin, improving the detection of airway lesions. Results from in vivo imaging of 40 patients indicate that OCT and AFI offer complementary information that may increase the ability to identify pulmonary nodules in the lung periphery and improve the safety of biopsy collection by identifying large blood vessels. AFI can rapidly visualize in vivo vascular networks using fast scanning parameters, resulting in vascular-sensitive imaging with fewer breathing/cardiac motion artifacts than Doppler OCT imaging. By providing complementary information about the structure and function of tissue, OCT-AFI may improve site selection during biopsy collection in the lung periphery.
1998-01-01
consisted of a videomicroscopy system and a tactile stimulator system. By using this setup, real-time images from the contact region as well as the... [report section headings: Videomicroscopy system; Tactile stimulator system; Real-time imaging setup; Active and passive touch experiments] ...contact process is an important step. In this study, therefore, a videomicroscopy system was built to visualize the contact region of the fingerpad
Validating tyrosinase homologue MelA as a photoacoustic reporter gene for imaging Escherichia coli
NASA Astrophysics Data System (ADS)
Paproski, Robert J.; Li, Yan; Barber, Quinn; Lewis, John D.; Campbell, Robert; Zemp, Roger
2015-03-01
Antibiotic drug resistance is a major worldwide issue. Development of new therapies against pathogenic bacteria requires appropriate research tools for replicating and characterizing infections. Previously, fluorescence and bioluminescence modalities have been used to image infectious burden in animal models, but scattering significantly limits imaging depth and resolution. We hypothesize that photoacoustic imaging, which has an improved depth-to-resolution ratio, could be useful for visualizing MelA-expressing bacteria, since MelA is a bacterial tyrosinase homologue involved in melanin production. Using an inducible expression system, E. coli expressing MelA were visibly black in liquid culture. Phosphate buffered saline (PBS), MelA-expressing bacteria (at different dilutions in PBS), and chicken embryo blood were injected into plastic tubes, which were imaged using a VisualSonics Vevo LAZR system. Photoacoustic imaging at 6 different wavelengths (680, 700, 750, 800, 850, and 900 nm) enabled spectral de-mixing to distinguish melanin signals from blood. The signal-to-noise ratio of 9x diluted MelA bacteria was 55, suggesting that ~20 bacterial cells could be detected with our system. When MelA bacteria were injected as a 100 μL bolus into a chicken embryo, photoacoustic signals from deoxy- and oxy-hemoglobin as well as MelA-expressing bacteria could be separated and overlaid on an ultrasound image, allowing visualization of the bacterial location. Photoacoustic imaging may be a useful tool for visualizing bacterial infections, and further work incorporating photoacoustic reporters into infectious bacterial strains is warranted.
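Spectral de-mixing of this kind typically amounts to a small non-negative least-squares problem per pixel. A minimal sketch assuming SciPy follows; the absorption-spectra matrix below is invented for illustration and is not the calibration used in the study.

```python
import numpy as np
from scipy.optimize import nnls

wavelengths = [680, 700, 750, 800, 850, 900]  # nm, as in the study
# Columns: made-up absorption spectra for Hb, HbO2, and melanin
E = np.array([
    [2.8, 0.7, 3.1],
    [2.2, 0.8, 2.8],
    [1.4, 1.0, 2.2],
    [0.8, 1.1, 1.7],
    [0.7, 1.2, 1.3],
    [0.9, 1.2, 1.0],
])
pa_signal = np.array([2.9, 2.5, 2.0, 1.5, 1.2, 1.0])  # one per wavelength
# Solve E @ c = pa_signal subject to c >= 0
concentrations, residual = nnls(E, pa_signal)          # [Hb, HbO2, melanin]
```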
NASA Astrophysics Data System (ADS)
Newman, R. L.
2002-12-01
How many images can you display at one time with PowerPoint without getting "postage stamps"? Do you have fantastic datasets that you cannot view because your computer is too slow/small? Do you assume a few 2-D images of a 3-D picture are sufficient? High-end visualization centers can minimize and often eliminate these problems. The new visualization center [http://siovizcenter.ucsd.edu] at Scripps Institution of Oceanography [SIO] immerses users into a virtual world by projecting 3-D images onto a Panoram GVR-120E wall-sized floor-to-ceiling curved screen [7' x 23'] that has 3.2 mega-pixels of resolution. The Infinite Reality graphics subsystem is driven by a single-pipe SGI Onyx 3400 with a system bandwidth of 44 Gbps. The Onyx is powered by 16 MIPS R12K processors and 16 GB of addressable memory. The system is also equipped with transmitters and LCD shutter glasses which permit stereographic 3-D viewing of high-resolution images. This center is ideal for groups of up to 60 people who can simultaneously view these large-format images. A wide range of hardware and software is available, giving the users a totally immersive working environment in which to display, analyze, and discuss large datasets. The system enables simultaneous display of video and audio streams from sources such as SGI megadesktop and stereo megadesktop, S-VHS video, DVD video, and video from a Macintosh or PC. For instance, one-third of the screen might be displaying S-VHS video from a remotely operated vehicle [ROV], while the remaining portion of the screen might be used for an interactive 3-D flight over the same parcel of seafloor. The video and audio combinations using this system are numerous, allowing users to combine and explore data and images in innovative ways, greatly enhancing scientists' ability to visualize, understand and collaborate on complex datasets. In the near future, with the rapid growth in networking speeds in the US, it will be possible for Earth Sciences Departments to collaborate effectively while limiting the amount of physical travel required. This includes porting visualization content to the popular, low-cost Geowall visualization systems, and providing web-based access to databanks filled with stock geoscience visualizations.
Comparison of two laboratory-based systems for evaluation of halos in intraocular lenses
Alexander, Elsinore; Wei, Xin; Lee, Shinwook
2018-01-01
Purpose: Multifocal intraocular lenses (IOLs) can be associated with unwanted visual phenomena, including halos. Predicting potential for halos is desirable when designing new multifocal IOLs. Halo images from 6 IOL models were compared using the Optikos modulation transfer function bench system and a new high dynamic range (HDR) system. Materials and methods: One monofocal, one extended depth of focus, and four multifocal IOLs were evaluated. An off-the-shelf optical bench was used to simulate a distant (>50 m) car headlight and record images. A custom HDR system was constructed using an imaging photometer to simulate headlight images and to measure quantitative halo luminance data. A metric was developed to characterize halo luminance properties. Clinical relevance was investigated by correlating halo measurements to visual outcomes questionnaire data. Results: The Optikos system produced halo images useful for visual comparisons; however, measurements were relative and not quantitative. The HDR halo system provided objective and quantitative measurements used to create a metric from the area under the curve (AUC) of the logarithmic normalized halo profile. This proposed metric differentiated between IOL models, and linear regression analysis found strong correlations between AUC and subjective clinical ratings of halos. Conclusion: The HDR system produced quantitative, preclinical metrics that correlated to patients' subjective perception of halos. PMID:29503526
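A minimal sketch of an AUC-style halo metric of the kind described above; the peak normalization and radial sampling are assumptions, and the paper's exact definition is not reproduced here.

```python
import numpy as np

def halo_auc(radii_deg, luminance):
    """AUC of the logarithmic, peak-normalized halo luminance profile."""
    profile = np.log10(luminance / luminance.max())
    # Trapezoidal integration over the radial extent of the halo
    return float(np.sum((profile[1:] + profile[:-1]) / 2
                        * np.diff(radii_deg)))

radii = np.linspace(0.1, 2.0, 50)        # angular distance from source, deg
lum = 1.0 / (1.0 + (radii / 0.3) ** 2)   # synthetic halo falloff
metric = halo_auc(radii, lum)
```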
Off-surface infrared flow visualization
NASA Technical Reports Server (NTRS)
Manuel, Gregory S. (Inventor); Obara, Clifford J. (Inventor); Daryabeigi, Kamran (Inventor); Alderfer, David W. (Inventor)
1993-01-01
A method for visualizing off-surface flows is provided. The method consists of releasing a gas with infrared absorbing and emitting characteristics into a fluid flow and imaging the flow with an infrared imaging system. This method allows for visualization of off-surface fluid flow in-flight. The novelty of this method is found in providing an apparatus for flow visualization which is contained within the aircraft so as not to disrupt the airflow around the aircraft, is effective at various speeds and altitudes, and is longer-lasting than previous methods of flow visualization.
System and method for image mapping and visual attention
NASA Technical Reports Server (NTRS)
Peters, II, Richard A. (Inventor)
2010-01-01
A method is described for mapping dense sensory data to a Sensory Ego Sphere (SES). Methods are also described for finding and ranking areas of interest in the images that form a complete visual scene on an SES. Further, attentional processing of image data is best done by performing attentional processing on individual full-size images from the image sequence, mapping each attentional location to the nearest node, and then summing attentional locations at each node.
System and method for image mapping and visual attention
NASA Technical Reports Server (NTRS)
Peters, II, Richard A. (Inventor)
2011-01-01
A method is described for mapping dense sensory data to a Sensory Ego Sphere (SES). Methods are also described for finding and ranking areas of interest in the images that form a complete visual scene on an SES. Further, attentional processing of image data is best done by performing attentional processing on individual full-size images from the image sequence, mapping each attentional location to the nearest node, and then summing all attentional locations at each node.
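As a minimal sketch of the mapping step the two patent records above describe, with SES node directions and attended locations represented as unit vectors (an assumption made for illustration), nearest-node accumulation can be written as:

```python
import numpy as np

def accumulate_attention(nodes, hits):
    """Sum attentional locations at their nearest Sensory Ego Sphere node.

    nodes: (N, 3) unit vectors, one per SES node direction.
    hits:  (M, 3) unit vectors, attended locations from the image sequence.
    """
    counts = np.zeros(len(nodes))
    for h in hits:
        counts[np.argmax(nodes @ h)] += 1  # nearest node = max cosine
    return counts

nodes = np.random.randn(100, 3)
nodes /= np.linalg.norm(nodes, axis=1, keepdims=True)
hits = nodes[np.random.randint(0, 100, 25)]  # synthetic attended directions
per_node_totals = accumulate_attention(nodes, hits)
```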
Design and implementation of a PC-based image-guided surgical system.
Stefansic, James D; Bass, W Andrew; Hartmann, Steven L; Beasley, Ryan A; Sinha, Tuhin K; Cash, David M; Herline, Alan J; Galloway, Robert L
2002-11-01
In interactive, image-guided surgery, current physical space position in the operating room is displayed on various sets of medical images used for surgical navigation. We have developed a PC-based surgical guidance system (ORION) which synchronously displays surgical position on up to four image sets and updates them in real time. There are three essential components which must be developed for this system: (1) accurately tracked instruments; (2) accurate registration techniques to map physical space to image space; and (3) methods to display and update the image sets on a computer monitor. For each of these components, we have developed a set of dynamic link libraries in MS Visual C++ 6.0 supporting various hardware tools and software techniques. Surgical instruments are tracked in physical space using an active optical tracking system. Several of the different registration algorithms were developed with a library of robust math kernel functions, and the accuracy of all registration techniques was thoroughly investigated. Our display was developed using the Win32 API for windows management and tomographic visualization, a frame grabber for live video capture, and OpenGL for visualization of surface renderings. We have begun to use this current implementation of our system for several surgical procedures, including open and minimally invasive liver surgery.
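The abstract does not state which registration algorithms ORION implements, but the standard point-based (fiducial) case can be sketched with the classic SVD solution for a least-squares rigid transform; this is a generic textbook method, not necessarily the system's own.

```python
import numpy as np

def rigid_register(physical_pts, image_pts):
    """Least-squares rigid transform (R, t) mapping physical -> image space.

    physical_pts, image_pts: (N, 3) arrays of corresponding fiducials.
    Classic Arun/Horn-style SVD solution.
    """
    p_mean, q_mean = physical_pts.mean(0), image_pts.mean(0)
    H = (physical_pts - p_mean).T @ (image_pts - q_mean)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = q_mean - R @ p_mean
    return R, t
```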
MR imaging of the fetal musculoskeletal system.
Nemec, Stefan Franz; Nemec, Ursula; Brugger, Peter C; Bettelheim, Dieter; Rotmensch, Siegfried; Graham, John M; Rimoin, David L; Prayer, Daniela
2012-03-01
Magnetic resonance imaging (MRI) is increasingly used, in addition to standard ultrasonography, for the diagnosis of abnormalities in utero. Recent studies have drawn attention to technical refinements of MRI for visualizing the fetal bones and muscles. Beyond commonly used T2-weighted MRI, echoplanar, thick-slab T2-weighted, dynamic, and three-dimensional MRI techniques are beginning to provide new imaging insights into the normal and pathological musculoskeletal system of the fetus. This review emphasizes the potential significance of MRI in the visualization of the fetal musculoskeletal system. © 2012 John Wiley & Sons, Ltd.
Client-side Medical Image Colorization in a Collaborative Environment.
Virag, Ioan; Stoicu-Tivadar, Lăcrămioara; Crişan-Vida, Mihaela
2015-01-01
The paper presents an application related to collaborative medicine using a browser-based medical visualization system, with focus on the medical image colorization process and the underlying open source web development technologies involved. Browser-based systems allow physicians to share medical data with their remotely located counterparts or medical students, assisting them during patient diagnosis, treatment monitoring, surgery planning, or for educational purposes. This approach brings forth the advantage of ubiquity: the system can be accessed from any device in order to process the images, removing the dependence on a specific proprietary operating system. The current work starts with the processing of DICOM (Digital Imaging and Communications in Medicine) files and ends with the rendering of the resulting bitmap images on an HTML5 (fifth revision of the HyperText Markup Language) canvas element. The application improves image visualization by emphasizing different tissue densities.
Regions of mid-level human visual cortex sensitive to the global coherence of local image patches.
Mannion, Damien J; Kersten, Daniel J; Olman, Cheryl A
2014-08-01
The global structural arrangement and spatial layout of the visual environment must be derived from the integration of local signals represented in the lower tiers of the visual system. This interaction between the spatially local and global properties of visual stimulation underlies many of our visual capacities, and how this is achieved in the brain is a central question for visual and cognitive neuroscience. Here, we examine the sensitivity of regions of the posterior human brain to the global coordination of spatially displaced naturalistic image patches. We presented observers with image patches in two circular apertures to the left and right of central fixation, with the patches drawn from either the same (coherent condition) or different (noncoherent condition) extended image. Using fMRI at 7T (n = 5), we find that global coherence affected signal amplitude in regions of dorsal mid-level cortex. Furthermore, we find that extensive regions of mid-level visual cortex contained information in their local activity pattern that could discriminate coherent and noncoherent stimuli. These findings indicate that the global coordination of local naturalistic image information has important consequences for the processing in human mid-level visual cortex.
Integrating visual learning within a model-based ATR system
NASA Astrophysics Data System (ADS)
Carlotto, Mark; Nebrich, Mark
2017-05-01
Automatic target recognition (ATR) systems, like human photo-interpreters, rely on a variety of visual information for detecting, classifying, and identifying manmade objects in aerial imagery. We describe the integration of a visual learning component into the Image Data Conditioner (IDC) for target/clutter and other visual classification tasks. The component is based on an implementation of a model of the visual cortex developed by Serre, Wolf, and Poggio. Visual learning in an ATR context requires the ability to recognize objects independent of location, scale, and rotation. Our method uses IDC to extract, rotate, and scale image chips at candidate target locations. A bootstrap learning method effectively extends the operation of the classifier beyond the training set and provides a measure of confidence. We show how the classifier can be used to learn other features that are difficult to compute from imagery such as target direction, and to assess the performance of the visual learning process itself.
A novel role for visual perspective cues in the neural computation of depth.
Kim, HyungGoo R; Angelaki, Dora E; DeAngelis, Gregory C
2015-01-01
As we explore a scene, our eye movements add global patterns of motion to the retinal image, complicating visual motion produced by self-motion or moving objects. Conventionally, it has been assumed that extraretinal signals, such as efference copy of smooth pursuit commands, are required to compensate for the visual consequences of eye rotations. We consider an alternative possibility: namely, that the visual system can infer eye rotations from global patterns of image motion. We visually simulated combinations of eye translation and rotation, including perspective distortions that change dynamically over time. We found that incorporating these 'dynamic perspective' cues allowed the visual system to generate selectivity for depth sign from motion parallax in macaque cortical area MT, a computation that was previously thought to require extraretinal signals regarding eye velocity. Our findings suggest neural mechanisms that analyze global patterns of visual motion to perform computations that require knowledge of eye rotations.
Visual tracking for multi-modality computer-assisted image guidance
NASA Astrophysics Data System (ADS)
Basafa, Ehsan; Foroughi, Pezhman; Hossbach, Martin; Bhanushali, Jasmine; Stolka, Philipp
2017-03-01
With optical cameras, many interventional navigation tasks previously relying on EM, optical, or mechanical guidance can be performed robustly, quickly, and conveniently. We developed a family of novel guidance systems based on wide-spectrum cameras and vision algorithms for real-time tracking of interventional instruments and multi-modality markers. These navigation systems support the localization of anatomical targets, support placement of imaging probe and instruments, and provide fusion imaging. The unique architecture - low-cost, miniature, in-hand stereo vision cameras fitted directly to imaging probes - allows for an intuitive workflow that fits a wide variety of specialties such as anesthesiology, interventional radiology, interventional oncology, emergency medicine, urology, and others, many of which see increasing pressure to utilize medical imaging and especially ultrasound, but have yet to develop the requisite skills for reliable success. We developed a modular system, consisting of hardware (the Optical Head containing the mini cameras) and software (components for visual instrument tracking with or without specialized visual features, fully automated marker segmentation from a variety of 3D imaging modalities, visual observation of meshes of widely separated markers, instant automatic registration, and target tracking and guidance on real-time multi-modality fusion views). From these components, we implemented a family of distinct clinical and pre-clinical systems (for combinations of ultrasound, CT, CBCT, and MRI), most of which have international regulatory clearance for clinical use. We present technical and clinical results on phantoms, ex- and in-vivo animals, and patients.
Choi, Hyungsuk; Choi, Woohyuk; Quan, Tran Minh; Hildebrand, David G C; Pfister, Hanspeter; Jeong, Won-Ki
2014-12-01
As the size of image data from microscopes and telescopes increases, the need for high-throughput processing and visualization of large volumetric data has become more pressing. At the same time, many-core processors and GPU accelerators are commonplace, making high-performance distributed heterogeneous computing systems affordable. However, effectively utilizing GPU clusters is difficult for novice programmers, and even experienced programmers often fail to fully leverage the computing power of new parallel architectures due to their steep learning curve and programming complexity. In this paper, we propose Vivaldi, a new domain-specific language for volume processing and visualization on distributed heterogeneous computing systems. Vivaldi's Python-like grammar and parallel processing abstractions provide flexible programming tools for non-experts to easily write high-performance parallel computing code. Vivaldi provides commonly used functions and numerical operators for customized visualization and high-throughput image processing applications. We demonstrate the performance and usability of Vivaldi on several examples ranging from volume rendering to image segmentation.
Lee, Kang-Hoon; Shin, Kyung-Seop; Lim, Debora; Kim, Woo-Chan; Chung, Byung Chang; Han, Gyu-Bum; Roh, Jeongkyu; Cho, Dong-Ho; Cho, Kiho
2015-07-01
The genomes of living organisms are populated with pleomorphic repetitive elements (REs) of varying densities. Our hypothesis that genomic RE landscapes are species/strain/individual-specific was implemented into the Genome Signature Imaging system to visualize and compute the RE-based signatures of any genome. Following the occurrence profiling of 5-nucleotide REs/words, the information from top-50 frequency words was transformed into a genome-specific signature and visualized as Genome Signature Images (GSIs), using a CMYK scheme. An algorithm for computing distances among GSIs was formulated using the GSIs' variables (word identity, frequency, and frequency order). The utility of the GSI-distance computation system was demonstrated with control genomes. GSI-based computation of genome-relatedness among 1766 microbes (117 archaea and 1649 bacteria) identified their clustering patterns; although the majority paralleled the established classification, some did not. The Genome Signature Imaging system, with its visualization and distance computation functions, enables genome-scale evolutionary studies involving numerous genomes with varying sizes. Copyright © 2015 Elsevier Inc. All rights reserved.
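The word-profiling step lends itself to a compact sketch. Below is a minimal Python version of 5-nucleotide word counting with a top-50 cut; the CMYK image rendering and the GSI distance computation are omitted.

```python
from collections import Counter

def genome_signature(seq, k=5, top=50):
    """Top-`top` k-nucleotide word frequencies as a genome signature.

    Mirrors the occurrence profiling described above: count every
    overlapping k-mer, then keep the most frequent words.
    """
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return [(word, n / total) for word, n in counts.most_common(top)]

signature = genome_signature("ATGCGTACGTTAGC" * 100)
```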
The Application of the Montage Image Mosaic Engine To The Visualization Of Astronomical Images
NASA Astrophysics Data System (ADS)
Berriman, G. Bruce; Good, J. C.
2017-05-01
The Montage Image Mosaic Engine was designed as a scalable toolkit, written in C for performance and portability across *nix platforms, that assembles FITS images into mosaics. This code is freely available and has been widely used in the astronomy and IT communities for research, product generation, and for developing next-generation cyber-infrastructure. Recently, it has begun finding applicability in the field of visualization. This development has come about because the toolkit design allows easy integration into scalable systems that process data for subsequent visualization in a browser or client. The toolkit includes a visualization tool suitable for automation and for integration into Python: mViewer creates, with a single command, complex multi-color images overlaid with coordinate displays, labels, and observation footprints, and includes an adaptive image histogram equalization method that preserves the structure of a stretched image over its dynamic range. The Montage toolkit contains functionality originally developed to support the creation and management of mosaics, but which also offers value to visualization: a background rectification algorithm that reveals the faint structure in an image, and tools for creating cutout and downsampled versions of large images. Version 5 of Montage offers support for visualizing data written in the HEALPix sky-tessellation scheme, and functionality for processing and organizing images to comply with the TOAST sky-tessellation scheme required for consumption by the World Wide Telescope (WWT). Four online tutorials allow readers to reproduce and extend all the visualizations presented in this paper.
A Spot Reminder System for the Visually Impaired Based on a Smartphone Camera
Takizawa, Hotaka; Orita, Kazunori; Aoyagi, Mayumi; Ezaki, Nobuo; Mizuno, Shinji
2017-01-01
The present paper proposes a smartphone-camera-based system to assist visually impaired users in recalling their memories related to important locations, called spots, that they visited. The memories are recorded as voice memos, which can be played back when the users return to the spots. Spot-to-spot correspondence is determined by image matching based on the scale invariant feature transform. The main contribution of the proposed system is to allow visually impaired users to associate arbitrary voice memos with arbitrary spots. The users do not need any special devices or systems except smartphones and do not need to remember the spots where the voice memos were recorded. In addition, the proposed system can identify spots in environments that are inaccessible to the global positioning system. The proposed system has been evaluated by two experiments: image matching tests and a user study. The experimental results suggested the effectiveness of the system to help visually impaired individuals, including blind individuals, recall information about regularly-visited spots. PMID:28165403
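A minimal sketch of SIFT-based spot matching in the spirit of the system above, assuming OpenCV; the ratio and match-count thresholds are illustrative, not the paper's values.

```python
import cv2

def same_spot(img1, img2, ratio=0.75, min_good=10):
    """Decide whether two grayscale photos show the same spot via SIFT."""
    sift = cv2.SIFT_create()
    _, des1 = sift.detectAndCompute(img1, None)
    _, des2 = sift.detectAndCompute(img2, None)
    if des1 is None or des2 is None:
        return False                       # no features found
    pairs = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in (p for p in pairs if len(p) == 2)
            if m.distance < ratio * n.distance]  # Lowe's ratio test
    return len(good) >= min_good
```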
NASA Technical Reports Server (NTRS)
Poulton, C. E.
1975-01-01
Comparative statistics were presented on the capability of LANDSAT-1 and three of the Skylab remote sensing systems (S-190A, S-190B, S-192) for the recognition and inventory of analogous natural vegetations and landscape features important in resource allocation and management. Two analogous regions presenting vegetational zonation from salt desert to alpine conditions above the timberline were observed, emphasizing the visual interpretation mode in the investigation. An hierarchical legend system was used as the basic classification of all land surface features. Comparative tests were run on image identifiability with the different sensor systems, and mapping and interpretation tests were made both in monocular and stereo interpretation with all systems except the S-192. Significant advantage was found in the use of stereo from space when image analysis is by visual or visual-machine-aided interactive systems. Some cost factors in mapping from space are identified. The various image types are compared and an operational system is postulated.
NASA Astrophysics Data System (ADS)
Edwards, Warren S.; Ritchie, Cameron J.; Kim, Yongmin; Mack, Laurence A.
1995-04-01
We have developed a three-dimensional (3D) imaging system using power Doppler (PD) ultrasound (US). This system can be used for visualizing and analyzing the vascular anatomy of parenchymal organs. To create the 3D PD images, we acquired a series of two-dimensional PD images from a commercial US scanner and recorded the position and orientation of each image using a 3D magnetic position sensor. Three-dimensional volumes were reconstructed using specially designed software and then volume rendered for display. We assessed the feasibility and geometric accuracy of our system with various flow phantoms. The system was then tested on a volunteer by scanning a transplanted kidney. The reconstructed volumes of the flow phantom contained less than 1 mm of geometric distortion and the 3D images of the transplanted kidney depicted the segmental, arcuate, and interlobar vessels.
Hadwiger, M; Beyer, J; Jeong, Won-Ki; Pfister, H
2012-12-01
This paper presents the first volume visualization system that scales to petascale volumes imaged as a continuous stream of high-resolution electron microscopy images. Our architecture scales to dense, anisotropic petascale volumes because it: (1) decouples construction of the 3D multi-resolution representation required for visualization from data acquisition, and (2) decouples sample access time during ray-casting from the size of the multi-resolution hierarchy. Our system is designed around a scalable multi-resolution virtual memory architecture that handles missing data naturally, does not pre-compute any 3D multi-resolution representation such as an octree, and can accept a constant stream of 2D image tiles from the microscopes. A novelty of our system design is that it is visualization-driven: we restrict most computations to the visible volume data. Leveraging the virtual memory architecture, missing data are detected during volume ray-casting as cache misses, which are propagated backwards for on-demand out-of-core processing. 3D blocks of volume data are only constructed from 2D microscope image tiles when they have actually been accessed during ray-casting. We extensively evaluate our system design choices with respect to scalability and performance, compare to previous best-of-breed systems, and illustrate the effectiveness of our system for real microscopy data from neuroscience.
Visual information processing II; Proceedings of the Meeting, Orlando, FL, Apr. 14-16, 1993
NASA Technical Reports Server (NTRS)
Huck, Friedrich O. (Editor); Juday, Richard D. (Editor)
1993-01-01
Various papers on visual information processing are presented. Individual topics addressed include: aliasing as noise, satellite image processing using a Hamming neural network, an edge-detection method using visual perception, adaptive vector median filters, design of a reading test for low vision, image warping, spatial transformation architectures, an automatic image-enhancement method, redundancy reduction in image coding, lossless gray-scale image compression by predictive GDF, information efficiency in visual communication, optimizing JPEG quantization matrices for different applications, use of forward error correction to maintain image fidelity, and the effect of Peano scanning on image compression. Also discussed are: computer vision for autonomous robotics in space, an optical processor for zero-crossing edge detection, fractal-based image edge detection, simulation of the neon spreading effect by bandpass filtering, the wavelet transform (WT) on parallel SIMD architectures, nonseparable 2D wavelet image representation, adaptive image halftoning based on the WT, wavelet analysis of global warming, use of the WT for signal detection, perfect-reconstruction two-channel rational filter banks, N-wavelet coding for pattern classification, simulation of images of natural objects, and number-theoretic coding for iconic systems.
Specialized Computer Systems for Environment Visualization
NASA Astrophysics Data System (ADS)
Al-Oraiqat, Anas M.; Bashkov, Evgeniy A.; Zori, Sergii A.
2018-06-01
The need for real-time image generation of landscapes arises in various fields as part of the tasks solved by virtual and augmented reality systems, as well as geographic information systems. Such systems provide opportunities for collecting, storing, analyzing and graphically visualizing geographic data. Algorithmic and hardware/software tools for increasing the realism and efficiency of environment visualization in 3D visualization systems are proposed. This paper discusses a modified path tracing algorithm with a two-level hierarchy of bounding volumes that finds intersections with axis-aligned bounding boxes. The proposed algorithm eliminates branching and is hence more suitable for implementation on multi-threaded CPUs and GPUs. A modified ROAM algorithm is used to solve the problem of qualitative visualization of reliefs and landscapes. The algorithm is implemented on parallel systems: clusters and Compute Unified Device Architecture networks. Results show that the implementation on MPI clusters is more efficient than on graphics processing units/graphics processing clusters and allows real-time synthesis. The organization and algorithms of a parallel GPU system for 3D pseudo-stereo image/video synthesis are also proposed. After analyzing the feasibility of implementing each stage on a parallel GPU architecture, 3D pseudo-stereo synthesis is performed. An experimental prototype of a specialized hardware-software system for 3D pseudo-stereo imaging and video was developed on the CPU/GPU. The experimental results show that the proposed adaptation of 3D pseudo-stereo imaging to the architecture of GPU systems is efficient, and that it accelerates the computational procedures of 3D pseudo-stereo synthesis for the anaglyph and anamorphic formats of the 3D stereo frame without additional optimization procedures. The acceleration is on average 11 and 54 times for the test GPUs.
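The branch-elimination idea for the AABB intersection test is the standard "slab" formulation, sketched below in Python for clarity; a real implementation would live in the CUDA/GPU kernel, and the geometry here is invented.

```python
import numpy as np

def ray_aabb_hit(origin, inv_dir, box_min, box_max):
    """Branchless slab test for ray / axis-aligned bounding box intersection.

    inv_dir is the precomputed component-wise reciprocal of the ray
    direction; min/max operations replace per-axis branching, which is
    what makes the test friendly to SIMD/GPU execution.
    """
    t1 = (box_min - origin) * inv_dir
    t2 = (box_max - origin) * inv_dir
    t_near = np.max(np.minimum(t1, t2))   # latest entry across slabs
    t_far = np.min(np.maximum(t1, t2))    # earliest exit across slabs
    return t_far >= max(t_near, 0.0)

hit = ray_aabb_hit(np.zeros(3),
                   1.0 / np.array([0.577, 0.577, 0.577]),
                   np.array([1.0, 1.0, 1.0]),
                   np.array([2.0, 2.0, 2.0]))
```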
Regional information guidance system based on hypermedia concept
NASA Astrophysics Data System (ADS)
Matoba, Hiroshi; Hara, Yoshinori; Kasahara, Yutako
1990-08-01
A regional information guidance system has been developed on an image workstation. The two main features of this system are a hypermedia data structure and a friendly visual interface realized by a full-color frame memory system. Because the hypermedia data structure manages regional information such as maps, pictures, and explanations of points of interest, users can retrieve this information item by item as their interests change. For example, users can retrieve the explanation of a picture through the link between pictures and text explanations. Users can also traverse from one document to another by using keywords as cross-reference indices. The second feature is the use of a full-color, high resolution, wide-space frame memory for visual interface design. This frame memory system enables real-time operation on image data and natural scene representation. The system also provides a half-tone representation function which enables fade-in/out presentations. These fade-in/out functions, used in displaying and erasing menus and image data, make the visual interface easy on the eyes. The system we have developed is a typical example of a multimedia application. We expect the image workstation will play an important role as a platform for multimedia applications.
Visual just noticeable differences
NASA Astrophysics Data System (ADS)
Nankivil, Derek; Chen, Minghan; Wooley, C. Benjamin
2018-02-01
A visual just noticeable difference (VJND) is the amount of change in either an image (e.g. a photographic print) or in vision (e.g. due to a change in refractive power of a vision correction device or visually coupled optical system) that is just noticeable when compared with the prior state. Numerous theoretical and clinical studies have been performed to determine the amount of change in various visual inputs (power, spherical aberration, astigmatism, etc.) that result in a just noticeable visual change. Each of these approaches, in defining a VJND, relies on the comparison of two visual stimuli. The first stimulus is the nominal or baseline state and the second is the perturbed state that results in a VJND. Using this commonality, we converted each result to the change in the area of the modulation transfer function (AMTF) to provide a more fundamental understanding of what results in a VJND. We performed an analysis of the wavefront criteria from basic optics, the image quality metrics, and clinical studies testing various visual inputs, showing that fractional changes in AMTF resulting in one VJND range from 0.025 to 0.075. In addition, cycloplegia appears to desensitize the human visual system so that a much larger change in the retinal image is required to give a VJND. This finding may be of great import for clinical vision tests. Finally, we present applications of the VJND model for the determination of threshold ocular aberrations and manufacturing tolerances of visually coupled optical systems.
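A worked example of the AMTF criterion described above, with synthetic MTF curves standing in for measured ones; the exponential forms and frequency cutoff are illustrative only.

```python
import numpy as np

def area_under(f, y):
    """Trapezoidal area under a sampled curve (here, an MTF)."""
    return float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(f)))

f = np.linspace(0.0, 60.0, 600)        # spatial frequency, cycles/degree
mtf_base = np.exp(-f / 20.0)           # synthetic baseline MTF
mtf_pert = np.exp(-f / 19.0)           # slightly degraded state
delta = (abs(area_under(f, mtf_pert) - area_under(f, mtf_base))
         / area_under(f, mtf_base))
# Fractional AMTF changes of roughly 0.025-0.075 correspond to one VJND
within_vjnd_band = 0.025 <= delta <= 0.075
```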
NASA Astrophysics Data System (ADS)
Kuvychko, Igor
2001-10-01
Vision is a part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. A computer vision system based on such principles requires a unifying representation of perceptual and conceptual information. Computer simulation models are built on the basis of graphs/networks. The ability of the human brain to emulate similar graph/network models has been found, which implies a very important paradigm shift in our knowledge about the brain: from neural networks to cortical software. Starting from the primary visual areas, the brain analyzes an image as a graph-type spatial structure. Primary areas provide active fusion of image features on a spatial grid-like structure, where the nodes are cortical columns. The spatial combination of different neighboring features cannot be described as a statistical/integral characteristic of the analyzed region, but uniquely characterizes the region itself. Spatial logic and topology are naturally present in such structures. Mid-level vision processes like clustering, perceptual grouping, multilevel hierarchical compression, and separation of figure from ground are special kinds of graph/network transformations. They convert the low-level image structure into a set of more abstract ones, which represent objects and the visual scene, making them easy to analyze with higher-level knowledge structures. Higher-level vision phenomena like shape from shading and occlusion are results of such analysis. This approach provides an opportunity not only to explain frequently unexplainable results in cognitive science, but also to create intelligent computer vision systems that simulate perceptual processes in both the what and where visual pathways. Such systems can open new horizons for the robotic and computer vision industries.
Imaging of the human choroid with a 1.7 MHz A-scan rate FDML swept source OCT system
NASA Astrophysics Data System (ADS)
Gorczynska, I.; Migacz, J. V.; Jonnal, R.; Zawadzki, R. J.; Poddar, R.; Werner, J. S.
2017-02-01
We demonstrate OCT angiography (OCTA) and Doppler OCT imaging of the choroid in the eyes of two healthy volunteers and in a geographic atrophy case. We show that visualization of specific choroidal layers requires selection of appropriate OCTA methods. We investigate how imaging speed, B-scan averaging, and scanning density influence visualization of various choroidal vessels. We introduce spatial power spectrum analysis of OCT en face angiographic projections as a method for quantitative analysis of choriocapillaris morphology. We explore the possibility of Doppler OCT imaging to provide information about the directionality of blood flow in choroidal vessels. To achieve these goals, we developed OCT systems utilizing an FDML laser operating at a 1.7 MHz sweep rate, at 1060 nm center wavelength, and with 7.5 μm axial imaging resolution. A correlation mapping OCTA method was implemented for visualization of the vessels. The joint Spectral and Time domain OCT (STdOCT) technique was used for Doppler OCT imaging.
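Correlation-mapping angiography is, at its core, a windowed correlation between repeated B-scans of the same location: low correlation between repeats indicates moving blood. A minimal, unoptimized sketch with an arbitrarily chosen window size:

```python
import numpy as np

def correlation_map(bscan_a, bscan_b, win=5):
    """Decorrelation map between two repeated OCT B-scans."""
    pad = win // 2
    a = np.pad(bscan_a.astype(float), pad, mode="reflect")
    b = np.pad(bscan_b.astype(float), pad, mode="reflect")
    cmap = np.zeros(bscan_a.shape, dtype=float)
    for i in range(cmap.shape[0]):
        for j in range(cmap.shape[1]):
            wa = a[i:i + win, j:j + win].ravel()
            wb = b[i:i + win, j:j + win].ravel()
            cmap[i, j] = np.corrcoef(wa, wb)[0, 1]
    return 1.0 - cmap   # high values where scatterers moved (flow)

flow = correlation_map(np.random.rand(64, 64), np.random.rand(64, 64))
```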
NASA Astrophysics Data System (ADS)
Zhang, Wenlan; Luo, Ting; Jiang, Gangyi; Jiang, Qiuping; Ying, Hongwei; Lu, Jing
2016-06-01
Visual comfort assessment (VCA) for stereoscopic images is a particularly significant yet challenging task in the 3D quality-of-experience research field. Although subjective assessment by human observers is known as the most reliable way to evaluate experienced visual discomfort, it is time-consuming and non-systematic. Therefore, it is of great importance to develop objective VCA approaches that can faithfully predict the degree of visual discomfort as human beings do. In this paper, a novel two-stage objective VCA framework is proposed. The main contribution of this study is that the important visual attention mechanism of the human visual system is incorporated for visual-comfort-aware feature extraction. Specifically, in the first stage, we construct an adaptive 3D visual saliency detection model to derive the saliency map of a stereoscopic image, and then a set of saliency-weighted disparity statistics are computed and combined to form a single feature vector representing the stereoscopic image in terms of visual comfort. In the second stage, this feature vector is fused into a single visual comfort score by a random forest algorithm. Experimental results on two benchmark databases confirm the superior performance of the proposed approach.
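A minimal sketch of the two stages under stated assumptions: made-up disparity and saliency arrays, a small illustrative feature set rather than the paper's full statistics, and scikit-learn's random forest in place of the paper's exact configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def comfort_features(disparity, saliency):
    """Saliency-weighted disparity statistics (illustrative subset)."""
    w = saliency / saliency.sum()
    mean = float(np.sum(w * disparity))
    std = float(np.sqrt(np.sum(w * (disparity - mean) ** 2)))
    return [mean, std, float(disparity.max()), float(disparity.min())]

# Stage 2: regress subjective comfort scores from the feature vectors
X = [comfort_features(np.random.randn(64, 64), np.random.rand(64, 64))
     for _ in range(100)]
y = np.random.uniform(1, 5, 100)       # placeholder MOS-style labels
model = RandomForestRegressor(n_estimators=100).fit(X, y)
```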
Early screening of an infant's visual system
NASA Astrophysics Data System (ADS)
Costa, Manuel F. M.; Jorge, Jorge M.
1999-06-01
It is of utmost importance to the development of the child's visual system that she perceives clear, focused retinal images. Furthermore, if refractive problems are not corrected in due time, amblyopia may occur. Myopia and hyperopia cause important problems later only when they are significantly large; for astigmatism (rather frequent in infants) and anisometropia, however, the problems tend to be more stringent. The early evaluation of the visual status of human infants is thus of critical importance. Photorefraction is a convenient technique for this kind of subject. Essentially, a light beam is delivered into the eyes; it is refracted by the ocular media, strikes the retina (focusing or not), reflects off, and is collected by a camera. The photorefraction setup we established, using new technological breakthroughs in the fields of imaging devices, digital image processing, and fiber optics, allows a fast, noninvasive evaluation of children's visual status (refractive errors, accommodation, strabismus, ...). Results of the visual screening of a group of at-risk children, descendants of blind or amblyopic individuals, will be presented.
A secure online image trading system for untrusted cloud environments.
Munadi, Khairul; Arnia, Fitri; Syaryadhi, Mohd; Fujiyoshi, Masaaki; Kiya, Hitoshi
2015-01-01
In conventional image trading systems, images are usually stored unprotected on a server, rendering them vulnerable to untrusted server providers and malicious intruders. This paper proposes a conceptual image trading framework that enables secure storage and retrieval over Internet services. The process involves three parties: an image publisher, a server provider, and an image buyer. The aim is to facilitate secure storage and retrieval of original images for commercial transactions, while preventing untrusted server providers and unauthorized users from gaining access to the true contents. The framework exploits the Discrete Cosine Transform (DCT) coefficients and the moment invariants of images. Original images are visually protected in the DCT domain and stored on a repository server. Small representations of the original images, called thumbnails, are generated and made publicly accessible for browsing. When a buyer is interested in a thumbnail, he/she sends a query to retrieve the visually protected image. The thumbnails and protected images are matched using the DC component of the DCT coefficients and the moment-invariant features. After the matching process, the server returns the corresponding protected image to the buyer. However, the image remains visually protected unless a key is granted. Our target application is the online market, where publishers sell their stock images over the Internet using public cloud servers.
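A rough sketch of the matching features named above, assuming OpenCV: the DC component of the DCT plus Hu moment invariants. The visual-protection scheme itself and the paper's exact matching rule are omitted; note that cv2.dct requires even image dimensions.

```python
import cv2
import numpy as np

def thumbnail_descriptor(gray):
    """DCT DC component plus Hu moment invariants as matching features.

    gray: single-channel uint8 image with even width and height.
    """
    dct = cv2.dct(np.float32(gray))
    dc = float(dct[0, 0])                         # DC coefficient
    hu = cv2.HuMoments(cv2.moments(gray)).ravel() # 7 moment invariants
    return np.concatenate([[dc], hu])

def match_score(desc_a, desc_b):
    """Smaller score = better thumbnail/protected-image match."""
    return float(np.linalg.norm(desc_a - desc_b))
```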
He, Longjun; Ming, Xing; Liu, Qian
2014-04-01
With computing capability and display size growing, mobile devices have been used as tools to help clinicians view patient information and medical images anywhere and anytime. However, for direct interactive 3D visualization, which plays an important role in radiological diagnosis, the mobile device cannot provide a satisfactory quality of experience for radiologists. This paper developed a medical system that can retrieve medical images from the picture archiving and communication system (PACS) on a mobile device over the wireless network. In the proposed application, the mobile device obtains patient information and medical images through a proxy server connecting to the PACS server. Meanwhile, the proxy server integrates a range of 3D visualization techniques, including maximum intensity projection, multi-planar reconstruction, and direct volume rendering, to provide shape, brightness, depth, and location information generated from the original sectional images for radiologists. Furthermore, an algorithm that changes remote rendering parameters automatically to adapt to the network status was employed to improve the quality of experience. Finally, performance issues regarding the remote 3D visualization of medical images over the wireless network in the proposed application are also discussed. The results demonstrated that the proposed medical application could provide a smooth interactive experience on WLAN and 3G networks.
Real-time distortion correction for visual inspection systems based on FPGA
NASA Astrophysics Data System (ADS)
Liang, Danhua; Zhang, Zhaoxia; Chen, Xiaodong; Yu, Daoyin
2008-03-01
Visual inspection is a new technology based on research in computer vision, which focuses on the measurement of an object's geometry and location. It can be widely used in online measurement and other real-time measurement processes. Because of the defects of traditional visual inspection, a new visual detection mode, all-digital intelligent acquisition and transmission, is presented. The image processing, including filtering, image compression, binarization, edge detection, and distortion correction, can be completed in a programmable device, an FPGA. As a wide-field-angle lens is adopted in the system, the output images have serious distortion. Limited by the calculating speed of the computer, software can only correct the distortion of static images, not dynamic images. To meet the real-time requirement, we designed a distortion correction system based on an FPGA. In this hardware distortion correction method, the spatial correction data are first calculated in software, then converted into hardware storage addresses and stored in a hardware look-up table, from which the data can be read out to correct the gray levels.
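The correction scheme splits into an offline table-building step and a per-frame look-up step, which can be sketched in Python with OpenCV's remap standing in for the FPGA look-up hardware; the radial model and coefficient below are made up for illustration.

```python
import cv2
import numpy as np

h, w = 480, 640
# Offline (software) step: build the correction look-up table once.
# A real system would derive map_x/map_y from the calibrated lens model;
# a made-up radial model stands in for it here.
ys, xs = np.indices((h, w), dtype=np.float32)
cx, cy, k = w / 2, h / 2, 4e-7
r2 = (xs - cx) ** 2 + (ys - cy) ** 2
map_x = (cx + (xs - cx) * (1 + k * r2)).astype(np.float32)
map_y = (cy + (ys - cy) * (1 + k * r2)).astype(np.float32)

# Online (hardware-analogous) step: per frame, only table look-ups.
frame = np.zeros((h, w), dtype=np.uint8)   # stand-in camera frame
corrected = cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR)
```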
Purkinje image eyetracking: A market survey
NASA Technical Reports Server (NTRS)
Christy, L. F.
1979-01-01
The Purkinje image eyetracking system was analyzed to determine the marketability of the system. The eyetracking system is a synthesis of two separate instruments, the optometer that measures the refractive power of the eye and the dual Purkinje image eyetracker that measures the direction of the visual axis.
Stochastic detecting images from strong noise field in visual communications
NASA Astrophysics Data System (ADS)
Cai, Defu
1991-11-01
Random noise interference in image pick-up and image transmission is an important restriction for vision systems. In this paper, an interframe shift sampling (IFSS) transform is used to diminish noise interference and to detect weak image signals submerged in strong noise in communication systems.
Kurosaki, Mitsuhaya; Shirao, Naoko; Yamashita, Hidehisa; Okamoto, Yasumasa; Yamawaki, Shigeto
2006-02-15
Our aim was to study the gender differences in brain activation upon viewing visual stimuli of distorted images of one's own body. We performed functional magnetic resonance imaging on 11 healthy young men and 11 healthy young women using the "body image tasks" which consisted of fat, real, and thin shapes of the subject's own body. Comparison of the brain activation upon performing the fat-image task versus real-image task showed significant activation of the bilateral prefrontal cortex and left parahippocampal area including the amygdala in the women, and significant activation of the right occipital lobe including the primary and secondary visual cortices in the men. Comparison of brain activation upon performing the thin-image task versus real-image task showed significant activation of the left prefrontal cortex, left limbic area including the cingulate gyrus and paralimbic area including the insula in women, and significant activation of the occipital lobe including the left primary and secondary visual cortices in men. These results suggest that women tend to perceive distorted images of their own bodies by complex cognitive processing of emotion, whereas men tend to perceive distorted images of their own bodies by object visual processing and spatial visual processing.
Besharati Tabrizi, Leila; Mahvash, Mehran
2015-07-01
An augmented reality system has been developed for image-guided neurosurgery to project images with regions of interest onto the patient's head, skull, or brain surface in real time. The aim of this study was to evaluate system accuracy and to perform the first intraoperative application. Images of segmented brain tumors in different localizations and sizes were created in 10 cases and were projected to a head phantom using a video projector. Registration was performed using 5 fiducial markers. After each registration, the distance of the 5 fiducial markers from the visualized tumor borders was measured on the virtual image and on the phantom. The difference was considered a projection error. Moreover, the image projection technique was intraoperatively applied in 5 patients and was compared with a standard navigation system. Augmented reality visualization of the tumors succeeded in all cases. The mean time for registration was 3.8 minutes (range 2-7 minutes). The mean projection error was 0.8 ± 0.25 mm. There were no significant differences in accuracy according to the localization and size of the tumor. Clinical feasibility and reliability of the augmented reality system could be proved intraoperatively in 5 patients (projection error 1.2 ± 0.54 mm). The augmented reality system is accurate and reliable for the intraoperative projection of images to the head, skull, and brain surface. The ergonomic advantage of this technique improves the planning of neurosurgical procedures and enables the surgeon to use direct visualization for image-guided neurosurgery.
Image Information Mining Utilizing Hierarchical Segmentation
NASA Technical Reports Server (NTRS)
Tilton, James C.; Marchisio, Giovanni; Koperski, Krzysztof; Datcu, Mihai
2002-01-01
The Hierarchical Segmentation (HSEG) algorithm is an approach for producing high quality, hierarchically related image segmentations. The VisiMine image information mining system utilizes clustering and segmentation algorithms for reducing visual information in multispectral images to a manageable size. The project discussed herein seeks to enhance the VisiMine system through incorporating hierarchical segmentations from HSEG into the VisiMine system.
NASA Technical Reports Server (NTRS)
Kruse, F. A.; Lefkoff, A. B.; Boardman, J. W.; Heidebrecht, K. B.; Shapiro, A. T.; Barloon, P. J.; Goetz, A. F. H.
1993-01-01
The Center for the Study of Earth from Space (CSES) at the University of Colorado, Boulder, has developed a prototype interactive software system called the Spectral Image Processing System (SIPS) using IDL (the Interactive Data Language) on UNIX-based workstations. SIPS is designed to take advantage of the combination of high spectral resolution and spatial data presentation unique to imaging spectrometers. It streamlines analysis of these data by allowing scientists to rapidly interact with entire datasets. SIPS provides visualization tools for rapid exploratory analysis and numerical tools for quantitative modeling. The user interface is X-Windows-based, user friendly, and provides 'point and click' operation. SIPS is being used for multidisciplinary research concentrating on use of physically based analysis methods to enhance scientific results from imaging spectrometer data. The objective of this continuing effort is to develop operational techniques for quantitative analysis of imaging spectrometer data and to make them available to the scientific community prior to the launch of imaging spectrometer satellite systems such as the Earth Observing System (EOS) High Resolution Imaging Spectrometer (HIRIS).
A visualization system for CT based pulmonary fissure analysis
NASA Astrophysics Data System (ADS)
Pu, Jiantao; Zheng, Bin; Park, Sang Cheol
2009-02-01
In this study we describe a visualization system of pulmonary fissures depicted on CT images. The purpose is to provide clinicians with an intuitive perception of a patient's lung anatomy through an interactive examination of fissures, enhancing their understanding and accurate diagnosis of lung diseases. This system consists of four key components: (1) region-of-interest segmentation; (2) three-dimensional surface modeling; (3) fissure type classification; and (4) an interactive user interface, by which the extracted fissures are displayed flexibly in different space domains including image space, geometric space, and mixed space using simple toggling "on" and "off" operations. In this system, the different visualization modes allow users not only to examine the fissures themselves but also to analyze the relationship between fissures and their surrounding structures. In addition, the users can adjust thresholds interactively to visualize the fissure surface under different scanning and processing conditions. Such a visualization tool is expected to facilitate investigation of structures near the fissures and provide an efficient "visual aid" for other applications such as treatment planning and assessment of therapeutic efficacy as well as education of medical professionals.
NASA Astrophysics Data System (ADS)
Kuvich, Gary
2004-08-01
Vision is only part of a system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, which is an interpretation of visual information in terms of these knowledge models. These mechanisms provide reliable recognition even when an object is occluded or cannot be recognized as a whole. It is hard to split the entire system apart, and reliable solutions to target recognition problems are possible only within the solution of the more generic image understanding problem. The brain reduces informational and computational complexity by using implicit symbolic coding of features, hierarchical compression, and selective processing of visual information. A biologically inspired Network-Symbolic representation, in which systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, is the most feasible basis for such models. It converts visual information into relational Network-Symbolic structures, avoiding artificially precise computation of 3-dimensional models. Network-Symbolic transformations derive abstract structures, which allows invariant recognition of an object as an exemplar of a class. Active vision helps create consistent models. Attention, figure-ground separation, and perceptual grouping are special kinds of network-symbolic transformations. Image/video understanding systems built this way should recognize targets reliably.
Humans make efficient use of natural image statistics when performing spatial interpolation.
D'Antona, Anthony D; Perry, Jeffrey S; Geisler, Wilson S
2013-12-16
Visual systems learn through evolution and experience over the lifespan to exploit the statistical structure of natural images when performing visual tasks. Understanding which aspects of this statistical structure are incorporated into the human nervous system is a fundamental goal in vision science. To address this goal, we measured human ability to estimate the intensity of missing image pixels in natural images. Human estimation accuracy is compared with various simple heuristics (e.g., local mean) and with optimal observers that have nearly complete knowledge of the local statistical structure of natural images. Human estimates are more accurate than those of simple heuristics, and they match the performance of an optimal observer that knows the local statistical structure of relative intensities (contrasts). This optimal observer predicts the detailed pattern of human estimation errors and hence the results place strong constraints on the underlying neural mechanisms. However, humans do not reach the performance of an optimal observer that knows the local statistical structure of the absolute intensities, which reflect both local relative intensities and local mean intensity. As predicted from a statistical analysis of natural images, human estimation accuracy is negligibly improved by expanding the context from a local patch to the whole image. Our results demonstrate that the human visual system exploits efficiently the statistical structure of natural images.
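As a concrete illustration of the simplest baseline mentioned above, the following sketch implements the "local mean" heuristic against which human observers were compared; the function name, neighborhood size, and synthetic test image are our own illustrative choices, not the study's protocol.

```python
import numpy as np

def estimate_missing_pixel(image, row, col, radius=1):
    """Estimate a missing pixel as the mean of its local neighborhood.

    A toy version of the 'local mean' heuristic the study compares
    against human observers and statistically optimal observers.
    """
    h, w = image.shape
    r0, r1 = max(0, row - radius), min(h, row + radius + 1)
    c0, c1 = max(0, col - radius), min(w, col + radius + 1)
    patch = image[r0:r1, c0:c1].astype(float)
    total = patch.sum() - float(image[row, col])  # exclude the missing pixel itself
    count = patch.size - 1
    return total / count

# Example: hide one pixel of a synthetic image and interpolate it.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32))
true_value = img[16, 16]
estimate = estimate_missing_pixel(img, 16, 16)
print(f"true={true_value}, local-mean estimate={estimate:.1f}")
```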
Visual identification system for homeland security and law enforcement support
NASA Astrophysics Data System (ADS)
Samuel, Todd J.; Edwards, Don; Knopf, Michael
2005-05-01
This paper describes the basic configuration for a visual identification system (VIS) for Homeland Security and law enforcement support. Security and law enforcement systems with an integrated VIS will accurately and rapidly provide identification of vehicles or containers that have entered, exited or passed through a specific monitoring location. The VIS system stores all images and makes them available for recall for approximately one week. Images of alarming vehicles will be archived indefinitely as part of the alarming vehicle's or cargo container's record. Depending on user needs, the digital imaging information will be provided electronically to the individual inspectors, supervisors, and/or control center at the customer's office. The key components of the VIS are the high-resolution cameras that capture images of vehicles, lights, presence sensors, image cataloging software, and image recognition software. In addition to the cameras, the physical integration and network communications of the VIS components with the balance of the security system and client must be ensured.
Yang, Liu; Jin, Rong; Mummert, Lily; Sukthankar, Rahul; Goode, Adam; Zheng, Bin; Hoi, Steven C H; Satyanarayanan, Mahadev
2010-01-01
Similarity measurement is a critical component in content-based image retrieval systems, and learning a good distance metric can significantly improve retrieval performance. However, despite extensive study, there are several major shortcomings with the existing approaches for distance metric learning that can significantly affect their application to medical image retrieval. In particular, "similarity" can mean very different things in image retrieval: resemblance in visual appearance (e.g., two images that look like one another) or similarity in semantic annotation (e.g., two images of tumors that look quite different yet are both malignant). Current approaches for distance metric learning typically address only one goal without consideration of the other. This is problematic for medical image retrieval where the goal is to assist doctors in decision making. In these applications, given a query image, the goal is to retrieve similar images from a reference library whose semantic annotations could provide the medical professional with greater insight into the possible interpretations of the query image. If the system were to retrieve images that did not look like the query, then users would be less likely to trust the system; on the other hand, retrieving images that appear superficially similar to the query but are semantically unrelated is undesirable because that could lead users toward an incorrect diagnosis. Hence, learning a distance metric that preserves both visual resemblance and semantic similarity is important. We emphasize that, although our study is focused on medical image retrieval, the problem addressed in this work is critical to many image retrieval systems. We present a boosting framework for distance metric learning that aims to preserve both visual and semantic similarities. The boosting framework first learns a binary representation using side information, in the form of labeled pairs, and then computes the distance as a weighted Hamming distance using the learned binary representation. A boosting algorithm is presented to efficiently learn the distance function. We evaluate the proposed algorithm on a mammographic image reference library with an Interactive Search-Assisted Decision Support (ISADS) system and on the medical image data set from ImageCLEF. Our results show that the boosting framework compares favorably to state-of-the-art approaches for distance metric learning in retrieval accuracy, with much lower computational cost. Additional evaluation with the COREL collection shows that our algorithm works well for regular image data sets.
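The retrieval step of this framework reduces to a weighted Hamming distance over learned binary codes. A minimal sketch of that distance computation follows; the bit weights and codes here are randomly generated stand-ins for what the boosting algorithm would actually learn.

```python
import numpy as np

def weighted_hamming(code_a, code_b, weights):
    """Weighted Hamming distance between two binary codes.

    code_a, code_b : 0/1 arrays produced by a learned binary embedding
    weights        : per-bit weights learned by boosting (stand-ins here)
    """
    disagree = code_a != code_b
    return float(np.sum(weights * disagree))

# Toy query against a small reference library of binary codes.
rng = np.random.default_rng(1)
library = rng.integers(0, 2, size=(5, 16))      # 5 images, 16-bit codes
query = rng.integers(0, 2, size=16)
weights = rng.random(16)                         # stand-in for boosted bit weights
dists = [weighted_hamming(query, code, weights) for code in library]
ranking = np.argsort(dists)                      # most similar images first
print(ranking, np.round(dists, 2))
```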
Image Analysis via Soft Computing: Prototype Applications at NASA KSC and Product Commercialization
NASA Technical Reports Server (NTRS)
Dominguez, Jesus A.; Klinko, Steve
2011-01-01
This slide presentation reviews the use of "soft computing," which differs from "hard computing" in that it is more tolerant of imprecision, partial truth, uncertainty, and approximation, and its use in image analysis. Soft computing provides flexible information processing to handle real-life ambiguous situations and achieves tractability, robustness, low solution cost, and a closer resemblance to human decision making. Several systems are or have been developed: Fuzzy Reasoning Edge Detection (FRED), Fuzzy Reasoning Adaptive Thresholding (FRAT), image enhancement techniques, and visual/pattern recognition. These systems are compared with examples that show the effectiveness of each. NASA applications reviewed are: Real-Time (RT) Anomaly Detection, Real-Time (RT) Moving Debris Detection, and the Columbia investigation. The RT anomaly detection reviewed the case of a damaged cable for the emergency egress system. The use of these techniques is further illustrated in the Columbia investigation with the location and detection of foam debris. There are several applications in commercial use: image enhancement, human screening and privacy protection, visual inspection, 3D heart visualization, tumor detection, and X-ray image enhancement.
High-Speed PLIF Imaging of Hypersonic Transition over Discrete Cylindrical Roughness
NASA Technical Reports Server (NTRS)
Danehy, P. M.; Ivey, C. B.; Inman, J. A.; Bathel, B. F.; Jones, S. B.; McCrea, A. C.; Jiang, N.; Webster, M.; Lempert, W.; Miller, J.;
2010-01-01
In two separate test entries, advanced laser-based instrumentation has been developed and applied to visualize the hypersonic flow over cylindrical protrusions on a flat plate. Upstream of these trips, trace quantities of nitric oxide (NO) were seeded into the boundary layer. The protuberances were sized to force laminar-to-turbulent boundary layer transition. In the first test, a 10-Hz nitric oxide planar laser-induced fluorescence (NO PLIF) flow visualization system was used to provide wide-field-of-view, high-resolution images of the flowfield. The images had sub-microsecond time resolution. However, these images, obtained with a time separation of 0.1 s, were uncorrelated with each other. Fluorescent oil-flow visualizations were also obtained during this test. In the second experiment, a laser and camera system capable of acquiring NO PLIF measurements at 1 million frames per second (1 MHz) was used. This system had lower spatial resolution and a smaller field of view, but the images were time correlated so that the development of the flow structures could be observed in time.
NASA Astrophysics Data System (ADS)
Hayakawa, Tomohiko; Moko, Yushi; Morishita, Kenta; Ishikawa, Masatoshi
2018-04-01
In this paper, we propose a pixel-wise deblurring imaging (PDI) system based on active vision for compensation of the blur caused by high-speed one-dimensional motion between a camera and a target. The optical axis is controlled by back-and-forth motion of a galvanometer mirror to compensate for the motion. The high-spatial-resolution images captured by our system during high-speed motion are useful for efficient and precise visual inspection, such as visually judging abnormal parts of a tunnel surface to prevent accidents; hence, we applied the PDI system to structural health monitoring. By mounting the system on a vehicle in a tunnel, we confirmed significant improvement in image quality for submillimeter black-and-white stripes and real tunnel-surface cracks at a speed of 100 km/h.
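A rough sketch of the compensation geometry follows. Apart from the 100 km/h speed quoted in the abstract, all numbers and the geometric model are our assumptions: we assume the line of sight to a wall at distance d must rotate at v/d to freeze the target, and that a galvanometer mirror deflects the beam by twice its mechanical angle.

```python
import math

# Minimal geometry sketch (assumed, not from the paper): to keep a target at
# distance d stationary in the image while the vehicle moves at speed v, the
# optical line of sight must rotate at omega = v / d during each exposure.
# A galvanometer mirror deflects the beam by twice its mechanical angle, so
# the mirror itself sweeps at omega / 2, then snaps back (back-and-forth motion).

v_kmh = 100.0                      # vehicle speed from the tunnel experiment
d_m = 3.0                          # assumed distance to the tunnel wall
v = v_kmh / 3.6                    # m/s
omega_beam = v / d_m               # required line-of-sight rate, rad/s
omega_mirror = omega_beam / 2.0    # mechanical galvo rate, rad/s

exposure_s = 1e-3                  # assumed exposure time
sweep_deg = math.degrees(omega_mirror * exposure_s)
print(f"line of sight: {omega_beam:.2f} rad/s, "
      f"mirror sweep over one exposure: {sweep_deg:.3f} deg")
```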
Image Analysis via Fuzzy-Reasoning Approach: Prototype Applications at NASA
NASA Technical Reports Server (NTRS)
Dominguez, Jesus A.; Klinko, Steven J.
2004-01-01
A set of imaging techniques based on a Fuzzy Reasoning (FR) approach was built for NASA at Kennedy Space Center (KSC) to perform complex real-time visual safety prototype tasks, such as detection and tracking of moving Foreign Object Debris (FOD) during NASA Space Shuttle liftoff and visual anomaly detection on slidewires used in the emergency egress system for the Space Shuttle at the launch pad. The system has also shown promise in enhancing X-ray images used to screen hard-covered items, leading to better visualization. The system's capability was also used during the imaging analysis of the Space Shuttle Columbia accident. These FR-based imaging techniques include novel proprietary adaptive image segmentation, image edge extraction, and image enhancement. A Probabilistic Neural Network (PNN) scheme available from the NeuroShell(TM) Classifier and optimized via a Genetic Algorithm (GA) was also used along with this set of novel imaging techniques to add powerful learning and image classification capabilities. Prototype applications built using these techniques have received NASA Space Awards, including a Board Action Award, and are currently being filed for patents by NASA; they are being offered for commercialization through the Research Triangle Institute (RTI), an internationally recognized corporation in scientific research and technology development. Companies from different fields, including security, medical, text digitization, and aerospace, are currently in the process of licensing these technologies from NASA.
Imaging of the interaction of cancer cells and the lymphatic system.
Tran Cao, Hop S; McElroy, Michele; Kaushal, Sharmeela; Hoffman, Robert M; Bouvet, Michael
2011-09-10
A thorough understanding of the lymphatic system and its interaction with cancer cells is crucial to our ability to fight cancer metastasis. Efforts to study the lymphatic system had previously been limited by the inability to visualize the lymphatic system in vivo in real time. Fluorescence imaging can address these limitations and allow for visualization of lymphatic delivery and trafficking of cancer cells and potentially therapeutic agents as well. Here, we review recent articles in which antibody-fluorophore conjugates are used to label the lymphatic network and fluorescent proteins to label cancer cells in the evaluation of lymphatic delivery and imaging.
Viewpoint Dependent Imaging: An Interactive Stereoscopic Display
NASA Astrophysics Data System (ADS)
Fisher, Scott
1983-04-01
The design and implementation of a viewpoint-dependent imaging system are described. The resultant display is an interactive, lifesize, stereoscopic image that becomes a window into a three-dimensional visual environment. As the user physically changes his viewpoint of the represented data in relation to the display surface, the image is continuously updated. The changing viewpoints are retrieved from a comprehensive stereoscopic image array stored on computer-controlled optical videodisc and fluidly presented in coordination with the viewer's movements as detected by a body-tracking device. This imaging system is an attempt to more closely represent an observer's interactive perceptual experience of the visual world by presenting sensory information cues not offered by traditional media technologies: binocular parallax, motion parallax, and motion perspective. Unlike holographic imaging, this display requires relatively low bandwidth.
NASA Astrophysics Data System (ADS)
Iqbal, Asim; Farooq, Umar; Mahmood, Hassan; Asad, Muhammad Usman; Khan, Akrama; Atiq, Hafiz Muhammad
2010-02-01
A self-teaching image processing and voice recognition based system is developed to educate visually impaired children, chiefly in their primary education. The system comprises a computer, a vision camera, an ear speaker, and a microphone. The camera, attached to the computer system, is mounted on the ceiling at the required angle opposite the desk on which the book is placed. Sample images and voices, in the form of instructions and commands for English and Urdu alphabets, numeric digits, operators, and shapes, are stored in a database. A blind child first reads an embossed character (object) with the fingers and then speaks the answer, the name of the character, its shape, etc., into the microphone. On the voice command of the blind child received by the microphone, an image is taken by the camera and processed by a MATLAB® program developed with the Image Acquisition and Image Processing toolboxes, which generates a response or the required set of instructions for the child via the ear speaker, resulting in self-education of the visually impaired child. A speech recognition program is also developed in MATLAB® with the Data Acquisition and Signal Processing toolboxes, which records and processes the commands of the blind child.
Visual System Involvement in Patients with Newly Diagnosed Parkinson Disease.
Arrigo, Alessandro; Calamuneri, Alessandro; Milardi, Demetrio; Mormina, Enricomaria; Rania, Laura; Postorino, Elisa; Marino, Silvia; Di Lorenzo, Giuseppe; Anastasi, Giuseppe Pio; Ghilardi, Maria Felice; Aragona, Pasquale; Quartarone, Angelo; Gaeta, Michele
2017-12-01
Purpose To assess intracranial visual system changes of newly diagnosed Parkinson disease in drug-naïve patients. Materials and Methods Twenty patients with newly diagnosed Parkinson disease and 20 age-matched control subjects were recruited. Magnetic resonance (MR) imaging (T1-weighted and diffusion-weighted imaging) was performed with a 3-T MR imager. White matter changes were assessed by exploring a white matter diffusion profile by means of diffusion-tensor imaging-based parameters and constrained spherical deconvolution-based connectivity analysis and by means of white matter voxel-based morphometry (VBM). Alterations in occipital gray matter were investigated by means of gray matter VBM. Morphologic analysis of the optic chiasm was based on manual measurement of regions of interest. Statistical testing included analysis of variance, t tests, and permutation tests. Results In the patients with Parkinson disease, significant alterations were found in optic radiation connectivity distribution, with decreased lateral geniculate nucleus V2 density (F, -8.28; P < .05), a significant increase in optic radiation mean diffusivity (F, 7.5; P = .014), and a significant reduction in white matter concentration. VBM analysis also showed a significant reduction in visual cortical volumes (P < .05). Moreover, the chiasmatic area and volume were significantly reduced (P < .05). Conclusion The findings show that visual system alterations can be detected in early stages of Parkinson disease and that the entire intracranial visual system can be involved. © RSNA, 2017 Online supplemental material is available for this article.
Preserved figure-ground segregation and symmetry perception in visual neglect.
Driver, J; Baylis, G C; Rafal, R D
1992-11-05
A central controversy in current research on visual attention is whether figures are segregated from their background preattentively, or whether attention is first directed to unstructured regions of the image. Here we present neurological evidence for the former view from studies of a brain-injured patient with visual neglect. His attentional impairment arises after normal segmentation of the image into figures and background has taken place. Our results indicate that information which is neglected and unavailable to higher levels of visual processing can nevertheless be processed by earlier stages in the visual system concerned with segmentation.
Visualization of Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Gerald-Yamasaki, Michael; Hultquist, Jeff; Bryson, Steve; Kenwright, David; Lane, David; Walatka, Pamela; Clucas, Jean; Watson, Velvin; Lasinski, T. A. (Technical Monitor)
1995-01-01
Scientific visualization serves the dual purpose of exploration and exposition of the results of numerical simulations of fluid flow. Along with the basic visualization process, which transforms source data into images, there are four additional components to a complete visualization system: Source Data Processing, User Interface and Control, Presentation, and Information Management. The requirements imposed by the desired mode of operation (i.e., real-time, interactive, or batch) and the source data have their effect on each of these visualization system components. The special requirements imposed by the wide variety and size of the source data provided by the numerical simulation of fluid flow present an enormous challenge to the visualization system designer. We describe the visualization system components, including specific visualization techniques, and how the mode of operation and source data requirements affect the construction of computational fluid dynamics visualization systems.
Advanced biologically plausible algorithms for low-level image processing
NASA Astrophysics Data System (ADS)
Gusakova, Valentina I.; Podladchikova, Lubov N.; Shaposhnikov, Dmitry G.; Markin, Sergey N.; Golovan, Alexander V.; Lee, Seong-Whan
1999-08-01
At present, the approach to computer vision based on modeling biological vision mechanisms is being extensively developed. However, up to now, real-world image processing has no effective solution within the frameworks of either biologically inspired or conventional approaches. Evidently, new algorithms and system architectures based on advanced biological motivation should be developed to solve the computational problems related to this visual task. A basic problem that must be solved to create an effective artificial visual system for processing real-world images is the search for new algorithms of low-level image processing, which to a great extent determine system performance. In the present paper, the results of psychophysical experiments and several advanced biologically motivated algorithms for low-level processing are presented. These algorithms are based on local space-variant filtering, context encoding of the visual information presented at the center of the input window, and automatic detection of perceptually important image fragments. The core of the latter algorithm is the use of local feature conjunctions, such as non-collinear oriented segments, and the formation of composite feature maps. The developed algorithms were integrated into the foveal active vision model MARR. It is expected that the proposed algorithms may significantly improve model performance in real-world image processing during memorizing, search, and recognition.
AMPS definition study on Optical Band Imager and Photometer System (OBIPS)
NASA Technical Reports Server (NTRS)
Davis, T. N.; Deehr, C. S.; Hallinan, T. J.; Wescott, E. M.
1975-01-01
A study was conducted to define the characteristics of a modular optical diagnostic system (OBIPS) for AMPS, to provide input to Phase B studies, and to give information useful for experiment planning and design of other instrumentation. The system described consists of visual and UV-band imagers and visual and UV-band photometers; of these the imagers are most important because of their ability to measure intensity as a function of two spatial dimensions and time with high resolution. The various subsystems of OBIPS are in themselves modular with modules having a high degree of interchangeability for versatility, economy, and redundancy.
Jiao, Yang; Xu, Liang; Gao, Min-Guang; Feng, Ming-Chun; Jin, Ling; Tong, Jing-Jing; Li, Sheng
2012-07-01
Passive remote sensing by Fourier-transform infrared (FTIR) spectrometry allows the detection of air pollution. However, for localizing a leak and completely assessing the situation in the case of the release of a hazardous cloud, information about the position and distribution of the cloud is essential. Therefore, an imaging passive remote sensing system comprising an interferometer, data acquisition and processing software, a scan system, a video system, and a personal computer has been developed. Remote sensing of SF6 was performed. The column densities for all directions in which a target compound has been identified are retrieved by a nonlinear least-squares fitting algorithm and a radiative transfer algorithm, and a false color image is displayed. The results were visualized as a video image overlaid with a false color concentration distribution image. The system has high selectivity and allows visualization and quantification of pollutant clouds.
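As a hedged illustration of the retrieval step, the sketch below fits a column density by nonlinear least squares under a simplified Beer-Lambert attenuation model; the cross-section shape, background spectrum, and noise are synthetic stand-ins for the paper's full radiative transfer treatment.

```python
import numpy as np
from scipy.optimize import least_squares

# Simplified stand-in for the retrieval (assumed model, not the authors'
# radiative-transfer code): the measured spectrum is modeled as a background
# spectrum attenuated by a gas layer, I(nu) = I_bg(nu) * exp(-sigma(nu) * c),
# and the column density c is fit per viewing direction.

nu = np.linspace(940, 960, 200)                    # wavenumber grid (cm^-1)
sigma = np.exp(-0.5 * ((nu - 948.0) / 1.5) ** 2)   # toy absorption cross-section shape
I_bg = 1.0 + 0.001 * (nu - 950.0)                  # slowly varying background

c_true = 0.7
rng = np.random.default_rng(2)
measured = I_bg * np.exp(-sigma * c_true) + 0.005 * rng.standard_normal(nu.size)

def residuals(params):
    (c,) = params
    return I_bg * np.exp(-sigma * c) - measured

fit = least_squares(residuals, x0=[0.1], bounds=(0.0, np.inf))
print(f"retrieved column density: {fit.x[0]:.3f} (true {c_true})")
```

Repeating such a fit for every viewing direction of the scan system yields the false color concentration distribution image described above.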
Photoacoustic characterization of radiofrequency ablation lesions
NASA Astrophysics Data System (ADS)
Bouchard, Richard; Dana, Nicholas; Di Biase, Luigi; Natale, Andrea; Emelianov, Stanislav
2012-02-01
Radiofrequency ablation (RFA) procedures are used to destroy abnormal electrical pathways in the heart that can cause cardiac arrhythmias. Current methods relying on fluoroscopy, echocardiography and electrical conduction mapping are unable to accurately assess ablation lesion size. In an effort to better visualize RFA lesions, photoacoustic (PA) and ultrasonic (US) imaging were utilized to obtain co-registered images of ablated porcine cardiac tissue. The left ventricular free wall of fresh (i.e., never frozen) porcine hearts was harvested within 24 hours of the animals' sacrifice. A THERMOCOOL® Ablation System (Biosense Webster, Inc.) operating at 40 W for 30-60 s was used to induce lesions through the endocardial and epicardial walls of the cardiac samples. Following lesion creation, the ablated tissue samples were placed in 25 °C saline to allow for multi-wavelength PA imaging. Samples were imaged with a Vevo® 2100 ultrasound system (VisualSonics, Inc.) using a modified 20-MHz array that could provide laser irradiation to the sample from a pulsed tunable laser (Newport Corp.) to allow for co-registered photoacoustic-ultrasound (PAUS) imaging. PA imaging was conducted from 750-1064 nm, with a surface fluence of approximately 15 mJ/cm2 maintained during imaging. In this preliminary study with PA imaging, the ablated region could be well visualized on the surface of the sample, with contrasts of 6-10 dB achieved at 750 nm. Although imaging penetration depth is a concern, PA imaging shows promise in being able to reliably visualize RF ablation lesions.
NASA Technical Reports Server (NTRS)
Brown, Alison M.
2005-01-01
Solar System Visualization products enable scientists to compare models and measurements in new ways that enhance the scientific discovery process, enhance the information content and understanding of the science results for both science colleagues and the public, and create visually appealing and intellectually stimulating visualization products. Missions supported include MER, MRO, and Cassini. Image products produced include pan and zoom animations of large mosaics to reveal the details of surface features and topography, animations into registered multi-resolution mosaics to provide context for microscopic images, 3D anaglyphs from left and right stereo pairs, and screen captures from video footage. Specific products include a three-part context animation of the Cassini Enceladus encounter highlighting images from 350 to 4 meters per pixel resolution; Mars Reconnaissance Orbiter screen captures illustrating various instruments during assembly and testing at the Payload Hazardous Servicing Facility at Kennedy Space Center; and an animation of Mars Exploration Rover Opportunity's 'Rub al Khali' panorama, where the rover was stuck in deep fine sand for more than a month. This task creates new visualization products that enable new science results and enhance the public's understanding of the Solar System and NASA's missions of exploration.
Ehlers, Justis P.; Tao, Yuankai K.; Farsiu, Sina; Maldonado, Ramiro; Izatt, Joseph A.
2011-01-01
Purpose. To demonstrate an operating microscope-mounted spectral domain optical coherence tomography (MMOCT) system for human retinal and model surgery imaging. Methods. A prototype MMOCT system was developed to interface directly with an ophthalmic surgical microscope, to allow SDOCT imaging during surgical viewing. Nonoperative MMOCT imaging was performed in an Institutional Review Board–approved protocol in four healthy volunteers. The effect of surgical instrument materials on MMOCT imaging was evaluated while performing retinal surface, intraretinal, and subretinal maneuvers in cadaveric porcine eyes. The instruments included forceps, metallic and polyamide subretinal needles, and soft silicone-tipped instruments, with and without diamond dusting. Results. High-resolution images of the human retina were successfully obtained with the MMOCT system. The optical properties of surgical instruments affected the visualization of the instrument and the underlying retina. Metallic instruments (e.g., forceps and needles) showed high reflectivity with total shadowing below the instrument. Polyamide material had a moderate reflectivity with subtotal shadowing. Silicone instrumentation showed moderate reflectivity with minimal shadowing. Summed voxel projection MMOCT images provided clear visualization of the instruments, whereas the B-scans from the volume revealed details of the interactions between the tissues and the instrumentation (e.g., subretinal space cannulation, retinal elevation, or retinal holes). Conclusions. High-quality retinal imaging is feasible with an MMOCT system. Intraoperative imaging with model eyes provides high-resolution depth information including visualization of the instrument and intraoperative tissue manipulation. This study demonstrates a key component of an interactive platform that could provide enhanced information for the vitreoretinal surgeon. PMID:21282565
Location-Driven Image Retrieval for Images Collected by a Mobile Robot
NASA Astrophysics Data System (ADS)
Tanaka, Kanji; Hirayama, Mitsuru; Okada, Nobuhiro; Kondo, Eiji
Mobile robot teleoperation is a method for a human user to interact with a mobile robot over time and distance. Successful teleoperation depends on how well images taken by the mobile robot are visualized to the user. To enhance the efficiency and flexibility of the visualization, an image retrieval system over such a robot's image database would be very useful. The main difference between the robot's image database and standard image databases is that many relevant images exist for a given query because of the variety of viewing conditions. The main contribution of this paper is to propose an efficient retrieval approach, named the location-driven approach, that exploits the correlation between the visual features and the real-world locations of images. Combining the location-driven approach with the conventional feature-driven approach, our goal can be viewed as finding an optimal classifier between relevant and irrelevant feature-location pairs. An active learning technique based on support vector machines is extended for this purpose.
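A minimal sketch of the location-driven idea follows; the features, poses, and the mixing weight alpha are illustrative assumptions rather than the paper's learned SVM classifier, which would replace the fixed linear combination shown here.

```python
import numpy as np

def retrieval_scores(query_feat, query_loc, feats, locs, alpha=0.5):
    """Rank database images by a combined feature/location distance.

    A minimal sketch of the location-driven idea (weights and distances
    assumed): images taken near the same real-world location are likely
    to be relevant even when viewing conditions change their appearance.
    """
    d_feat = np.linalg.norm(feats - query_feat, axis=1)   # visual-feature distance
    d_loc = np.linalg.norm(locs - query_loc, axis=1)      # real-world location distance
    # Normalize each distance to [0, 1] before mixing.
    d_feat /= d_feat.max() or 1.0
    d_loc /= d_loc.max() or 1.0
    return alpha * d_feat + (1.0 - alpha) * d_loc

rng = np.random.default_rng(3)
feats = rng.random((100, 32))     # e.g., color histograms of stored images
locs = rng.random((100, 2)) * 50  # (x, y) robot poses where images were taken
scores = retrieval_scores(feats[0], locs[0], feats, locs)
print("top-5 candidate images:", np.argsort(scores)[:5])
```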
Visual quality analysis for images degraded by different types of noise
NASA Astrophysics Data System (ADS)
Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Ieremeyev, Oleg I.; Egiazarian, Karen O.; Astola, Jaakko T.
2013-02-01
Modern visual quality metrics take into account different peculiarities of the Human Visual System (HVS). One of them is described by the Weber-Fechner law and concerns the differing sensitivity to distortions in image fragments with different local mean values (intensity, brightness). We analyze how this property can be incorporated into the metric PSNR-HVS-M. It is shown that some improvement of its performance can be provided. Then, the visual quality of color images corrupted by three types of i.i.d. noise (pure additive, pure multiplicative, and signal-dependent, Poisson) is analyzed. Experiments with a group of observers are carried out for distorted color images created on the basis of the TID2008 database. Several modern HVS metrics are considered. It is shown that even the best metrics are unable to assess the visual quality of distorted images adequately. The reasons relate to the observer's attention to certain objects in the test images, i.e., to semantic aspects of vision, which are worth taking into account in the design of HVS metrics.
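To illustrate how a Weber-Fechner-style dependence on local mean might enter a quality metric, the sketch below weights squared error by a luminance-dependent factor. The weighting function is our own assumption for illustration and is not the PSNR-HVS-M definition.

```python
import numpy as np

def weber_weighted_mse(reference, distorted, L_max=255.0):
    """MSE weighted by a Weber-Fechner-style sensitivity to local mean.

    A schematic illustration (assumed weighting, not the PSNR-HVS-M
    definition): errors on dark backgrounds are weighted more heavily
    than equal-magnitude errors on bright backgrounds, reflecting the
    eye's roughly logarithmic response to intensity.
    """
    ref = reference.astype(float)
    dist = distorted.astype(float)
    # Local mean via a simple 3x3 box filter.
    pad = np.pad(ref, 1, mode="edge")
    local_mean = sum(
        pad[i:i + ref.shape[0], j:j + ref.shape[1]] for i in range(3) for j in range(3)
    ) / 9.0
    weight = 1.0 / (1.0 + local_mean / L_max)   # brighter region -> lower weight
    return float(np.mean(weight * (ref - dist) ** 2))

rng = np.random.default_rng(4)
ref = rng.integers(0, 256, (64, 64))
noisy = np.clip(ref + rng.normal(0, 8, ref.shape), 0, 255)
print(f"Weber-weighted MSE: {weber_weighted_mse(ref, noisy):.2f}")
```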
Srinivasan, Vivek J.; Adler, Desmond C.; Chen, Yueli; Gorczynska, Iwona; Huber, Robert; Duker, Jay S.; Schuman, Joel S.; Fujimoto, James G.
2009-01-01
Purpose To demonstrate ultrahigh-speed optical coherence tomography (OCT) imaging of the retina and optic nerve head at 249,000 axial scans per second and a wavelength of 1060 nm. To investigate methods for visualization of the retina, choroid, and optic nerve using high-density sampling enabled by improved imaging speed. Methods A swept-source OCT retinal imaging system operating at a speed of 249,000 axial scans per second was developed. Imaging of the retina, choroid, and optic nerve were performed. Display methods such as speckle reduction, slicing along arbitrary planes, en face visualization of reflectance from specific retinal layers, and image compounding were investigated. Results High-definition and three-dimensional (3D) imaging of the normal retina and optic nerve head were performed. Increased light penetration at 1060 nm enabled improved visualization of the choroid, lamina cribrosa, and sclera. OCT fundus images and 3D visualizations were generated with higher pixel density and less motion artifacts than standard spectral/Fourier domain OCT. En face images enabled visualization of the porous structure of the lamina cribrosa, nerve fiber layer, choroid, photoreceptors, RPE, and capillaries of the inner retina. Conclusions Ultrahigh-speed OCT imaging of the retina and optic nerve head at 249,000 axial scans per second is possible. The improvement of ∼5 to 10× in imaging speed over commercial spectral/Fourier domain OCT technology enables higher density raster scan protocols and improved performance of en face visualization methods. The combination of the longer wavelength and ultrahigh imaging speed enables excellent visualization of the choroid, sclera, and lamina cribrosa. PMID:18658089
Atoms of recognition in human and computer vision.
Ullman, Shimon; Assif, Liav; Fetaya, Ethan; Harari, Daniel
2016-03-08
Discovering the visual features and representations used by the brain to recognize objects is a central problem in the study of vision. Recently, neural network models of visual object recognition, including biological and deep network models, have shown remarkable progress and have begun to rival human performance in some challenging tasks. These models are trained on image examples and learn to extract features and representations and to use them for categorization. It remains unclear, however, whether the representations and learning processes discovered by current models are similar to those used by the human visual system. Here we show, by introducing and using minimal recognizable images, that the human visual system uses features and processes that are not used by current models and that are critical for recognition. We found by psychophysical studies that at the level of minimal recognizable images a minute change in the image can have a drastic effect on recognition, thus identifying features that are critical for the task. Simulations then showed that current models cannot explain this sensitivity to precise feature configurations and, more generally, do not learn to recognize minimal images at a human level. The role of the features shown here is revealed uniquely at the minimal level, where the contribution of each feature is essential. A full understanding of the learning and use of such features will extend our understanding of visual recognition and its cortical mechanisms and will enhance the capacity of computational models to learn from visual experience and to deal with recognition and detailed image interpretation.
Data, Analysis, and Visualization | Computational Science | NREL
At NREL, data management, data analysis, and scientific visualization capabilities support computational science work, including approaches to image analysis and computer vision.
BIM-Sim: Interactive Simulation of Broadband Imaging Using Mie Theory
Berisha, Sebastian; van Dijk, Thomas; Bhargava, Rohit; Carney, P. Scott; Mayerich, David
2017-01-01
Understanding the structure of a scattered electromagnetic (EM) field is critical to improving the imaging process. Mechanisms such as diffraction, scattering, and interference affect an image, limiting the resolution, and potentially introducing artifacts. Simulation and visualization of scattered fields thus plays an important role in imaging science. However, EM fields are high-dimensional, making them time-consuming to simulate, and difficult to visualize. In this paper, we present a framework for interactively computing and visualizing EM fields scattered by micro and nano-particles. Our software uses graphics hardware for evaluating the field both inside and outside of these particles. We then use Monte-Carlo sampling to reconstruct and visualize the three-dimensional structure of the field, spectral profiles at individual points, the structure of the field at the surface of the particle, and the resulting image produced by an optical system. PMID:29170738
ERIC Educational Resources Information Center
Tallman, Oliver H.
A digital simulation of a model for the processing of visual images is derived from known aspects of the human visual system. The fundamental principle of computation suggested by a biological model is a transformation that distributes information contained in an input stimulus everywhere in a transform domain. Each sensory input contributes under…
Chang, Yongjun; Paul, Anjan Kumar; Kim, Namkug; Baek, Jung Hwan; Choi, Young Jun; Ha, Eun Ju; Lee, Kang Dae; Lee, Hyoung Shin; Shin, DaeSeock; Kim, Nakyoung
2016-01-01
To develop a semiautomated computer-aided diagnosis (CAD) system for thyroid cancer using two-dimensional ultrasound images that can be used to yield a second opinion in the clinic to differentiate malignant and benign lesions. A total of 118 ultrasound images that included axial and longitudinal images from patients with biopsy-confirmed malignant (n = 30) and benign (n = 29) nodules were collected. Thyroid CAD software was developed to extract quantitative features from these images based on thyroid nodule segmentation in which adaptive diffusion flow for active contours was used. Various features, including histogram, intensity differences, elliptical fit, gray-level co-occurrence matrices, and gray-level run-length matrices, were evaluated for each region imaged. Based on these imaging features, a support vector machine (SVM) classifier was used to differentiate benign and malignant nodules. Leave-one-out cross-validation with sequential forward feature selection was performed to evaluate the overall accuracy of this method. Additionally, analyses with contingency tables and receiver operating characteristic (ROC) curves were performed to compare the performance of CAD with visual inspection by expert radiologists based on established gold standards. Most univariate features for this proposed CAD system attained accuracies that ranged from 78.0% to 83.1%. When optimal SVM parameters that were established using a grid search method with features that radiologists use for visual inspection were employed, the authors could attain rates of accuracy that ranged from 72.9% to 84.7%. Using leave-one-out cross-validation results in a multivariate analysis of various features, the highest accuracy achieved using the proposed CAD system was 98.3%, whereas visual inspection by radiologists reached 94.9% accuracy. To obtain the highest accuracies, "axial ratio" and "max probability" in axial images were most frequently included in the optimal feature sets for the authors' proposed CAD system, while "shape" and "calcification" in longitudinal images were most frequently included in the optimal feature sets for visual inspection by radiologists. The computed areas under curves in the ROC analysis were 0.986 and 0.979 for the proposed CAD system and visual inspection by radiologists, respectively; no significant difference was detected between these groups. The use of thyroid CAD to differentiate malignant from benign lesions shows accuracy similar to that obtained via visual inspection by radiologists. Thyroid CAD might be considered a viable way to generate a second opinion for radiologists in clinical practice.
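The classification protocol described above can be sketched with standard tools. In the snippet below the feature matrix is simulated, and the grid-search ranges are illustrative assumptions rather than the authors' settings; the real system extracts histogram, texture, and shape features from segmented nodules.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Schematic reproduction of the evaluation protocol: an SVM is tuned by
# grid search and scored with leave-one-out cross-validation.
rng = np.random.default_rng(5)
X = rng.random((59, 10))                 # 59 nodules x 10 hypothetical features
y = rng.integers(0, 2, 59)               # 0 = benign, 1 = malignant (simulated)

svm = make_pipeline(StandardScaler(), SVC())
grid = GridSearchCV(
    svm,
    param_grid={"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.1, 1.0]},
    cv=5,
)
grid.fit(X, y)

loo_scores = cross_val_score(grid.best_estimator_, X, y, cv=LeaveOneOut())
print(f"LOOCV accuracy: {loo_scores.mean():.3f}")
```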
Uncertainty Comparison of Visual Sensing in Adverse Weather Conditions
Lo, Shi-Wei; Wu, Jyh-Horng; Chen, Lun-Chi; Tseng, Chien-Hao; Lin, Fang-Pang; Hsu, Ching-Han
2016-01-01
This paper focuses on flood-region detection using monitoring images. However, adverse weather affects the outcome of image segmentation methods. In this paper, we present an experimental comparison of an outdoor visual sensing system using region-growing methods with two different growing rules, namely GrowCut and RegGro. For each growing rule, several tests on adverse-weather and lens-stained scenes were performed, analyzing the influence of different weather conditions on the outdoor visual sensing system and highlighting their effect under each growing rule. Furthermore, the experimental errors and uncertainties obtained with the growing rules were compared. The segmentation accuracy for flood regions yielded by the GrowCut, RegGro, and hybrid methods was 75%, 85%, and 87.7%, respectively. PMID:27447642
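For readers unfamiliar with the region-growing family, the following toy implementation grows a region from a seed using a simple intensity rule. GrowCut and RegGro use more elaborate growing rules than this sketch, which is only a baseline illustration of the idea.

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol=10.0):
    """Grow a region from a seed pixel using a simple intensity rule.

    A minimal illustration of the region-growing family: a pixel joins
    the region if it is 4-connected to it and its intensity is within
    `tol` of the running region mean.
    """
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(image[seed]), 1
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if abs(float(image[nr, nc]) - total / count) <= tol:
                    mask[nr, nc] = True
                    total += float(image[nr, nc])
                    count += 1
                    queue.append((nr, nc))
    return mask

img = np.zeros((40, 40)) + 200
img[10:30, 5:25] = 60               # darker "flood" region
mask = region_grow(img, seed=(20, 15), tol=15.0)
print(f"segmented {mask.sum()} pixels")
```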
A Closed Circuit TV System for the Visually Handicapped and Prospects for Future Research.
ERIC Educational Resources Information Center
Genensky, S. M.; And Others
Some visually handicapped persons have difficulty reading or writing even with the aid of eyeglasses, but could be helped by visual aids which increase image magnification, light intensity or brightness, or some combination of these factors. The system described here uses closed circuit television (CCTV) to provide variable magnification from 1.4x…
NASA Technical Reports Server (NTRS)
1977-01-01
A preliminary design for a helicopter/VSTOL wide angle simulator image generation display system is studied. The visual system is to become part of a simulator capability to support Army aviation systems research and development within the near term. As required for the Army to simulate a wide range of aircraft characteristics, versatility and ease of changing cockpit configurations were primary considerations of the study. Due to the Army's interest in low altitude flight and descents into and landing in constrained areas, particular emphasis is given to wide field of view, resolution, brightness, contrast, and color. The visual display study includes a preliminary design, demonstrated feasibility of advanced concepts, and a plan for subsequent detail design and development. Analysis and tradeoff considerations for various visual system elements are outlined and discussed.
Contrast statistics for foveated visual systems: fixation selection by minimizing contrast entropy
NASA Astrophysics Data System (ADS)
Raj, Raghu; Geisler, Wilson S.; Frazor, Robert A.; Bovik, Alan C.
2005-10-01
The human visual system combines a wide field of view with a high-resolution fovea and uses eye, head, and body movements to direct the fovea to potentially relevant locations in the visual scene. This strategy is sensible for a visual system with limited neural resources. However, for this strategy to be effective, the visual system needs sophisticated central mechanisms that efficiently exploit the varying spatial resolution of the retina. To gain insight into some of the design requirements of these central mechanisms, we have analyzed the effects of variable spatial resolution on local contrast in 300 calibrated natural images. Specifically, for each retinal eccentricity (which produces a certain effective level of blur), and for each value of local contrast observed at that eccentricity, we measured the probability distribution of the local contrast in the unblurred image. These conditional probability distributions can be regarded as posterior probability distributions for the "true" unblurred contrast, given an observed contrast at a given eccentricity. We find that these conditional probability distributions are adequately described by a few simple formulas. To explore how these statistics might be exploited by central perceptual mechanisms, we consider the task of selecting successive fixation points, where the goal on each fixation is to maximize total contrast information gained about the image (i.e., minimize total contrast uncertainty). We derive an entropy minimization algorithm and find that it performs optimally at reducing total contrast uncertainty and that it also works well at reducing the mean squared error between the original image and the image reconstructed from the multiple fixations. Our results show that measurements of local contrast alone could efficiently drive the scan paths of the eye when the goal is to gain as much information about the spatial structure of a scene as possible.
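The fixation-selection task can be caricatured as follows. The uncertainty map, the Gaussian eccentricity falloff, and the greedy search in this sketch are simplified stand-ins for the paper's contrast-entropy minimization over measured natural-image statistics.

```python
import numpy as np

# Toy version of the fixation-selection idea: each fixation reduces
# uncertainty most at the fovea and progressively less with eccentricity,
# and the next fixation is chosen greedily to minimize the total
# remaining uncertainty (assumed model, not the paper's algorithm).
h, w = 48, 64
yy, xx = np.mgrid[0:h, 0:w]
uncertainty = np.ones((h, w))            # start maximally uncertain everywhere
fovea_sigma = 8.0                        # assumed falloff of resolution with eccentricity

def apply_fixation(unc, fy, fx):
    ecc2 = (yy - fy) ** 2 + (xx - fx) ** 2
    gain = np.exp(-ecc2 / (2 * fovea_sigma ** 2))   # 1 at fovea, ->0 peripherally
    return unc * (1.0 - gain)

fixations = []
for _ in range(5):
    # Greedy choice: fixate the location that would leave the least total
    # uncertainty (evaluated on a coarse grid to keep the sketch fast).
    candidates = [(y, x) for y in range(0, h, 4) for x in range(0, w, 4)]
    best = min(candidates, key=lambda p: apply_fixation(uncertainty, *p).sum())
    uncertainty = apply_fixation(uncertainty, *best)
    fixations.append(best)

print("fixation sequence:", fixations)
print(f"remaining uncertainty: {uncertainty.sum():.1f} of {h * w}")
```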
Robot Evolutionary Localization Based on Attentive Visual Short-Term Memory
Vega, Julio; Perdices, Eduardo; Cañas, José M.
2013-01-01
Cameras are one of the most relevant sensors in autonomous robots. However, two of their challenges are to extract useful information from captured images, and to manage the small field of view of regular cameras. This paper proposes implementing a dynamic visual memory to store the information gathered from a moving camera on board a robot, followed by an attention system to choose where to look with this mobile camera, and a visual localization algorithm that incorporates this visual memory. The visual memory is a collection of relevant task-oriented objects and 3D segments, and its scope is wider than the current camera field of view. The attention module takes into account the need to reobserve objects in the visual memory and the need to explore new areas. The visual memory is useful also in localization tasks, as it provides more information about robot surroundings than the current instantaneous image. This visual system is intended as underlying technology for service robot applications in real people's homes. Several experiments have been carried out, both with simulated and real Pioneer and Nao robots, to validate the system and each of its components in office scenarios. PMID:23337333
Multimode intravascular RF coil for MRI-guided interventions.
Kurpad, Krishna N; Unal, Orhan
2011-04-01
To demonstrate the feasibility of using a single intravascular radiofrequency (RF) probe connected to the external magnetic resonance imaging (MRI) system via a single coaxial cable to perform active tip tracking, catheter visualization, and high signal-to-noise ratio (SNR) intravascular imaging. A multimode intravascular RF coil was constructed on a 6F balloon catheter and interfaced to a 1.5T MRI scanner via a decoupling circuit. Bench measurements of coil impedances were followed by imaging experiments in saline and phantoms. The multimode coil behaves as an inductively coupled transmit coil. A forward-looking capability of 6 mm was measured. A greater than 3-fold increase in SNR compared to conventional imaging using an optimized external coil was demonstrated. Simultaneous active tip tracking and catheter visualization was demonstrated. It is feasible to perform 1) active tip tracking, 2) catheter visualization, and 3) high-SNR imaging using a single multimode intravascular RF coil connected to the external system via a single coaxial cable.
Twellmann, Thorsten; Meyer-Baese, Anke; Lange, Oliver; Foo, Simon; Nattkemper, Tim W.
2008-01-01
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) has become an important tool in breast cancer diagnosis, but evaluation of multitemporal 3D image data holds new challenges for human observers. To aid the image analysis process, we apply supervised and unsupervised pattern recognition techniques for computing enhanced visualizations of suspicious lesions in breast MRI data. These techniques represent an important component of future sophisticated computer-aided diagnosis (CAD) systems and support the visual exploration of spatial and temporal features of DCE-MRI data stemming from patients with confirmed lesion diagnosis. By taking into account the heterogeneity of cancerous tissue, these techniques reveal signals with malignant, benign and normal kinetics. They also provide a regional subclassification of pathological breast tissue, which is the basis for pseudo-color presentations of the image data. Intelligent medical systems are expected to have substantial implications in healthcare politics by contributing to the diagnosis of indeterminate breast lesions by non-invasive imaging. PMID:19255616
Mountain building processes in the Central Andes
NASA Technical Reports Server (NTRS)
Bloom, A. L.; Isacks, B. L.
1986-01-01
False color composite images of Thematic Mapper (TM) bands 5, 4, and 2 were examined to make visual interpretations of geological features. The roam mode of image display with the International Imaging Systems (IIS) System 600 image processing package running on the IIS Model 75 proved very useful. Several areas for which good comparisons with ground data existed were examined in detail. In parallel with the visual approach, image processing methods are being developed that allow the full use of all seven TM bands. The data were organized into easily accessible files, and a visual catalog of the quads (quarter TM scenes) was prepared with preliminary registration against the best available charts for the region. The catalog has proved to be a valuable tool for rapidly scanning quads for a specific investigation. Integration of the data into a complete approach to the problems of uplift, deformation, and magmatism in relation to the Nazca-South American plate interaction is at an initial stage.
Development of image processing techniques for applications in flow visualization and analysis
NASA Technical Reports Server (NTRS)
Disimile, Peter J.; Shoe, Bridget; Toy, Norman; Savory, Eric; Tahouri, Bahman
1991-01-01
A comparison between two flow visualization studies of an axisymmetric circular jet issuing into still fluid, using two different experimental techniques, is described. In the first case, laser-induced fluorescence is used to visualize the flow structure, whilst smoke is utilized in the second. Quantitative information was obtained from these visualized flow regimes using two different digital imaging systems. Results are presented for the rate at which the jet expands in the downstream direction, and these compare favorably with more established data.
Holodeck: Telepresence Dome Visualization System Simulations
NASA Technical Reports Server (NTRS)
Hite, Nicolas
2012-01-01
This paper explores the simulation and consideration of different image-projection strategies for the Holodeck, a dome that will be used for highly immersive telepresence operations in future endeavors of the National Aeronautics and Space Administration (NASA). Its visualization system will include a full 360 degree projection onto the dome's interior walls in order to display video streams from both simulations and recorded video. Because humans innately trust their vision to precisely report their surroundings, the Holodeck's visualization system is crucial to its realism. This system will be rigged with an integrated hardware and software infrastructure: a system of projectors that will communicate with a Graphics Processing Unit (GPU) and computer to both project images onto the dome and correct warping in those projections in real time. Using both Computer-Aided Design (CAD) and ray-tracing software, virtual models of various dome/projector geometries were created and simulated via tracking and analysis of virtual light sources, leading to the selection of two possible configurations for installation. Research into image warping and the generation of dome-ready video content was also conducted, including generation of fisheye images, distortion correction, and the development of a reliable content-generation pipeline.
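Dome-ready content of the kind described above is commonly produced by inverse mapping into an equidistant fisheye projection. The following is a minimal sketch of that idea, assuming an equirectangular source panorama; the function name, parameters, and projection model are illustrative choices, not details taken from the paper.

import numpy as np

def equirect_to_fisheye(pano, size=1024, fov_deg=180.0):
    """Resample an equirectangular panorama (H x W x 3) into an
    equidistant fisheye image covering fov_deg around the zenith."""
    H, W = pano.shape[:2]
    half_fov = np.radians(fov_deg) / 2.0
    # Normalized fisheye pixel grid, centered on the optical axis.
    v, u = np.mgrid[0:size, 0:size]
    x = (u - size / 2) / (size / 2)
    y = (v - size / 2) / (size / 2)
    r = np.sqrt(x**2 + y**2)             # radial distance, 0..1 inside dome
    theta = r * half_fov                 # equidistant model: angle grows with radius
    phi = np.arctan2(y, x)               # azimuth around the optical axis
    # Direction vector for each pixel (optical axis = +z).
    dx = np.sin(theta) * np.cos(phi)
    dy = np.sin(theta) * np.sin(phi)
    dz = np.cos(theta)
    # Back to equirectangular (longitude, latitude) source coordinates.
    lon = np.arctan2(dx, dz)             # -pi..pi
    lat = np.arcsin(np.clip(dy, -1, 1))  # -pi/2..pi/2
    src_u = ((lon / (2 * np.pi) + 0.5) * (W - 1)).astype(int)
    src_v = ((lat / np.pi + 0.5) * (H - 1)).astype(int)
    out = pano[src_v, src_u]             # nearest-neighbor lookup
    out[r > 1.0] = 0                     # black outside the dome circle
    return out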
Blind subjects construct conscious mental images of visual scenes encoded in musical form.
Cronly-Dillon, J; Persaud, K C; Blore, R
2000-01-01
Blind (previously sighted) subjects are able to analyse, describe and graphically represent a number of high-contrast visual images translated into musical form de novo. We presented musical transforms of a random assortment of photographic images of objects and urban scenes to such subjects, a few of which depicted architectural and other landmarks that may be useful in navigating a route to a particular destination. Our blind subjects were able to use the sound representation to construct a conscious mental image that was revealed by their ability to depict a visual target by drawing it. We noted the similarity between the way the visual system integrates information from successive fixations to form a representation that is stable across eye movements and the way a succession of image frames (encoded in sound) which depict different portions of the image are integrated to form a seamless mental image. Finally, we discuss the profound resemblance between the way a professional musician carries out a structural analysis of a musical composition in order to relate its structure to the perception of musical form and the strategies used by our blind subjects in isolating structural features that collectively reveal the identity of visual form. PMID:11413637
Luo, Yuan; Gelsinger-Austin, Paul J; Watson, Jonathan M; Barbastathis, George; Barton, Jennifer K; Kostuk, Raymond K
2008-09-15
A three-dimensional imaging system incorporating multiplexed holographic gratings to visualize fluorescence tissue structures is presented. Holographic gratings formed in volume recording materials, such as a phenanthrenequinone poly(methyl methacrylate) photopolymer, have narrowband angular and spectral transmittance filtering properties that enable obtaining spatial-spectral information within an object. We demonstrate this imaging system's ability to obtain multiple depth-resolved fluorescence images simultaneously.
Reconfigurable Image Generator
NASA Technical Reports Server (NTRS)
Archdeacon, John L. (Inventor); Iwai, Nelson H. (Inventor); Kato, Kenji H. (Inventor); Sweet, Barbara T. (Inventor)
2017-01-01
A reconfigurable image generator (RIG) may simulate visual conditions of a real-world environment and generate the necessary number of pixels in a visual simulation at rates up to 120 frames per second. The RIG may also include a database generation system capable of producing visual databases suitable to drive the visual fidelity required by the RIG.
The Ecological Approach to Text Visualization.
ERIC Educational Resources Information Center
Wise, James A.
1999-01-01
Presents both theoretical and technical bases on which to build a "science of text visualization." The Spatial Paradigm for Information Retrieval and Exploration (SPIRE) text-visualization system, which images information from free-text documents as natural terrains, serves as an example of the "ecological approach" in its visual metaphor, its…
NASA Technical Reports Server (NTRS)
Pavel, M.
1993-01-01
The topics covered include the following: a system overview of the basic components of a system designed to improve the ability of a pilot to fly through low-visibility conditions such as fog; the role of visual sciences; fusion issues; sensor characterization; sources of information; image processing; and image fusion.
Local image statistics: maximum-entropy constructions and perceptual salience
Victor, Jonathan D.; Conte, Mary M.
2012-01-01
The space of visual signals is high-dimensional and natural visual images have a highly complex statistical structure. While many studies suggest that only a limited number of image statistics are used for perceptual judgments, a full understanding of visual function requires analysis not only of the impact of individual image statistics, but also, how they interact. In natural images, these statistical elements (luminance distributions, correlations of low and high order, edges, occlusions, etc.) are intermixed, and their effects are difficult to disentangle. Thus, there is a need for construction of stimuli in which one or more statistical elements are introduced in a controlled fashion, so that their individual and joint contributions can be analyzed. With this as motivation, we present algorithms to construct synthetic images in which local image statistics—including luminance distributions, pair-wise correlations, and higher-order correlations—are explicitly specified and all other statistics are determined implicitly by maximum-entropy. We then apply this approach to measure the sensitivity of the human visual system to local image statistics and to sample their interactions. PMID:22751397
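As a concrete illustration of pinning one statistic while leaving everything else maximally random, the sketch below gives a simple one-dimensional analogue of such a construction (my own illustrative code, not the authors'): binary rows generated as Markov chains whose nearest-neighbor pair correlation is fixed at beta. The full two-dimensional constructions in the paper are substantially more involved.

import numpy as np

def maxent_rows(n_rows, n_cols, beta, seed=None):
    """Each row is a stationary Markov chain over {-1,+1}: a pixel repeats
    its left neighbor with probability (1+beta)/2, which pins the
    horizontal pair correlation at beta while maximizing entropy otherwise."""
    rng = np.random.default_rng(seed)
    img = np.empty((n_rows, n_cols), dtype=np.int8)
    img[:, 0] = rng.choice([-1, 1], size=n_rows)
    repeat = rng.random((n_rows, n_cols - 1)) < (1 + beta) / 2
    for j in range(1, n_cols):
        img[:, j] = np.where(repeat[:, j - 1], img[:, j - 1], -img[:, j - 1])
    return img

tex = maxent_rows(256, 256, beta=0.6)
print(np.mean(tex[:, :-1] * tex[:, 1:]))  # empirical pair correlation, approx. 0.6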
NASA Technical Reports Server (NTRS)
Youngquist, Robert C. (Inventor); Moerk, Steven (Inventor)
1999-01-01
An imaging system is described which can be used either to passively search for sources of ultrasonics or as an active phase imaging system, which can image fires, gas leaks, or air temperature gradients. This system uses an array of ultrasonic receivers coupled to an ultrasound collector or lens to provide an electronic image of the ultrasound intensity in a selected angular region of space. A system is described which includes a video camera to provide a visual reference to a region being examined for ultrasonic signals.
Physics and psychophysics of color reproduction
NASA Astrophysics Data System (ADS)
Giorgianni, Edward J.
1991-08-01
The successful design of a color-imaging system requires knowledge of the factors used to produce and control color. This knowledge can be derived, in part, from measurements of the physical properties of the imaging system. Color itself, however, is a perceptual response and cannot be directly measured. Though the visual process begins with physics, as radiant energy reaching the eyes, it is in the mind of the observer that the stimuli produced from this radiant energy are interpreted and organized to form meaningful perceptions, including the perception of color. A comprehensive understanding of color reproduction, therefore, requires not only a knowledge of the physical properties of color-imaging systems but also an understanding of the physics, psychophysics, and psychology of the human observer. The human visual process is quite complex; in many ways the physical properties of color-imaging systems are easier to understand.
A Forest Landscape Visualization System
Tim McDonald; Bryce Stokes
1998-01-01
A forest landscape visualization system was developed and used in creating realistic images depicting how an area might appear if harvested. The system uses a ray-tracing renderer to draw model trees on a virtual landscape. The system includes components to create landscape surfaces from digital elevation data, populate/cut trees within (polygonal) areas, and convert...
A novel role for visual perspective cues in the neural computation of depth
Kim, HyungGoo R.; Angelaki, Dora E.; DeAngelis, Gregory C.
2014-01-01
As we explore a scene, our eye movements add global patterns of motion to the retinal image, complicating visual motion produced by self-motion or moving objects. Conventionally, it has been assumed that extra-retinal signals, such as efference copy of smooth pursuit commands, are required to compensate for the visual consequences of eye rotations. We consider an alternative possibility: namely, that the visual system can infer eye rotations from global patterns of image motion. We visually simulated combinations of eye translation and rotation, including perspective distortions that change dynamically over time. We demonstrate that incorporating these “dynamic perspective” cues allows the visual system to generate selectivity for depth sign from motion parallax in macaque area MT, a computation that was previously thought to require extra-retinal signals regarding eye velocity. Our findings suggest novel neural mechanisms that analyze global patterns of visual motion to perform computations that require knowledge of eye rotations. PMID:25436667
NASA Technical Reports Server (NTRS)
Berthoz, A.; Pavard, B.; Young, L. R.
1975-01-01
The basic characteristics of the sensation of linear horizontal motion have been studied. Objective linear motion was induced by means of a moving cart. Visually induced linear motion perception (linearvection) was obtained by projection of moving images at the periphery of the visual field. Image velocity and luminance thresholds for the appearance of linearvection have been measured and are in the range of those for image motion detection (without sensation of self motion) by the visual system. Latencies of onset are around 1 sec, and short-term adaptation has been shown. The dynamic range of the visual analyzer, as judged by frequency analysis, is lower than that of the vestibular analyzer. Conflicting situations in which visual cues contradict vestibular and other proprioceptive cues show, in the case of linearvection, a dominance of vision, which supports the idea of an essential although not independent role of vision in self motion perception.
NASA Astrophysics Data System (ADS)
Gomes, Gary G.
1986-05-01
A cost-effective and supportable color visual system has been developed to provide the necessary visual cues to United States Air Force B-52 bomber pilots training to become proficient at the task of inflight refueling. This camera-model visual system approach is not suitable for all simulation applications, but provides a cost-effective alternative to digital image generation systems when high fidelity of a single movable object is required. The system consists of a three-axis gimballed KC-135 tanker model, a range-carriage-mounted color-augmented monochrome television camera, interface electronics, a color light valve projector, and an infinity optics display system.
NASA Astrophysics Data System (ADS)
Jing, Joseph C.; Chou, Lidek; Su, Erica; Wong, Brian J. F.; Chen, Zhongping
2016-12-01
The upper airway is a complex tissue structure that is prone to collapse. Current methods for studying airway obstruction, such as CT or MRI, are inadequate in safety, cost, or availability, while others, such as flexible endoscopy, provide only localized qualitative information. Long range optical coherence tomography (OCT) has been used to visualize the human airway in vivo; however, the limited imaging range has prevented full delineation of the various shapes and sizes of the lumen. We present a new long range OCT system that integrates high speed imaging with a real-time position tracker to allow for the acquisition of an accurate 3D anatomical structure in vivo. The new system can achieve an imaging range of 30 mm at a frame rate of 200 Hz. The system is capable of generating a rapid and complete visualization and quantification of the airway, which can then be used in computational simulations to determine obstruction sites.
Navigation-supported diagnosis of the substantia nigra by matching midbrain sonography and MRI
NASA Astrophysics Data System (ADS)
Salah, Zein; Weise, David; Preim, Bernhard; Classen, Joseph; Rose, Georg
2012-03-01
Transcranial sonography (TCS) is a well-established neuroimaging technique that allows for visualizing several brainstem structures, including the substantia nigra, and aids in the diagnosis and differential diagnosis of various movement disorders, especially Parkinsonian syndromes. However, proximate brainstem anatomy can hardly be recognized due to the limited image quality of B-scans. In this paper, a visualization system for the diagnosis of the substantia nigra is presented, which utilizes neuronavigated TCS to reconstruct tomographical slices from registered MRI datasets and visualizes them simultaneously with the corresponding TCS planes in real time. To generate MRI tomographical slices, the tracking data of the calibrated ultrasound probe are passed to an optimized slicing algorithm, which computes cross sections at arbitrary positions and orientations from the registered MRI dataset. The extracted MRI cross sections are finally fused with the region of interest from the ultrasound image. The system allows for the computation and visualization of slices at a near real-time rate. Primary tests of the system show added value over pure sonographic imaging. The system also allows for reconstructing volumetric (3D) ultrasonic data of the region of interest, and thus contributes to enhancing the diagnostic yield of midbrain sonography.
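The slicing step described above reduces, in essence, to resampling a registered volume on the plane defined by the tracked probe pose. A minimal sketch of that operation, assuming the pose is already expressed as a plane center and two orthonormal in-plane axes in voxel coordinates (the function name and parameters are illustrative, not the authors' implementation):

import numpy as np
from scipy.ndimage import map_coordinates

def oblique_slice(volume, center, u_axis, v_axis, size=256, spacing=1.0):
    """Sample volume (z, y, x array) on a size x size plane spanned by the
    orthonormal in-plane axes u_axis, v_axis around center (voxel coords)."""
    u = np.asarray(u_axis, float)
    v = np.asarray(v_axis, float)
    s = (np.arange(size) - size / 2) * spacing
    uu, vv = np.meshgrid(s, s)
    # Voxel coordinates of every sample point on the plane.
    pts = (np.asarray(center, float)[:, None, None]
           + u[:, None, None] * uu + v[:, None, None] * vv)
    # Trilinear interpolation (order=1) keeps this fast enough for near real time.
    return map_coordinates(volume, pts.reshape(3, -1), order=1).reshape(size, size)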
Design of an Image Fusion Phantom for a Small Animal microPET/CT Scanner Prototype
NASA Astrophysics Data System (ADS)
Nava-García, Dante; Alva-Sánchez, Héctor; Murrieta-Rodríguez, Tirso; Martínez-Dávalos, Arnulfo; Rodríguez-Villafuerte, Mercedes
2010-12-01
Two separate microtomography systems recently developed at Instituto de Física, UNAM, produce anatomical (microCT) and physiological (microPET) images of small animals. In this work, the development and initial tests of an image fusion method based on fiducial markers for image registration between the two modalities are presented. A modular Helix/Line-Sources phantom was designed and constructed; this phantom contains fiducial markers that can be visualized in both imaging systems. The registration was carried out by solving the rigid-body Procrustes alignment problem to obtain the rotation and translation matrices required to align the two sets of images. The microCT/microPET image fusion of the Helix/Line-Sources phantom shows excellent visual coincidence between different structures, with a calculated target-registration error of 0.32 mm.
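The rigid-body Procrustes problem mentioned above has a closed-form solution via the singular value decomposition (often called the Kabsch algorithm). A minimal sketch, assuming paired fiducial coordinates from the two modalities; variable names are illustrative:

import numpy as np

def rigid_procrustes(A, B):
    """Find rotation R and translation t minimizing ||R @ A_i + t - B_i||
    over paired fiducial coordinates A, B of shape (N, 3)."""
    a_mean, b_mean = A.mean(axis=0), B.mean(axis=0)
    H = (A - a_mean).T @ (B - b_mean)          # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Sign correction guards against reflections in the solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = b_mean - R @ a_mean
    return R, t

# The target registration error is then the residual distance at test
# points after mapping them with R and t:  (R @ p + t) - q.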
Scientific Visualization and Computational Science: Natural Partners
NASA Technical Reports Server (NTRS)
Uselton, Samuel P.; Lasinski, T. A. (Technical Monitor)
1995-01-01
Scientific visualization is developing rapidly, stimulated by computational science, which is gaining acceptance as a third alternative to theory and experiment. Computational science is based on numerical simulations of mathematical models derived from theory. But each individual simulation is like a hypothetical experiment; initial conditions are specified, and the result is a record of the observed conditions. Experiments can be simulated for situations that cannot really be created or controlled. Results impossible to measure can be computed. Even for observable values, computed samples are typically much denser. Numerical simulations also extend scientific exploration where the mathematics is analytically intractable. Numerical simulations are used to study phenomena from subatomic to intergalactic scales and from abstract mathematical structures to pragmatic engineering of everyday objects. But computational science methods would be almost useless without visualization. The obvious reason is that the huge amounts of data produced require the high bandwidth of the human visual system, and interactivity adds to the power. Visualization systems also provide a single context for all the activities involved, from debugging the simulations, to exploring the data, to communicating the results. Most of the presentations today have their roots in image processing, where the fundamental task is: given an image, extract information about the scene. Visualization has developed from computer graphics, and the inverse task: given a scene description, make an image. Visualization extends the graphics paradigm by expanding the possible input. The goal is still to produce images; the difficulty is that the input is not a scene description displayable by standard graphics methods. Visualization techniques must either transform the data into a scene description or extend graphics techniques to display this odd input. Computational science is a fertile field for visualization research because the results vary so widely and include things that have no known appearance. The amount of data creates additional challenges for both hardware and software systems. Evaluations of visualization should ultimately reflect the insight gained into the scientific phenomena. Making good visualizations therefore requires consideration of the characteristics of the user and the purpose of the visualization. Knowledge about human perception and graphic design is also relevant. It is this breadth of knowledge that stimulates proposals for multidisciplinary visualization teams and intelligent visualization assistant software. Visualization is an immature field, but computational science is stimulating research on a broad front.
Application of single-image camera calibration for ultrasound augmented laparoscopic visualization
NASA Astrophysics Data System (ADS)
Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj
2015-03-01
Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious: they require acquisition of multiple images of a target pattern in its entirety to produce satisfactory results. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE) as a measure of camera calibration accuracy for our optical tracking-based AR system was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising for application to our augmented reality visualization system for laparoscopic surgery.
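For reference, the OpenCV baseline against which rdCalib is compared follows the standard multi-image chessboard procedure. A minimal sketch, with the board geometry, file names, and corner counts as illustrative assumptions rather than details from the paper:

import glob
import cv2
import numpy as np

rows, cols, square = 6, 9, 20.0             # inner corners and square size in mm (assumed)
objp = np.zeros((rows * cols, 3), np.float32)
objp[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2) * square

obj_pts, img_pts = [], []
for path in glob.glob("calib_*.png"):       # roughly 30 views, as in the paper
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    ok, corners = cv2.findChessboardCorners(gray, (cols, rows))
    if ok:
        obj_pts.append(objp)
        img_pts.append(cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)))

# Solve for the intrinsic matrix K and the lens distortion coefficients.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)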
Sugita, Norihiro; Yoshizawa, Makoto; Abe, Makoto; Tanaka, Akira; Watanabe, Takashi; Chiba, Shigeru; Yambe, Tomoyuki; Nitta, Shin-ichi
2007-09-28
Computer graphics and virtual reality techniques are useful for developing automatic and effective rehabilitation systems. However, virtual environments that present unstable visual images on a wide-field screen or a head-mounted display tend to induce motion sickness. Motion sickness induced while using a rehabilitation system not only inhibits effective training but may also harm patients' health. Few studies have objectively evaluated the effects of repetitive exposure to such stimuli on humans. The purpose of this study is to investigate adaptation to visually induced motion sickness using physiological data. An experiment was carried out in which the same video image was presented to human subjects three times. We evaluated changes in the intensity of motion sickness using a subjective score and the physiological index rho(max), which is defined as the maximum cross-correlation coefficient between heart rate and pulse wave transmission time and is considered to reflect autonomic nervous activity. The results showed adaptation to visually induced motion sickness with repeated presentation of the same image in both the subjective and objective indices. However, there were some subjects whose intensity of sickness increased. Analyzing changes in rho(max) over time also made it possible to identify which parts of the video image were related to motion sickness. The physiological index rho(max) should be a good index for assessing the adaptation process to visually induced motion sickness and may be useful in checking the safety of rehabilitation systems with new image technologies.
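Working only from the abstract's definition, rho(max) is the largest cross-correlation coefficient between the heart-rate and pulse-wave-transmission-time series. A minimal sketch of that computation over a window of candidate lags; the lag range and any windowing are my assumptions, not the authors' exact procedure:

import numpy as np

def rho_max(hr, pwtt, max_lag=10):
    """Return the maximum Pearson correlation between two equal-length
    series across integer lags in [-max_lag, max_lag]."""
    best = -1.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = hr[lag:], pwtt[:len(pwtt) - lag]   # shift hr forward
        else:
            a, b = hr[:lag], pwtt[-lag:]              # shift pwtt forward
        r = np.corrcoef(a, b)[0, 1]
        best = max(best, r)
    return best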
Steato-Score: Non-Invasive Quantitative Assessment of Liver Fat by Ultrasound Imaging.
Di Lascio, Nicole; Avigo, Cinzia; Salvati, Antonio; Martini, Nicola; Ragucci, Monica; Monti, Serena; Prinster, Anna; Chiappino, Dante; Mancini, Marcello; D'Elia, Domenico; Ghiadoni, Lorenzo; Bonino, Ferruccio; Brunetto, Maurizia R; Faita, Francesco
2018-05-04
Non-alcoholic fatty liver disease is becoming a global epidemic. The aim of this study was to develop a system for assessing liver fat content based on ultrasound images. Magnetic resonance spectroscopy measurements were obtained in 61 patients and the controlled attenuation parameter in 54. Ultrasound images were acquired for all 115 participants and used to calculate the hepatic/renal ratio, hepatic/portal vein ratio, attenuation rate, diaphragm visualization and portal vein wall visualization. The Steato-score was obtained by combining these five parameters. Magnetic resonance spectroscopy measurements were significantly correlated with hepatic/renal ratio, hepatic/portal vein ratio, attenuation rate, diaphragm visualization and portal vein wall visualization; Steato-score was dependent on hepatic/renal ratio, attenuation rate and diaphragm visualization. Area under the receiver operating characteristic curve was equal to 0.98, with 89% sensitivity and 94% specificity. Controlled attenuation parameter values were significantly correlated with hepatic/renal ratio, attenuation rate, diaphragm visualization and Steato-score; the area under the curve was 0.79. This system could be a valid alternative as a non-invasive, simple and inexpensive assessment of intrahepatic fat. Copyright © 2018 World Federation for Ultrasound in Medicine and Biology. Published by Elsevier Inc. All rights reserved.
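The abstract does not give the exact rule for combining the five ultrasound parameters into the Steato-score, so the sketch below stands in with a generic logistic combination scored by ROC AUC; the data, weights, and model choice are all placeholders, not the published method:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# X: one row per patient with [hepatic/renal ratio, hepatic/portal-vein ratio,
# attenuation rate, diaphragm visualization, portal-vein wall visualization];
# y: 1 = steatosis by the MRS reference standard. Synthetic placeholder data.
rng = np.random.default_rng(0)
X = rng.normal(size=(115, 5))
y = (X @ np.array([1.2, 0.3, 1.0, 0.8, 0.2]) + rng.normal(size=115) > 0).astype(int)

model = LogisticRegression().fit(X, y)
steato_score = model.predict_proba(X)[:, 1]   # combined score in 0..1
print("AUC:", roc_auc_score(y, steato_score))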
Thermoacoustic imaging of fresh prostates up to 6-cm diameter
NASA Astrophysics Data System (ADS)
Patch, S. K.; Hanson, E.; Thomas, M.; Kelly, H.; Jacobsohn, K.; See, W. A.
2013-03-01
Thermoacoustic (TA) imaging provides a novel contrast mechanism that may enable visualization of cancerous lesions which are not robustly detected by current imaging modalities. Prostate cancer (PCa) is the most notorious example. Imaging entire prostate glands requires 6 cm depth penetration. We therefore excite TA signal using submicrosecond VHF pulses (100 MHz). We will present reconstructions of fresh prostates imaged in a well-controlled benchtop TA imaging system. Chilled glycine solution is used as acoustic couplant. The urethra is routinely visualized as signal dropout; surgical staples formed from 100-micron wide wire bent to 3 mm length generate strong positive signal.
Characteristics of flight simulator visual systems
NASA Technical Reports Server (NTRS)
Statler, I. C. (Editor)
1981-01-01
The physical parameters of the flight simulator visual system that characterize the system and determine its fidelity are identified and defined. The characteristics of visual simulation systems are discussed in terms of the basic categories of spatial, energy, and temporal properties, corresponding to the three fundamental quantities of length, mass, and time. Each of these parameters is further addressed in relation to its effect, its appropriate units or descriptors, methods of measurement, and its use or importance to image quality.
JackIn Head: Immersive Visual Telepresence System with Omnidirectional Wearable Camera.
Kasahara, Shunichi; Nagai, Shohei; Rekimoto, Jun
2017-03-01
Sharing one's own immersive experience over the Internet is one of the ultimate goals of telepresence technology. In this paper, we present JackIn Head, a visual telepresence system featuring an omnidirectional wearable camera with image motion stabilization. Spherical omnidirectional video footage taken around the head of a local user is stabilized and then broadcast to others, allowing remote users to explore the immersive visual environment independently of the local user's head direction. We describe the system design of JackIn Head and report the evaluation results of real-time image stabilization and alleviation of cybersickness. Then, through an exploratory observation study, we investigate how individuals can remotely interact with, communicate with, and assist each other using our system. We report our observation and analysis of inter-personal communication, demonstrating the effectiveness of our system in augmenting remote collaboration.
Research and analysis of head-directed area-of-interest visual system concepts
NASA Technical Reports Server (NTRS)
Sinacori, J. B.
1983-01-01
An analysis and survey, with conjecture supporting a preliminary data base design, is presented. The data base is intended for use in a computer image generator visual subsystem for a rotorcraft flight simulator that is used for rotorcraft systems development, not training. The approach taken was to identify the visual perception strategies used during terrain flight, survey environmental and image generation factors, and meld these into a preliminary data base design. This design is directed at data base developers, and will hopefully stimulate and aid their efforts to evolve a data base that supports simulation of terrain flight operations.
Anterior-segment imaging for assessment of glaucoma
Ursea, Roxana; Silverman, Ronald H
2010-01-01
This article summarizes the physics, technology and clinical application of ultrasound biomicroscopy (UBM) and optical coherence tomography (OCT) for assessment of the anterior segment in glaucoma. UBM systems use frequencies ranging from approximately 35 to 80 MHz, as compared with the typical 10-MHz systems used for general-purpose ophthalmic imaging. OCT systems use low-coherence, near-infrared light to provide detailed images of anterior segment structures at resolutions exceeding that of UBM. Both technologies allow visualization of the iridocorneal angle and, thus, can contribute to the diagnosis and management of glaucoma. OCT systems are advantageous, being noncontact procedures and providing finer resolution than UBM, but UBM systems are superior for the visualization of retroiridal structures, including the ciliary body, posterior chamber and zonules, which can provide crucial diagnostic information for the assessment of glaucoma. PMID:20305726
Using component technologies for web based wavelet enhanced mammographic image visualization.
Sakellaropoulos, P; Costaridou, L; Panayiotakis, G
2000-01-01
The poor contrast detectability of mammography can be dealt with by domain specific software visualization tools. Remote desktop client access and time performance limitations of a previously reported visualization tool are addressed, aiming at more efficient visualization of mammographic image resources existing in web or PACS image servers. This effort is also motivated by the fact that at present, web browsers do not support domain-specific medical image visualization. To deal with desktop client access the tool was redesigned by exploring component technologies, enabling the integration of stand alone domain specific mammographic image functionality in a web browsing environment (web adaptation). The integration method is based on ActiveX Document Server technology. ActiveX Document is a part of Object Linking and Embedding (OLE) extensible systems object technology, offering new services in existing applications. The standard DICOM 3.0 part 10 compatible image-format specification Papyrus 3.0 is supported, in addition to standard digitization formats such as TIFF. The visualization functionality of the tool has been enhanced by including a fast wavelet transform implementation, which allows for real time wavelet based contrast enhancement and denoising operations. Initial use of the tool with mammograms of various breast structures demonstrated its potential in improving visualization of diagnostic mammographic features. Web adaptation and real time wavelet processing enhance the potential of the previously reported tool in remote diagnosis and education in mammography.
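A minimal sketch of the kind of wavelet-based processing the tool performs, using PyWavelets as a stand-in for its own fast wavelet transform (the wavelet choice, threshold, and gain below are illustrative): soft-threshold the detail coefficients to denoise, then amplify what survives to boost contrast.

import numpy as np
import pywt

def wavelet_enhance(img, wavelet="db4", levels=3, thresh=10.0, gain=1.5):
    """Denoise and contrast-enhance a 2D image in the wavelet domain."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=levels)
    out = [coeffs[0]]                              # keep the approximation band
    for detail in coeffs[1:]:
        # Shrink small (noisy) coefficients, then amplify the remainder.
        out.append(tuple(
            gain * pywt.threshold(d, thresh, mode="soft") for d in detail))
    return pywt.waverec2(out, wavelet)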
Retinal Information Processing for Minimum Laser Lesion Detection and Cumulative Damage
1992-09-17
[Abstract degraded by OCR; recoverable fragments discuss a possible beneficial visual function of small retinal image movements, prior models of visual system information processing, calibration against standard secondary sources traceable to the National Bureau of Standards, and extracellular electrophysiological techniques.]
Validating tyrosinase homologue melA as a photoacoustic reporter gene for imaging Escherichia coli
NASA Astrophysics Data System (ADS)
Paproski, Robert J.; Li, Yan; Barber, Quinn; Lewis, John D.; Campbell, Robert E.; Zemp, Roger
2015-10-01
To understand the pathogenic processes of infectious bacteria, appropriate research tools are required for replicating and characterizing infections. Fluorescence and bioluminescence imaging have primarily been used to image infections in animal models, but optical scattering in tissue significantly limits imaging depth and resolution. Photoacoustic imaging, which has an improved depth-to-resolution ratio compared to conventional optical imaging, could be useful for visualizing melA-expressing bacteria, since melA is a bacterial tyrosinase homologue that produces melanin. Escherichia coli expressing melA was visibly dark in liquid culture. When melA-expressing bacteria in tubes were imaged with a VisualSonics Vevo LAZR system, the signal-to-noise ratio of a 9× dilution sample was 55, suggesting that ~20 bacterial cells could be detected with our system. Multispectral (680, 700, 750, 800, 850, and 900 nm) analysis of the photoacoustic signal allowed unmixing of melA-expressing bacteria from blood. To compare the photoacoustic reporter gene melA (using the Vevo system) with the luminescent and fluorescent reporter gene Nano-lantern (using a Bruker Xtreme In-Vivo system), tubes of bacteria expressing melA or Nano-lantern were submerged 10 mm in 1% Intralipid, spaced between <1 and 20 mm apart from each other, and imaged with the appropriate imaging modality. Photoacoustic imaging could resolve the two tubes of melA-expressing bacteria even when the tubes were less than 1 mm from each other, while bioluminescence and fluorescence imaging could not resolve the two tubes of Nano-lantern-expressing bacteria even when the tubes were spaced 10 mm from each other. After injecting 100 μL of melA-expressing bacteria into the back flank of a chicken embryo, photoacoustic imaging allowed visualization of melA-expressing bacteria up to 10 mm deep into the embryo. Photoacoustic signal from melA could also be separated from the deoxy- and oxy-hemoglobin signals observed within the embryo and chorioallantoic membrane. Our results suggest that melA is a useful photoacoustic reporter gene for visualizing bacteria, and further work incorporating photoacoustic reporters into infectious bacterial strains is warranted.
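Multispectral unmixing of the kind described, separating a melanin-like absorber from deoxy- and oxyhemoglobin, can be posed as a per-pixel linear least-squares problem over the six wavelengths. A minimal sketch; the absorption matrix below is placeholder data, not measured spectra:

import numpy as np

wavelengths = [680, 700, 750, 800, 850, 900]   # nm, as in the study
# Rows: wavelengths; columns: [melanin, Hb, HbO2] relative absorption.
# These values are illustrative placeholders only.
E = np.array([[1.00, 0.55, 0.30],
              [0.92, 0.42, 0.29],
              [0.78, 0.40, 0.32],
              [0.66, 0.20, 0.44],
              [0.56, 0.18, 0.53],
              [0.47, 0.17, 0.60]])

def unmix(pa_spectra):
    """pa_spectra: (..., 6) per-pixel photoacoustic amplitudes; returns
    least-squares component concentrations of shape (..., 3)."""
    flat = pa_spectra.reshape(-1, len(wavelengths)).T   # (6, N)
    conc, *_ = np.linalg.lstsq(E, flat, rcond=None)     # solve E @ c = s
    return conc.T.reshape(pa_spectra.shape[:-1] + (3,))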
Visual attention to food cues in obesity: an eye-tracking study.
Doolan, Katy J; Breslin, Gavin; Hanna, Donncha; Murphy, Kate; Gallagher, Alison M
2014-12-01
Based on the theory of incentive sensitization, the aim of this study was to investigate differences in attentional processing of food-related visual cues between normal-weight and overweight/obese males and females. Twenty-six normal-weight (14M, 12F) and 26 overweight/obese (14M, 12F) adults completed a visual probe task and an eye-tracking paradigm. Reaction times and eye movements to food and control images were collected during both a fasted and fed condition in a counterbalanced design. Participants had greater visual attention towards high-energy-density food images compared to low-energy-density food images regardless of hunger condition. This was most pronounced in overweight/obese males, who had significantly greater maintained attention towards high-energy-density food images when compared with their normal-weight counterparts; however, no between-weight-group differences were observed for female participants. High-energy-density food images appear to capture visual attention more readily than low-energy-density food images. Results also suggest the possibility of an altered visual food cue-associated reward system in overweight/obese males. Attentional processing of food cues may play a role in eating behaviors and thus should be taken into consideration as part of an integrated approach to curbing obesity. © 2014 The Obesity Society.
Corney, David; Haynes, John-Dylan; Rees, Geraint; Lotto, R. Beau
2009-01-01
Background: The perception of brightness depends on spatial context: the same stimulus can appear light or dark depending on what surrounds it. A less well-known but equally important contextual phenomenon is that the colour of a stimulus can also alter its brightness. Specifically, stimuli that are more saturated (i.e. purer in colour) appear brighter than stimuli that are less saturated at the same luminance. Similarly, stimuli that are red or blue appear brighter than equiluminant yellow and green stimuli. This non-linear relationship between stimulus intensity and brightness, called the Helmholtz-Kohlrausch (HK) effect, was first described in the nineteenth century but has never been explained. Here, we take advantage of the relative simplicity of this 'illusion' to explain it, and contextual effects more generally, by using a simple Bayesian ideal observer model of the human visual ecology. We also use fMRI brain scans to identify the neural correlates of brightness without changing the spatial context of the stimulus, which has complicated the interpretation of related fMRI studies. Results: Rather than modelling human vision directly, we use a Bayesian ideal observer to model human visual ecology. We show that the HK effect is a result of encoding the non-linear statistical relationship between retinal images and natural scenes that would have been experienced by the human visual system in the past. We further show that the complexity of this relationship is due to the response functions of the cone photoreceptors, which themselves are thought to represent an efficient solution to encoding the statistics of images. Finally, we show that the locus of the response to the relationship between images and scenes lies in the primary visual cortex (V1), if not earlier in the visual system, since the brightness of colours (as opposed to their luminance) accords with activity in V1 as measured with fMRI. Conclusions: The data suggest that perceptions of brightness represent a robust visual response to the likely sources of stimuli, as determined, in this instance, by the known statistical relationship between scenes and their retinal responses. While the responses of the early visual system (receptors in this case) may represent specifically the statistics of images, post-receptor responses are more likely to represent the statistical relationship between images and scenes. A corollary of this suggestion is that the visual cortex is adapted to relate the retinal image to behaviour given the statistics of its past interactions with the sources of retinal images: the visual cortex is adapted to the signals it receives from the eyes, and not directly to the world beyond. PMID:19333398
Tu, Joanna H; Foote, Katharina G; Lujan, Brandon J; Ratnam, Kavitha; Qin, Jia; Gorin, Michael B; Cunningham, Emmett T; Tuten, William S; Duncan, Jacque L; Roorda, Austin
2017-09-01
Confocal adaptive optics scanning laser ophthalmoscope (AOSLO) images provide a sensitive measure of cone structure. However, the relationship between structural findings of diminished cone reflectivity and visual function is unclear. We used fundus-referenced testing to evaluate visual function in regions of apparent cone loss identified using confocal AOSLO images. A patient diagnosed with acute bilateral foveolitis had spectral-domain optical coherence tomography (SD-OCT) (Spectralis HRA + OCT system [Heidelberg Engineering, Vista, CA, USA]) images indicating focal loss of the inner segment-outer segment junction band with an intact, but hyper-reflective, external limiting membrane. Five years after symptom onset, visual acuity had improved from 20/80 to 20/25, but the retinal appearance remained unchanged compared to 3 months after symptoms began. We performed structural assessments using SD-OCT, directional OCT (non-standard use of a prototype on loan from Carl Zeiss Meditec) and AOSLO (custom-built system). We also administered fundus-referenced functional tests in the region of apparent cone loss, including analysis of preferred retinal locus (PRL), AOSLO acuity, and microperimetry with tracking SLO (TSLO) (prototype system). To determine AOSLO-corrected visual acuity, the scanning laser was modulated with a tumbling E consistent with 20/30 visual acuity. Visual sensitivity was assessed in and around the lesion using TSLO microperimetry. Complete eye examination, including standard measures of best-corrected visual acuity, visual field tests, color fundus photos, and fundus auto-fluorescence were also performed. Despite a lack of visible cone profiles in the foveal lesion, fundus-referenced vision testing demonstrated visual function within the lesion consistent with cone function. The PRL was within the lesion of apparent cone loss at the fovea. AOSLO visual acuity tests were abnormal, but measurable: for trials in which the stimulus remained completely within the lesion, the subject got 48% correct, compared to 78% correct when the stimulus was outside the lesion. TSLO microperimetry revealed reduced, but detectible, sensitivity thresholds within the lesion. Fundus-referenced visual testing proved useful to identify functional cones despite apparent photoreceptor loss identified using AOSLO and SD-OCT. While AOSLO and SD-OCT appear to be sensitive for the detection of abnormal or absent photoreceptors, changes in photoreceptors that are identified with these imaging tools do not correlate completely with visual function in every patient. Fundus-referenced vision testing is a useful tool to indicate the presence of cones that may be amenable to recovery or response to experimental therapies despite not being visible on confocal AOSLO or SD-OCT images.
Proceedings of the Augmented VIsual Display (AVID) Research Workshop
NASA Technical Reports Server (NTRS)
Kaiser, Mary K. (Editor); Sweet, Barbara T. (Editor)
1993-01-01
The papers, abstracts, and presentations were presented at a three day workshop focused on sensor modeling and simulation, and image enhancement, processing, and fusion. The technical sessions emphasized how sensor technology can be used to create visual imagery adequate for aircraft control and operations. Participants from industry, government, and academic laboratories contributed to panels on Sensor Systems, Sensor Modeling, Sensor Fusion, Image Processing (Computer and Human Vision), and Image Evaluation and Metrics.
Compact fluorescence and white-light imaging system for intraoperative visualization of nerves
NASA Astrophysics Data System (ADS)
Gray, Dan; Kim, Evgenia; Cotero, Victoria; Staudinger, Paul; Yazdanfar, Siavash; Tan Hehir, Cristina
2012-02-01
Fluorescence image guided surgery (FIGS) allows intraoperative visualization of critical structures, with applications spanning neurology, cardiology and oncology. An unmet clinical need is prevention of iatrogenic nerve damage, a major cause of post-surgical morbidity. Here we describe the advancement of FIGS imaging hardware, coupled with a custom nerve-labeling fluorophore (GE3082), to bring FIGS nerve imaging closer to clinical translation. The instrument comprises a 405 nm laser and a white light LED source for excitation and illumination. A single 90-gram color CCD camera is coupled to a 10 mm surgical laparoscope for image acquisition. Synchronization of the light source and camera allows for simultaneous visualization of reflected white light and fluorescence using only a single camera. The imaging hardware and contrast agent were evaluated in rats during in situ surgical procedures.
Model-based quantification of image quality
NASA Technical Reports Server (NTRS)
Hazra, Rajeeb; Miller, Keith W.; Park, Stephen K.
1989-01-01
In 1982, Park and Schowengerdt published an end-to-end analysis of a digital imaging system quantifying three principal degradation components: (1) image blur, caused by the acquisition system; (2) aliasing, caused by insufficient sampling; and (3) reconstruction blur, caused by imperfect interpolative reconstruction. This analysis, which measures degradation as the square of the radiometric error, includes the sample-scene phase as an explicit random parameter and characterizes the image degradation caused by imperfect acquisition and reconstruction together with the effects of undersampling and random sample-scene phases. In a recent paper, Mitchell and Netravali displayed the visual effects of the above-mentioned degradations and presented a subjective analysis of their relative importance in determining image quality. The primary aim of the research is to use the analysis of Park and Schowengerdt to correlate their mathematical criteria for measuring image degradations with subjective visual criteria. Insight gained from this research can be exploited in the end-to-end design of optical systems, so that system parameters (transfer functions of the acquisition and display systems) can be designed relative to each other to obtain the best possible results using quantitative measurements.
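To make the degradation measure concrete, here is a minimal sketch (an illustrative pipeline, not Park and Schowengerdt's exact formulation) that blurs a scene, undersamples it at a given sample-scene phase, reconstructs by interpolation, and reports the mean squared radiometric error, averaged over random phases:

import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def radiometric_error(scene, acq_blur=1.5, factor=4, phase=0):
    blurred = gaussian_filter(scene, acq_blur)        # acquisition blur
    sampled = blurred[phase::factor, phase::factor]   # aliasing from undersampling
    recon = zoom(sampled, factor, order=1)            # imperfect (linear) reconstruction
    recon = recon[:scene.shape[0], :scene.shape[1]]
    return np.mean((scene - recon) ** 2)              # squared radiometric error

scene = np.random.default_rng(1).random((256, 256))
# Treat the sample-scene phase as a random parameter by averaging over it:
print(np.mean([radiometric_error(scene, phase=p) for p in range(4)]))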
Using video playbacks to study visual communication in a marine fish, Salaria pavo.
Gonçalves; Oliveira; Körner; Poschadel; Schlupp
2000-09-01
Video playbacks have been successfully applied to the study of visual communication in several groups of animals. However, this technique is controversial, as video monitors are designed with the human visual system in mind. Differences between the visual capabilities of humans and other animals will lead to perceptually different interpretations of video images. We simultaneously presented males and females of the peacock blenny, Salaria pavo, with a live conspecific male and an online video image of the same individual. Video images failed to elicit appropriate responses. Males were aggressive towards the live male but not towards video images of the same male. Similarly, females courted only the live male and spent more time near this stimulus. In contrast, females of the gynogenetic poeciliid Poecilia formosa showed an equal preference for a live and a video image of a P. mexicana male, suggesting a response to video images as strong as that to live animals. We discuss differences between the species that may explain their opposite reactions to video images. Copyright 2000 The Association for the Study of Animal Behaviour.
Local spatio-temporal analysis in vision systems
NASA Astrophysics Data System (ADS)
Geisler, Wilson S.; Bovik, Alan; Cormack, Lawrence; Ghosh, Joydeep; Gildeen, David
1994-07-01
The aims of this project are the following: (1) develop a physiologically and psychophysically based model of low-level human visual processing (a key component of which are local frequency coding mechanisms); (2) develop image models and image-processing methods based upon local frequency coding; (3) develop algorithms for performing certain complex visual tasks based upon local frequency representations, (4) develop models of human performance in certain complex tasks based upon our understanding of low-level processing; and (5) develop a computational testbed for implementing, evaluating and visualizing the proposed models and algorithms, using a massively parallel computer. Progress has been substantial on all aims. The highlights include the following: (1) completion of a number of psychophysical and physiological experiments revealing new, systematic and exciting properties of the primate (human and monkey) visual system; (2) further development of image models that can accurately represent the local frequency structure in complex images; (3) near completion in the construction of the Texas Active Vision Testbed; (4) development and testing of several new computer vision algorithms dealing with shape-from-texture, shape-from-stereo, and depth-from-focus; (5) implementation and evaluation of several new models of human visual performance; and (6) evaluation, purchase and installation of a MasPar parallel computer.
Modelling Subjectivity in Visual Perception of Orientation for Image Retrieval.
ERIC Educational Resources Information Center
Sanchez, D.; Chamorro-Martinez, J.; Vila, M. A.
2003-01-01
Discussion of multimedia libraries and the need for storage, indexing, and retrieval techniques focuses on the combination of computer vision and data mining techniques to model high-level concepts for image retrieval based on perceptual features of the human visual system. Uses fuzzy set theory to measure users' assessments and to capture users'…
Web-based visualization of very large scientific astronomy imagery
NASA Astrophysics Data System (ADS)
Bertin, E.; Pillay, R.; Marmo, C.
2015-04-01
Visualizing and navigating through large astronomy images from a remote location with current astronomy display tools can be a frustrating experience in terms of speed and ergonomics, especially on mobile devices. In this paper, we present a high performance, versatile and robust client-server system for remote visualization and analysis of extremely large scientific images. Applications of this work include survey image quality control, interactive data query and exploration, citizen science, as well as public outreach. The proposed software is entirely open source and is designed to be generic and applicable to a variety of datasets. It provides access to floating point data at terabyte scales, with the ability to precisely adjust image settings in real time. The proposed clients are lightweight, platform-independent web applications built on standard HTML5 web technologies and compatible with both touch and mouse-based devices. We put the system to the test and show that a single server can comfortably handle more than a hundred simultaneous users accessing full precision 32-bit astronomy data.
A neotropical Miocene pollen database employing image-based search and semantic modeling.
Han, Jing Ginger; Cao, Hongfei; Barb, Adrian; Punyasena, Surangi W; Jaramillo, Carlos; Shyu, Chi-Ren
2014-08-01
Digital microscopic pollen images are being generated with increasing speed and volume, producing opportunities to develop new computational methods that increase the consistency and efficiency of pollen analysis and provide the palynological community a computational framework for information sharing and knowledge transfer. • Mathematical methods were used to assign trait semantics (abstract morphological representations) of the images of neotropical Miocene pollen and spores. Advanced database-indexing structures were built to compare and retrieve similar images based on their visual content. A Web-based system was developed to provide novel tools for automatic trait semantic annotation and image retrieval by trait semantics and visual content. • Mathematical models that map visual features to trait semantics can be used to annotate images with morphology semantics and to search image databases with improved reliability and productivity. Images can also be searched by visual content, providing users with customized emphases on traits such as color, shape, and texture. • Content- and semantic-based image searches provide a powerful computational platform for pollen and spore identification. The infrastructure outlined provides a framework for building a community-wide palynological resource, streamlining the process of manual identification, analysis, and species discovery.
Visual Based Retrieval Systems and Web Mining--Introduction.
ERIC Educational Resources Information Center
Iyengar, S. S.
2001-01-01
Briefly discusses Web mining and image retrieval techniques, and then presents a summary of articles in this special issue. Articles focus on Web content mining, artificial neural networks as tools for image retrieval, content-based image retrieval systems, and personalizing the Web browsing experience using media agents. (AEF)
Dynamic optical projection of acquired luminescence for aiding oncologic surgery
NASA Astrophysics Data System (ADS)
Sarder, Pinaki; Gullicksrud, Kyle; Mondal, Suman; Sudlow, Gail P.; Achilefu, Samuel; Akers, Walter J.
2013-12-01
Optical imaging enables real-time visualization of intrinsic and exogenous contrast within biological tissues. Applications in human medicine have demonstrated the power of fluorescence imaging to enhance visualization in dermatology, endoscopic procedures, and open surgery. Although few optical contrast agents are available for human medicine at this time, fluorescence imaging is proving to be a powerful tool in guiding medical procedures. Recently, intraoperative detection of fluorescent molecular probes that target cell-surface receptors has been reported for improvement in oncologic surgery in humans. We have developed a novel system, optical projection of acquired luminescence (OPAL), to further enhance real-time guidance of open oncologic surgery. In this method, collected fluorescence intensity maps are projected directly onto the imaged surface rather than displayed on a wall-mounted monitor. To demonstrate proof-of-principle for OPAL applications in oncologic surgery, lymphatic transport of indocyanine green was visualized in live mice for intraoperative identification of sentinel lymph nodes. Subsequently, peritoneal tumors in a murine model of breast cancer metastasis were identified using OPAL after systemic administration of a tumor-selective fluorescent molecular probe. These initial results clearly show that OPAL can enhance adoption and ease-of-use of fluorescence imaging in oncologic procedures relative to existing state-of-the-art intraoperative imaging systems.
Natural language processing and visualization in the molecular imaging domain.
Tulipano, P Karina; Tao, Ying; Millar, William S; Zanzonico, Pat; Kolbert, Katherine; Xu, Hua; Yu, Hong; Chen, Lifeng; Lussier, Yves A; Friedman, Carol
2007-06-01
Molecular imaging is at the crossroads of genomic sciences and medical imaging. Information within the molecular imaging literature could be used to link to genomic and imaging information resources and to organize and index images in a way that is potentially useful to researchers. A number of natural language processing (NLP) systems are available to automatically extract information from genomic literature. One existing NLP system, known as BioMedLEE, automatically extracts biological information consisting of biomolecular substances and phenotypic data. This paper focuses on the adaptation, evaluation, and application of BioMedLEE to the molecular imaging domain. In order to adapt BioMedLEE for this domain, we extend an existing molecular imaging terminology and incorporate it into BioMedLEE. BioMedLEE's performance is assessed with a formal evaluation study. The system's performance, measured as recall and precision, is 0.74 (95% CI: [0.70-0.76]) and 0.70 (95% CI: [0.63-0.76]), respectively. We adapt a Java viewer known as PGviewer for the simultaneous visualization of images with NLP extracted information.
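As a reminder of how the reported figures are computed, the sketch below derives recall and precision from entity-extraction counts; the counts themselves are hypothetical placeholders chosen to land near the reported values, not the study's raw data.

```python
# Minimal sketch: recall and precision as used in NLP extraction evaluations.
# tp/fn/fp counts below are illustrative placeholders, not BioMedLEE's data.
def recall_precision(tp, fn, fp):
    recall = tp / (tp + fn)      # fraction of reference entities the system found
    precision = tp / (tp + fp)   # fraction of extracted entities that are correct
    return recall, precision

r, p = recall_precision(tp=74, fn=26, fp=32)
print(f"recall = {r:.2f}, precision = {p:.2f}")  # ~0.74 and ~0.70
```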
Visual difference metric for realistic image synthesis
NASA Astrophysics Data System (ADS)
Bolin, Mark R.; Meyer, Gary W.
1999-05-01
An accurate and efficient model of human perception has been developed to control the placement of samples in a realistic image synthesis algorithm. Previous sampling techniques have sought to spread the error equally across the image plane. However, this approach neglects the fact that the renderings are intended to be displayed for a human observer. The human visual system has a varying sensitivity to error that is based upon the viewing context. This means that equivalent optical discrepancies can be very obvious in one situation and imperceptible in another. It is ultimately the perceptibility of this error that governs image quality and should be used as the basis of a sampling algorithm. This paper focuses on a simplified version of the Lubin Visual Discrimination Metric (VDM) that was developed for insertion into an image synthesis algorithm. The sampling VDM makes use of a Haar wavelet basis for the cortical transform and a less severe spatial pooling operation. The model was extended to color, including the effects of chromatic aberration. Comparisons are made between the execution time and visual difference map for the original Lubin and simplified visual difference metrics. Results for the realistic image synthesis algorithm are also presented.
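To make the wavelet step concrete, here is a minimal single-level 2D Haar decomposition of the kind the sampling VDM's cortical transform builds on; the normalization and the function name are illustrative choices, not taken from the paper.

```python
import numpy as np

def haar_level(img):
    # One level of a 2D Haar decomposition (image dimensions assumed even):
    # a coarse approximation band plus three oriented detail bands.
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # approximation (input to the next level)
    lh = (a + b - c - d) / 4.0   # vertical detail
    hl = (a - b + c - d) / 4.0   # horizontal detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, lh, hl, hh
```

Applying haar_level recursively to the approximation band yields the multi-resolution pyramid over which perceptual error can be pooled.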
Experimental design and analysis of JND test on coded image/video
NASA Astrophysics Data System (ADS)
Lin, Joe Yuchieh; Jin, Lina; Hu, Sudeng; Katsavounidis, Ioannis; Li, Zhi; Aaron, Anne; Kuo, C.-C. Jay
2015-09-01
The visual Just-Noticeable-Difference (JND) metric is characterized by the minimum difference between two visual stimuli that can be detected. Conducting a subjective JND test is a labor-intensive task. In this work, we present a novel interactive method for performing the visual JND test on compressed image/video. JND has been used to enhance perceptual visual quality in the context of image/video compression. Given a set of coding parameters, a JND test is designed to determine the distinguishable quality level against a reference image/video, which is called the anchor. The JND metric can be used to save coding bitrate by exploiting the special characteristics of the human visual system. The proposed JND test is conducted using a binary forced-choice method, which is often adopted to discriminate perceptual differences in psychophysical experiments. The assessors are asked to compare coded image/video pairs and determine whether they are of the same quality or not. A bisection procedure is designed to find the JND locations so as to reduce the required number of comparisons over a wide range of bitrates. We demonstrate the efficiency of the proposed JND test and report experimental results on both image and video JND tests.
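A bisection search of this kind can be sketched as follows; the interface (a list of coded quality levels ordered from best to worst, and a callback returning the assessor's forced-choice answer) is an assumption for illustration, not the authors' actual protocol.

```python
def find_jnd(levels, looks_same):
    # levels: coded versions ordered from highest to lowest quality, where
    # levels[0] is assumed indistinguishable from the anchor and levels[-1]
    # clearly distinguishable. looks_same(level) is the assessor's binary
    # forced-choice answer when comparing that level against the anchor.
    lo, hi = 0, len(levels) - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if looks_same(levels[mid]):
            lo = mid          # still indistinguishable; probe lower quality
        else:
            hi = mid          # already distinguishable; probe higher quality
    return levels[hi]         # first level with a just-noticeable difference
```

Each probe halves the remaining range, so the number of pairwise comparisons grows only logarithmically with the number of coded bitrates.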
Fukuda, Hiroyuki; Numata, Kazushi; Nozaki, Akito; Kondo, Masaaki; Morimoto, Manabu; Maeda, Shin; Tanaka, Katsuaki; Ohto, Masao; Ito, Ryu; Ishibashi, Yoshiharu; Oshima, Noriyoshi; Ito, Ayao; Zhu, Hui; Wang, Zhi-Biao
2013-12-01
We evaluated the usefulness of color Doppler flow imaging to compensate for the inadequate resolution of ultrasound (US) monitoring during high-intensity focused ultrasound (HIFU) treatment of hepatocellular carcinoma (HCC). US-guided HIFU ablation assisted by color Doppler flow imaging was performed in 11 patients with small HCC (<3 lesions, <3 cm in diameter). The HIFU system (Chongqing Haifu Tech) was used under US guidance. Color Doppler sonographic studies were performed using an HIFU 6150S US imaging unit system and a 2.7-MHz electronic convex probe. Color Doppler images were used to compensate for the influence of multi-reflections and the emergence of hyperechoes. In 1 of the 11 patients, multi-reflections were responsible for poor visualization of the tumor; in the other 10 cases, the tumor was poorly visualized because of the emergence of a hyperecho. In these cases, the ability to identify the original tumor location on the monitor by referencing the color Doppler images of the portal vein and the hepatic vein was very useful. HIFU treatments were successfully performed in all 11 patients with the assistance of color Doppler imaging. Color Doppler imaging is useful for the treatment of HCC using HIFU, compensating for the occasionally poor visualization provided by conventional B-mode US imaging.
Effects of Spatio-Temporal Aliasing on Out-the-Window Visual Systems
NASA Technical Reports Server (NTRS)
Sweet, Barbara T.; Stone, Leland S.; Liston, Dorion B.; Hebert, Tim M.
2014-01-01
Designers of out-the-window visual systems face a challenge when attempting to simulate the outside world as viewed from a cockpit. Many methodologies have been developed and adopted to aid in the depiction of particular scene features, or levels of static image detail. However, because aircraft move, it is necessary to also consider the quality of the motion in the simulated visual scene. When motion is introduced in the simulated visual scene, perceptual artifacts can become apparent. A particular artifact related to image motion, spatio-temporal aliasing, will be addressed. The causes of spatio-temporal aliasing will be discussed, and current knowledge regarding the impact of these artifacts on both motion perception and simulator task performance will be reviewed. Methods of reducing the impact of this artifact are also addressed.
Cognitive approaches for patterns analysis and security applications
NASA Astrophysics Data System (ADS)
Ogiela, Marek R.; Ogiela, Lidia
2017-08-01
This paper presents new opportunities for developing innovative solutions for semantic pattern classification and visual cryptography, based on cognitive and bio-inspired approaches. Such techniques can be used to evaluate the meaning of analyzed patterns or encrypted information, and allow that meaning to be incorporated into the classification task or encryption process. They also allow crypto-biometric solutions to be used to extend personalized cryptography methodologies based on visual pattern analysis. In particular, the application of cognitive information systems to the semantic analysis of different patterns is presented, and a novel application of such systems to visual secret sharing is described. Visual shares for divided information can be created by a threshold procedure, which may depend on personal abilities to recognize image details visible in the divided images.
NASA Astrophysics Data System (ADS)
Morita, Shinji; Yamazawa, Kazumasa; Yokoya, Naokazu
2003-01-01
This paper describes a new networked telepresence system which realizes virtual tours into a visualized dynamic real world without significant time delay. Our system is realized by the following three steps: (1) video-rate omnidirectional image acquisition, (2) transportation of an omnidirectional video stream via the Internet, and (3) real-time view-dependent perspective image generation from the omnidirectional video stream. Our system is applicable to real-time telepresence in situations where the real world to be seen is far from the observation site, because the time delay from the change of the user's viewing direction to the change of the displayed image is small and does not depend on the actual distance between the two sites. Moreover, multiple users can look around from a single viewpoint in a visualized dynamic real world in different directions at the same time. In experiments, we have shown that the proposed system is useful for Internet telepresence.
The eye and visual nervous system: anatomy, physiology and toxicology.
McCaa, C S
1982-01-01
The eyes are at risk to environmental injury by direct exposure to airborne pollutants, to splash injury from chemicals and to exposure via the circulatory system to numerous drugs and bloodborne toxins. In addition, drugs or toxins can destroy vision by damaging the visual nervous system. This review describes the anatomy and physiology of the eye and visual nervous system and includes a discussion of some of the more common toxins affecting vision in man. PMID:7084144
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
2011-01-01
Perimetric complexity is a measure of the complexity of binary pictures. It is defined as the sum of inside and outside perimeters of the foreground, squared, divided by the foreground area, divided by 4π. Difficulties arise when this definition is applied to digital images composed of binary pixels. In this paper we identify these problems and propose solutions. Perimetric complexity is often used as a measure of visual complexity, in which case it should take into account the limited resolution of the visual system. We propose a measure of visual perimetric complexity that meets this requirement.
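A naive pixel-level reading of the definition can be sketched as below. It estimates the perimeter by counting 4-connected boundary edges, which overestimates diagonal contours (exactly the kind of digital-image difficulty the paper sets out to fix) and does not separate inside from outside perimeters, so treat it as a crude baseline rather than the paper's corrected measure.

```python
import numpy as np

def perimetric_complexity(img):
    # img: 2D binary array with foreground == 1.
    # Perimeter estimated as the number of 4-connected edges between
    # foreground and background pixels (a crude approximation).
    img = np.asarray(img, dtype=bool)
    perimeter = (np.count_nonzero(img[:, 1:] != img[:, :-1]) +
                 np.count_nonzero(img[1:, :] != img[:-1, :]))
    area = np.count_nonzero(img)
    return perimeter ** 2 / (4 * np.pi * area)
```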
Urakawa, Tomokazu; Ogata, Katsuya; Kimura, Takahiro; Kume, Yuko; Tobimatsu, Shozo
2015-01-01
Disambiguation of a noisy visual scene with prior knowledge is an indispensable task of the visual system. To adequately adapt to a dynamically changing visual environment full of noisy visual scenes, the implementation of knowledge-mediated disambiguation in the brain is imperative and essential for proceeding as fast as possible under the limited capacity of visual image processing. However, the temporal profile of the disambiguation process has not yet been fully elucidated in the brain. The present study attempted to determine how quickly knowledge-mediated disambiguation began to proceed along visual areas after the onset of a two-tone ambiguous image using magnetoencephalography with high temporal resolution. Using the predictive coding framework, we focused on activity reduction for the two-tone ambiguous image as an index of the implementation of disambiguation. Source analysis revealed that a significant activity reduction was observed in the lateral occipital area at approximately 120 ms after the onset of the ambiguous image, but not in preceding activity (about 115 ms) in the cuneus when participants perceptually disambiguated the ambiguous image with prior knowledge. These results suggested that knowledge-mediated disambiguation may be implemented as early as approximately 120 ms following an ambiguous visual scene, at least in the lateral occipital area, and provided an insight into the temporal profile of the disambiguation process of a noisy visual scene with prior knowledge. © 2014 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Barnett, Barry S.; Bovik, Alan C.
1995-04-01
This paper presents a real-time full-motion video conferencing system based on the Visual Pattern Image Sequence Coding (VPISC) software codec. The prototype system hardware is comprised of two personal computers, two camcorders, two frame grabbers, and an ethernet connection. The prototype system software has a simple structure. It runs under the Disk Operating System, and includes a user interface, a video I/O interface, an event-driven network interface, and a free-running or frame-synchronous video codec that also acts as the controller for the video and network interfaces. Two video coders have been tested in this system. Simple implementations of Visual Pattern Image Coding and VPISC have both proven to support full-motion video conferencing with good visual quality. Future work will concentrate on expanding this prototype to support the motion-compensated version of VPISC, as well as encompassing point-to-point modem I/O and multiple network protocols. The application will be ported to multiple hardware platforms and operating systems. The motivation for developing this prototype system is to demonstrate the practicality of software-based real-time video codecs. Furthermore, software video codecs are not only cheaper, but are more flexible system solutions, because they enable different computer platforms to exchange encoded video information without requiring on-board protocol-compatible video codec hardware. Software-based solutions enable true low-cost video conferencing that fits the `open systems' model of interoperability that is so important for building portable hardware and software applications.
NASA Astrophysics Data System (ADS)
Tian, Chao; Zhang, Wei; Nguyen, Van Phuc; Huang, Ziyi; Wang, Xueding; Paulus, Yannis M.
2018-02-01
Currently available clinical retinal imaging techniques have limitations, including limited depth of penetration or the requirement for invasive injection of exogenous contrast agents. Here, we developed a novel multimodal imaging system for high-speed, high-resolution retinal imaging of larger animals, such as rabbits. The system integrates three state-of-the-art imaging modalities: photoacoustic microscopy (PAM), optical coherence tomography (OCT), and fluorescence microscopy (FM). In vivo experimental results on rabbit eyes show that the PAM is able to visualize laser-induced retinal burns and distinguish individual eye blood vessels using a laser exposure dose of 80 nJ, which is well below the American National Standards Institute (ANSI) safety limit of 160 nJ. The OCT can discern different retinal layers and visualize laser burns and choroidal detachments. The novel multi-modal imaging platform holds great promise in ophthalmic imaging.
NASA Astrophysics Data System (ADS)
Rodgers, Jessica R.; Surry, Kathleen; D'Souza, David; Leung, Eric; Fenster, Aaron
2017-03-01
Treatment for gynaecological cancers often includes brachytherapy; in particular, in high-dose-rate (HDR) interstitial brachytherapy, hollow needles are inserted into the tumour and surrounding area through a template in order to deliver the radiation dose. Currently, there is no standard modality for visualizing needles intra-operatively, despite the need for precise needle placement in order to deliver the optimal dose and avoid nearby organs, including the bladder and rectum. While three-dimensional (3D) transrectal ultrasound (TRUS) imaging has been proposed for 3D intra-operative needle guidance, anterior needles tend to be obscured by shadowing created by the template's vaginal cylinder. We have developed a 360-degree 3D transvaginal ultrasound (TVUS) system that uses a conventional two-dimensional side-fire TRUS probe rotated inside a hollow vaginal cylinder made from a sonolucent plastic (TPX). The system was validated using grid and sphere phantoms in order to test the geometric accuracy of the distance and volumetric measurements in the reconstructed image. To test the potential for visualizing needles, an agar phantom mimicking the geometry of the female pelvis was used. Needles were inserted into the phantom and then imaged using the 3D TVUS system. The needle trajectories and tip positions in the 3D TVUS scan were compared to their expected values and the needle tracks visualized in magnetic resonance images. Based on this initial study, 360-degree 3D TVUS imaging through a sonolucent vaginal cylinder is a feasible technique for intra-operatively visualizing needles during HDR interstitial gynaecological brachytherapy.
[Design of visualized medical images network and web platform based on MeVisLab].
Xiang, Jun; Ye, Qing; Yuan, Xun
2017-04-01
With the trend of the development of "Internet +", further requirements for the mobility of medical images have arisen in the medical field. In view of this demand, this paper presents a web-based visual medical imaging platform. First, the feasibility and key technical points of web-based medical imaging are analyzed. CT (Computed Tomography) or MRI (Magnetic Resonance Imaging) images are reconstructed three-dimensionally with MeVisLab and packaged as X3D (Extensible 3D Graphics) files. Then a B/S (Browser/Server) system specially designed for 3D images is built using HTML5 and a WebGL rendering-engine library, which parses and renders the X3D image files. The results of this study show that the platform is suitable for multiple operating systems, realizing platform independence and mobility of medical image data. The future development of such medical imaging platforms is also discussed: web application technology will not only promote the sharing of medical image data, but also facilitate image-based remote medical consultations and distance learning.
Visual Communications And Image Processing
NASA Astrophysics Data System (ADS)
Hsing, T. Russell; Tzou, Kou-Hu
1989-07-01
This special issue on Visual Communications and Image Processing contains 14 papers that cover a wide spectrum in this fast growing area. For the past few decades, researchers and scientists have devoted their efforts to these fields. Through this long-lasting devotion, we witness today the growing popularity of low-bit-rate video as a convenient tool for visual communication. We also see the integration of high-quality video into broadband digital networks. Today, with more sophisticated processing, clearer and sharper pictures are being restored from blurring and noise. Also, thanks to the advances in digital image processing, even a PC-based system can be built to recognize highly complicated Chinese characters at the speed of 300 characters per minute. This special issue can be viewed as a milestone of visual communications and image processing on its journey to eternity. It presents some overviews on advanced topics as well as some new developments in specific subjects.
NASA Astrophysics Data System (ADS)
Yamamoto, Shoji; Hosokawa, Natsumi; Yokoya, Mayu; Tsumura, Norimichi
2016-12-01
In this paper, we investigated the consistency of visual perception for the change of reflection images in an augmented reality setting. Reflection images with distortion and magnification were generated by changing the capture position of the environment map. Observers evaluated the distortion and magnification in reflection images where the reflected objects were arranged symmetrically or asymmetrically. Our results confirmed that the observers' visual perception was more sensitive to changes in distortion than in magnification in the reflection images. Moreover, the asymmetrical arrangement of reflected objects effectively expands the acceptable range of distortion compared with the symmetrical arrangement.
NASA Astrophysics Data System (ADS)
Yang, Guiyan; Wang, Qingyan; Liu, Chen; Wang, Xiaobin; Fan, Shuxiang; Huang, Wenqian
2018-07-01
Rapid and visual detection of the chemical compositions of plant seeds is important but difficult for a traditional seed quality analysis system. In this study, a custom-designed line-scan Raman hyperspectral imaging system was applied to detecting and displaying the main chemical compositions in a heterogeneous maize seed. Raman hyperspectral images collected from the endosperm and embryo of maize seeds were acquired and preprocessed by a Savitzky-Golay (SG) filter and adaptive iteratively reweighted Penalized Least Squares (airPLS). Three varieties of maize seeds were analyzed, and the characteristics of the spectral and spatial information were extracted from each hyperspectral image. The Raman characteristic peaks, identified at 477, 1443, 1522, 1596 and 1654 cm-1 within the 380-1800 cm-1 spectral range, were related to corn starch, a mixture of oil and starch, zeaxanthin, lignin and oil in maize seeds, respectively. Each single-band image corresponding to a characteristic band successfully characterized the spatial distribution of the corresponding chemical composition in a seed. The embryo was distinguished from the endosperm by band operations on the single-band images at 477, 1443, and 1596 cm-1 for each variety. The results show that the Raman hyperspectral imaging system could be used for on-line quality control of maize seeds based on rapid and visual detection of their chemical compositions.
Tele-transmission of stereoscopic images of the optic nerve head in glaucoma via Internet.
Bergua, Antonio; Mardin, Christian Y; Horn, Folkert K
2009-06-01
The objective was to describe an inexpensive system to visualize stereoscopic photographs of the optic nerve head on computer displays and to transmit such images via the Internet for collaborative research or remote clinical diagnosis in glaucoma. Stereoscopic images of glaucoma patients were digitized and stored in a file format (joint photographic stereoimage [jps]) containing all three-dimensional information for both eyes on an Internet Web site (www.trizax.com). The size of the jps files was between 0.4 and 1.4 MB (corresponding to a diagonal stereo image size between 900 and 1400 pixels), suitable for Internet protocols. A conventional personal computer system equipped with wireless stereoscopic LCD shutter glasses and a CRT monitor with a high refresh rate (120 Hz) can be used to obtain flicker-free stereo visualization of true-color images with high resolution. Modern thin-film transistor LCD displays in combination with inexpensive red-cyan goggles achieve stereoscopic visualization with the same resolution but reduced color quality and contrast. The primary aim of our study, transmitting stereoscopic images via the Internet, was met. Additionally, we found that with both stereoscopic visualization techniques, cup depth, neuroretinal rim shape, and the slope of the inner wall of the optic nerve head can be qualitatively better perceived and interpreted than with monoscopic images. This study demonstrates high-quality and low-cost Internet transmission of stereoscopic images of the optic nerve head from glaucoma patients. The technique allows exchange of stereoscopic images and can be applied to tele-diagnosis and glaucoma research.
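The red-cyan viewing path the authors describe can be sketched as follows: split a side-by-side stereo frame (the layout a .jps file stores, commonly in cross-eyed order with the right view on the left half; that ordering is our assumption) and recombine the color channels.

```python
import numpy as np

def anaglyph_from_jps(sbs):
    # sbs: H x (2W) x 3 RGB array decoded from a .jps stereo JPEG, assumed
    # cross-eyed: right-eye view on the left half, left-eye view on the right.
    h, w2, _ = sbs.shape
    w = w2 // 2
    right, left = sbs[:, :w], sbs[:, w:]
    out = np.empty((h, w, 3), dtype=sbs.dtype)
    out[..., 0] = left[..., 0]     # red channel from the left-eye view
    out[..., 1:] = right[..., 1:]  # green and blue (cyan) from the right-eye view
    return out
```

This channel split also illustrates why the authors report reduced color quality with red-cyan goggles: each eye receives only part of the spectrum.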
Demons registration for in vivo and deformable laser scanning confocal endomicroscopy.
Chiew, Wei-Ming; Lin, Feng; Seah, Hock Soon
2017-09-01
A critical effect found in noninvasive in vivo endomicroscopic imaging modalities is image distortions due to sporadic movement exhibited by living organisms. In three-dimensional confocal imaging, this effect results in a dataset that is tilted across deeper slices. Apart from that, the sequential flow of the imaging-processing pipeline restricts real-time adjustments due to the unavailability of information obtainable only from subsequent stages. To solve these problems, we propose an approach to render Demons-registered datasets as they are being captured, focusing on the coupling between registration and visualization. To improve the acquisition process, we also propose a real-time visual analytics tool, which complements the imaging pipeline and the Demons registration pipeline with useful visual indicators to provide real-time feedback for immediate adjustments. We highlight the problem of deformation within the visualization pipeline for object-ordered and image-ordered rendering. Visualizations of critical information including registration forces and partial renderings of the captured data are also presented in the analytics system. We demonstrate the advantages of the algorithmic design through experimental results with both synthetically deformed datasets and actual in vivo, time-lapse tissue datasets expressing natural deformations. Remarkably, this algorithm design is for embedded implementation in intelligent biomedical imaging instrumentation with customizable circuitry. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Image gathering and digital restoration for fidelity and visual quality
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.; Alter-Gartenberg, Rachel; Rahman, Zia-Ur
1991-01-01
The fidelity and resolution of the traditional Wiener restorations given in the prevalent digital processing literature can be significantly improved when the transformations between the continuous and discrete representations in image gathering and display are accounted for. However, the visual quality of these improved restorations is also more sensitive to the defects caused by aliasing artifacts, colored noise, and ringing near sharp edges. In this paper, these visual defects are characterized, and methods for suppressing them are presented. It is demonstrated how the visual quality of fidelity-maximized images can be improved when (1) the image-gathering system is specifically designed to enhance the performance of the image-restoration algorithm, and (2) the Wiener filter is combined with interactive Gaussian smoothing, synthetic high edge enhancement, and nonlinear tone-scale transformation. The nonlinear transformation is used primarily to enhance the spatial details that are often obscured when the normally wide dynamic range of natural radiance fields is compressed into the relatively narrow dynamic range of film and other displays.
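For orientation, a bare frequency-domain Wiener restoration looks like the sketch below; the paper's contribution is precisely what this sketch omits, namely accounting for the continuous/discrete transformations of image gathering and display and the interactive enhancements listed above. The scalar noise-to-signal ratio is a simplifying assumption.

```python
import numpy as np

def wiener_restore(degraded, otf, nsr):
    # Classic Wiener deconvolution: W = H* / (|H|^2 + NSR), applied in the
    # frequency domain. otf is the system transfer function sampled on the
    # image grid; nsr is a scalar noise-to-signal power ratio.
    G = np.fft.fft2(degraded)
    W = np.conj(otf) / (np.abs(otf) ** 2 + nsr)
    return np.real(np.fft.ifft2(W * G))
```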
NASA Astrophysics Data System (ADS)
Krueger, Evan; Messier, Erik; Linte, Cristian A.; Diaz, Gabriel
2017-03-01
Recent advances in medical image acquisition allow for the reconstruction of anatomies with 3D, 4D, and 5D renderings. Nevertheless, standard anatomical and medical data visualization still relies heavily on traditional 2D didactic tools (i.e., textbooks and slides), which restrict the presentation of image data to a 2D slice format. While these approaches have their merits beyond being cost effective and easy to disseminate, anatomy is inherently three-dimensional. When 2D visualizations are used to illustrate more complex morphologies, important interactions between structures can be missed. In practice, such as in the planning and execution of surgical interventions, professionals require intricate knowledge of anatomical complexities, which can be more clearly communicated and understood through intuitive interaction with 3D volumetric datasets, such as those extracted from high-resolution CT or MRI scans. Open source, high quality, 3D medical imaging datasets are freely available, and with the emerging popularity of 3D display technologies, affordable and consistent 3D anatomical visualizations can be created. In this study we describe the design, implementation, and evaluation of one such interactive, stereoscopic visualization paradigm for human anatomy extracted from 3D medical images. A stereoscopic display was created by projecting the scene onto the lab floor using sequential-frame stereo projection and viewed through active shutter glasses. By incorporating a PhaseSpace motion tracking system, a single viewer can navigate an augmented reality environment and directly manipulate virtual objects in 3D. While this paradigm is sufficiently versatile to enable a wide variety of applications in need of 3D visualization, we designed our study to work as an interactive game, which allows users to explore the anatomy of various organs and systems. The system presents medical imaging data in three dimensions and allows direct physical interaction and manipulation by the viewer. This should provide numerous benefits over traditional 2D display and interaction modalities, and in our analysis we aim to quantify and qualify users' visual and motor interactions with the virtual environment when employing this interactive display as a 3D didactic tool.
Real-time Magnetic Resonance Imaging Guidance for Cardiovascular Procedures
Horvath, Keith A.; Li, Ming; Mazilu, Dumitru; Guttman, Michael A.; McVeigh, Elliot R.
2008-01-01
Magnetic resonance imaging (MRI) of the cardiovascular system has proven to be an invaluable diagnostic tool. Given the ability to allow for real-time imaging, MRI guidance of intraoperative procedures can provide superb visualization which can facilitate a variety of interventions and minimize the trauma of the operations as well. In addition to the anatomic detail, MRI can provide intraoperative assessment of organ and device function. Instruments and devices can be marked to enhance visualization and tracking. All of which is an advance over standard x-ray or ultrasonic imaging. PMID:18395633
NASA Astrophysics Data System (ADS)
Kuvich, Gary
2003-08-01
Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding, i.e., an interpretation of visual information in terms of such knowledge models. The human brain is found to emulate knowledge structures in the form of network-symbolic models, which marks an important shift of paradigm in our knowledge about the brain: from neural networks to "cortical software". Symbols, predicates and grammars naturally emerge in such active multilevel hierarchical networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type decision structure created via multilevel hierarchical compression of visual information. Mid-level vision processes like clustering, perceptual grouping, and separation of figure from ground are special kinds of graph/network transformations. They convert low-level image structure into a set of more abstract structures, which represent objects and the visual scene, making them easy to analyze with higher-level knowledge structures. Higher-level vision phenomena are the results of such analysis. Composition of network-symbolic models works similarly to frames and agents, combining learning, classification, and analogy together with higher-level model-based reasoning into a single framework. Such models do not require supercomputers. Based on such principles, and using methods of computational intelligence, an image understanding system can convert images into network-symbolic knowledge models and effectively resolve uncertainty and ambiguity, providing a unifying representation for perception and cognition. This allows the creation of new intelligent computer vision systems for the robotic and defense industries.
NASA Astrophysics Data System (ADS)
Maurer, Calvin R., Jr.; Sauer, Frank; Hu, Bo; Bascle, Benedicte; Geiger, Bernhard; Wenzel, Fabian; Recchi, Filippo; Rohlfing, Torsten; Brown, Christopher R.; Bakos, Robert J.; Maciunas, Robert J.; Bani-Hashemi, Ali R.
2001-05-01
We are developing a video see-through head-mounted display (HMD) augmented reality (AR) system for image-guided neurosurgical planning and navigation. The surgeon wears a HMD that presents him with the augmented stereo view. The HMD is custom fitted with two miniature color video cameras that capture a stereo view of the real-world scene. We are concentrating specifically at this point on cranial neurosurgery, so the images will be of the patient's head. A third video camera, operating in the near infrared, is also attached to the HMD and is used for head tracking. The pose (i.e., position and orientation) of the HMD is used to determine where to overlay anatomic structures segmented from preoperative tomographic images (e.g., CT, MR) on the intraoperative video images. Two SGI 540 Visual Workstation computers process the three video streams and render the augmented stereo views for display on the HMD. The AR system operates in real time at 30 frames/sec with a temporal latency of about three frames (100 ms) and zero relative lag between the virtual objects and the real-world scene. For an initial evaluation of the system, we created AR images using a head phantom with actual internal anatomic structures (segmented from CT and MR scans of a patient) realistically positioned inside the phantom. When using shaded renderings, many users had difficulty appreciating overlaid brain structures as being inside the head. When using wire frames, and texture-mapped dot patterns, most users correctly visualized brain anatomy as being internal and could generally appreciate spatial relationships among various objects. The 3D perception of these structures is based on both stereoscopic depth cues and kinetic depth cues, with the user looking at the head phantom from varying positions. The perception of the augmented visualization is natural and convincing. The brain structures appear rigidly anchored in the head, manifesting little or no apparent swimming or jitter. The initial evaluation of the system is encouraging, and we believe that AR visualization might become an important tool for image-guided neurosurgical planning and navigation.
Hadoop-based implementation of processing medical diagnostic records for visual patient system
NASA Astrophysics Data System (ADS)
Yang, Yuanyuan; Shi, Liehang; Xie, Zhe; Zhang, Jianguo
2018-03-01
We introduced the Visual Patient (VP) concept and method for visually representing and indexing patient imaging diagnostic records (IDR) at last year's SPIE Medical Imaging conference (SPIE MI 2017); VP enables a doctor to review a large amount of IDR for a patient in a limited appointed time slot. Here, we present a new approach to the data processing architecture of the VP system (VPS), which acquires, processes and stores various kinds of IDR to build a VP instance for each patient in a hospital environment, based on a Hadoop distributed processing structure. We designed this system architecture, called the Medical Information Processing System (MIPS), as a combination of the Hadoop batch processing architecture and the Storm stream processing architecture. The MIPS implements parallel processing of various kinds of clinical data with high efficiency, coming from disparate hospital information systems such as PACS, RIS, LIS and HIS.
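In a Hadoop batch layer of the kind described, grouping diagnostic records by patient might be expressed as a Streaming-style mapper like the sketch below; the tab-separated record layout is our assumption, not the MIPS schema.

```python
import sys

# Hadoop Streaming-style mapper sketch: emit (patient_id, record) pairs so the
# reduce phase receives all imaging diagnostic records of one patient together
# and can assemble that patient's Visual Patient instance.
for line in sys.stdin:
    fields = line.rstrip("\n").split("\t")
    if len(fields) < 2:
        continue                      # skip malformed records
    patient_id, record = fields[0], "\t".join(fields[1:])
    print(f"{patient_id}\t{record}")
```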
Multi-mode Intravascular RF Coil for MRI-guided Interventions
Kurpad, Krishna N.; Unal, Orhan
2011-01-01
Purpose: To demonstrate the feasibility of using a single intravascular RF probe connected to the external MRI system via a single coaxial cable to perform active tip tracking, catheter visualization, and high-SNR intravascular imaging. Materials and Methods: A multi-mode intravascular RF coil was constructed on a 6F balloon catheter and interfaced to a 1.5T MRI scanner via a decoupling circuit. Bench measurements of coil impedances were followed by imaging experiments in saline and phantoms. Results: The multi-mode coil behaves as an inductively-coupled transmit coil. A forward-looking capability of 6 mm is measured. A greater than 3-fold increase in SNR compared to conventional imaging using an optimized external coil is demonstrated. Simultaneous active tip tracking and catheter visualization is demonstrated. Conclusions: It is feasible to perform 1) active tip tracking, 2) catheter visualization, and 3) high-SNR imaging using a single multi-mode intravascular RF coil connected to the external system via a single coaxial cable. PMID:21448969
Bio-inspired approach to multistage image processing
NASA Astrophysics Data System (ADS)
Timchenko, Leonid I.; Pavlov, Sergii V.; Kokryatskaya, Natalia I.; Poplavska, Anna A.; Kobylyanska, Iryna M.; Burdenyuk, Iryna I.; Wójcik, Waldemar; Uvaysova, Svetlana; Orazbekov, Zhassulan; Kashaganova, Gulzhan
2017-08-01
Multistage integration of visual information in the brain allows people to respond quickly to most significant stimuli while preserving the ability to recognize small details in the image. Implementation of this principle in technical systems can lead to more efficient processing procedures. The multistage approach to image processing, described in this paper, comprises main types of cortical multistage convergence. One of these types occurs within each visual pathway and the other between the pathways. This approach maps input images into a flexible hierarchy which reflects the complexity of the image data. The procedures of temporal image decomposition and hierarchy formation are described in mathematical terms. The multistage system highlights spatial regularities, which are passed through a number of transformational levels to generate a coded representation of the image which encapsulates, in a compact manner, structure on different hierarchical levels in the image. At each processing stage a single output result is computed to allow a very quick response from the system. The result is represented as an activity pattern, which can be compared with previously computed patterns on the basis of the closest match.
Enhancing online timeline visualizations with events and images
NASA Astrophysics Data System (ADS)
Pandya, Abhishek; Mulye, Aniket; Teoh, Soon Tee
2011-01-01
The use of timeline to visualize time-series data is one of the most intuitive and commonly used methods, and is used for widely-used applications such as stock market data visualization, and tracking of poll data of election candidates over time. While useful, these timeline visualizations are lacking in contextual information of events which are related or cause changes in the data. We have developed a system that enhances timeline visualization with display of relevant news events and their corresponding images, so that users can not only see the changes in the data, but also understand the reasons behind the changes. We have also conducted a user study to test the effectiveness of our ideas.
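The core idea, anchoring event labels to the points of a time series, can be sketched with matplotlib as below; the poll numbers and event names are invented for illustration.

```python
import matplotlib.pyplot as plt

# Toy timeline: a poll series annotated with the events behind its changes.
days = list(range(10))
support = [42, 43, 41, 40, 46, 47, 47, 45, 44, 48]
events = {4: "debate", 9: "endorsement"}   # hypothetical day -> event labels

fig, ax = plt.subplots()
ax.plot(days, support, marker="o")
for day, label in events.items():
    ax.annotate(label, xy=(day, support[day]),
                xytext=(day, support[day] + 2),
                arrowprops=dict(arrowstyle="->"))
ax.set_xlabel("day")
ax.set_ylabel("support (%)")
plt.show()
```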
Micro-CT images reconstruction and 3D visualization for small animal studying
NASA Astrophysics Data System (ADS)
Gong, Hui; Liu, Qian; Zhong, Aijun; Ju, Shan; Fang, Quan; Fang, Zheng
2005-01-01
A small-animal x-ray micro computed tomography (micro-CT) system has been constructed to screen laboratory small animals and organs. The micro-CT system consists of dual fiber-optic taper-coupled CCD detectors with a field of view of 25x50 mm2, a microfocus x-ray source, and a rotational subject holder. For accurate localization of the rotation center, the coincidence between the axis of rotation and the center of the image was studied by calibration with a polymethylmethacrylate cylinder. Feldkamp's filtered back-projection cone-beam algorithm is adopted for three-dimensional reconstruction, because the effective cone-beam angle of the micro-CT system is 5.67°. A 200x1024x1024 matrix of micro-CT data is obtained with a magnification of 1.77 and a pixel size of 31x31 μm2. In our reconstruction software, the output image size of the micro-CT slice data, the magnification factor and the sample rotation step can be modified to balance computational efficiency against the reconstruction region. The reconstructed image matrix data are processed and visualized with the Visualization Toolkit (VTK). Data-parallel VTK processing is used for surface rendering of the reconstructed data in order to improve computing speed. The computing time for processing a 512x512x512 matrix dataset is about 1/20 that of the serial program when 30 CPUs are used. The voxel size is 54x54x108 μm3. Reconstruction and 3-D visualization images of a laboratory rat ear are presented.
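Surface rendering of such reconstructed volumes with VTK follows a standard pipeline; the sketch below is a minimal serial version (the file name and iso-value are placeholders, and it omits the data parallelism the authors used to accelerate rendering).

```python
import vtk

# Minimal VTK surface-rendering pipeline for a reconstructed micro-CT volume.
reader = vtk.vtkMetaImageReader()
reader.SetFileName("recon_volume.mhd")    # placeholder file name

surface = vtk.vtkMarchingCubes()
surface.SetInputConnection(reader.GetOutputPort())
surface.SetValue(0, 500)                  # placeholder iso-value for tissue

mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(surface.GetOutputPort())
mapper.ScalarVisibilityOff()

actor = vtk.vtkActor()
actor.SetMapper(mapper)

renderer = vtk.vtkRenderer()
renderer.AddActor(actor)

window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)

interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)

window.Render()
interactor.Start()
```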
Fine-grained visual marine vessel classification for coastal surveillance and defense applications
NASA Astrophysics Data System (ADS)
Solmaz, Berkan; Gundogdu, Erhan; Karaman, Kaan; Yücesoy, Veysel; Koç, Aykut
2017-10-01
The need for automated visual content analysis has substantially increased due to the presence of large numbers of images captured by surveillance cameras. With a focus on the development of practical methods for extracting effective visual data representations, deep neural network based representations have received great attention due to their success in the visual categorization of generic images. For fine-grained image categorization, a closely related yet more challenging problem due to high visual similarity within subgroups, diverse applications have been developed, such as classifying images of vehicles, birds, food and plants. Here, we propose the use of deep neural network based representations for categorizing and identifying marine vessels for defense and security applications. First, we gather a large number of marine vessel images via online sources, grouping them into four coarse categories: naval, civil, commercial and service vessels. Next, we subgroup naval vessels into fine categories such as corvettes, frigates and submarines. For distinguishing images, we extract state-of-the-art deep visual representations and train support vector machines. Furthermore, we fine-tune the deep representations for marine vessel images. Experiments address two scenarios, classification and verification of naval marine vessels. The classification experiment aims at coarse categorization as well as learning models of fine categories. The verification experiment involves identifying specific naval vessels by revealing whether a pair of images belongs to the same vessel, with the help of the learnt deep representations. Given the promising performance obtained, we believe the presented capabilities will be essential components of future coastal and on-board surveillance systems.
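The train-an-SVM-on-deep-features stage reads roughly as below; extract_features is a hypothetical stand-in for the CNN descriptor (not specified here), and the linear kernel and C value are our assumptions rather than the paper's settings.

```python
from sklearn.svm import LinearSVC

def train_vessel_classifier(images, labels, extract_features):
    # extract_features: hypothetical callable mapping an image to a fixed-size
    # deep feature vector (e.g., CNN activations computed off-line).
    X = [extract_features(img) for img in images]
    clf = LinearSVC(C=1.0)     # linear SVM over the deep representations
    clf.fit(X, labels)
    return clf
```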
Hoffmann, M B; Kaule, F; Grzeschik, R; Behrens-Baumann, W; Wolynski, B
2011-07-01
Since its initial introduction in the mid-1990s, retinotopic mapping of the human visual cortex, based on functional magnetic resonance imaging (fMRI), has contributed greatly to our understanding of the human visual system. Multiple cortical visual field representations have been demonstrated and thus numerous visual areas identified. The organisation of specific areas has been detailed and the impact of pathophysiologies of the visual system on the cortical organisation uncovered. These results are based on investigations at a magnetic field strength of 3 Tesla or less. In a field-strength comparison between 3 and 7 Tesla, it was demonstrated that retinotopic mapping benefits from a magnetic field strength of 7 Tesla. Specifically, the visual areas can be mapped with high spatial resolution for a detailed analysis of the visual field maps. Applications of fMRI-based retinotopic mapping in ophthalmological research hold promise to further our understanding of plasticity in the human visual cortex. This is highlighted by pioneering studies in patients with macular dysfunction or misrouted optic nerves. © Georg Thieme Verlag KG Stuttgart · New York.
LONI visualization environment.
Dinov, Ivo D; Valentino, Daniel; Shin, Bae Cheol; Konstantinidis, Fotios; Hu, Guogang; MacKenzie-Graham, Allan; Lee, Erh-Fang; Shattuck, David; Ma, Jeff; Schwartz, Craig; Toga, Arthur W
2006-06-01
Over the past decade, the use of informatics to solve complex neuroscientific problems has increased dramatically. Many of these research endeavors involve examining large amounts of imaging, behavioral, genetic, neurobiological, and neuropsychiatric data. Superimposing, processing, visualizing, or interpreting such a complex cohort of datasets frequently becomes a challenge. We developed a new software environment that allows investigators to integrate multimodal imaging data, hierarchical brain ontology systems, on-line genetic and phylogenic databases, and 3D virtual data reconstruction models. The Laboratory of Neuro Imaging visualization environment (LONI Viz) consists of the following components: a sectional viewer for imaging data, an interactive 3D display for surface and volume rendering of imaging data, a brain ontology viewer, and an external database query system. The synchronization of all components according to stereotaxic coordinates, region name, hierarchical ontology, and genetic labels is achieved via a comprehensive BrainMapper functionality, which directly maps between position, structure name, database, and functional connectivity information. This environment is freely available, portable, and extensible, and may prove very useful for neurobiologists, neurogeneticists, brain mappers, and other clinical, pedagogical, and research endeavors.
NASA Astrophysics Data System (ADS)
Nishiyama, Misaki; Namita, Takeshi; Kondo, Kengo; Yamakawa, Makoto; Shiina, Tsuyoshi
2018-02-01
For early diagnosis of rheumatoid arthritis (RA), it is important to visualize its potential marker, vascularization in the synovial membrane of the finger joints. Photoacoustic (PA) imaging, which can image blood vessels at high contrast and resolution, is expected to be a potential modality for earlier diagnosis of RA. In previous studies of PA finger imaging, different acoustic schemes such as linear or arc-shaped arrays have been utilized, but these have limited detection views, rendering reconstruction inaccurate, and most of them require rotational detection. We are developing a photoacoustic system for finger vascular imaging using a ring-shaped array ultrasound transducer. By designing the ring array based on simulations and phantom experiments, we have created a system that can image multiple objects of different diameters and has the potential to image small objects 0.1-0.5 mm in diameter at accurate positions by providing PA and ultrasound echo images simultaneously. In addition, we determined that the full width at half maximum (FWHM) in the slice direction corresponded to that of the simulation. In the future, this system may visualize the 3-D vascularization of RA patients' fingers.
Azorin-Lopez, Jorge; Fuster-Guillo, Andres; Saval-Calvo, Marcelo; Mora-Mora, Higinio; Garcia-Chamizo, Juan Manuel
2017-01-01
Visual information is a very well known input from different kinds of sensors. However, most perception problems are modeled and tackled individually. It is necessary to provide a general imaging model that allows us to parametrize different input systems as well as their problems and possible solutions. In this paper, we present an active vision model that considers the imaging system as a whole (including camera, lighting system, and the object to be perceived) in order to propose solutions to the perception problems that automated visual systems present. As a concrete case study, we instantiate the model in a real and still challenging application: automated visual inspection. It is one of the most widely used quality control systems for detecting defects on manufactured objects, but it presents problems for specular products. We model these perception problems taking into account the environmental conditions and camera parameters that allow a system to properly perceive specific object characteristics in order to determine defects on surfaces. The validation of the model was carried out using simulations, providing an efficient way to perform a large set of tests (different environment conditions and camera parameters) as a step prior to experimentation in real manufacturing environments, which are more complex in terms of instrumentation and more expensive. The results prove the success of the model application, adjusting scale, viewpoint and lighting conditions to detect structural and color defects on specular surfaces. PMID:28640211
Integrating advanced visualization technology into the planetary Geoscience workflow
NASA Astrophysics Data System (ADS)
Huffman, John; Forsberg, Andrew; Loomis, Andrew; Head, James; Dickson, James; Fassett, Caleb
2011-09-01
Recent advances in computer visualization have allowed us to develop new tools for analyzing the data gathered during planetary missions, which is important, since these data sets have grown exponentially in recent years to tens of terabytes in size. As part of the Advanced Visualization in Solar System Exploration and Research (ADVISER) project, we utilize several advanced visualization techniques created specifically with planetary image data in mind. The Geoviewer application allows real-time active stereo display of images, which in aggregate have billions of pixels. The ADVISER desktop application platform allows fast three-dimensional visualization of planetary images overlain on digital terrain models. Both applications include tools for easy data ingest and real-time analysis in a programmatic manner. Incorporation of these tools into our everyday scientific workflow has proved important for scientific analysis, discussion, and publication, and enabled effective and exciting educational activities for students from high school through graduate school.
Teaching physics and understanding infrared thermal imaging
NASA Astrophysics Data System (ADS)
Vollmer, Michael; Möllmann, Klaus-Peter
2017-08-01
Infrared thermal imaging is a very rapidly evolving field. The latest trends are small smartphone IR camera accessories, making infrared imaging a widespread and well-known consumer product. Applications range from medical diagnosis methods via building inspections and industrial predictive maintenance to visualization in the natural sciences. Infrared cameras allow qualitative imaging and visualization but also quantitative measurements of the surface temperatures of objects. On the one hand, they are a particularly suitable tool for teaching optics, radiation physics and many selected topics in different fields of physics; on the other hand, there is an increasing need for engineers and physicists who understand these complex state-of-the-art photonics systems. Therefore students must also learn and understand the physics underlying these systems.
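The quantitative side rests on blackbody radiation physics: a thermal camera infers surface temperature from in-band radiance. A minimal sketch of Planck's law, the starting point for such measurements, is below; emissivity and atmospheric corrections, essential in practice, are omitted.

```python
import numpy as np

h, c, k_B = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

def planck_radiance(wavelength, T):
    # Blackbody spectral radiance in W / (m^2 sr m) at wavelength (m) and
    # temperature T (K): B = 2 h c^2 / lambda^5 / (exp(h c / (lambda k T)) - 1).
    x = h * c / (wavelength * k_B * T)
    return 2 * h * c**2 / wavelength**5 / np.expm1(x)

# Example: radiance at 10 um, a typical long-wave IR band, for a 300 K surface.
print(planck_radiance(10e-6, 300.0))
```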
Human visual system consistent quality assessment for remote sensing image fusion
NASA Astrophysics Data System (ADS)
Liu, Jun; Huang, Junyi; Liu, Shuguang; Li, Huali; Zhou, Qiming; Liu, Junchen
2015-07-01
Quality assessment for image fusion is essential for remote sensing application. Generally used indices require a high spatial resolution multispectral (MS) image for reference, which is not always readily available. Meanwhile, the fusion quality assessments using these indices may not be consistent with the Human Visual System (HVS). As an attempt to overcome this requirement and inconsistency, this paper proposes an HVS-consistent image fusion quality assessment index at the highest resolution without a reference MS image using Gaussian Scale Space (GSS) technology that could simulate the HVS. The spatial details and spectral information of original and fused images are first separated in GSS, and the qualities are evaluated using the proposed spatial and spectral quality index respectively. The overall quality is determined without a reference MS image by a combination of the proposed two indices. Experimental results on various remote sensing images indicate that the proposed index is more consistent with HVS evaluation compared with other widely used indices that may or may not require reference images.
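The separation step can be approximated with one round of scale-space filtering: a Gaussian-blurred base layer stands for the spectral (low-frequency) content and the residual for spatial detail. The sketch below is a single-scale simplification of the paper's GSS machinery, with an arbitrary sigma.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_spectral_spatial(img, sigma=2.0):
    img = np.asarray(img, dtype=float)
    base = gaussian_filter(img, sigma)   # coarse layer: spectral information
    detail = img - base                  # residual: spatial detail
    return base, detail
```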
Different source image fusion based on FPGA
NASA Astrophysics Data System (ADS)
Luo, Xiao; Piao, Yan
2016-03-01
Video image fusion combines video obtained by different image sensors so that the sensors complement each other, yielding video that is rich in information and suited to the human visual system. Infrared cameras have strong penetrating power in harsh environments such as smoke, fog and low light, but they capture poor image detail and do not match the human visual system. Visible-light imaging provides detailed, high-resolution images suited to the visual system, but the visible image is easily affected by the external environment. Fusing infrared and visible video involves algorithms of high complexity and computational load, with heavy memory use and high clock-rate requirements; most implementations to date are in software (e.g., C++ or C), and few are based on hardware platforms. In this paper, based on the imaging characteristics of infrared and visible images, software and hardware are combined: the registration parameters are obtained in software with MATLAB, and the gray-level weighted average method is implemented on the hardware platform to perform the fusion, so that the fused image effectively increases the amount of information acquired.
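The gray-level weighted average rule named in the abstract is, per pixel, a convex combination of the registered infrared and visible frames; a sketch follows, with the weight an arbitrary choice rather than a value from the paper.

```python
import numpy as np

def fuse_weighted(ir, vis, w=0.5):
    # ir, vis: registered 8-bit grayscale frames (numpy arrays) of equal size.
    fused = w * ir.astype(np.float32) + (1.0 - w) * vis.astype(np.float32)
    return np.clip(fused, 0, 255).astype(np.uint8)
```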
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lanekoff, Ingela T.; Heath, Brandi S.; Liyu, Andrey V.
2012-10-02
An automated platform has been developed for acquisition and visualization of mass spectrometry imaging (MSI) data using nanospray desorption electrospray ionization (nano-DESI). The new system enables robust operation of the nano-DESI imaging source over many hours. This is achieved by controlling the distance between the sample and the probe by mounting the sample holder onto an automated XYZ stage and defining the tilt of the sample plane. This approach is useful for imaging of relatively flat samples such as thin tissue sections. Custom software called MSI QuickView was developed for visualization of large data sets generated in imaging experiments. MSI QuickView enables fast visualization of the imaging data during data acquisition and detailed processing after the entire image is acquired. The performance of the system is demonstrated by imaging rat brain tissue sections. High resolution mass analysis combined with MS/MS experiments enabled identification of lipids and metabolites in the tissue section. In addition, high dynamic range and sensitivity of the technique allowed us to generate ion images of low-abundance isobaric lipids. High-spatial resolution image acquired over a small region of the tissue section revealed the spatial distribution of an abundant brain metabolite, creatine, in the white and gray matter that is consistent with the literature data obtained using magnetic resonance spectroscopy.
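Generating an ion image from line-scan data amounts to mapping, for each pixel, the summed intensity within a narrow m/z window; the sketch below assumes a simple in-memory layout and tolerance, neither taken from MSI QuickView.

```python
import numpy as np

def ion_image(scans, target_mz, tol=0.005):
    # scans[y][x] = (mz_array, intensity_array): the spectrum at pixel (x, y).
    # The image holds the summed intensity within +/- tol of target_mz.
    rows, cols = len(scans), len(scans[0])
    img = np.zeros((rows, cols))
    for y in range(rows):
        for x in range(cols):
            mz, inten = scans[y][x]
            window = np.abs(mz - target_mz) <= tol
            img[y, x] = inten[window].sum()
    return img
```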
[Use of blue and green systems of image visualization in roentgenology].
Riuduger, Iu G
2004-01-01
The main peculiarities of the two image visualization systems, related to the specific intensifying screens and radiographic films used in each, are discussed. The kinetic development characteristics of modern orthochromatic general-purpose radiographic films were studied against those of traditional films, and differences related to the radiation hardness of some intensifying screens manufactured in Russia were investigated. On the basis of the conducted analysis of the "green" system's specific features, practical advice is offered on reorienting X-ray examination rooms in Russia toward gadolinium screens and modern radiographic films.
Kikuta, Junichi; Ishii, Masaru
Bone is continually remodeled by bone-resorbing osteoclasts and bone-forming osteoblasts. Although it has long been believed that bone homeostasis is tightly regulated by communication between osteoclasts and osteoblasts, the fundamental process and dynamics have remained elusive. We originally established an advanced imaging system to visualize living bone tissues using intravital two-photon microscopy. By means of this system, we revealed the in vivo behavior of bone-resorbing osteoclasts and bone-forming osteoblasts in bone tissues. This approach facilitates investigation of cellular dynamics in the pathogenesis of musculoskeletal disorders, and would thus be useful for evaluating the efficacy of novel therapeutic agents.
Distinct Contributions of the Magnocellular and Parvocellular Visual Streams to Perceptual Selection
Denison, Rachel N.; Silver, Michael A.
2014-01-01
During binocular rivalry, conflicting images presented to the two eyes compete for perceptual dominance, but the neural basis of this competition is disputed. In interocular switch (IOS) rivalry, rival images periodically exchanged between the two eyes generate one of two types of perceptual alternation: 1) a fast, regular alternation between the images that is time-locked to the stimulus switches and has been proposed to arise from competition at lower levels of the visual processing hierarchy, or 2) a slow, irregular alternation spanning multiple stimulus switches that has been associated with higher levels of the visual system. The existence of these two types of perceptual alternation has been influential in establishing the view that rivalry may be resolved at multiple hierarchical levels of the visual system. We varied the spatial, temporal, and luminance properties of IOS rivalry gratings and found, instead, an association between fast, regular perceptual alternations and processing by the magnocellular stream and between slow, irregular alternations and processing by the parvocellular stream. The magnocellular and parvocellular streams are two early visual pathways that are specialized for the processing of motion and form, respectively. These results provide a new framework for understanding the neural substrates of binocular rivalry that emphasizes the importance of parallel visual processing streams, and not only hierarchical organization, in the perceptual resolution of ambiguities in the visual environment. PMID:21861685
Shinohara, Gen; Morita, Kiyozo; Hoshino, Masato; Ko, Yoshihiro; Tsukube, Takuro; Kaneko, Yukihiro; Morishita, Hiroyuki; Oshima, Yoshihiro; Matsuhisa, Hironori; Iwaki, Ryuma; Takahashi, Masashi; Matsuyama, Takaaki; Hashimoto, Kazuhiro; Yagi, Naoto
2016-11-01
The feasibility of synchrotron radiation-based phase-contrast computed tomography (PCCT) for visualization of the atrioventricular (AV) conduction axis in human whole heart specimens was tested using four postmortem structurally normal newborn hearts obtained at autopsy. A PCCT imaging system at the beamline BL20B2 in a SPring-8 synchrotron radiation facility was used. The PCCT imaging of the conduction system was performed with "virtual" slicing of the three-dimensional reconstructed images. For histological verification, specimens were cut into planes similar to the PCCT images, then cut into 5-μm serial sections and stained with Masson's trichrome. In PCCT images of all four of the whole hearts of newborns, the AV conduction axis was distinguished as a low-density structure, which was serially traceable from the compact node to the penetrating bundle within the central fibrous body, and to the branching bundle into the left and right bundle branches. This was verified by histological serial sectioning. This is the first demonstration that visualization of the AV conduction axis within human whole heart specimens is feasible with PCCT. © The Author(s) 2016.
Retinal Image Quality During Accommodation
López-Gil, N.; Martin, J.; Liu, T.; Bradley, A.; Díaz-Muñoz, D.; Thibos, L.
2013-01-01
Purpose We asked if retinal image quality is maximum during accommodation, or sub-optimal due to accommodative error, when subjects perform an acuity task. Methods Subjects viewed a monochromatic (552 nm), high-contrast letter target placed at various viewing distances. Wavefront aberrations of the accommodating eye were measured near the endpoint of an acuity staircase paradigm. Refractive state, defined as the optimum target vergence for maximising retinal image quality, was computed by through-focus wavefront analysis to find the power of the virtual correcting lens that maximizes visual Strehl ratio. Results Despite changes in ocular aberrations and pupil size during binocular viewing, retinal image quality and visual acuity typically remain high for all target vergences. When accommodative errors lead to sub-optimal retinal image quality, acuity and measured image quality both decline. However, the effect of accommodative errors on visual acuity is mitigated by the pupillary constriction associated with accommodation and binocular convergence, and by binocular summation of dissimilar retinal image blur. Under monocular viewing conditions some subjects displayed significant accommodative lag that reduced visual performance, an effect that was exacerbated by pharmacological dilation of the pupil. Conclusions Spurious measurement of accommodative error can be avoided when the image quality metric used to determine refractive state is compatible with the focusing criteria used by the visual system to control accommodation. Real focusing errors of the accommodating eye do not necessarily produce a reliably measurable loss of image quality or clinically significant loss of visual performance, probably because of increased depth-of-focus due to pupil constriction. When retinal image quality is close to maximum achievable (given the eye's higher-order aberrations), acuity is also near maximum. A combination of accommodative lag, reduced image quality, and reduced visual function may be a useful sign for diagnosing functionally-significant accommodative errors indicating the need for therapeutic intervention. PMID:23786386
Night vision: requirements and possible roadmap for FIR and NIR systems
NASA Astrophysics Data System (ADS)
Källhammer, Jan-Erik
2006-04-01
A night vision system must increase visibility in situations where only low beam headlights can be used today. As pedestrians and animals face the greatest increase in risk in night-time traffic due to darkness, the ability to detect these objects should be the main performance criterion, and the system must remain effective when facing the headlights of oncoming vehicles. Far infrared (FIR) systems have been shown to be superior to near infrared (NIR) systems in terms of pedestrian detection distance. Near infrared images were rated as having significantly higher visual clutter than far infrared images, and visual clutter has been shown to correlate with reduced pedestrian detection distance. Far infrared images are perceived as more unusual and therefore more difficult to interpret, although this impression is likely related to their lower visual clutter. However, the main issue in comparing the two technologies should be how well they solve the driver's problem of insufficient visibility under low beam conditions, especially regarding pedestrians and other vulnerable road users. With the addition of an automatic detection aid, a key question is whether the advantage of FIR systems will vanish once NIR systems offer well-performing automatic pedestrian detection. The first night vision introductions did not generate the sales volumes initially expected; however, renewed interest in night vision systems is to be expected after the release of night vision systems by BMW, Mercedes and Honda, the latter with automatic pedestrian detection.
Suzuki, Daichi G; Murakami, Yasunori; Escriva, Hector; Wada, Hiroshi
2015-02-01
Vertebrates are equipped with so-called camera eyes, which provide them with image-forming vision. Vertebrate image-forming vision evolved independently from that of other animals and is regarded as a key innovation for enhancing predatory ability and ecological success. Evolutionary changes in the neural circuits, particularly the visual center, were central for the acquisition of image-forming vision. However, the evolutionary steps, from protochordates to jaw-less primitive vertebrates and then to jawed vertebrates, remain largely unknown. To bridge this gap, we present the detailed development of retinofugal projections in the lamprey, the neuroarchitecture in amphioxus, and the brain patterning in both animals. Both the lateral eye in larval lamprey and the frontal eye in amphioxus project to a light-detecting visual center in the caudal prosencephalic region marked by Pax6, which possibly represents the ancestral state of the chordate visual system. Our results indicate that the visual system of the larval lamprey represents an evolutionarily primitive state, forming a link from protochordates to vertebrates and providing a new perspective of brain evolution based on developmental mechanisms and neural functions. © 2014 Wiley Periodicals, Inc.
The Tactile Vision Substitution System: Applications in Education and Employment
ERIC Educational Resources Information Center
Scadden, Lawrence A.
1974-01-01
The Tactile Vision Substitution System converts the visual image from a narrow-angle television camera to a tactual image on a 5-inch square, 100-point display of vibrators placed against the abdomen of the blind person. (Author)
Chemiluminescent imaging of transpired ethanol from the palm for evaluation of alcohol metabolism.
Arakawa, Takahiro; Kita, Kazutaka; Wang, Xin; Miyajima, Kumiko; Toma, Koji; Mitsubayashi, Kohji
2015-05-15
A 2-dimensional imaging system was constructed and applied to measurements of gaseous ethanol emissions from the human palm. The system measures gaseous ethanol concentrations as intensities of chemiluminescence from the luminol reaction induced by alcohol oxidase and a luminol-hydrogen peroxide-horseradish peroxidase system. Ethanol distributions and concentrations were converted to 2-dimensional chemiluminescence on an enzyme-immobilized mesh substrate in a dark box containing a luminol solution. In order to visualize ethanol emissions from human palm skin, we developed a highly sensitive and selective imaging system for transpired gaseous ethanol at sub-ppm levels. A mixture of a high-purity luminol solution (luminol sodium salt HG solution, instead of a standard luminol solution) and an eosin Y solution as enhancer was adopted to improve the chemiluminescent intensity of the imaging system, which improved the detection limit to 3 ppm gaseous ethanol. This highly sensitive imaging allowed us to visualize the emission dynamics of transdermal gaseous ethanol. The intensity at each site on the palm reflects the local ethanol concentration, corresponding to the amount of alcohol metabolized upon consumption. This imaging system is useful for assessing ethanol emission from the palmar skin. Copyright © 2014 Elsevier B.V. All rights reserved.
A Practical and Portable Solid-State Electronic Terahertz Imaging System
Smart, Ken; Du, Jia; Li, Li; Wang, David; Leslie, Keith; Ji, Fan; Li, Xiang Dong; Zeng, Da Zhang
2016-01-01
A practical compact solid-state terahertz imaging system is presented. Various beam guiding architectures were explored and hardware performance assessed to improve the system's compactness, robustness, multi-functionality and simplicity of operation. The system performance in terms of image resolution, signal-to-noise ratio, and electronic signal modulation versus an optical chopper is evaluated and discussed. The system can be conveniently switched between transmission and reflection mode according to the application. A range of imaging application scenarios was explored and images of high visual quality were obtained in both transmission and reflection modes. PMID:27110791
How do plants see the world? - UV imaging with a TiO2 nanowire array by artificial photosynthesis.
Kang, Ji-Hoon; Leportier, Thibault; Park, Min-Chul; Han, Sung Gyu; Song, Jin-Dong; Ju, Hyunsu; Hwang, Yun Jeong; Ju, Byeong-Kwon; Poon, Ting-Chung
2018-05-10
The concept of plant vision refers to the fact that plants are receptive to their visual environment, although the mechanism involved is quite distinct from the human visual system. The mechanism in plants is not well understood and has yet to be fully investigated. In this work, we have exploited the properties of TiO2 nanowires as a UV sensor to simulate the phenomenon of photosynthesis in order to come one step closer to understanding how plants see the world. To the best of our knowledge, this study is the first approach to emulate and depict plant vision. We have emulated the visual map perceived by plants with a single-pixel imaging system combined with a mechanical scanner. The image acquisition has been demonstrated for several electrolyte environments, in both transmissive and reflective configurations, in order to explore the different conditions in which plants perceive light.
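A minimal sketch of the single-pixel acquisition loop described above, with `read_photocurrent` as a hypothetical stand-in for the TiO2 nanowire readout at each mechanical scanner position:

```python
# A minimal sketch, not the authors' acquisition software: one sensor
# reading is recorded per raster position to assemble the image.
import numpy as np

def acquire_single_pixel_image(read_photocurrent, n_rows: int, n_cols: int):
    """Visit each (row, col) scan position and record one sensor reading."""
    image = np.zeros((n_rows, n_cols))
    for r in range(n_rows):
        for c in range(n_cols):
            image[r, c] = read_photocurrent(r, c)  # move stage, then sample
    return image
```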
Elementary Teachers' Selection and Use of Visual Models
NASA Astrophysics Data System (ADS)
Lee, Tammy D.; Gail Jones, M.
2018-02-01
As science grows in complexity, science teachers face an increasing challenge of helping students interpret models that represent complex science systems. Little is known about how teachers select and use models when planning lessons. This mixed methods study investigated the pedagogical approaches and visual models used by elementary in-service and preservice teachers in the development of a science lesson about a complex system (e.g., water cycle). Sixty-seven elementary in-service and 69 elementary preservice teachers completed a card sort task designed to document the types of visual models (e.g., images) that teachers choose when planning science instruction. Quantitative and qualitative analyses were conducted to analyze the card sort task. Semistructured interviews were conducted with a subsample of teachers to elicit the rationale for image selection. Results from this study showed that both experienced in-service teachers and novice preservice teachers tended to select similar models and use similar rationales for images to be used in lessons. Teachers tended to select models that were aesthetically pleasing and simple in design and illustrated specific elements of the water cycle. The results also showed that teachers were not likely to select images that represented the less obvious dimensions of the water cycle. Furthermore, teachers selected visual models more as a pedagogical tool to illustrate specific elements of the water cycle and less often as a tool to promote student learning related to complex systems.
Implementation of a General Real-Time Visual Anomaly Detection System Via Soft Computing
NASA Technical Reports Server (NTRS)
Dominguez, Jesus A.; Klinko, Steve; Ferrell, Bob; Steinrock, Todd (Technical Monitor)
2001-01-01
The intelligent visual system detects anomalies or defects in real time under normal lighting operating conditions. The application is essentially a learning machine that integrates fuzzy logic (FL), artificial neural network (ANN), and genetic algorithm (GA) schemes to process the image, run the learning process, and detect the anomalies or defects. The system acquires the image, performs segmentation to separate the object being tested from the background, preprocesses the image using fuzzy reasoning, performs the final segmentation using fuzzy reasoning techniques to retrieve regions with potential anomalies or defects, and finally retrieves them using a learning model built via ANN and GA techniques. FL provides a powerful framework for knowledge representation and overcomes the uncertainty and vagueness typically found in image analysis. ANN provides learning capabilities, and GA leads to robust learning results. An application prototype currently runs on a regular PC under Windows NT, and preliminary work has been performed to build an embedded version with multiple image processors. The application prototype is being tested at the Kennedy Space Center (KSC), Florida, to visually detect anomalies along slide basket cables utilized by the astronauts to evacuate the NASA Shuttle launch pad in an emergency. The potential applications of this anomaly detection system in an open environment are quite wide. Another potentially viable application at NASA is in detecting anomalies in the NASA Space Shuttle Orbiter's radiator panels.
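As a hedged illustration of the fuzzy-reasoning stage (not the KSC code), the sketch below grades pixels with triangular membership functions and flags candidate anomaly regions; all thresholds are assumptions.

```python
# A minimal sketch: fuzzy membership functions grade pixels as potential
# anomalies before a learned classifier would refine the regions.
import numpy as np

def triangular_membership(x, a, b, c):
    """Classic triangular fuzzy membership on [a, c], peaking at b."""
    x = np.asarray(x, dtype=float)
    left = np.clip((x - a) / (b - a + 1e-12), 0, 1)
    right = np.clip((c - x) / (c - b + 1e-12), 0, 1)
    return np.minimum(left, right)

def fuzzy_anomaly_mask(gray, dark=(0, 40, 90), bright=(160, 215, 255), cut=0.5):
    """Pixels strongly 'dark' or 'bright' relative to the nominal surface
    are flagged as candidate anomalies (all thresholds are assumptions)."""
    m = np.maximum(triangular_membership(gray, *dark),
                   triangular_membership(gray, *bright))
    return m >= cut
```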
Strathearn, Lane; Kim, Sohye; Bastian, D Anthony; Jung, Jennifer; Iyengar, Udita; Martinez, Sheila; Goin-Kochel, Robin P; Fonagy, Peter
2018-05-01
Several studies have suggested that the neuropeptide oxytocin may enhance aspects of social communication in autism. Little is known, however, about its effects on nonsocial manifestations, such as restricted interests and repetitive behaviors. In the empathizing-systemizing theory of autism, social deficits are described along the continuum of empathizing ability, whereas nonsocial aspects are characterized in terms of an increased preference for patterned or rule-based systems, called systemizing. We therefore developed an automated eye-tracking task to test whether children and adolescents with autism spectrum disorder (ASD) compared to matched controls display a visual preference for more highly organized and structured (systemized) real-life images. Then, as part of a randomized, double-blind, placebo-controlled crossover study, we examined the effect of intranasal oxytocin on systemizing preferences in 16 male children with ASD, compared with 16 matched controls. Participants viewed 14 slides, each containing four related pictures (e.g., of people, animals, scenes, or objects) that differed primarily on the degree of systemizing. Visual systemizing preference was defined in terms of the fixation time and count for each image. Unlike control subjects who showed no gaze preference, individuals with ASD preferred to fixate on more highly systemized pictures. Intranasal oxytocin eliminated this preference in ASD participants, who now showed a similar response to control subjects on placebo. In contrast, control participants increased their visual preference for more systemized images after receiving oxytocin versus placebo. These results suggest that, in addition to its effects on social communication, oxytocin may play a role in some of the nonsocial manifestations of autism.
The Effects of Explicit Visual Cues in Reading Biological Diagrams
ERIC Educational Resources Information Center
Ge, Yun-Ping; Unsworth, Len; Wang, Kuo-Hua
2017-01-01
Drawing on cognitive theories, this study intends to investigate the effects of explicit visual cues which have been proposed as a critical factor in facilitating understanding of biological images. Three diagrams from Taiwanese textbooks with implicit visual cues, involving the concepts of biological classification systems, fish taxonomy, and…
Kuru, Kaya; Niranjan, Mahesan; Tunca, Yusuf; Osvank, Erhan; Azim, Tayyaba
2014-10-01
In general, medical geneticists aim to pre-diagnose underlying syndromes based on facial features before performing cytological or molecular analyses where a genotype-phenotype interrelation is possible. However, determining correct genotype-phenotype interrelationships among many syndromes is tedious and labor-intensive, especially for extremely rare syndromes. Thus, a computer-aided system for pre-diagnosis can facilitate effective and efficient decision support, particularly when few similar cases are available, or in remote rural districts where diagnostic knowledge of syndromes is not readily available. The proposed methodology, a visual diagnostic decision support system (visual diagnostic DSS), employs machine learning (ML) algorithms and digital image processing techniques in a hybrid approach for automated diagnosis in medical genetics. This approach uses facial features in reference images of disorders to identify visual genotype-phenotype interrelationships. Our statistical method describes facial image data as principal component features and diagnoses syndromes using these features. The proposed system was trained using a real dataset of previously published face images of subjects with syndromes, which provided accurate diagnostic information. The method was tested using a leave-one-out cross-validation scheme with 15 different syndromes, each comprising 5-9 cases, i.e., 92 cases in total. An accuracy rate of 83% was achieved using this automated diagnosis technique, which was statistically significant (p<0.01). Furthermore, the sensitivity and specificity values were 0.857 and 0.870, respectively. Our results show that the accurate classification of syndromes is feasible using ML techniques. Thus, a large number of syndromes with characteristic facial anomaly patterns could be diagnosed with diagnostic DSSs similar to the visual diagnostic DSS described in the present study, demonstrating the benefits of hybrid image processing and ML-based computer-aided diagnostics for identifying facial phenotypes. Copyright © 2014. Published by Elsevier B.V.
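A minimal sketch of the statistical core described above, i.e., principal component features classified by nearest neighbour under leave-one-out cross-validation; the array shapes (n_cases x n_pixels with integer syndrome labels) and the component count are assumptions, not the paper's exact pipeline.

```python
# A minimal sketch using scikit-learn; PCA is refit on each training fold
# so no test information leaks into the features.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import LeaveOneOut
from sklearn.neighbors import KNeighborsClassifier

def loo_accuracy(X: np.ndarray, y: np.ndarray, n_components: int = 20) -> float:
    """X: (n_cases, n_pixels) flattened face images; y: syndrome labels."""
    correct = 0
    for train, test in LeaveOneOut().split(X):
        pca = PCA(n_components=n_components).fit(X[train])
        clf = KNeighborsClassifier(n_neighbors=1)
        clf.fit(pca.transform(X[train]), y[train])
        correct += int(clf.predict(pca.transform(X[test]))[0] == y[test][0])
    return correct / len(y)
```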
NASA Astrophysics Data System (ADS)
Liansheng, Sui; Bei, Zhou; Zhanmin, Wang; Ailing, Tian
2017-05-01
A novel optical color image watermarking scheme considering human visual characteristics is presented in the gyrator transform domain. Initially, an appropriate reference image is constructed from significant blocks chosen from the grayscale host image by evaluating visual characteristics such as visual entropy and edge entropy. The three components of the color watermark image are compressed based on compressive sensing, and the corresponding results are combined to form the grayscale watermark. Then, the frequency coefficients of the watermark image are fused into the frequency data of the gyrator-transformed reference image. The fused result is inversely transformed and partitioned, and eventually the watermarked image is obtained by mapping the resultant blocks back to their original positions. The scheme can reconstruct the watermark with high perceptual quality and has enhanced security due to the high sensitivity of the secret keys. Importantly, the scheme can be implemented easily under the framework of double random phase encoding with the 4f optical system. To the best of our knowledge, this is the first report on embedding a color watermark into a grayscale host image, which an attacker would not expect. Simulation results are given to verify the feasibility of the scheme and its superior performance in terms of noise and occlusion robustness.
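As an illustration of the block-selection step (one of the two criteria; edge entropy is computed analogously on an edge map), a minimal sketch that ranks host-image blocks by visual entropy; the block size and the number of blocks kept are assumptions.

```python
# A minimal sketch of entropy-based block selection on an 8-bit grayscale
# host image; not the authors' exact selection rule.
import numpy as np

def block_entropy(block: np.ndarray) -> float:
    """Shannon entropy of the block's gray-level histogram."""
    hist, _ = np.histogram(block, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def select_significant_blocks(gray: np.ndarray, bs: int = 8, keep: int = 64):
    """Return the top-`keep` (row, col) block origins ranked by entropy."""
    scores = []
    for r in range(0, gray.shape[0] - bs + 1, bs):
        for c in range(0, gray.shape[1] - bs + 1, bs):
            scores.append((block_entropy(gray[r:r+bs, c:c+bs]), (r, c)))
    scores.sort(reverse=True)
    return [pos for _, pos in scores[:keep]]
```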
A neotropical Miocene pollen database employing image-based search and semantic modeling
Han, Jing Ginger; Cao, Hongfei; Barb, Adrian; Punyasena, Surangi W.; Jaramillo, Carlos; Shyu, Chi-Ren
2014-01-01
• Premise of the study: Digital microscopic pollen images are being generated with increasing speed and volume, producing opportunities to develop new computational methods that increase the consistency and efficiency of pollen analysis and provide the palynological community a computational framework for information sharing and knowledge transfer. • Methods: Mathematical methods were used to assign trait semantics (abstract morphological representations) of the images of neotropical Miocene pollen and spores. Advanced database-indexing structures were built to compare and retrieve similar images based on their visual content. A Web-based system was developed to provide novel tools for automatic trait semantic annotation and image retrieval by trait semantics and visual content. • Results: Mathematical models that map visual features to trait semantics can be used to annotate images with morphology semantics and to search image databases with improved reliability and productivity. Images can also be searched by visual content, providing users with customized emphases on traits such as color, shape, and texture. • Discussion: Content- and semantic-based image searches provide a powerful computational platform for pollen and spore identification. The infrastructure outlined provides a framework for building a community-wide palynological resource, streamlining the process of manual identification, analysis, and species discovery. PMID:25202648
NASA Astrophysics Data System (ADS)
Lushnikov, D. S.; Zherdev, A. Y.; Odinokov, S. B.; Markin, V. V.; Smirnov, A. V.
2017-05-01
Visual security elements used in color holographic stereograms (three-dimensional colored security holograms) and methods for their production are described in this article. These visual security elements include color microtext, a color hidden image, and horizontal and vertical flip-flop effects that change color and image. The article also presents variants of optical systems that allow the visual security elements to be recorded as part of the holographic stereograms. Methods for solving the optical problems that arise when recording visual security elements are presented, and features of the perception of visual security elements during verification of security holograms are noted. The work was partially funded under the Agreement with the RF Ministry of Education and Science № 14.577.21.0197, grant RFMEFI57715X0197.
Neuronal Mechanism for Compensation of Longitudinal Chromatic Aberration-Derived Algorithm.
Barkan, Yuval; Spitzer, Hedva
2018-01-01
The human visual system faces many challenges, among them the need to overcome the imperfections of its optics, which degrade the retinal image. One of the most dominant limitations is longitudinal chromatic aberration (LCA), which causes short wavelengths (blue light) to be focused in front of the retina with consequent blurring of the retinal chromatic image. The perceived visual appearance, however, does not display such chromatic distortions. The intriguing question, therefore, is how the perceived visual appearance of a sharp and clear chromatic image is achieved despite the imperfections of the ocular optics. To address this issue, we propose a neural mechanism and computational model based on the unique properties of the S-cone pathway. The model suggests that the visual system overcomes LCA through two known properties of the S channel: (1) omission of the S-channel contribution from the high-spatial-resolution pathway (which utilizes only the L and M channels), and (2) large, coextensive receptive fields corresponding to the small bistratified cells. Here, we use computational simulations of our model on real images to show how integrating these two basic principles can provide significant compensation for LCA. Further support for the proposed neuronal mechanism is given by the ability of the model to predict an enigmatic visual phenomenon of large color shifts as part of the assimilation effect. PMID:29527525
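A minimal sketch of the two principles, assuming cone-response planes as input and modelling the large coextensive receptive fields as a wide Gaussian; the channel weights and sigma are assumptions, not the paper's fitted parameters.

```python
# A minimal sketch: luminance detail is carried by L+M only, while the S
# (blue) channel is represented at low spatial resolution.
import numpy as np
from scipy.ndimage import gaussian_filter

def lms_style_reconstruction(l, m, s, s_sigma=8.0):
    """l, m, s: cone-response planes of equal shape."""
    luminance = 0.5 * (l + m)               # high-resolution pathway, no S input
    s_coarse = gaussian_filter(s, s_sigma)  # large, coextensive receptive fields
    return luminance, s_coarse              # downstream stages recombine the two
```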
The monocular visual imaging technology model applied in the airport surface surveillance
NASA Astrophysics Data System (ADS)
Qin, Zhe; Wang, Jian; Huang, Chao
2013-08-01
At present, civil aviation airports use surface surveillance radar systems to monitor and locate aircraft, vehicles, and other moving objects. Surface surveillance radars can cover most of the airport scene, but because of the geometry of terminals, covered bridges, and other buildings, they inevitably have small blind spots. This paper presents a monocular vision imaging technology model for airport surface surveillance that perceives and locates moving objects in the scene, such as aircraft, vehicles, and personnel. This new model provides an important complement to airport surface surveillance and differs from traditional surface surveillance radar techniques. The technique not only provides the ATC with a clear view of object activity, but also provides image recognition and positioning of moving targets in the area, thereby improving the efficiency of airport operations and helping to avoid conflicts between aircraft and vehicles. This paper first introduces the monocular visual imaging technology model applied to airport surface surveillance, and then analyzes the measurement accuracy of the model. The monocular visual imaging technology model is simple, low cost, and highly efficient: an advanced monitoring technique that can cover the blind spots of surface surveillance radar monitoring and positioning systems.
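One standard way to realize monocular positioning on the (locally flat) airport surface, offered here as a hedged sketch rather than the paper's exact model, is to intersect each pixel's viewing ray with the ground plane using a calibrated camera pose:

```python
# A minimal sketch; K, R, t are assumed camera parameters (intrinsics,
# rotation, and camera position in the world frame), not from the paper.
import numpy as np

def pixel_to_ground(u, v, K, R, t):
    """Intersect the back-projected ray of pixel (u, v) with the plane z=0."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    ray_world = R @ ray_cam          # direction of the viewing ray in the world
    lam = -t[2] / ray_world[2]       # scale so the ray reaches z = 0
    return t + lam * ray_world       # ground point (x, y, 0)
```

Localization error then grows with distance, since a one-pixel error subtends a larger ground patch at shallow viewing angles; this is the kind of accuracy analysis the paper describes.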
Connecting Swath Satellite Data With Imagery in Mapping Applications
NASA Astrophysics Data System (ADS)
Thompson, C. K.; Hall, J. R.; Penteado, P. F.; Roberts, J. T.; Zhou, A. Y.
2016-12-01
Visualizations of gridded science data products (referred to as Level 3 or Level 4) typically provide a straightforward correlation between image pixels and the source science data. This direct relationship allows users to make initial inferences based on imagery values, facilitating additional operations on the underlying data values, such as data subsetting and analysis. However, that same pixel-to-data relationship for ungridded science data products (referred to as Level 2) is significantly more challenging. These products, also referred to as "swath products", are in orbital "instrument space" and raster visualization pixels do not directly correlate to science data values. Interpolation algorithms are often employed during the gridding or projection of a science dataset prior to image generation, introducing intermediary values that separate the image from the source data values. NASA's Global Imagery Browse Services (GIBS) is researching techniques for efficiently serving "image-ready" data allowing client-side dynamic visualization and analysis capabilities. This presentation will cover some GIBS prototyping work designed to maintain connectivity between Level 2 swath data and its corresponding raster visualizations. Specifically, we discuss the DAta-to-Image-SYstem (DAISY), an indexing approach for Level 2 swath data, and the mechanisms whereby a client may dynamically visualize the data in raster form.
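As a hedged sketch of the indexing idea (not necessarily how DAISY is implemented), a spatial index over swath sample coordinates lets a client map any raster pixel back to its nearest source sample, preserving the pixel-to-data link for Level 2 products:

```python
# A minimal sketch using a KD-tree; coordinate arrays and the search
# radius are assumptions.
import numpy as np
from scipy.spatial import cKDTree

def build_swath_index(swath_lon, swath_lat):
    """swath_lon/lat: 1D arrays of swath sample coordinates (degrees)."""
    return cKDTree(np.column_stack([swath_lon, swath_lat]))

def pixel_to_sample(index, pixel_lon, pixel_lat, max_deg=0.05):
    """Return the nearest swath sample id, or -1 if outside the swath."""
    dist, idx = index.query([pixel_lon, pixel_lat])
    return int(idx) if dist <= max_deg else -1
```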
Retrieving the unretrievable in electronic imaging systems: emotions, themes, and stories
NASA Astrophysics Data System (ADS)
Joergensen, Corinne
1999-05-01
New paradigms such as 'affective computing' and user-based research are extending the realm of facets traditionally addressed in IR systems. This paper builds on previous research reported to the electronic imaging community concerning the need to provide access to more abstract image attributes than those currently amenable to content-based and text-based indexing techniques. Empirical research suggests that, for visual materials, in addition to standard bibliographic data and broad subject, and beyond such visually perceptual attributes as color, texture, shape, and position or focal point, access points such as themes, abstract concepts, emotions, stories, and 'people-related' information such as social status would be useful in image retrieval. More recent research demonstrates that similar results are also obtained with 'fine arts' images, which generally have no access provided for these types of attributes. Current efforts to match image attributes as revealed in empirical research with those addressed in current text-based and content-based indexing systems are discussed, as well as the need for new representations of image attributes and for collaboration among diverse communities of researchers.
Schut, Martijn J; Van der Stoep, Nathan; Postma, Albert; Van der Stigchel, Stefan
2017-06-01
To facilitate visual continuity across eye movements, the visual system must presaccadically acquire information about the future foveal image. Previous studies have indicated that visual working memory (VWM) affects saccade execution. However, the reverse relation, the effect of saccade execution on VWM load, is less clear. To investigate the causal link between saccade execution and VWM, we combined a VWM task and a saccade task. Participants were instructed to remember one, two, or three shapes and performed either a No-Saccade, a Single-Saccade, or a Dual (corrective) Saccade task. The results indicate that items stored in VWM are reported less accurately if a single-saccade or a dual-saccade task is performed alongside retaining items in VWM. Importantly, the loss of response accuracy for items retained in VWM caused by performing a saccade was similar to committing an extra item to VWM. In a second experiment, we observed no cost of executing a saccade for auditory working memory performance, indicating that executing a saccade exclusively taxes the VWM system. Our results suggest that the visual system presaccadically stores the upcoming retinal image, which has a similar VWM load as committing one extra item to memory and interferes with stored VWM content. After the saccade, the visual system can retrieve this item from VWM to evaluate saccade accuracy. Our results support the idea that VWM is a system directly linked to saccade execution that promotes visual continuity across saccades.
fMRI mapping of the visual system in the mouse brain with interleaved snapshot GE-EPI.
Niranjan, Arun; Christie, Isabel N; Solomon, Samuel G; Wells, Jack A; Lythgoe, Mark F
2016-10-01
The use of functional magnetic resonance imaging (fMRI) in mice is increasingly prevalent, providing a means to non-invasively characterise functional abnormalities associated with genetic models of human diseases. The predominant stimulus used in task-based fMRI in the mouse is electrical stimulation of the paw. Task-based fMRI in mice using visual stimuli remains underexplored, despite visual stimuli being common in human fMRI studies. In this study, we map the mouse brain visual system with BOLD measurements at 9.4 T using flashing light stimuli under medetomidine anaesthesia. BOLD responses were observed in the lateral geniculate nucleus, the superior colliculus and the primary visual area of the cortex, and were modulated by the flashing frequency, diffuse vs focussed light, and stimulus context. Negative BOLD responses were measured in the visual cortex at a 10 Hz flashing frequency but became positive below 5 Hz. In addition, the use of interleaved snapshot GE-EPI improved fMRI image quality without diminishing the temporal contrast-to-noise ratio. Taken together, this work demonstrates a novel methodological protocol with which the mouse brain visual system can be non-invasively investigated using BOLD fMRI. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
A framework for interactive visualization of digital medical images.
Koehring, Andrew; Foo, Jung Leng; Miyano, Go; Lobe, Thom; Winer, Eliot
2008-10-01
The visualization of medical images obtained from scanning techniques such as computed tomography and magnetic resonance imaging is a well-researched field. However, advanced tools and methods to manipulate these data for surgical planning and other tasks have not seen widespread use among medical professionals. Radiologists have begun using more advanced visualization packages on desktop computer systems, but most physicians continue to work with basic two-dimensional grayscale images or do not work directly with the data at all. In addition, new display technologies in use in other fields have yet to be fully applied in medicine. It is our estimation that usability is the key aspect keeping this new technology from being more widely used by the medical community at large. Therefore, we have developed a software and hardware framework that not only makes use of advanced visualization techniques, but also features powerful yet simple-to-use interfaces. A virtual reality system was created to display volume-rendered medical models in three dimensions. It was designed to run in many configurations, from a large cluster of machines powering a multiwalled display down to a single desktop computer. An augmented reality system was also created for, literally, hands-on interaction when viewing models of medical data. Last, a desktop application was designed to provide a simple visualization tool that can be run on nearly any computer at a user's disposal. This research is directed toward improving the capabilities of medical professionals in the tasks of preoperative planning, surgical training, diagnostic assistance, and patient education.
NASA Astrophysics Data System (ADS)
Jang, Sun-Joo; Park, Taejin; Shin, Inho; Park, Hyun Sang; Shin, Paul; Oh, Wang-Yuhl
2016-02-01
Optical coherence tomography (OCT) is a useful method for in vivo tissue imaging with deep penetration and high spatial resolution. However, imaging of the beating mouse heart is still challenging due to limited temporal resolution or penetration depth. Here, we demonstrate a multifunctional OCT system for the beating mouse heart, providing various types of visual information about heart pathophysiology with high spatiotemporal resolution and deep tissue imaging. Angiographic imaging and polarization-sensitive (PS) imaging were implemented with an electrocardiogram (ECG)-triggered beam scanning scheme on a high-speed OCT platform (A-line rate: 240 kHz). Depth-resolved local birefringence and the local orientation of the mouse myocardial fiber were visualized with PS-OCT. ECG-triggered angiographic OCT (AOCT) with a custom-built motion stabilization imaging window provided the myocardial vasculature of the beating mouse heart. Mice underwent coronary artery ligation to induce myocardial infarction (MI) and were imaged with the multifunctional OCT system at multiple time points. AOCT and PS-OCT visualize changes in the functionality of the coronary vessels and myocardium, respectively, at different phases (acute and chronic) of MI in the ischemic mouse heart. Taken together, the integrated imaging of PS-OCT and AOCT could play an important role in the study of MI, providing multi-dimensional information on the ischemic mouse heart in vivo.
Multispectral photoacoustic tomography for detection of small tumors inside biological tissues
NASA Astrophysics Data System (ADS)
Hirasawa, Takeshi; Okawa, Shinpei; Tsujita, Kazuhiro; Kushibiki, Toshihiro; Fujita, Masanori; Urano, Yasuteru; Ishihara, Miya
2018-02-01
Visualization of small tumors inside biological tissue is important in cancer treatment because it promotes accurate surgical resection and enables monitoring of therapeutic effects. For sensitive detection of tumors, we have been developing photoacoustic (PA) imaging techniques to visualize tumor-specific contrast agents, and have already succeeded in imaging a subcutaneous tumor in a mouse using the contrast agents. To image tumors inside biological tissues, an extended imaging depth and improved sensitivity were required. In this study, to extend the imaging depth, we developed a PA tomography (PAT) system that can image an entire cross section of a mouse. To improve sensitivity, we considered the use of a P(VDF-TrFE) linear-array acoustic sensor that can detect PA signals over a wide range of frequencies. Because PA signals produced by low-absorbance optical absorbers shift to lower frequencies, we hypothesized that detecting low-frequency PA signals improves sensitivity to low-absorbance optical absorbers. We developed a PAT system with both a PZT linear-array acoustic sensor and the P(VDF-TrFE) sensor, and performed experiments using tissue-mimicking phantoms to evaluate the lower detection limits of absorbance. As a result, PAT images calculated from the low-frequency components of PA signals detected by the P(VDF-TrFE) sensor could visualize optical absorbers with lower absorbance.
Goscinski, Wojtek J.; McIntosh, Paul; Felzmann, Ulrich; Maksimenko, Anton; Hall, Christopher J.; Gureyev, Timur; Thompson, Darren; Janke, Andrew; Galloway, Graham; Killeen, Neil E. B.; Raniga, Parnesh; Kaluza, Owen; Ng, Amanda; Poudel, Govinda; Barnes, David G.; Nguyen, Toan; Bonnington, Paul; Egan, Gary F.
2014-01-01
The Multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) is a national imaging and visualization facility established by Monash University, the Australian Synchrotron, the Commonwealth Scientific Industrial Research Organization (CSIRO), and the Victorian Partnership for Advanced Computing (VPAC), with funding from the National Computational Infrastructure and the Victorian Government. The MASSIVE facility provides hardware, software, and expertise to drive research in the biomedical sciences, particularly advanced brain imaging research using synchrotron x-ray and infrared imaging, functional and structural magnetic resonance imaging (MRI), x-ray computer tomography (CT), electron microscopy and optical microscopy. The development of MASSIVE has been based on best practice in system integration methodologies, frameworks, and architectures. The facility has: (i) integrated multiple different neuroimaging analysis software components, (ii) enabled cross-platform and cross-modality integration of neuroinformatics tools, and (iii) brought together neuroimaging databases and analysis workflows. MASSIVE is now operational as a nationally distributed and integrated facility for neuroinformatics and brain imaging research. PMID:24734019
NASA Astrophysics Data System (ADS)
Akimoto, Makio; Chen, Yu; Miyazaki, Michio; Yamashita, Toyonobu; Miyakawa, Michio; Hata, Mieko
The skin is unique as an organ that is highly accessible to direct visual inspection with light. Visual inspection of cutaneous morphology is the mainstay of clinical dermatology, but relies heavily on subjective assessment by skilled dermatologists. We present an imaging colorimeter, a non-contact skin color measurement system, together with experimental results obtained with the instrument. The system comprises a video camera, a light source, a real-time image processing board, a magneto-optical disk, and a personal computer that controls the entire system. The CIE-L*a*b* uniform color space is used. The system has been used for monitoring in several clinical diagnoses. The instrument is non-contact, easy to operate, and, unlike conventional colorimeters, highly precise. It is useful for clinical diagnosis and for monitoring and evaluating the effectiveness of treatment.
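The CIE-L*a*b* values such a colorimeter reports can be derived from camera RGB via the standard sRGB-to-XYZ-to-Lab chain; a minimal sketch assuming sRGB-encoded 8-bit input and a D65 white point (the paper's calibration path may differ):

```python
# A minimal sketch of the standard sRGB -> CIE-L*a*b* conversion.
import numpy as np

M = np.array([[0.4124, 0.3576, 0.1805],     # sRGB (linear) -> XYZ, D65
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])
WHITE = np.array([0.9505, 1.0000, 1.0890])  # D65 reference white

def srgb_to_lab(rgb):
    """rgb: length-3 array of 8-bit sRGB values for one pixel."""
    rgb = np.asarray(rgb, dtype=float) / 255.0
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    xyz = M @ lin
    r = xyz / WHITE
    f = np.where(r > (6/29)**3, r ** (1/3), r / (3 * (6/29)**2) + 4/29)
    L = 116 * f[1] - 16
    a = 500 * (f[0] - f[1])
    b = 200 * (f[1] - f[2])
    return L, a, b
```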
Vision System Measures Motions of Robot and External Objects
NASA Technical Reports Server (NTRS)
Talukder, Ashit; Matthies, Larry
2008-01-01
A prototype of an advanced robotic vision system both (1) measures its own motion with respect to a stationary background and (2) detects other moving objects and estimates their motions, all by use of visual cues. Like some prior robotic and other optoelectronic vision systems, this system is based partly on concepts of optical flow and visual odometry. Whereas prior optoelectronic visual-odometry systems have been limited to frame rates of no more than 1 Hz, a visual-odometry subsystem that is part of this system operates at a frame rate of 60 to 200 Hz, given optical-flow estimates. The overall system operates at an effective frame rate of 12 Hz. Moreover, unlike prior machine-vision systems for detecting motions of external objects, this system need not remain stationary: it can detect such motions while it is moving (even vibrating). The system includes a stereoscopic pair of cameras mounted on a moving robot. The outputs of the cameras are digitized, then processed to extract positions and velocities. The initial image-data-processing functions of this system are the same as those of some prior systems: Stereoscopy is used to compute three-dimensional (3D) positions for all pixels in the camera images. For each pixel of each image, optical flow between successive image frames is used to compute the two-dimensional (2D) apparent relative translational motion of the point transverse to the line of sight of the camera. The challenge in designing this system was to provide for utilization of the 3D information from stereoscopy in conjunction with the 2D information from optical flow to distinguish between motion of the camera pair and motions of external objects, compute the motion of the camera pair in all six degrees of translational and rotational freedom, and robustly estimate the motions of external objects, all in real time. To meet this challenge, the system is designed to perform the following image-data-processing functions: The visual-odometry subsystem (the subsystem that estimates the motion of the camera pair relative to the stationary background) utilizes the 3D information from stereoscopy and the 2D information from optical flow. It computes the relationship between the 3D and 2D motions and uses a least-mean-squares technique to estimate motion parameters. The least-mean-squares technique is suitable for real-time implementation when the number of external-moving-object pixels is smaller than the number of stationary-background pixels.
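A minimal sketch of the least-squares motion step under stated assumptions: matched 3D points before and after camera motion (from stereo depth linked by optical flow) are fed to the standard SVD/Kabsch solution, which is one common least-squares formulation and not necessarily the authors' exact one.

```python
# A minimal sketch of rigid-motion estimation from tracked 3D points.
import numpy as np

def estimate_motion(P, Q):
    """P, Q: (N, 3) arrays of matched 3D points before/after camera motion.
    Returns R, t minimizing sum ||R @ p + t - q||^2 over all matches."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance of the clouds
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

In practice, pixels belonging to independently moving objects would be rejected as outliers (for example by residual thresholding) before refitting, which is how egomotion can be separated from external object motion.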
Wide field-of-view, multi-region two-photon imaging of neuronal activity in the mammalian brain
Stirman, Jeffrey N.; Smith, Ikuko T.; Kudenov, Michael W.; Smith, Spencer L.
2016-01-01
Two-photon calcium imaging provides an optical readout of neuronal activity in populations of neurons with subcellular resolution. However, conventional two-photon imaging systems are limited in their field of view to ~1 mm², precluding the visualization of multiple cortical areas simultaneously. Here, we demonstrate a two-photon microscope with an expanded field of view (>9.5 mm²) for rapidly reconfigurable simultaneous scanning of widely separated populations of neurons. We custom designed and assembled an optimized scan engine, objective, and two independently positionable, temporally multiplexed excitation pathways. We used this new microscope to measure activity correlations between two cortical visual areas in mice during visual processing. PMID:27347754
[Comparison of noise characteristics of direct and indirect conversion flat panel detectors].
Murai, Masami; Kishimoto, Kenji; Tanaka, Katsuhisa; Oota, Kenji; Ienaga, Akinori
2010-11-20
Flat-panel detector (FPD) digital radiography systems use either direct or indirect conversion, and the two conversion systems provide different imaging performance. We measured several imaging performance metrics [input-output characteristic, presampled modulation transfer function (presampled MTF), and noise power spectrum (NPS)] of direct and indirect FPD systems. In addition, image samples reflecting the NPSs were visually evaluated by the pair-comparison method. The presampled MTF of the direct FPD system was substantially higher than that of the indirect FPD system. The NPS of the direct FPD system had a high value at all spatial frequencies, whereas the NPS of the indirect FPD system decreased as the frequency increased. The visual evaluations showed the same tendency as the NPSs. We elucidated the cause of the difference in NPSs in a simulation study and determined that the difference in the noise components of the direct and indirect FPD systems is closely related to the presampled MTF.
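A minimal sketch of the usual periodogram-averaging NPS estimate from flat-field regions of interest; the ROI handling and the simple mean detrend are assumptions, not the authors' exact protocol.

```python
# A minimal sketch: NPS(u, v) ~ (pitch^2 / (Nx * Ny)) * <|FFT(noise)|^2>
# averaged over many uniformly exposed ROIs.
import numpy as np

def nps_2d(rois, pixel_pitch_mm: float):
    """rois: iterable of equal-size 2D flat-field patches (uniform exposure)."""
    rois = [np.asarray(r, dtype=float) for r in rois]
    ny, nx = rois[0].shape
    acc = np.zeros((ny, nx))
    for roi in rois:
        noise = roi - roi.mean()                 # simple mean detrend
        acc += np.abs(np.fft.fft2(noise)) ** 2
    return acc / len(rois) * (pixel_pitch_mm ** 2) / (nx * ny)
```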
BioMon: A Google Earth Based Continuous Biomass Monitoring System (Demo Paper)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vatsavai, Raju
2009-01-01
We demonstrate a Google Earth based novel visualization system for continuous monitoring of biomass at regional and global scales. This system is integrated with a back-end spatiotemporal data mining system that continuously detects changes using high temporal resolution MODIS images. In addition to the visualization, we demonstrate novel query features of the system that provide insights into the current conditions of the landscape.
Zhou, Zhuhuang; Wu, Shuicai; Lin, Man-Yen; Fang, Jui; Liu, Hao-Li; Tsui, Po-Hsiang
2018-05-01
In this study, the window-modulated compounding (WMC) technique was integrated into three-dimensional (3D) ultrasound Nakagami imaging for improving the spatial visualization of backscatter statistics. A 3D WMC Nakagami image was produced by summing and averaging a number of 3D Nakagami images (number of frames denoted as N) formed using sliding cubes with varying side lengths ranging from 1 to N times the transducer pulse. To evaluate the performance of the proposed 3D WMC Nakagami imaging method, agar phantoms with scatterer concentrations ranging from 2 to 64 scatterers/mm 3 were made, and six stages of fatty liver (zero, one, two, four, six, and eight weeks) were induced in rats by methionine-choline-deficient diets (three rats for each stage, total n = 18). A mechanical scanning system with a 5-MHz focused single-element transducer was used for ultrasound radiofrequency data acquisition. The experimental results showed that 3D WMC Nakagami imaging was able to characterize different scatterer concentrations. Backscatter statistics were visualized with various numbers of frames; N = 5 reduced the estimation error of 3D WMC Nakagami imaging in visualizing the backscatter statistics. Compared with conventional 3D Nakagami imaging, 3D WMC Nakagami imaging improved the image smoothness without significant image resolution degradation, and it can thus be used for describing different stages of fatty liver in rats.
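A minimal 2D sketch of the WMC idea (the study works with 3D sliding cubes sized in multiples of the transducer pulse length): moment-based Nakagami maps computed at several window sizes are averaged; the window sizes below are illustrative pixel values, not the paper's settings.

```python
# A minimal sketch: the Nakagami m parameter is estimated per window via
# the moment estimator m = E[R^2]^2 / Var(R^2) on envelope samples R,
# then maps from several window sizes are averaged (WMC).
import numpy as np

def nakagami_m(patch):
    r2 = np.asarray(patch, dtype=float) ** 2
    var = r2.var()
    return (r2.mean() ** 2) / var if var > 0 else 0.0

def nakagami_map(env, win):
    h, w = env.shape
    out = np.zeros((h - win + 1, w - win + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = nakagami_m(env[i:i+win, j:j+win])
    return out

def wmc_nakagami(env, wins=(3, 5, 7, 9, 11)):
    """Average maps from several window sizes, cropped to a common grid."""
    maps = [nakagami_map(env, w) for w in wins]
    size = min(m.shape[0] for m in maps), min(m.shape[1] for m in maps)
    return np.mean([m[:size[0], :size[1]] for m in maps], axis=0)
```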
Visualization of fluid turbulence and acoustic cavitation during phacoemulsification.
Tognetto, Daniele; Sanguinetti, Giorgia; Sirotti, Paolo; Brezar, Edoardo; Ravalico, Giuseppe
2005-02-01
To describe a technique for visualizing fluid turbulence and cavitational energy created by ultrasonic phaco tips. University Eye Clinic of Trieste, Trieste, Italy. Generation of cavitational energy by the phaco tip was visualized using an optical test bench comprising several components. The technique uses a telescope system to expand a laser light source into a coherent, collimated beam of light with a diameter of approximately 50.0 mm. The expanded laser beam shines on the test tube containing the tip activated in a medium of water or ophthalmic viscosurgical device (OVD). Two precision optical collimators complete the optical test bench and form the system used to focus data onto a charge-coupled device television camera connected to a recorder. Images of irrigation, irrigation combined with aspiration, irrigation/aspiration, and phacosonication were obtained with the tip immersed in a tube containing water or OVD. Optical image processing enabled acoustic cavitation to be visualized during phacosonication. The system is a possible means of evaluating a single phaco apparatus power setting and comparing phaco machines and techniques.
VirGO: A Visual Browser for the ESO Science Archive Facility
NASA Astrophysics Data System (ADS)
Chéreau, Fabien
2012-04-01
VirGO is the next generation Visual Browser for the ESO Science Archive Facility developed by the Virtual Observatory (VO) Systems Department. It is a plug-in for the popular open source software Stellarium adding capabilities for browsing professional astronomical data. VirGO gives astronomers the possibility to easily discover and select data from millions of observations in a new visual and intuitive way. Its main feature is to perform real-time access and graphical display of a large number of observations by showing instrumental footprints and image previews, and to allow their selection and filtering for subsequent download from the ESO SAF web interface. It also allows the loading of external FITS files or VOTables, the superimposition of Digitized Sky Survey (DSS) background images, and the visualization of the sky in a `real life' mode as seen from the main ESO sites. All data interfaces are based on Virtual Observatory standards which allow access to images and spectra from external data centers, and interaction with the ESO SAF web interface or any other VO applications supporting the PLASTIC messaging system.
Scrambling for anonymous visual communications
NASA Astrophysics Data System (ADS)
Dufaux, Frederic; Ebrahimi, Touradj
2005-08-01
In this paper, we present a system for anonymous visual communications. The target application is anonymous video chat. The system identifies faces in the video sequence by means of face detection or skin detection. The corresponding regions are subsequently scrambled. We investigate several approaches to scrambling, either in the image domain or in the transform domain. Experimental results show the effectiveness of the proposed system.
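As an illustration of image-domain scrambling of a detected region, the sketch below permutes the pixels of a face bounding box with a seeded generator (so the permutation is invertible by the key holder). The OpenCV Haar-cascade detector and all parameters are assumptions standing in for the paper's face/skin detection.

```python
import numpy as np
import cv2

def scramble_region(img, x, y, w, h, seed=42):
    """Randomly permute the pixels inside a rectangular region (invertible with the seed)."""
    rng = np.random.default_rng(seed)
    block = img[y:y + h, x:x + w].reshape(-1, img.shape[2])
    perm = rng.permutation(block.shape[0])
    img[y:y + h, x:x + w] = block[perm].reshape(h, w, img.shape[2])
    return img

# Hypothetical usage: scramble every detected face in one frame
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
frame = cv2.imread("frame.png")
for (x, y, w, h) in cascade.detectMultiScale(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)):
    frame = scramble_region(frame, x, y, w, h)
```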
NASA Astrophysics Data System (ADS)
Stewart, P. A. E.
1987-05-01
Present and projected applications of penetrating radiation techniques to gas turbine research and development are considered. Approaches discussed include the visualization and measurement of metal component movement using high energy X-rays, the measurement of metal temperatures using epithermal neutrons, the measurement of metal stresses using thermal neutron diffraction, and the visualization and measurement of oil and fuel systems using either cold neutron radiography or emitting isotope tomography. By selecting the radiation appropriate to the problem, the desired data can be obtained through imaging or signal acquisition, and the necessary information can then be extracted with digital image processing or knowledge-based image manipulation and pattern recognition.
Improvements and Additions to NASA Near Real-Time Earth Imagery
NASA Technical Reports Server (NTRS)
Cechini, Matthew; Boller, Ryan; Baynes, Kathleen; Schmaltz, Jeffrey; DeLuca, Alexandar; King, Jerome; Thompson, Charles; Roberts, Joe; Rodriguez, Joshua; Gunnoe, Taylor;
2016-01-01
For many years, the NASA Global Imagery Browse Services (GIBS) has worked closely with the Land, Atmosphere Near real-time Capability for EOS (Earth Observing System) (LANCE) system to provide near real-time imagery visualizations of AIRS (Atmospheric Infrared Sounder), MLS (Microwave Limb Sounder), MODIS (Moderate Resolution Imaging Spectroradiometer), OMI (Ozone Monitoring Instrument), and recently VIIRS (Visible Infrared Imaging Radiometer Suite) science parameters. These visualizations are readily available through standard web services and the NASA Worldview client. Access to near real-time imagery provides a critical capability to GIBS and Worldview users. GIBS continues to strengthen its commitment to providing near real-time imagery for end-user applications. The focus of this presentation will be the following completed or planned GIBS system and imagery enhancements relating to near real-time imagery visualization.
Cherenkov Video Imaging Allows for the First Visualization of Radiation Therapy in Real Time
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jarvis, Lesley A., E-mail: Lesley.a.jarvis@hitchcock.org; Norris Cotton Cancer Center at the Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire; Zhang, Rongxiao
Purpose: To determine whether Cherenkov light imaging can visualize radiation therapy in real time during breast radiation therapy. Methods and Materials: An intensified charge-coupled device (CCD) camera was synchronized to the 3.25-μs radiation pulses of the clinical linear accelerator, with the intensifier set to ×100. Cherenkov images were acquired continuously (2.8 frames/s) during fractionated whole breast irradiation, with each frame an accumulation of 100 radiation pulses (approximately 5 monitor units). Results: The first patient images ever created are used to illustrate that Cherenkov emission can be visualized as a video during conditions typical for breast radiation therapy, even with complex treatment plans, mixed energies, and modulated treatment fields. Images were generated correlating to the superficial dose received by the patient and potentially the location of the resulting skin reactions. Major blood vessels are visible in the images, providing the potential to use these as biological landmarks for improved geometric accuracy. The potential for this system to detect radiation therapy misadministrations, which can result from hardware malfunction or patient positioning setup errors during individual fractions, is shown. Conclusions: Cherenkoscopy is a unique method for visualizing surface dose, resulting in real-time quality control. We propose that this system could detect radiation therapy errors in everyday clinical practice at a time when these errors can be corrected, resulting in improved safety and quality of radiation therapy.
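The pulse-synchronized accumulation step (100 pulses per displayed frame, roughly 2.8 frames/s) can be sketched in a few lines; the array layout and names below are illustrative, not the acquisition software.

```python
import numpy as np

def accumulate_frames(pulse_images, pulses_per_frame=100):
    """Sum consecutive pulse-gated exposures into display frames (~2.8 frames/s)."""
    n = len(pulse_images) // pulses_per_frame * pulses_per_frame
    stack = np.asarray(pulse_images[:n], dtype=float)
    # Reshape to (frames, pulses_per_frame, H, W) and sum the pulse axis.
    return stack.reshape(-1, pulses_per_frame, *stack.shape[1:]).sum(axis=1)
```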
NASA Astrophysics Data System (ADS)
Herold, Julia; Abouna, Sylvie; Zhou, Luxian; Pelengaris, Stella; Epstein, David B. A.; Khan, Michael; Nattkemper, Tim W.
2009-02-01
In recent years, bioimaging has turned from qualitative measurements towards a high-throughput and high-content modality, providing multiple variables for each biological sample analyzed. We present a system which combines machine-learning-based semantic image annotation and visual data mining to analyze such new multivariate bioimage data. Machine learning is employed for automatic semantic annotation of regions of interest. The annotation is the prerequisite for a biological object-oriented exploration of the feature space derived from the image variables. With the aid of visual data mining, the obtained data can be explored simultaneously in the image as well as in the feature domain. Especially when little is known about the underlying data, for example when exploring the effects of a drug treatment, visual data mining can greatly aid the process of data evaluation. We demonstrate how our system is used for image evaluation to obtain information relevant to diabetes research and the screening of new anti-diabetes treatments. Cells of the islets of Langerhans and the whole pancreas in pancreas tissue samples are annotated, and object-specific molecular features are extracted from aligned multichannel fluorescence images. These are interactively evaluated for cell type classification in order to determine cell number and mass. Only a few parameters need to be specified, which makes the system usable for non-experts in computing and allows for high-throughput analysis.
Infrared dim and small target detecting and tracking method inspired by Human Visual System
NASA Astrophysics Data System (ADS)
Dong, Xiabin; Huang, Xinsheng; Zheng, Yongbin; Shen, Lurong; Bai, Shengjian
2014-01-01
Detecting and tracking dim and small targets in infrared images and videos is one of the most important techniques in many computer vision applications, such as video surveillance and precise infrared imaging guidance. Recently, more and more algorithms based on the Human Visual System (HVS) have been proposed to detect and track infrared dim and small targets. In general, the HVS involves at least three mechanisms: the contrast mechanism, visual attention, and eye movement. However, most existing algorithms simulate only one of these HVS mechanisms, resulting in many drawbacks. A novel method which combines the three mechanisms of the HVS is proposed in this paper. First, a group of Difference of Gaussians (DOG) filters, which simulate the contrast mechanism, are used to filter the input image. Second, visual attention, simulated by a Gaussian window, is added at a point near the target in order to further enhance the dim small target. This point is named the attention point. Finally, the Proportional-Integral-Derivative (PID) algorithm is introduced for the first time to predict the attention point in the next frame, simulating human eye movement. Experimental results on infrared images with different types of backgrounds demonstrate the high efficiency and accuracy of the proposed method in detecting and tracking dim and small targets.
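A condensed sketch of the three stages named above, DOG filtering for contrast, a Gaussian attention window, and PID prediction of the next attention point, might look as follows; all gains and scales are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_enhance(img, sigmas=((1, 2), (2, 4))):
    """Contrast mechanism: sum of Difference-of-Gaussians responses."""
    return sum(gaussian_filter(img, s1) - gaussian_filter(img, s2) for s1, s2 in sigmas)

def attention_window(shape, center, sigma=15.0):
    """Visual attention: Gaussian weighting around the attention point."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    return np.exp(-((yy - center[0]) ** 2 + (xx - center[1]) ** 2) / (2 * sigma ** 2))

class PIDPredictor:
    """Eye movement: PID prediction of the next frame's attention point."""
    def __init__(self, kp=0.6, ki=0.05, kd=0.2):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = np.zeros(2)
        self.prev_err = np.zeros(2)

    def next_point(self, predicted, detected):
        err = np.asarray(detected, float) - np.asarray(predicted, float)
        self.integral += err
        deriv = err - self.prev_err
        self.prev_err = err
        return predicted + self.kp * err + self.ki * self.integral + self.kd * deriv
```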
Bauer, Corinna M.; Heidary, Gena; Koo, Bang-Bon; Killiany, Ronald J.; Bex, Peter; Merabet, Lotfi B.
2014-01-01
Cortical (cerebral) visual impairment (CVI) is characterized by visual dysfunction associated with damage to the optic radiations and/or visual cortex. Typically it results from pre- or perinatal hypoxic damage to postchiasmal visual structures and pathways. The neuroanatomical basis of this condition remains poorly understood, particularly with regard to how the resulting maldevelopment of visual processing pathways relates to observations in the clinical setting. We report our investigation of 2 young adults diagnosed with CVI and visual dysfunction characterized by difficulties related to visually guided attention and visuospatial processing. Using high-angular-resolution diffusion imaging (HARDI), we characterized and compared their individual white matter projections of the extrageniculo-striate visual system with a normal-sighted control. Compared to a sighted control, both CVI cases revealed a striking reduction in association fibers, including the inferior frontal-occipital fasciculus as well as superior and inferior longitudinal fasciculi. This reduction in fibers associated with the major pathways implicated in visual processing may provide a neuroanatomical basis for the visual dysfunctions observed in these patients. PMID:25087644
RICA: a reliable and image configurable arena for cyborg bumblebee based on CAN bus.
Gong, Fan; Zheng, Nenggan; Xue, Lei; Xu, Kedi; Zheng, Xiaoxiang
2014-01-01
In this paper, we designed a reliable and image-configurable flight arena, RICA, for developing cyborg bumblebees. To meet the spatial and temporal requirements of bumblebees, the Controller Area Network (CAN) bus is adopted to interconnect the LED display modules, ensuring the reliability and real-time performance of the arena system. Easily configurable interfaces on a desktop computer, implemented as Python scripts, are provided to transmit visual patterns to the LED distributor online and configure RICA dynamically. The new arena system will be a powerful tool for investigating the quantitative relationship between visual inputs and induced flight behaviors and will also be helpful for visual-motor research in other related fields.
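A hedged sketch of how a desktop-side Python script might push pattern data onto the CAN bus with the python-can package is shown below; the channel, arbitration IDs, and framing are assumptions, since RICA's actual protocol is not specified in the abstract.

```python
import can  # python-can package

bus = can.interface.Bus(channel="can0", bustype="socketcan")

def send_pattern(module_id, row_bytes):
    """Send one LED module's pattern data as consecutive 8-byte CAN frames."""
    for offset in range(0, len(row_bytes), 8):
        msg = can.Message(arbitration_id=0x100 + module_id,   # hypothetical ID scheme
                          data=row_bytes[offset:offset + 8],
                          is_extended_id=False)
        bus.send(msg)
```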
Infrared image enhancement using H(infinity) bounds for surveillance applications.
Qidwai, Uvais
2008-08-01
In this paper, two algorithms are presented to enhance infrared (IR) images. Using an autoregressive moving average model structure and H(infinity) optimal bounds, the image pixels are mapped from the IR pixel space into the normal optical image space, thus enhancing the IR image for improved visual quality. Although H(infinity)-based system identification algorithms are very common now, they are not well suited to real-time applications owing to their complexity. However, many variants of such algorithms are possible that can overcome this constraint. Two such algorithms have been developed and implemented in this paper. Theoretical and algorithmic results show remarkable enhancement of the acquired images. This will help in enhancing the visual quality of IR images for surveillance applications.
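The paper's H(infinity)-bounded identification is not reproduced here; as a structural stand-in only, the sketch below fits an ARMA-style regressor from IR to optical pixel intensities by ordinary least squares, which shows the model form but not the H(infinity) algorithm.

```python
import numpy as np

def fit_pixel_map(ir_rows, opt_rows, p=2, q=2):
    """Fit y[n] = sum_i a_i*y[n-i] + sum_j b_j*u[n-j] over raster-scanned pixels,
    where u is the IR signal and y the optical signal (least-squares stand-in)."""
    u = ir_rows.ravel().astype(float)
    y = opt_rows.ravel().astype(float)
    n0 = max(p, q)
    cols = [y[n0 - i:len(y) - i] for i in range(1, p + 1)]   # AR terms
    cols += [u[n0 - j:len(u) - j] for j in range(1, q + 1)]  # MA (input) terms
    X = np.stack(cols, axis=1)
    theta, *_ = np.linalg.lstsq(X, y[n0:], rcond=None)
    return theta
```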
The Impact of New Electronic Imaging Systems on U.S. Air Force Visual Information Professionals.
1993-06-01
modernizing the functions left in their control. This process started by converting combat camera assets from 16mm film to Betacam "camcorder" systems. Combat...upgraded to computer-controlled editing with 1-inch helical machines or component-video Betacam equipment. For the base visual information centers, new
ERIC Educational Resources Information Center
Taylor, Roger S.; Grundstrom, Erika D.
2011-01-01
Given that astronomy relies heavily on visual representations, it is especially likely for individuals to assume that instructional materials, such as visual representations of the Earth-Moon system (EMS), would be relatively accurate. However, in our research, we found that images in middle-school textbooks and educational webpages were commonly…
Deliolanis, Nikolaos C; Ale, Angelique; Morscher, Stefan; Burton, Neal C; Schaefer, Karin; Radrich, Karin; Razansky, Daniel; Ntziachristos, Vasilis
2014-10-01
A primary enabling feature of near-infrared fluorescent proteins (FPs) and fluorescent probes is the ability to visualize deeper in tissues than in the visible. The purpose of this work is to determine the optimal visualization method for exploiting the advantages of this novel class of FPs in full-scale pre-clinical molecular imaging studies. Nude mice were stereotactically implanted with near-infrared FP-expressing glioma cells to form brain tumors. The feasibility and performance metrics of FPs were compared between planar epi-illumination and trans-illumination fluorescence imaging, as well as with a hybrid Fluorescence Molecular Tomography (FMT) system combined with X-ray CT and Multispectral Optoacoustic (or Photoacoustic) Tomography (MSOT). It is shown that deep-seated glioma brain tumors can be visualized with both fluorescence and optoacoustic imaging. Fluorescence imaging is straightforward and has good sensitivity; however, it lacks resolution. FMT-XCT can provide an improved, rough resolution of ∼1 mm in deep tissue, while MSOT achieves 0.1 mm resolution in deep tissue with comparable sensitivity. We show imaging capacity that can shift the visualization paradigm in biological discovery. The results are relevant not only to reporter gene imaging, but also stand as a cross-platform comparison for all methods imaging near-infrared fluorescent contrast agents.
Toward image guided robotic surgery: system validation.
Herrell, Stanley D; Kwartowitz, David Morgan; Milhoua, Paul M; Galloway, Robert L
2009-02-01
Navigation for current robotic assisted surgical techniques is primarily accomplished through a stereo pair of laparoscopic camera images. These images provide standard optical visualization of the surface but provide no subsurface information. Image guidance methods allow the visualization of subsurface information to determine the current position in relationship to that of tracked tools. A robotic image guided surgical system was designed and implemented based on our previous laboratory studies. A series of experiments using tissue mimicking phantoms with injected target lesions was performed. The surgeon was asked to resect "tumor" tissue with and without the augmentation of image guidance using the da Vinci robotic surgical system. Resections were performed and compared to an ideal resection based on the radius of the tumor measured from preoperative computerized tomography. A quantity called the resection ratio, that is, the ratio of resected tissue to the ideal resection, was calculated for each of 13 trials and compared. The mean +/- SD resection ratio of procedures augmented with image guidance was smaller than that of procedures without image guidance (3.26 +/- 1.38 vs 9.01 +/- 1.81, p <0.01). Additionally, procedures using image guidance were shorter (average 8 vs 13 minutes). It was demonstrated that there is a benefit from the augmentation of laparoscopic video with updated preoperative images. Incorporating our image guided system into the da Vinci robotic system improved overall tissue resection, as measured by our metric. Adding image guidance to the da Vinci robotic surgery system may result in improvements such as the decreased removal of benign tissue while maintaining an appropriate surgical margin.
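The resection-ratio metric as described reduces to a one-line computation; the sketch below assumes an ideal spherical resection derived from the CT-measured tumor radius (the margin handling is an assumption).

```python
import math

def resection_ratio(resected_volume_cc, tumor_radius_cm, margin_cm=0.0):
    """Ratio of resected tissue volume to the ideal (spherical) resection volume."""
    ideal_cc = (4.0 / 3.0) * math.pi * (tumor_radius_cm + margin_cm) ** 3
    return resected_volume_cc / ideal_cc

# e.g., 38 cc removed around a hypothetical 1.4 cm-radius tumor:
# resection_ratio(38.0, 1.4) -> ~3.3, on the scale of the image-guided group above
```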
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schafer, S.; Nithiananthan, S.; Mirota, D. J.
Purpose: A flat-panel detector based mobile isocentric C-arm for cone-beam CT (CBCT) has been developed to allow intraoperative 3D imaging with sub-millimeter spatial resolution and soft-tissue visibility. Image quality and radiation dose were evaluated in spinal surgery, which commonly relies on lower-performance image-intensifier-based mobile C-arms. Scan protocols were developed for task-specific imaging at minimum dose, in-room exposure was evaluated, and integration of the imaging system with a surgical guidance system was demonstrated in preclinical studies of minimally invasive spine surgery. Methods: Radiation dose was assessed as a function of kilovolt (peak) (80-120 kVp) and milliampere second using thoracic and lumbar spine dosimetry phantoms. In-room radiation exposure was measured throughout the operating room for various CBCT scan protocols. Image quality was assessed using tissue-equivalent inserts in chest and abdomen phantoms to evaluate bone and soft-tissue contrast-to-noise ratio as a function of dose, and task-specific protocols (i.e., visualization of bone or soft tissues) were defined. Results were applied in preclinical studies using a cadaveric torso simulating minimally invasive, transpedicular surgery. Results: Task-specific CBCT protocols identified include: thoracic bone visualization (100 kVp; 60 mAs; 1.8 mGy); lumbar bone visualization (100 kVp; 130 mAs; 3.2 mGy); thoracic soft-tissue visualization (100 kVp; 230 mAs; 4.3 mGy); and lumbar soft-tissue visualization (120 kVp; 460 mAs; 10.6 mGy), each at 0.3 × 0.3 × 0.9 mm³ voxel size. An alternative lower-dose, lower-resolution soft-tissue visualization protocol was identified (100 kVp; 230 mAs; 5.1 mGy) for the lumbar region at 0.3 × 0.3 × 1.5 mm³ voxel size. A half-scan orbit of the C-arm (x-ray tube traversing under the table) was dosimetrically advantageous (prepatient attenuation), with a nonuniform dose distribution (approximately 2× higher at the entrance side than at isocenter, and approximately 3-4× lower at the exit side). The in-room dose (microsievert) per unit scan dose (milligray) ranged from approximately 21 µSv/mGy on average at tableside to approximately 0.1 µSv/mGy at 2.0 m distance from isocenter. All protocols involve surgical staff stepping behind a shield wall for each CBCT scan, imparting essentially zero dose to staff. Protocol implementation in preclinical cadaveric studies demonstrated integration of the C-arm with a navigation system for spine surgery guidance, specifically minimally invasive vertebroplasty, in which the system provided accurate guidance and visualization of needle placement and bone cement distribution. Cumulative dose including multiple intraoperative scans was approximately 11.5 mGy for CBCT-guided thoracic vertebroplasty and approximately 23.2 mGy for lumbar vertebroplasty, with dose to staff at tableside reduced to approximately 1 min of fluoroscopy time (approximately 40-60 µSv), compared to 5-11 min for the conventional approach. Conclusions: Intraoperative CBCT using a high-performance mobile C-arm prototype demonstrates image quality suitable for guidance of spine surgery, with task-specific protocols providing an important basis for minimizing radiation dose while maintaining image quality sufficient for surgical guidance. Images demonstrate a significant advance in spatial resolution and soft-tissue visibility, and CBCT guidance offers the potential to reduce reliance on fluoroscopy, reducing cumulative dose to patient and staff. Integration with a surgical guidance system demonstrates precise tracking and visualization in up-to-date images (alleviating reliance on preoperative images only), including detection of errors or suboptimal surgical outcomes in the operating room.
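The reported in-room exposure factors support a simple back-of-envelope check of staff dose per scan; the example below uses the tableside figure quoted above and is illustrative only.

```python
def staff_dose_uSv(scan_dose_mGy, in_room_uSv_per_mGy):
    """Staff dose for one CBCT scan at a given position, from the measured factor."""
    return scan_dose_mGy * in_room_uSv_per_mGy

# Tableside (~21 uSv/mGy), lumbar bone protocol (3.2 mGy), if staff did not step out:
# staff_dose_uSv(3.2, 21.0) -> ~67 uSv; behind the shield wall the dose is ~zero.
```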
Using digital photo technology to improve visualization of gastric lumen CT images
NASA Astrophysics Data System (ADS)
Pyrgioti, M.; Kyriakidis, A.; Chrysostomou, S.; Panaritis, V.
2006-12-01
To better evaluate gastric lumen CT images, a new method was applied to the images using image processing software. During a 12-month period, 69 patients with various gastric symptoms and 20 volunteers with normal upper gastrointestinal systems underwent computed tomography of the upper gastrointestinal system. Just before the examination, the patients and the normal volunteers underwent preparation with 40 ml of soda water and 10 ml of gastrografin. All the CT images were digitized with an Olympus 3.2-Mpixel digital camera and further processed with image processing software. The per os administration of gastrografin and soda water resulted in distension of the stomach and consequently better visualization of all the anatomic parts. By using image processing software on a PC, all the pathological and normal images of the stomach could be better evaluated diagnostically. We believe that digital photo technology improves the diagnostic capacity not only of CT images but also of MRI and probably many other imaging methods.
Technical parameters for specifying imagery requirements
NASA Technical Reports Server (NTRS)
Coan, Paul P.; Dunnette, Sheri J.
1994-01-01
Providing visual information acquired from remote events to various operators, researchers, and practitioners has become progressively more important as the application of special skills in unfamiliar or hazardous situations increases. To provide an understanding of the technical parameters required to specify imagery, we have identified, defined, and discussed seven salient characteristics of images: spatial resolution, linearity, luminance resolution, spectral discrimination, temporal discrimination, edge definition, and signal-to-noise ratio. We then describe a generalized imaging system and identify how various parts of the system affect the image data. To emphasize the different applications of imagery, we have contrasted the common television system with the significant parameters of a televisual imaging system for technical applications. Finally, we have established a method by which the required visual information can be specified by describing certain technical parameters directly related to the information content of the imagery. This method requires the user to complete a form listing all pertinent data requirements for the imagery.
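The seven characteristics could be captured directly as the fields of such a specification form; the sketch below is one possible encoding, with field names and units as assumptions.

```python
from dataclasses import dataclass

@dataclass
class ImagerySpec:
    """One possible encoding of the seven image-specification parameters."""
    spatial_resolution_lp_mm: float    # resolvable detail, e.g. line pairs per mm
    linearity_pct: float               # bound on geometric distortion
    luminance_resolution_bits: int     # gray-level depth
    spectral_discrimination_nm: float  # narrowest resolvable spectral band
    temporal_discrimination_hz: float  # frame/update rate
    edge_definition_pct: float         # edge response (e.g., 10-90% rise distance)
    snr_db: float                      # signal-to-noise ratio
```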
ERIC Educational Resources Information Center
Gopal, Venkatesh; Klosowiak, Julian L.; Jaeger, Robert; Selimkhanov, Timur; Hartmann, Mitra J. Z.
2008-01-01
We describe the construction and operation of three low-cost schlieren imaging systems that can be fabricated using surplus optics and 80/20, an aluminium extrusion based construction system. Each system has a different optical configuration. The low cost and ease of construction makes these systems highly suitable for high-school and…
Image Retrieval by Color Semantics with Incomplete Knowledge.
ERIC Educational Resources Information Center
Corridoni, Jacopo M.; Del Bimbo, Alberto; Vicario, Enrico
1998-01-01
Presents a system which supports image retrieval by high-level chromatic contents, the sensations that color accordances generate on the observer. Surveys Itten's theory of color semantics and discusses image description and query specification. Presents examples of visual querying. (AEF)
Galeazzi, Juan M.; Navajas, Joaquín; Mender, Bedeho M. W.; Quian Quiroga, Rodrigo; Minini, Loredana; Stringer, Simon M.
2016-01-01
Neurons have been found in the primate brain that respond to objects in specific locations in hand-centered coordinates. A key theoretical challenge is to explain how such hand-centered neuronal responses may develop through visual experience. In this paper we show how hand-centered visual receptive fields can develop using an artificial neural network model, VisNet, of the primate visual system when driven by gaze changes recorded from human test subjects as they completed a jigsaw. A camera mounted on the head captured images of the hand and jigsaw, while eye movements were recorded using an eye-tracking device. This combination of data allowed us to reconstruct the retinal images seen as humans undertook the jigsaw task. These retinal images were then fed into the neural network model during self-organization of its synaptic connectivity using a biologically plausible trace learning rule. A trace learning mechanism encourages neurons in the model to learn to respond to input images that tend to occur in close temporal proximity. In the data recorded from human subjects, we found that the participant’s gaze often shifted through a sequence of locations around a fixed spatial configuration of the hand and one of the jigsaw pieces. In this case, trace learning should bind these retinal images together onto the same subset of output neurons. The simulation results consequently confirmed that some cells learned to respond selectively to the hand and a jigsaw piece in a fixed spatial configuration across different retinal views. PMID:27253452
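The trace learning rule referred to above is commonly written with a decaying postsynaptic trace; the sketch below follows that standard formulation (eta is the trace decay, alpha the learning rate), with the normalization step and all values as illustrative assumptions rather than VisNet's exact code.

```python
import numpy as np

def trace_update(w, x, y, y_trace_prev, eta=0.8, alpha=0.01):
    """One trace-learning step:
    y_trace = (1 - eta) * y + eta * y_trace_prev   (temporal trace of activity)
    dw      = alpha * y_trace * x                  (Hebbian update with the trace)
    so inputs occurring close in time are bound onto the same output neurons."""
    y_trace = (1.0 - eta) * y + eta * y_trace_prev
    w = w + alpha * np.outer(y_trace, x)
    w /= np.linalg.norm(w, axis=1, keepdims=True)  # per-neuron weight normalization
    return w, y_trace
```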
Visualization and Image Analysis of Yeast Cells.
Bagley, Steve
2016-01-01
When converting real-life data via visualization into numbers and then statistics, the whole system needs to be considered so that the conversion from analogue to digital is accurate and repeatable. Here we describe the points to consider when approaching yeast cell visualization, image processing, and analysis of a population by screening techniques.
Door and window image-based measurement using a mobile device
NASA Astrophysics Data System (ADS)
Ma, Guangyao; Janakaraj, Manishankar; Agam, Gady
2015-03-01
We present a system for door and window image-based measurement using an Android mobile device. In this system, a user takes an image of a door or window that needs to be measured and, through interaction, measures specific dimensions of the object. The existing object is removed from the image and a 3D model of a replacement is rendered onto the image. The visualization provides a 3D model with which the user can interact. When tested on a mobile Android platform with an 8MP camera, we obtain an average measurement error of roughly 0.5%. This error rate is stable across a range of view angles, distances from the object, and image resolutions. The main advantages of our mobile device application for image measurement include measuring objects for which physical access is not readily available, documenting in a precise manner the locations in the scene where the measurements were taken, and visualizing a new object with custom selections inside the original view.
Yamasaki, Takao; Maekawa, Toshihiko; Fujita, Takako; Tobimatsu, Shozo
2017-01-01
Individuals with autism spectrum disorder (ASD) show superior performance in processing fine details; however, they often exhibit impairments of gestalt face, global motion perception, and visual attention as well as core social deficits. Increasing evidence has suggested that social deficits in ASD arise from abnormal functional and structural connectivities between and within distributed cortical networks that are recruited during social information processing. Because the human visual system is characterized by a set of parallel, hierarchical, multistage network systems, we hypothesized that the altered connectivity of visual networks contributes to social cognition impairment in ASD. In the present review, we focused on studies of altered connectivity of visual and attention networks in ASD using visual evoked potentials (VEPs), event-related potentials (ERPs), and diffusion tensor imaging (DTI). A series of VEP, ERP, and DTI studies conducted in our laboratory have demonstrated complex alterations (impairment and enhancement) of visual and attention networks in ASD. Recent data have suggested that the atypical visual perception observed in ASD is caused by altered connectivity within parallel visual pathways and attention networks, thereby contributing to the impaired social communication observed in ASD. Therefore, we conclude that the underlying pathophysiological mechanism of ASD constitutes a “connectopathy.” PMID:29170625
High-frequency Ultrasound Imaging of Mouse Cervical Lymph Nodes.
Walk, Elyse L; McLaughlin, Sarah L; Weed, Scott A
2015-07-25
High-frequency ultrasound (HFUS) is widely employed as a non-invasive method for imaging internal anatomic structures in experimental small animal systems. HFUS has the ability to detect structures as small as 30 µm, a property that has been utilized for visualizing superficial lymph nodes in rodents in brightness (B)-mode. Combining power Doppler with B-mode imaging allows for measuring circulatory blood flow within lymph nodes and other organs. While HFUS has been utilized for lymph node imaging in a number of mouse model systems, a detailed protocol describing HFUS imaging and characterization of the cervical lymph nodes in mice has not been reported. Here, we show that HFUS can be adapted to detect and characterize cervical lymph nodes in mice. Combined B-mode and power Doppler imaging can be used to detect increases in blood flow in immunologically-enlarged cervical nodes. We also describe the use of B-mode imaging to conduct fine needle biopsies of cervical lymph nodes to retrieve lymph tissue for histological analysis. Finally, software-aided steps are described to calculate changes in lymph node volume and to visualize changes in lymph node morphology following image reconstruction. The ability to visually monitor changes in cervical lymph node biology over time provides a simple and powerful technique for the non-invasive monitoring of cervical lymph node alterations in preclinical mouse models of oral cavity disease.
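The volume-calculation step can be approximated with the common ellipsoid formula from three orthogonal B-mode diameters; this is an assumption standing in for the vendor's software-aided reconstruction tool.

```python
import math

def node_volume_mm3(length_mm, width_mm, depth_mm):
    """Ellipsoid approximation of lymph node volume from three orthogonal diameters."""
    return (math.pi / 6.0) * length_mm * width_mm * depth_mm

def volume_change_pct(v_before, v_after):
    """Percent change in node volume between two imaging sessions."""
    return 100.0 * (v_after - v_before) / v_before
```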
ERIC Educational Resources Information Center
Eryilmaz, Huseyin
2010-01-01
Today, photography and the visual arts are very important in modern life. Visual images and the visual arts are especially important for mass communication. In modern societies, people must have knowledge of visual media such as photographs, cartoons, drawings, typography, etc. Briefly, people need education on visual…
A database system to support image algorithm evaluation
NASA Technical Reports Server (NTRS)
Lien, Y. E.
1977-01-01
The design of an interactive image database system, IMDB, is given; the system allows the user to create, retrieve, store, display, and manipulate images through the facility of a high-level, interactive image query (IQ) language. The query language IQ permits the user to define false color functions, pixel value transformations, overlay functions, zoom functions, and windows. The user manipulates images through generic functions and can direct images to display devices for visual and qualitative analysis. Image histograms and pixel value distributions can also be computed to obtain a quantitative analysis of images.
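Two of the generic functions an IQ-style language exposes, a pixel value transformation and a pixel value distribution, have direct array analogues; the numpy stand-ins below are illustrative, not the IMDB implementation.

```python
import numpy as np

def pixel_transform(img, lut):
    """Apply a pixel value transformation via a 256-entry lookup table (uint8 input)."""
    return lut[img]

def pixel_distribution(img, bins=256):
    """Pixel value distribution (histogram) for quantitative analysis."""
    return np.histogram(img, bins=bins, range=(0, bins))[0]

# Example: a contrast-stretch LUT, one candidate pixel value transformation
lut = np.clip((np.arange(256, dtype=float) - 50.0) * 255.0 / 150.0, 0, 255).astype(np.uint8)
```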
Sensing Super-Position: Human Sensing Beyond the Visual Spectrum
NASA Technical Reports Server (NTRS)
Maluf, David A.; Schipper, John F.
2007-01-01
The coming decade of fast, cheap, and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome limitations of the human senses. This paper addresses the technical feasibility of augmenting human vision through Sensing Super-position by mixing natural human sensing. The current implementation of the device translates visual and other passive or active sensory instruments into sounds, which become relevant when the visual resolution is insufficient for very difficult and particular sensing tasks. A successful Sensing Super-position meets many human and pilot-vehicle system requirements. The system can be further developed into a cheap, portable, and low-power form, taking into account the limited capabilities of the human user as well as the typical characteristics of his dynamic environment. The system operates in real time, giving the desired information for the particular augmented sensing tasks. The Sensing Super-position device increases perceived image resolution via an auditory representation as well as the visual representation. Auditory mapping is performed to distribute an image in time. The three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g., histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns. The known capabilities of the human hearing system to learn and understand complicated auditory patterns provided the basic motivation for developing an image-to-sound mapping system. The human brain is superior to most existing computer systems in rapidly extracting relevant information from blurred, noisy, and redundant images. From a theoretical viewpoint, this means that the available bandwidth is not exploited in an optimal way. While image-processing techniques can manipulate, condense, and focus the information (e.g., Fourier transforms), keeping the mapping as direct and simple as possible might also reduce the risk of accidentally filtering out important clues. After all, a perfectly non-redundant sound representation in particular is prone to loss of relevant information in the imperfect human hearing system. Also, a complicated non-redundant image-to-sound mapping may well be far more difficult to learn and comprehend than a straightforward mapping, while the mapping system would increase in complexity and cost. This work demonstrates some basic information processing for optimal information capture for head-mounted systems.
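One common realization of the mapping described (image columns scanned over time, rows assigned to frequencies, brightness to amplitude) can be sketched compactly; the frequency range, sweep duration, and tone synthesis below are illustrative assumptions.

```python
import numpy as np

def image_to_sound(img, fs=44100, duration=1.0, fmin=200.0, fmax=8000.0):
    """Map a grayscale image (rows x cols, 0-255) to audio: columns sweep left to
    right over `duration` seconds; each row drives one tone; brightness = amplitude."""
    rows, cols = img.shape
    freqs = np.geomspace(fmax, fmin, rows)        # top of the image = high pitch
    col_len = int(fs * duration / cols)
    t = np.arange(col_len) / fs
    audio = []
    for c in range(cols):
        amps = img[:, c].astype(float) / 255.0
        tones = amps[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)
        audio.append(tones.sum(axis=0))
    out = np.concatenate(audio)
    return out / (np.abs(out).max() + 1e-9)       # normalize to [-1, 1]
```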
Creating a classification of image types in the medical literature for visual categorization
NASA Astrophysics Data System (ADS)
Müller, Henning; Kalpathy-Cramer, Jayashree; Demner-Fushman, Dina; Antani, Sameer
2012-02-01
Content-based image retrieval (CBIR) from specialized collections has often been proposed for use in such areas as diagnostic aid, clinical decision support, and teaching. The visual retrieval from broad image collections such as teaching files, the medical literature or web images, by contrast, has not yet reached a high maturity level compared to textual information retrieval. Visual image classification into a relatively small number of classes (20-100), on the other hand, has been shown to deliver good results in several benchmarks. It is, however, currently underused as a basic technology for retrieval tasks, for example, to limit the search space. Most classification schemes for medical images are focused on specific areas and consider mainly the medical image types (modalities), imaged anatomy, and view, and merge them into a single descriptor or classification hierarchy. Furthermore, they often ignore other important image types such as biological images, statistical figures, flowcharts, and diagrams that frequently occur in the biomedical literature. Most of the current classifications have also been created for radiology images, which are not the only types to be taken into account. With Open Access becoming increasingly widespread, particularly in medicine, images from the biomedical literature are more easily available for use. Visual information from these images and knowledge that an image is of a specific type or medical modality could enrich retrieval. This enrichment is hampered by the lack of a commonly agreed image classification scheme. This paper presents a hierarchy for classification of biomedical illustrations with the goal of using it for visual classification and thus as a basis for retrieval. The proposed hierarchy is based on relevant parts of existing terminologies, such as the IRMA-code (Image Retrieval in Medical Applications), ad hoc classifications and hierarchies used in imageCLEF (Image retrieval task at the Cross-Language Evaluation Forum) and NLM's (National Library of Medicine) OpenI. Furthermore, mappings to NLM's MeSH (Medical Subject Headings), RSNA's RadLex (Radiological Society of North America, Radiology Lexicon), and the IRMA code are also attempted for relevant image types. Advantages derived from such hierarchical classification for medical image retrieval are being evaluated through benchmarks such as imageCLEF, and R&D systems such as NLM's OpenI. The goal is to extend this hierarchy progressively and (through adding image types occurring in the biomedical literature) to have a terminology for visual image classification based on image types distinguishable by visual means and occurring in the medical open access literature.
Alor-Hernández, Giner; Pérez-Gallardo, Yuliana; Posada-Gómez, Rubén; Cortes-Robles, Guillermo; Rodríguez-González, Alejandro; Aguilar-Laserre, Alberto A
2012-09-01
Nowadays, traditional search engines such as Google, Yahoo and Bing facilitate the retrieval of information in the format of images, but the results are not always useful for the users. This is mainly due to two problems: (1) the semantic keywords are not taken into consideration and (2) it is not always possible to establish a query using the image features. This issue has been covered in different domains in order to develop content-based image retrieval (CBIR) systems. The expert community has focussed their attention on the healthcare domain, where a lot of visual information for medical analysis is available. This paper provides a solution called iPixel Visual Search Engine, which involves semantics and content issues in order to search for digitized mammograms. iPixel offers the possibility of retrieving mammogram features using collective intelligence and implementing a CBIR algorithm. Our proposal compares not only features with similar semantic meaning, but also visual features. In this sense, the comparisons are made in different ways: by the number of regions per image, by maximum and minimum size of regions per image and by average intensity level of each region. iPixel Visual Search Engine supports the medical community in differential diagnoses related to the diseases of the breast. The iPixel Visual Search Engine has been validated by experts in the healthcare domain, such as radiologists, in addition to experts in digital image analysis.
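The three region-level comparisons listed above (region count, extreme region sizes, average region intensity) can be extracted with standard tools; the scikit-image-based sketch below is an assumption about how such features might be computed, not iPixel's code.

```python
import numpy as np
from skimage.measure import label, regionprops

def region_features(mask, intensity_image):
    """Per-image region features from a binary segmentation mask of a mammogram."""
    regions = regionprops(label(mask), intensity_image=intensity_image)
    areas = [r.area for r in regions]
    return {
        "num_regions": len(regions),
        "max_region_size": max(areas, default=0),
        "min_region_size": min(areas, default=0),
        "mean_region_intensity": float(np.mean([r.mean_intensity for r in regions]))
                                 if regions else 0.0,
    }
```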
Temporal and spatial localization of prediction-error signals in the visual brain.
Johnston, Patrick; Robinson, Jonathan; Kokkinakis, Athanasios; Ridgeway, Samuel; Simpson, Michael; Johnson, Sam; Kaufman, Jordy; Young, Andrew W
2017-04-01
It has been suggested that the brain pre-empts changes in the environment by generating predictions, although real-time electrophysiological evidence of prediction violations in the domain of visual perception remains elusive. In a series of experiments we showed participants sequences of images that followed a predictable implied sequence or whose final image violated the implied sequence. Through careful design we were able to use the same final image transitions across predictable and unpredictable conditions, ensuring that any differences in neural responses were due only to preceding context and not to the images themselves. EEG and MEG recordings showed that early (N170) and mid-latency (N300) visual evoked potentials were robustly modulated by images that violated the implied sequence across a range of types of image change (expression deformations, rigid rotations, and visual field location). This modulation occurred irrespective of stimulus object category. Although the stimuli were static images, MEG source reconstruction of the early latency signal (N/M170) localized expectancy violation signals to brain areas associated with motion perception. Our findings suggest that the N/M170 can index mismatches between predicted and actual visual inputs in a system that predicts trajectories based on ongoing context. More generally, we suggest that the N/M170 may reflect a "family" of brain signals generated across widespread regions of the visual brain indexing the resolution of top-down influences and incoming sensory data. This has important implications for understanding the N/M170 and investigating how the brain represents context to generate perceptual predictions.
Recent advances in near-infrared fluorescence-guided imaging surgery using indocyanine green.
Namikawa, Tsutomu; Sato, Takayuki; Hanazaki, Kazuhiro
2015-12-01
Near-infrared (NIR) fluorescence imaging has better tissue penetration, allowing for the effective rejection of excitation light and detection deep inside organs. Indocyanine green (ICG) generates NIR fluorescence after illumination by an NIR ray, enabling real-time intraoperative visualization of superficial lymphatic channels and vessels transcutaneously. The HyperEye Medical System (HEMS) can simultaneously detect NIR rays under room light to provide color imaging, which enables visualization under bright light. Thus, NIR fluorescence imaging using ICG can provide excellent diagnostic accuracy in detecting sentinel lymph nodes in cancer and microvascular circulation in various ischemic diseases, assisting with intraoperative decision making. Including HEMS in this system could further improve sentinel lymph node mapping and intraoperative identification of the blood supply in reconstructive organs and ischemic diseases, making it more attractive than conventional imaging. Moreover, the development of new laparoscopic imaging systems equipped with NIR will allow fluorescence-guided surgery in a minimally invasive setting. Future directions, including the conjugation of NIR fluorophores to target specific cancer markers, might become a realistic technology with diagnostic and therapeutic benefits.
Optics of wide-angle panoramic viewing system-assisted vitreous surgery.
Chalam, Kakarla V; Shah, Vinay A
2004-01-01
The purpose of this article is to describe the optics of a contact wide-angle lens system with a stereo reinverter for vitreous surgery. A panoramic viewing system is made up of two components: an indirect ophthalmoscopy lens system for fundus image viewing, which is placed on the patient's cornea as a contact lens, and a separate removable prism system for reinversion of the image, mounted on the microscope above the zooming system. The system provides a 104-degree field of view in a phakic emmetropic eye with minification, which can be magnified by the operating microscope. It permits a binocular stereoptic view even through a pupil as small as 3 mm. In an air-filled phakic eye, the field of view increases to approximately 130 degrees. The obtained image of the patient's fundus is reinverted to form a true, erect, stereoscopic image by the reinversion system. In conclusion, this system permits a wide-angle panoramic view of the surgical field. The contact lens neutralizes the optical irregularities of the corneal surface and allows improved visualization in eyes with irregular astigmatism induced by corneal scars. Excellent visualization is achieved in complex clinical situations such as miotic pupils, lenticular opacities, and air-filled phakic eyes.
Dual function seal: visualized digital signature for electronic medical record systems.
Yu, Yao-Chang; Hou, Ting-Wei; Chiang, Tzu-Chiang
2012-10-01
Digital signature is an important cryptographic technology used to provide integrity and non-repudiation in electronic medical record systems (EMRS), and it is required by law. However, digital signatures normally appear in forms unrecognizable to medical staff, which may reduce the trust of staff accustomed to handwritten signatures or seals. Therefore, in this paper we propose a dual function seal to extend user trust from a traditional seal to a digital signature. The proposed dual function seal is a prototype that combines the traditional seal and the digital seal. With this prototype, medical personnel can not only put a seal on paper but also generate a visualized digital signature for electronic medical records. Medical personnel can then look at the visualized digital signature and know directly which medical personnel generated it, just as with a traditional seal. The discrete wavelet transform (DWT) is used as an image processing method to generate the visualized digital signature, and the peak signal-to-noise ratio (PSNR) is calculated to verify that the distortions of all converted images are beyond human recognition; the results for our converted images range from 70 dB to 80 dB. Signature recoverability is also tested to ensure that the visualized digital signature is verifiable. A simulated EMRS is implemented to show how the visualized digital signature can be integrated into an EMRS.
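The two measurable ingredients named above, a DWT-domain embedding and the PSNR check, can be sketched with PyWavelets and numpy; the subband choice and embedding strength are assumptions, not the paper's scheme.

```python
import numpy as np
import pywt

def embed_seal(cover, seal, alpha=0.05):
    """Add a scaled seal image into the approximation subband of a one-level DWT.
    Assumes `seal` is at least as large as the approximation subband."""
    cA, (cH, cV, cD) = pywt.dwt2(cover.astype(float), "haar")
    seal_small = seal[:cA.shape[0], :cA.shape[1]].astype(float)
    return pywt.idwt2((cA + alpha * seal_small, (cH, cV, cD)), "haar")

def psnr(original, processed, peak=255.0):
    """Peak signal-to-noise ratio in dB between the cover and the marked image."""
    mse = np.mean((original.astype(float) - processed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```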
Real-time simulation and visualization of volumetric brain deformation for image-guided neurosurgery
NASA Astrophysics Data System (ADS)
Ferrant, Matthieu; Nabavi, Arya; Macq, Benoit M. M.; Kikinis, Ron; Warfield, Simon K.
2001-05-01
During neurosurgery, the challenge for the neurosurgeon is to remove as much of a tumor as possible without destroying healthy tissue. This can be difficult because healthy and diseased tissue can have the same visual appearance. For this reason, and because the surgeon cannot see underneath the brain surface, image-guided neurosurgery systems are increasingly being used. However, during surgery, deformation of the brain occurs (due to brain shift and tumor resection), causing errors in the surgical planning with respect to preoperative imaging. In our previous work, we developed software for capturing the deformation of the brain during neurosurgery. The software also allows preoperative data to be updated according to the intraoperative imaging so as to reflect the shape changes of the brain during surgery. Our goal in this paper was to rapidly visualize and characterize this deformation over the course of surgery with appropriate tools. Therefore, we developed tools allowing the doctor to visualize (in 2D and 3D) the deformations, as well as the stress tensors characterizing the deformation, along with the updated preoperative and intraoperative imaging during the course of surgery. Such tools significantly add to the value of intraoperative imaging and hence could improve surgical outcomes.
Gorczynska, Iwona; Migacz, Justin V.; Zawadzki, Robert J.; Capps, Arlie G.; Werner, John S.
2016-01-01
We compared the performance of three OCT angiography (OCTA) methods: speckle variance, amplitude decorrelation, and phase variance for imaging of the human retina and choroid. Two averaging methods, split spectrum and volume averaging, were compared to assess the quality of the OCTA vascular images. All data were acquired using a swept-source OCT system at 1040 nm central wavelength, operating at 100,000 A-scans/s. We performed a quantitative comparison using a contrast-to-noise ratio (CNR) metric to assess the capability of the three methods to visualize the choriocapillaris layer. For evaluation of static tissue noise suppression in OCTA images, we proposed to calculate the CNR between the photoreceptor/RPE complex and the choriocapillaris layer. Finally, we demonstrated that implementation of intensity-based OCT imaging and OCT angiography methods allows for visualization of the retinal and choroidal vascular layers known from anatomic studies of retinal preparations. OCT projection imaging of data flattened to selected retinal layers was implemented to visualize the retinal and choroidal vasculature. User-guided vessel tracing was applied to segment the retinal vasculature. The results were visualized in the form of a skeletonized 3D model. PMID:27231598
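The CNR between two regions of interest, such as the photoreceptor/RPE complex and the choriocapillaris, is commonly defined as below; the exact definition used in the paper may differ.

```python
import numpy as np

def cnr(roi_a, roi_b):
    """Contrast-to-noise ratio: |mean_a - mean_b| / sqrt(var_a + var_b),
    computed over pixel arrays from the two regions of interest."""
    return abs(roi_a.mean() - roi_b.mean()) / np.sqrt(roi_a.var() + roi_b.var())
```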
Carrasco-Zevallos, O. M.; Keller, B.; Viehland, C.; Shen, L.; Waterman, G.; Todorich, B.; Shieh, C.; Hahn, P.; Farsiu, S.; Kuo, A. N.; Toth, C. A.; Izatt, J. A.
2016-01-01
Minimally-invasive microsurgery has resulted in improved outcomes for patients. However, operating through a microscope limits depth perception and fixes the visual perspective, which result in a steep learning curve to achieve microsurgical proficiency. We introduce a surgical imaging system employing four-dimensional (live volumetric imaging through time) microscope-integrated optical coherence tomography (4D MIOCT) capable of imaging at up to 10 volumes per second to visualize human microsurgery. A custom stereoscopic heads-up display provides real-time interactive volumetric feedback to the surgeon. We report that 4D MIOCT enhanced suturing accuracy and control of instrument positioning in mock surgical trials involving 17 ophthalmic surgeons. Additionally, 4D MIOCT imaging was performed in 48 human eye surgeries and was demonstrated to successfully visualize the pathology of interest in concordance with preoperative diagnosis in 93% of retinal surgeries and the surgical site of interest in 100% of anterior segment surgeries. In vivo 4D MIOCT imaging revealed sub-surface pathologic structures and instrument-induced lesions that were invisible through the operating microscope during standard surgical maneuvers. In select cases, 4D MIOCT guidance was necessary to resolve such lesions and prevent post-operative complications. Our novel surgical visualization platform achieves surgeon-interactive 4D visualization of live surgery which could expand the surgeon’s capabilities. PMID:27538478
Visualization index for image-enabled medical records
NASA Astrophysics Data System (ADS)
Dong, Wenjie; Zheng, Weilin; Sun, Jianyong; Zhang, Jianguo
2011-03-01
With the widespread use of healthcare information technology in hospitals, patients' medical records are becoming increasingly complex. To transform text- or image-based medical information into an easily understandable and acceptable form, we designed and developed an innovative indexing method that assigns an anatomical 3D structure object to every patient to visually store indexes of the patient's basic information, historical examined image information, and RIS report information. When a doctor wants to review patient historical records, he or she can first load the anatomical structure object and then view the 3D index of this object using a digital human model toolkit. This prototype system helps doctors easily and visually obtain the complete historical healthcare status of patients, including large amounts of medical data, and quickly locate detailed information, including both reports and images, from medical information systems. In this way, doctors can save time that may be better used to understand information, obtain a more comprehensive understanding of their patients' situations, and provide better healthcare services to patients.
NASA Astrophysics Data System (ADS)
Al Hadhrami, Tawfik; Wang, Qi; Grecos, Christos
2012-06-01
When natural disasters or other large-scale incidents occur, obtaining accurate and timely information on the developing situation is vital to effective disaster recovery operations. High-quality video streams and high-resolution images, if available in real time, would provide an invaluable source of current situation reports to the incident management team. Meanwhile, a disaster often causes significant damage to the communications infrastructure. Therefore, another essential requirement for disaster management is the ability to rapidly deploy a flexible incident area communication network. Such a network would facilitate the transmission of real-time video streams and still images from the disrupted area to remote command and control locations. In this paper, a comprehensive end-to-end video/image transmission system between an incident area and a remote control centre is proposed and implemented, and its performance is experimentally investigated. In this study, a hybrid multi-segment communication network is designed that seamlessly integrates terrestrial wireless mesh networks (WMNs), distributed wireless visual sensor networks, an airborne platform with video camera balloons, and a Digital Video Broadcasting-Satellite (DVB-S) system. By carefully integrating all of these rapidly deployable, interworking and collaborative networking technologies, we can fully exploit the joint benefits provided by WMNs, WSNs, balloon camera networks and DVB-S for real-time video streaming and image delivery in emergency situations among the disaster-hit area, the remote control centre and the rescue teams in the field. The whole proposed system is implemented in a proven simulator. Through extensive simulations, the real-time visual communication performance of this integrated system has been numerically evaluated, providing a more in-depth understanding of how to support high-quality visual communications in such a demanding context.
QR images: optimized image embedding in QR codes.
Garateguy, Gonzalo J; Arce, Gonzalo R; Lau, Daniel L; Villarreal, Ofelia P
2014-07-01
This paper introduces the concept of QR images, an automatic method to embed QR codes into color images with a bounded probability of detection error. These embeddings are compatible with standard decoding applications and can be applied to any color image with full area coverage. The QR information bits are encoded into the luminance values of the image, taking advantage of the immunity of QR readers against local luminance disturbances. To mitigate the visual distortion of the QR image, the algorithm utilizes halftoning masks for the selection of modified pixels and nonlinear programming techniques to locally optimize luminance levels. A tractable model for the probability of error is developed, and models of the human visual system are considered in the quality metric used to optimize the luminance levels of the QR image. To minimize processing time, the proposed optimization techniques consider the mechanics of a common binarization method and are designed to be amenable to parallel implementations. Experimental results show the graceful degradation of the decoding rate and the perceptual quality as a function of the embedding parameters. A visual comparison between the proposed and existing methods is presented.
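A much-simplified sketch of the core idea, nudging luminance at QR module centers toward the bit value while leaving surrounding pixels for visual quality, is given below; the thresholds, strength bound, and center-only modification are assumptions that omit the paper's halftoning masks and nonlinear optimization.

```python
import numpy as np

def embed_qr_luminance(img_y, qr_bits, module_px=8, strength=40):
    """img_y: HxW luminance plane (0-255); qr_bits: MxM {0,1} matrix (1 = dark module).
    Assumes img_y covers at least M*module_px pixels in each dimension."""
    out = img_y.astype(float).copy()
    m = qr_bits.shape[0]
    for i in range(m):
        for j in range(m):
            cy = i * module_px + module_px // 2
            cx = j * module_px + module_px // 2
            target = 60 if qr_bits[i, j] else 200   # dark vs light module center
            sl = (slice(cy - 1, cy + 2), slice(cx - 1, cx + 2))
            out[sl] += np.clip(target - out[sl], -strength, strength)
    return np.clip(out, 0, 255).astype(np.uint8)
```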
Imaging the square of the correlated two-electron wave function of a hydrogen molecule.
Waitz, M; Bello, R Y; Metz, D; Lower, J; Trinter, F; Schober, C; Keiling, M; Lenz, U; Pitzer, M; Mertens, K; Martins, M; Viefhaus, J; Klumpp, S; Weber, T; Schmidt, L Ph H; Williams, J B; Schöffler, M S; Serov, V V; Kheifets, A S; Argenti, L; Palacios, A; Martín, F; Jahnke, T; Dörner, R
2017-12-22
The toolbox for imaging molecules is well-equipped today. Some techniques visualize the geometrical structure, others the electron density or electron orbitals. Molecules are many-body systems for which the correlation between the constituents is decisive and the spatial and the momentum distribution of one electron depends on those of the other electrons and the nuclei. Such correlations have escaped direct observation by imaging techniques so far. Here, we implement an imaging scheme which visualizes correlations between electrons by coincident detection of the reaction fragments after high energy photofragmentation. With this technique, we examine the H2 two-electron wave function in which electron-electron correlation beyond the mean-field level is prominent. We visualize the dependence of the wave function on the internuclear distance. High energy photoelectrons are shown to be a powerful tool for molecular imaging. Our study paves the way for future time-resolved correlation imaging at FELs and laser-based X-ray sources.
Cross-Domain Shoe Retrieval with a Semantic Hierarchy of Attribute Classification Network.
Zhan, Huijing; Shi, Boxin; Kot, Alex C
2017-08-04
Cross-domain shoe image retrieval is a challenging problem, because the query photo from the street domain (daily life scenario) and the reference photo in the online domain (online shop images) have significant visual differences due to viewpoint and scale variation, self-occlusion, and cluttered backgrounds. This paper proposes the Semantic Hierarchy Of attributE Convolutional Neural Network (SHOE-CNN) with a three-level feature representation for discriminative shoe feature expression and efficient retrieval. The SHOE-CNN, with its newly designed loss function, systematically merges semantic attributes of closer visual appearance to prevent shoe images with obvious visual differences from being confused with each other; the features extracted at the image, region, and part levels effectively match shoe images across different domains. We collect a large-scale shoe dataset composed of 14,341 street-domain and 12,652 corresponding online-domain images with fine-grained attributes to train our network and evaluate our system. The top-20 retrieval accuracy improves significantly over a solution using pre-trained CNN features.
A Markov chain model for image ranking system in social networks
NASA Astrophysics Data System (ADS)
Zin, Thi Thi; Tin, Pyke; Toriu, Takashi; Hama, Hiromitsu
2014-03-01
In today's world, many kinds of networks exist, including social, technological, and business networks. These networks are similar in their distributions, and all are continuously growing and expanding at large scale. Among them, social networks such as Facebook, Twitter, and Flickr provide a powerful abstraction of the structure and dynamics of diverse kinds of interpersonal connection and interaction. Generally, social network content is created and consumed under the influence of all the different social navigation paths that lead to it. Therefore, identifying important, user-relevant refined structures such as visual information or communities has become a major factor in modern decision making. Moreover, traditional information ranking systems cannot succeed here because they fail to take into account the properties of navigation paths driven by social connections. In this paper, we propose a novel image ranking system for social networks that uses social data relational graphs from a social media platform jointly with visual data to improve the relevance between returned images and user intentions (i.e., social relevance). Specifically, we propose a Markov chain based Social-Visual Ranking algorithm that takes social relevance into account. Extensive experiments demonstrate the significance and effectiveness of the proposed social-visual ranking method.
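The abstract does not give the algorithm's details, but a Markov chain ranking of this kind is typically computed as the stationary distribution of a transition matrix that blends social-graph links with visual similarity. Below is a minimal power-iteration sketch under that assumption; the blending weight and both matrices are illustrative, not the paper's.

```python
import numpy as np

def social_visual_rank(social: np.ndarray, visual: np.ndarray,
                       alpha: float = 0.6, damping: float = 0.85,
                       iters: int = 100) -> np.ndarray:
    """Rank n images by the stationary distribution of a blended Markov chain.

    social: n x n adjacency weights from the social relational graph
    visual: n x n visual-similarity weights between images
    alpha:  blend between social and visual transitions (assumed value)
    """
    blend = alpha * social + (1 - alpha) * visual
    # Row-normalize into a stochastic transition matrix.
    p = blend / np.maximum(blend.sum(axis=1, keepdims=True), 1e-12)
    n = p.shape[0]
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):                      # power iteration
        rank = damping * rank @ p + (1 - damping) / n
    return rank / rank.sum()

social = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
visual = np.array([[0, .2, .9], [.2, 0, .1], [.9, .1, 0]], dtype=float)
print(social_visual_rank(social, visual))      # highest score = top image
```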
NASA Technical Reports Server (NTRS)
Garbeff, Theodore J., II; Baerny, Jennifer K.
2017-01-01
The following details recent efforts undertaken at the NASA Ames Unitary Plan wind tunnels to design and deploy an advanced, production-level infrared (IR) flow visualization data system. Highly sensitive IR cameras, coupled with in-line image processing, have enabled the visualization of wind tunnel model surface flow features as they develop in real time. Boundary layer transition, shock impingement, junction flow, vortex dynamics, and buffet are routinely observed in both transonic and supersonic flow regimes, all without the need for dedicated ramps in test section total temperature. Successful measurements have been performed on wing-body sting-mounted test articles, semi-span floor-mounted aircraft models, and sting-mounted launch vehicle configurations. The unique requirements of imaging in production wind tunnel testing have led to advancements in the deployment of advanced IR cameras in a harsh test environment, robust data acquisition, storage, and workflow, real-time image processing algorithms, and the evaluation of optimal surface treatments. The addition of a multi-camera IR flow visualization data system to the Ames UPWT has demonstrated itself to be a valuable analysis tool in the study of new and old aircraft/launch vehicle aerodynamics and has provided new insight for the evaluation of computational techniques.
Multispectral image analysis for object recognition and classification
NASA Astrophysics Data System (ADS)
Viau, C. R.; Payeur, P.; Cretu, A.-M.
2016-05-01
Computer and machine vision applications are used in numerous fields to analyze static and dynamic imagery in order to assist or automate decision-making processes. Advancements in sensor technologies now make it possible to capture and visualize imagery at various wavelengths (or bands) of the electromagnetic spectrum. Multispectral imaging has countless applications in various fields including (but not limited to) security, defense, space, medical, manufacturing and archeology. The development of advanced algorithms to process and extract salient information from the imagery is a critical component of the overall system performance. The fundamental objective of this research project was to investigate the benefits of combining imagery from the visual and thermal bands of the electromagnetic spectrum to improve the recognition rates and accuracy of commonly found objects in an office setting. A multispectral dataset (visual and thermal) was captured and features from the visual and thermal images were extracted and used to train support vector machine (SVM) classifiers. The SVM's class prediction ability was evaluated separately on the visual, thermal and multispectral testing datasets.
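As a concrete illustration of the final step, the sketch below trains an SVM on concatenated visual and thermal feature vectors and compares it with single-band classifiers; the feature dimensions and data are synthetic placeholders, not the paper's dataset.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 300
visual_feats = rng.normal(size=(n, 64))        # placeholder visual features
thermal_feats = rng.normal(size=(n, 32))       # placeholder thermal features
labels = rng.integers(0, 4, size=n)            # e.g. 4 office object classes

for name, X in [("visual", visual_feats),
                ("thermal", thermal_feats),
                ("multispectral", np.hstack([visual_feats, thermal_feats]))]:
    Xtr, Xte, ytr, yte = train_test_split(X, labels, random_state=0)
    clf = SVC(kernel="rbf", C=1.0).fit(Xtr, ytr)     # one SVM per band set
    print(name, accuracy_score(yte, clf.predict(Xte)))
```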
NASA Astrophysics Data System (ADS)
Namazi, Hamidreza; Kulish, Vladimir V.; Akrami, Amin
2016-05-01
One of the major challenges in vision research is to analyze the effect of visual stimuli on human vision. However, no relationship has yet been discovered between the structure of a visual stimulus and the structure of fixational eye movements. This study reveals the plasticity of human fixational eye movements in relation to 'complex' visual stimuli. We demonstrated that the fractal temporal structure of the visual dynamics shifts towards the fractal dynamics of the visual stimulus (image). The results showed that images with higher complexity (higher fractality) cause fixational eye movements with lower fractality. Considering the brain, the main part of the nervous system engaged in eye movements, we also analyzed the electroencephalogram (EEG) signal recorded during fixation. We found a coupling between the fractality of the image, the EEG, and the fixational eye movements. The capability observed in this research can be further investigated and applied to the treatment of different vision disorders.
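The abstract does not specify how the 'fractality' of a time series (eye position or EEG) is measured; a common choice in such analyses is Higuchi's fractal dimension, sketched below as one plausible way to quantify it.

```python
import numpy as np

def higuchi_fd(x: np.ndarray, kmax: int = 10) -> float:
    """Estimate the fractal dimension of a 1-D signal (Higuchi's method)."""
    n = len(x)
    ks = range(1, kmax + 1)
    lengths = []
    for k in ks:
        lk = []
        for m in range(k):
            idx = np.arange(m, n, k)             # subsampled series
            if len(idx) < 2:
                continue
            diff = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / ((len(idx) - 1) * k)  # Higuchi normalization
            lk.append(diff * norm / k)
        lengths.append(np.mean(lk))
    # FD is the slope of log(L(k)) against log(1/k).
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(list(ks))),
                          np.log(lengths), 1)
    return slope

rng = np.random.default_rng(0)
print(higuchi_fd(np.cumsum(rng.normal(size=2000))))  # ~1.5 for Brownian path
```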
Methods for Dichoptic Stimulus Presentation in Functional Magnetic Resonance Imaging - A Review
Choubey, Bhaskar; Jurcoane, Alina; Muckli, Lars; Sireteanu, Ruxandra
2009-01-01
Dichoptic stimuli (different stimuli displayed to each eye) are increasingly being used in functional brain imaging experiments using visual stimulation. These studies include investigations into binocular rivalry, interocular information transfer, and three-dimensional depth perception, as well as impairments of the visual system such as amblyopia and stereodeficiency. In this paper, we review various approaches to displaying dichoptic stimuli used in functional magnetic resonance imaging experiments. These include traditional approaches using filters (red-green, red-blue, polarizing) with optical assemblies, as well as newer approaches using bi-screen goggles. PMID:19526076
Background: Preflight Screening, In-flight Capabilities, and Postflight Testing
NASA Technical Reports Server (NTRS)
Gibson, Charles Robert; Duncan, James
2009-01-01
We currently have limited in-flight capabilities on board the International Space Station for performing an internal ocular health assessment: visual acuity, direct ophthalmoscope, ultrasound, and tonometry (Tonopen). Recommendations for minimal in-flight capabilities: Retinal imaging - provide in-flight capability for the visual monitoring of ocular health (specifically, imaging of the retina and optic nerve head) with the capability of downlinking video/still images. Tonometry - provide more accurate and reliable in-flight capability for measuring intraocular pressure. Ultrasound - explore capabilities of the current on-board system for monitoring ocular health.
NASA Astrophysics Data System (ADS)
Zheng, Guoyan
2007-03-01
Surgical navigation systems visualize the positions and orientations of surgical instruments and implants as graphical overlays onto a medical image of the operated anatomy on a computer monitor. Orthopaedic surgical navigation systems can be categorized according to the image modalities used for the visualization of the surgical action. In so-called CT-based systems or 'surgeon-defined anatomy' based systems, where a 3D volume or surface representation of the operated anatomy can be constructed from preoperatively acquired tomographic data or through intraoperatively digitized anatomical landmarks, photorealistic rendering of the surgical action has been identified to greatly improve the usability of these navigation systems. However, this may not hold true when the virtual representation of surgical instruments and implants is superimposed onto 2D projection images in a fluoroscopy-based navigation system, due to the so-called image occlusion problem. Image occlusion occurs when the field of view of the fluoroscopic image is occupied by the virtual representation of surgical implants or instruments. In these situations, the surgeon may miss part of the image details, even if transparency and/or wire-frame rendering is used. In this paper, we propose to use non-photorealistic rendering to overcome this difficulty. Laboratory testing results on foamed plastic bones during various computer-assisted fluoroscopy-based surgical procedures, including total hip arthroplasty and long bone fracture reduction and osteosynthesis, are shown.
Lensless high-resolution photoacoustic imaging scanner for in vivo skin imaging
NASA Astrophysics Data System (ADS)
Ida, Taiichiro; Iwazaki, Hideaki; Omuro, Toshiyuki; Kawaguchi, Yasushi; Tsunoi, Yasuyuki; Kawauchi, Satoko; Sato, Shunichi
2018-02-01
We previously launched a high-resolution photoacoustic (PA) imaging scanner based on a unique lensless design for in vivo skin imaging. The design, imaging algorithm, and characteristics of the system are described in this paper. Neither an optical lens nor an acoustic lens is used in the system. In the imaging head, four sensor elements are arranged quadrilaterally, and by checking the phase differences of the PA waves detected with these four sensors, a set of PA signals originating only from a chromophore located on the sensor center axis is extracted for constructing an image. A phantom study using a carbon fiber showed a depth-independent horizontal resolution of 84.0 ± 3.5 µm, and the scan direction-dependent variation of PA signals was about ±20%. We then performed imaging of vasculature phantoms: patterns of red ink lines with widths of 100 or 200 µm formed in an acrylic block co-polymer. The patterns were visualized with high contrast, showing the capability for imaging arterioles and venules in the skin. Vasculature in rat burn models and healthy human skin was also clearly visualized in vivo.
Li, W; Lai, T M; Bohon, C; Loo, S K; McCurdy, D; Strober, M; Bookheimer, S; Feusner, J
2015-07-01
Anorexia nervosa (AN) and body dysmorphic disorder (BDD) are characterized by distorted body image and are frequently co-morbid with each other, although their relationship remains little studied. While there is evidence of abnormalities in visual and visuospatial processing in both disorders, no study has directly compared the two. We used two complementary modalities--event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI)--to test for abnormal activity associated with early visual signaling. We acquired fMRI and ERP data in separate sessions from 15 unmedicated individuals in each of three groups (weight-restored AN, BDD, and healthy controls) while they viewed images of faces and houses of different spatial frequencies. We used joint independent component analyses to compare activity in visual systems. AN and BDD groups demonstrated similar hypoactivity in early secondary visual processing regions and the dorsal visual stream when viewing low spatial frequency faces, linked to the N170 component, as well as in early secondary visual processing regions when viewing low spatial frequency houses, linked to the P100 component. Additionally, the BDD group exhibited hyperactivity in fusiform cortex when viewing high spatial frequency houses, linked to the N170 component. Greater activity in this component was associated with lower attractiveness ratings of faces. Results provide preliminary evidence of similar abnormal spatiotemporal activation in AN and BDD for configural/holistic information for appearance- and non-appearance-related stimuli. This suggests a common phenotype of abnormal early visual system functioning, which may contribute to perceptual distortions.
Programmable Remapper with Single Flow Architecture
NASA Technical Reports Server (NTRS)
Fisher, Timothy E. (Inventor)
1993-01-01
An apparatus for image processing comprising a camera for receiving an original visual image and transforming the original visual image into an analog image, a first converter for transforming the analog image of the camera to a digital image, a processor having a single flow architecture for receiving the digital image and producing, with a single algorithm, an output image, a second converter for transforming the digital image of the processor to an analog image, and a viewer for receiving the analog image, transforming the analog image into a transformed visual image for observing the transformations applied to the original visual image. The processor comprises one or more subprocessors for the parallel reception of a digital image for producing an output matrix of the transformed visual image. More particularly, the processor comprises a plurality of subprocessors for receiving in parallel and transforming the digital image for producing a matrix of the transformed visual image, and an output interface means for receiving the respective portions of the transformed visual image from the respective subprocessor for producing an output matrix of the transformed visual image.
Kim, Kyung Lock; Sung, Gihyun; Sim, Jaehwan; Murray, James; Li, Meng; Lee, Ara; Shrinidhi, Annadka; Park, Kyeng Min; Kim, Kimoon
2018-04-27
Here we report ultrastable synthetic binding pairs between cucurbit[7]uril (CB[7]) and adamantyl- (AdA) or ferrocenyl-ammonium (FcA) as a supramolecular latching system for protein imaging, overcoming the limitations of protein-based binding pairs. Cyanine 3-conjugated CB[7] (Cy3-CB[7]) can visualize AdA- or FcA-labeled proteins to provide clear fluorescence images for accurate and precise analysis of proteins. Furthermore, controllability of the system is demonstrated by treating with a stronger competitor guest. At low temperature, this allows us to selectively detach Cy3-CB[7] from guest-labeled proteins on the cell surface, while leaving Cy3-CB[7] latched to the cytosolic proteins for spatially conditional visualization of target proteins. This work represents a non-protein-based bioimaging tool which has inherent advantages over the widely used protein-based techniques, thereby demonstrating the great potential of this synthetic system.
Hu, Peter F; Xiao, Yan; Ho, Danny; Mackenzie, Colin F; Hu, Hao; Voigt, Roger; Martz, Douglas
2006-06-01
One of the major challenges for day-of-surgery operating room coordination is accurate and timely situation awareness. Distributed and secure real-time status information is key to addressing these challenges. This article reports on the design and implementation of a passive status monitoring system in a 19-room surgical suite of a major academic medical center. Key design requirements considered included integrated real-time operating room status display, access control, security, and network impact. The system used live operating room video images and patient vital signs obtained through monitors to automatically update events and operating room status. Images were presented on a "need-to-know" basis, and access was controlled by identification badge authorization. The system delivered reliable real-time operating room images and status with acceptable network impact. Operating room status was visualized at 4 separate locations and was used continuously by clinicians and operating room service providers to coordinate operating room activities.
Data augmentation-assisted deep learning of hand-drawn partially colored sketches for visual search
Muhammad, Khan; Baik, Sung Wook
2017-01-01
In recent years, image databases have been growing at exponential rates, making their management, indexing, and retrieval very challenging. Typical image retrieval systems rely on sample images as queries. However, in the absence of sample query images, hand-drawn sketches are also used. The recent adoption of touch screen input devices makes it very convenient to quickly draw shaded sketches of objects to be used for querying image databases. This paper presents a mechanism to provide access to visual information based on users' hand-drawn, partially colored sketches using touch screen devices. A key challenge for sketch-based image retrieval systems is to cope with the inherent ambiguity in sketches due to the lack of colors, textures, and shading, and to drawing imperfections. To cope with these issues, we propose to fine-tune a deep convolutional neural network (CNN) using an augmented dataset to extract features from partially colored hand-drawn sketches for query specification in a sketch-based image retrieval framework. The large augmented dataset contains natural images, edge maps, hand-drawn sketches, and de-colorized and de-texturized images, which allow the CNN to effectively model visual content presented to it in a variety of forms. The deep features extracted from the CNN allow retrieval of images using both sketches and full-color images as queries. We also evaluated the role of partial coloring or shading in sketches in improving retrieval performance. The proposed method is tested on two large datasets for sketch recognition and sketch-based image retrieval and achieves better classification and retrieval performance than many existing methods. PMID:28859140
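A minimal sketch of the retrieval side is shown below: a CNN backbone (here an off-the-shelf ResNet-18 standing in for the paper's fine-tuned network) embeds both sketches and natural images into one feature space, and gallery images are ranked by cosine similarity to the sketch query. File names are hypothetical.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Backbone with the classification head removed; in the paper's setup the
# network would first be fine-tuned on the augmented sketch/edge/photo data.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

prep = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

@torch.no_grad()
def embed(path: str) -> torch.Tensor:
    x = prep(Image.open(path).convert("RGB")).unsqueeze(0)
    return F.normalize(backbone(x), dim=1)     # unit-length descriptor

query = embed("sketch.png")                    # hypothetical query sketch
gallery = torch.cat([embed(f"photo_{i}.jpg") for i in range(5)])
scores = (gallery @ query.T).squeeze(1)        # cosine similarities
print(scores.argsort(descending=True))         # gallery indices, best first
```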
Integration of real-time 3D capture, reconstruction, and light-field display
NASA Astrophysics Data System (ADS)
Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Pei, Renjing; Liu, Yongchun; Zhang, Xiao
2015-03-01
Effective integration of 3D acquisition, reconstruction (modeling), and display technologies into a seamless system provides an augmented experience of visualizing and analyzing real objects and scenes with realistic 3D sensation. Applications can be found in medical imaging, gaming, virtual or augmented reality, and hybrid simulations. Although 3D acquisition, reconstruction, and display technologies have gained significant momentum in recent years, there seems to be a lack of attention to synergistically combining these components into an "end-to-end" 3D visualization system. We designed, built, and tested an integrated 3D visualization system that is able to capture 3D light-field images in real time, perform 3D reconstruction to build a 3D model of the objects, and display the 3D model on a large autostereoscopic screen. In this article, we present our system architecture and component designs, hardware/software implementations, and experimental results. We elaborate on our recent progress on sparse camera array light-field 3D acquisition, real-time dense 3D reconstruction, and autostereoscopic multi-view 3D display. A prototype is finally presented with test results to illustrate the effectiveness of our proposed integrated 3D visualization system.
Phased array performance evaluation with photoelastic visualization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ginzel, Robert; Dao, Gavin
2014-02-18
New instrumentation and a widening range of phased array transducer options are affording the industry greater potential. Visualization of the complex wave components using the photoelastic system can greatly enhance understanding of the generated signals. Diffraction, mode conversion, and wave front interaction, together with beam forming for linear, sectorial, and matrix arrays, will be viewed using the photoelastic system. Beam focus and steering performance will be shown with a range of embedded and surface targets within glass samples. This paper will present principles and sound field images using this visualization system.
A versatile stereoscopic visual display system for vestibular and oculomotor research.
Kramer, P D; Roberts, D C; Shelhamer, M; Zee, D S
1998-01-01
Testing of the vestibular system requires a vestibular stimulus (motion) and/or a visual stimulus. We have developed a versatile, low cost, stereoscopic visual display system, using "virtual reality" (VR) technology. The display system can produce images for each eye that correspond to targets at any virtual distance relative to the subject, and so require the appropriate ocular vergence. We elicited smooth pursuit, "stare" optokinetic nystagmus (OKN) and after-nystagmus (OKAN), vergence for targets at various distances, and short-term adaptation of the vestibulo-ocular reflex (VOR), using both conventional methods and the stereoscopic display. Pursuit, OKN, and OKAN were comparable with both methods. When used with a vestibular stimulus, VR induced appropriate adaptive changes of the phase and gain of the angular VOR. In addition, using the VR display system and a human linear acceleration sled, we adapted the phase of the linear VOR. The VR-based stimulus system not only offers an alternative to more cumbersome means of stimulating the visual system in vestibular experiments, it also can produce visual stimuli that would otherwise be impractical or impossible. Our techniques provide images without the latencies encountered in most VR systems. Its inherent versatility allows it to be useful in several different types of experiments, and because it is software driven it can be quickly adapted to provide a new stimulus. These two factors allow VR to provide considerable savings in time and money, as well as flexibility in developing experimental paradigms.
Songnian, Zhao; Qi, Zou; Chang, Liu; Xuemin, Liu; Shousi, Sun; Jun, Qiu
2014-04-23
How it is possible to "faithfully" represent a three-dimensional stereoscopic scene using Cartesian coordinates on a plane, and how three-dimensional perceptions differ between an actual scene and an image of the same scene, are questions that have not yet been explored in depth. They seem like commonplace phenomena, but in fact, they are important and difficult issues for visual information processing, neural computation, physics, psychology, cognitive psychology, and neuroscience. The results of this study show that the use of plenoptic (or all-optical) functions and their dual plane parameterizations can not only explain the nature of information processing from the retina to the primary visual cortex and, in particular, the characteristics of the visual pathway's optical system and its affine transformation, but they can also clarify the reason why the vanishing point and line exist in a visual image. In addition, they can better explain the reasons why a three-dimensional Cartesian coordinate system can be introduced into the two-dimensional plane to express a real three-dimensional scene. 1. We introduce two different mathematical expressions of the plenoptic functions, Pw and Pv, that can describe the objective world. We also analyze the differences between these two functions when describing visual depth perception, that is, the difference between how these two functions obtain the depth information of an external scene. 2. The main results include a basic method for introducing a three-dimensional Cartesian coordinate system into a two-dimensional plane to express the depth of a scene, its constraints, and algorithmic implementation. In particular, we include a method to separate the plenoptic function and proceed with the corresponding transformation in the retina and visual cortex. 3. We propose that size constancy, the vanishing point, and the vanishing line form the basis of visual perception of the outside world, and that the introduction of a three-dimensional Cartesian coordinate system into a two-dimensional plane reveals a corresponding mapping between a retinal image and the vanishing point and line.
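For reference, the standard form of the plenoptic function and its two-plane (light-field) parameterization, on which the paper's Pw and Pv variants build, can be written as follows; the notation below is the conventional one from the light-field literature, not the paper's own.

```latex
% Full plenoptic function: radiance observed at position (x, y, z), in
% direction (\theta, \phi), at wavelength \lambda and time t.
P = P(x, y, z, \theta, \phi, \lambda, t)

% Dual-plane (light-field) parameterization: a ray is indexed by its
% intersections (u, v) and (s, t) with two parallel reference planes.
L = L(u, v, s, t)
```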
Software components for medical image visualization and surgical planning
NASA Astrophysics Data System (ADS)
Starreveld, Yves P.; Gobbi, David G.; Finnis, Kirk; Peters, Terence M.
2001-05-01
Purpose: The development of new applications in medical image visualization and surgical planning requires the completion of many common tasks such as image reading and re-sampling, segmentation, volume rendering, and surface display. Intra-operative use requires an interface to a tracking system and image registration, and the application requires basic, easy-to-understand user interface components. Rapid changes in computer and end-application hardware, as well as in operating systems and network environments, make it desirable to have a hardware- and operating-system-independent collection of reusable software components that can be assembled rapidly to prototype new applications. Methods: Using the OpenGL-based Visualization Toolkit as a base, we have developed a set of components that implement the above-mentioned tasks. The components are written in both C++ and Python, but all are accessible from Python, a byte-compiled scripting language. The components have been used on the Red Hat Linux, Silicon Graphics Iris, Microsoft Windows, and Apple OS X platforms. Rigorous object-oriented software design methods have been applied to ensure hardware independence and a standard application programming interface (API). There are components to acquire, display, and register images from MRI, MRA, CT, Computed Rotational Angiography (CRA), Digital Subtraction Angiography (DSA), 2D and 3D ultrasound, video, and physiological recordings. Interfaces to various tracking systems for intra-operative use have also been implemented. Results: The described components have been implemented and tested. To date they have been used to create image manipulation and viewing tools, a deep brain functional atlas, a 3D ultrasound acquisition and display platform, a prototype minimally invasive robotic coronary artery bypass graft planning system, a tracked neuro-endoscope guidance system, and a frame-based stereotaxy neurosurgery planning tool. The frame-based stereotaxy module has been licensed and certified for use in a commercial image guidance system. Conclusions: It is feasible to encapsulate image manipulation and surgical guidance tasks in individual, reusable software modules. These modules allow for faster development of new applications. The strict application of object-oriented software design methods allows individual components of such a system to make the transition from the research environment to a commercial one.
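As a flavor of how such Python-accessible components compose, the sketch below chains VTK classes to load a volume and display it with volume rendering. It is a generic VTK pipeline, not the authors' component library; the file name and transfer-function breakpoints are assumed placeholders.

```python
import vtk

# Read a medical volume (file name is a hypothetical placeholder).
reader = vtk.vtkNIFTIImageReader()
reader.SetFileName("brain.nii")

# Map scalar values to opacity and color for volume rendering.
opacity = vtk.vtkPiecewiseFunction()
opacity.AddPoint(0, 0.0)
opacity.AddPoint(500, 0.2)
opacity.AddPoint(1500, 0.9)
color = vtk.vtkColorTransferFunction()
color.AddRGBPoint(0, 0.0, 0.0, 0.0)
color.AddRGBPoint(1500, 1.0, 1.0, 1.0)

prop = vtk.vtkVolumeProperty()
prop.SetScalarOpacity(opacity)
prop.SetColor(color)
prop.ShadeOn()

mapper = vtk.vtkSmartVolumeMapper()
mapper.SetInputConnection(reader.GetOutputPort())

volume = vtk.vtkVolume()
volume.SetMapper(mapper)
volume.SetProperty(prop)

# Standard render window / interactor boilerplate.
renderer = vtk.vtkRenderer()
renderer.AddVolume(volume)
window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)
window.Render()
interactor.Start()
```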
Display nonlinearity in digital image processing for visual communications
NASA Astrophysics Data System (ADS)
Peli, Eli
1992-11-01
The luminance emitted from a cathode ray tube (CRT) display is a nonlinear function (the gamma function) of the input video signal voltage. In most analog video systems, compensation for this nonlinear transfer function is implemented in the camera amplifiers. When CRT displays are used to present psychophysical stimuli in vision research, the specific display nonlinearity usually is measured and accounted for to ensure that the luminance of each pixel in the synthetic image properly represents the intended value. However, when using digital image processing, the linear analog-to-digital converters store a digital image that is nonlinearly related to the displayed or recorded image. The effect of this nonlinear transformation on a variety of image-processing applications used in visual communications is described.
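The relationship in question is the CRT transfer function L = V^gamma (relative luminance versus normalized video voltage). The sketch below shows the linearization step a digital system must apply so that stored pixel values track displayed luminance; gamma = 2.2 is a typical assumed value, and a measured display curve would supply the real one.

```python
import numpy as np

GAMMA = 2.2  # typical CRT exponent; a measured value should replace this

def display_luminance(v: np.ndarray) -> np.ndarray:
    """CRT output: relative luminance as a function of video signal in [0,1]."""
    return np.clip(v, 0.0, 1.0) ** GAMMA

def linearize(v: np.ndarray) -> np.ndarray:
    """Invert the display nonlinearity so stored values track luminance."""
    return np.clip(v, 0.0, 1.0) ** (1.0 / GAMMA)

signal = np.linspace(0, 1, 5)
print(display_luminance(signal))             # what the CRT actually emits
print(display_luminance(linearize(signal)))  # compensated: back to linear
```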
NASA Astrophysics Data System (ADS)
Hunt, Gordon W.; Hemler, Paul F.; Vining, David J.
1997-05-01
Virtual colonoscopy (VC) is a minimally invasive alternative to conventional fiberoptic endoscopy for colorectal cancer screening. The VC technique involves bowel cleansing, gas distension of the colon, spiral computed tomography (CT) scanning of a patient's abdomen and pelvis, and visual analysis of multiplanar 2D and 3D images created from the spiral CT data. Despite the ability of interactive computer graphics to assist a physician in visualizing 3D models of the colon, a correct diagnosis hinges upon a physician's ability to properly identify small and sometimes subtle polyps or masses within hundreds of multiplanar and 3D images. Human visual analysis is time-consuming, tedious, and often prone to errors of interpretation. We have addressed the problem of visual analysis by creating a software system that automatically highlights potential lesions in the 2D and 3D images in order to expedite a physician's interpretation of the colon data.
Rangaraj, Aravind T; Ghanta, Ravi K; Umakanthan, Ramanan; Soltesz, Edward G; Laurence, Rita G; Fox, John; Cohn, Lawrence H; Bolman, R M; Frangioni, John V; Chen, Frederick Y
2008-01-01
Homogeneous delivery of cardioplegia is essential for myocardial protection during cardiac surgery. Presently, there exist no established methods to quantitatively assess cardioplegia distribution intraoperatively and determine when retrograde cardioplegia is required. In this study, we evaluate the feasibility of near infrared (NIR) imaging for real-time visualization of cardioplegia distribution in a porcine model. A portable, intraoperative, real-time NIR imaging system was utilized. NIR fluorescent cardioplegia solution was developed by incorporating indocyanine green (ICG) into crystalloid cardioplegia solution. Real-time NIR imaging was performed while the fluorescent cardioplegia solution was infused via the retrograde route in five ex vivo normal porcine hearts and in five ex vivo porcine hearts status post left anterior descending (LAD) coronary artery ligation. Horizontal cross-sections of the hearts were obtained at proximal, middle, and distal LAD levels. Videodensitometry was performed to quantify distribution of fluorophore content. The progressive distribution of cardioplegia was clearly visualized with NIR imaging. Complete visualization of retrograde distribution occurred within 4 minutes of infusion. Videodensitometry revealed retrograde cardioplegia, primarily distributed to the left ventricle (LV) and anterior septum. In hearts with LAD ligation, antegrade cardioplegia did not distribute to the anterior LV. This deficiency was compensated for with retrograde cardioplegia supplementation. Incorporation of ICG into cardioplegia allows real-time visualization of cardioplegia delivery via NIR imaging. This technology may prove useful in guiding intraoperative decisions pertaining to when retrograde cardioplegia is mandated.
Petruno, Sarah K; Clark, Robert E; Reinagel, Pamela
2013-01-01
The pigmented Long-Evans rat has proven to be an excellent subject for studying visually guided behavior including quantitative visual psychophysics. This observation, together with its experimental accessibility and its close homology to the mouse, has made it an attractive model system in which to dissect the thalamic and cortical circuits underlying visual perception. Given that visually guided behavior in the absence of primary visual cortex has been described in the literature, however, it is an empirical question whether specific visual behaviors will depend on primary visual cortex in the rat. Here we tested the effects of cortical lesions on performance of two-alternative forced-choice visual discriminations by Long-Evans rats. We present data from one highly informative subject that learned several visual tasks and then received a bilateral lesion ablating >90% of primary visual cortex. After the lesion, this subject had a profound and persistent deficit in complex image discrimination, orientation discrimination, and full-field optic flow motion discrimination, compared with both pre-lesion performance and sham-lesion controls. Performance was intact, however, on another visual two-alternative forced-choice task that required approaching a salient visual target. A second highly informative subject learned several visual tasks prior to receiving a lesion ablating >90% of medial extrastriate cortex. This subject showed no impairment on any of the four task categories. Taken together, our data provide evidence that these image, orientation, and motion discrimination tasks require primary visual cortex in the Long-Evans rat, whereas approaching a salient visual target does not.
Redundancy of stereoscopic images: Experimental evaluation
NASA Astrophysics Data System (ADS)
Yaroslavsky, L. P.; Campos, J.; Espínola, M.; Ideses, I.
2005-12-01
With the recent advancements in visualization devices over the last years, we are seeing a growing market for stereoscopic content. In order to convey 3D content by means of stereoscopic displays, one needs to transmit and display at least two points of view of the video content. This has profound implications on the resources required to transmit the content, as well as demands on the complexity of the visualization system. It is known that stereoscopic images are redundant, which may prove useful for compression and may have a positive effect on the construction of the visualization device. In this paper we describe an experimental evaluation of data redundancy in color stereoscopic images. In experiments with computer-generated and real-life test stereo images, several observers visually tested the stereopsis threshold and the accuracy of parallax measurement in anaglyphs and stereograms as functions of the degree of blur of one of the two stereo images. In addition, we tested the color saturation threshold in one of the two stereo images for which full-color 3D perception with no visible color degradation was maintained. The experiments support a theoretical estimate that one has to add, to the data required to reproduce one of two stereoscopic images, only several percent of that amount of data in order to achieve stereoscopic perception.
Neutron radiographic viewing system
NASA Technical Reports Server (NTRS)
1972-01-01
The design, development, and application of a neutron radiographic viewing system for use in nondestructive testing applications are considered. The system consists of a SEC vidicon camera, a neutron image intensifier system, a disc recorder, and a TV readout. Neutron bombardment of the subject is recorded by an image converter and passed through an optical system into the SEC vidicon. The vidicon output may be stored, or processed for visual readout.
Neuromorphic VLSI vision system for real-time texture segregation.
Shimonomura, Kazuhiro; Yagi, Tetsuya
2008-10-01
The visual system of the brain can perceive an external scene in real time with extremely low power dissipation, although the response speed of an individual neuron is considerably lower than that of semiconductor devices. The neurons in the visual pathway generate their receptive fields using a parallel and hierarchical architecture. This architecture of the visual cortex is interesting and important for designing a novel perception system from an engineering perspective. The aim of this study is to develop vision system hardware, inspired by hierarchical visual processing in V1, for real-time texture segregation. The system consists of a silicon retina, an orientation chip, and a field programmable gate array (FPGA) circuit. The silicon retina emulates the neural circuits of the vertebrate retina and exhibits a Laplacian-of-Gaussian-like receptive field. The orientation chip selectively aggregates multiple pixels of the silicon retina to produce Gabor-like receptive fields tuned to various orientations, mimicking the feed-forward model proposed by Hubel and Wiesel. The FPGA circuit receives the output of the orientation chip and computes the responses of the complex cells. Using this system, the neural images of simple cells were computed in real time for various orientations and spatial frequencies. Using the orientation-selective outputs obtained from the multi-chip system, real-time texture segregation was conducted based on a computational model inspired by psychophysics and neurophysiology. The texture image was filtered by the two orthogonally oriented receptive fields of the multi-chip system, and the filtered images were combined to segregate areas of different texture orientation with the aid of the FPGA. The present system is also useful for investigating the functions of higher-order cells that can be obtained by combining the simple and complex cells.
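The processing stages described here (center-surround retinal output, Gabor-like simple cells, energy-model complex cells, orientation-contrast segregation) map directly onto standard image operations. The sketch below reproduces them in software with OpenCV as a functional analogue of the chip pipeline, not the VLSI design itself; the filter parameters and file names are illustrative.

```python
import cv2
import numpy as np

def complex_cell_energy(img: np.ndarray, theta: float) -> np.ndarray:
    """Energy-model complex cell: quadrature pair of Gabor filters."""
    kwargs = dict(ksize=(21, 21), sigma=3.0, theta=theta,
                  lambd=8.0, gamma=0.5)
    even = cv2.filter2D(img, cv2.CV_32F,
                        cv2.getGaborKernel(psi=0.0, **kwargs))
    odd = cv2.filter2D(img, cv2.CV_32F,
                       cv2.getGaborKernel(psi=np.pi / 2, **kwargs))
    return even**2 + odd**2          # phase-invariant orientation energy

# Hypothetical input texture image.
img = cv2.imread("texture.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
# Laplacian-of-Gaussian-like retinal stage (center-surround antagonism).
retina = cv2.Laplacian(cv2.GaussianBlur(img, (5, 5), 1.0), cv2.CV_32F)
# Two orthogonal orientation channels, as in the texture experiment.
e0 = complex_cell_energy(retina, 0.0)
e90 = complex_cell_energy(retina, np.pi / 2)
# Regions where one orientation dominates segregate from the background.
segregation = cv2.GaussianBlur(e0 - e90, (31, 31), 8.0)
mask = (segregation > 0).astype(np.uint8) * 255
cv2.imwrite("segregated.png", mask)
```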
Building conservation based on assessment of facade quality on Basuki Rachmat Street, Malang
NASA Astrophysics Data System (ADS)
Kurniawan, E. B.; Putri, R. Y. A.; Wardhani, D. K.
2017-06-01
Visual quality covers aspects of imageability, which is associated with the visual system and the element of distinction. Within the visual system of a specific area, physical quality may lead to a strong image; here, physical quality is one of the important factors that make up urban aesthetics. To build a discussion of the visual system of an urban area, this paper aims to identify the factors that influence the facade visual quality of heritage buildings on Jend. Basuki Rahmat Street, Malang City, East Java, Indonesia. This street is a main road of the Malang city center that was built by the Dutch colonial government and designed by Ir. Thomas Kartsten, and it is known as one of the Malang areas with good visual quality. To identify the influencing factors, this paper uses multiple linear regression as the analysis tool. The examined potential factors come from architecture and urban design experts' assessments of each building segment on Jend. Basuki Rahmat Street. The paper finds that the influencing factors are color, rhythm, and proportion, as demonstrated by the resulting model: visual quality (Y) = 0.304 + 0.21 color (X5) + 0.221 rhythm (X6) + 0.304 proportion (X7). Recommendations for the building facades will be made based on this model and on a study of the historical and typological buildings on Basuki Rachmat Street.
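The reported model is an ordinary least-squares fit of expert scores. A minimal sketch of fitting such a model is shown below; the scores are randomly generated placeholders (shaped to follow the paper's model), so the recovered coefficients stand in for, and will not exactly match, the published ones.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 40                                  # hypothetical building segments
X = rng.uniform(1, 5, size=(n, 3))      # expert scores: color, rhythm, proportion
# Synthetic target roughly following the shape of the paper's model.
y = 0.304 + X @ np.array([0.21, 0.221, 0.304]) + rng.normal(0, 0.1, n)

model = LinearRegression().fit(X, y)
print("intercept:", round(model.intercept_, 3))
print("coefficients (color, rhythm, proportion):", model.coef_.round(3))
```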
A novel drill design for photoacoustic guided surgeries
NASA Astrophysics Data System (ADS)
Shubert, Joshua; Lediju Bell, Muyinatu A.
2018-02-01
Fluoroscopy is currently the standard approach for image guidance of surgical drilling procedures. In addition to the harmful radiation dose to the patient and surgeon, fluoroscopy fails to visualize critical structures such as blood vessels and nerves within the drill path. Photoacoustic imaging is a well-suited imaging method to visualize these structures and it does not require harmful ionizing radiation. However, there is currently no clinical system available to deliver light to occluded drill bit tips. To address this challenge, a prototype drill was designed, built, and tested using an internal light delivery system that allows laser energy to be transferred from a stationary laser source to the tip of a spinning drill bit. Photoacoustic images were successfully obtained with the drill bit submerged in water and with the drill tip inserted into a thoracic vertebra from a human cadaver.
Spherical visual system for real-time virtual reality and surveillance
NASA Astrophysics Data System (ADS)
Chen, Su-Shing
1998-12-01
A spherical visual system has been developed for full-field, web-based surveillance, virtual reality, and roundtable video conferencing. The hardware is a CycloVision parabolic lens mounted on a video camera. The software was developed at the University of Missouri-Columbia. The mathematical model was developed by Su-Shing Chen and Michael Penna in the 1980s. The parabolic image, capturing the full (360 degree) hemispherical field of view (except the north pole), is transformed into the spherical model of Chen and Penna. In the spherical model, images are invariant under the rotation group and are easily mapped to the image plane tangent to any point on the sphere. The projected image is exactly what a conventional camera would produce at that angle. Thus, a real-time full-spherical-field video camera is obtained by using two parabolic lenses.
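The first step of any such system is unwarping the circular parabolic image before reprojection. The sketch below performs a basic polar-to-panoramic unwarp with OpenCV's remap, a simplified stand-in for the full Chen-Penna spherical mapping; the image center and mirror radii are hypothetical calibration values.

```python
import cv2
import numpy as np

def unwarp_parabolic(img: np.ndarray, cx: float, cy: float,
                     r_in: float, r_out: float,
                     out_w: int = 1024, out_h: int = 256) -> np.ndarray:
    """Unwarp a circular catadioptric image into a panoramic strip.

    (cx, cy): image center of the mirror; r_in/r_out: inner and outer
    mirror radii in pixels -- all assumed calibration values.
    """
    theta = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
    radius = np.linspace(r_out, r_in, out_h)       # top of strip = horizon
    t, r = np.meshgrid(theta, radius)
    map_x = (cx + r * np.cos(t)).astype(np.float32)
    map_y = (cy + r * np.sin(t)).astype(np.float32)
    return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)

omni = cv2.imread("parabolic_frame.png")           # hypothetical input frame
pano = unwarp_parabolic(omni, cx=480.0, cy=480.0, r_in=60.0, r_out=450.0)
cv2.imwrite("panorama.png", pano)
```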
High resolution iridocorneal angle imaging system by axicon lens assisted gonioscopy.
Perinchery, Sandeep Menon; Shinde, Anant; Fu, Chan Yiu; Jeesmond Hong, Xun Jie; Baskaran, Mani; Aung, Tin; Murukeshan, Vadakke Matham
2016-07-29
Direct visualization and assessment of the iridocorneal angle (ICA) region with high resolution is important for the clinical evaluation of glaucoma. However, current clinical imaging systems for the ICA do not provide sufficient structural detail due to their poor resolution. The key challenges in achieving high-quality ICA imaging are the ICA's location in the anterior region of the eye and the occurrence of total internal reflection due to the refractive index difference between cornea and air. Here, we report an indirect axicon-assisted gonioscopy imaging probe with white light illumination. The results obtained with this probe show significantly improved visualization of structures in the ICA, including the trabecular meshwork (TM) region, compared with currently available tools. The probe can reveal critical details of the ICA and is expected to aid management by providing information that is complementary to angle photography and gonioscopy.
Human visual system-based color image steganography using the contourlet transform
NASA Astrophysics Data System (ADS)
Abdul, W.; Carré, P.; Gaborit, P.
2010-01-01
We present a steganographic scheme based on the contourlet transform which uses the contrast sensitivity function (CSF) to control the insertion strength of the hidden information in a perceptually uniform color space. The CIELAB color space is used because it is well suited for steganographic applications: any change in the CIELAB color space has a corresponding effect on the human visual system, and it is very important for steganographic schemes to be undetectable by the human visual system (HVS). The perceptual decomposition of the contourlet transform gives it a natural advantage over other decompositions, as it can be molded with respect to the human perception of different frequencies in an image. The imperceptibility of the steganographic scheme with respect to the color perception of the HVS is evaluated using standard methods such as the structural similarity (SSIM) index and CIEDE2000. The robustness of the inserted watermark is tested against JPEG compression.
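To make the insertion-strength idea concrete, the sketch below embeds a bit pattern into a detail subband of the luminance (L*) channel in CIELAB, with a scalar strength standing in for the CSF weighting. A wavelet transform (via PyWavelets) replaces the paper's contourlet decomposition, which has no equally common Python implementation; the file names are hypothetical.

```python
import numpy as np
import pywt
from skimage import color, io

def embed(cover_rgb: np.ndarray, bits: np.ndarray, strength: float = 2.0):
    """Hide a small bit pattern in a wavelet detail subband of L*.

    `strength` plays the role of the CSF-controlled insertion force:
    higher values are more robust but more visible.
    """
    lab = color.rgb2lab(cover_rgb)
    ll, (lh, hl, hh) = pywt.dwt2(lab[..., 0], "db2")
    flat = hl.ravel()                                    # view into HL subband
    flat[: bits.size] += strength * (2.0 * bits - 1.0)   # +/- strength per bit
    rec = pywt.idwt2((ll, (lh, hl, hh)), "db2")
    lab[..., 0] = rec[: lab.shape[0], : lab.shape[1]]
    return np.clip(color.lab2rgb(lab), 0.0, 1.0)

cover = io.imread("cover.png")[..., :3] / 255.0          # hypothetical file
bits = np.random.default_rng(0).integers(0, 2, 128)
io.imsave("stego.png", (embed(cover, bits) * 255).astype(np.uint8))
```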
NASA Astrophysics Data System (ADS)
Fujiwara, Yukihiro; Yoshii, Masakazu; Arai, Yasuhito; Adachi, Shuichi
Advanced safety vehicles (ASVs) assist drivers' maneuvers to avoid traffic accidents. A variety of research on automatic driving systems is necessary as an element of ASV development. Among these, we focus on a visual feedback approach in which the automatic driving system is realized by recognizing the road trajectory from image information. The purpose of this paper is to examine the validity of this approach through experiments using a radio-controlled car. First, a practical image processing algorithm to recognize white lines on the road is proposed. Second, a model of the radio-controlled car is built through system identification experiments. Third, an automatic steering control system is designed based on H∞ control theory. Finally, the effectiveness of the designed control system is examined via driving experiments.
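The paper's white-line recognition algorithm is not detailed in the abstract; a common minimal pipeline for the same task is brightness thresholding plus a probabilistic Hough transform, sketched below with illustrative parameter values and a hypothetical input frame.

```python
import cv2
import numpy as np

def detect_white_lines(frame: np.ndarray) -> list:
    """Return line segments likely to be white lane markings."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Keep only bright (white) pixels, then find their edges.
    _, bright = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    edges = cv2.Canny(bright, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=10)
    return [] if lines is None else [l[0] for l in lines]

frame = cv2.imread("road_frame.png")        # hypothetical camera frame
for x1, y1, x2, y2 in detect_white_lines(frame):
    cv2.line(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)
cv2.imwrite("lines.png", frame)
```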
Correction of Visual Perception Based on Neuro-Fuzzy Learning for the Humanoid Robot TEO.
Hernandez-Vicen, Juan; Martinez, Santiago; Garcia-Haro, Juan Miguel; Balaguer, Carlos
2018-03-25
New applications related to robotic manipulation or transportation tasks, with or without physical grasping, are continuously being developed. To perform these activities, the robot takes advantage of different kinds of perceptions. One of the key perceptions in robotics is vision. However, some problems related to image processing make the application of visual information within robot control algorithms difficult. Camera-based systems have inherent errors that affect the quality and reliability of the information obtained. The need to correct image distortion slows down image parameter computation, which decreases the performance of control algorithms. In this paper, a new approach to correcting several sources of visual distortion on images in only one computing step is proposed. The goal of this system/algorithm is the computation of the tilt angle of an object transported by a robot, minimizing image-inherent errors and increasing computing speed. After capturing the image, the computer system extracts the angle using a fuzzy filter that corrects all possible distortions at the same time, obtaining the real angle in only one processing step. This filter has been developed by means of neuro-fuzzy learning techniques, using datasets with information obtained from real experiments. In this way, the computing time has been decreased and the performance of the application improved. The resulting algorithm has been tested experimentally in robot transportation tasks on the humanoid robot TEO (Task Environment Operator) from the University Carlos III of Madrid.
Illusory motion reversal is caused by rivalry, not by perceptual snapshots of the visual field.
Kline, Keith; Holcombe, Alex O; Eagleman, David M
2004-10-01
In stroboscopic conditions--such as motion pictures--rotating objects may appear to rotate in the reverse direction due to under-sampling (aliasing). A seemingly similar phenomenon occurs in constant sunlight, which has been taken as evidence that the visual system processes discrete "snapshots" of the outside world. But if snapshots are indeed taken of the visual field, then when a rotating drum appears to transiently reverse direction, its mirror image should always appear to reverse direction simultaneously. Contrary to this hypothesis, we found that when observers watched a rotating drum and its mirror image, almost all illusory motion reversals occurred for only one image at a time. This result indicates that the motion reversal illusion cannot be explained by snapshots of the visual field. The same result is found when the two images are presented within one visual hemifield, further ruling out the possibility that discrete sampling of the visual field occurs separately in each hemisphere. The frequency distribution of illusory reversal durations approximates a gamma distribution, suggesting perceptual rivalry as a better explanation for illusory motion reversal. After adaptation of motion detectors coding for the correct direction, the activity of motion-sensitive neurons coding for motion in the reverse direction may intermittently become dominant and drive the perception of motion.
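For readers who want to reproduce the distribution analysis, a small sketch of fitting a gamma distribution to reversal durations with SciPy follows; the `durations` array is placeholder data, not the study's measurements:

```python
import numpy as np
from scipy import stats

# Hypothetical illusory-reversal durations in seconds (placeholder data).
durations = np.array([0.8, 1.3, 2.1, 0.6, 1.7, 3.2, 1.1, 0.9, 2.6, 1.4])

# Fit a gamma distribution; perceptual-rivalry durations classically follow one.
shape, loc, scale = stats.gamma.fit(durations, floc=0.0)
print(f"shape k = {shape:.2f}, scale theta = {scale:.2f}")

# Goodness of fit via Kolmogorov-Smirnov against the fitted distribution.
ks_stat, p_value = stats.kstest(durations, "gamma", args=(shape, 0.0, scale))
print(f"KS statistic = {ks_stat:.3f}, p = {p_value:.3f}")
```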
The seam visual tracking method for large structures
NASA Astrophysics Data System (ADS)
Bi, Qilin; Jiang, Xiaomin; Liu, Xiaoguang; Cheng, Taobo; Zhu, Yulong
2017-10-01
In this paper, a compact and flexible weld-seam visual tracking method is proposed. First, because interference can occur between the vision device and the work-piece to be welded when the tracking height cannot change, a weld vision system with a compact structure and adjustable tracking height is developed. Second, by analyzing the relative spatial pose among the camera, the laser, and the work-piece to be welded, and applying the theory of relative geometric imaging, a mathematical model relating image feature parameters to the three-dimensional trajectory of the assembly gap to be welded is established. Third, the visual imaging parameters of the line-structured light are optimized through experiments on the weld structure. Fourth, interference exists in the imaging because the line-structured light scatters in bright metal areas and surface scratches appear bright; these disturbances seriously affect computational efficiency. An algorithm based on the human visual attention mechanism is therefore used to extract the weld features efficiently and stably. Finally, experiments verify that the compact and flexible weld tracking method achieves a tracking accuracy of 0.5 mm when tracking large structural parts, giving it wide prospects for industrial application.
Full-view 3D imaging system for functional and anatomical screening of the breast
NASA Astrophysics Data System (ADS)
Oraevsky, Alexander; Su, Richard; Nguyen, Ha; Moore, James; Lou, Yang; Bhadra, Sayantan; Forte, Luca; Anastasio, Mark; Yang, Wei
2018-04-01
Laser Optoacoustic Ultrasonic Imaging System Assembly (LOUISA-3D) was developed in response to demand from diagnostic radiologists for an advanced breast screening system that improves on the low sensitivity of x-ray based modalities (mammography and tomosynthesis) in the dense and heterogeneous breast and the low specificity of magnetic resonance imaging. It is our working hypothesis that co-registration of quantitatively accurate functional images of the breast vasculature and microvasculature with anatomical images of breast morphological structures will provide a clinically viable solution for breast cancer care. Functional imaging in LOUISA-3D is enabled by full-view 3D optoacoustic images acquired at two rapidly toggling laser wavelengths in the near-infrared spectral range. 3D images of the breast anatomical background are provided in LOUISA-3D by a sequence of B-mode ultrasound slices acquired with a transducer array rotating around the breast. This creates the possibility to visualize distributions of total hemoglobin and blood oxygen saturation within specific morphological structures such as tumor angiogenesis microvasculature and larger vasculature in proximity to the tumor. The system has four major components: (i) a pulsed dual-wavelength laser with a fiberoptic light delivery system, (ii) an imaging module with two arc-shaped probes (optoacoustic and ultrasonic) placed in a transparent bowl that rotates around the breast, (iii) a multichannel electronic system with analog preamplifiers and digital data acquisition boards, and (iv) a computer for system control, data processing, and image reconstruction. The most important advancement of this latest system design compared with previously reported systems is full-breast illumination accomplished at each rotational step of the optoacoustic transducer array, using a fiberoptic illuminator that rotates around the breast independently of the detector probe. We report here pilot case studies on one healthy volunteer and on a patient with a suspicious small lesion in the breast. LOUISA-3D visualized deoxygenated veins and oxygenated arteries of the healthy volunteer, indicative of its capability to visualize hypoxic microvasculature in cancerous tumors. A small lesion detected on an optoacoustic image of the patient was not visible on ultrasound, potentially indicating high sensitivity of the optoacoustic subsystem to small but aggressively growing cancerous lesions with high-density angiogenesis microvasculature. The main breast vasculature (0.5-1 mm) was visible at depths of up to 40 mm with 0.3-mm resolution. The results of this LOUISA-3D pilot clinical validation demonstrate the system's readiness for a statistically significant clinical feasibility study.
NASA Technical Reports Server (NTRS)
Meyer, P. J.
1993-01-01
An image data visual browse facility is developed for a UNIX platform using the X Window System (X11). It allows one to visually examine reduced-resolution image data to determine which data are applicable for further research. Links with a relational database manager then allow one to extract not only the full-resolution image data, but any other ancillary data related to the case study. Various techniques are examined for compression of the image data in order to reduce data storage requirements and the time necessary to transmit the data over the Internet. Data used were from the WetNet project.
Ground-to-air flow visualization using Solar Calcium-K line Background-Oriented Schlieren
NASA Astrophysics Data System (ADS)
Hill, Michael A.; Haering, Edward A.
2017-01-01
The Calcium-K Eclipse Background-Oriented Schlieren experiment was performed as a proof of concept test to evaluate the effectiveness of using the solar disk as a background to perform the Background-Oriented Schlieren (BOS) method of flow visualization. A ground-based imaging system was equipped with a Calcium-K line optical etalon filter to enable the use of the chromosphere of the sun as the irregular background to be used for BOS. A US Air Force T-38 aircraft performed three supersonic runs which eclipsed the sun as viewed from the imaging system. The images were successfully post-processed using optical flow methods to qualitatively reveal the density gradients in the flow around the aircraft.
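A hedged sketch of the core BOS computation, recovering the apparent background displacement between a reference image and one distorted by density gradients using dense optical flow (the study's exact processing chain is not given; parameter values here are illustrative):

```python
import cv2
import numpy as np

def bos_displacement(ref_path, distorted_path):
    """Background-Oriented Schlieren: displacement field between a reference
    background image and one distorted by density gradients."""
    ref = cv2.imread(ref_path, cv2.IMREAD_GRAYSCALE)
    dis = cv2.imread(distorted_path, cv2.IMREAD_GRAYSCALE)
    # Dense Farneback optical flow approximates the apparent background shift,
    # which relates to the integrated refractive-index gradient along the ray.
    flow = cv2.calcOpticalFlowFarneback(ref, dis, None,
                                        pyr_scale=0.5, levels=4, winsize=21,
                                        iterations=3, poly_n=7, poly_sigma=1.5,
                                        flags=0)
    magnitude = np.hypot(flow[..., 0], flow[..., 1])
    return flow, magnitude
```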
Prediction suppression and surprise enhancement in monkey inferotemporal cortex.
Ramachandran, Suchitra; Meyer, Travis; Olson, Carl R
2017-07-01
Exposing monkeys, over the course of days and weeks, to pairs of images presented in fixed sequence, so that each leading image becomes a predictor for the corresponding trailing image, affects neuronal visual responsiveness in area TE. At the end of the training period, neurons respond relatively weakly to a trailing image when it appears in a trained sequence and, thus, confirms prediction, whereas they respond relatively strongly to the same image when it appears in an untrained sequence and, thus, violates prediction. This effect could arise from prediction suppression (reduced firing in response to the occurrence of a probable event) or surprise enhancement (elevated firing in response to the omission of a probable event). To identify its cause, we compared firing under the prediction-confirming and prediction-violating conditions to firing under a prediction-neutral condition. The results provide strong evidence for prediction suppression and limited evidence for surprise enhancement. NEW & NOTEWORTHY In predictive coding models of the visual system, neurons carry signed prediction error signals. We show here that monkey inferotemporal neurons exhibit prediction-modulated firing, as posited by these models, but that the signal is unsigned. The response to a prediction-confirming image is suppressed, and the response to a prediction-violating image may be enhanced. These results are better explained by a model in which the visual system emphasizes unpredicted events than by a predictive coding model. Copyright © 2017 the American Physiological Society.
20 kHz toluene planar laser-induced fluorescence imaging of a jet in nearly sonic crossflow
NASA Astrophysics Data System (ADS)
Miller, V. A.; Troutman, V. A.; Mungal, M. G.; Hanson, R. K.
2014-10-01
This manuscript describes continuous, high-repetition-rate (20 kHz) toluene planar laser-induced fluorescence (PLIF) imaging in an expansion tube impulse flow facility. Cinematographic image sequences are acquired that visualize an underexpanded jet of hydrogen in Mach 0.9 crossflow, a practical flow configuration relevant to aerospace propulsion systems. The freestream gas is nitrogen seeded with toluene; toluene broadly absorbs and fluoresces in the ultraviolet, and the relatively high quantum yield of toluene produces large signals and high signal-to-noise ratios. Toluene is excited using a commercially available, frequency-quadrupled (266 nm), high-repetition-rate (20 kHz), pulsed (0.8-0.9 mJ per pulse), diode-pumped solid-state Nd:YAG laser, and fluorescence is imaged with a high-repetition-rate intensifier and CMOS camera. The resulting PLIF movie and image sequences are presented, visualizing the jet start-up process and the dynamics of the jet in crossflow; the freestream duration and a measure of freestream momentum flux steadiness are also inferred. This work demonstrates progress toward continuous PLIF imaging of practical flow systems in impulse facilities at kHz acquisition rates using practical, turn-key, high-speed laser and imaging systems.
A systematic review of visual image theory, assessment, and use in skin cancer and tanning research.
McWhirter, Jennifer E; Hoffman-Goetz, Laurie
2014-01-01
Visual images increase attention, comprehension, and recall of health information and influence health behaviors. Health communication campaigns on skin cancer and tanning often use visual images, but little is known about how such images are selected or evaluated. A systematic review of peer-reviewed, published literature on skin cancer and tanning was conducted to determine (a) what visual communication theories were used, (b) how visual images were evaluated, and (c) how visual images were used in the research studies. Seven databases were searched (PubMed/MEDLINE, EMBASE, PsycINFO, Sociological Abstracts, Social Sciences Full Text, ERIC, and ABI/INFORM) resulting in 5,330 citations. Of those, 47 met the inclusion criteria. Only one study specifically identified a visual communication theory guiding the research. No standard instruments for assessing visual images were reported. Most studies lacked, to varying degrees, comprehensive image description, image pretesting, full reporting of image source details, adequate explanation of image selection or development, and example images. The results highlight the need for greater theoretical and methodological attention to visual images in health communication research in the future. To this end, the authors propose a working definition of visual health communication.
Facing the Limitations of Electronic Document Handling.
ERIC Educational Resources Information Center
Moralee, Dennis
1985-01-01
This essay addresses problems associated with technology used in the handling of high-resolution visual images in electronic document delivery. Highlights include visual fidelity, laser-driven optical disk storage, electronics versus micrographics for document storage, videomicrographics, and system configurations and peripherals. (EJS)
MIXING QUANTIFICATION BY VISUAL IMAGING ANALYSIS
This paper reports on development of a method for quantifying two measures of mixing, the scale and intensity of segregation, through flow visualization, video recording, and software analysis. This non-intrusive method analyzes a planar cross section of a flowing system from an ...
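A minimal sketch of one of the two mixing measures, Danckwerts' intensity of segregation, computed from a planar concentration field; the normalization and the placeholder field are assumptions, not the paper's implementation:

```python
import numpy as np

def intensity_of_segregation(conc):
    """Danckwerts intensity of segregation for a planar concentration field.
    `conc` holds local mixture fractions in [0, 1]; under this assumed
    normalization, I = 1 means fully segregated and I = 0 perfectly mixed."""
    c = np.clip(np.asarray(conc, dtype=float), 0.0, 1.0)
    mean = c.mean()
    return c.var() / (mean * (1.0 - mean))

# Example: pixel intensities from a dye-visualization frame, scaled to [0, 1].
field = np.random.rand(480, 640)  # placeholder for a real video frame
print(f"I_s = {intensity_of_segregation(field):.3f}")
```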
Visual Target Tracking in the Presence of Unknown Observer Motion
NASA Technical Reports Server (NTRS)
Williams, Stephen; Lu, Thomas
2009-01-01
Much attention has been given to the visual tracking problem due to its obvious uses in military surveillance. However, visual tracking is complicated by the presence of motion of the observer in addition to the target motion, especially when the image changes caused by the observer motion are large compared to those caused by the target motion. Techniques for estimating the motion of the observer based on image registration techniques and Kalman filtering are presented and simulated. With the effects of the observer motion removed, an additional phase is implemented to track individual targets. This tracking method is demonstrated on an image stream from a buoy-mounted or periscope-mounted camera, where large inter-frame displacements are present due to the wave action on the camera. This system has been shown to be effective at tracking and predicting the global position of a planar vehicle (boat) being observed from a single, out-of-plane camera. Finally, the tracking system has been extended to a multi-target scenario.
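A compact sketch of the two-stage idea, assuming phase correlation for observer-motion estimation and a constant-velocity Kalman filter for the target; this is not the authors' implementation, and the noise covariances are illustrative:

```python
import cv2
import numpy as np

def estimate_camera_shift(prev_gray, curr_gray):
    """Global inter-frame translation from phase correlation (observer motion)."""
    (dx, dy), _ = cv2.phaseCorrelate(np.float32(prev_gray), np.float32(curr_gray))
    return dx, dy

# Constant-velocity Kalman filter for one target, state [x, y, vx, vy].
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.float32([[1, 0, 0, 0],
                                   [0, 1, 0, 0]])
kf.processNoiseCov = 1e-3 * np.eye(4, dtype=np.float32)
kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)
kf.errorCovPost = np.eye(4, dtype=np.float32)

def track_step(prev_gray, curr_gray, target_xy):
    """Compensate observer motion, then update the target track."""
    dx, dy = estimate_camera_shift(prev_gray, curr_gray)
    stabilized = np.float32([target_xy[0] - dx, target_xy[1] - dy])
    kf.predict()
    state = kf.correct(stabilized.reshape(2, 1))
    return state[0, 0], state[1, 0]
```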
Reljin, Branimir; Milosević, Zorica; Stojić, Tomislav; Reljin, Irini
2009-01-01
Two methods for segmentation and visualization of microcalcifications in digital or digitized mammograms are described. The first method is based on modern mathematical morphology, while the second uses a multifractal approach. In the first method, an appropriate combination of morphological operations yields high local contrast enhancement followed by significant suppression of background tissue, irrespective of its radiological density. Through an iterative procedure, this method strongly emphasizes only small bright details, the possible microcalcifications. In the multifractal approach, corresponding multifractal "images" are created from the initial mammogram image, from which a radiologist has the freedom to change the level of segmentation. A user-friendly computer-aided visualization (CAV) system embedding the two methods has been realized. The interactive approach enables the physician to control the level and quality of segmentation. The suggested methods were tested on mammograms from the MIAS database as a gold standard, and on material from clinical practice, using digitized films and digital images from a full-field digital mammograph.
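A small sketch of the morphological route, using a white top-hat to emphasize small bright details over the background tissue; the iterative contrast-enhancement procedure described above is simplified here to a single top-hat plus Otsu threshold:

```python
import cv2

def enhance_microcalcifications(mammogram_gray, size=9):
    """White top-hat: keeps small bright details, suppresses the smooth
    background tissue regardless of its radiological density."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (size, size))
    tophat = cv2.morphologyEx(mammogram_gray, cv2.MORPH_TOPHAT, kernel)
    # Simple global threshold on the residue; a real system would iterate.
    _, candidates = cv2.threshold(tophat, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return tophat, candidates
```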
Using digital colour to increase the realistic appearance of SEM micrographs of bloodstains.
Hortolà, Policarp
2010-10-01
Although in the scientific-research literature the micrographs from scanning electron microscopes (SEMs) are usually displayed in greyscale, the potential of the colour resources provided by SEM-coupled image-acquiring systems and, subsidiarily, by free image-manipulation software deserves to be explored as a tool for colouring SEM micrographs of bloodstains. After acquiring greyscale SEM micrographs of a human blood smear (dark red to the naked eye) on grey chert, red-toned versions were obtained manually using both the SEM-coupled image-acquiring system and free image-manipulation software, and thermal-toned versions were generated automatically using the SEM-coupled system. Red images obtained by the SEM-coupled system demonstrated lower visual-discrimination capability than the other coloured images, whereas the red images generated by the free software rendered a better magnitude of scopic information than those generated by the SEM-coupled system. The thermal-tone images, although further from the real sample colour than the red ones, not only increased their realistic appearance over the greyscale images, but also yielded the best visual-discrimination capability among all the coloured SEM micrographs, and considerably enhanced the relief effect of the SEM micrographs over both the greyscale and the red images. The application of digital colour by means of the facilities provided by an SEM-coupled image-acquiring system or, when required, by free image-manipulation software provides a user-friendly, quick, and inexpensive way of obtaining coloured SEM micrographs of bloodstains, avoiding sophisticated, time-consuming colouring procedures. Although this work focused on bloodstains, quite probably other monochromatic or quasi-monochromatic samples are also amenable to having their realistic appearance increased by colouring them using the simple methods utilized in this study.
Medical image informatics infrastructure design and applications.
Huang, H K; Wong, S T; Pietka, E
1997-01-01
Picture archiving and communication systems (PACS) is a system integration of multimodality images and health information systems designed for improving the operation of a radiology department. As it evolves, PACS becomes a hospital image document management system with a voluminous image and related data file repository. A medical image informatics infrastructure can be designed to take advantage of existing data, providing PACS with add-on value for health care service, research, and education. A medical image informatics infrastructure (MIII) consists of the following components: medical images and associated data (including PACS database), image processing, data/knowledge base management, visualization, graphic user interface, communication networking, and application oriented software. This paper describes these components and their logical connection, and illustrates some applications based on the concept of the MIII.
[Basic concept in computer assisted surgery].
Merloz, Philippe; Wu, Hao
2006-03-01
To investigate the application of medical digital imaging systems and computer technologies in orthopedics. The main computer-assisted surgery systems comprise the four following subcategories. (1) A collection and recording process for digital data on each patient, including preoperative images (CT scans, MRI, standard X-rays), intraoperative visualization (fluoroscopy, ultrasound), and intraoperative position and orientation of surgical instruments or bone sections (using 3D localisers). Data merging is based on the matching of preoperative imaging (CT scans, MRI, standard X-rays) and intraoperative visualization (anatomical landmarks, or bone surfaces digitized intraoperatively via a 3D localiser; intraoperative ultrasound images processed for delineation of bone contours). (2) In cases where only intraoperative images are used for computer-assisted surgical navigation, the calibration of the intraoperative imaging system replaces the data-merging system, which is then no longer necessary. (3) A system that provides aid in decision-making, so that the surgical approach is planned on the basis of multimodal information: interactive positioning of surgical instruments or bone sections transmitted via pre- or intraoperative images, and display of elements to guide surgical navigation (direction, axis, orientation, length and diameter of a surgical instrument, impingement, etc.). (4) A system that monitors the surgical procedure, thereby ensuring that the optimal strategy defined at the preoperative stage is taken into account. It is possible that computer-assisted orthopedic surgery systems will enable surgeons to better assess the accuracy and reliability of the various operative techniques, an indispensable stage in the optimization of surgery.
PACS-based interface for 3D anatomical structure visualization and surgical planning
NASA Astrophysics Data System (ADS)
Koehl, Christophe; Soler, Luc; Marescaux, Jacques
2002-05-01
The interpretation of radiological image is routine but it remains a rather difficult task for physicians. It requires complex mental processes, that permit translation from 2D slices into 3D localization and volume determination of visible diseases. An easier and more extensive visualization and exploitation of medical images can be reached through the use of computer-based systems that provide real help from patient admission to post-operative followup. In this way, we have developed a 3D visualization interface linked to a PACS database that allows manipulation and interaction on virtual organs delineated from CT-scan or MRI. This software provides the 3D real-time surface rendering of anatomical structures, an accurate evaluation of volumes and distances and the improvement of radiological image analysis and exam annotation through a negatoscope tool. It also provides a tool for surgical planning allowing the positioning of an interactive laparoscopic instrument and the organ resection. The software system could revolutionize the field of computerized imaging technology. Indeed, it provides a handy and portable tool for pre-operative and intra-operative analysis of anatomy and pathology in various medical fields. This constitutes the first step of the future development of augmented reality and surgical simulation systems.
Interobject grouping facilitates visual awareness.
Stein, Timo; Kaiser, Daniel; Peelen, Marius V
2015-01-01
In organizing perception, the human visual system takes advantage of regularities in the visual input to perceptually group related image elements. Simple stimuli that can be perceptually grouped based on physical regularities, for example by forming an illusory contour, have a competitive advantage in entering visual awareness. Here, we show that regularities that arise from the relative positioning of complex, meaningful objects in the visual environment also modulate visual awareness. Using continuous flash suppression, we found that pairs of objects that were positioned according to real-world spatial regularities (e.g., a lamp above a table) accessed awareness more quickly than the same object pairs shown in irregular configurations (e.g., a table above a lamp). This advantage was specific to upright stimuli and abolished by stimulus inversion, meaning that it did not reflect physical stimulus confounds or the grouping of simple image elements. Thus, knowledge of the spatial configuration of objects in the environment shapes the contents of conscious perception.
Multispectral THz-VIS passive imaging system for hidden threats visualization
NASA Astrophysics Data System (ADS)
Kowalski, Marcin; Palka, Norbert; Szustakowski, Mieczyslaw
2013-10-01
Terahertz imaging is the latest entry into the crowded field of imaging technologies, and many applications are emerging for this relatively new technology. THz radiation penetrates deep into nonpolar and nonmetallic materials such as paper, plastic, clothes, wood, and ceramics that are usually opaque at optical wavelengths. T-rays have large potential in the field of hidden-object detection because they are not harmful to humans. The main difficulty with THz imaging systems is low image quality, so it is justified to combine THz images with high-resolution images from a visible camera. An imaging system is usually composed of various subsystems, many of which use imaging devices working in different spectral ranges. Our goal is to build a system, harmless to humans, for screening and detection of hidden objects using THz and VIS cameras.
MRIVIEW: An interactive computational tool for investigation of brain structure and function
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ranken, D.; George, J.
MRIVIEW is a software system which uses image processing and visualization to provide neuroscience researchers with an integrated environment for combining functional and anatomical information. Key features of the software include semi-automated segmentation of volumetric head data and an interactive coordinate reconciliation method which utilizes surface visualization. The current system is a precursor to a computational brain atlas. We describe features this atlas will incorporate, including methods under development for visualizing brain functional data obtained from several different research modalities.
Fuzzy Logic-based expert system for evaluating cake quality of freeze-dried formulations.
Trnka, Hjalte; Wu, Jian X; Van De Weert, Marco; Grohganz, Holger; Rantanen, Jukka
2013-12-01
Freeze-drying of peptide and protein-based pharmaceuticals is an increasingly important field of research. The diverse nature of these compounds, limited understanding of excipient functionality, and difficult-to-analyze quality attributes together with the increasing importance of the biosimilarity concept complicate the development phase of safe and cost-effective drug products. To streamline the development phase and to make high-throughput formulation screening possible, efficient solutions for analyzing critical quality attributes such as cake quality with minimal material consumption are needed. The aim of this study was to develop a fuzzy logic system based on image analysis (IA) for analyzing cake quality. Freeze-dried samples with different visual quality attributes were prepared in well plates. Imaging solutions together with image analytical routines were developed for extracting critical visual features such as the degree of cake collapse, glassiness, and color uniformity. On the basis of the IA outputs, a fuzzy logic system for analysis of these freeze-dried cakes was constructed. After this development phase, the system was tested with a new screening well plate. The developed fuzzy logic-based system was found to give comparable quality scores with visual evaluation, making high-throughput classification of cake quality possible. © 2013 Wiley Periodicals, Inc. and the American Pharmacists Association.
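A toy sketch of the rule-based scoring idea, with triangular membership functions in plain NumPy-style Python; the inputs, rules, and score scale are invented for illustration and do not reproduce the study's Fuzzy Logic system:

```python
def tri(x, a, b, c):
    """Triangular membership function evaluated at point x."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def cake_quality(collapse_pct, color_uniformity):
    """Toy two-input fuzzy score; inputs are hypothetical image-analysis
    outputs (cake collapse in %, colour uniformity in [0, 1])."""
    low_collapse = tri(collapse_pct, -1, 0, 30)
    high_collapse = tri(collapse_pct, 20, 100, 101)
    uniform = tri(color_uniformity, 0.5, 1.0, 1.01)
    # Mamdani-style rules, defuzzified as a weighted average of rule outputs.
    rules = [(min(low_collapse, uniform), 9.0),   # intact, uniform cake -> high score
             (high_collapse, 2.0)]                # collapsed cake -> low score
    num = sum(w * s for w, s in rules)
    den = sum(w for w, _ in rules) + 1e-9
    return num / den

print(f"quality score = {cake_quality(10.0, 0.9):.1f} / 10")
```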
The effect of multispectral image fusion enhancement on human efficiency.
Bittner, Jennifer L; Schill, M Trent; Mohd-Zaid, Fairul; Blaha, Leslie M
2017-01-01
The visual system can be highly influenced by changes to visual presentation. Thus, numerous techniques have been developed to augment imagery in an attempt to improve human perception. The current paper examines the potential impact of one such enhancement, multispectral image fusion, where imagery captured in varying spectral bands (e.g., visible, thermal, night vision) is algorithmically combined to produce an output to strengthen visual perception. We employ ideal observer analysis over a series of experimental conditions to (1) establish a framework for testing the impact of image fusion over the varying aspects surrounding its implementation (e.g., stimulus content, task) and (2) examine the effectiveness of fusion on human information processing efficiency in a basic application. We used a set of rotated Landolt C images captured with a number of individual sensor cameras and combined across seven traditional fusion algorithms (e.g., Laplacian pyramid, principal component analysis, averaging) in a 1-of-8 orientation task. We found that, contrary to the idea of fused imagery always producing a greater impact on perception, single-band imagery can be just as influential. Additionally, efficiency data were shown to fluctuate based on sensor combination instead of fusion algorithm, suggesting the need for examining multiple factors to determine the success of image fusion. Our use of ideal observer analysis, a popular technique from the vision sciences, provides not only a standard for testing fusion in direct relation to the visual system but also allows for comparable examination of fusion across its associated problem space of application.
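One of the traditional algorithms named above, Laplacian pyramid fusion, can be sketched in a few lines; this version assumes 8-bit grayscale inputs of equal size and a magnitude-max band rule, which is a common but not the only choice:

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    """Gaussian pyramid differences plus the low-resolution base level."""
    gp = [img.astype(np.float32)]
    for _ in range(levels):
        gp.append(cv2.pyrDown(gp[-1]))
    lp = [gp[i] - cv2.pyrUp(gp[i + 1], dstsize=gp[i].shape[1::-1])
          for i in range(levels)]
    return lp + [gp[-1]]

def fuse(img_a, img_b, levels=4):
    """Keep the band-pass coefficient with larger magnitude at each level;
    average the base level, then collapse the pyramid."""
    pa, pb = laplacian_pyramid(img_a, levels), laplacian_pyramid(img_b, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)
             for a, b in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))
    out = fused[-1]
    for lap in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=lap.shape[1::-1]) + lap
    return np.clip(out, 0, 255).astype(np.uint8)
```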
TU-A-201-01: Introduction to In-Room Imaging System Characteristics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, J.
2016-06-15
Recent years have seen a widespread proliferation of available in-room image guidance systems for radiation therapy target localization, with many centers having multiple in-room options. In this session, available imaging systems for in-room IGRT will be reviewed, highlighting the main differences in workflow efficiency, targeting accuracy, and image quality as it relates to target visualization. Decision-making strategies for integrating these tools into clinical image guidance protocols that are tailored to specific disease sites like H&N, lung, pelvis, and spine SBRT will be discussed. Learning Objectives: Major system characteristics of a wide range of available in-room imaging systems for IGRT. Advantages / disadvantages of different systems for site-specific IGRT considerations. Concepts of targeting accuracy and time efficiency in designing clinical imaging protocols.
Navigation and Image Injection for Control of Bone Removal and Osteotomy Planes in Spine Surgery.
Kosterhon, Michael; Gutenberg, Angelika; Kantelhardt, Sven Rainer; Archavlis, Elefterios; Giese, Alf
2017-04-01
In contrast to cranial interventions, neuronavigation in spinal surgery is used in few applications, not tapping into its full technological potential. We have developed a method to preoperatively create virtual resection planes and volumes for spinal osteotomies and export 3-D operation plans to a navigation system controlling intraoperative visualization using a surgical microscope's head-up display. The method was developed using a Sawbone® model of the lumbar spine, demonstrating feasibility with high precision. Computed tomographic and magnetic resonance image data were imported into Amira®, a 3-D visualization software package. Resection planes were positioned, and resection volumes representing intraoperative bone removal were defined. Fused to the original Digital Imaging and Communications in Medicine data, the osteotomy planes were exported to the cranial version of a Brainlab® navigation system. A navigated surgical microscope with a video connection to the navigation system allowed intraoperative image injection to visualize the preplanned resection planes. The workflow was applied to a patient presenting with a congenital hemivertebra of the thoracolumbar spine. Dorsal instrumentation with pedicle screws and rods was followed by resection of the deformed vertebra, guided by in-view image injection of the preplanned resection planes into the optical path of the surgical microscope. Postoperatively, the patient showed no neurological deficits, and the spine was found to be restored to near-physiological posture. The intraoperative visualization of resection planes in a microscope's head-up display was found to assist the surgeon during the resection of a complex-shaped bone wedge and may help to further increase accuracy and patient safety. Copyright © 2017 by the Congress of Neurological Surgeons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsuzaki, Y; Jenkins, C; Yang, Y
Purpose: With the growing adoption of proton beam therapy there is an increasing need for effective and user-friendly tools for performing quality assurance (QA) measurements. The speed and versatility of spot-scanning proton beam (PB) therapy systems present unique challenges for traditional QA tools. To address these challenges, a proof-of-concept system was developed to visualize, in real time, the delivery of individual spots from a spot-scanning PB in order to perform QA measurements. Methods: The PB is directed toward a custom phantom with planar faces coated with a radioluminescent phosphor (Gd2O2S:Tb). As the proton beam passes through the phantom, visible light is emitted from the coating and collected by a nearby CMOS camera. The images are processed to determine the locations at which the beam impinges on each face of the phantom. By so doing, the location of each beam can be determined relative to the phantom. The cameras are also used to capture images of the laser alignment system. The phantom contains x-ray fiducials so that it can be easily located with kV imagers. Using these data, several quality assurance parameters can be evaluated. Results: The proof-of-concept system was able to visualize discrete PB spots with energies ranging from 70 MeV to 220 MeV. Images were obtained with integration times ranging from 20 to 0.019 milliseconds. If not limited by data transmission, this would correspond to a frame rate of 52,000 fps. Such frame rates enabled visualization of individual spots in real time. Spot locations were found to be highly correlated (R² = 0.99) with the nozzle-mounted spot position monitor, indicating excellent spot positioning accuracy. Conclusion: The system was shown to be capable of imaging individual spots for all clinical beam energies. Future development will focus on extending the image processing software to provide automated results for a variety of QA tests.
Visual recognition system of cherry picking robot based on Lab color model
NASA Astrophysics Data System (ADS)
Zhang, Qirong; Zuo, Jianjun; Yu, Tingzhong; Wang, Yan
2017-12-01
This paper designs a visual recognition system suitable for cherry picking. First, the system denoises the image using a vector median filter. It then extracts one channel of the Lab color model to separate the cherries from the background. The cherry contour is fitted by the least squares method, and the centroid and radius of the cherry are extracted, allowing the cherry to be successfully located.
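A hedged reconstruction of the pipeline in Python/OpenCV; a median blur stands in for the vector median filter, and the a* channel, Otsu threshold, and algebraic circle fit are assumptions consistent with the description:

```python
import cv2
import numpy as np

def find_cherry(frame_bgr):
    """Segment a red cherry via the a* channel of Lab, then fit a circle
    (centre + radius) to its contour by linear least squares."""
    denoised = cv2.medianBlur(frame_bgr, 5)  # stand-in for the vector median filter
    lab = cv2.cvtColor(denoised, cv2.COLOR_BGR2LAB)
    a_chan = lab[:, :, 1]  # red-green axis separates cherry from foliage
    _, mask = cv2.threshold(a_chan, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    pts = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(float)
    # Algebraic circle fit: x^2 + y^2 + Dx + Ey + F = 0, least-squares solution.
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    b = -(pts[:, 0] ** 2 + pts[:, 1] ** 2)
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = -D / 2, -E / 2
    radius = np.sqrt(cx ** 2 + cy ** 2 - F)
    return (cx, cy), radius
```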
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chai, X; Liu, L; Xing, L
Purpose: Visualization and processing of medical images and radiation treatment plan evaluation have traditionally been constrained to local workstations with limited computation power and limited ability for data sharing and software updates. We present a web-based image processing and planning evaluation platform (WIPPEP) for radiotherapy applications with high efficiency, ubiquitous web access, and real-time data sharing. Methods: This software platform consists of three parts: web server, image server, and computation server. Each independent server communicates with the others through HTTP requests. The web server is the key component: it provides visualization and the user interface through front-end web browsers and relays information to the backend to process user requests. The image server serves as a PACS system. The computation server performs the actual image processing and dose calculation. The web server backend is developed using Java Servlets and the frontend is developed using HTML5, JavaScript, and jQuery. The image server is based on the open source DCM4CHEE PACS system. The computation server can be written in any programming language as long as it can send/receive HTTP requests. Our computation server was implemented in Delphi, Python, and PHP, which can process data directly or via a C++ program DLL. Results: This software platform runs on a 32-core CPU server that virtually hosts the web server, image server, and computation servers separately. Users can visit our internal website with the Chrome browser, select a specific patient, visualize images and RT structures belonging to this patient, and perform image segmentation on the Delphi computation server and Monte Carlo dose calculation on the Python or PHP computation server. Conclusion: We have developed a web-based image processing and plan evaluation platform prototype for radiotherapy. This system has clearly demonstrated the feasibility of performing image processing and plan evaluation through a web browser and exhibits potential for future cloud-based radiotherapy.
VirGO: A Visual Browser for the ESO Science Archive Facility
NASA Astrophysics Data System (ADS)
Chéreau, F.
2008-08-01
VirGO is the next generation Visual Browser for the ESO Science Archive Facility developed by the Virtual Observatory (VO) Systems Department. It is a plug-in for the popular open source software Stellarium adding capabilities for browsing professional astronomical data. VirGO gives astronomers the possibility to easily discover and select data from millions of observations in a new visual and intuitive way. Its main feature is to perform real-time access and graphical display of a large number of observations by showing instrumental footprints and image previews, and to allow their selection and filtering for subsequent download from the ESO SAF web interface. It also allows the loading of external FITS files or VOTables, the superimposition of Digitized Sky Survey (DSS) background images, and the visualization of the sky in a `real life' mode as seen from the main ESO sites. All data interfaces are based on Virtual Observatory standards which allow access to images and spectra from external data centers, and interaction with the ESO SAF web interface or any other VO applications supporting the PLASTIC messaging system. The main website for VirGO is at http://archive.eso.org/cms/virgo.
Sun, Mingzhu; Xu, Hui; Zeng, Xingjuan; Zhao, Xin
2017-01-01
There are various fascinating phenomena in biological pattern formation. Mathematical modeling using reaction-diffusion partial differential equation systems is employed to study the mechanisms of pattern formation. However, model parameter selection is both difficult and time consuming. In this paper, a visual feedback simulation framework is proposed to calculate the parameters of a mathematical model automatically based on the basic principle of feedback control. In the simulation framework, the simulation results are visualized, and image features are extracted as the system feedback. The unknown model parameters are then obtained by comparing the image features of the simulated image with those of the target biological pattern. Considering two typical applications, the visual feedback simulation framework is applied to pattern formation simulations for vascular mesenchymal cells and lung development. Within the framework, the spot, stripe, and labyrinthine patterns of vascular mesenchymal cells, as well as the normal branching pattern and a branching pattern lacking side branching for lung development, are obtained in a finite number of iterations. The simulation results indicate that it is easy to achieve the simulation targets, especially when the simulated patterns are sensitive to the model parameters. Moreover, this simulation framework can be extended to other types of biological pattern formation. PMID:28225811
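A minimal sketch of the visual-feedback loop under stated assumptions: a Gray-Scott reaction-diffusion model as the simulator, spot count as the extracted image feature, and a simple increment rule as the feedback; none of these specifics come from the paper, and the parameter values are illustrative:

```python
import numpy as np
from scipy import ndimage

def gray_scott(F, k, steps=5000, n=128):
    """Minimal Gray-Scott reaction-diffusion run; returns the V field."""
    U, V = np.ones((n, n)), np.zeros((n, n))
    U[n//2-5:n//2+5, n//2-5:n//2+5] = 0.5   # seed a perturbation in the centre
    V[n//2-5:n//2+5, n//2-5:n//2+5] = 0.25
    lap = lambda Z: (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
                     np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)
    for _ in range(steps):
        uvv = U * V * V
        U += 0.16 * lap(U) - uvv + F * (1 - U)
        V += 0.08 * lap(V) + uvv - (F + k) * V
    return V

def spot_count(V, thresh=0.2):
    """Image feature used as feedback: number of connected bright regions."""
    _, num = ndimage.label(V > thresh)
    return num

# Feedback loop: nudge the feed rate F until the simulated pattern shows
# the target number of spots (all values here are illustrative).
F, target = 0.035, 20
for _ in range(10):
    n_spots = spot_count(gray_scott(F, k=0.062))
    if n_spots == target:
        break
    F += 0.001 if n_spots < target else -0.001
```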
Research on flight stability performance of rotor aircraft based on visual servo control method
NASA Astrophysics Data System (ADS)
Yu, Yanan; Chen, Jing
2016-11-01
A control method based on visual servo feedback is proposed, which is used to improve the attitude of a quad-rotor aircraft and to enhance its flight stability. Ground target images are obtained by a visual platform fixed on the aircraft. The scale-invariant feature transform (SIFT) algorithm is used to extract image feature information. Based on the image characteristic analysis, fast motion estimation is performed and used as an input signal to a PID flight control system to realize real-time attitude adjustment in flight. Imaging tests and simulation results show that the proposed method performs well in terms of flight stability compensation and attitude adjustment. The response speed and control precision meet the requirements of actual use, and the method is able to reduce or even eliminate the influence of environmental disturbances, giving it research value for solving the problem of an aircraft's disturbance rejection.
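A short sketch of the SIFT-based motion-estimation front end, whose output could feed a PID loop as described; the matcher settings and the median-shift summary are illustrative choices, not the paper's:

```python
import cv2
import numpy as np

def estimate_motion(prev_gray, curr_gray):
    """Inter-frame motion from SIFT matches: returns the median (dx, dy)
    shift, usable as a feedback signal to an attitude/position controller."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(prev_gray, None)
    kp2, des2 = sift.detectAndCompute(curr_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    # Lowe's ratio test rejects ambiguous correspondences.
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    shifts = np.array([np.subtract(kp2[m.trainIdx].pt, kp1[m.queryIdx].pt)
                       for m in good])
    return np.median(shifts, axis=0) if len(shifts) else np.zeros(2)
```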
Interactive visualization tools for the structural biologist.
Porebski, Benjamin T; Ho, Bosco K; Buckle, Ashley M
2013-10-01
In structural biology, management of a large number of Protein Data Bank (PDB) files and raw X-ray diffraction images often presents a major organizational problem. Existing software packages that manipulate these file types were not designed for these kinds of file-management tasks. This is typically encountered when browsing through a folder of hundreds of X-ray images, with the aim of rapidly inspecting the diffraction quality of a data set. To solve this problem, a useful functionality of the Macintosh operating system (OSX) has been exploited that allows custom visualization plugins to be attached to certain file types. Software plugins have been developed for diffraction images and PDB files, which in many scenarios can save considerable time and effort. The direct visualization of diffraction images and PDB structures in the file browser can be used to identify key files of interest simply by scrolling through a list of files.
Design of a reading test for low-vision image warping
NASA Astrophysics Data System (ADS)
Loshin, David S.; Wensveen, Janice; Juday, Richard D.; Barton, R. Shane
1993-08-01
NASA and the University of Houston College of Optometry are examining the efficacy of image warping as a possible prosthesis for at least two forms of low vision -- maculopathy and retinitis pigmentosa. Before incurring the expense of reducing the concept to practice, one would wish to have confidence that a worthwhile improvement in visual function would result. NASA's Programmable Remapper (PR) can warp an input image onto arbitrary geometric coordinate systems at full video rate, and it has recently been upgraded to accept computer- generated video text. We have integrated the Remapper with an SRI eye tracker to simulate visual malfunction in normal observers. A reading performance test has been developed to determine if the proposed warpings yield an increase in visual function; i.e., reading speed. We describe the preliminary experimental results of this reading test with a simulated central field defect with and without remapped images.
Design of a reading test for low vision image warping
NASA Technical Reports Server (NTRS)
Loshin, David S.; Wensveen, Janice; Juday, Richard D.; Barton, R. S.
1993-01-01
NASA and the University of Houston College of Optometry are examining the efficacy of image warping as a possible prosthesis for at least two forms of low vision - maculopathy and retinitis pigmentosa. Before incurring the expense of reducing the concept to practice, one would wish to have confidence that a worthwhile improvement in visual function would result. NASA's Programmable Remapper (PR) can warp an input image onto arbitrary geometric coordinate systems at full video rate, and it has recently been upgraded to accept computer-generated video text. We have integrated the Remapper with an SRI eye tracker to simulate visual malfunction in normal observers. A reading performance test has been developed to determine if the proposed warpings yield an increase in visual function; i.e., reading speed. We will describe the preliminary experimental results of this reading test with a simulated central field defect with and without remapped images.
A method for automatically abstracting visual documents
NASA Technical Reports Server (NTRS)
Rorvig, Mark E.
1994-01-01
Visual documents--motion sequences on film, videotape, and digital recordings--constitute a major source of information for the Space Agency, as well as for all other government and private sector entities. This article describes a method for automatically selecting key frames from visual documents. These frames may in turn be used to represent the total image sequence of visual documents in visual libraries, hypermedia systems, and training. The algorithm reduces 51 minutes of video sequences to 134 frames, a reduction of information in the range of 700:1.
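A hedged sketch of one common key-frame selection heuristic, histogram-difference shot detection; the article's actual algorithm is not described in enough detail to reproduce, so this is a stand-in:

```python
import cv2

def key_frames(video_path, threshold=0.4):
    """Select key frames where the grey-level histogram changes sharply
    (a simple shot-boundary heuristic, not the paper's exact method)."""
    cap = cv2.VideoCapture(video_path)
    keys, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        hist = cv2.calcHist([gray], [0], None, [64], [0, 256])
        hist = cv2.normalize(hist, None).flatten()
        if prev_hist is None or cv2.compareHist(
                prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA) > threshold:
            keys.append(idx)  # this frame represents a new shot
            prev_hist = hist
        idx += 1
    cap.release()
    return keys
```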
Vergara, Gaston R; Vijayakumar, Sathya; Kholmovski, Eugene G; Blauer, Joshua J E; Guttman, Mike A; Gloschat, Christopher; Payne, Gene; Vij, Kamal; Akoum, Nazem W; Daccarett, Marcos; McGann, Christopher J; Macleod, Rob S; Marrouche, Nassir F
2011-02-01
Magnetic resonance imaging (MRI) allows visualization of the location and extent of radiofrequency (RF) ablation lesions and myocardial scar formation, as well as real-time (RT) assessment of lesion formation. In this study, we report a novel 3-Tesla RT-MRI based porcine RF ablation model and visualization of lesion formation in the atrium during RF energy delivery. The purpose of this study was to develop a 3-Tesla RT MRI-based catheter ablation and lesion visualization system. RF energy was delivered to six pigs under RT MRI guidance. A novel MRI-compatible mapping and ablation catheter was used. Under RT MRI, this catheter was safely guided and positioned within either the left or right atrium. Unipolar and bipolar electrograms were recorded. The catheter tip-tissue interface was visualized with a T1-weighted gradient echo sequence. RF energy was then delivered in a power-controlled fashion. Myocardial changes and lesion formation were visualized with a T2-weighted (T2W) half-Fourier acquisition with single-shot turbo spin echo (HASTE) sequence during ablation. RT visualization of lesion formation was achieved in 30% of the ablations performed. In the other cases, either the lesion was formed outside the imaged region (25%) or the lesion was not created (45%), presumably due to poor tissue-catheter tip contact. The presence of lesions was confirmed by late gadolinium enhancement MRI and macroscopic tissue examination. MRI-compatible catheters can be navigated and RF energy safely delivered under 3-Tesla RT MRI guidance. Recording electrograms during RT imaging is also feasible. RT visualization of the lesion as it forms during RF energy delivery is possible and was demonstrated using T2W HASTE imaging. Copyright © 2011 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.
Noel, Camille E; Parikh, Parag J; Spencer, Christopher R; Green, Olga L; Hu, Yanle; Mutic, Sasa; Olsen, Jeffrey R
2015-01-01
Onboard magnetic resonance imaging (OB-MRI) for daily localization and adaptive radiotherapy has been under development by several groups. However, no clinical studies have evaluated whether OB-MRI improves visualization of the target and organs at risk (OARs) compared to standard onboard computed tomography (OB-CT). This study compared visualization of patient anatomy on images acquired on the MRI-(60)Co ViewRay system to those acquired with OB-CT. Fourteen patients enrolled on a protocol approved by the Institutional Review Board (IRB) and undergoing image-guided radiotherapy for cancer in the thorax (n = 2), pelvis (n = 6), abdomen (n = 3) or head and neck (n = 3) were imaged with OB-MRI and OB-CT. For each of the 14 patients, the OB-MRI and OB-CT datasets were displayed side-by-side and independently reviewed by three radiation oncologists. Each physician was asked to evaluate which dataset offered better visualization of the target and OARs. A quantitative contouring study was performed on two abdominal patients to assess if OB-MRI could offer improved inter-observer segmentation agreement for adaptive planning. In total 221 OARs and 10 targets were compared for visualization on OB-MRI and OB-CT by each of the three physicians. The majority of physicians (two or more) evaluated visualization on MRI as better for 71% of structures, worse for 10% of structures, and equivalent for 14% of structures. 5% of structures were not visible on either. Physicians agreed unanimously for 74% and in majority for > 99% of structures. Targets were better visualized on MRI in 4/10 cases, and never on OB-CT. Low-field MR provides better anatomic visualization of many radiotherapy targets and most OARs as compared to OB-CT. Further studies with OB-MRI should be pursued.
An adaptive block-based fusion method with LUE-SSIM for multi-focus images
NASA Astrophysics Data System (ADS)
Zheng, Jianing; Guo, Yongcai; Huang, Yukun
2016-09-01
Because of the lenses' limited depth of field, digital cameras are incapable of acquiring an all-in-focus image of objects at varying distances in a scene. Multi-focus image fusion techniques can effectively solve this problem. A known problem with block-based multi-focus image fusion methods is that blocking artifacts often occur. An adaptive block-based fusion method based on lifting undistorted-edge structural similarity (LUE-SSIM) is put forward. In this method, the image quality metric LUE-SSIM is first proposed, which utilizes characteristics of the human visual system (HVS) and structural similarity (SSIM) to make the metric consistent with human visual perception. A particle swarm optimization (PSO) algorithm with LUE-SSIM as its objective function is used to optimize the block size for constructing the fused image. Experimental results on the LIVE image database show that LUE-SSIM outperforms SSIM in quality assessment of Gaussian defocus-blurred images. In addition, a multi-focus image fusion experiment is carried out to verify the proposed fusion method in terms of visual and quantitative evaluation. The results show that the proposed method performs better than some other block-based methods, especially in reducing blocking artifacts in the fused image, and it effectively preserves the undistorted edge details in the focused regions of the source images.
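A simplified sketch of block-based multi-focus fusion with a fixed block size and a Laplacian-energy focus measure; the paper's LUE-SSIM metric and PSO block-size optimization are not reproduced here:

```python
import cv2
import numpy as np

def fuse_multifocus(img_a, img_b, block=32):
    """Per-block multi-focus fusion: for each block, keep the source whose
    Laplacian energy (a focus measure) is larger. Assumes grayscale inputs
    of equal size; a fixed block size stands in for the optimized one."""
    def focus(img):
        return cv2.Laplacian(img.astype(np.float32), cv2.CV_32F) ** 2
    fa, fb = focus(img_a), focus(img_b)
    out = img_a.copy()
    h, w = img_a.shape[:2]
    for y in range(0, h, block):
        for x in range(0, w, block):
            sl = (slice(y, min(y + block, h)), slice(x, min(x + block, w)))
            if fb[sl].sum() > fa[sl].sum():
                out[sl] = img_b[sl]
    return out
```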
Walking modulates speed sensitivity in Drosophila motion vision.
Chiappe, M Eugenia; Seelig, Johannes D; Reiser, Michael B; Jayaraman, Vivek
2010-08-24
Changes in behavioral state modify neural activity in many systems. In some vertebrates such modulation has been observed and interpreted in the context of attention and sensorimotor coordinate transformations. Here we report state-dependent activity modulations during walking in a visual-motor pathway of Drosophila. We used two-photon imaging to monitor intracellular calcium activity in motion-sensitive lobula plate tangential cells (LPTCs) in head-fixed Drosophila walking on an air-supported ball. Cells of the horizontal system (HS)--a subgroup of LPTCs--showed stronger calcium transients in response to visual motion when flies were walking rather than resting. The amplified responses were also correlated with walking speed. Moreover, HS neurons showed a relatively higher gain in response strength at higher temporal frequencies, and their optimum temporal frequency was shifted toward higher motion speeds. Walking-dependent modulation of HS neurons in the Drosophila visual system may constitute a mechanism to facilitate processing of higher image speeds in behavioral contexts where these speeds of visual motion are relevant for course stabilization. Copyright 2010 Elsevier Ltd. All rights reserved.
Suzurikawa, Jun; Tani, Toshiki; Nakao, Masayuki; Tanaka, Shigeru; Takahashi, Hirokazu
2009-12-01
Recently, intrinsic signal optical imaging has been widely used as a routine procedure for visualizing cortical functional maps. We do not, however, have a well-established imaging method for visualizing cortical functional connectivity, i.e., the spatio-temporal patterns of activity propagation in the cerebral cortex. In the present study, we developed a novel experimental setup for investigating the propagation of neural activity, combining the intracortical microstimulation (ICMS) technique with voltage-sensitive dye (VSD) imaging, and demonstrated the feasibility of this setup by applying it to the measurement of time-dependent intra- and inter-hemispheric spread of ICMS-evoked excitation in the cat visual cortices, areas 17 and 18. A microelectrode array for the ICMS was inserted with a specially designed easy-to-detach electrode holder around the 17/18 transition zones (TZs), where the left and right hemispheres are interconnected via the corpus callosum. The microelectrode array was stably anchored in agarose without any holder, which enabled us to visualize evoked activity even in the vicinity of the penetration sites, as well as across a wide recording region covering parts of both hemispheres. VSD imaging successfully visualized ICMS-evoked excitation and subsequent propagation in the visual cortices contralateral as well as ipsilateral to the ICMS. Using the orientation maps as positional references, we showed that the activity propagation patterns were consistent with previously reported anatomical patterns of intracortical and interhemispheric connections. This finding indicates that our experimental system can serve for the investigation of cortical functional connectivity.
Deib, Gerard; Johnson, Alex; Unberath, Mathias; Yu, Kevin; Andress, Sebastian; Qian, Long; Osgood, Gregory; Navab, Nassir; Hui, Ferdinand; Gailloud, Philippe
2018-05-30
Optical see-through head mounted displays (OST-HMDs) offer a mixed reality (MixR) experience with unhindered procedural site visualization during procedures using high resolution radiographic imaging. This technical note describes our preliminary experience with percutaneous spine procedures utilizing OST-HMD as an alternative to traditional angiography suite monitors. MixR visualization was achieved using the Microsoft HoloLens system. Various spine procedures (vertebroplasty, kyphoplasty, and percutaneous discectomy) were performed on a lumbar spine phantom with commercially available devices. The HMD created a real time MixR environment by superimposing virtual posteroanterior and lateral views onto the interventionalist's field of view. The procedures were filmed from the operator's perspective. Videos were reviewed to assess whether key anatomic landmarks and materials were reliably visualized. Dosimetry and procedural times were recorded. The operator completed a questionnaire following each procedure, detailing benefits, limitations, and visualization mode preferences. Percutaneous vertebroplasty, kyphoplasty, and discectomy procedures were successfully performed using OST-HMD image guidance on a lumbar spine phantom. Dosimetry and procedural time compared favorably with typical procedural times. Conventional and MixR visualization modes were equally effective in providing image guidance, with key anatomic landmarks and materials reliably visualized. This preliminary study demonstrates the feasibility of utilizing OST-HMDs for image guidance in interventional spine procedures. This novel visualization approach may serve as a valuable adjunct tool during minimally invasive percutaneous spine treatment. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Backscatter absorption gas imaging system
McRae, Jr., Thomas G.
1985-01-01
A video imaging system for detecting hazardous gas leaks. Visual displays of invisible gas clouds are produced by radiation augmentation of the field of view of an imaging device by radiation corresponding to an absorption line of the gas to be detected. The field of view of an imager is irradiated by a laser. The imager receives both backscattered laser light and background radiation. When a detectable gas is present, the backscattered laser light is highly attenuated, producing a region of contrast or shadow on the image. A flying spot imaging system is utilized to synchronously irradiate and scan the area to lower laser power requirements. The imager signal is processed to produce a video display.
Automatic optimization high-speed high-resolution OCT retinal imaging at 1μm
NASA Astrophysics Data System (ADS)
Cua, Michelle; Liu, Xiyun; Miao, Dongkai; Lee, Sujin; Lee, Sieun; Bonora, Stefano; Zawadzki, Robert J.; Mackenzie, Paul J.; Jian, Yifan; Sarunic, Marinko V.
2015-03-01
High-resolution OCT retinal imaging is important for visualizing retinal structures and helping researchers better understand the pathogenesis of vision-robbing diseases. However, conventional optical coherence tomography (OCT) systems face a trade-off between lateral resolution and depth of focus. In this report, we present the development of a focus-stacking OCT system with automatic optimization for high-resolution, extended-focal-range clinical retinal imaging. A variable-focus liquid lens was added to correct for defocus in real time. GPU-accelerated segmentation and optimization were used to provide real-time, layer-specific en face visualization as well as depth-specific focus adjustment. After optimization, multiple volumes focused at different depths were acquired, registered, and stitched together to yield a single, high-resolution focus-stacked dataset. Using this system, we show high-resolution images of the optic nerve head (ONH), from which we extracted clinically relevant parameters such as nerve fiber layer thickness and lamina cribrosa microarchitecture.
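The focus-stacking step can be illustrated compactly: among registered volumes focused at different depths, keep each voxel from the volume in which it appears sharpest. A minimal sketch using local variance as the sharpness proxy; the paper's actual registration, stitching, and optimization pipeline is considerably more involved:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def focus_stack(volumes: list[np.ndarray], window: int = 5) -> np.ndarray:
    """Merge co-registered volumes focused at different depths.

    For each voxel, keep the value from the volume with the highest
    local intensity variance (a simple sharpness proxy).
    """
    stack = np.stack([v.astype(float) for v in volumes])
    # Local variance over a small window: E[x^2] - E[x]^2.
    mean = np.stack([uniform_filter(v, window) for v in stack])
    mean_sq = np.stack([uniform_filter(v * v, window) for v in stack])
    sharpness = mean_sq - mean ** 2
    best = np.argmax(sharpness, axis=0)          # winning volume per voxel
    return np.take_along_axis(stack, best[None, ...], axis=0)[0]
```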
In situ real-time imaging of self-sorted supramolecular nanofibres
NASA Astrophysics Data System (ADS)
Onogi, Shoji; Shigemitsu, Hajime; Yoshii, Tatsuyuki; Tanida, Tatsuya; Ikeda, Masato; Kubota, Ryou; Hamachi, Itaru
2016-08-01
Self-sorted supramolecular nanofibres—a multicomponent system that consists of several types of fibre, each composed of distinct building units—play a crucial role in complex, well-organized systems with sophisticated functions, such as living cells. Designing and controlling self-sorting events in synthetic materials and understanding their structures and dynamics in detail are important elements in developing functional artificial systems. Here, we describe the in situ real-time imaging of self-sorted supramolecular nanofibre hydrogels consisting of a peptide gelator and an amphiphilic phosphate. The use of appropriate fluorescent probes enabled the visualization of self-sorted fibres entangled in two and three dimensions through confocal laser scanning microscopy and super-resolution imaging, with 80 nm resolution. In situ time-lapse imaging showed that the two types of fibre have different formation rates and that their respective physicochemical properties remain intact in the gel. Moreover, we directly visualized stochastic non-synchronous fibre formation and observed a cooperative mechanism.
Image manipulation software portable on different hardware platforms: what is the cost?
NASA Astrophysics Data System (ADS)
Ligier, Yves; Ratib, Osman M.; Funk, Matthieu; Perrier, Rene; Girard, Christian; Logean, Marianne
1992-07-01
A hospital-wide PACS project is currently under development at the University Hospital of Geneva. The visualization and manipulation of images provided by different imaging modalities constitutes one of the most challenging components of a PACS. Because requirements differ with clinical usage, such visualization software had to be provided on different types of workstations in different sectors of the PACS. The user interface has to be the same regardless of the underlying workstation. Besides, in addition to a standard set of image manipulation and processing tools, there is a need for more specific clinical tools that should be easily adapted to specific medical requirements. To achieve this, the visualization software has been developed to run on two different operating and windowing systems, the standard Unix/X-11/OSF-Motif based workstations and the Macintosh family, and should be easily portable to other systems. This paper describes the design of such a system and discusses the extra cost and effort involved in the development of portable and easily expandable software.
NASA Astrophysics Data System (ADS)
Cohen, Noam; Schejter, Adi; Farah, Nairouz; Shoham, Shy
2016-03-01
Studying the responses of retinal ganglion cell (RGC) populations has major significance in vision research. Multiphoton imaging of optogenetic probes has recently become the leading approach for visualizing neural populations and has specific advantages for imaging retinal activity during visual stimulation, because it leads to reduced direct photoreceptor excitation. However, multiphoton retinal activity imaging is not straightforward: point-by-point scanning leads to repeated neural excitation while optical access through the rodent eye in vivo has proven highly challenging. Here, we present two enabling optical designs for multiphoton imaging of responses to visual stimuli in mouse retinas expressing calcium indicators. First, we present an imaging solution based on Scanning Line Temporal Focusing (SLITE) for rapidly imaging neuronal activity in vitro. In this design, we scan a temporally focused line rather than a point, increasing the scan speed and reducing the impact of repeated excitation, while maintaining high optical sectioning. Second, we present the first in vivo demonstration of two-photon imaging of RGC activity in the mouse retina. To obtain these cellular resolution recordings we integrated an illumination path into a correction-free imaging system designed using an optical model of the mouse eye. This system can image at multiple depths using an electronically tunable lens integrated into its optical path. The new optical designs presented here overcome a number of outstanding obstacles, allowing the study of rapid calcium- and potentially even voltage-indicator signals both in vitro and in vivo, thereby bringing us a step closer toward distributed monitoring of action potentials.
Visual White Matter Integrity in Schizophrenia
Butler, Pamela D.; Hoptman, Matthew J.; Nierenberg, Jay; Foxe, John J.; Javitt, Daniel C.; Lim, Kelvin O.
2007-01-01
Objective Patients with schizophrenia have visual-processing deficits. This study examines visual white matter integrity as a potential mechanism for these deficits. Method Diffusion tensor imaging was used to examine white matter integrity at four levels of the visual system in 17 patients with schizophrenia and 21 comparison subjects. The levels examined were the optic radiations, the striate cortex, the inferior parietal lobule, and the fusiform gyrus. Results Schizophrenia patients showed a significant decrease in fractional anisotropy in the optic radiations but not in any other region. Conclusions This finding indicates that white matter integrity is more impaired at initial input, rather than at higher levels of the visual system, and supports the hypothesis that visual-processing deficits occur at the early stages of processing. PMID:17074957
Ray-based approach to integrated 3D visual communication
NASA Astrophysics Data System (ADS)
Naemura, Takeshi; Harashima, Hiroshi
2001-02-01
For a high sense of reality in next-generation communications, it is very important to realize three-dimensional (3D) spatial media in place of existing 2D image media. In order to deal comprehensively with a variety of 3D visual data formats, the authors first introduce the concept of "Integrated 3D Visual Communication," which reflects the necessity of developing a neutral representation method independent of input/output systems. The discussion then concentrates on the ray-based approach to this concept, in which any visual sensation is considered to be derived from a set of light rays. This approach offers a simple and straightforward answer to the problem of how to represent 3D space, an issue shared by various fields including 3D image communications, computer graphics, and virtual reality. This paper presents several developments in this approach, including efficient methods of representing ray data, a real-time video-based rendering system, an interactive rendering system based on integral photography, a concept of virtual object surface for the compression of the tremendous amount of ray data, and a light-ray capturing system using a telecentric lens. Experimental results demonstrate the effectiveness of the proposed techniques.
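The ray-based idea is commonly made concrete with the two-plane light-field parameterization, in which a ray is indexed by its intersections (u, v) and (s, t) with two parallel planes. A toy sketch of such a ray store; the class, the nearest-neighbour lookup, and the array layout are illustrative assumptions, not the authors' representation:

```python
import numpy as np

class TwoPlaneLightField:
    """Toy ray store using the two-plane (u, v, s, t) parameterization.

    rays[u, v, s, t] holds the RGB radiance of the ray passing through
    point (u, v) on the camera plane and (s, t) on the focal plane.
    """
    def __init__(self, rays: np.ndarray):
        self.rays = rays  # shape: (U, V, S, T, 3)

    def sample(self, u: float, v: float, s: float, t: float) -> np.ndarray:
        # Nearest-neighbour lookup with coordinates in [0, 1]; a real
        # renderer would interpolate quadrilinearly over all four axes.
        U, V, S, T, _ = self.rays.shape
        idx = tuple(int(round(x * (n - 1)))
                    for x, n in ((u, U), (v, V), (s, S), (t, T)))
        return self.rays[idx]

# Synthesizing a new view amounts to evaluating one ray per output pixel.
lf = TwoPlaneLightField(np.random.rand(8, 8, 64, 64, 3))
pixel = lf.sample(0.5, 0.5, 0.25, 0.75)
```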
Tey, Wei Keat; Kuang, Ye Chow; Ooi, Melanie Po-Leen; Khoo, Joon Joon
2018-03-01
Interstitial fibrosis in renal biopsy samples is a scarring tissue structure that pathologists quantify visually as an indicator of the presence and extent of chronic kidney disease. The standard method of quantification by visual evaluation presents reproducibility issues in the diagnoses due to the uncertainties in human judgement. This study proposes an automated quantification system for measuring the amount of interstitial fibrosis in renal biopsy images as a consistent basis of comparison among pathologists. The system identifies the renal tissue structures through knowledge-based rules employing colour space transformations and structural feature extraction from the images; in particular, renal glomerulus identification is based on multiscale textural feature analysis and a support vector machine. The regions of the biopsy representing interstitial fibrosis are deduced by eliminating non-interstitial-fibrosis structures from the biopsy area and quantified as a percentage of the total area of the biopsy sample. The experiments evaluate the system in terms of quantification accuracy, intra- and inter-observer variability in visual quantification by pathologists, and the effect of the automated quantification system on the pathologists' diagnoses. A 40-image ground truth dataset was manually prepared in consultation with an experienced pathologist to validate the segmentation algorithms, and the accuracy of the proposed system was compared against the pathologists' quantification results. Experiments involving experienced pathologists demonstrated an average error of 9 percentage points between the automated quantification and the pathologists' visual evaluation. Experiments investigating variability among pathologists, involving samples from 70 kidney patients, showed the automated quantification error rate to be on par with the average intra-observer variability in pathologists' quantification. The correlation between different pathologists' estimates of interstitial fibrosis area improved significantly, demonstrating the effectiveness of the quantification system as a diagnostic aid. Copyright © 2017 Elsevier B.V. All rights reserved.
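The quantification-by-elimination step reduces to straightforward mask arithmetic once the non-fibrosis structures have been segmented. A minimal sketch, assuming the upstream segmentation stages already produced boolean masks:

```python
import numpy as np

def fibrosis_percentage(biopsy_mask: np.ndarray,
                        structure_masks: list[np.ndarray]) -> float:
    """Quantify interstitial fibrosis by elimination, as a percentage.

    biopsy_mask     -- boolean mask of the whole tissue area
    structure_masks -- boolean masks of non-fibrosis structures
                       (glomeruli, tubules, vessels, ...) produced by
                       the preceding segmentation stages
    """
    remaining = biopsy_mask.copy()
    for mask in structure_masks:
        remaining &= ~mask          # eliminate each identified structure
    return 100.0 * remaining.sum() / biopsy_mask.sum()
```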
Facial recognition using multisensor images based on localized kernel eigen spaces.
Gundimada, Satyanadh; Asari, Vijayan K
2009-06-01
A feature selection technique along with an information fusion procedure for improving the recognition accuracy of a visual and thermal image-based facial recognition system is presented in this paper. A novel modular kernel eigenspaces approach is developed and implemented on the phase congruency feature maps extracted from the visual and thermal images individually. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features. These features are then projected into higher dimensional spaces using kernel methods. The proposed localized nonlinear feature selection procedure helps to overcome the bottlenecks of illumination variations, partial occlusions, expression variations and variations due to temperature changes that affect the visual and thermal face recognition techniques. AR and Equinox databases are used for experimentation and evaluation of the proposed technique. The proposed feature selection procedure has greatly improved the recognition accuracy for both the visual and thermal images when compared to conventional techniques. Also, a decision level fusion methodology is presented which along with the feature selection procedure has outperformed various other face recognition techniques in terms of recognition accuracy.
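The modular eigenspace idea, projecting each facial sub-region into its own kernel eigenspace and concatenating the projections, can be sketched with scikit-learn's KernelPCA as a stand-in for the paper's formulation; the array shapes, component count, and RBF kernel choice are assumptions:

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def modular_kernel_features(patches: np.ndarray,
                            n_components: int = 20) -> np.ndarray:
    """Project each facial sub-region into its own kernel eigenspace.

    patches -- array of shape (n_samples, n_modules, patch_dim) holding
               phase-congruency features from predefined sub-regions.
    Returns per-module projections concatenated into one feature vector.
    """
    n_samples, n_modules, _ = patches.shape
    projections = []
    for m in range(n_modules):
        kpca = KernelPCA(n_components=n_components, kernel="rbf")
        projections.append(kpca.fit_transform(patches[:, m, :]))
    return np.hstack(projections)   # (n_samples, n_modules * n_components)
```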
3D visualization and stereographic techniques for medical research and education.
Rydmark, M; Kling-Petersen, T; Pascher, R; Philip, F
2001-01-01
While computers have long been able to work with true 3D models, the same does not generally apply to their users. Over the years, a number of 3D visualization techniques have been developed to enable a scientist or a student to see not only a flat representation of an object but also an approximation of its Z-axis. In addition to the traditional flat image representation of a 3D object, at least four established methodologies exist:

Stereo pairs. Using image analysis tools or 3D software, a pair of images can be made representing the left and right eye views of an object. Placed next to each other and viewed through a separator, the three-dimensionality of the object can be perceived. While this is usually done with still images, tests at Mednet have shown it to work with interactively animated models as well. However, this technique requires some training and experience.

Pseudo-3D, such as VRML or QuickTime VR, where interactive manipulation of a 3D model lets the user gain a sense of the model's true proportions. While this technique works reasonably well, it is not a true stereographic visualization technique.

Red/green separation, i.e. "the traditional 3D image", where red and green representations of a model are superimposed at an angle corresponding to the viewing angle of the eyes; using a matching set of eyeglasses, a person can form a mental 3D image. The end result does produce a sense of 3D, but the effect is difficult to maintain.

Alternating left/right eye systems. These systems (typified by the StereoGraphics CrystalEyes system) let the computer display a "left eye" image followed by a "right eye" image while simultaneously triggering the eyepiece to alternately make one eye "blind". When run at 60 Hz or higher, the brain fuses the left/right images together and the user effectively sees a 3D object. Depending on configuration, alternating systems run at between 50 and 60 Hz, creating a flickering effect that is strenuous in prolonged use.

All of the above have one or more drawbacks, such as high cost, poor quality, or localized use. A fifth system, recently released by Barco Systems, modifies the CrystalEyes approach by projecting two superimposed images using polarized light, with the wave plane of the left image at a right angle to that of the right image. With polarized glasses, each eye sees the appropriate image and true stereographic vision is achieved. While the system requires very expensive hardware, it solves some of the more important problems mentioned above, offering higher frame rates and the ability to display images to a large audience. Mednet has instigated a research project using reconstructed models from the central nervous system (human brain and basal ganglia, cortex, dendrites and dendritic spines) and the peripheral nervous system (nodes of Ranvier and axoplasmic areas). The aim is to adapt the models to the different visualization techniques mentioned above and compare a group of users' perceived degree of 3D for each technique.
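The red/green separation technique described above is simple enough to sketch directly: each eye's view is written into a different colour channel of a single image. A minimal sketch for an aligned 8-bit stereo pair:

```python
import numpy as np

def red_green_anaglyph(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Build a red/green anaglyph from an aligned stereo pair.

    left, right -- (H, W, 3) uint8 views for the left and right eye.
    Red/green glasses route each channel to the matching eye, letting
    the brain fuse the pair into a single 3D percept.
    """
    out = np.zeros_like(left)
    out[..., 0] = left.mean(axis=2).astype(np.uint8)   # red   <- left eye
    out[..., 1] = right.mean(axis=2).astype(np.uint8)  # green <- right eye
    return out
```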
System for objective assessment of image differences in digital cinema
NASA Astrophysics Data System (ADS)
Fliegel, Karel; Krasula, Lukáš; Páta, Petr; Myslík, Jiří; Pecák, Josef; Jícha, Marek
2014-09-01
There is high demand for quick digitization and subsequent image restoration of archived film records. Digitization is very urgent in many cases because various invaluable pieces of cultural heritage are stored on aging media. Only selected records can be reconstructed perfectly using painstaking manual or semi-automatic procedures. This paper aims to establish the quality requirements on the restoration process needed to obtain acceptably close visual perception of the digitally restored film in comparison with the original analog film copy. This knowledge is very important for preserving the original artistic intention of the movie producers. A subjective experiment with artificially distorted images was conducted to determine the visual impact of common image distortions in digital cinema. Typical color and contrast distortions were introduced and test images were presented to viewers using a digital projector. Based on the outcome of this subjective evaluation, a system for objective assessment of image distortions has been developed and its performance tested. The system utilizes a calibrated digital single-lens reflex camera and subsequent analysis of suitable features of images captured from the projection screen. The evaluation of captured image data has been optimized to predict differences between the reference and distorted images while achieving high correlation with the results of subjective assessment. The system can be used to objectively determine the difference between analog film and digital cinema images on the projection screen.
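As a toy stand-in for such an objective difference measure, one can compare the reference and captured frames in an approximately perceptually uniform colour space; the system's actual optimized feature set is not specified in the abstract, so this is purely illustrative:

```python
import numpy as np
from skimage import color

def mean_lab_difference(reference: np.ndarray, captured: np.ndarray) -> float:
    """Score the visibility of colour/contrast distortions as the mean
    per-pixel Euclidean distance in CIELAB, a roughly perceptually
    uniform space. Inputs: aligned RGB images with values in [0, 1]."""
    lab_ref = color.rgb2lab(reference)
    lab_cap = color.rgb2lab(captured)
    return float(np.linalg.norm(lab_ref - lab_cap, axis=2).mean())
```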
No-reference quality assessment based on visual perception
NASA Astrophysics Data System (ADS)
Li, Junshan; Yang, Yawei; Hu, Shuangyan; Zhang, Jiao
2014-11-01
The visual quality assessment of images/videos is an ongoing hot research topic, which has become more and more important for numerous image and video processing applications with the rapid development of digital imaging and communication technologies. The goal of image quality assessment (IQA) algorithms is to automatically assess the quality of images/videos in agreement with human quality judgments. Up to now, two kinds of models have been used for IQA, namely full-reference (FR) and no-reference (NR) models. For FR models, IQA algorithms interpret image quality as fidelity or similarity with a perfect image in some perceptual space. However, the reference image is not available in many practical applications, and a NR IQA approach is desired. Considering natural vision as optimized by millions of years of evolutionary pressure, many methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychological features of the human visual system (HVS). To reach this goal, researchers try to simulate the HVS with image sparsity coding and supervised machine learning, two main features of the HVS: a typical HVS captures scenes by sparse coding and uses experienced knowledge to apperceive objects. In this paper, we propose a novel IQA approach based on visual perception. Firstly, a standard model of the HVS is studied and analyzed, and the sparse representation of the image is computed with the model; then, the mapping between sparse codes and subjective quality scores is trained with the regression technique of the least-squares support vector machine (LS-SVM), yielding a regressor that can predict image quality; finally, the visual metric of an image is predicted with the trained regressor. We validate the performance of the proposed approach on the Laboratory for Image and Video Engineering (LIVE) database; the distortion types present in the database are: 227 JPEG2000 images, 233 JPEG images, 174 White Noise images, 174 Gaussian Blur images, and 174 Fast Fading images. The database includes a subjective differential mean opinion score (DMOS) for each image. The experimental results show that the proposed approach not only can assess the quality of many kinds of distorted images, but also exhibits superior accuracy and monotonicity.
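The regression step, learning a mapping from sparse-code features to subjective DMOS, can be sketched with kernel ridge regression, the closest scikit-learn analogue of an LS-SVM (both solve a least-squares problem in kernel feature space); the feature arrays below are placeholders:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

# X: sparse-code feature vector per image, y: subjective DMOS score.
X = np.random.rand(200, 64)          # placeholder features
y = np.random.rand(200) * 100        # placeholder DMOS values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
# Fit the quality regressor and predict scores for unseen images.
regressor = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.1)
regressor.fit(X_tr, y_tr)
predicted_quality = regressor.predict(X_te)
```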
Visualizing planetary data by using 3D engines
NASA Astrophysics Data System (ADS)
Elgner, S.; Adeli, S.; Gwinner, K.; Preusker, F.; Kersten, E.; Matz, K.-D.; Roatsch, T.; Jaumann, R.; Oberst, J.
2017-09-01
We examined 3D gaming engines for their usefulness in visualizing large planetary image data sets. These tools allow us to incorporate recent developments in computer graphics into our scientific visualization systems and to present data products interactively and in higher quality than before. We have started to set up the first applications, which will make use of virtual reality (VR) equipment.
An Updated Account of the WISELAV Project: A Visual Construction of the English Verb System
ERIC Educational Resources Information Center
Pablos, Andrés Palacios
2016-01-01
This article presents the state of the art in WISELAV, an on-going research project based on the metaphor Languages Are (like) Visuals (LAV) and its mapping Words-In-Shapes Exchange (WISE). First, the cognitive premises that motivate the proposal are recalled: the power of images, students' increasingly visual cognitive learning style, and the…
Johari, Masoumeh; Abdollahzadeh, Milad; Esmaeili, Farzad; Sakhamanesh, Vahideh
2018-01-01
Background: Dental cone beam computed tomography (CBCT) images suffer from severe metal artifacts. These artifacts degrade the quality of the acquired image and in some cases make it unsuitable for use. Streaking artifacts and cavities around teeth are the main causes of degradation. Methods: In this article, we propose a new artifact reduction algorithm with three parallel components. The first component extracts teeth based on modeling the image histogram with a Gaussian mixture model. The streaking artifact reduction component reduces artifacts by converting the image into the polar domain and applying morphological filtering. The third component fills cavities through a simple but effective morphological filtering operation. Results: The outputs of these three components are combined in a fusion step to create a visually good image that is more compatible with the human visual system. Conclusions: Results show that the proposed algorithm reduces artifacts in dental CBCT images and produces clean images. PMID:29535920
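The histogram-modeling component can be sketched as a Gaussian mixture fit to slice intensities, taking the brightest component as the tooth mode; the component count and the thresholding rule below are illustrative assumptions:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def teeth_threshold(image: np.ndarray, n_components: int = 3) -> float:
    """Estimate an intensity threshold separating teeth from the rest of
    a CBCT slice by fitting a Gaussian mixture to the histogram; the
    brightest mixture component is taken to model tooth voxels."""
    samples = image.reshape(-1, 1).astype(float)
    gmm = GaussianMixture(n_components=n_components, random_state=0)
    gmm.fit(samples)
    means = gmm.means_.ravel()
    sigmas = np.sqrt(gmm.covariances_.ravel())
    brightest = np.argmax(means)
    # Accept voxels within one standard deviation below the tooth mode.
    return means[brightest] - sigmas[brightest]

# Usage: teeth_mask = slice_image > teeth_threshold(slice_image)
```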
NASA Astrophysics Data System (ADS)
Yu, Xuelian; Chen, Qian; Gu, Guohua; Ren, Jianle; Sui, Xiubao
2015-02-01
Designing objective quality assessment for color-fused images is a demanding and challenging task. We propose four no-reference metrics based on human visual system characteristics for objectively evaluating the quality of false-color fusion images. The perceived edge metric (PEM) is defined based on a visual perception model and the color image gradient similarity between the fused image and the source images. The perceptual contrast metric (PCM) is established by associating multi-scale contrast with a contrast sensitivity filter (CSF) that varies across color components. A linear combination of the standard deviation and mean value over the fused image constructs the image colorfulness metric (ICM). The color comfort metric (CCM) is designed from the average saturation and the ratio of pixels with high and low saturation. Qualitative and quantitative experimental results demonstrate that the proposed metrics agree well with subjective perception.
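The ICM, a linear combination of spread and mean statistics over the fused image, can be sketched on opponent-colour components; the weights below follow the well-known Hasler and Suesstrunk colorfulness measure, since the abstract does not give the paper's exact coefficients:

```python
import numpy as np

def image_colorfulness_metric(rgb: np.ndarray,
                              w_sigma: float = 1.0,
                              w_mu: float = 0.3) -> float:
    """Colorfulness as a weighted sum of the spread and mean of the
    opponent-colour components of an (H, W, 3) RGB image."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    rg = r - g                      # red-green opponent axis
    yb = 0.5 * (r + g) - b          # yellow-blue opponent axis
    sigma = np.hypot(rg.std(), yb.std())    # joint standard deviation
    mu = np.hypot(rg.mean(), yb.mean())     # joint mean magnitude
    return w_sigma * sigma + w_mu * mu
```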
Atrioventricular junction (AVJ) motion tracking: a software tool with ITK/VTK/Qt.
Pengdong Xiao; Shuang Leng; Xiaodan Zhao; Hua Zou; Ru San Tan; Wong, Philip; Liang Zhong
2016-08-01
The quantitative measurement of atrioventricular junction (AVJ) motion is an important index of ventricular function over one cardiac cycle, including systole and diastole. In this paper, a software tool that conducts AVJ motion tracking from cardiovascular magnetic resonance (CMR) images is presented, built with the Insight Segmentation and Registration Toolkit (ITK), the Visualization Toolkit (VTK), and Qt. The tool is written in C++ using the Visual Studio Community 2013 integrated development environment (IDE), which contains both an editor and a Microsoft compiler. The software package has been successfully implemented. From this software engineering practice, it is concluded that ITK, VTK, and Qt are very handy software systems for implementing automatic image analysis functions for CMR images, such as the quantitative measurement of motion by visual tracking.
Mari, Jean Martial; West, Simeon J.; Pratt, Rosalind; David, Anna L.; Ourselin, Sebastien; Beard, Paul C.; Desjardins, Adrien E.
2016-01-01
Precise device guidance is important for interventional procedures in many different clinical fields including fetal medicine, regional anesthesia, interventional pain management, and interventional oncology. While ultrasound is widely used in clinical practice for real-time guidance, the image contrast that it provides can be insufficient for visualizing tissue structures such as blood vessels, nerves, and tumors. This study was centered on the development of a photoacoustic imaging system for interventional procedures that delivered excitation light in the ranges of 750 to 900 nm and 1150 to 1300 nm, with an optical fiber positioned in a needle cannula. Coregistered B-mode ultrasound images were obtained. The system, which was based on a commercial ultrasound imaging scanner, has an axial resolution in the vicinity of 100 μm and a submillimeter, depth-dependent lateral resolution. Using a tissue phantom and 800 nm excitation light, a simulated blood vessel could be visualized at a maximum distance of 15 mm from the needle tip. Spectroscopic contrast for hemoglobin and lipids was observed with ex vivo tissue samples, with photoacoustic signal maxima consistent with the respective optical absorption spectra. The potential for further optimization of the system is discussed. PMID:26263417
Gradient-based multiresolution image fusion.
Petrović, Valdimir S; Xydeas, Costas S
2004-02-01
A novel approach to multiresolution signal-level image fusion is presented for accurately transferring visual information from any number of input image signals, into a single fused image without loss of information or the introduction of distortion. The proposed system uses a "fuse-then-decompose" technique realized through a novel, fusion/decomposition system architecture. In particular, information fusion is performed on a multiresolution gradient map representation domain of image signal information. At each resolution, input images are represented as gradient maps and combined to produce new, fused gradient maps. Fused gradient map signals are processed, using gradient filters derived from high-pass quadrature mirror filters to yield a fused multiresolution pyramid representation. The fused output image is obtained by applying, on the fused pyramid, a reconstruction process that is analogous to that of conventional discrete wavelet transform. This new gradient fusion significantly reduces the amount of distortion artefacts and the loss of contrast information usually observed in fused images obtained from conventional multiresolution fusion schemes. This is because fusion in the gradient map domain significantly improves the reliability of the feature selection and information fusion processes. Fusion performance is evaluated through informal visual inspection and subjective psychometric preference tests, as well as objective fusion performance measurements. Results clearly demonstrate the superiority of this new approach when compared to conventional fusion systems.
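The core selection rule, keeping the strongest input gradient at each position, can be shown at a single scale; the actual system applies it inside a multiresolution pyramid with QMF-derived gradient filters and a dedicated reconstruction stage:

```python
import numpy as np

def fuse_gradient_maps(images: list[np.ndarray]):
    """Single-scale illustration of fusion in the gradient domain:
    at each pixel, keep the input gradient of largest magnitude.
    Returns the fused (gx, gy) maps, which a full implementation
    would feed to the pyramid reconstruction process."""
    gx = [np.gradient(im.astype(float), axis=1) for im in images]
    gy = [np.gradient(im.astype(float), axis=0) for im in images]
    mag = np.stack([np.hypot(x, y) for x, y in zip(gx, gy)])
    pick = np.argmax(mag, axis=0)        # winning input per pixel
    fused_gx = np.choose(pick, gx)
    fused_gy = np.choose(pick, gy)
    return fused_gx, fused_gy
```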
Content-Based Medical Image Retrieval
NASA Astrophysics Data System (ADS)
Müller, Henning; Deserno, Thomas M.
This chapter details the necessity for alternative access concepts to the currently mainly text-based methods in medical information retrieval. This need is partly due to the large amount of visual data produced, the increasing variety of medical imaging data, and changing user patterns. The stored visual data contain large amounts of unused information that, if well exploited, can help diagnosis, teaching, and research. The chapter briefly reviews the history of image retrieval and its general methods before focusing on technologies that have been developed for the medical domain. We also discuss the evaluation of medical content-based image retrieval (CBIR) systems and conclude by pointing out their strengths, gaps, and further developments. As examples, the MedGIFT project and the Image Retrieval in Medical Applications (IRMA) framework are presented.
NASA Astrophysics Data System (ADS)
Takenaka, N.; Kadowaki, T.; Kawabata, Y.; Lim, I. C.; Sim, C. M.
2005-04-01
Visualization of cavitation phenomena in a Diesel engine fuel injection nozzle was carried out using the neutron radiography systems at KUR (Research Reactor Institute, Kyoto University) and at HANARO (Korea Atomic Energy Research Institute). A neutron chopper was synchronized to the engine rotation for high-shutter-speed exposures. A multi-exposure method was applied to obtain a clear image as an ensemble average of the synchronized images. Several images were successfully obtained and suggested new insights into the cavitation phenomena in a Diesel engine fuel injection nozzle.
76 FR 8278 - Special Conditions: Gulfstream Model GVI Airplane; Enhanced Flight Vision System
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-14
... detected by infrared sensors can be much different from that detected by natural pilot vision. On a dark... by many imaging infrared systems. On the other hand, contrasting colors in visual wavelengths may be... of the EFVS image and the level of EFVS infrared sensor performance could depend significantly on...
Universal and adapted vocabularies for generic visual categorization.
Perronnin, Florent
2008-07-01
Generic Visual Categorization (GVC) is the pattern classification problem which consists in assigning labels to an image based on its semantic content. This is a challenging task as one has to deal with inherent object/scene variations as well as changes in viewpoint, lighting and occlusion. Several state-of-the-art GVC systems use a vocabulary of visual terms to characterize images with a histogram of visual word counts. We propose a novel practical approach to GVC based on a universal vocabulary, which describes the content of all the considered classes of images, and class vocabularies obtained through the adaptation of the universal vocabulary using class-specific data. The main novelty is that an image is characterized by a set of histograms - one per class - where each histogram describes whether the image content is best modeled by the universal vocabulary or the corresponding class vocabulary. This framework is applied to two types of local image features: low-level descriptors such as the popular SIFT and high-level histograms of word co-occurrences in a spatial neighborhood. It is shown experimentally on two challenging datasets (an in-house database of 19 categories and the PASCAL VOC 2006 dataset) that the proposed approach exhibits state-of-the-art performance at a modest computational cost.
In-Flight Flow Visualization Using Infrared Thermography
NASA Technical Reports Server (NTRS)
vanDam, C. P.; Shiu, H. J.; Banks, D. W.
1997-01-01
The feasibility of remote infrared thermography of aircraft surfaces during flight to visualize the extent of laminar flow on a target aircraft has been examined. In general, it was determined that such thermograms can be taken successfully using an existing airplane/thermography system (NASA Dryden's F-18 with infrared imaging pod) and that the transition pattern and, thus, the extent of laminar flow can be extracted from these thermograms. Depending on the in-flight distance between the F-18 and the target aircraft, the thermograms can have a spatial resolution of as little as 0.1 inches. The field of view provided by the present remote system is superior to that of prior stationary infrared thermography systems mounted in the fuselage or vertical tail of a subject aircraft. An additional advantage of the present experimental technique is that the target aircraft requires no or minimal modifications. An image processing procedure was developed which improves the signal-to-noise ratio of the thermograms. Problems encountered during the analog recording of the thermograms (banding of video images) made it impossible to evaluate the adequacy of the present imaging system and image processing procedure to detect transition on untreated metal surfaces. The high reflectance, high thermal diffusivity, and low emittance of metal surfaces tend to degrade the images to an extent that it is very difficult to extract transition information from them. The application of a thin (0.005 inches) self-adhesive insulating film to the surface is shown to solve this problem satisfactorily. In addition to the problem of infrared based transition detection on untreated metal surfaces, future flight tests will also concentrate on the visualization of other flow phenomena such as flow separation and reattachment.
A fast and automatic fusion algorithm for unregistered multi-exposure image sequence
NASA Astrophysics Data System (ADS)
Liu, Yan; Yu, Feihong
2014-09-01
The human visual system (HVS) can perceive all the brightness levels of a scene through visual adaptation. However, the dynamic range of most commercial digital cameras and display devices is smaller than that of the human eye, which implies that low dynamic range (LDR) images captured by a normal digital camera may lose image details. We propose an efficient approach to high dynamic range (HDR) image fusion that copes with image displacement and image blur degradation in a computationally efficient manner, suitable for implementation on mobile devices. The image registration algorithms proposed in the previous literature are unable to meet the efficiency and performance requirements of mobile applications. In this paper, we select the Oriented FAST and Rotated BRIEF (ORB) detector to extract local image structures: the descriptor used in a multi-exposure image fusion algorithm has to be fast and robust to illumination variations and geometric deformations, and ORB is the best candidate for our algorithm. Further, we apply an improved RANdom SAmple Consensus (RANSAC) algorithm to reject incorrect matches. For the fusion of images, a new approach based on the Stationary Wavelet Transform (SWT) is used. The experimental results demonstrate that the proposed algorithm generates high-quality images at low computational cost. Comparisons with a number of other feature matching methods show that our method achieves better performance.
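The registration stage maps directly onto standard OpenCV calls: ORB keypoints, Hamming-distance matching, and a RANSAC-estimated homography. A minimal sketch with illustrative parameter values:

```python
import cv2
import numpy as np

def register_exposure(reference: np.ndarray, moving: np.ndarray) -> np.ndarray:
    """Align one exposure to the reference with ORB + RANSAC before fusion."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(reference, None)
    kp2, des2 = orb.detectAndCompute(moving, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC rejects the incorrect matches that survive cross-checking.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return cv2.warpPerspective(moving, H, reference.shape[1::-1])
```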
Is visual image segmentation a bottom-up or an interactive process?
Vecera, S P; Farah, M J
1997-11-01
Visual image segmentation is the process by which the visual system groups features that are part of a single shape. Is image segmentation a bottom-up or an interactive process? In Experiments 1 and 2, we presented subjects with two overlapping shapes and asked them to determine whether two probed locations were on the same shape or on different shapes. The availability of top-down support was manipulated by presenting either upright or rotated letters. Subjects were fastest to respond when the shapes corresponded to familiar shapes--the upright letters. In Experiment 3, we used a variant of this segmentation task to rule out the possibility that subjects performed same/different judgments after segmentation and recognition of both letters. Finally, in Experiment 4, we ruled out the possibility that the advantage for upright letters was merely due to faster recognition of upright letters relative to rotated letters. The results suggested that the previous effects were not due to faster recognition of upright letters; stimulus familiarity influenced segmentation per se. The results are discussed in terms of an interactive model of visual image segmentation.
Generating descriptive visual words and visual phrases for large-scale image applications.
Zhang, Shiliang; Tian, Qi; Hua, Gang; Huang, Qingming; Gao, Wen
2011-09-01
Bag-of-visual Words (BoWs) representation has been applied to various problems in the fields of multimedia and computer vision. The basic idea is to represent images as visual documents composed of repeatable and distinctive visual elements, which are comparable to text words. Notwithstanding its great success and wide adoption, the visual vocabulary created from single-image local descriptors is often shown to be not as effective as desired. In this paper, descriptive visual words (DVWs) and descriptive visual phrases (DVPs) are proposed as the visual correspondences to text words and phrases, where visual phrases refer to frequently co-occurring visual word pairs. Since images are the carriers of visual objects and scenes, a descriptive visual element set can be composed of the visual words and their combinations which are effective in representing certain visual objects or scenes. Based on this idea, a general framework is proposed for generating DVWs and DVPs for image applications. In a large-scale image database containing 1506 object and scene categories, the visual words and visual word pairs descriptive of certain objects or scenes are identified and collected as the DVWs and DVPs. Experiments show that the DVWs and DVPs are informative and descriptive and, thus, more comparable with text words than the classic visual words. We apply the identified DVWs and DVPs in several applications including large-scale near-duplicate image retrieval, image search re-ranking, and object recognition. The combination of DVWs and DVPs performs better than the state of the art in large-scale near-duplicate image retrieval in terms of accuracy, efficiency, and memory consumption. The proposed image search re-ranking algorithm, DWPRank, outperforms the state-of-the-art algorithm by 12.4% in mean average precision while running about 11 times faster.
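Mining DVP candidates reduces to counting visual-word pairs that co-occur within a spatial neighbourhood. A brute-force sketch; the radius and data layout are assumptions, and a real system would further rank pairs by descriptiveness:

```python
import numpy as np
from itertools import combinations
from collections import Counter

def count_word_pairs(positions: np.ndarray, word_ids: np.ndarray,
                     radius: float = 30.0) -> Counter:
    """Count co-occurring visual-word pairs within a spatial radius.

    positions -- (n, 2) keypoint coordinates in one image
    word_ids  -- (n,) visual-word index of each keypoint
    Frequent pairs across a corpus are candidate visual phrases (DVPs).
    """
    pairs = Counter()
    for i, j in combinations(range(len(word_ids)), 2):
        if np.linalg.norm(positions[i] - positions[j]) <= radius:
            pairs[tuple(sorted((word_ids[i], word_ids[j])))] += 1
    return pairs
```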
NASA Astrophysics Data System (ADS)
Song, W. M.; Fan, D. W.; Su, L. Y.; Cui, C. Z.
2017-11-01
Calculating the coordinate parameters recorded as key/value pairs in a FITS (Flexible Image Transport System) header is the key to determining a FITS image's position in the celestial coordinate system, so a general procedure for computing these parameters is of considerable value. The parameters can be calculated effectively by combining the CCD-related parameters of the astronomical telescope (such as field of view, focal length, and the celestial coordinates of the optical axis), an astronomical image recognition algorithm, and WCS (World Coordinate System) theory. The CCD parameters determine the scope of the star catalogue, so a reference catalogue can be built for the celestial region corresponding to the image; star pattern recognition then matches the astronomical image against the reference catalogue and yields a table relating the CCD plane coordinates of a number of stars to their celestial coordinates; finally, for a chosen projection of the sphere onto the plane, WCS defines the transfer functions between the two coordinate systems, and the astronomical position of any image pixel can be determined from the previously derived table. FITS is a mainstream format for transmitting and analyzing scientific data, but FITS images can be viewed, edited, and analyzed only in professional astronomy software, which limits their use in popular astronomy education; a general image visualization method is therefore significant. FITS images are first converted to PNG or JPEG. The coordinate parameters in the FITS header are converted to metadata in the form of AVM (Astronomy Visualization Metadata), and the metadata is then added to the PNG or JPEG header. This method meets amateur astronomers' general needs for viewing and analyzing astronomical images on non-astronomical software platforms. The overall design flow is implemented in Java and was tested with SExtractor, WorldWide Telescope, picture viewers, and other software.
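The header-driven pixel-to-sky mapping can be sketched with astropy, which builds the WCS transfer functions directly from the FITS key/value pairs (the file name is hypothetical, and the AVM tagging step would follow separately):

```python
from astropy.io import fits
from astropy.wcs import WCS

# Read the WCS key/value pairs from a FITS header and map a pixel to sky.
with fits.open("image.fits") as hdul:          # hypothetical file name
    wcs = WCS(hdul[0].header)
    sky = wcs.pixel_to_world(512, 512)         # RA/Dec of pixel (512, 512)
    print(sky.ra.deg, sky.dec.deg)
```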
Solar System Visualization (SSV) Project
NASA Technical Reports Server (NTRS)
Todd, Jessida L.
2005-01-01
The Solar System Visualization (SSV) project aims at enhancing scientific and public understanding through visual representations and modeling procedures. The SSV project's objectives are to (1) create new visualization technologies, (2) organize science observations and models, and (3) visualize science results and mission plans. The SSV project currently supports the Mars Exploration Rovers (MER) mission, the Mars Reconnaissance Orbiter (MRO), and Cassini. In support of these missions, the SSV team has produced pan and zoom animations of large mosaics to reveal details of surface features and topography, created 3D animations of science instruments and procedures, formed 3D anaglyphs from left and right stereo pairs, and animated registered multi-resolution mosaics to provide context for microscopic images.
Cortico-fugal output from visual cortex promotes plasticity of innate motor behaviour.
Liu, Bao-Hua; Huberman, Andrew D; Scanziani, Massimo
2016-10-20
The mammalian visual cortex massively innervates the brainstem, a phylogenetically older structure, via cortico-fugal axonal projections. Many cortico-fugal projections target brainstem nuclei that mediate innate motor behaviours, but the function of these projections remains poorly understood. A prime example of such behaviours is the optokinetic reflex (OKR), an innate eye movement mediated by the brainstem accessory optic system, that stabilizes images on the retina as the animal moves through the environment and is thus crucial for vision. The OKR is plastic, allowing the amplitude of this reflex to be adaptively adjusted relative to other oculomotor reflexes and thereby ensuring image stability throughout life. Although the plasticity of the OKR is thought to involve subcortical structures such as the cerebellum and vestibular nuclei, cortical lesions have suggested that the visual cortex might also be involved. Here we show that projections from the mouse visual cortex to the accessory optic system promote the adaptive plasticity of the OKR. OKR potentiation, a compensatory plastic increase in the amplitude of the OKR in response to vestibular impairment, is diminished by silencing visual cortex. Furthermore, targeted ablation of a sparse population of cortico-fugal neurons that specifically project to the accessory optic system severely impairs OKR potentiation. Finally, OKR potentiation results from an enhanced drive exerted by the visual cortex onto the accessory optic system. Thus, cortico-fugal projections to the brainstem enable the visual cortex, an area that has been principally studied for its sensory processing function, to plastically adapt the execution of innate motor behaviours.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murase, Kenya, E-mail: murase@sahs.med.osaka-u.ac.jp; Song, Ruixiao; Hiratsuka, Samu
We investigated the feasibility of visualizing blood coagulation using a system for magnetic particle imaging (MPI). A magnetic field-free line is generated using two opposing neodymium magnets, and transverse images are reconstructed from the third-harmonic signals received by a gradiometer coil, using the maximum likelihood-expectation maximization algorithm. Our MPI system was used to image the blood coagulation induced by adding CaCl2 to whole sheep blood mixed with magnetic nanoparticles (MNPs). The "MPI value" was defined as the pixel value of the transverse image reconstructed from the third-harmonic signals. MPI values were significantly smaller for coagulated blood samples than for those without coagulation. We confirmed the rationale of these results by calculating the third-harmonic signals for the measured viscosities of the samples, under the assumption that the magnetization and particle size distribution of the MNPs obey the Langevin equation and a log-normal distribution, respectively. We concluded that MPI can be useful for visualizing blood coagulation.
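The rationale check, computing the third harmonic of a Langevin magnetization response to a sinusoidal drive field, is compact enough to sketch numerically; the drive amplitude parameter and sampling choices below are illustrative:

```python
import numpy as np

def third_harmonic_amplitude(xi: float, n_periods: int = 8,
                             n: int = 4096) -> float:
    """Third-harmonic content of Langevin magnetization under a
    sinusoidal drive field; xi = mu0*m*H0/(kB*T) is the drive amplitude
    in Langevin units. A suppressed effective response (e.g. higher
    viscosity after coagulation) lowers this harmonic in practice."""
    t = np.linspace(0.0, n_periods, n, endpoint=False)   # time in periods
    x = xi * np.sin(2 * np.pi * t)
    # Langevin function L(x) = coth(x) - 1/x, with the x -> 0 limit handled.
    with np.errstate(divide="ignore", invalid="ignore"):
        m = np.where(np.abs(x) < 1e-6, x / 3.0, 1.0 / np.tanh(x) - 1.0 / x)
    spectrum = np.abs(np.fft.rfft(m)) / (n / 2)
    return float(spectrum[3 * n_periods])   # bin of the third harmonic
```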
Cocaine, Appetitive Memory and Neural Connectivity
Ray, Suchismita
2013-01-01
This review examines existing cognitive experimental and brain imaging research related to cocaine addiction. In section 1, previous studies that have examined cognitive processes, such as implicit and explicit memory processes in cocaine users are reported. Next, in section 2, brain imaging studies are reported that have used chronic users of cocaine as study participants. In section 3, several conclusions are drawn. They are: (a) in cognitive experimental literature, no study has examined both implicit and explicit memory processes involving cocaine related visual information in the same cocaine user, (b) neural mechanisms underlying implicit and explicit memory processes for cocaine-related visual cues have not been directly investigated in cocaine users in the imaging literature, and (c) none of the previous imaging studies has examined connectivity between the memory system and craving system in the brain of chronic users of cocaine. Finally, future directions in the field of cocaine addiction are suggested. PMID:25009766
Volume curtaining: a focus+context effect for multimodal volume visualization
NASA Astrophysics Data System (ADS)
Fairfield, Adam J.; Plasencia, Jonathan; Jang, Yun; Theodore, Nicholas; Crawford, Neil R.; Frakes, David H.; Maciejewski, Ross
2014-03-01
In surgical preparation, physicians will often utilize multimodal imaging scans to capture complementary information to improve diagnosis and to drive patient-specific treatment. These imaging scans may consist of data from magnetic resonance imaging (MR), computed tomography (CT), or other various sources. The challenge in using these different modalities is that the physician must mentally map the two modalities together during the diagnosis and planning phase. Furthermore, the different imaging modalities will be generated at various resolutions as well as slightly different orientations due to patient placement during scans. In this work, we present an interactive system for multimodal data fusion, analysis and visualization. Developed with partners from neurological clinics, this work discusses initial system requirements and physician feedback at the various stages of component development. Finally, we present a novel focus+context technique for the interactive exploration of coregistered multi-modal data.
Yahata, Izumi; Kawase, Tetsuaki; Kanno, Akitake; Hidaka, Hiroshi; Sakamoto, Shuichi; Nakasato, Nobukazu; Kawashima, Ryuta; Katori, Yukio
2017-01-01
The effects of visual speech (the moving image of the speaker's face uttering speech sound) on early auditory evoked fields (AEFs) were examined using a helmet-shaped magnetoencephalography system in 12 healthy volunteers (9 males, mean age 35.5 years). AEFs (N100m) in response to the monosyllabic sound /be/ were recorded and analyzed under three different visual stimulus conditions: the moving image of the same speaker's face uttering /be/ (congruent visual stimuli) or uttering /ge/ (incongruent visual stimuli), and visual noise (a still image processed from the speaker's face using a strong Gaussian filter; control condition). On average, the latency of N100m was significantly shortened in the bilateral hemispheres for both congruent and incongruent auditory/visual (A/V) stimuli compared with the control A/V condition. However, the degree of N100m shortening was not significantly different between the congruent and incongruent A/V conditions, despite the significant differences in psychophysical responses between these two A/V conditions. Moreover, analysis of the magnitudes of these visual effects on AEFs in individuals showed that the lip-reading effects on AEFs tended to be well correlated between the two audio-visual conditions (congruent vs. incongruent visual stimuli) in the bilateral hemispheres, but were not significantly correlated between the right and left hemispheres. On the other hand, no significant correlation was observed between the magnitudes of the visual speech effects and the psychophysical responses. These results may indicate that the auditory-visual interaction observed on the N100m is a fundamental process that does not depend on the congruency of the visual information.
JSC Shuttle Mission Simulator (SMS) visual system payload bay video image
NASA Technical Reports Server (NTRS)
1981-01-01
This space shuttle orbiter payload bay (PLB) video image is used in JSC's Fixed Based (FB) Shuttle Mission Simulator (SMS). The image is projected inside the FB-SMS crew compartment during mission simulation training. The FB-SMS is located in the Mission Simulation and Training Facility Bldg 5.
Application of hyperspectral imaging for characterization of intramuscular fat distribution in beef
USDA-ARS?s Scientific Manuscript database
In this study, a hyperspectral imaging system in the spectral region of 400–1000 nm was used for visualization and determination of intramuscular fat concentration in beef samples. Hyperspectral images were acquired for beef samples, and spectral information was then extracted from each single sampl...